Credits: RIT

RIT is launching a Video Game Design XSeries program with edX, the leading nonprofit online learning destination founded by Harvard and MIT. XSeries programs are designed to provide learners with a rich understanding of an area of study through a series of offerings grouped under one subject. XSeries programs also include an opportunity to earn an XSeries verified certificate to demonstrate competency and knowledge in a specific field.

Enrollment for the five offerings in the Video Game Design XSeries program is now open. The first offering begins Oct. 31.

The series will teach learners about the skills they would need to become a successful video game designer and explore what job opportunities they could pursue in the industry. Offerings are taught by faculty in RIT’s School of Interactive Games and Media.

“You’ll learn how game history influences design, how designers and programmers think, the various roles within the video game design discipline and how all the pieces come together,” said Stephen Jacobs, professor of interactive games and media at RIT and an instructor in the XSeries. “We want to help people develop a deeper understanding of the field, the discipline and explore the related career paths as well.”

The XSeries consists of five five-week offerings:

  • Video Game Design History (starts Oct. 31)
  • Video Game Design and Balance (starts Jan. 2, 2017)
  • Video Game Asset Creation and Process (starts March 6, 2017)
  • Video Game Design: Teamwork & Collaboration (starts July 24, 2017)
  • Gameplay Programming for Video Game Designers (starts Sept. 11, 2017)

Within each offering, learners will have access to several videos from the instructors each week, readings, discussion boards with other participants and multiple-choice quizzes. Each weekly unit takes about three hours to complete.

The series will explore everything from how to create simple elements of running game code to how different industry roles collaborate to produce, market and ship a video game.

The first offering will explore the history of the video game design industry, with insights from the International Center for the History of Electronic Games (ICHEG) at The Strong National Museum of Play, the largest and most comprehensive public assemblage of video games and related materials in the world.

“Just as writers learn their craft by reading and studying great works of the past, video game designers need to know how game design has developed and evolved over the years,” said Jon-Paul C. Dyson, director of ICHEG and vice president for exhibits at The Strong, who is co-instructor for the Video Game Design History offering. “Participants will have a unique learning opportunity in this course to see prototypes, designer notes, rare games and other iconic artifacts from The Strong’s unparalleled collection that showcase the history of game design.”

RIT’s game design and development graduate and undergraduate programs have been ranked among the Princeton Review’s list of top schools for video game design for more than five years. Graduates of RIT’s game design and development programs have gone on to work at some of the industry’s top employers, including Amazon Games, Apple, Bungie Studios, Blizzard Entertainment, EA Games, Epic Games, Google, Konami Gaming Inc., Microsoft, Rockstar Games, Sony Interactive Entertainment, Valve Corporation and Walt Disney Interactive.

RIT’s game design and development program is housed within the School of Interactive Games and Media, in RIT’s B. Thomas Golisano College of Computing and Information Sciences. The RIT XSeries is made possible with help from RIT’s Innovative Learning Institute.

“As a world-renowned leader in the field of video game design, RIT is an ideal partner to offer this XSeries program to the edX global community of more than 9 million learners,” said Anant Agarwal, edX CEO and MIT professor. “We are proud to work with RIT to launch this program that provides learners with a rich understanding of the history and design of video games and teaches the skills and competencies necessary to excel in this exciting, fast-growing industry.”

Credits: Eurekalert

Dynamic programming is a technique that can yield relatively efficient solutions to computational problems in economics, genomic analysis, and other fields. But adapting it to computer chips with multiple “cores,” or processing units, requires a level of programming expertise that few economists and biologists have.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University aim to change that, with a new system that allows users to describe what they want their programs to do in very general terms. It then automatically produces versions of those programs that are optimized to run on multicore chips. It also guarantees that the new versions will yield exactly the same results that the single-core versions would, albeit much faster.

In experiments, the researchers used the system to “parallelize” several algorithms that used dynamic programming, splitting them up so that they would run on multicore chips. The resulting programs were between three and 11 times as fast as those produced by earlier techniques for automatic parallelization, and they were generally as efficient as those that were hand-parallelized by computer scientists.

The researchers presented their new system last week at the Association for Computing Machinery’s conference on Systems, Programming, Languages and Applications: Software for Humanity.

Dynamic programming offers exponential speedups on a certain class of problems because it stores and reuses the results of computations, rather than recomputing them every time they’re required.

“But you need more memory, because you store the results of intermediate computations,” says Shachar Itzhaky, first author on the new paper and a postdoc in the group of Armando Solar-Lezama, an associate professor of electrical engineering and computer science at MIT. “When you come to implement it, you realize that you don’t get as much speedup as you thought you would, because the memory is slow. When you store and fetch, of course, it’s still faster than redoing the computation, but it’s not as fast as it could have been.”
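
The store-and-reuse idea behind dynamic programming can be sketched in a few lines. The following toy example is not from the paper; the Fibonacci recurrence simply stands in for a real dynamic-programming problem, contrasting naive recomputation with a memoized version:

```python
from functools import lru_cache

def fib_naive(n):
    # Recomputes the same subproblems exponentially many times.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Stores each intermediate result, so every subproblem is
    # computed exactly once -- at the cost of extra memory.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, computed instantly
```

The cache is exactly the extra memory Itzhaky describes: each stored result trades space for the time it would otherwise take to recompute it.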

Outsourcing complexity

Computer scientists avoid this problem by reordering computations so that those requiring a particular stored value are executed in sequence, minimizing the number of times that the value has to be recalled from memory. That’s relatively easy to do with a single-core computer, but with multicore computers, when multiple cores are sharing data stored at multiple locations, memory management becomes much more complex. A hand-optimized, parallel version of a dynamic-programming algorithm is typically 10 times as long as the single-core version, and the individual lines of code are more complex, to boot.

The CSAIL researchers’ new system — dubbed Bellmania, after Richard Bellman, the applied mathematician who pioneered dynamic programming — adopts a parallelization strategy called recursive divide-and-conquer. Suppose that the task of a parallel algorithm is to perform a sequence of computations on a grid of numbers, known as a matrix. Its first task might be to divide the grid into four parts, each to be processed separately.

But then it might divide each of those four parts into four parts, and each of those into another four parts, and so on. Because this approach — recursion — involves breaking a problem into smaller subproblems, it naturally lends itself to parallelization.
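
The recursive quartering described above can be sketched as follows. This is an illustrative toy, not Bellmania’s output; the four quadrant calls touch disjoint data, so a real implementation could assign them to separate cores:

```python
def process(matrix, top, left, size, tile=2):
    # Base case: the tile is small enough to work on directly.
    if size <= tile:
        for r in range(top, top + size):
            for c in range(left, left + size):
                matrix[r][c] *= 2  # stand-in for the real computation
        return
    # Recursive case: quarter the grid and recurse on each quadrant.
    # The calls are independent; here they run sequentially for clarity.
    half = size // 2
    for dr in (0, half):
        for dc in (0, half):
            process(matrix, top + dr, left + dc, half, tile)

grid = [[1] * 8 for _ in range(8)]
process(grid, 0, 0, 8)
print(grid[0][0], grid[7][7])  # 2 2
```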

Joining Itzhaky on the new paper are Solar-Lezama; Charles Leiserson, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science; Rohit Singh and Kuat Yessenov, who were both MIT graduate students in electrical engineering and computer science when the work was done; Yongquan Lu, an MIT undergraduate who participated in the project through MIT’s Undergraduate Research Opportunities Program; and Rezaul Chowdhury, an assistant professor of computer science at Stony Brook, who was formerly a research affiliate in Leiserson’s group.

Leiserson’s group specializes in divide-and-conquer parallelization techniques; Solar-Lezama’s specializes in program synthesis, or automatically generating code from high-level specifications. With Bellmania, the user simply has to describe the first step of the process — the division of the matrix and the procedures to be applied to the resulting segments. Bellmania then determines how to continue subdividing the problem so as to use memory efficiently.

Rapid search

At each level of recursion — with each successively smaller subdivision of the matrix — a program generated by Bellmania will typically perform some operation on some segment of the matrix and farm the rest out to subroutines, which can be performed in parallel. Each of those subroutines, in turn, will perform some operation on some segment of the data and farm the rest out to further subroutines, and so on.

Bellmania determines how much data should be processed at each level and which subroutines should handle the rest. “The goal is to arrange the memory accesses such that when you read a cell [of the matrix], you do as much computation as you can with it, so that you will not have to read it again later,” Itzhaky says.

Finding the optimal division of tasks requires canvassing a wide range of possibilities. Solar-Lezama’s group has developed a suite of tools to make that type of search more efficient; even so, Bellmania takes about 15 minutes to parallelize a typical dynamic-programming algorithm. That’s still much faster than a human programmer could perform the same task, however. And the result is guaranteed to be correct; hand-optimized code is so complex that it’s easy for errors to creep in.

Credits: Gamefromscratch

A conversation just happened on /r/gamedev with the confusing title “What don’t new game developers know that they don’t know?”. Essentially it was a question asking what important advice new developers don’t know (for lack of experience) but should. My answer seems to have been extremely well received and the question is quite good, so I figured I would replicate the answer here for those of you who don’t frequent reddit.

So essentially what follows is my advice I would give my beginner self if I owned a time machine.

  • most people fail and fail hard. Perseverance is probably the most underrated but required skill of a successful game developer.
  • learning tasks in parallel is rarely productive. While I know you want to learn everything now… don’t. Learn one task/language/skill to a base level of competency before moving on to the next. It’s hard because everything is shiny and new, but it’s critical. Trying to learn too many things at once results in learning nothing at all.
  • when you are 90% done, you’re actually 50% done. Maybe… if you are lucky.
  • there is no ego in programming, or at least there shouldn’t be. Plumbers and carpenters come to a job with a toolbox full of tools, and don’t limit themselves to a number three hex drive because “it’s cool”. They use the best tool for the job, and sometimes that tool is a horrific hack, and that’s ok too. Programmers… have often failed to learn this lesson. People invest themselves in “their language” and this, frankly, is stupid. Working in C++, Java, Haskell, F#, Go, Rust or whatever other language doesn’t make you cool, just as working in GameMaker, Lua or JavaScript doesn’t make you uncool. Only exception, PHP. %#@k PHP. Moral of the story… use the right language or tool for the job. Sometimes that means performance, sometimes that means maintainability, sometimes that means quickness, sometimes that means designer friendliness, and sometimes it means using the tool the rest of your team is using. Be pragmatic, always be pragmatic. I personally would never hire a programmer who claimed there was a “best” language. If you have been programming for several years and don’t have several languages in your arsenal, you are probably doing it wrong.
  • if it feels wrong, it probably is. If you encounter such a code smell, even if you can’t fix it, comment on it and move on.
  • even if you work alone, comments are good. But comments for the sake of comments is worse than no comments at all. Take the time to write legible code, take the time to smartly comment what you’ve written. Six months or six years down the road, you will thank yourself.
  • version control or at least automatic backups. Do it. Now.
  • premature ejac…. er, optimization is the root of all evil. Yes, you need to be mindful of performance on a macro level, but don’t sweat the micro level until you have to. If you are just starting out, but your primary concern is performance, you are doing it wrong.
  • learn how to use a debugger. Early. It will be an invaluable skill and will help you understand how your code works better than any other single task you can perform. Also learn how to use a profiler and a code linter; these will give you great insights into your code as well, but you can do this later. If you have learned the basics of a programming language but haven’t grokked debugging yet, stop everything and dedicate yourself to that task. For a somewhat generic debugging tutorial, start here. Seriously, learn it, now.
  • if you have access to a peer, TAKE IT. Peer reviewed coding, while at first annoying, is invaluable. Even if there is a large skill mismatch between the two people. Even just having someone go through your code and ask “why did you do this?” forces you to explain it… and often you realize… hey… why did I do that? Many programmers are solitary creatures, so the idea of peer code reviews or paired programming is anathema. Or people can be very shy about showing their code… it’s worth it to get over it. Of course not everyone has access to another programmer to use as a sounding board and not being in person makes it a lot harder and a lot less useful.
  • books are great, as is a gigantic collection of links to great articles on the web. That said unfortunately, you can’t just buy a book and learn via osmosis… you actually have to read the thing. More to the point, if you are following a tutorial or video, DO THE WORK. You will learn it a great deal more, develop muscle memory of sorts, if you actually do it. This means typing out the code and getting it to run, instead of just loading the project and pressing play. Trust me, you learn a lot more actually recreating the project.

Credits: Newindianexpress

Two software applications developed by the Indian Institute of Technology, Kharagpur will soon revolutionise the way persons with disabilities cope with day-to-day activities. Buying vegetables from the market or composing poetry will be just a touch on a smart device away.

The applications, ‘Akash Baani’ and ‘Sanyog’, may help persons with disabilities, including those affected with cerebral palsy, express their feelings by touching icons on the screen, which are then composed into sentences and further translated into voice messages.

A workshop was recently held in Kolkata to train NGOs on how the software can be used to help the disabled become more self-dependent. Professor Anupam Basu, who conceived the software, told Express, “In ‘Sanyog’, to create a sentence like ‘I want to watch TV’, a user has to select an icon of himself, a TV and an eye. The software then forms the sentence and turns it into a voice message using text-to-speech.

For buying vegetables, the user may select the vegetable he/she wants to buy and the quantity. The sentences will be blurted out to the shopkeeper.” Speaking about ‘Akash Baani’, Prof Basu says, “A large number of icons will be present in the software.

Each icon will have a message with it. Whenever a user selects an icon, the stored message will be spoken.” While ‘Akash Baani’ costs Rs 1,000, ‘Sanyog’ is priced at the higher end, at Rs 8,000. When asked about users who cannot touch small icons on smart devices, Prof Basu described an access switch, more like a big red flat joystick, with which the user may browse the icons and select them using whatever part of the body works.
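
The icon-to-sentence idea can be illustrated with a small sketch. This is a hypothetical mock-up, not the actual ‘Sanyog’ code; the icon names, roles and phrases are invented for illustration:

```python
# Each icon carries a grammatical role and a word or phrase;
# a selected sequence is assembled into a sentence that a
# text-to-speech engine could then speak aloud.
ICONS = {
    "self": ("subject", "I"),
    "eye":  ("verb", "want to watch"),
    "tv":   ("object", "TV"),
}

def icons_to_sentence(selected):
    # Map each selected icon to its role, then fill the template.
    roles = dict(ICONS[icon] for icon in selected)
    return "{} {} {}".format(roles["subject"], roles["verb"], roles["object"])

print(icons_to_sentence(["self", "tv", "eye"]))  # I want to watch TV
```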

The software, which has been put on display in Rashtrapati Bhavan, is being used at the Indian Institute of Cerebral Palsy and at Action for Ability Development and Inclusion in the national capital. It took two years to develop ‘Akash Baani’ and four years to develop ‘Sanyog’.

“The technologies are being transferred to the Society for Natural Language Technology Research (SNLTR), under the IT department of the West Bengal government,” Professor Basu said, adding that support from the state government and local NGOs is needed to expand the reach of the software into rural areas.

Credits: Designnews

Every time a new embedded software project starts, the air is electrified with energy, hope, and excitement. For engineers, there are few things on Earth as exciting as creating a new project and bringing together new and innovative ideas that have the potential to change the world. Unfortunately, shortly after project kick-off, engineers can quickly lose their passion as they are forced to dig into the nuts and bolts by once again writing microcontroller drivers, trying to integrate real-time operating systems (RTOSes) and third-party components. These repetitive project tasks can consume time, energy, and dampen product innovation. An interesting solution is beginning to arrive that could help developers — embedded system platforms.

An embedded system platform contains all the building blocks that a developer needs to quickly get a microcontroller up and running and direct their focus on the product. Too much time and money is wasted just trying to get a microcontroller’s software up and running. The idea behind the platform is that drivers, frameworks, libraries, schedulers, and sometimes even application code are already provided so that developers can focus on their product features rather than the mundane and repetitive software tasks.

HAL Design for MCUs. The speed at which a developer is expected to write software often results in device drivers that are difficult to understand, hard to maintain, and difficult to port. Join Jacob Beningo at ESC Silicon Valley, Dec. 6-8, 2016 in San Jose, Calif., as he describes methods and techniques that can be used to develop a reusable hardware abstraction layer (HAL) that is easy to maintain and use across multiple projects and platforms. Register here for the event, hosted by Design News’ parent company, UBM.

Embedded software platforms provide developers with an opportunity to shave months from the development cycle by leveraging existing HALs and APIs. Becoming an expert in all of a microcontroller’s little nuances is no longer required. HALs and APIs abstract the lower-level hardware and make development similar to writing software on a PC, although developers still need to keep in mind that they are working in a resource-constrained environment. Make a simple call to the UART HAL and serial data can be transmitted in minutes rather than weeks.
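
The abstraction idea is language-agnostic; real MCU HALs are typically written in C, but a minimal Python sketch shows the shape. All names here are illustrative, not a real vendor API:

```python
from abc import ABC, abstractmethod

class Uart(ABC):
    """The abstract interface the application codes against."""
    @abstractmethod
    def write(self, data: bytes) -> int:
        """Transmit bytes; return the number actually sent."""

class LoopbackUart(Uart):
    """A mock 'board' implementation for host-side testing;
    a real port would poke the chip's registers instead."""
    def __init__(self):
        self.sent = bytearray()

    def write(self, data: bytes) -> int:
        self.sent.extend(data)
        return len(data)

def log_over_serial(uart: Uart, message: str) -> int:
    # Application code depends only on the interface, so it runs
    # unchanged on any board that supplies a Uart implementation.
    return uart.write(message.encode() + b"\r\n")

uart = LoopbackUart()
log_over_serial(uart, "hello")
print(bytes(uart.sent))  # b'hello\r\n'
```

Porting to a new board then means writing one new `Uart` subclass (or, in C, one new struct of function pointers), not touching the application.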

There are many advantages to platform development that developers should keep in mind:

  • Leveraging existing software to prevent reinventing the wheel
  • Faster time to market
  • Potential to decrease overall project costs
  • Increased firmware robustness

There are certainly a few potential issues that developers should be concerned with, as well:

  • Platform licensing models
  • Cost to change platforms if direction changes in the future
  • Becoming dependent upon a third-party platform
  • Having too much free time due to smoothly moving projects

The truth is that embedded system development has become increasingly complex in the past decade as microcontrollers have grown exponentially in capability. That capability has been driven by mobile technologies and the need for more connectivity in our devices. Yet the typical development timeline has stayed roughly the same. With more to do, smaller budgets, and the same time to do it in, developers need to become smarter and find new methods and ways to develop their systems without compromising robustness, integrity, and features.

One possible solution is to use embedded platforms such as the Renesas Synergy Platform, Electric Imp, and Microchip Harmony, among others. (These are the platforms I’ve had the opportunity to explore so far.) Platforms vary from those that extend traditional developers’ capabilities to those that introduce radically different development techniques. In either case, given typical time, budget, and feature constraints, it is very obvious that building embedded systems from the ground up will very soon no longer be an option.

Credits: Htmlgoodies

During the development of a website, developers must use a variety of tools: software to create the site, edit graphics, transfer files, and SSH or telnet into the server. This article covers five of the best.

NoteTabPro

NoteTabPro is a text editor on steroids. It supports HTML, Perl, LaTeX, ASP, Java, JavaScript, PHP, AutoLISP, SQL, COBOL, 4DOS, JCL, VHDL, ADO, VBScript, VRML and more. It features a tabbed interface, as well as “clipbook” libraries for HTML, JavaScript and CSS that make the task of editing a web page a lot easier. There is a “light” version that is free, a full “Pro” version that retails for $29.95, and a Standard version for $19.95.

WS_FTP Pro

Obviously you will need a tool that enables you to move files from your local machine to your web server. IPSwitch’s WS_FTP Pro is an industry standard that has been used by web developers for years, with over 40 million users worldwide. It allows you to transfer files over FTP, SSL, SSH and HTTP/S transfer protocols. It is also secure, with 256-bit AES encryption, FIPS 140-2 validated cryptography, OpenPGP file encryption, and file integrity validation up to SHA-512. This isn’t your father’s File Transfer Protocol (FTP) software. It retails for $54.95, or $89.95 with a one-year support agreement, and a free demo version is available for download.

PuTTY

PuTTY is a free implementation of Telnet and SSH which can be used on Windows and Unix platforms, and it includes an xterm terminal emulator. It supports standard telnet sessions, SSH-2 and SSH-1, as well as local echo. The software isn’t as full featured as some commercial SSH tools, but it will get the job done–and well. If you need to telnet or SSH into your web server, this is the tool to use.

Dreamweaver

Adobe’s Dreamweaver CS5 is a full-featured WYSIWYG editor that allows developers to design visually as well as directly within the code. It supports PHP-based CMSes such as Drupal, WordPress and Joomla, and enables developers to create websites using HTML 5. It also features CSS Starter Layouts to get you started, and is integrated with Adobe BrowserLab, which allows developers to preview dynamic web pages and local content using multiple views and diagnostic and comparison tools. It retails for $399, and a demo version is available for download for free.

PaintShop Pro

Corel’s PaintShop Pro has been a web developer’s friend since the days when Jasc Software owned it, more than six years ago. It allows you to import, edit and share your images. It enables those of us without graphic skills to make quick fixes to images via its Express Lab feature. Creating GIF images with transparent backgrounds is a snap. It also allows users to upload images directly to Facebook, YouTube and Flickr. PaintShop Pro retails for $99.99, and a free trial version is available for download.

As you can see, the software you choose can make your life easier and enhance the development process from start to finish. If you know of other tools that belong in every web developer’s toolbox, let us know so we can spread the word!

Credits: Cio-today

As Microsoft continues to roll out new preview builds of its next Windows 10 update, it is also working to make those releases easier and more efficient to download. The new Unified Update Platform (UUP) will become available for developers in stages, with the first version — for Windows Mobile — announced yesterday.

In addition to the Unified Update Platform, Microsoft yesterday also released an Insider Preview Build for the next major refresh of Windows 10. Set for general release in early 2017, the so-called “Creators Update” of Windows 10 will place a heavy emphasis on 3D imaging, painting and other creativity tools.

The new Windows Build 14959 for Mobile and PC gives developers in the Insider Fast ring a chance to take a number of new features for a test run, and also fixes known issues with how the previous build managed applications, displays and settings. Microsoft is encouraging developers to install the latest build ahead of a problem-finding “Bug Bash” set to start next Monday.

‘Differential Downloads’ for Efficiency

Up until now, when Microsoft released a major update of its Windows operating system, users have had to download the entire update package, which could be both time-consuming and resource-intensive. With UUP, however, the download size of updates will be reduced, with more of the heavy lifting handled on Microsoft’s cloud rather than on the side of the customer’s device.

“We have converged technologies in our build and publishing systems to enable differential downloads for all devices built on the Mobile and PC OS,” Bill Karagounis, director of program management for the Windows Insider Program and OS Fundamentals, wrote yesterday on the Windows blog. “A differential download package contains only the changes that have been made since the last time you updated your device, rather than a full build.”

With updates on PCs, for example, users can expect to see the download size for major Windows updates reduced by around 35 percent, Karagounis said. Users updating on mobile devices, meanwhile, will see more of the processing handled by the Windows Update service, which will help improve update speeds and device battery life.
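
A toy sketch can show why differential downloads shrink: if builds are compared chunk by chunk, only the chunks that differ need to travel. This illustrates the general idea only, not Microsoft’s actual delta format:

```python
import hashlib

CHUNK = 4  # tiny chunk size, for illustration only

def chunks(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def make_delta(old, new):
    # Only chunks whose hashes differ from the old build travel.
    old_hashes = [hashlib.sha256(c).digest() for c in chunks(old)]
    delta = {}
    for i, c in enumerate(chunks(new)):
        if i >= len(old_hashes) or hashlib.sha256(c).digest() != old_hashes[i]:
            delta[i] = c
    return delta

def apply_delta(old, delta, new_len):
    # The device rebuilds the new image from its old chunks plus the delta.
    out = chunks(old)
    for i, c in delta.items():
        if i < len(out):
            out[i] = c
        else:
            out.append(c)
    return b"".join(out)[:new_len]

old, new = b"aaaabbbbcccc", b"aaaaXXXXcccc"
delta = make_delta(old, new)
print(len(delta), apply_delta(old, delta, len(new)) == new)  # 1 True
```

When two builds mostly overlap, the delta holds a small fraction of the chunks, which is the effect behind the roughly 35 percent reduction cited above.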

The rollout of UUP will also streamline updates on Windows Mobile so users do not have to install more than one build at a time to get the latest version of the OS.

“On your phone, we would sometimes require you to install in two-hops (updates) to get current,” Karagounis noted. “With UUP, we now have logic in the client that can automatically fallback to what we call a ‘canonical’ build, allowing you to update your phone in one-hop, just like the PC.”

Developers Now Testing Paint 3D

With the latest Insider Preview build released yesterday, developers will be able to test such coming Windows features as Paint 3D, part of the Creators Update arriving next year. Build 14959 also adds new display scaling capabilities, fixes previous issues with automatic brightness settings and resolves a tap-to-pay problem on Windows Mobile.

More new features unveiled during a live-streamed Windows event last week will be rolled out to developers in additional builds over the coming weeks, according to Dona Sarkar, software engineer for the Windows and Device Group.

“As I mentioned last week, Windows is an iceberg, the features that people ‘see’ are quite a small percent of the engineering work that we do to enable new UI to be visible,” Sarkar wrote yesterday in a blog post. “We’re excited to get more of the new Creators Update features in the hands of Insiders in the next couple of months.”

Microsoft will also kick off its Bug Bash starting next Monday, Nov. 7, with Windows engineers and Insiders beginning on the same day, Sarkar said. In the past, in-house engineers could get started a day ahead of Insiders on Microsoft-issued “quests” for glitches and problems. The Bug Bash is scheduled to run through Nov. 13.

Credits: Heise

Software Collections 2.3, now available in beta, includes the stand-alone Eclipse Neon alongside PHP 7.0 and MySQL 5.7. Developer Toolset 6.0 ships GCC 6 for the first time.

Four months after the previous release, Red Hat has published updated versions of the Software Collections and the Developer Toolset. With version 4.6.1, Eclipse Neon moves out of the toolset for the first time and into Software Collections 2.3 as an independent collection. Other new additions are MySQL 5.7, Redis 3.2 and PHP 7.0, as well as Git 2.9 and the JVM monitoring tool Thermostat. The MongoDB database has been updated to version 3.2. Several scripting languages are also included in more recent versions, including PHP 5.6, Python 3.5 and Ruby 2.3.

Developer Toolset 6.0 includes GCC 6 (version 6.2.1) for the first time. In addition, there are numerous updates to the utilities, including binutils 2.27, elfutils 0.167, Valgrind 3.12, SystemTap 3.0 and Dyninst 9.2.0.

Additional information is available on Red Hat’s developer blog. Software Collections 2.3 and Developer Toolset 6.0 are currently in beta. Both packages are part of Red Hat Enterprise Linux (RHEL) subscriptions, and RHEL is also available to developers free of charge through the Red Hat Enterprise Linux Developer Suite.

Credits: Toptechnews

Fighting computer viruses isn’t just for software anymore. Binghamton University researchers will use a grant from the National Science Foundation to study how hardware can help protect computers too.

“The impact will potentially be felt in all computing domains, from mobile to clouds,” said Dmitry Ponomarev, professor of computer science at Binghamton University, State University of New York. Ponomarev is the principal investigator of a project titled “Practical Hardware-Assisted Always-On Malware Detection.”

More than 317 million pieces of new malware–computer viruses, spyware, and other malicious programs–were created in 2014 alone, according to work done by Internet security teams at Symantec and Verizon. Malware is growing in complexity, with crimes such as digital extortion (a hacker steals files or locks a computer and demands a ransom for decryption keys) becoming large avenues of cyber attack.

“This project holds the promise of significantly impacting an area of critical national need to help secure systems against the expanding threats of malware,” said Ponomarev. “[It is] a new approach to improve the effectiveness of malware detection and to allow systems to be protected continuously without requiring the large resource investment needed by software monitors.”

Countering threats has traditionally been left solely to software programs, but Binghamton researchers want to modify a computer’s central processing unit (CPU) chip–essentially, the machine’s brain–by adding logic to check for anomalies while running a program like Microsoft Word. If an anomaly is spotted, the hardware will alert more robust software programs to check out the problem. The hardware won’t be right about suspicious activity 100 percent of the time, but since the hardware is acting as a lookout at a post that has never been monitored before, it will improve the overall effectiveness and efficiency of malware detection.

“The modified microprocessor will have the ability to detect malware as programs execute by analyzing the execution statistics over a window of execution,” said Ponomarev. “Since the hardware detector is not 100-percent accurate, the alarm will trigger the execution of a heavy-weight software detector to carefully inspect suspicious programs. The software detector will make the final decision. The hardware guides the operation of the software; without the hardware the software will be too slow to work on all programs all the time.”
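
The two-stage division of labor Ponomarev describes can be sketched as follows. This is a toy illustration, not the researchers’ actual detector; the event names, features and thresholds are invented:

```python
def fast_anomaly_score(window):
    # A cheap, always-on check, standing in for the hardware detector:
    # here, the fraction of indirect branches in a window of events.
    return sum(1 for ev in window if ev == "indirect_branch") / len(window)

def heavy_software_check(window):
    # Stand-in for the slow but thorough software detector that
    # makes the final decision.
    return window.count("indirect_branch") > 3

def monitor(windows, threshold=0.5):
    flagged = []
    for w in windows:
        if fast_anomaly_score(w) > threshold:   # "hardware" raises an alarm
            if heavy_software_check(w):         # "software" confirms it
                flagged.append(w)
    return flagged

benign = ["load", "store", "branch", "load"]
shady = ["indirect_branch"] * 5 + ["load"]
print(len(monitor([benign, shady])))  # 1
```

The point of the arrangement is that the expensive check runs only on the small fraction of windows the cheap check flags, which is why the software alone would be too slow to watch all programs all the time.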

The modified CPU will use low complexity machine learning–the ability to learn without being explicitly programmed–to classify malware from normal programs, which is Yu’s primary area of expertise.

“The detector is, essentially, like a canary in a coal mine to warn software programs when there is a problem,” said Ponomarev. “The hardware detector is fast, but is less flexible and comprehensive. The hardware detector’s role is to find suspicious behavior and better direct the efforts of the software.”

Much of the work–including exploration of the trade-offs of design complexity, detection accuracy, performance and power consumption–will be done in collaboration with former Binghamton professor Nael Abu-Ghazaleh, who moved on to the University of California-Riverside in 2014.

Lei Yu, associate professor of computer science at Binghamton University, is a co-principal investigator of the grant.

Grant funding will support graduate students who will work on the project both in Binghamton and California, conference travel and the investigation itself. The three-year grant is for $275,

Credits: Timesunion

General Electric Co. has started using augmented reality devices as the company takes a major plunge into the use of artificial intelligence and virtual reality.

At the 2016 GE Minds + Machines conference held this week in San Francisco, Colin Parris, the vice president of GE Software Research, demonstrated how employees are talking to machines and interacting with them using Microsoft’s HoloLens augmented reality device.

GE has created so-called “digital twins” of the machines that it sells — a steam turbine for instance — that are digital replicas of actual machines at customer sites. The company has created a software system that allows customers to speak to the digital twin and ask it questions about potential parts breakdowns, financial forecasts and the best way to fix problems.

The digital twins are loaded with data they can crunch to provide the best advice — which is given in real language not unlike Siri on the iPhone.

“This is happening now,” Parris, who works in Niskayuna, said after he talked to a digital twin of a steam turbine at a customer site in Southern California. “What you saw was an example of the human mind working with the mind of a machine.”

The digital twin can run thousands of simulations at a time using environmental and operational data to predict breakdowns or other events.

And when a machine needs to be fixed, GE and its customers can use augmented reality to look inside those machines without having to actually touch them.

Parris put on a Microsoft HoloLens — an augmented reality headset — to superimpose the digital twin over a picture of the actual steam turbine. The HoloLens allowed him to open up the turbine and look at the parts — and see exactly which part may need replacing.

Parris said GE has been partnering with Microsoft on augmented reality technology. He says that AR, as it is also called, can help GE executives redesign a factory floor by moving parts around in augmented reality.

It can also help with training and production, helping to teach workers how to assemble parts even before they ever step on a factory floor.