An artificial intelligence system learning how to play a game by watching someone play it is old news by this stage, as we’ve had countless examples over the past few years of AIs learning how to play and beat human competitors in everything from the board game Go to the Atari classic Montezuma’s Revenge.
However, a team of researchers from the Georgia Institute of Technology have gone one step further by developing an AI system capable of recreating a video game by watching someone play it.
The team’s recent paper, ‘Game Engine Learning from Video’, describes how the AI system is able to recreate every element of the NES version of Super Mario Bros. by studying the game’s pixels and how they interact with one another when various game commands are entered, e.g. making Mario jump.
The AI system had no access to the game’s code, and instead used two sets of information to recreate what it’s witnessed.
The first set is a visual dictionary containing all of the game’s sprites, the individual images used to build a 2D game’s levels.
The second set covers a handful of basic concepts, such as the position of objects and how objects change while the game is being played, e.g. jumping on an enemy and making it disappear.
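To make the two inputs concrete, here is a minimal sketch in Python. All names here (`Fact`, `sprite_dictionary`, `parse_frame`) are hypothetical illustrations, not the researchers’ actual data structures: a sprite dictionary maps sprite names to pixel images, while each observed frame is reduced to a set of simple facts such as object positions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    """A basic concept observed in a single frame, e.g. an object's position."""
    concept: str   # the kind of fact, e.g. "position"
    obj: str       # sprite name, e.g. "mario" or "goomba"
    value: tuple   # e.g. (x, y) pixel coordinates for a position fact


# The visual dictionary: sprite name -> pixel data (placeholders here).
sprite_dictionary = {
    "mario": "<16x16 pixel image>",
    "goomba": "<16x16 pixel image>",
}


def parse_frame(frame_objects):
    """Turn the sprites detected in one frame into a set of position facts."""
    return {Fact("position", name, pos) for name, pos in frame_objects}


# One frame in which Mario stands near a goomba.
facts = parse_frame([("mario", (32, 80)), ("goomba", (64, 80))])
```

The point of representing frames as fact sets rather than raw pixels is that changes between frames become easy to compare.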
Essentially, the AI system is given a roadmap of the game’s elements, e.g. Mario, and actions, e.g. jumping from one platform to another.
With this information, the AI is able to watch the game being played, determine the rules that govern gameplay (e.g. if this action is taken, then this event occurs), and use those rules to recreate what it’s witnessed.
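A toy illustration of that rule-learning idea, not the paper’s actual algorithm: given a trace of observations pairing an action with the facts before and after it, propose an if-then rule whenever the same action reliably produces the same change.

```python
from collections import Counter


def infer_rules(trace):
    """trace: list of (action, facts_before, facts_after) observations.

    Proposes a rule whenever an action is always followed by the same
    set of facts appearing and disappearing.
    """
    changes = Counter()
    for action, before, after in trace:
        appeared = frozenset(after - before)
        disappeared = frozenset(before - after)
        changes[(action, appeared, disappeared)] += 1

    # Keep only changes seen every single time the action occurred.
    totals = Counter(action for action, _, _ in trace)
    rules = []
    for (action, appeared, disappeared), count in changes.items():
        if count == totals[action]:
            rules.append({
                "if_action": action,
                "then_appears": set(appeared),
                "then_disappears": set(disappeared),
            })
    return rules


# Two observations of a "stomp" action: the goomba fact disappears both times,
# so a single consistent rule is inferred.
trace = [
    ("stomp", {"goomba_alive"}, set()),
    ("stomp", {"goomba_alive", "mario_small"}, {"mario_small"}),
]
rules = infer_rules(trace)
```

This is deliberately simplistic; the real system reasons over pixel-level sprites and many interacting concepts, but the if-this-then-that structure of the learned rules is the same.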
It’s quite an astonishing leap in technology and one that could have a big influence on the video games industry in years to come, as it may enable less skilled developers to use more complicated games as a template for their own creations.