Video games may be responsible for rewiring the human brain in strange new ways, but they may also provide educational value…for AI algorithms.
Adrien Gaidon, a computer scientist at the Xerox Research Center Europe in Grenoble, France, remembers watching people play Assassin’s Creed when he realized that the game’s photo-realistic scenery might offer a useful way to teach AI algorithms about the real world. Gaidon is now testing his theory by developing highly realistic 3D environments for training artificial intelligence algorithms to recognize particular real-world objects and scenarios.
Right now, AI algorithms require huge quantities of data in order to learn how to perform even the simplest of tasks. Sometimes this is not a problem. Companies like Facebook and Google, for example, have massive amounts of data to spare for training their algorithms to automatically tag friends in photos or allow cars to drive themselves.
But small start-ups and businesses working on artificial intelligence typically do not have the means to gather the enormous data sets required for an algorithm to learn and perfect a task.
To level the playing field, Gaidon and his colleagues used a popular game development engine, called Unity, to generate virtual scenes for training deep-learning algorithms—very large simulated neural networks—to recognize objects and situations in real images. Unity is widely used to make 3D video games, and many common objects are available for developers to use in their creations.
A paper on the Xerox team’s work will be presented at a computer vision conference later this year. By creating a virtual world and letting an algorithm see lots of variations from different angles and with different lighting, it is possible to teach that algorithm to recognize the same object in real images or video footage. “The nice thing about virtual worlds is you can create any kind of scenario,” Gaidon says.
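The Xerox team’s actual pipeline isn’t described in detail here, but the core appeal of the approach, that every rendered image comes with a free, perfect label, can be sketched in a few lines of Python. Everything below is illustrative: `render_scene` is a hypothetical stand-in for a game-engine capture call (e.g. a Unity camera render), not a real API.

```python
import random

def render_scene(object_id, angle, brightness):
    """Hypothetical stand-in for a game-engine render call.

    A real pipeline would return pixels; here we return a small
    feature tuple just to illustrate the data flow.
    """
    return (object_id, angle % 360, round(brightness, 1))

def generate_training_set(object_ids, samples_per_object=100, seed=0):
    """Render each object under many random viewpoints and lighting
    conditions. Because we placed the object ourselves, the label
    comes for free -- no human annotation required."""
    rng = random.Random(seed)
    dataset = []
    for obj in object_ids:
        for _ in range(samples_per_object):
            angle = rng.randint(0, 359)          # random camera angle
            brightness = rng.uniform(0.2, 1.0)   # random lighting level
            image = render_scene(obj, angle, brightness)
            dataset.append((image, obj))         # (image, label) pair
    return dataset

data = generate_training_set(object_ids=[0, 1, 2])
print(len(data))  # 300 automatically labeled examples
```

The payoff is the last line: scaling to millions of labeled examples is just a bigger loop, whereas with real photographs each label costs human effort.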
Gaidon’s group also devised a way to convert a real scene into a virtual one by using a laser scanner to capture a scene in 3-D and then importing that information into the virtual world. The group was able to measure the accuracy of the approach by comparing algorithms trained within virtual environments with ones trained using real images annotated by people. “The benefits of simulation are well known,” he says, “but [we wondered], can we generate virtual reality that can fool an AI?”
The Xerox research team hopes to apply the technique in two ways. First, they plan to use it to find empty parking spots on the street using cameras fitted to buses. Normally this would involve collecting lots of video footage and having someone manually annotate the empty spaces; with the virtual environment created by the Xerox team, a huge amount of training data can be generated automatically. Second, they are exploring whether it could be used to learn about medical issues using virtual hospitals and patients.
The challenge of learning from less data is a well-known problem among computer scientists, and it is both inspiring to researchers and terrifying to this reporter that progress is being made toward making AI learn in a more human way.
“I think this is a very good idea,” says Josh Tenenbaum, a professor of cognitive science and computation at MIT, of the Xerox project. “It’s one that we and many others have been pursuing in different forms.”
What do you think about AI getting smarter through video games? Are you worried that Judgment Day (à la Terminator 2) is fast approaching?
[h/t: Technology Review]