Google AI Introduced PlaNet – A Deep Planning Network for Reinforcement Learning


Google AI has introduced PlaNet – A Deep Planning Network for Reinforcement Learning, a new reinforcement learning agent that learns a world model from pixel input.

The research team from Google AI has published a paper titled Learning Latent Dynamics for Planning from Pixels, introducing a new model-based agent that learns environment dynamics solely by observing images. PlaNet, as it is nicknamed, learns a world model from image input alone and uses it to plan its actions.

According to Google AI, PlaNet is able to solve a wide variety of image-based control tasks in an efficient manner.

“PlaNet solves a variety of image-based control tasks, competing with advanced model-free agents in terms of final performance while being 5000% more data efficient on average,” says Danijar Hafner, Student Researcher at Google AI.

The key idea behind PlaNet is the so-called latent dynamics model, which gives the approach its name. Instead of predicting the next image from preceding ones, the model predicts a compact latent state forward in time. By doing so, the agent embeds the relevant information in a lower-dimensional space and can learn more abstract representations. The goal is to disentangle and learn quantities such as the positions and velocities of objects, so as to construct a dynamics model.
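The contrast between predicting pixels and predicting latent states can be sketched as below. This is an illustrative toy, not the paper's architecture: the dimensions are hypothetical and random linear maps stand in for the learned encoder and transition networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not from the paper).
IMG_DIM, LATENT_DIM, ACTION_DIM = 64 * 64, 30, 2

# Random linear maps stand in for the learned networks.
W_enc = rng.normal(size=(LATENT_DIM, IMG_DIM)) * 0.01                 # encoder: image -> latent
W_dyn = rng.normal(size=(LATENT_DIM, LATENT_DIM + ACTION_DIM)) * 0.1  # latent transition model

def encode(image):
    """Compress a high-dimensional image into a compact latent state."""
    return W_enc @ image

def predict_next(latent, action):
    """Predict the next latent state directly, never decoding back to pixels."""
    return W_dyn @ np.concatenate([latent, action])

image = rng.normal(size=IMG_DIM)
z = encode(image)
z_next = predict_next(z, np.zeros(ACTION_DIM))
print(z.shape, z_next.shape)
```

The payoff is that multi-step predictions needed for planning stay in the cheap 30-dimensional latent space rather than the 4096-dimensional pixel space.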

The researchers also introduce two novelties in their approach: a Recurrent State Space Model and a latent overshooting objective. They show that their approach successfully captures the dynamics of the environment and that the agent can leverage it to plan its next actions.

More about PlaNet can be found in the official blog post and in the paper. The code has also been released as open source and is available here.
