PYMNTS.com September 25, 2025
Google DeepMind introduced two artificial intelligence models to help developers build robots that can understand their environment and perform complex tasks.
The new models build upon the Gemini Robotics models the company introduced in March by adding advanced thinking that enables agentic experiences, according to a Thursday (Sept. 25) blog post.
The new Gemini Robotics 1.5 is a vision-language-action (VLA) model that turns visual information and instructions into motor commands, while the new Gemini Robotics-ER 1.5 is a vision-language model (VLM) that creates multistep plans to complete a mission, the post said.
Gemini Robotics-ER 1.5 was made available to developers Thursday, while Gemini Robotics 1.5 is offered only to select partners, per the post.
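For developers experimenting with the newly available planning model, a minimal sketch of a request through the Gemini API might look like the following. The google-genai Python SDK and the model identifier "gemini-robotics-er-1.5-preview" are assumptions for illustration, not details stated in the article:

```python
# Minimal sketch: asking Gemini Robotics-ER 1.5 to produce a multistep plan
# from an image of a scene. The SDK (google-genai) and the model name are
# assumptions, not details confirmed by the article.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # hypothetical placeholder key

with open("workbench.jpg", "rb") as f:  # hypothetical scene image
    scene = f.read()

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed model identifier
    contents=[
        types.Part.from_bytes(data=scene, mime_type="image/jpeg"),
        "List the steps a robot arm should take to sort these objects "
        "into the matching bins.",
    ],
)
print(response.text)  # the model's multistep plan, returned as text
```

In this sketch the vision-language model handles only the planning step described in the post; executing the resulting steps as motor commands would fall to the separate Gemini Robotics 1.5 action model, which is limited to select partners.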
Carolina Parada, senior engineering manager at...







