PYMNTS.com September 25, 2025

Google DeepMind introduced two artificial intelligence models to help developers build robots that can understand their environment and perform complex tasks.

The new models build upon the Gemini Robotics models the company introduced in March by adding advanced thinking that enables agentic experiences, according to a Thursday (Sept. 25) blog post.

The new Gemini Robotics 1.5 is a vision-language-action (VLA) model that turns visual information and instructions into motor commands, while the new Gemini Robotics-ER 1.5 is a vision-language model (VLM) that creates multistep plans to complete a mission, the post said.

Gemini Robotics-ER 1.5 was made available to developers Thursday, while Gemini Robotics 1.5 is available only to select partners, per the post.

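Because Gemini Robotics-ER 1.5 is served through the Gemini API, a developer could request a multistep plan with a call along these lines. This is a minimal sketch, assuming the google-genai Python SDK; the model ID "gemini-robotics-er-1.5-preview" and the prompt are illustrative assumptions, not details confirmed by the article.

```python
# Minimal sketch: asking a Gemini Robotics-ER-style model for a multistep
# plan. The model ID "gemini-robotics-er-1.5-preview" and the prompt are
# illustrative assumptions, not details confirmed by this article.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

# Current camera frame of the robot's workspace.
scene = Image.open("workbench.jpg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed model ID
    contents=[
        scene,
        "Plan the steps a robot arm should take to sort the objects on "
        "this bench into the correct bins. Return a numbered list.",
    ],
)

# In the two-model setup the article describes, each planned step would
# then be handed to the VLA model (Gemini Robotics 1.5) to be turned
# into motor commands.
print(response.text)
```
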
Carolina Parada, senior engineering manager at...

