
Google Gemini Robotics 1.5: AI Robots That Think Before Acting

Article Highlights:
  • Google introduces Gemini Robotics 1.5, AI model family for intelligent general-purpose agentic robots
  • Two complementary models: Gemini Robotics 1.5 (VLA) for actions and Gemini Robotics-ER 1.5 (VLM) for planning
  • "Think before acting" capability with transparent natural language reasoning sequences
  • Cross-embodiment learning: automatic skill transfer between robots with different physical configurations
  • Gemini Robotics-ER 1.5 achieves state-of-the-art performance on 15 academic spatial understanding benchmarks
  • Available to developers via Gemini API in Google AI Studio; VLA model for select partners
  • Holistic safety approach with evaluation on upgraded ASIMOV benchmark for semantic and physical safety

Introduction

Google has announced Gemini Robotics 1.5, a family of artificial intelligence models designed to bring AI agents into the physical world. This innovation represents a significant step toward creating truly intelligent general-purpose robots capable of perceiving, planning, thinking, using tools, and acting to solve complex multi-step tasks autonomously.

Gemini Robotics 1.5 introduces advanced agentic capabilities that go beyond simply executing commands: robots can now reason, actively plan, and generalize their skills across different physical configurations.

The Two Models of Gemini Robotics 1.5

Google has developed two complementary models that work together in an agentic framework to enable advanced robotic experiences:

Gemini Robotics 1.5 (VLA)

Gemini Robotics 1.5 is Google's most capable vision-language-action model. It transforms visual information and instructions into motor commands that enable the robot to perform a specific task. The distinctive feature is the ability to think before acting, showing its reasoning process to assess and complete complex tasks transparently. The model also learns across different physical embodiments, significantly accelerating the acquisition of new skills.

Gemini Robotics-ER 1.5 (VLM)

Gemini Robotics-ER 1.5 represents Google's most advanced vision-language model for reasoning about the physical world. It functions as a "high-level brain" orchestrating the robot's activities, excelling at planning and making logical decisions within physical environments. The model can natively call digital tools like Google Search to look for information or use custom third-party functions, and creates detailed multi-step plans to complete a mission. It currently achieves state-of-the-art performance on spatial understanding benchmarks.

How the Agentic Framework Works

Most daily tasks require contextual information and multiple steps to complete, making them notoriously challenging for today's robots. Gemini Robotics 1.5 addresses this challenge through intelligent collaboration between the two models.

For example, if a robot were asked, "Based on my location, can you sort these objects into the correct compost, recycling and trash bins?", it would need to search the internet for the relevant local recycling guidelines, look at the objects in front of it, figure out how to sort them based on those rules, and then execute all the steps needed to complete the task.

Gemini Robotics-ER 1.5 orchestrates activities by providing Gemini Robotics 1.5 with natural language instructions for each step. Gemini Robotics 1.5 then uses its vision and language understanding to carry out each specific action, thinking through what it is doing to better solve semantically complex tasks and explaining its reasoning in natural language, which makes its decisions more transparent.
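This division of labor can be pictured with a short sketch. The code below is purely illustrative: the class names, methods, and canned plan are hypothetical placeholders standing in for the two models, not Google's published robotics API.

```python
# Purely illustrative sketch of the orchestrator/executor split described above.
# Class names, methods, and the canned plan are hypothetical placeholders,
# not the actual Gemini Robotics API.

class Orchestrator:
    """High-level reasoner, playing the role of Gemini Robotics-ER 1.5."""

    def plan_next_step(self, mission: str, done: list[str]) -> str | None:
        # A real system would call the VLM here, possibly using tools such as
        # web search; this stub just returns the next canned instruction.
        steps = [
            "look up the local recycling guidelines",
            "put the plastic bottle in the recycling bin",
            "put the apple core in the compost bin",
        ]
        remaining = [s for s in steps if s not in done]
        return remaining[0] if remaining else None


class ActionModel:
    """Low-level VLA model, playing the role of Gemini Robotics 1.5."""

    def execute(self, instruction: str) -> None:
        # A real VLA model would turn the instruction plus camera input into
        # motor commands; here we just log the step.
        print(f"executing: {instruction}")


def run_mission(mission: str) -> None:
    brain, body = Orchestrator(), ActionModel()
    done: list[str] = []
    while (step := brain.plan_next_step(mission, done)) is not None:
        body.execute(step)  # one natural-language instruction at a time
        done.append(step)


run_mission("Sort these objects into the correct compost, recycling and trash bins")
```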

Advanced Capabilities and Performance

Understanding the Environment

Gemini Robotics-ER 1.5 is the first thinking model optimized for embodied reasoning. It achieves state-of-the-art performance on 15 academic benchmarks, including Embodied Reasoning Question Answering (ERQA) and Point-Bench, which measure capabilities such as pointing, image question answering, and video question answering. Google also evaluates the model on internal benchmarks inspired by real-world use cases from its trusted tester program.

Thinking Before Acting

Traditional vision-language-action models translate instructions or linguistic plans directly into robot movements. Gemini Robotics 1.5 goes beyond this, generating an internal sequence of reasoning and analysis in natural language to perform tasks that require multiple steps or deeper semantic understanding.

For example, when completing a task like "Sort my laundry by color", the robot thinks at different levels: it first understands that sorting by color means putting white clothes in the white bin and other colors in the black bin; it then plans the steps to take, such as picking up the red sweater and putting it in the black bin; and it reasons about the detailed motions involved, like moving the sweater closer so it is easier to pick up.

Learning Across Embodiments

Robots come in all shapes and sizes, with different sensing capabilities and degrees of freedom, making it difficult to transfer motions learned from one robot to another. Gemini Robotics 1.5 demonstrates a remarkable ability to learn across different physical embodiments, transferring motions learned from one robot to another without needing to specialize the model for each new configuration.

This breakthrough accelerates learning new behaviors, making robots smarter and more useful. Google observed that tasks only presented to the ALOHA 2 robot during training also work on Apptronik's humanoid robot Apollo and the bi-arm Franka robot, and vice versa.

Availability and Developer Access

Starting today, Gemini Robotics-ER 1.5 is available to developers via the Gemini API in Google AI Studio. Gemini Robotics 1.5 is currently available to select partners. Google encourages the robotics community to explore the model's potential for building the next generation of physical agents.
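For developers trying the ER model, a call through the Gemini API might look like the minimal sketch below, using the google-genai Python SDK. The model identifier and the image file are assumptions; check Google AI Studio for the exact model name currently exposed.

```python
# Minimal sketch of querying Gemini Robotics-ER 1.5 through the Gemini API
# with the google-genai Python SDK. The model identifier below is an
# assumption -- confirm the exact name in Google AI Studio.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Hypothetical camera frame of the robot's workspace.
with open("workbench.jpg", "rb") as f:
    image = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed model ID
    contents=[image, "Point to every object on the table that belongs in the recycling bin."],
)
print(response.text)
```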

Safety and Responsible Development

In developing embodied AI capabilities, Google is proactively developing novel safety and alignment approaches to enable agentic AI robots to be responsibly deployed in human-centric environments.

The Responsibility & Safety Council (RSC) and Responsible Development & Innovation (ReDI) team partner with the Robotics team to ensure that development of these models aligns with Google's AI Principles. Gemini Robotics 1.5 implements a holistic approach to safety through high-level semantic reasoning, including thinking about safety before acting, ensuring respectful dialogue with humans via alignment with existing Gemini Safety Policies, and triggering low-level safety subsystems (e.g., for collision avoidance) on-board the robot when needed.

To guide safe development of Gemini Robotics models, Google has released an upgrade of the ASIMOV benchmark, a comprehensive collection of datasets for evaluating and improving semantic safety, with better tail coverage, improved annotations, new safety question types, and new video modalities. In safety evaluations on ASIMOV, Gemini Robotics-ER 1.5 shows state-of-the-art performance, and its thinking ability significantly contributes to improved understanding of semantic safety and better adherence to physical safety constraints.

Conclusion

Gemini Robotics 1.5 marks an important milestone toward solving AGI in the physical world. By introducing agentic capabilities, Google is moving beyond models that react to commands, creating systems that can truly reason, plan, actively use tools, and generalize.

This represents a foundational step toward building robots that can navigate the complexities of the physical world with intelligence and dexterity, and ultimately become more helpful and integrated into our lives. The company is excited to continue this work with the broader research community and looks forward to seeing what the robotics community builds with the latest Gemini Robotics-ER model.

FAQ

What is Google's Gemini Robotics 1.5?

Gemini Robotics 1.5 is a family of AI models from Google designed to bring agentic intelligence into the physical world. It includes two models: Gemini Robotics 1.5 (VLA) that translates vision and language into robotic actions, and Gemini Robotics-ER 1.5 (VLM) that orchestrates complex activities with advanced reasoning and multi-step planning.

How does the "think before acting" capability work in Gemini Robotics 1.5?

Gemini Robotics 1.5 generates an internal sequence of reasoning in natural language before performing physical actions. This allows the robot to analyze complex tasks, evaluate options, and explain its decision-making process transparently, improving resolution of semantically complex tasks.

Can Gemini Robotics 1.5 be used on different types of robots?

Yes, Gemini Robotics 1.5 demonstrates remarkable cross-embodiment learning capability. It can transfer skills learned on one robot to others with different physical configurations, without needing specialization for each embodiment, accelerating learning of new behaviors.

Is Gemini Robotics-ER 1.5 available for developers?

Yes, Gemini Robotics-ER 1.5 is available to developers via the Gemini API in Google AI Studio. Gemini Robotics 1.5 is currently available only to select partners.

What tools can Gemini Robotics-ER 1.5 use?

Gemini Robotics-ER 1.5 can natively call digital tools like Google Search to look for information online and can use custom user-defined third-party functions, allowing robots to access external knowledge to complete complex tasks.
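As a rough illustration, enabling the built-in Google Search tool for a Gemini API request looks like the sketch below (google-genai Python SDK). The model identifier is an assumption, and custom third-party functions would be registered analogously via types.Tool(function_declarations=[...]).

```python
# Sketch of enabling the built-in Google Search tool for a Gemini API call.
# The model identifier is an assumption; user-defined functions are declared
# via types.Tool(function_declarations=[...]) in the same tools list.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

config = types.GenerateContentConfig(
    tools=[types.Tool(google_search=types.GoogleSearch())],  # built-in web search
)

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed model ID
    contents="Which of these items are recyclable under the local guidelines: "
             "plastic bottle, greasy pizza box, aluminum can?",
    config=config,
)
print(response.text)
```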

How does Google ensure safety in robots with Gemini Robotics 1.5?

Google implements a holistic approach to safety including semantic reasoning before acting, alignment with Gemini Safety Policies for respectful interactions, and activation of safety subsystems for collision avoidance. Models are evaluated on the upgraded ASIMOV benchmark for semantic and physical safety.
