Google released an AI model that teaches robots to understand the physical environment
4/16/2026, 07:27 AM • Евгения Слив

Google has launched Gemini Robotics-ER 1.6, a model designed to help robots understand their surroundings and operate in the real world. At its core is the principle of "embodied reasoning," which lets machines not only execute commands but also interpret visual information, plan a sequence of steps, and determine on their own when a task is complete. This marks a shift from rigidly programmed mechanisms to context-aware systems.
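The plan-execute-verify loop described above can be sketched as follows. This is a minimal illustration, not Google's API: the `plan`, `execute_step`, and `is_complete` functions are hypothetical stand-ins for what the model does internally.

```python
def plan(instruction):
    # Hypothetical stand-in: the model decomposes a command into stages.
    return [f"stage {i} of '{instruction}'" for i in range(1, 4)]

def execute_step(step, state):
    # Hypothetical stand-in: the robot performs one stage and
    # the world state is updated.
    state.append(step)
    return state

def is_complete(steps, state):
    # The key "embodied reasoning" idea: the system itself decides
    # when the task is finished, rather than stopping on a fixed script.
    return len(state) == len(steps)

def run(instruction):
    steps, state = plan(instruction), []
    while not is_complete(steps, state):
        state = execute_step(steps[len(state)], state)
    return state
```

The point of the sketch is the termination condition: completion is judged from observed state, not hard-coded into the command sequence.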
The model has advanced considerably in spatial analysis: it recognizes and counts objects more accurately, establishes relationships between them, and can "point" at objects while reasoning, breaking complex operations into stages. A notable addition, developed in collaboration with Boston Dynamics, is the ability to read analog gauges and digital displays. Accuracy in interpreting readings from gauges and screens has risen to 93%, thanks to a combination of visual estimation and software calculation.
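The split between visual estimation and software calculation can be illustrated with a toy gauge reader. This is an assumption-laden sketch, not the actual pipeline: here the vision model is imagined to report only a needle angle, and plain code converts that angle to a value using the gauge's known scale.

```python
def read_gauge(needle_angle_deg, min_val, max_val,
               min_angle=-45.0, max_angle=225.0):
    """Convert a visually estimated needle angle to a gauge reading.

    The angle is the vision model's approximation; the numeric
    conversion below is the deterministic software-calculation half.
    Scale endpoints are hypothetical defaults for a typical dial.
    """
    frac = (needle_angle_deg - min_angle) / (max_angle - min_angle)
    return min_val + frac * (max_val - min_val)

# A needle pointing straight up (90°) on a 0-100 dial reads mid-scale.
reading = read_gauge(90.0, min_val=0.0, max_val=100.0)
```

The division of labor matters: the model only has to estimate a geometric quantity it is good at perceiving, while the error-prone arithmetic is done exactly in code.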
Google emphasizes that this is the company's safest robotics model to date: it is better at recognizing potential hazards and follows safety rules for handling dangerous objects.
