Google DeepMind is Alphabet’s AI research lab focused on building general‑purpose AI systems and applying them to science and real‑world problems. In robotics, DeepMind takes an AI‑first approach: rather than designing task‑specific machines, it develops large multimodal models that let existing robots see, understand language, plan and act. The Gemini Robotics models extend the Gemini family with capabilities tailored to embodied agents, enabling robots to interpret visual scenes, follow natural‑language instructions and manipulate tools with greater flexibility. Gemini Robotics 1.5 has demonstrated the ability to carry out diverse long‑horizon mobile‑manipulation tasks without bespoke training for each scenario, while Gemini Robotics On‑Device runs directly on robot hardware to provide low‑latency control even without a network connection. DeepMind collaborates with hardware makers, universities and internal Google teams to test these models on arms, mobile platforms and humanoid robots, aiming to make real‑world robots more helpful, safe and adaptable in everyday environments.
Google DeepMind develops general‑purpose AI models and applies them to robotics so that existing machines can perceive, reason and act more intelligently.
Gemini Robotics is a set of Gemini‑based models tailored for embodied agents, combining vision, language and action to control robots.
These models allow robots to interpret scenes, ground language in perception, plan multi‑step tasks and adapt to new situations.
Gemini Robotics can be applied to manipulators, mobile bases, humanoids and other platforms that can stream sensor data and accept high‑level commands.
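To make the platform requirement concrete, here is a minimal sketch of the kind of control loop such a platform would expose: it streams sensor frames and accepts high-level commands. All names (`SensorFrame`, `Command`, `VLAPolicy`) are hypothetical stand-ins for illustration, not part of any real Gemini Robotics API.

```python
# Hypothetical sketch: any robot that can stream observations and accept
# high-level commands could be driven by a vision-language-action model.
# Every name below is illustrative; the policy is a stub, not a real model.
from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One observation streamed from the robot: camera image plus joint state."""
    image: bytes
    joint_positions: list


@dataclass
class Command:
    """A high-level command the platform executes with its own low-level controller."""
    action: str        # e.g. "move_arm", "grasp", "navigate"
    parameters: dict


class VLAPolicy:
    """Stand-in for a vision-language-action model (stubbed for illustration)."""

    def decide(self, instruction: str, frame: SensorFrame) -> Command:
        # A real model would fuse the instruction with the visual observation;
        # here we return a fixed placeholder command.
        return Command(action="grasp", parameters={"target": "mug"})


def control_step(policy: VLAPolicy, instruction: str, frame: SensorFrame) -> Command:
    """One perceive -> decide -> act cycle of the loop described above."""
    return policy.decide(instruction, frame)


frame = SensorFrame(image=b"", joint_positions=[0.0, 0.5, -0.3])
cmd = control_step(VLAPolicy(), "pick up the mug", frame)
print(cmd.action)  # -> grasp
```

The design point is the separation of concerns: the model reasons at the level of high-level commands, while each platform's own controller translates them into motor signals.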
Gemini Robotics On‑Device is an optimized version that runs locally on robot hardware for low‑latency control when cloud connectivity is limited or unavailable.
Gemini‑based systems are designed to follow natural‑language instructions, ask clarifying questions when a request is ambiguous, and coordinate complex sequences of actions.
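The decide-then-clarify behavior can be sketched as follows. The heuristic here (string matching against visible objects) is purely illustrative; a real Gemini-based system would use the model itself to judge ambiguity and produce the plan.

```python
# Hypothetical sketch of an instruction-following agent that either asks a
# clarifying question or emits a multi-step action sequence. The matching
# heuristic is illustrative only, not how a real model resolves instructions.

def plan_or_clarify(instruction: str, visible_objects: list) -> dict:
    """Return a clarifying question if no target matches, else an action plan."""
    targets = [obj for obj in visible_objects if obj in instruction]
    if not targets:
        return {
            "type": "question",
            "text": f"I don't see a matching object for '{instruction}'. "
                    "Which object do you mean?",
        }
    # Coordinate a simple multi-step sequence for the first matched target.
    target = targets[0]
    return {
        "type": "plan",
        "steps": [f"locate {target}", f"approach {target}", f"grasp {target}"],
    }


result = plan_or_clarify("grasp the apple", ["apple", "banana"])
print(result["type"])  # -> plan
```

When the instruction names nothing in view, the same function returns a question instead of a plan, which mirrors the clarify-before-acting behavior described above.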
Warehousing, manufacturing, service robots, assistive devices and home robots can all benefit from more general, adaptable control models.
DeepMind regularly publishes papers, open‑sources code and collaborates with partners; developers can follow the official blog, public code repositories and research releases to stay up to date.