{"id":29351,"date":"2025-09-26T08:50:20","date_gmt":"2025-09-26T08:50:20","guid":{"rendered":"https:\/\/metaverseplanet.net\/blog\/?p=29351"},"modified":"2026-01-06T13:00:30","modified_gmt":"2026-01-06T13:00:30","slug":"gemini-robotics-1-5-making-robots-smarter","status":"publish","type":"post","link":"https:\/\/metaverseplanet.net\/blog\/gemini-robotics-1-5-making-robots-smarter\/","title":{"rendered":"Gemini Robotics 1.5: Making Robots Smarter"},"content":{"rendered":"\n<p>Google DeepMind has introduced robots that &#8220;think before they act&#8221; with <strong>Gemini <em><a href=\"https:\/\/metaverseplanet.net\/blog\/explore-the-latest-trends-and-innovations-in-robotics\/\" data-type=\"category\" data-id=\"119\">Robotics<\/a><\/em><\/strong>. Here are the details of the new generation of <strong>AI-powered robots<\/strong>.<\/p>\n\n\n\n<p>Generative AI systems that create text, images, audio, or video have become a part of daily life. Similarly, these models can now generate not just content, but also <strong>robot behaviors<\/strong>. This very idea forms the foundation of Google DeepMind&#8217;s <strong>Gemini Robotics<\/strong> project. 
Two new models developed within the scope of this project enable robots to &#8220;<strong>think before taking action<\/strong>.&#8221;<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">The Limited World of Traditional Robots<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"940\" height=\"529\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemini-robotics-15-ile-robotlar-daha-akilli-hale-geliyor-yrdt.webp\" alt=\"\" class=\"wp-image-29353\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemini-robotics-15-ile-robotlar-daha-akilli-hale-geliyor-yrdt.webp 940w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemini-robotics-15-ile-robotlar-daha-akilli-hale-geliyor-yrdt-300x169.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemini-robotics-15-ile-robotlar-daha-akilli-hale-geliyor-yrdt-768x432.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemini-robotics-15-ile-robotlar-daha-akilli-hale-geliyor-yrdt-390x220.webp 390w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemini-robotics-15-ile-robotlar-daha-akilli-hale-geliyor-yrdt-150x84.webp 150w\" sizes=\"(max-width: 940px) 100vw, 940px\" \/><\/figure>\n\n\n\n<p>Traditional robots are programmed with lengthy training sessions only for specific tasks, and as a result, they fail at other jobs. Carolina Parada, head of the robotics division at Google DeepMind, notes that most robots today can perform only a single task, even after months of preparation.<\/p>\n\n\n\n<p>However, <strong>Generative AI<\/strong> has the power to change this picture. 
This is because these systems possess the <strong>flexibility<\/strong> to work in new environments and with new tasks without requiring reprogramming.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Two Models, One Goal: Robots that Think and Act<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"478\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemii-qfro-1024x478.webp\" alt=\"\" class=\"wp-image-29354\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemii-qfro-1024x478.webp 1024w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemii-qfro-300x140.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemii-qfro-768x359.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemii-qfro-1536x717.webp 1536w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemii-qfro-150x70.webp 150w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/gemii-qfro-scaled.webp 1200w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>DeepMind&#8217;s new approach relies on two separate models:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Gemini Robotics-ER 1.5 (Embodied Reasoning):<\/strong> This model processes visual and text inputs to plan the steps required to complete a task. It is the &#8220;<strong>thinking<\/strong>&#8221; model.<\/li>\n\n\n\n<li><strong>Gemini Robotics 1.5:<\/strong> This model takes the instructions generated by the ER model and converts them into <strong>real robot movements<\/strong>. 
It is the &#8220;<strong>acting<\/strong>&#8221; model.<\/li>\n<\/ul>\n\n\n\n<p>Thanks to this duo, robots can develop <strong>smarter solutions<\/strong> for multi-step and complex tasks.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">The Robot&#8217;s New Intuition<\/h2>\n\n\n\n<p><strong>Gemini Robotics-ER 1.5<\/strong> adapts the <strong>reasoning capability<\/strong> we see in modern chatbots to robots. For instance, when laundry needs to be sorted by color, it analyzes the image of the environment and outlines the necessary steps. These steps are then converted into actual movements by <strong>Gemini Robotics 1.5<\/strong>.<\/p>\n\n\n\n<p>According to DeepMind researcher Kanishka Rao, the greatest advancement is that robots now adopt the &#8220;<strong>think first, then act<\/strong>&#8221; approach. This can be interpreted as a parallel to <strong>human intuition<\/strong>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Shared Learning Across Different Robots<\/h2>\n\n\n\n<p>The new systems are built upon the core <strong><em><a href=\"https:\/\/metaverseplanet.net\/blog\/tag\/gemini-news-and-content\/\" data-type=\"post_tag\" data-id=\"101\">Gemini<\/a><\/em> foundation models<\/strong> and have been specially trained to adapt to <strong>physical environments<\/strong>. This allows the robots to work on a <strong>wider range of tasks<\/strong> without being limited to a single one.<\/p>\n\n\n\n<p>Moreover, the learned information can be <strong>transferred<\/strong> to different types of robots. 
For example, skills acquired with the arms of <strong>Aloha 2<\/strong> can be applied to the humanoid robot named <strong>Apollo<\/strong> without the need for additional training.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Still Far from Daily Use<\/h2>\n\n\n\n<p>Although this is an exciting development, it will take time before we see &#8220;<strong>home robots that do the laundry<\/strong>.&#8221; <strong>Gemini Robotics 1.5<\/strong> is currently only open to <strong>select trusted testers<\/strong>. However, the thinking model, <strong>Gemini Robotics-ER 1.5<\/strong>, is now available to <strong>developers<\/strong> via <strong>Google AI Studio<\/strong>. This allows researchers to utilize this technology in their own robot experiments.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">You Might Also Like:<\/h3>\n\n\n<ul class=\"wp-block-latest-posts__list wp-block-latest-posts\"><li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/metaverseplanet.net\/blog\/the-dark-side-of-nanotechnology\/\">The Dark Side of Nanotechnology: Could Microscopic Swarms Erase Billions?<\/a><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/metaverseplanet.net\/blog\/the-illusion-of-digital-immortality\/\">The Illusion of Digital Immortality: Are You Really Uploading Your Mind?<\/a><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/metaverseplanet.net\/blog\/artemis-2s-deep-space-eclipse\/\">The View That Changes Everything: Artemis 2\u2019s Deep Space Eclipse<\/a><\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>Google DeepMind has introduced robots that &#8220;think before they act&#8221; with Gemini Robotics. Here are the details of the new generation of AI-powered robots. Generative AI systems that create text, images, audio, or video have become a part of daily life. 
Similarly, these models can now generate not just content, but also robot behaviors. This &hellip;<\/p>\n","protected":false},"author":1,"featured_media":29355,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"googlesitekit_rrm_CAown96uCw:productID":"","footnotes":""},"categories":[119],"tags":[101,64,345],"class_list":["post-29351","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-robotic","tag-gemini-news-and-content","tag-google-news-and-content","tag-robot-news"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts\/29351","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/comments?post=29351"}],"version-history":[{"count":0,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts\/29351\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/media\/29355"}],"wp:attachment":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/media?parent=29351"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/categories?post=29351"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/tags?post=29351"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}