{"id":21982,"date":"2025-03-24T10:51:48","date_gmt":"2025-03-24T10:51:48","guid":{"rendered":"https:\/\/metaverseplanet.net\/blog\/?p=21982"},"modified":"2026-01-05T09:16:57","modified_gmt":"2026-01-05T09:16:57","slug":"googles-ai-tool-gemini-can-now-see-the-world-using-your-phone-camera","status":"publish","type":"post","link":"https:\/\/metaverseplanet.net\/blog\/googles-ai-tool-gemini-can-now-see-the-world-using-your-phone-camera\/","title":{"rendered":"Google&#8217;s AI Tool Gemini Can Now &#8220;See&#8221; the World Using Your Phone Camera"},"content":{"rendered":"\n<p><strong>Google\u2019s generative AI model, Gemini<\/strong>, has now gained <strong>camera and screen recognition<\/strong> capabilities. However, this feature is <strong>exclusive to paid subscribers<\/strong>.<\/p>\n\n\n\n<p>The US-based tech giant has made a significant move with its <strong>Gemini AI<\/strong> model, allowing it to <strong>perceive the world visually\u2014just like humans do<\/strong>. But how exactly is this possible?<\/p>\n\n\n\n<p>According to a statement from <strong>Google spokesperson Alex Joseph<\/strong>, <strong>Gemini<\/strong> now supports <strong>camera and screen interaction<\/strong> through the <strong>Live<\/strong> feature. 
This means that while using <strong>Gemini Live<\/strong>, users can activate their phone\u2019s camera, show their surroundings to the AI, and receive assistance on virtually any topic.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Available Only to Google One AI Premium Subscribers<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/02\/gemini-1024x576.jpeg\" alt=\"\" class=\"wp-image-21905\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/02\/gemini-1024x576.jpeg 1024w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/02\/gemini-300x169.jpeg 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/02\/gemini-768x432.jpeg 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/02\/gemini-390x220.jpeg 390w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/02\/gemini-150x84.jpeg 150w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/02\/gemini-scaled.jpeg 1200w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Google clarified that <strong>Gemini<\/strong>\u2019s ability to process <strong>visual input<\/strong> through the camera and screen is currently <strong>restricted to Google One AI Premium subscribers<\/strong>.<\/p>\n\n\n\n<p>At the moment, this feature appears to be in a <strong>gradual rollout<\/strong> phase. Posts on <strong>Reddit<\/strong> indicate that while some users can already access it, others have yet to receive the update. 
This suggests that many users will need to wait a little longer before <strong>unlocking Gemini\u2019s visual capabilities<\/strong>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>A New Dimension for Generative AI<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"577\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-1024x577.jpeg\" alt=\"\" class=\"wp-image-21199\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-1024x577.jpeg 1024w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-300x169.jpeg 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-768x433.jpeg 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-1536x866.jpeg 1536w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-390x220.jpeg 390w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-150x85.jpeg 150w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2024\/12\/Google-Unveils-Gemini-2.0-the-Most-Advanced-Artificial-Intelligence-Model-Ever-2-scaled.jpeg 1200w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>By integrating <strong>real-world visual perception<\/strong>, <strong>Gemini<\/strong> takes a significant step 
forward in how generative AI interacts with users. This update allows for <strong>context-aware support<\/strong>, enabling users to show the AI their environment and ask questions related to what it sees\u2014such as identifying objects, reading signs, or understanding screen content.<\/p>","protected":false},"excerpt":{"rendered":"<p>Google\u2019s generative AI model, Gemini, has now gained camera and screen recognition capabilities. However, this feature is exclusive to paid subscribers. The US-based tech giant has made a significant move with its Gemini AI model, allowing it to perceive the world visually\u2014just like humans do. But how exactly is this possible? According to a &hellip;<\/p>\n","protected":false},"author":1,"featured_media":19371,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"googlesitekit_rrm_CAown96uCw:productID":"","footnotes":""},"categories":[332],"tags":[335,210],"class_list":["post-21982","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-information","tag-ai-news","tag-ai-tools-news"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts\/21982","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/comments?post=21982"}],"version-history":[{"count":0,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts\/21982\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/media\/19371"}],"wp:at
tachment":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/media?parent=21982"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/categories?post=21982"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/tags?post=21982"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}