{"id":29423,"date":"2025-09-27T07:09:05","date_gmt":"2025-09-27T07:09:05","guid":{"rendered":"https:\/\/metaverseplanet.net\/blog\/?p=29423"},"modified":"2026-01-03T13:18:22","modified_gmt":"2026-01-03T13:18:22","slug":"10-ai-tools-that-stole-the-show-this-week","status":"publish","type":"post","link":"https:\/\/metaverseplanet.net\/blog\/10-ai-tools-that-stole-the-show-this-week\/","title":{"rendered":"10 AI Tools That Stole the Show This Week!"},"content":{"rendered":"\n<p>The AI services being launched or announced continue to surprise us every week. We&#8217;ve compiled the most notable ones of the week for you.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">HUNYUAN3D 3.0<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"472\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/01-edao.webp\" alt=\"\" class=\"wp-image-29424\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/01-edao.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/01-edao-300x173.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/01-edao-768x442.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/01-edao-150x86.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p>Tencent released <strong>Hunyuan3D 3.0<\/strong>, offering 3 times higher <strong>precision<\/strong> and 1536\u00b0 <strong>ultra HD voxel modeling<\/strong>. It can capture missing details, realistic facial features, and professional-grade textures for gaming, film, and e-commerce applications. https:\/\/hunyuan-3d.com\/<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">WAN2.2<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"393\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/02-c0bm.webp\" alt=\"\" class=\"wp-image-29425\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/02-c0bm.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/02-c0bm-300x144.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/02-c0bm-768x368.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/02-c0bm-150x72.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p>Wan introduced <strong>Wan2.2<\/strong>, a 5B parameter <strong>video diffusion model<\/strong> with an <strong>MoE architecture<\/strong> that offers higher capacity for the same cost. It can deliver cinema-quality visuals, complex motion generation, and efficient 720p <strong>text-to-video<\/strong> and <strong>image-to-video<\/strong> output at 24 fps. 
\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">MOONDREAM 3<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"451\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/03-srvw.webp\" alt=\"\" class=\"wp-image-29426\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/03-srvw.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/03-srvw-300x165.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/03-srvw-768x422.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/03-srvw-150x83.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p><strong>Moondream 3<\/strong> launched as a 9B parameter <strong>MoE vision-language model<\/strong> with 2B active parameters, providing state-of-the-art <strong>visual reasoning<\/strong> in a compact, application-friendly design. https:\/\/moondream.ai\/<\/p>
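\n\n\n\n<p>As a rough illustration of how such a compact VLM is used in practice, here is a sketch via <code>transformers<\/code>; the repo id and the <code>caption<\/code>\/<code>query<\/code> helpers mirror earlier Moondream releases and are assumptions, not confirmed Moondream 3 documentation.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from PIL import Image\nfrom transformers import AutoModelForCausalLM\n\n# Assumed repo id; earlier Moondream releases shipped similar remote-code helpers.\nmodel = AutoModelForCausalLM.from_pretrained(\n    \"moondream\/moondream3-preview\", trust_remote_code=True, device_map=\"auto\"\n)\nimage = Image.open(\"photo.jpg\")\nprint(model.caption(image, length=\"short\")[\"caption\"])  # brief description\nprint(model.query(image, \"How many people are visible?\")[\"answer\"])<\/code><\/pre>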
\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">SRPO<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"531\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/04-ffwg.webp\" alt=\"\" class=\"wp-image-29427\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/04-ffwg.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/04-ffwg-300x194.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/04-ffwg-768x497.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/04-ffwg-150x97.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p>Tencent-Hunyuan unveiled <strong>SRPO<\/strong>, a diffusion fine-tuning method that stabilizes the training process, corrects noisy images, and shortens computation time. The method enables <strong>faster optimization<\/strong>, prevents reward hacking, and supports controllable style adjustments for models like FLUX.1-dev. https:\/\/www.srpo.net\/<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">REVE IMAGE<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"461\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/05-57o0.webp\" alt=\"\" class=\"wp-image-29428\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/05-57o0.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/05-57o0-300x169.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/05-57o0-768x432.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/05-57o0-390x220.webp 390w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/05-57o0-150x84.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p>Reve launched <strong>Reve Image<\/strong>, which combines <strong>image generation<\/strong>, restyling, a <strong>drag-and-drop editor<\/strong>, a creative assistant, and a beta API. Users can create and edit images with <strong>natural language<\/strong> and integrate Reve&#8217;s capabilities into their own applications. https:\/\/app.reve.com\/<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">LING-FLASH 2.0<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"437\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/06-ekyk.webp\" alt=\"\" class=\"wp-image-29429\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/06-ekyk.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/06-ekyk-300x160.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/06-ekyk-768x409.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/06-ekyk-150x80.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p><strong>Ling-flash-2.0<\/strong>, now <strong>open-source<\/strong>, is a 100B parameter <strong>MoE LLM<\/strong> with 6.1B active parameters. Trained on 20T+ tokens, it delivers state-of-the-art results in <strong>complex reasoning<\/strong>, <strong>code generation<\/strong>, and <strong>frontend development<\/strong>, outperforming dense models under 40 billion parameters. https:\/\/huggingface.co\/inclusionAI\/Ling-flash-2.0<\/p>
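\n\n\n\n<p>A minimal sketch of running the checkpoint with standard <code>transformers<\/code> chat templating; the generation settings are illustrative, and <code>trust_remote_code<\/code> is an assumption based on how similar MoE releases ship.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from transformers import AutoModelForCausalLM, AutoTokenizer\n\nrepo = \"inclusionAI\/Ling-flash-2.0\"\ntok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(\n    repo, torch_dtype=\"auto\", device_map=\"auto\", trust_remote_code=True\n)\n\nmessages = [{\"role\": \"user\", \"content\": \"Write a binary search in Python.\"}]\nprompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\ninputs = tok(prompt, return_tensors=\"pt\").to(model.device)\nout = model.generate(**inputs, max_new_tokens=256)\n# Decode only the newly generated tokens, skipping the prompt.\nprint(tok.decode(out[0][inputs[\"input_ids\"].shape[1]:], skip_special_tokens=True))<\/code><\/pre>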
\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">VOXCPM<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"487\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/07-cu00.webp\" alt=\"\" class=\"wp-image-29430\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/07-cu00.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/07-cu00-300x178.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/07-cu00-768x456.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/07-cu00-150x89.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p><strong>VoxCPM<\/strong>, a tokenizer-free <strong>TTS model<\/strong> powered by MiniCPM-4, offers <strong>zero-shot voice cloning<\/strong> and hyper-realistic speech with natural prosody. Trained on over 1.8 million hours of data, it achieves state-of-the-art performance. https:\/\/voxcpm.com\/<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">UMO<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"407\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/08-jas1.webp\" alt=\"\" class=\"wp-image-29431\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/08-jas1.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/08-jas1-300x149.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/08-jas1-768x381.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/08-jas1-150x74.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p>ByteDance introduced <strong>UMO<\/strong>, a <strong>unified multi-identity optimization<\/strong> framework for image customization. It ensures high <strong>identity consistency<\/strong>, reduces entanglement among multiple reference images, and will be fully <strong>open-source<\/strong> with models, scripts, and training code. https:\/\/bytedance.github.io\/UMO\/<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">RAY3<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"634\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/09-rmjd.webp\" alt=\"\" class=\"wp-image-29432\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/09-rmjd.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/09-rmjd-300x232.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/09-rmjd-768x594.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/09-rmjd-150x116.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p>Luma AI introduced <strong>Ray3<\/strong>, the first <strong>reasoning video model<\/strong> capable of producing <strong>studio-quality HDR<\/strong>. Its new draft mode enables fast iteration with improved physics and coherence, and the model is now available for free in <strong>Dream Machine<\/strong>. https:\/\/lumalabs.ai\/ray<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">PAPER2AGENT<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"820\" height=\"453\" src=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/10-gpmp.webp\" alt=\"\" class=\"wp-image-29433\" srcset=\"https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/10-gpmp.webp 820w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/10-gpmp-300x166.webp 300w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/10-gpmp-768x424.webp 768w, https:\/\/metaverseplanet.net\/blog\/wp-content\/uploads\/2025\/09\/10-gpmp-150x83.webp 150w\" sizes=\"(max-width: 820px) 100vw, 820px\" \/><\/figure>\n\n\n\n<p>The newly announced <strong>Paper2Agent<\/strong> framework automatically converts <strong>academic papers<\/strong> into interactive <strong>AI agents<\/strong>. Using multiple sub-agents, the system builds a robust <strong>Model Context Protocol (MCP)<\/strong> server from a paper&#8217;s text and code, enabling the resulting agent to apply the paper&#8217;s methods and data to new projects. https:\/\/github.com\/jmiao24\/Paper2Agent<\/p>
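\n\n\n\n<p>To make the MCP idea concrete, here is a minimal sketch of the kind of tool server such an agent wraps, using <code>FastMCP<\/code> from the official MCP Python SDK; the tool itself is a hypothetical stand-in for a paper&#8217;s method, not code generated by Paper2Agent.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from mcp.server.fastmcp import FastMCP  # official MCP Python SDK\n\nmcp = FastMCP(\"paper-demo\")\n\n@mcp.tool()\ndef run_analysis(values: list[float]) -> float:\n    \"\"\"Hypothetical stand-in for a method extracted from a paper.\"\"\"\n    return sum(values) \/ len(values)\n\nif __name__ == \"__main__\":\n    mcp.run()  # exposes the tool to any MCP-capable agent or client<\/code><\/pre>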
\n\n\n\n<h3 class=\"wp-block-heading\">You Might Also Like:<\/h3>\n\n\n<ul class=\"wp-block-latest-posts__list wp-block-latest-posts\"><li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/metaverseplanet.net\/blog\/the-dark-side-of-nanotechnology\/\">The Dark Side of Nanotechnology: Could Microscopic Swarms Erase Billions?<\/a><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/metaverseplanet.net\/blog\/the-illusion-of-digital-immortality\/\">The Illusion of Digital Immortality: Are You Really Uploading Your Mind?<\/a><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/metaverseplanet.net\/blog\/artemis-2s-deep-space-eclipse\/\">The View That Changes Everything: Artemis 2\u2019s Deep Space Eclipse<\/a><\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>The AI services launched or announced every week continue to surprise us. We&#8217;ve compiled the most notable ones of the week for you. HUNYUAN3D 3.0 Tencent released Hunyuan3D 3.0, offering 3 times higher precision and 1536\u00b3 ultra HD voxel modeling. It can capture fine details, realistic facial features, and professional-grade textures for gaming, film, &hellip;<\/p>\n","protected":false},"author":1,"featured_media":24583,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"googlesitekit_rrm_CAown96uCw:productID":"","footnotes":""},"categories":[332],"tags":[334,209],"class_list":["post-29423","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-information","tag-ai-tools","tag-list-of-ai-tools"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts\/29423","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/comments?post=29423"}],"version-history":[{"count":0,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/posts\/29423\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/media\/24583"}],"wp:attachment":[{"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/media?parent=29423"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/categories?post=29423"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/metaverseplanet.net\/blog\/wp-json\/wp\/v2\/tags?post=29423"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}