It’s fair to say that OpenAI has positioned itself at the forefront of developing artificial intelligence tools that have become integral to our daily lives over the past year. The company currently sets the standard for generating images, text, and video. However, it is also making strides to expand its horizons.
A few weeks ago, the AI powerhouse entered into a partnership with the humanoid robot startup Figure. A new video showcasing the robot created from this collaboration was released today, offering a glimpse that is sure to captivate and possibly send shivers down your spine.
Figure’s humanoid robot can now have conversations with humans
A video featuring the Figure 01 robot, developed through the partnership between the two companies, was recently shared on Figure’s official social media channels. The footage demonstrates that the robot can now engage in full conversations with humans, perform simple daily tasks, comprehend what it sees, respond to requests, and simultaneously speak and take action. It even knows how to handle dishes.
OpenAI’s models give the robot its visual reasoning and language capabilities. Specifically, the robot uses a large vision-language model (VLM) that OpenAI developed for Figure. Figure’s own neural networks, in turn, handle the fast, low-level actions required to actually control the robot.
For instance, in one segment of the video, a person asks the robot, “Give me something to eat.” Figure 01, using OpenAI’s model, surveys the table, identifies an apple, and hands it to the person. When questioned about its choice, the robot explains, “The only item on the table that could be eaten was the apple.” As the robot continues to advance, it could plausibly evolve into a physical embodiment of ChatGPT.
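The division of labor described above can be sketched in code. To be clear, this is a hypothetical illustration, not anything Figure or OpenAI has published: every name here (`plan_with_vlm`, `LowLevelPolicy`, the hard-coded apple logic) is invented to show the general pattern of a vision-language model proposing a high-level action that a separate, faster controller turns into motor primitives.

```python
# Hypothetical sketch of the two-layer design: a vision-language model (VLM)
# picks a high-level action from what the robot sees plus the spoken request,
# and a low-level policy expands that action into primitive motions.
# All interfaces here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Observation:
    """What the robot currently sees, summarized as labeled objects."""
    objects: list[str]


def plan_with_vlm(observation: Observation, request: str) -> str:
    """Stand-in for the VLM: map (scene, request) to one high-level action.

    A real system would call a multimodal model over camera frames; here we
    hard-code the 'give me something to eat' example from the video.
    """
    if "eat" in request.lower():
        edible = [o for o in observation.objects if o in {"apple", "banana"}]
        if edible:
            return f"hand over {edible[0]}"
    return "wait"


class LowLevelPolicy:
    """Stand-in for the fast neural-network controller: expands a
    high-level action into a sequence of primitive motions."""

    def execute(self, action: str) -> list[str]:
        if action.startswith("hand over "):
            item = action.removeprefix("hand over ").strip()
            return [f"locate {item}", f"grasp {item}", f"extend arm with {item}"]
        return ["hold position"]


scene = Observation(objects=["plate", "cup", "apple"])
action = plan_with_vlm(scene, "Give me something to eat")
steps = LowLevelPolicy().execute(action)
```

The point of the split is latency: the language model can take seconds to reason about the scene, while the low-level controller must react many times per second, so the two run at different rates and communicate only through compact high-level actions.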
According to Corey Lynch, head of Figure’s artificial intelligence division, the robot is able to:
- Describe its visual experience
- Plan future actions
- Reflect on its memory
- Explain its reasoning verbally
Figure has also received investment from companies such as Microsoft, NVIDIA, and Amazon
Brett Adcock, founder and CEO of Figure, previously stated that the collaboration with OpenAI would enable robots and humans to work and communicate side by side. By tapping into OpenAI’s language models through this partnership, Figure was able to upgrade its robots’ language-processing abilities quickly: according to Corey Lynch, speech capabilities that would normally take years to develop became a reality in just two weeks with OpenAI’s support.
As the CEO highlighted, this partnership will allow people to interact with robots using plain language, outlining the tasks they need accomplished. The robots, in turn, will interpret these instructions as actionable tasks and proceed accordingly. Previously, Figure has received investments from tech giants such as Microsoft, NVIDIA, and Amazon, amassing a total investment of over $675 million. With this substantial financial backing, the company’s goal is to expedite its artificial intelligence research and robot production efforts.
The next step after this development is the commercial sale of such robots. During this phase, factors including the robots’ safety, their operational domains, their cost, and the potential for return on investment will be continually assessed. We will keep you updated as soon as a definitive date is announced. What are your thoughts on these robots?