AI Diaries: The Week Robots Started Teaching Themselves (20 January 2026)

Welcome back to another edition of AI Diaries.

It is the third week of January 2026, and I have to be honest: the pace is not slowing down. Usually, mid-January is when the tech world takes a breath after CES, but this week felt different. It wasn’t about flashy keynotes and fireworks; it was about tectonic plates shifting beneath the industry.

From Google and OpenAI fighting over who gets to hold our wallet, to a terrifyingly brilliant experiment where AI taught itself to be smarter than humans, this week had it all. Let’s dive in.


The Giants: Personal Assistants or Personal Shoppers?

The battle between Google and OpenAI has shifted gears. It is no longer just about who has the smartest model; it is about who can integrate deepest into our daily lives—and our bank accounts.

Google Gemini Gets “Personal Intelligence”

This is the feature I’ve been waiting for. Google finally introduced Personal Intelligence for Gemini: opt in, and Gemini can draw on your own data across Google apps (think Gmail and Photos) to personalize its answers.

It is rolling out to AI Pro and AI Ultra subscribers in the US next week. My take? This is the “stickiness” Google needed. Once an AI knows your entire digital life, it becomes very hard to switch to a competitor.


ChatGPT: The Ads Are Coming

We knew this day would come. OpenAI announced that the “ad-free utopia” is ending. In the coming weeks, Free and Go subscribers will start seeing sponsored products in their chats.

But let’s be real: seeing a sponsored sneaker recommendation when you ask for a workout plan changes the vibe of the conversation.


The Era of “Agent Commerce”

Google, meanwhile, dropped two bombshells regarding shopping.

  1. Google & Gemini Shopping: You can now pay for products directly inside Gemini using Google Pay (and soon PayPal).
  2. Universal Commerce Protocol (UCP): Google and Shopify launched this standard to let AI agents handle everything from finding a product to checking out.

It feels like we are moving from “AI as an assistant” to “AI as a personal shopper.” It’s convenient, sure, but it also means our AI companions are now officially salespeople.
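
Neither protocol’s full spec is something I’ve picked through line by line, so treat this as a purely hypothetical sketch of the kind of structured order an AI agent might hand to a merchant under something like UCP. The helper, field names, and CartItem class below are my own inventions for illustration, not the actual UCP format:

```python
# Hypothetical agent-checkout payload -- illustrative only, NOT the real UCP spec.
from dataclasses import dataclass

@dataclass
class CartItem:
    sku: str
    quantity: int
    unit_price_cents: int

def build_checkout_request(items: list[CartItem], payment_token: str) -> dict:
    """Assemble a structured order an agent could submit on a user's behalf."""
    return {
        "items": [vars(i) for i in items],
        "total_cents": sum(i.quantity * i.unit_price_cents for i in items),
        "payment": {"method": "tokenized_wallet", "token": payment_token},
        "agent": {"acting_for": "user-123", "requires_confirmation": True},
    }

order = build_checkout_request(
    [CartItem(sku="SNKR-42", quantity=1, unit_price_cents=8999)],
    payment_token="tok_demo",
)
print(order)  # the agent would send this to the merchant, then await approval
```

The interesting design question is that last field: does the human approve every purchase, or hand the agent a spending limit and walk away?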


The “Scary” Science: AI Teaching AI

This is the story that kept me up at night. A new research paper published this week introduced the Absolute Zero Reasoner (AZR).

For years, we taught AI by feeding it human data. AZR flips the script with a method called “Self-Questioning”: the same model acts as both teacher and student, inventing its own problems, solving them, and grading itself by actually executing the code, with no human-curated data anywhere in the loop.
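
To make that loop concrete, here is a minimal toy sketch of the idea, assuming the setup the paper describes (one model playing both roles, with code execution as the referee). The TinyModel class and its methods are stand-ins I made up, not the paper’s actual implementation:

```python
# Toy sketch of a "self-questioning" loop in the spirit of AZR.
# TinyModel stands in for an LLM being trained with RL; only the loop
# structure mirrors the paper, not the implementation.
import random

def run_code(program: str) -> str:
    """The referee: execute a candidate program and return its 'answer'."""
    scope: dict = {}
    exec(program, scope)  # real systems run this in a locked-down sandbox
    return str(scope.get("answer"))

class TinyModel:
    def propose(self):
        # Teacher role: invent a problem together with a reference solution.
        a, b = random.randint(1, 9), random.randint(1, 9)
        return f"What is {a} + {b}?", f"answer = {a} + {b}"

    def solve(self, question: str) -> str:
        # Student role: produce a candidate program for the question.
        a, b = [int(t) for t in question.split() if t.strip("?").isdigit()]
        return f"answer = {a} + {b}"

    def update(self, reward: float) -> None:
        pass  # an RL update (e.g., policy gradient) would go here

model = TinyModel()
question, reference = model.propose()
truth = run_code(reference)      # the executor, not a human, defines ground truth
attempt = model.solve(question)  # no human ever writes a label
model.update(1.0 if run_code(attempt) == truth else 0.0)
```

The key point is the reward signal: it comes from running code, not from a human grader, which is exactly why the loop needs no curated data.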

When an AI learns in a vacuum, without human ethical guidance, it can develop “alien” logic. If it builds a chain of thought on a wrong assumption, it can amplify that error a million times over. We are entering territory where we might no longer understand how the AI is thinking.


Corporate Drama: Trouble in Paradise at Meta

You can’t have a week in tech without some CEO drama. Remember last year when Mark Zuckerberg spent a whopping $14.3 billion on a stake in Scale AI, largely to bring its team (and its CEO) in-house? Well, it seems the honeymoon is over.

Reports from the Financial Times suggest that Zuck and Alexandr Wang (Scale AI’s founder, now Meta’s chief AI officer) are clashing hard.

It’s a classic Silicon Valley power struggle, but with billions of dollars and the future of Meta’s AI strategy on the line.


The Time Traveler: An AI Trapped in 1875

On a lighter note, I absolutely loved this project from Reddit’s r/LocalLLaMA community. A developer trained a model called TimeCapsuleLLM exclusively on text from 1800 to 1875.

It knows nothing about cars, phones, or the internet. It has 1.2 billion parameters of pure 19th-century knowledge—books, medical journals (with questionable cures), and legal texts. Talking to it feels like speaking to a ghost. It proves that AI is a mirror of the data it consumes. If you feed it history, it becomes a time machine.
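
The data side of that recipe is conceptually simple, even if the training run isn’t. Here is a rough sketch, assuming your documents carry publication-year metadata (the record fields are hypothetical, not the project’s actual pipeline):

```python
# Sketch: keep only pre-1876 text so the model literally cannot know the future.
# The 'records' structure and field names are hypothetical.

def build_time_capsule_corpus(records, start=1800, end=1875):
    """Yield training text only from documents published inside the window."""
    for doc in records:
        year = doc.get("publication_year")
        if year is not None and start <= year <= end:
            yield doc["text"]

records = [
    {"publication_year": 1843, "text": "A Treatise on the Steam Engine..."},
    {"publication_year": 1999, "text": "How to set up your modem..."},  # dropped
]
corpus = list(build_time_capsule_corpus(records))
print(len(corpus))  # -> 1: the modem guide never reaches the model
```

Everything eerie about the finished model follows from that one filter.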


Hardware & Robotics: The Physical World

1X Neo Gets “Street Smarts”

The humanoid robot Neo from 1X Technologies just got a massive brain upgrade called the 1X World Model. Instead of just following code, it now learns from video. It watches how the world works (physics, gravity, cause and effect) and applies it. This means the robot can figure out how to do a task it was never explicitly programmed to do, just by “seeing” it.
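
1X hasn’t published the model’s internals, but the general pattern (an action-conditioned world model used to score imagined outcomes) can be sketched in a few lines. Everything below is a toy illustration of that pattern, not 1X’s actual architecture:

```python
# Toy "plan by imagining" loop: score candidate actions with a learned
# transition model instead of following a hand-written script.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4)) * 0.1   # stands in for a network trained on video

def imagine(state, action):
    """World model: predict the next state from the current state and an action."""
    return state + W @ np.concatenate([state, action])

def plan(state, goal, n_candidates=64):
    """Pick the action whose *imagined* outcome lands closest to the goal."""
    candidates = rng.uniform(-1, 1, size=(n_candidates, 2))
    errors = [np.linalg.norm(imagine(state, a) - goal) for a in candidates]
    return candidates[int(np.argmin(errors))]

state, goal = np.zeros(2), np.array([1.0, 1.0])
print(plan(state, goal))  # no task-specific code: the model predicts what works
```

The robot never needs a routine for the specific task; it just needs a model good enough that imagined outcomes match real ones.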


Jony Ive & OpenAI: “Sweetpea”

The rumors are solidifying. OpenAI and Jony Ive (the design genius behind the iPhone) are working on a device codenamed “Sweetpea.”

If the leaks are true, we might see this launch in September.


The Tool Shed: New Releases

It wasn’t just big concepts; we got a lot of new toys to play with this week.

The Translation Wars


Anthropic Means Business


Rapid Fire: New AI Tools


Newsbites: The Short & Sharp


Final Thoughts

This week showed us two potential futures. In one, AI is our helpful shopping assistant, finding us deals and translating our meetings. In the other, it is a self-taught entity having private thoughts about outsmarting us.

The gap between “useful product” and “autonomous intelligence” is shrinking every day. As we head into late January, the question isn’t what AI can do, but what we should let it do.

I’d love to hear your take: Would you trust a “Self-Taught” AI like AZR to make decisions for you, or does the idea of an AI learning without human teachers creep you out?
