There is a long-standing tradition in the world of computer engineering: whenever a new platform emerges, the first question developers ask is, “Can it run Doom?” Released in 1993, this legendary game has been run on countless unconventional devices, from calculators and robotic lawnmowers to blockchain systems and PDF files. Now, perhaps the most astonishing hardware has been added to this list: live human brain cells grown in a laboratory.
Cortical Labs, an Australia-based biotechnology company known for previous breakthroughs in the field, has announced that approximately 200,000 live human neurons working alongside a silicon chip have successfully learned to play Doom. However, according to researchers, this is not just a quirky experiment or a tech demo. It represents a significant scientific milestone, signaling that biological computers are inching closer to real-world applications.
Neurons and Silicon Combined
The system used in the experiment is built on Cortical Labs’ neural computing platform, CL-1. The company describes this system as “the first biological computer capable of running code,” having introduced it last year as the world’s first “synthetic biological intelligence.”
At the core of the system are human brain cells grown in a lab. Researchers cultivated around 200,000 neurons on a specialized surface called a microelectrode array, which both delivers electrical signals to the neurons and records the signals the neurons generate in response.
The CL-1 chip acts as a bridge between the digital computing world and the biological system. Digital data from the game is converted into patterns of electrical stimulation and delivered to the neurons. In turn, the electrical activity the neurons produce is translated back into digital commands that control the in-game character’s movements.
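To make the digital-to-biological direction of this bridge concrete, here is a minimal sketch of how a game frame might be reduced to one stimulation intensity per electrode. Everything here is an assumption for illustration, including the 8×8 electrode grid and the block-averaging scheme; Cortical Labs has not published its actual encoding.

```python
import numpy as np

def frame_to_stimulation(frame: np.ndarray, grid: int = 8) -> np.ndarray:
    """Reduce a grayscale game frame to one stimulation intensity per
    electrode by block-averaging onto an assumed grid x grid array.

    Illustrative sketch only, not Cortical Labs' actual encoding.
    """
    h, w = frame.shape
    frame = frame[: h - h % grid, : w - w % grid]        # crop to a multiple of the grid
    blocks = frame.reshape(grid, frame.shape[0] // grid,
                           grid, frame.shape[1] // grid)
    intensities = blocks.mean(axis=(1, 3)) / 255.0       # block means, normalized to [0, 1]
    return intensities.flatten()                         # one value per electrode

# A bright patch in the top-left of the screen yields strong stimulation
# only on the corresponding electrode.
demo = np.zeros((120, 160))
demo[:15, :20] = 255
pattern = frame_to_stimulation(demo)
```

The reverse direction (neural activity back to game commands) is the decoding half of the loop, which the article describes next.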
As Brett Kagan, Chief Scientific Officer at Cortical Labs, stated, “This was a major milestone because it proved adaptive, real-time, goal-directed learning.”
It All Started with Pong
Cortical Labs’ work on biological computers is not entirely new; it dates back a few years. In 2021, the company developed a biological chip capable of playing the classic Atari game Pong. In that project, between 800,000 and 1 million live brain cells were grown on microelectrode arrays, learning to control the on-screen paddle by exchanging electrical signals with a digital system.
Achieving that initial success required an intensive 18-month research process, during which the cells had to be carefully adapted to the game through a meticulous training regimen. For their next challenge, the team set their sights on something much more demanding: Doom.
While Pong is incredibly simple, Doom—even as a retro classic—features a vastly more complex structure. It takes place in a 3D environment with moving enemies, requiring map exploration and real-time decision-making. The biggest challenge for the researchers was translating the game’s visual data into a language that “eyeless” neurons could understand.
The solution was to convert the game’s video feed into patterns of electrical stimulation. Images on the game screen were analyzed and translated into signals that stimulated neurons in different regions. According to Cortical Labs CTO David Hogan, the control logic works like this: “If the neurons fire in a certain pattern, the Doom character shoots. If another pattern occurs, it moves to the right.” This electrical feedback loop allows the neurons to react to the game’s environment.
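Hogan’s description suggests a simple readout rule: group the recorded activity by electrode region and map the dominant region to a command. A hedged sketch of that idea, assuming four arbitrary regions on a 64-electrode readout (the real region boundaries and action mapping are not public):

```python
import numpy as np

# Illustrative decoder: region slices and action names are assumptions.
ACTIONS = {
    "shoot": slice(0, 16),
    "move_right": slice(16, 32),
    "move_left": slice(32, 48),
    "forward": slice(48, 64),
}

def decode_action(spike_counts: np.ndarray) -> str:
    """Return the action whose electrode region fired the most."""
    return max(ACTIONS, key=lambda a: spike_counts[ACTIONS[a]].sum())

counts = np.zeros(64)
counts[16:32] = 5            # strong firing in the second region
action = decode_action(counts)
```

Running the electrodes’ output through a rule like this on every frame is what closes the feedback loop the article describes.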
Trained in Just One Week
One of the most remarkable aspects of the Doom experiment was the speed of the process. Independent developer Sean Cole used Cortical Labs’ cloud platform and a Python-based programming interface to train the neurons to play the game in just about a week.
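The source does not detail Cole’s workflow, but a Python loop over a remote culture presumably looks something like the following sketch. `MockDish`, `stimulate`, `read_spikes`, and `run_episode` are invented stand-ins, not the actual Cortical Labs cloud API:

```python
import random

class MockDish:
    """Stand-in for a remote neural culture; the real cloud interface
    is not public, so every name here is hypothetical."""

    def __init__(self, electrodes: int = 64):
        self.electrodes = electrodes

    def stimulate(self, pattern):
        self._last_pattern = pattern         # pretend to drive the electrodes

    def read_spikes(self):
        # Random counts stand in for real recorded activity.
        return [random.randint(0, 5) for _ in range(self.electrodes)]

def run_episode(dish, frames, decode_fn):
    """One game episode: stimulate with each frame, decode a command."""
    actions = []
    for frame in frames:
        dish.stimulate(frame)
        actions.append(decode_fn(dish.read_spikes()))
    return actions

# In the earlier Pong work, feedback was reportedly delivered as
# predictable ("reward") versus unpredictable ("punishment")
# stimulation; a real training loop would add such a step here.
dish = MockDish()
actions = run_episode(dish, [[0.0] * 64] * 3, lambda spikes: "noop")
```

The point of the sketch is only that the developer-facing surface is ordinary Python, which is what makes a one-week turnaround by someone with limited biology experience plausible.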
According to Kagan, this highlights a major shift in the accessibility of biological computing. While the Pong experiment took years of dedicated research, the fact that a developer with limited biology experience could train the system in a matter of days proves the technology is rapidly maturing.
Weaker Than Humans, But Exceptionally Fast Learners
In its current state, the biological system’s performance is still quite limited. Researchers note that the neurons’ playstyle resembles that of an absolute beginner who has never seen a computer before. Despite this, the cells exhibit crucial behaviors: they can seek out enemies, fire weapons, and rotate the character.
While their performance lags far behind human players, researchers emphasize that their learning speed is incredibly impressive. In certain scenarios, biological systems have been observed to learn faster than silicon-based artificial intelligence algorithms. As new learning algorithms are applied, their performance is expected to improve significantly.
The “Black Box” Mystery
While the experiment and its results are groundbreaking, an interesting pattern is emerging. Just as scientists still don’t completely understand how Large Language Models (LLMs) make decisions inside their “black boxes,” Cortical Labs researchers face a similar dilemma with this bio-computer.
The scientific community cannot yet fully explain how the neurons are playing the game. For instance, how the neurons “understand” what is expected of them, or how they perceive a screen without eyes, remains an active area of research.
This mirrors the unanswered questions that frequently arise when humans engage with systems capable of “intelligent” tasks. However, there is a strong possibility that AI and bio-computing could uniquely complement one another in the future.
As AI advances, the computing power required to sustain it grows exponentially, creating a massive scaling problem. Biological computers could help solve or mitigate this processing power bottleneck. These systems could essentially allow machine intelligence to “learn continuously over its lifetime,” much like human brain cells do, suggesting that AI could continuously evolve within the same hardware footprint.
Furthermore, these biological frameworks could be integrated into robotics, such as humanoid robots. Technically speaking, playing Doom can be viewed as a simplified virtual version of controlling a robotic arm. These live neural networks possess fundamental advantages in handling complex environments, making decisions under uncertainty, and processing real-time data.
Best of all, the entire process is highly energy-efficient. The CL-1 setup used in these tests consumes only a few watts of power.
“The really exciting thing here isn’t just that a biological system can play Doom,” Kagan concludes. “It’s that it can handle complexity, uncertainty, and real-time decision-making.”