Biocomputing is one of the most bizarre frontiers in emerging technology, made possible by the fact that our neurons perceive and act in the world using the same language as computers – electrical signals. Human brain cells, grown in bulk on silicon chips, can receive electrical signals from a computer, try to understand them and talk back.
More importantly, they can learn. We first encountered this concept in the DishBrain project at Monash University in Australia, where researchers – who must have felt a little like Dr. Frankenstein – grew about 800,000 brain cells on a chip, placed it in a simulated environment, and watched as this hideous cyborg abomination learned to play Pong within five minutes. The project quickly attracted Australian military funding and grew into a company called Cortical Labs.
When we interviewed Cortical Labs Chief Scientific Officer Brett Kagan, he told us that even at this early stage, biocomputers enhanced with human neurons appear to learn much faster and consume far less power than today’s AI machine-learning chips, while demonstrating “more intuition, insight and creativity.” After all, our brains run nature’s most powerful computers on a measly 20 watts.
“We ran tests against reinforcement learning,” Kagan told us, “and found that in terms of the number of samples a system has to see before it starts to show meaningful learning, biological systems, even as basic and stupid as they are right now, still outperform the best deep learning algorithms that humans have created.”
One downside—besides some clearly thorny ethics—is that “wetware” components need to be kept alive. That means keeping them fed, watered, temperature-controlled and protected from germs and viruses. Cortical’s record for keeping a culture alive, as of 2023, was about 12 months.
Since then, we’ve covered similar projects at Indiana University — where researchers let brain cells self-organize into a three-dimensional, sphere-shaped ‘Brainoware’ organoid before sticking electrodes into it — and Swiss startup FinalSpark, which started using dopamine as a reward mechanism for its Neuroplatform biocomputer chips.
If you’re hearing about this brain-on-a-chip thing for the first time, pick your jaw up off the floor and read some of these links – it’s absolutely stunning work. And now Chinese scientists say they’re taking it to the next level.
The MetaBOC project (BOC for brain on a chip, of course) brings together researchers from Tianjin University’s Laboratory of Brain-Computer Interaction and Human-Computer Integration with other teams from Southern University of Science and Technology.
It is open-source software designed to act as an interface between brain-on-a-chip biocomputers and other electronic devices, giving brain organoids the ability to perceive the world through electrical signals, act on it using whatever controls they’re given, approach set tasks and learn to master them.
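The announcement doesn’t spell out what that interface looks like under the hood, but conceptually it has to be a closed loop: encode sensor data as stimulation patterns, write them into the dish, read the organoid’s spiking activity back out, and decode it into commands. The Python sketch below is purely illustrative – every class and function name in it is our own invention for the sake of the example, not anything taken from the MetaBOC codebase.

```python
# Illustrative sketch only: the real MetaBOC API is not public here, so
# encode/stimulate/read_spikes/decode are hypothetical stand-ins for the
# kind of closed loop such an interface would have to run.
import numpy as np

class OrganoidInterface:
    """Hypothetical loop between a sensor/actuator rig and an organoid chip."""

    def __init__(self, n_electrodes: int = 64):
        self.n_electrodes = n_electrodes

    def encode(self, sensor_reading: np.ndarray) -> np.ndarray:
        # Map sensor values (e.g. obstacle distances) onto per-electrode
        # stimulation amplitudes - the organoid only "sees" electrical signals.
        scaled = np.clip(sensor_reading, 0.0, 1.0)
        return np.resize(scaled, self.n_electrodes)

    def stimulate(self, pattern: np.ndarray) -> None:
        # Placeholder: real hardware would deliver this pattern via the
        # multi-electrode array driver.
        pass

    def read_spikes(self) -> np.ndarray:
        # Placeholder: real hardware would return spike counts per electrode;
        # here we fake some activity so the sketch runs end to end.
        return np.random.poisson(2.0, self.n_electrodes)

    def decode(self, spikes: np.ndarray) -> str:
        # Crude readout: compare firing on two electrode groups to pick a command.
        half = self.n_electrodes // 2
        left, right = spikes[:half].sum(), spikes[half:].sum()
        return "turn_left" if left > right else "turn_right"

# One step of the loop: sense -> stimulate -> record -> act.
interface = OrganoidInterface()
sensors = np.array([0.2, 0.9, 0.4])                   # e.g. three distance sensors
interface.stimulate(interface.encode(sensors))        # write the world into the dish
command = interface.decode(interface.read_spikes())   # read its "decision" back out
print(command)
```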
The Tianjin team says it is using sphere-shaped organoids, much like the Brainoware team in Indiana, because their three-dimensional physical structure allows them to form more complex neural connections, much as neurons do in our own brains. These organoids are grown under low-intensity focused ultrasound stimulation, which the researchers say appears to give them a better foundation of intelligence to build on.
The MetaBOC system also seeks to meet intelligence with intelligence, using AI algorithms within the software to communicate with the biological intelligence of the brain cells.
The Tianjin team specifically mentions robotics as an integration target, supplying the rather silly images above as if deliberately trying to undermine the piece’s credibility. A brain-on-a-chip biocomputer, the team says, could learn to drive a robot, interpret its sensory inputs, and attempt tasks such as avoiding obstacles, tracking targets, or learning to use arms and hands to grasp various objects.
Because the brain organoid can only “see” the world through the electrical signals fed to it, it could theoretically train itself to pilot its mini-gundam in a fully simulated environment, getting most of its falling and crashing out of the way without putting its squishy onboard brain at risk.
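Put another way: since the loop only ever trades in electrical signals, the same interface could in principle be driven by a physics simulator while the organoid is learning, then pointed at real hardware once its behavior looks stable. The sketch below (building on the hypothetical loop above) illustrates that idea; SimulatedRobot and RealRobot are made-up stand-ins, not anything the Tianjin team has published.

```python
# Hypothetical illustration: the organoid-facing loop is identical whether the
# signals come from a simulator or from real sensors and motors.
from typing import Protocol
import numpy as np

class SensorSource(Protocol):
    # Anything that can report sensor values and accept a motor command.
    def read_sensors(self) -> np.ndarray: ...
    def apply(self, command: str) -> None: ...

class SimulatedRobot:
    """Cheap, crash-proof stand-in used while the organoid is still learning."""
    def read_sensors(self) -> np.ndarray:
        return np.random.rand(3)          # fake distance readings
    def apply(self, command: str) -> None:
        pass                              # update the simulated world here

class RealRobot:
    """Swapped in later; same interface, real consequences for falling over."""
    def read_sensors(self) -> np.ndarray:
        raise NotImplementedError("hardware driver goes here")
    def apply(self, command: str) -> None:
        raise NotImplementedError("hardware driver goes here")

def run_step(robot: SensorSource, interface) -> None:
    # Identical loop either way: the organoid never knows which world it's in.
    interface.stimulate(interface.encode(robot.read_sensors()))
    robot.apply(interface.decode(interface.read_spikes()))
```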
Now, to be crystal clear, the fully exposed, pink lollipop brain organoids in the robot images above are mockups — “demonstration diagrams of future application scenarios” — rather than brain-driven prototypes. Perhaps the image below from Cortical Labs is a better representation of what these kinds of brain-on-chips will look like in the real world.
But either way, if you’ve built a small robot with adequate sensing and motor capabilities, we see no reason why human brain cells couldn’t soon be in there, trying to control it.
This is a phenomenal time for science and technology, with projects like Neuralink aiming to wire high-speed computer interfaces directly into your brain, projects like MetaBOC growing human brain cells into computers, and the booming artificial intelligence industry trying to beat the best of biological intelligence with a strange facsimile built entirely in silicon.
Science and technology are forced into philosophy when they run up against the limits of our understanding: are dish-brains conscious? Are AIs conscious? Both may end up being indistinguishable from sentient beings at some point in the near future. What are the ethics once that happens? Are they different for biological and silicon intelligence?
“Let’s say,” says Kagan in our wide-ranging interview, “that these systems actually develop consciousness – very unlikely in my opinion, but let’s say it happens. Then you have to decide: OK, is it actually ethical to test on them? Because we already test on creatures with consciousness – we test on animals that I think have some level of consciousness, without much worry… We eat animals, many of us, and that’s considered justifiable.”
Honestly, I can’t believe what I’m writing: that humanity is beginning to take the physical building blocks of its own mind and use them to build cyborg minds capable of intelligently controlling machines.
But this is life in 2024, as we accelerate full throttle toward a mysterious technological singularity, the point where AI surpasses our own intelligence and starts developing things even faster than humans can. The point at which technological progress—already occurring at an unprecedented rate—accelerates toward a vertical line and we lose control of it entirely.
What a time to be alive—and not as a cluster of cells attached to a chip in a dish. Well, as far as we know.
Source: Tianjin University