
Here Come the Cyborgs: Mating AI with Human Brain Cells

 


VN Alexander

If you read and believe headlines, it seems scientists are very close to being able to merge human brains with AI. In mid-December 2023, a Nature Electronics article triggered a flurry of excitement about progress on that transhuman front:

“‘Biocomputer’ combines lab-grown brain tissue with electronic hardware”

“A system that integrates brain cells into a hybrid machine can recognize voices”

“Brainoware: Pioneering AI and Brain Organoid Fusion”

Scientists are trying to inject human brain tissue into artificial networks because AI isn’t working quite as well as we have been led to think. AI uses a horrendous amount of energy to do its kind of parallel processing, while the human brain uses about a light bulb’s worth of power to perform similar feats. So AI designers are looking to cannibalize some parts from humans to make artificial networks work as efficiently as human brains. But let’s put the fact of AI’s shortcomings aside for the moment and examine this new cyborg innovation.

The breakthrough in biocomputing reported by Hongwei Cai et al. in Nature Electronics involves the creation of a brain organoid. That is a ball of artificially-cultured stem cells that have been coaxed into developing into neurons.

The cells are not taken from someone’s brain—which relieves us of certain ethical concerns. But because this lump of neurons does not have any blood vessels, as normal brain tissue does, the organoid cannot survive for long. And so ultimately, the prospect of training organoids on datasets does not seem practical, economically speaking, at present.

But that is not going to stop this research.  The drive to seamlessly integrate biology and technology is strong.  But can it be done?  And why do so many research scientists and funding agencies assume it’s possible?

Transhuman Hopes

Underlying the hopes of a transhumanist is a philosophy of materialism that follows a logic something like this: living systems are composed of matter and energy; the interactions of all matter and energy can be represented in code; and so the material used to create biohardware should be irrelevant and can be synthetic.

With such founding assumptions, transhumanists are confident they can learn to upgrade biological “hardware” with non-biological materials, and reprogram biological “software,” after cracking its “code,” and mix and match with electronics to augment human capabilities.

When researchers integrate brain tissue into an artificial network setup, they treat it as if it were the hardware they’re used to working with. They see each neuron as being either on or off—firing or not—like an electronic switch, and they see the dendrites connecting to other neurons like wires.

They see stronger connections between neurons as being “weighted,” in a statistical sense, through differential repeated interactions.
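To see just how reductive that picture is, here is a minimal sketch (my own toy example in Python, not anything from the Nature Electronics paper) of the “neuron” such a network actually uses: a weighted sum pushed through a switch.

```python
def artificial_neuron(inputs, weights, threshold=1.0):
    """Fire (1) if the weighted sum of inputs crosses the threshold, else stay silent (0)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# "Stronger connections" are just bigger weight numbers; training nudges them up or down.
inputs = [0.9, 0.2, 0.7]    # signals arriving from three upstream neurons
weights = [0.8, 0.1, 0.5]   # learned connection strengths
print(artificial_neuron(inputs, weights))   # -> 1, i.e. the neuron "fires"
```

Everything such a network “learns” lives in those weight numbers; nothing in the model corresponds to a membrane, an organelle, or a shared chemical milieu.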

Not incidentally, if people of this mindset were to exercise their influence in education, they would treat students like neural networks that can be programmed by rote memorization, and they would assume that they could better trigger the targeted response simply by applying rewards and punishments. This technique produces automatons, not critical thinkers. But that’s another essay.

Organoids Might Have a Different Kind of Intelligence

If researchers think of living systems as digitized computers, they are going to have trouble with their organoids. What if neurons process information very differently from the way that artificial neural nets do? What if neurons communicate with each other by propagating bioelectric waves through a medium? And what if, when they fire, it’s like raindrops creating concentric rings in a pool of water, with the clashing concentric rings creating interference patterns? What if it’s complicated?

Researchers in my field, Biosemiotics, are now asking such questions. And in their vision of brain activity, neurons are not just connected as if with wires, but are coordinated with each other by virtue of their shared milieu. When a human brain has a thought, three-dimensional bioelectric waves wash over the tissue, creating virtual connections: groups affected by the wave become momentarily coordinated. I don’t think there is an analogous process going on in an artificial neural network, where fluidity is only a metaphor and the structure of the setup is a lot more brittle and fixed.

An incredibly complex system like an organoid cannot be understood better by thinking of it in terms of a less complex system like a circuit board. Each neuron has the benefit of billions of years of evolution; environmental conditions can trigger DNA to produce a variety of proteins for all sorts of uses. Each cell has complex little organelles (descended from once free-living microbes!) to handle the processing of all sorts of different signals from the outside. Each cell has receptors and little ion-gated pores that filter signals.

But I’m not a bio snob. Computers are incredible tools in the hands of people.  But can/should digital computers be tools inside the heads of people or can/should brain tissue be incorporated into digital computers?

Brainoware: How it Works

The setup for the invention described in the Nature Electronics article is remarkably simple. The organoid is placed on a 2D high-density multielectrode array (MEA), which emits electric pulses, to which the organoid neurons respond by producing their own electrical patterns. This device has been dubbed “Brainoware,” and it can recognize voices.

From “Brain Organoid Reservoir Computing for Artificial Intelligence,” by Hongwei Cai et al.

First, voice recordings are made and digitized into a 2D pattern that can be modeled on the 2D MEA. This digitized voice model is the input used to stimulate the brain organoid, which, in turn, outputs a pattern that reflects both the voice model and the internal structure of the organoid’s own dynamics. The neurons stimulate and are stimulated by other neurons in a non-linear fashion; that is, some features may be dampened, others amplified.

The above illustration of the setup is from the actual article, not from a pre-school reader version of the article.

The experiment was declared a success when, after training, the organoid had improved its ability to distinguish the vowel sounds of a male speaker from seven other male and female speakers. Prior to training, the setup could distinguish the speaker about 51% of the time, and after training, it was about 78% accurate.

But Wait!

Before we get too excited about this success of finally merging man and machine, using enslaved brain cells to build a computer that can eavesdrop on our conversations, I note that over twenty years ago a very similar experiment was done with a perturbed bucket of water playing a role similar to that of the brain organoid.

In that experiment, the water was used to distinguish between voice recordings of the words “One” and “Zero,” with an error rate of only 1.5%. Below is a picture of these researchers’ three-dimensional models of the spoken words.

Models of “Zero” are on the left and models of “One” are on the right. From Fernando and Sojakka.

It is my opinion that the Brainoware researchers are not using the full potential of a neuron, if a bucket of water can “process” information better than a brain organoid. It’s a bit like using Shakespeare’s collected works as a doorstop.

In “Pattern Recognition in a Bucket,” Chrisantha Fernando and Sampsa Sojakka note that similar experiments have been done at the Unconventional Computing Laboratory, run by the devilishly charming Andy Adamatzky at the University of the West of England, Bristol, UK. For many years now, Adamatzky has used chemicals (forming reaction-diffusion waves) and slime mold to do computation and act as memory reservoirs.

Here is what the Zero and One models look like when they are outputted by the Bucket of Water. From Fernando and Sojakka.

What is a Computer Reservoir?

I had to look this up.  Reading computer science papers is—for me, a philosopher of science who originally started out in literary theory—reminiscent of reading Jacqueses Lacan and Derrida; there is a lot of unnecessarily opaque terminology covering up rather mundane statements.

I gather that a reservoir can be any kind of physical system that is made of individual units that can interact with each other in non-linear ways, and these units must be capable of being changed by the interaction. Even a bucket of water can function as a reservoir, apparently. Miguel Soriano explains it this way in “Viewpoint: Reservoir Computing Speeds Up,”

Reservoirs are able to store information by connecting the units in recurrent loops, where the previous input affects the next response. The change in reaction due to the past allows the computers to be trained to complete specific tasks.

Hope that helps.

Reservoirs are also referred to as “black boxes” because the researchers don’t know (or don’t have to know) the complex dynamics that go on while transforming the input into the output. I reckon that, because every spoken word is never quite the same twice, a non-linear system must process that sound so that it captures an essence of what it is and can identify the same word again and again in very different contexts.
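To make the jargon a little more concrete, here is a minimal echo-state-style sketch (my own stand-in in Python, not the Brainoware code and not the bucket of water): a fixed, randomly wired recurrent network plays the part of the reservoir, and only a simple linear readout is ever trained.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RES, N_IN = 200, 8                                # reservoir units, input channels
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))          # fixed, random input wiring
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))        # fixed, random recurrent loops
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()   # scale so past inputs fade rather than explode

def run_reservoir(sequence):
    """Drive the reservoir with a sequence of input vectors and return its final state.
    Because of the recurrent loops, each response carries a memory of earlier inputs."""
    x = np.zeros(N_RES)
    for u in sequence:
        x = np.tanh(W_in @ u + W_res @ x)            # the non-linear "black box" update
    return x

def toy_sequence(speaker, length=30):
    """Stand-in for a digitized voice: a speaker-specific oscillation plus noise."""
    t = np.arange(length)
    base = np.sin(0.2 * (speaker + 1) * t)
    return [b * np.ones(N_IN) + 0.05 * rng.normal(size=N_IN) for b in base]

# Collect reservoir states for two toy "speakers", 20 utterances each.
states = np.array([run_reservoir(toy_sequence(s)) for s in (0, 1) for _ in range(20)])
labels = np.array([[1, 0]] * 20 + [[0, 1]] * 20)     # one-hot speaker targets

# Training touches only this linear readout (ridge regression on the final states).
ridge = 1e-2
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N_RES), states.T @ labels)

predictions = (states @ W_out).argmax(axis=1)
print("training accuracy:", (predictions == labels.argmax(axis=1)).mean())
```

The point of the design is that the reservoir itself is never trained: whatever non-linear, history-dependent transformation it performs, whether in silicon, water, or neurons, is simply exploited by the cheap readout bolted on at the end.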

Computer Redesign?

Science Fiction is often ahead of actual research. In the movie Ex Machina, the femme fatale robot has an artificial brain that is made out of gel, not silicon chips and electronic switches. She might have come out of Adamatzky’s unconventional computing lab.

One of my colleagues, J. Augustus Bacigalupi, proposed a computer redesign called Synthetic Cognition back in 2012, based on an understanding that biological information processing looks a lot more like waves interfering in a shared medium than like signals running along the fixed wires of a circuit board.

Bacigalupi envisioned a terrain emerging in the medium between neurons and imagined that the intersections of diffusing signals, the interference, could itself be harnessed as a useful signal. He suggested that such an approach would make computers much more efficient insofar as they would naturally integrate multiple signals for free.
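As a toy illustration of that intuition (mine, not Bacigalupi’s actual design), two ripples spreading through a shared medium already combine into an interference pattern, so any downstream “reader” sampling the medium gets an integrated signal without extra wiring:

```python
import numpy as np

def ripple(grid_x, grid_y, source, wavelength=2.0):
    """Concentric rings spreading out from a point source, like a raindrop in a pool."""
    r = np.hypot(grid_x - source[0], grid_y - source[1])
    return np.cos(2 * np.pi * r / wavelength) / (1.0 + r)    # decaying ring pattern

x, y = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))
field = ripple(x, y, source=(-3, 0)) + ripple(x, y, source=(3, 0))   # superposition of two signals

# Where the rings clash, the summed field carries a joint signature of both sources:
# peaks where crests meet, cancellation where a crest meets a trough.
probe = field[100, 100]            # sampling one point of the interference pattern
print(f"field value at the central probe point: {probe:.3f}")
```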

Since that early hardly-watched lecture on Synthetic Cognition (while TED talks by Nicholas Negroponte of MIT Media Lab—who thinks we will soon be able to ingest digitized Shakespeare as a pill—get a lot more views), Bacigalupi has gone on to specialize in Biosemiotics, writing papers with me and our mutual colleague, Don Favareau, like their latest one in the Journal of Physiology.

A dozen years ago Bacigalupi saw cyborgs in our future if we used his proposed new technology that would be able to harness what’s special about brain organoids and slime mold.

But the integration of man and machine faces banal challenges, like rotting organic matter and inflammation of cells in contact with the various chemicals of electronic devices.

There is a reason why most of Elon Musk’s Neuralinked primates didn’t make it. A similar issue arises with the unintended (we hope!) side effects of synthetic pharmacological interventions, which are the bane of that industry. You see, biological cells tend to make interpretations of signs, not strict decryptions of code. Such flexibility allows adaptive creativity to happen, as well as terrible, unpredictable outcomes, for example, various autoimmune diseases.

Even relatively simple transhuman tech, like pacemakers and hip replacements, can, in some people, provoke allergic reactions to metals.

A body rejecting its pacemaker as foreign and toxic

And I don’t see the point of cannibalizing biology so that computer scientists can make robots pass the Turing Test better. I do see, for example, NASA’s Artemis team using redesigned technology to create better robots, whose proprioception avails itself of a fluid medium capable of generating interference patterns that help orient them as they explore the lunar surface. Imitating the way biological organisms process information, in order to make better, more reliable, and more efficient tools, seems like common sense.

But I don’t see the point of making tools seem human—or of mixing human and electronic parts.

Computer Slaves

As Ian McEwan makes clear in his 2019 novel, Machines Like Me, the point of making a humanoid robot is to use it as a sex toy and a dishwasher. The drive to dehumanize people into cyborgs or to humanize robots probably grows out of the fact that it is no longer considered okay to enslave ordinary humans (or spouses).

I suspect that those who want a humanoid computer want a perfect mate, who knows everything about the master, can anticipate his every thought and move, and responds accordingly.  Such perfection in a mate does not allow it to express its own opinions or come up with its own goals and purposes.

It is worth going beyond the hype of headlines to explore these issues further.  We can learn a lot about ourselves in doing so. I lead a monthly webinar called We Are not Machines through IPAK-EDU where my students and I explore these kinds of issues.

Despite some concerted efforts to terrorize us, I do not believe we are about to be replaced in the workforce (only the shit jobs will go) and I don’t believe computers will be capable any minute now of taking over and turning us into workerborgs or batteries.

You are amazing just as you are, with your wonky neurons and your viscous brain.  And if we perfect our external tools and use them wisely, we can be even better.

V.N. Alexander (vnalexander.com), Ph.D., is a philosopher of science and novelist who has just completed a new satirical novel, C0VlD-1984, The Musical.
