Friday, August 2, 2024

…Y-Y-YES, B-B-B-BUT WHAT IF THE AI GOES N-NUTS?

Last Wednesday I blogged about the prospect of Artificial Intelligence Executive Orders awaiting the next occupant of the White Out House, be it Orange Man Bad or Madam Kamalarkey. Each, I suspect, will be presented with some version of executive orders basically amounting to "full steam ahead..." with a slight steerage to the right or left to garner the good will of their respective supporters. I warned in that blog that there was some semi-good news coming in the next blog.

Well, this is that blog, and here is that semi-good news: artificial intelligence, at least in its current iterations, may end up going "nuts":

AI systems could be on the verge of collapsing into nonsense, scientists warn

Observe how this "going nuts" process is already underway, and what might result from its continuance:

Recent years have seen increased excitement about text-generating systems such as OpenAI’s ChatGPT. That excitement has led many to publish blog posts and other content created by those systems, and ever more of the internet has been produced by AI.

Many of the companies producing those systems use text taken from the internet to train them, however. That may lead to a loop in which the same AI systems being used to produce that text are then being trained on it.

...

It takes only a few cycles of both generating and then being trained on that content for those systems to produce nonsense, according to the research.

They found that one system tested with text about medieval architecture only needed nine generations before the output was just a repetitive list of jackrabbits, for instance.

The concept of AI being trained on datasets that were also created by AI and then polluting their output has been referred to as "model collapse".

In other words, AI appears to suffer from a phenomenon familiar to humans (in whose intelligence image AIs are made): how much progress is progress? When should one deliberately stop a cycle? The question is a crucial one, because in a system undergoing "model collapse" no sense of continuity, community, or tradition can emerge. "Model collapse" suggests rather that the more the management decisions of more and more areas of human society are turned over to artificial intelligence algorithms, the more chaotic and stupid society is bound to become.

In the past I have used algorithmic trading as an example of the chaos that can result from ever greater dependency on such systems. Flash crashes inevitably occur. Fortunately, these have in the past been confined to one or two stocks or commodities, but the real lesson of such events has yet to be learned: putting inhuman and unhuman programs in control of human activities only makes those activities more distant from human thought and society; the markets become more reflective of programming algorithms, and less reflective of human assessments and judgements of prices, risk, profit potential, and so on. A flash crash is a "model collapse" in miniature, and it's a warning about more generalized applications of AI. The recent travel difficulties caused by a defective download from Microsoft - while not a model collapse - are yet another warning of what could happen when model collapse spreads from algorithmic trading applications to other activities.
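Incidentally, the mechanics behind "model collapse" are easy to caricature. Here is a toy, purely illustrative Python sketch - not the researchers' code, and a simple Gaussian stands in for the language models they actually tested - showing what happens when each generation is trained only on what the previous generation generated:

```python
import random
import statistics

# Toy illustration of "model collapse". The "model" here is just a Gaussian
# fitted to data. Each generation samples fresh "content" from the previous
# generation's model and refits to it, mimicking an AI trained on text that
# earlier AIs generated. Because every generation sees only a small, finite
# sample, rare values (the "tails") keep getting lost, and over many cycles
# the fitted spread tends to drift toward zero.

def fit(data):
    # "Training": estimate a mean and standard deviation from the data.
    return statistics.fmean(data), statistics.stdev(data)

def generate(mean, stdev, n):
    # "Generation": sample new content from the fitted model.
    return [random.gauss(mean, stdev) for _ in range(n)]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]   # original, human-made data

for g in range(201):
    mean, stdev = fit(data)
    if g % 25 == 0:
        print(f"generation {g:3d}: mean={mean:+.3f}  stdev={stdev:.3f}")
    data = generate(mean, stdev, 20)   # the next "crawl" is small and mostly synthetic
```

On a typical run the printed spread dwindles toward zero as the generations pile up: the average survives, but the diversity does not.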

The article uses the following example:

Researcher Emily Wenger, who did not work on the study, used the example of a system trained on pictures of different dog breeds: if there are more golden retrievers in the original data, then it will pick those out, and as the process goes round those other dogs will eventually be left out entirely – before the system falls apart and just generates nonsense.
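Wenger's example is simple enough to simulate. Here is another toy sketch - the breeds and their starting proportions are invented purely for illustration - in which a breed that begins as merely the most common ends up, generation by generation, as the only breed the model knows:

```python
import random
from collections import Counter

# Toy version of the dog-breed example. Golden retrievers start out as merely
# the most common breed. Each generation, a small "training set" is sampled
# from the current model and the model is refit to that sample. Sampling error
# means a minority breed will sooner or later miss a sample entirely, and once
# its probability hits zero it can never return: the model drifts toward
# producing only the majority class, and the rest are "left out entirely".

random.seed(7)
probs = {"golden retriever": 0.40, "beagle": 0.25, "poodle": 0.20, "dalmatian": 0.15}

for generation in range(1, 21):
    breeds = list(probs)
    sample = random.choices(breeds, weights=probs.values(), k=12)   # small synthetic training set
    counts = Counter(sample)
    probs = {b: counts[b] / len(sample) for b in breeds}            # refit on generated data
    shown = "  ".join(f"{b}: {p:.2f}" for b, p in probs.items())
    print(f"generation {generation:2d}:  {shown}")
```

The point is not the particular numbers but the one-way door: a category that drops out of the generated data can never be recovered from it.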

In other words, the initial data set - the "generative context" for all the analogical derivatives which follow - determines the basic shape or content of all that follows, until that model finally collapses. A case in point is all those stories out there about artificial-intelligence-generated academic papers that managed to pass "peer review", prompting the questions: (1) who were these "peers", or were they too the creations of AI? and more importantly (2) what happens when these gibberish papers actually become policy that is enacted by humans?

If you're a regular reader here, you may have already encountered my "world simulation scenario" version of this story. That high octane speculation runs something like this: what if there is a super-computer with a super-program designed to "game out" all sorts of interlinked events, a "scenario generator and simulator", a kind of algorithmic trading program, but one not confined to market activities, extending rather to all sorts of human behaviors and activities? And what if there is a group of human beings whose faith in "science" and "progress" and computer modeling and design is so deep that they have absolute confidence in what their world simulator program tells them to do, even if it seems increasingly irrational? Such advice is, after all, founded on algorithms and mathematics and the best modeling. Something like this, I strongly suspect, is already happening, and it's a warning to those who think AI and such programs - and robots based upon them - are the wave of the future.

It may indeed be a wave, but waves can also devastate a society and civilization, and leave nothing but chaos and destruction in their wake, and that, I strongly suspect, is what is slated for the current round of fascination with and advocacy of AI...

Caveat emptor.

See you on the flip side...

(If you enjoyed today's blog, please share with your friends.)


Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and "strange stuff". His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into "alternative history and science".

