
An American Affidavit

Monday, December 1, 2025

INSURANCE? NO WAY! ARTIFICIAL INTELLIGENCE IS TOO RISKY!


If you've been following the artificial intelligence-data center bubble and the efforts of the Trump misadministration to cater to the technoblobbocrats, you will definitely be interested in this story shared by E.E., because it would seem that normal people are not the only ones showing some concerns over the nosedive into amorality and decadence that "artificial 'intelligences'" seem to be taking lately. Insurance companies are now backing away from insuring them, claiming they're just too risky:

First, we note the "what": artificial intelligence is simply "too risky" to insure. But the really intriguing part of the story is not the what, but the why:

...One underwriter describes the AI models’ outputs to the FT as “too much of a black box.”

AIG, also listed in the FT story, has since sent TechCrunch the following statement: “AIG was not specifically seeking to use these [reported upon] exclusions and has no plans to implement them at this time.”

The industry has good reason to be spooked, the story reminds us. Google’s AI Overview falsely accused a solar company of legal troubles, triggering a $110 million lawsuit back in March. Air Canada last year got stuck honoring a discount its chatbot invented. And fraudsters last year used a digitally cloned version of a senior executive to steal $25 million from the London-based design engineering firm Arup during a video call that seemed entirely real.

What really terrifies insurers isn’t one massive payout; it’s the systemic risk of thousands of simultaneous claims when a widely used AI model steps in it. As one Aon executive put it, insurers can handle a $400 million loss to one company. What they can’t handle is an agentic AI mishap that triggers 10,000 losses at once. (Emphases added)

So there you have it: crazy outputs of artificial intelligence agents committing corporations to goofy transactions that the corporations have to honour, outputs which come from a "black box" that no one fully understands - much less controls. The implications are clear: what happens when those artificial intelligence "agents" acting for corporations commit said corporations to thousands of transactions that could potentially bankrupt the company, even if "safeguards" are installed to subject all such proposed transactions to "human review and approval"? What good is that, when the edgymakayshunal system and the quackademy are churning out fundamentally stupid people? (And don't forget, just a decade ago we were warning about the deleterious effects of the "Common Core" standards and the "adjustable" standardized tests that were to be administered by computer and artificial intelligences.)

So whither A.I., if no one will insure it?

This is where I suspect it gets rather dicey, for not all advocates of artificial intelligence agents are corporations, as we know. Some of them are governments, and there, artificial intelligences are already in widespread use, determining a variety of things, from military targets to the tracking of financial traffic and government databases (remember PROMIS?). What happens when those non-insurable artificial intelligences get their hands on...oh, I don't know... things like the deposit insurance trust corporations that insure bank accounts, or on the files of the Securities and Exchange Commission, or (nightmare of nightmares) the Exchange Stabilization Fund? The lesson here is that the only entities big enough to insure against such losses are governments, and even they might be stretched to the snapping point. Not all technological progress is genuine cultural and human progress, and artificial "intelligence" is looking more and more like just such a boondoggle.

And as I write this, I am watching a commercial on YouTube for a robotic puppy dog that sells for a mere $39.95 and looks and behaves just like a real dog. Maybe so, except for one thing. A real dog gives love and loyalty; a robot merely mimics them. So I'll take the reality, and that includes in matters of intelligence too. The problem with the A.I. boondoggle, as we're quickly learning, is that even its mimicry is flawed, and perhaps fatally so. So Mr. Trump, I say let China "win" the artificial intelligence war, because I have a peculiar sense that China will be even more screwed up within a few years of its introduction than even its Communist overlords are capable of screwing it up.

See you on the flip side...

(If you enjoyed today's blog, please share it with your friends.)

Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and "strange stuff". His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into "alternative history and science".
