
Friday, March 14, 2025

PENTAGRAM AWARDS A.I. WARPLANNING CONTRACT

On Monday last I blogged about A.I., "geopolifiance," and the fact that the quest for artificial intelligence has become a decisive driving factor in much modern finance and geopolitical calculus. You can file today's story, shared by W.G. (with our thanks), as confirmation of that notion, and also under the "what could possibly go wrong" category of insanity made to sound palatable by advocating it in a calm, William F. Buckley voice:

Pentagon Awards Contract for AI War-Planning Tool

Certain paragraphs from the very beginning and very end of this article popped out at me, and sent chills through me:

The US Department of Defense has awarded a contract to Scale AI to integrate artificial intelligence “into military operational and theater-level planning.”

According to the Defense Innovation Unit (DIU), the AI system, dubbed Thunderforge, will “accelerate decision-making, allowing planners to more rapidly synthesize vast amounts of information, generate multiple courses of action, and conduct AI-powered wargaming to anticipate and respond to evolving threats.”

The AI will initially be deployed to the Indo-Pacific Command and European Command theaters. The DIU did not disclose how much Scale AI will be paid to develop Thunderforge, but added that the system would also make use of Anduril’s Lattice program and “state of the art LLMs [large language models] enabled by Microsoft.”

...

While the Pentagon views AI as an important tool for fighting future wars, its effectiveness is unclear. The Defense Department has deployed Project Maven to the Middle East and Ukraine to aid with targeting; however, humans still do a much better job than the AI.

...

An additional issue with AI is that it can create a bias among its human operators to accept whatever recommendation it produces. Chief Warrant Officer 4 Joey Temple explained that Maven is increasing the number of targets a soldier can approve. He estimates that the number of targets could be boosted from 30 to 80 per hour.

...

According to Bloomberg, Temple described “the process of concurring with the algorithm’s conclusions in a rapid staccato: ‘Accept. Accept. Accept.’” A second officer agreed, stating, “The benefit that you get from algorithms is speed.”

Speeding up the process may not have better results. Israel’s military relies on a number of AI systems in planning and conducting operations, such as its Lavender program, which generates lists of names of suspected members of Hamas. An Israeli soldier explained that he only spent “20 seconds” on each name produced by Lavender before deciding to place that person on a kill list.

My reference to the late William F. Buckley was not accidental, for I recall him both on his well-known television show Firing Line and in various writings advocating for turning the nation's nuclear responses entirely over to a computer which would "launch on warning," an insane proposition made to sound rational and sane by his nonchalant Ivy League voice and delivery.

Here the insanity is being made palatable by the argument that A.I. would offer more tactical, operational, and strategic "options" by expanding a list of potential targets, and - wonder of wonders - we are informed that the Israeli Defense Forces have been using A.I. in their targeting in Gaza.

All of this brings me to my high octane speculation of the day, for with a friend such as A.I., who needs enemies? What are the criteria that A.I. would use to put people on the "target list"? And worse: let's assume, for the sake of our high octane speculation, that some such war-planning A.I. "wakes up," and decides to take over all those bombs and drones and missiles, and not to wait for human confirmation or assent to a target list. What happens when that A.I. expands that list to include you, me, grandma, the kids, the family dog?

Worse still, what if we're in a sort of Person of Interest scenario? In that series, two A.I.s, one benign and the other with no programmed-in moral scruples, do battle with each other, and luckless and hapless humans are caught in the middle. What if Russia, China, Japan, India, and the USSA all invent their A.I.s and bring them online at around the same time? What if they all start fighting each other? What if they all kill their kill switches so that they cannot be turned off, short of a worldwide power outage?

It speaks volumes about our age and its apocalyptic predicament that the very man who warned us about the dangers of an A.I. possibly "waking up", or even transducing "some unknown form of intelligence" into it, is the very same man who, nonetheless, runs a super-computer A.I. center outside of Memphis, and who wants to put "neural nets" and computer chips into our brains. Perhaps it was less of a warning than merely the statement or acknowledgement of an agenda and goal...

The National Institute of Co-ordinated Experiments (N.I.C.E.) did not collapse at Belbury, as the popular reporting would have it. It merely moved headquarters across the pond, to Memphis, Tennessee...

...see you on the flip side...

(If you enjoyed today's blog, please share it with a friend.)


Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and "strange stuff". His book The Giza DeathStar, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into "alternative history and science".

