DOD Developing AI Weapons? Beware the Frankenstein Chatbots
Big Tech is rushing ahead of any legal framework for artificial intelligence, or AI, in the quest for big profits, while pushing for self-regulation instead of the constraints imposed by the rule of law.
By Ralph Nader
Rick Claypool is a level-headed policy analyst and number-cruncher for Public Citizen, known for documenting the decline in corporate crime enforcement with each succeeding presidency (less under Biden than under Trump).
His latest report (with Cheyenne Hunt) clearly shows him in an unusually agitated state. Its title is “‘Sorry in Advance!’ Rapid Rush to Deploy Generative AI [artificial intelligence] Risks a Wide Array of Automated Harms.”
Claypool is not engaging in hyperbole or horrible hypotheticals concerning Chatbots controlling humanity. He is extrapolating from what is already starting to happen in almost every sector of our society.
I challenge you to read his report without experiencing cognitive dissonance and throwing up your hands thinking the genie is already out of a million bottles.
Claypool takes you through “real-world harms [that] the rush to release and monetize these tools can cause — and, in many cases, is already causing.”
Claypool’s analysis takes you through five broad areas of concern, excluding the horrific autonomous weapons the U.S. Department of Defense (DOD), aka the Department of Offense, is deeply involved in developing.
The various section titles of his report foreshadow the coming abuses: “Damaging Democracy,” “Consumer Concerns” (rip-offs and vast privacy surveillances), “Worsening Inequality,” “Undermining Worker Rights” (and jobs) and “Environmental Concerns” (damaging the environment via their carbon footprints).
Before he gets specific, Claypool previews his conclusion:
“Until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause.”
Just how, he doesn’t say. With so many generators of these Chatbots proliferating around the world, this flood of Frankenstein Chatbots may present the same problem that Roscoe Pound, dean of Harvard Law School, described regarding the prohibition of alcoholic beverages in the 1920s: it lies beyond “the limits of effective legal action.”
Claypool quotes Sam Altman, CEO of OpenAI, which released the shocking ChatGPT AI product in November 2022. Altman said afterward: “I think we are potentially not that far away from potentially scary ones.”
Altman has been busy up on Capitol Hill mesmerizing legislators by saying “regulation is needed,” by which he means the industry itself writing the rules and standards for Congress.
Using its existing authority, the Federal Trade Commission, in the author’s words:
“has already warned that generative AI tools are powerful enough to create synthetic content — plausible sounding news stories, authoritative-looking academic studies, hoax images, and deepfake videos — and that this synthetic content is becoming difficult to distinguish from authentic content.”
He adds that “these tools are easy for just about anyone to use.”
There is no end to the predicted disasters, both from people inside the industry and its outside critics. Destruction of livelihoods; harmful health impacts from promotion of quack remedies; financial fraud; political and electoral fakeries; stripping of the information commons; subversion of the open internet; faking your facial image, voice, words and behavior; tricking you and others with lies every day.
AI’s potential for deception will make Fox News’ deceptions look comparatively restrained.
With Congress and the White House issuing unenforceable exhortations to the industry to be nice, safe and responsible, critics are looking to the European Union’s first-stage passage of an AI Act to protect its people from the more overt damages to their common and individual rights and interests.
The Act’s focus is on which uses of AI need to be curbed, including the adverse impact on elections. It mandates the labeling of AI-generated content.
On May 16, Public Citizen petitioned the Federal Election Commission to issue a rule preventing the use of AI to deceive voters.
All legislative bodies will have to confront the barriers of secrecy — claims by governments on weapons and surveillance development and the already asserted “trade secrets” by corporations.
In the U.S., there will also be First Amendment defenses for free speech by these artificial entities called corporations. Their corporate lawyers will have a lucrative field day concocting delays and obstructions.
Our nation and the world are barely organized enough to control the use of nuclear weapons through treaties, are poorly prepared for devastating pandemics, and have done virtually nothing to foresee and forestall the mega-threats of generative AI “to society and humanity.”
Those were the words of an open warning letter calling for a six-month pause, signed by top CEOs (such as Elon Musk), technologists and academics.
With few exceptions, a lazy Congress, readying for a long July 4 holiday break followed by its monthlong August recess, is oblivious to its special powers and duties to the American people.
Let’s see some congressional urgency to put some specificity and enforcement teeth behind and beyond Biden’s nonbinding “Blueprint for an AI Bill of Rights” published by the White House Office of Science and Technology Policy in October 2022.
Rep. Ted Lieu (D-Calif.), who sits on the House Committee on Science, Space, and Technology, is pushing for the creation of a new federal agency to regulate AI technologies.
For now, I have two recommendations. Demand your senators and representatives join you for local town meetings during Congress’s August recess where you and your lawmakers can listen to each other and address the pressing issues. Tell them that this runaway robotic juggernaut is stripping humans of their own mental identities, autonomy and self-reliant judgments.
Everyone is at risk. Even Microsoft and Google have little idea of the whirlwind they are unleashing, driven by shortsighted profits, not wisdom, civic principles and accountabilities to public institutions and the people themselves.
Have your local experts formulate the focus of the town meeting agendas, backed by your sense of urgency.
Then demand that your members of Congress end their three-day-a-week work routine and conduct rigorous hearings in Washington and around the country, with a deadline for passing legislation. Tell them they, too, are at risk from the fakery, slander and imitations of the Chatbots.
Lastly, upgrade and make more precise your skepticism toward the Chatbots already entering and affecting your lives and localities. Be on guard and develop an ever-larger circle of trusting relatives, friends, neighbors and coworkers.
The corporate Chatbots are coming on fast without any legal or ethical frameworks to restrain and discipline them from subverting your freedoms and a true sense of reality.
Originally published by Common Dreams.
Ralph Nader is a consumer advocate and the author of “The Seventeen Solutions: Bold Ideas for Our American Future” (2012). His new book is “Wrecking America: How Trump’s Lies and Lawbreaking Betray All” (2020, co-authored with Mark Green).
The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of Children's Health Defense.