Recently exposed¹: an AI program called Lavender identified, tracked, and targeted people in Gaza it IDed as associated in some way (even vaguely) with Hamas—targeted them for killing by Israeli armed forces.

This is unprecedented in the history of warfare. But not the brutality of it. Not by a long shot. In 1945, for example, British and US air forces firebombed and destroyed the German city of Dresden. And of course, the US dropped two atomic bombs, on Hiroshima and Nagasaki. But the Israeli military used AI. A program. A non-human program to ID people for killing.

What’s next? AI actually issuing the commands to murder large numbers of people? And if remotely operated drones are the method, humans are out of the picture completely? And humans can claim “plausible deniability”?

As AI moves into every area of life, humans can shrug and say, “It’s the program.” Do what the program says. Because there is no other option. The system runs things, and the system is AI.

People scoff at the idea that AI could run whole areas of government—but this Israeli AI proves it’s possible. AI could operate new medical drug approval. The program would be set up to favor Pharma’s interests above the safety of the public. But of course the PR for such a program would go this way: “Finally, we have a system that is objective and eliminates unconscious human bias.”

Imagine a court case where a government bureaucrat is put on trial for refusing to carry out a direct AI order. The judge rules: “The AI system is duly constituted to control this area of decision. The employee who purposely countermanded the system is therefore guilty of failing to follow protocol…”

The same situation in the private sector would be even tighter—because a corporation would claim its AI is proprietary and unique and beyond the reach of law enforcement. All its workers, as a condition of employment, signed a statement agreeing to follow AI commands.
Therefore, the worker who disobeyed is fined, fired (and blacklisted in the industry).

And suppose that corporation is a large news outlet, and the employee they just fired was a reporter who wrote a story exposing massive corruption in a city prosecutor’s office? An AI interceded and killed the story before it was published, and the reporter went rogue and did a podcast accusing the AI and his bosses of defrauding the public. But alas, the non-human AI that sets the standards of journalism at the outlet is “duly constituted by the executive board.”

How about a Climate Change AI installed at the Dept. of Homeland Security?...