AI and the future of medical treatment
By Jon Rappoport
---"Oh Doctor, what's ailing me? Can I get a diagnosis?
What's that? AI [artificial intelligence] is handling it now? You mean I
just go online and see the results of my tests and read the diagnosis
and pick up my drugs outside my front door? Wow. Very nice."
Really? Is it very nice?
As AI creeps and crawls into the realm of medical diagnosis
and treatment, and as it spreads under the banner of "more precise care
for the patient," remember that AI embeds false data more firmly than
any human doctor can. Once it's in there, how do you get rid of it?
"I'm sorry, sir. There is no human to speak with. All our data are produced by algorithms..."
For example, suppose the flu you have isn't really the flu. Suppose it's something else. AI would still diagnose you with the flu, based on your symptom profile, and you could be prescribed a toxic antiviral drug you don't need, and put on a warning list of people whose flu shot isn't up to date.
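To see why, here is a minimal sketch in Python of a purely symptom-driven diagnostic rule. The symptom list, the threshold, and the function names are hypothetical, invented for illustration; no real clinical system is being quoted here.

```python
# Minimal sketch of a purely symptom-driven "diagnosis" rule.
# The rule set, names, and thresholds are hypothetical.

FLU_SYMPTOMS = {"fever", "cough", "body aches", "fatigue"}

def symptom_based_diagnosis(symptoms: set[str]) -> str:
    """Label a patient 'flu' if enough influenza-like symptoms match."""
    overlap = len(symptoms & FLU_SYMPTOMS)
    return "flu" if overlap >= 3 else "other respiratory illness"

# Two patients with identical symptoms: one carries influenza virus, one does not.
patient_a = {"symptoms": {"fever", "cough", "body aches"}, "influenza_positive": True}
patient_b = {"symptoms": {"fever", "cough", "body aches"}, "influenza_positive": False}

for name, p in [("A", patient_a), ("B", patient_b)]:
    label = symptom_based_diagnosis(p["symptoms"])
    print(f"Patient {name}: diagnosis={label}, influenza virus present={p['influenza_positive']}")
# Both patients get the same "flu" label; the rule never consults a lab result.
```

Both patients come out labeled "flu," because the rule never looks at whether the influenza virus is actually there.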
Dr. Peter Doshi, writing in the online BMJ (British Medical Journal), reveals a monstrosity.
As Doshi states, every year hundreds of thousands of
respiratory samples are taken from flu patients in the US and tested in
labs. Here is the kicker: only a small percentage of these samples show
the presence of a flu virus.
This means: most of the people in America who are diagnosed by doctors with the flu have no flu virus in their bodies.
So they don't have the flu.
Therefore, even if you assume the flu vaccine is useful and
safe, it couldn't possibly prevent all those "flu cases" that aren't flu
cases.
The vaccine couldn't possibly work.
Here's the exact quote from Peter Doshi's BMJ article, "Influenza: marketing vaccine by marketing disease" (BMJ 2013;346:f3037):
"...even the ideal influenza vaccine, matched perfectly to
circulating strains of wild influenza and capable of stopping all
influenza viruses, can only deal with a small part of the 'flu' problem
because most 'flu' appears to have nothing to do with influenza. Every
year, hundreds of thousands of respiratory specimens are tested across
the US. Of those tested, on average 16% are found to be influenza
positive."
"...It's no wonder so many people feel that 'flu shots' don't work: for most flus, they can't."
Because most diagnosed cases of the flu aren't the flu.
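To make the arithmetic concrete, here is a short back-of-envelope sketch that uses only the 16% figure quoted above. The case count and the vaccine-effectiveness values are hypothetical inputs, chosen simply to show the ceiling, not real surveillance data.

```python
# Back-of-envelope arithmetic using the 16% influenza-positive figure
# quoted above. The case count and effectiveness values are hypothetical.

diagnosed_flu_cases = 100_000          # hypothetical number of "flu" diagnoses
influenza_positive_rate = 0.16         # from the Doshi quote above

truly_influenza = diagnosed_flu_cases * influenza_positive_rate

for vaccine_effectiveness in (1.0, 0.6, 0.4):   # perfect, optimistic, modest
    preventable = truly_influenza * vaccine_effectiveness
    share_of_all_diagnoses = preventable / diagnosed_flu_cases
    print(f"effectiveness={vaccine_effectiveness:.0%}: "
          f"at most {preventable:,.0f} of {diagnosed_flu_cases:,} diagnosed cases "
          f"({share_of_all_diagnoses:.0%}) could possibly be prevented")
```

Even with a perfect shot, the ceiling is 16,000 out of every 100,000 diagnosed cases. The other 84% are out of reach by definition.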
BUT DO YOU THINK AI IS GOING TO FOLD THESE REVELATIONS INTO
ITS DATA BANK ON FLU? DO YOU THINK THIS MAJOR INSIGHT---WHICH BLASTS THE
WHOLE FLU PROPAGANDA SHIP OUT OF THE WATER---IS GOING TO ALTER THE AI
PROGRAM ON FLU DIAGNOSIS AND TREATMENT?
Of course not.
And there will be many, many other areas where AI is wrong---but engraved in stone.
For instance, the official refusal to classify all vaccines
containing aluminum as highly toxic and dangerous---AI will bolster that
intentional refusal. At that point, who are you going to argue with? A
machine? The cloud?
NextGov is reporting on a version of AI now undergoing
testing: "Scientists test new chemical compounds on animals...But an
artificial intelligence system published in the research journal
Toxicological Sciences shows that it might be possible to automate some
tests using the knowledge about chemical interactions we already have.
The AI was trained to predict how toxic tens of thousands of unknown
chemicals could be, based on previous animal tests, and the algorithm's
results were shown to be as accurate as live animal tests."
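For orientation only, here is a minimal sketch of that general idea: predict the toxicity of an untested chemical from its similarity to chemicals already on record. The chemical names, fingerprints, and labels below are invented, and this is not the published model from the Toxicological Sciences paper.

```python
# Toy "predict toxicity from similarity to already-tested chemicals" sketch.
# The fingerprints, names, and labels are invented for illustration.

# Toy binary "fingerprints" (presence/absence of structural features)
# for chemicals whose toxicity is already on record.
KNOWN_CHEMICALS = {
    "compound_A": ([1, 0, 1, 1, 0], "toxic"),
    "compound_B": ([0, 1, 0, 0, 1], "non-toxic"),
    "compound_C": ([1, 1, 1, 0, 0], "toxic"),
    "compound_D": ([0, 0, 0, 1, 1], "non-toxic"),
}

def similarity(fp1, fp2):
    """Fraction of fingerprint positions that agree (a crude similarity score)."""
    return sum(a == b for a, b in zip(fp1, fp2)) / len(fp1)

def predict_toxicity(fingerprint):
    """Label an untested chemical with the label of its most similar known chemical."""
    best_name = max(
        KNOWN_CHEMICALS,
        key=lambda name: similarity(fingerprint, KNOWN_CHEMICALS[name][0]),
    )
    return KNOWN_CHEMICALS[best_name][1], best_name

label, neighbour = predict_toxicity([1, 0, 1, 0, 0])
print(f"predicted: {label} (most similar known chemical: {neighbour})")
# The prediction can only ever reflect what is already in KNOWN_CHEMICALS;
# anything left out of that database is invisible to the model.
```

The point of the sketch: the system can only echo whatever its reference database already contains. What was never entered cannot be predicted.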
Sound good? How likely is it that such an automated database
will include scores of toxic medical drugs that kill Americans at the
rate of 100,000 a year?
Yes, that's right, 100,000 a year. The citation is: Journal
of the American Medical Association, July 26, 2000, Dr. Barbara
Starfield, "Is US Health Really the Best in the World?"
Once AI is accepted as the Word on toxic chemicals, imagine
the degree of difficulty in trying to add many medical drugs to the
list.
"I'm sorry, sir. I don't know anything about medicines. I
just access the database on toxic chemicals and report what I find. Who
is in charge of the AI here? Is that what you're asking? I have no idea.
Let me transfer you to a senior specialist in public communication.
She's quite busy at the moment. If you leave a message, you may receive a
reply in the next few weeks. But I'm not sure she can help you. As I
say, we take all our information from the database..."
Automation of data creates a new level of abstraction. Yes,
it's hard enough to argue with a human bureaucrat---but that's nothing
compared with trying to question an AI program.
And of course, in the medical arena, who is going to assemble
that AI program and take charge of it? Who is going to decide what goes
in the program and what is omitted?
Who is going to present that program to the public and
characterize the AI as the fairest, most honest and objective system
under the sun?
What will happen when the next 10 generations of
schoolchildren are trained to believe in AI as the best and brightest
source of truth on the planet?
When I was writing my first book, AIDS INC., in 1988, I
started to become aware of artificially constructed templates of medical
information---templates that could become AI productions in the next 10
or 20 years.
I was roaming the stacks in the UCLA bio-med library, digging
up crucial information on various medical tests. These little-known
published studies were showing how unreliable the diagnostic tests could
be. But, as I discovered, this information had no place in medical
school curricula. In all conventional medical circles, it was ignored.
As if it didn't exist.
I found the ignored data in archived volumes of medical journals on the library shelves.
What happens when those volumes are shipped into warehouses for storage, and no one accesses them anymore?
What happens when the bright and shiny AI medical databases rule the landscape?
Part of my work for the past 35 years has been keeping
medical truth alive and in front of readers. There is no expiration date
on truth.
When you feed AI enough data and a set of basic assumptions, it can and will construct a full-blown program that dictates a range of actions to be taken. But suppose, for example, you told a nascent AI chess program that knights move only three squares forward, rooks can only move diagonally, and kings can jump over other pieces. You'd get a brilliant chess system that bears very little resemblance to the actual game of chess.
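The same point in miniature: the rule table in the sketch below is deliberately wrong, and the code built on top of it follows those rules with perfect consistency. Everything here is hypothetical and exists only to illustrate the analogy.

```python
# Miniature version of the chess analogy: a deliberately wrong rule table,
# and a move generator that follows it faithfully. All names are hypothetical.

WRONG_RULES = {
    "knight": "moves exactly three squares forward",
    "rook":   "moves only diagonally",
    "king":   "may jump over other pieces",
}

def legal_moves(piece: str) -> str:
    """Report what the system believes this piece may do, per its rule table."""
    return WRONG_RULES.get(piece, "unknown piece")

# The system answers confidently and consistently, and is wrong every time,
# because the error lives in the assumptions, not in the reasoning on top of them.
for piece in ("knight", "rook", "king"):
    print(f"{piece}: {legal_moves(piece)}")
```

The output is internally consistent and confidently delivered; the flaw sits upstream, in the assumptions that were fed in.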
This is exactly what happens when many underlying medical
assumptions---which are false or grossly incomplete---are entered into
an AI diagnostic and treatment system.
And much usable and beneficial truth will fade into the background and be lost.