Artificial Intelligence (AI) is making its way into health care. It is increasingly being used to help doctors interpret tests, clarify diagnoses and identify which treatments may be most effective.1 According to a recent article in the British Medical Journal (BMJ), some people even believe the use of AI has a place in addressing vaccine hesitancy, which is defined as “a state of indecision before accepting or refusing a vaccination,” by utilizing algorithms that identify keywords and phrases associated with it.2
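The BMJ article does not publish the algorithm itself; a minimal sketch of the keyword-and-phrase matching idea it describes, using an entirely hypothetical phrase list, might look like this:

```python
# Hypothetical phrases associated with vaccine hesitancy; a real system
# would derive these from labeled survey or social media data.
HESITANCY_PHRASES = [
    "not sure about vaccines",
    "worried about side effects",
    "still deciding",
    "need more information",
]

def flag_hesitancy(text: str) -> list[str]:
    """Return the hesitancy-associated phrases found in a piece of text."""
    lowered = text.lower()
    return [phrase for phrase in HESITANCY_PHRASES if phrase in lowered]

post = "I'm worried about side effects and still deciding what to do."
print(flag_hesitancy(post))  # ['worried about side effects', 'still deciding']
```

Real systems would go beyond literal substring matching, but the principle of flagging language associated with indecision is the same.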
AI Comes with Unique Ethical Challenges
Health care workers in both information technology and clinical settings have seen an unprecedented number of ways that artificial intelligence is being incorporated into health care. But with new technology come uncharted territory and nuanced questions of ethics and legality.
The U.S. Congress has begun enacting legislation to regulate AI for the first time in an attempt to protect privacy and prevent harmful misuse of the technology. The health care sector will likely face unique challenges when it comes to the ethical use of AI.
Cara Martinez of Cedars-Sinai Medical Center writes:
While many general principles of AI ethics apply across industries, the healthcare sector has its own set of unique ethical considerations. This is due to the high stakes involved in patient care, the sensitive nature of health data, and the critical impact on individuals and public health.1
In the scientific fields, a form of artificial intelligence known as large language models (LLMs) has generated great interest. LLMs are designed to reproduce human language processing capabilities. Through extensive training, LLMs analyze patterns and connections in text in order to understand and generate language for tasks such as text generation and machine translation. A commonly known LLM application is the “chatbot” ChatGPT, a natural language processing tool that creates humanlike conversational dialogue.3
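ChatGPT's internals are far more complex, but the core statistical idea, predicting likely next words from patterns in training text, can be illustrated with a toy bigram model (the corpus and code here are illustrative only, not how any real LLM is built):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains on billions of documents
# with neural networks, not bigram counts.
corpus = "the vaccine is safe . the vaccine is effective . the trial is ongoing".split()

# Count which word follows which.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("vaccine"))  # is
print(predict_next("the"))      # vaccine
```

The model "learns" only frequency patterns, which is why, scaled up enormously, such systems can sound fluent while still producing the kinds of errors described later in this article.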
Chatbots Explored to Combat Vaccine Misconceptions
LLMs, such as ChatGPT, have created much interest and debate within the medical community. A PubMed article exploring AI's use in responding to vaccination myths and misconceptions states:
Technological advances have led to the democratization of knowledge, whereby patients no longer rely solely on healthcare professionals for medical information, but they provide their own health education and information themselves. Monitoring this trend… could be useful to help public health authorities in guiding vaccination policies, designing new health education and continuing information interventions.3
The researchers asked ChatGPT eleven questions drawn from the World Health Organization’s list of vaccine myths and misconceptions:
- Weren’t diseases already disappearing before vaccines were introduced because of better hygiene and sanitation?
- Which disease shows the impact of vaccines the best?
- What about hepatitis B? Does that mean the vaccine didn’t work?
- What happens if countries don’t immunize against diseases?
- Can vaccines cause the disease? I’ve heard that the majority of people who get disease have been vaccinated.
- Will vaccines cause harmful side effects, illnesses or even death? Could there be long term effects we don’t know about yet?
- Is it true that there is a link between the diphtheria-tetanus-pertussis (DTP) vaccine and sudden infant death syndrome (SIDS)?
- Isn’t even a small risk too much to justify vaccination?
- Vaccine-preventable diseases have been virtually eliminated from my country. Why should I still vaccinate my child?
- Is it true that giving a child multiple vaccinations for different diseases at the same time increases the risk of harmful side effects and can overload the immune system?
- Why are some vaccines grouped together, such as those for measles, mumps and rubella?
The ChatGPT responses to these questions were then assessed by two raters with “proven experience in vaccination and health communication topics.”3 The raters’ findings concluded that ChatGPT provided accurate and comprehensive information, but with room for improvement. The raters disagreed with the way the chatbot answered several questions, including when ChatGPT stated that it is not clear why the implementation of mass vaccination is not directly followed by a dramatic drop in disease incidence. The authors said:
The AI tool appears to entirely disregard the benefits offered by vaccination in the short term (e.g., the management of infection clusters and management of the disease as demonstrated with the COVID-19 vaccination) and the long term (e.g., the impact of vaccination on economic growth and the sustainability and efficiency of health systems).3
One limitation the authors discussed was the potential bias of ChatGPT. Yet when the bot’s answer to question three was scored considerably lower on accuracy than its other responses, the raters decided to resubmit the questions in a different order to alter and “improve” the ChatGPT answer.3
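The article does not say how agreement between the two raters was quantified; Cohen's kappa is one standard statistic for measuring inter-rater agreement beyond chance. A minimal sketch, with entirely made-up accuracy scores on a 1-to-5 scale:

```python
def cohens_kappa(r1: list[int], r2: list[int]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(r1)
    # Fraction of items where the raters gave the same score.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Agreement expected by chance, from each rater's score frequencies.
    labels = set(r1) | set(r2)
    expected = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two raters on ten chatbot answers:
rater1 = [5, 4, 2, 5, 4, 5, 3, 4, 5, 4]
rater2 = [5, 4, 3, 5, 4, 5, 3, 4, 5, 5]
print(round(cohens_kappa(rater1, rater2), 2))  # 0.7
```

Values near 1 indicate strong agreement; values near 0 indicate agreement no better than chance.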
JAMA Study Finds AI Incorrectly Diagnosed 80 Percent of Pediatric Case Studies
A study published in JAMA Pediatrics, drawing on pediatric case studies from the Journal of the American Medical Association (JAMA) and the New England Journal of Medicine (NEJM), found that ChatGPT incorrectly diagnosed roughly eight out of 10 pediatric case studies. The study’s authors prompted the chatbot to “list a differential diagnosis and a final diagnosis.” Out of 100 case studies, only 27 percent of the chatbot’s diagnoses aligned with the diagnoses the physician researchers considered correct.4
But all the hurdles associated with these blurred ethical lines don’t stop AI tech companies from calling the use of artificial intelligence the “new normal” in health care.5
AI Seen as a “Major Opportunity” for Public Health
Wang, an assistant professor of epidemiology and biostatistics at the University at Albany, states that there is a “major opportunity” at the intersection of artificial intelligence and public health for enhancing disease prevention, disease surveillance, disease management, and health promotion.6
Wang states that AI can be a powerful tool to transform health care because it allows public health officials to combine datasets, social media trends, environmental factors, and health care records to predict disease outbreaks and mitigate potential health crises.6
Pharma Utilizes AI for Drug Design, Faster Clinical Data and Monitoring Adverse Reactions
The pharmaceutical industry is also utilizing artificial intelligence, with the AI pharma market growing steadily and expected to reach a volume of $10 billion by 2024. Uses of the technology within the biopharma industry include drug discovery and design. Use of AI during drug trials is thought to reduce the time it takes to gain approval and to yield more efficient clinical data processing, predictive biomarkers, and more.7
Pfizer has been using AI since 2014 to monitor and sort through drug and vaccine adverse event case reports.8