With multiple artificial intelligence (AI) tools available (e.g., OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard), there is much to be learned about how to use these powerful resources to their full advantage.
 
But the backdrop to this is an array of warnings from prominent tech industry figures, including OpenAI CEO and ChatGPT developer Sam Altman, who raised concerns about AI’s “potential power to harm — or even end — human life,” as reported by Terry Castleman of the Los Angeles Times in his article, “Prominent AI leaders warn of ‘risk of extinction’ from technology.”
 
The article brings up a signed letter sent in March by Elon Musk and other notable signatories, including Google DeepMind Chief Executive Demis Hassabis, OpenAI CEO Altman, U.S. Congressman Ted Lieu, Stanford Professor Emeritus of Electrical Engineering Martin Hellman, and University of Toronto Emeritus Professor of Computer Science Geoffrey Hinton, to name a few.
 
The Statement on AI Risk, where “AI experts and public figures express their concern about AI risk,” is succinct: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
 
This dire warning has been challenged by some. To address these concerns, the U.S. Food and Drug Administration (FDA) has released two discussion papers to open the conversation on “Using Artificial Intelligence and Machine Learning in the Development of Drug & Biological Products.”
 
“As with other evolving fields of science and technology, there are challenges associated with AI/ML in drug development, such as ethical and security considerations like improper data sharing or cybersecurity risks. There are also concerns with using algorithms that have a degree of opacity, or algorithms that may have internal operations that are not visible to users or other interested parties. This can lead to amplification of errors or preexisting biases in the data. We aim to prevent and remedy discrimination — including algorithmic discrimination, which occurs when automated systems favor one category of people over other(s) — to advance equity when using AI/ML techniques,” states the FDA press release.
 
“The discussion paper is a collaboration among the FDA’s Center for Drug Evaluation and Research, the Center for Biologics Evaluation and Research, and the Center for Devices and Radiological Health, including its Digital Health Center of Excellence. The paper aims to spur a discussion with interested parties in the medical products development community, such as pharmaceutical companies, ethicists, academia, patients and patient groups, and global counterpart regulatory and other authorities, on using AI/ML in drug and biologic development, and the development of medical devices to use with these treatments,” the FDA added.

Chatbot: Addressing Patient Questions

One of the key concerns is whether AI chatbots are capable of adequately answering patient questions.
 
A cross-sectional study published in JAMA Internal Medicine, “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum,” tackles this concern through an original investigation in which 195 “randomly drawn patient questions from a social media forum” were presented to physicians and to an AI chatbot assistant (ChatGPT) to see how both parties would respond.
 
According to the study’s findings, “The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.” (Chatbot responses were rated good or very good in quality 3.6 times more often than physician responses, and were rated empathetic or very empathetic 9.8 times more often.)
 
While this may be a sobering result from a physician standpoint, it also suggests that chatbot responses have the potential to become very useful in this setting. Obviously, much work still needs to be done, including further study of which types of patient questions can be answered more effectively and efficiently.

Understanding ChatGPT

As an OpenAI app, ChatGPT (GPT stands for Generative Pre-trained Transformer) is, perhaps, the most accessible of all current AI chatbot assistants. 
 
A generative pre-trained transformer is “a type of large language model (LLM) neural network that can perform various natural language processing tasks such as answering questions, summarizing text, and even generating lines of code,” as defined by Al Jazeera in its article, “How does GPT-4 work and how can you start using it in ChatGPT?”
 
A transformer is a specific type of neural network architecture that provides what is called a self-attention mechanism. This mechanism allows the system to understand the relationships between words in a sentence … in other words, how to write!

Thus, by using massive datasets as a resource, ChatGPT can write answers to specific questions.
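
For readers curious about what self-attention actually computes, here is a minimal sketch in Python (using the NumPy library). It is an illustration only, with made-up toy word vectors; a real model such as ChatGPT learns its vectors from training data and stacks many attention layers, each with its own learned projections.

```python
import numpy as np

def self_attention(X):
    # X has one row per word; each row is that word's embedding vector.
    # A real transformer first projects X into learned query, key, and
    # value matrices -- we use X directly to keep the sketch minimal.
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # pairwise word-to-word relevance
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
    return weights @ X                             # each word becomes a blend of the words it attends to

# A made-up three-word "sentence", each word as a 4-dimensional vector.
sentence = np.array([
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 0.0],
])
print(self_attention(sentence))
```

Each output row mixes in information from the other words in proportion to how strongly they relate, which is how the model captures relationships across a sentence.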
 
The free version of ChatGPT (which runs on the GPT-3.5 model) has training data only through September 2021, so no more current information is available beyond that date.
 
ChatGPT-4 is more up to date and is more conversational and complete in its feedback. However, this newer version requires a paid subscription.

Common-Sense Warnings and Golden Prompts

A recent article from The New York Times’ pop-up tech newsletter, “Get the Best From ChatGPT With These Golden Prompts” by Brian X. Chen, provides helpful tips and advice on how to use chatbots to their “full potential” and how to use them “safely and responsibly.”
 
He starts off by giving a few “common-sense warnings”:

  • If privacy is an issue, do not provide personal details such as your name or company. Tech companies use your data to train their systems—meaning other people may be able to see your information.
  • Do not share confidential data such as trade secrets or sensitive information.
  • Because chatbots are powered by LLM technology, they analyze enormous amounts of digital text from the internet, meaning that not everything they gather off the web is factual or accurate. Chatbots may repeat this false or inaccurate information.
  • Additionally, while trying to predict patterns from their vast training data, chatbots are capable of making things up (see the toy sketch just after this list).
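
To make that last warning concrete, here is a toy sketch in Python of the pattern-prediction idea. It uses a deliberately tiny “bigram” model (just counting which word follows which) rather than a real LLM, and the training sentences are made up, but it shows the same failure mode: the generator can stitch learned fragments into a fluent statement that no one ever wrote.

```python
import random

# Tiny made-up "training" text, split into word tokens.
training_text = (
    "myeloma is a cancer of plasma cells . "
    "plasma cells are found in the bone marrow . "
    "myeloma is treated by a specialist ."
).split()

# Learn the simplest possible pattern: which words follow which.
next_words = {}
for current, following in zip(training_text, training_text[1:]):
    next_words.setdefault(current, []).append(following)

# Generate text by repeatedly predicting a plausible next word.
word, output = "myeloma", ["myeloma"]
while word != "." and len(output) < 14:
    word = random.choice(next_words[word])
    output.append(word)
print(" ".join(output))
```

Depending on the random choices, this can print runs like “myeloma is a cancer of plasma cells are found in the bone marrow”: fluent-sounding, assembled entirely from real fragments, and yet a sentence that never appeared in the training text.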

Among the most popular chatbots are ChatGPT, Bing, and Bard.
 
According to Chen, “there is an art to typing in precise words and framing to generate the most helpful answers.” These carefully framed requests are what he refers to as “golden prompts.”
 
One technique I learned, for example, is to use the intro phrase: “Act as if you are a myeloma expert.” This context will guide the chatbot’s response.
 
Then I instructed the chatbot: “Tell me what else you need to know to answer this question.”
 
For a myeloma patient, this might mean telling the chatbot the status of the disease (e.g., newly diagnosed, relapsed, or high-risk, if that is a concern). The best approach is to create a thread of questions that refines the answers further to your unique situation.
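
For the technically inclined, the same thread-of-questions technique can be scripted. Below is a minimal sketch using OpenAI’s Python library (installed with pip install openai); the model name, message format, and call shape reflect that library as of 2023 and may change, and the prompts themselves are only illustrative. Most readers can simply type the same prompts into the ChatGPT website instead.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

# The growing message list is what makes this a "thread": the chatbot
# sees all the earlier context each time it answers a new question.
messages = [
    # Golden prompt: set the role/context up front.
    {"role": "system", "content": "Act as if you are a myeloma expert."},
    # Then ask the chatbot what it still needs to know.
    {"role": "user",
     "content": "Tell me what else you need to know to answer a question "
                "about my treatment options."},
]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)

# Refine the thread with your unique situation (illustrative follow-up).
messages.append({"role": "assistant",
                 "content": reply.choices[0].message.content})
messages.append({"role": "user",
                 "content": "I am newly diagnosed and considered high-risk."})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```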
 

Fact vs Fiction

While chatbots can be a nifty tool, it is important to be aware of their limitations—chatbots consider all data to be usable information and cannot discern between fact and fiction.
 
Currently, there is no truth monitor or moral compass to guide chatbots in formulating the most factual and accurate answers.
 
Be forewarned that chatbots are also capable of making up their own facts—probably a great feature when you’re writing a fictitious movie script, but NOT when you’re looking for factual answers.

The Bottom Line

With the introduction of AI technology, chatbots are definitely here to stay, while requiring further refining and fine-tuning to enhance their capabilities over time. Meanwhile, it’s important to be aware of what these chatbots can and cannot do.
 
Being capable of accessing massive datasets is a huge resource and an advantage. As someone who has attempted to review every article written about myeloma in a given year, I understand the value of AI when it comes to streamlining and improving my work output. I am also cognizant of its potential to guide patients in the decision-making process.
 
I would suggest trying the free version of ChatGPT, just to gain some firsthand experience in using chatbots. Maybe ask simple questions first, such as “What is multiple myeloma?” or another simple question whose answer you already know, so you can discern or assess whether the chatbot is delivering factual or inaccurate information. This is my advice to the myeloma community, so it can start tapping into and exploring this new technology. Any feedback is welcome.
 
With that in mind, I foresee upcoming webinars, in-person classes, and tutorials to help the myeloma community learn more about using chatbots and artificial intelligence to gather information.
 
In the meantime, check chatbot information with your treating doctor and healthcare team. Most likely, this will help speed up discussions focused on this technology.

Stay tuned for more updates as we enter a new era of co-existing with artificial intelligence.

 


 


Professor of Medicine, Hematologist/Oncologist, and Honoree MD at the University of Brussels, Dr. Brian G.M. Durie is the Chairman of the Board and Chief Scientific Officer of the IMF. Dr. Durie is also the Chairman of the International Myeloma Working Group (IMWG)—a consortium of more than 250 myeloma experts from around the world—and leads the IMF’s Black Swan Research Initiative® (BSRI).

 

