In 2023, we all learned about artificial intelligence (AI) technology, including ChatGPT, a chatbot based on a large language model (LLM) that lets us ask questions and, sometimes, get crazy answers.
Despite concerns about hallucinating computers and the risk of autonomous killer drones operated by evil dictators, the many uses of AI are moving forward.
A recent book by Mustafa Suleyman, The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma, beautifully highlights the central problem: how to maintain control over powerful technologies.
This is what Suleyman calls the containment problem, "the task of maintaining control over powerful technologies," which he sees as the essential challenge of our age.
He describes a narrow path between uncontrolled computer algorithms, which can cause harm, and potentially overbearing surveillance meant to guide the best uses of AI. It turns out that fundamental beliefs about what is right or wrong will make the difference.
As I sat across from my family Bible, I understood the lasting impact of the messages I learned as a child. While reading the handwritten notes at the front (which date back to the 1800s), I learned that my father, William, had a younger brother of the same name who tragically died when he was struck by a runaway horse and cart.
My father, William Weir, was the second William. He grew up to play the organ at church and drew me in to help with his instrument and to listen each weekend to the music and the sermons.
An early awareness of the role of faith and ethical guardrails gave us at least some preparation for the dilemmas we now face as a species as we deal with the complex impact of AI in these unprecedented times.
The establishment of ground rules for everything, from robotic personal assistants to quantum computers, will determine how the dramatic changes ahead affect all of us. The coming wave of technology requires careful oversight and the introduction of checks and balances at the highest levels.
The Positive Impact of AI
There is no doubt that the multi-faceted use of AI will have an enormous impact on the medical industry.
We are already surrounded by applications which may be as simple as scheduling appointments or as complex as running imaging machines and helping with diagnoses and decision-making. I have mentioned a few uses related to myeloma in prior blogs.
A simple algorithm can indicate whether a patient with a low monoclonal protein level in the blood needs a bone marrow test (based on the iStopMM study published in Blood Cancer Journal on December 11, 2023: Immunophenotypic assessment of clonal plasma cells and B-cells in bone marrow and blood in the diagnostic classification of early stage monoclonal gammopathies: an iStopMM study).
This is appreciated by the patient and helps the doctor and clinic fine-tune the use of bone marrow testing. Another algorithm can tell whether a bone marrow sample is adequate when planning for minimal residual disease (MRD) testing.
Some samples may be diluted with blood, cannot be relied upon, and must be rejected, as revealed by the results of the iStopMM study: Determining hemodilution in diagnostic bone marrow aspirated samples in plasma cell disorders by next-generation flow cytometry: Proposal for a bone marrow quality index, which was recently published in Blood Cancer Journal.
In this case, a new sample will be required. Again, very helpful for everyone to know.
Another simple algorithm identifies patients who have what is called an MGUS-like phenotype (Journal of Clinical Oncology, March 17, 2023: Definition and Clinical Significance of the Monoclonal Gammopathy of Undetermined Significance–Like Phenotype in Patients With Monoclonal Gammopathies).
This means that a patient may have a low level of myeloma protein remaining in the blood after treatment. Is this okay, or is further treatment required? If immune testing shows the presence of the MGUS-like phenotype, then it is probable that the low level of protein will remain stable without further treatment … just like MGUS! Again, very good to know.
These examples illustrate a framework within which AI can be used, with feedback to the doctor and patient and without much risk of harm.
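To make the idea concrete, a "simple algorithm" of this kind is really just a transparent decision rule that a doctor can inspect and override. The sketch below is purely illustrative: the function name, inputs, and the cutoff values are hypothetical stand-ins, not the actual criteria from the iStopMM publications.

```python
def bone_marrow_recommendation(m_protein_g_dl: float, flc_ratio_abnormal: bool) -> str:
    """Illustrative triage rule (hypothetical thresholds, NOT clinical guidance).

    A low monoclonal (M) protein level together with a normal free light
    chain (FLC) ratio suggests the bone marrow test can be deferred;
    anything else is flagged for the doctor to review.
    """
    if m_protein_g_dl < 1.5 and not flc_ratio_abnormal:
        return "defer bone marrow test; continue monitoring"
    return "refer for bone marrow test; doctor to review"


# Example: a patient with a very low M-protein and a normal FLC ratio
print(bone_marrow_recommendation(0.5, flc_ratio_abnormal=False))
```

Because the rule is explicit, its output can be shown to the doctor and patient as a recommendation with reasons attached, which is exactly the low-risk, feedback-driven framework described above.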
It is likely that this list of helpful uses of AI will grow—and with simple guideline oversight, will not be controversial.
However, when suggested interventions or treatments are involved, a much more thorough review will be essential.
The Role of Empathy
It is important to note that computers can be taught to respect patients’ feelings and wishes. A computer can be programmed to provide information on the most effective treatment (of a particular type)—one that has the fewest side effects and is logistically and financially feasible.
There can be an assessment of anxiety or fear related to a particular approach, with the type of treatment accepted or rejected on that basis. This brings into focus the absolute need for shared decision-making to fine tune and integrate new strategies in the most appropriate fashion.
Looping in the doctor ensures that obvious computer mistakes can be detected and rejected. Looping in the patient respects the needs and wishes of the patient, family, and caregivers.
Finding Accurate and Correct Information
A fundamental use of AI is searching for information, and much attention has been devoted to false information. For example, an AI can simply make up information if it is unable to find an answer (a so-called "hallucination"). Despite improvements, this remains a concern, and one needs to be fully aware of this glitch.
Another issue is the use of copyrighted material. Right now, The New York Times is suing OpenAI and Microsoft "for copyright infringement over the unauthorized use of published work to train AI technologies," as The New York Times itself reported.
Based on the lawsuit filed in the Federal District Court in Manhattan, it is likely that some settlement will be reached in which OpenAI and Microsoft (and possibly all other chatbots) will have to indicate the source of the article and potentially pay some type of user fee—either per item or more broadly.
The results of these types of lawsuits (and there are already several of them) will determine a lot about the future of search engines.
For my part, I have been intrigued to find my own words coming back to me when I search for information about myeloma.
The Bottom Line
AI is here to stay. We will learn to appreciate the benefits, but we must also remain alert to uses thrust upon us and the ongoing concern about wrong or fake information.
It’s one more thing to worry about, but I do believe there will be many day-to-day benefits, allowing myeloma patients to access earlier and more precise diagnoses as well as carefully researched optimal treatment choices.
Meanwhile, let’s continue to be optimistic.