St. Thomas panel discusses AI’s role in healthcare

Associate Professor Chih Lai talks about generative AI systems used in healthcare. The AI and Health Care panel took place in the Leather Room on Nov. 7, 2024, to discuss how AI has helped healthcare become more efficient. (Dom Tritchler/The Crest)

The University of St. Thomas hosted a panel discussion that brought together experts from the Software Engineering and Data Science, Nursing and Philosophy departments to examine the transformative role artificial intelligence is set to play in healthcare.

The event, held on Nov. 7 in the O’Shaughnessy-Frey Library, drew an audience of students, faculty and community members eager to explore the implications of the rapidly evolving technology.

Chih Lai, an associate professor of software engineering and data science, opened the discussion by emphasizing AI’s potential to reshape healthcare. He categorized AI into three types: traditional AI, vision AI and text AI.

Lai explained that AI’s power lies in its ability to process vast amounts of information, often from structured datasets. “Traditional AI works with structured data, like databases in healthcare operations or gene expression,” Lai said.

However, neither vision AI nor text AI relies on the same structured format.

“Vision AI handles unstructured data, such as medical images or videos, including innovative uses like detecting deepfakes or healthcare purposes, and text AI processes unstructured textual data, like doctors’ notes, to improve record management and analysis,” Lai said.

Lai mentioned the challenges of acquiring large amounts of data for AI training, and said that deepfaking can help overcome these challenges.

“If we don’t have a large amount of data, AI does not work,” Lai said. “That is where deepfaking can be applied and many papers are doing this kind of deepfaking so well.”

Clinical trial professor Laura Beasley shared her personal journey with AI. 

“About a year and a half ago, I never thought I’d be up here talking about the use of AI in healthcare. In fact, I wasn’t even interested in it,” Beasley said. “But it’s not a futuristic thing anymore, and as we speak, it’s being used in healthcare right now.”

Beasley explained that AI is set to revolutionize healthcare by monitoring patients’ responses to medications over time, predicting when dose adjustments are needed, and analyzing lab results and vital signs to assist in critical decision-making. She emphasized how AI’s ability to transcribe patient conversations in real time can ensure patients have a clear record of their medical discussions, particularly when receiving distressing news.

“AI algorithms have been trained on thousands of skin images to identify skin cancers. In one notable study, AI systems outperformed experienced dermatologists in diagnosing melanoma,” Beasley said, highlighting the precision and potential of AI in medical diagnostics.

Beasley highlighted similar advancements in breast cancer detection.

“In South Korea, research comparing AI-based breast cancer detection to radiologists found AI had a higher sensitivity, detecting early-stage cancer with 91% accuracy compared to radiologists at 78%,” Beasley said.

She said that AI is alleviating healthcare burnout by reducing mundane documentation tasks, enabling providers to focus more on patient care.

“AI is improving work-life balance, and happy doctors mean better care for patients,” Beasley said.

However, she stressed the ethical challenges associated with AI. 

“Bias in AI algorithms can cause issues,” Beasley said. “If AI systems are trained on biased data, they could perpetuate inequalities.”

Beasley also raised questions about accountability and privacy, asking: “If you went to the doctor and AI helped to make a decision for you, who’s responsible? AI or the healthcare provider?”

Beasley concluded by advocating for AI integration into healthcare education. 

“We need to teach students that whatever gets generated in that AI, they have to look at it and say, ‘This looks right,’ or ‘It doesn’t look right,’ and that foundational knowledge must remain strong so providers can ensure quality and safety,” Beasley said.

Philosophy professor Heidi Diebel examined AI’s role in healthcare through an ethical and philosophical lens.

Drawing from “The Way of Medicine: Ethics and the Healing Profession” by Farr Curlin and Christopher Tollefsen, Diebel emphasized the importance of the internal qualities of good medicine, such as expertise and commitment to human health, over external rewards like financial gain.

“Medicine is about the well-functioning of the human organism as a whole,” Diebel said. 

She said that focusing too much on external goods could undermine the profession’s integrity, leading to overprescription or overtreatment.

Diebel also addressed the virtues of solidarity and trust as fundamental to healthcare.

“Solidarity involves a firm and enduring commitment to the goods of other persons and thus to the common goods of communities, so the medical professional is willing primarily that good of health for the patient,” Diebel said. 

She said she is concerned that AI might erode these values, highlighting concerns about the potential loss of person-to-person interaction and the importance of having live human experts overseeing diagnostics and care. 

“We need to be careful not to lose the person-to-person interaction,” Diebel said. “Let’s find a way to have real humans talk to our vulnerable; let’s not over-rely on AI.”

The panel ended with a Q&A session, touching on topics like preparing students for AI-integrated healthcare roles and addressing privacy concerns. The speakers also underscored the importance of ethical considerations, human oversight, and trust and solidarity in the patient-provider relationship.

Beasley summarized her argument: “AI can be a great tool, but there also needs to be a balancing act, that human element at the end of the day. Start embracing it, using it ethically and responsibly.” 

Natulia Momo can be contacted at momo4842@stthomas.edu.