Dan Perri, MD, is an associate professor of medicine in the Department of Medicine at McMaster University with clinical and research activities in the Divisions of General Internal Medicine, Critical Care Medicine, and Clinical Pharmacology and Toxicology. He is also Chief Medical Information Officer at St. Joseph’s Healthcare Hamilton, where he seeks to optimize safe and appropriate patient care through the innovative use of digital technologies.
What do you consider the biggest dangers of artificial intelligence (AI) in medicine?
Dan Perri, MD: I think there are a few risks with AI in medicine. The first, and the one that worries us the most, is that the physician-patient relationship may be altered: the AI may take over, or it may become awkward for patients and physicians to interact in a meaningful, human-to-human way with the technology present.
The next fear would be misinformation or misdiagnosis from AI and the repercussions of those kinds of mistakes. Then there is bias: AI is only as good as the data it was trained on. If it was trained with biased or lower-quality information, the output may not be valid. We also worry about applying information derived from one set of patients to a slightly different group of patients for whom it may not be applicable, whether because of cultural differences or lifestyle differences. So bias is an important concern.
Finally, privacy is a major issue as well. Much of this computing cannot be done locally, so most of the AI of the future will run in the cloud. That means very specific information about patients will have to go to the cloud to be processed and then come back as meaningful information. Privacy risks are certainly a worry.