AI in healthcare still has a long way to go

For many years, artificial intelligence (AI) has promised to greatly improve the healthcare industry. Whether by increasing access to and understanding of data, providing better ways to navigate patient care, or accelerating new research and development efforts, healthcare experts eagerly await the mainstream use of AI. Many companies have invested billions of dollars in hopes of improving the quality and usability of AI in their own fields. These efforts have certainly produced beneficial results, many of which have become the bedrock of continued building and innovation in the field. However, the technology still has a long way to go.

One of the central challenges in developing AI for healthcare has been the cultivation of good data sets on which to train models. Conceptually, the broader scope of ‘AI’ technology uses large data sets to decode patterns and make recommendations accordingly. However, these recommendations and pattern-recognition outputs are only as good as the underlying data, which can be problematic in many contexts, especially when dealing with patient care data.
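To make that dependence concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the synthetic cohort, the feature names, and the simple logistic model. The point is only that a prediction for a patient outside the training distribution is an extrapolation, not evidence.

```python
# Minimal sketch: a model's output can only reflect the data it was trained on.
# All data below is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training cohort drawn from a single clinic:
# mostly elderly patients with severe disease, a narrow slice of reality.
age = rng.normal(70, 5, size=1000)
severity = rng.normal(8, 1, size=1000)
X_train = np.column_stack([age, severity])
# Synthetic label: whether an aggressive treatment was beneficial.
y_train = (severity > 7.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A younger patient with mild disease sits far outside the training
# distribution, so this probability is an extrapolation the data
# never supported.
new_patient = np.array([[35.0, 3.0]])
print(model.predict_proba(new_patient))
```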

The potential introduction of bias into AI-based care has been discussed at length by leaders in the field. According to Dr. Paul Conway, chair of policy and global affairs for the American Association of Kidney Patients, “Devices that use artificial intelligence and machine learning will transform health care delivery by increasing efficiency in key processes in treating patients…” However, as Pat Baird, Regulatory Head of Global Software Standards at Philips, described it, “To help support our patients we need to be more aware of them, their medical conditions, their environment and their needs, and we want to be able to better understand potential confounding factors driving some of the trends in the data collected…” The latter alludes to a very specific problem that many AI enthusiasts face time and time again: bias due to data sets that are too small, too fragmented, or too imprecise.

For example, an AI algorithm developed to recommend pain medication based on a data set containing only cancer patients would likely not make sense to apply to the general population. After all, the pain medications cancer patients need are much different, and typically far more potent, than those needed by the general public, so the recommendations would be severely skewed. And this is only one type of bias. Extending this potential for inaccuracy across races, genders, socioeconomic statuses, and other factors can give way to seriously flawed clinical decisions.
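A hedged illustration of that cancer-cohort example, again with entirely synthetic and assumed numbers: a dose model fit only on cancer patients systematically over-recommends when applied to the general population, and the skew is exactly the baseline gap between the two groups the model never saw.

```python
# Synthetic demonstration of cohort bias: a dose model trained only on
# cancer patients is applied to the general population.
# All dose values and the 35-unit baseline gap are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

def cohort(n, base_dose):
    """Return pain scores (0-10) and an appropriate dose for this cohort."""
    pain = rng.uniform(0, 10, size=n)
    dose = base_dose + 4.0 * pain + rng.normal(0, 2, size=n)
    return pain.reshape(-1, 1), dose

# Assumed: cancer patients need a much higher baseline dose.
X_cancer, y_cancer = cohort(2000, base_dose=40.0)
X_general, y_general = cohort(2000, base_dose=5.0)

# The model only ever sees the cancer cohort.
model = LinearRegression().fit(X_cancer, y_cancer)

# Applied to the general population, it over-prescribes by roughly the
# 35-unit baseline gap, at every pain level.
skew = (model.predict(X_general) - y_general).mean()
print(f"average over-recommendation: {skew:.1f} dose units")
```

The model here is not “wrong” in any internal sense; it faithfully learned the pattern it was shown. The skew comes entirely from the unrepresentative training cohort, which is why it cannot be detected without data from the population the model will actually serve.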

Why is this important? Because, if used correctly, AI has the potential to become a powerful force in the clinical environment. I have written in the past about how AI can be a valuable tool in a variety of fields, ranging from radiology to cancer care. Although it may not be able to replace the complexity, knowledge, and wisdom of physician-led patient care, there may indeed be a place for AI modalities as tools for streamlining clinical workflows.

However, for this technology to truly add value, systems must provide highly accurate recommendations grounded in accurate and representative data. Only then can clinicians derive real value from the technology and make a bias-free impact on care delivery. Indeed, innovators, healthcare leaders, and care providers have a huge task at hand in the coming years.
