Artificial intelligence is becoming a ubiquitous part of our daily lives. It is used to drive cars, power smart devices, create art, and improve healthcare. Given the potential of AI, healthcare leaders are increasingly challenged with creating strong AI units and teams within their organizations.
This is not a trivial task, as it requires a level of technological understanding that many leaders do not yet possess, given how new and rapidly evolving the field is. Competent AI teams must address a wide range of important issues such as patient safety, fairness, governance, explainability, reproducibility, data drift, clinical workflows, and decision support, as well as the technical details of the algorithms themselves. Let me highlight an example of the challenges that healthcare leaders, and the AI teams they assemble, need to think about if AI is going to revolutionize healthcare.
A common type of AI is machine learning, which can be used to identify patterns in electronic health record data to predict clinical outcomes. The “learning” part refers to the adaptive process of finding mathematical functions (models) that produce actionable predictions. A model is typically evaluated by making predictions on new, previously unseen data and measuring its predictive accuracy. While this makes sense from a mathematical perspective, it does not mimic the way we as humans solve problems and make decisions.
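This single-objective workflow can be sketched in a few lines. The example below is a deliberately toy illustration, not a clinical model: it “learns” a simple cutoff on training data, then scores accuracy on a held-out set. All data values are invented for illustration.

```python
# Toy sketch of the standard single-objective workflow:
# fit a model on training data, then report predictive
# accuracy on held-out data. All values are illustrative.

def accuracy(threshold, xs, ys):
    """Fraction of cases where (x >= threshold) matches the outcome y."""
    return sum((x >= threshold) == y for x, y in zip(xs, ys)) / len(xs)

def fit_threshold(xs, ys):
    """Pick the cutoff with the best training accuracy (a toy 'model')."""
    return max(sorted(set(xs)), key=lambda t: accuracy(t, xs, ys))

# Toy "training" and held-out "test" sets (x = risk score, y = outcome)
train_x, train_y = [0.1, 0.4, 0.35, 0.8], [False, False, True, True]
test_x,  test_y  = [0.2, 0.9, 0.5, 0.7],  [False, True, False, True]

t = fit_threshold(train_x, train_y)
print(accuracy(t, test_x, test_y))
```

A single accuracy number falls out the other end, and that number is what the algorithm optimizes for, which is exactly the limitation discussed next.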
Consider the car-buying process. The key decision is which car to buy. We weigh make and model along with other criteria such as size, color, style, engine type, horsepower, range, efficiency, reliability and, of course, price. We rarely consider a single feature and usually don’t get everything we want. Weighing multiple objectives is not unique to buying a car; we go through the same process for many decisions in life, such as selecting a university, a political candidate, or a job. These tasks are not easy, but we seem to be wired to make decisions this way. So why does machine learning typically focus on a single goal?
A possible answer to this question is that machine learning models are typically developed by AI experts who may not fully understand healthcare. Consider the goal of identifying novel drug targets from machine learning models that use genetic information to predict disease risk. The hope is that the model will point to genes with protein products that could be developed into new drugs. However, as with buying a car, there are other important factors. For example, only about 10% of proteins have chemical properties that make them accessible to small-molecule drug candidates. This information on the “druggability” of proteins could be used to assess the value or usefulness of a model in addition to its predictive accuracy. It goes beyond model performance to include model utility and actionability.
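To make this concrete, here is a hypothetical sketch of re-ranking candidate genes by combining each model's predictive accuracy with a druggability score for the gene's protein product. The gene names, scores, and the weighted-sum utility function are all illustrative assumptions, not real data or an established scoring method.

```python
# Hypothetical re-ranking of candidate gene targets. Both score columns
# and the 50/50 weighting are assumptions made for illustration.

candidates = {
    # gene: (model predictive accuracy, druggability score in [0, 1])
    "GENE_A": (0.91, 0.05),   # most accurate model, but a hard-to-drug target
    "GENE_B": (0.84, 0.80),
    "GENE_C": (0.88, 0.60),
}

def utility(accuracy, druggability, w=0.5):
    # Simple weighted combination; the weight w is an assumption.
    return w * accuracy + (1 - w) * druggability

ranked = sorted(candidates, key=lambda g: utility(*candidates[g]), reverse=True)
print(ranked)  # GENE_B first: less accurate, but far more actionable
```

Ranking by accuracy alone would put GENE_A first; once druggability enters the picture, the most *useful* target changes, which is the point of looking beyond performance to utility.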
How do we teach machine learning algorithms to choose models the way humans buy cars? The good news is that many multi-objective methods for machine learning have been developed. However, they are rarely used in healthcare or other fields. An intuitive approach is called Pareto optimization, in which several machine learning models are generated and evaluated using two or more quality criteria, such as accuracy and complexity. The goal is to identify the subset of models that optimally balance the trade-offs among all criteria. This approach more closely mimics the process of buying a car.
Machine learning to improve health is different from other application areas. Models need to do more than predict with good accuracy. They must be transparent, impartial, explainable, trustworthy, useful and actionable. They must teach us something. They must be good for patients. They must reduce healthcare costs. None of this is possible when optimizing for a single objective.
An important next step for clinical AI is for AI and IT professionals to continue working closely with clinicians to identify the right set of goals to maximize the impact of machine learning models on patient care. This will require engaging the human side of AI in addition to the algorithmic side. Healthcare leaders play a critical role in building AI teams because they understand the necessary health outcome goals, they commit resources, and they can foster the diverse and collaborative culture necessary for success. Healthcare presents unique challenges and requires an AI strategy tailored to the complexities of patient care and institutional goals.