The opening plenary Friday morning at AMP25 set the stage for a thoughtful day. Takunda Matose, PhD, spoke about the use of artificial intelligence (AI) in healthcare, urging users and stakeholders to step back from its current uses for more mindful consideration.
A bioethicist by training, Matose is a researcher at Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine, focusing on healthcare justice, health inequalities, and research ethics.
True to his roots, Matose opened his talk by discussing ethics, beginning with a definition. This framed the primary theme of his talk: considering the moral practices that affect everyone in a group collectively, or that lead to collective action on behalf of the whole.
First, he questioned the role researchers play in relation to their responsibilities to patients, participants, and the public, and how those responsibilities shape the development and use of AI tools for healthcare.
As research capabilities evolve, especially in the wake of more accessible AI, the ability to collect data keeps growing, along with attempts to integrate and use all of it. A common mantra among researchers is that there is always a need for more data. The implication is that with just a bit more data, the healthcare community might be able to answer more questions as completely as possible.
Matose challenged this notion, explaining that most data go unused; increasing the amount of data would likely not improve outcomes and in fact has diminishing marginal utility. Ideally, research should rely less on broad retrospective data and more on developing methods for focused prospective studies. This shift requires reconsidering what data are collected, in what amounts, and how they are used.
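The statistics behind that diminishing marginal utility are simple: the precision of an estimate grows only with the square root of the sample size. A minimal sketch (ours, not from the talk, with an arbitrarily assumed prevalence) makes the point:

```python
# A minimal sketch of diminishing marginal utility: the standard error
# of an estimated rate shrinks only with the square root of sample size.
import math

p = 0.1  # assumed prevalence of some finding, purely illustrative

for n in [100, 1_000, 10_000, 100_000]:
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimated rate
    print(f"n={n:>7,}  standard error = {se:.4f}")

# Each tenfold increase in data cuts the error by only ~3.16x (sqrt(10)),
# so the last 90,000 records buy far less precision than the first 900 did.
```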
AI is increasingly used in research as a tool for completing complex tasks and accelerating discovery. Matose explained, “All [AI is] doing is functioning as a tool for doing really complex computational probabilities—probabilities that are going to change with more data inputs, [probabilities] that are going to change with the different pipelines that you’re all using. Each part along the way is going to manipulate the kind of outcomes that you’re going to get.”
He continued, “AI is really good at this whereas, us humans, we’re actually not very good at this.”
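His description of AI as an engine for computing probabilities that shift with every new input and pipeline choice can be made concrete with a toy Bayesian update (our illustration, with made-up batch counts, not anything from the talk):

```python
# A toy sketch of a computed probability shifting as data accumulates:
# a Beta-Binomial update of an estimated event rate.
alpha, beta = 1, 1  # uniform prior over the unknown rate

for batch_pos, batch_neg in [(3, 7), (12, 38), (40, 160)]:  # invented batches
    alpha += batch_pos  # observed positives in this batch
    beta += batch_neg   # observed negatives in this batch
    n_seen = alpha + beta - 2
    print(f"after {n_seen} samples: estimated rate = {alpha / (alpha + beta):.3f}")

# The same pipeline, fed different or additional batches, yields a different
# number -- the output is conditional on every upstream choice.
```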
While AI is demonstrably better than humans at analyzing data, it is limited by bias. Although bias is often considered a “bug” in a system, it can also be considered a “feature” in the context of the system’s parameter selection.
“It’s inescapable”
“Bias is simply the parameters of operation for these systems,” Matose said. “It’s all the decisions that we’re making in terms of how we’re designing the systems.” There is inherent bias in choosing which datasets are used to train and validate AI, and additional bias in how data are annotated. These biases are not inherently problematic, but they need to be thoroughly vetted and remediated where necessary. Further, because there are multiple types of bias, there is no single way to address bias in a system.
“These all introduce limitations in terms of the outcomes that we are going to get,” he said.
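One way to picture dataset selection as a “parameter of operation” is a toy simulation (our sketch, with invented subgroup rates): the cohort chosen to supply the training data fixes the base rate the model learns.

```python
# A minimal sketch of bias as a design parameter: which cohort supplies
# the training data determines what the model "sees".
import random

random.seed(0)
# Hypothetical population: two subgroups with different positivity rates.
group_a = [1 if random.random() < 0.05 else 0 for _ in range(10_000)]
group_b = [1 if random.random() < 0.20 else 0 for _ in range(10_000)]

population_rate = (sum(group_a) + sum(group_b)) / 20_000

# The design "parameter": training data drawn only from group A,
# e.g. the cohort that happened to be easiest to recruit.
training_rate = sum(group_a) / len(group_a)

print(f"population rate:       {population_rate:.3f}")  # ~0.125
print(f"rate the model learns: {training_rate:.3f}")    # ~0.050
# The gap is not a malfunction; it follows directly from the
# dataset-selection decision -- exactly the kind of bias Matose describes.
```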
Timing also plays a role in how bias is addressed. “Depending on when the bias shows up… the timing of the intervention is going to matter,” Matose pointed out. Even if the most accurate, bias-avoiding algorithm were created, he added, additional bias could still enter the system or project after the AI has been put to use.
Regardless of the presence or absence of bias, which he believes will never be fully eliminated, AI still has its limitations. Matose posed a number of questions to ask about a particular AI’s probabilistic abilities: What does it really mean for data and AI interpretation to be accurate? What is the predictive value of these models? Are true positives and negatives weighted more heavily in analysis? What are the rates of false positives and negatives? He explained that just because a model works on past data doesn’t mean it will work in the future.
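The arithmetic behind those questions is worth making concrete. A brief worked example (our numbers, chosen purely for illustration) shows why a model validated at one prevalence can mislead at another:

```python
# A worked example of predictive value: it depends on the model's error
# rates AND on prevalence in the population where it is deployed.
sensitivity = 0.95   # assumed true-positive rate
specificity = 0.95   # assumed true-negative rate

for prevalence in [0.10, 0.01]:
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)  # positive predictive value
    print(f"prevalence {prevalence:.0%}: PPV = {ppv:.0%}")

# prevalence 10%: PPV = 68% -- most positive calls are real
# prevalence  1%: PPV = 16% -- most positive calls are false alarms
# The same "95% accurate" model behaves very differently in a
# low-prevalence population, so past performance does not transfer.
```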
Despite these limitations, AI retains a fundamental advantage over humans as a tool for probabilistic reasoning, where it surpasses human ability. He cautioned against inserting humans into AI systems indiscriminately. “We do need to be judicious about bringing humans back into the loop, particularly if we as humans bring in additional biases that are not already present in these systems,” he explained.
Ethical eventuality
“Maybe we don’t need humans in the loop all the time,” he stated. “Or at least we should be very thoughtful about at which point having humans in the loop would be important.”
Matose closed his talk by discussing the primary motivating ethical questions that researchers should focus on. First, he asked how we should account for individuals in a community who fall outside the general population that benefits from a practice.
“There is always going to be someone who is not benefiting,” he explained. “There is always someone for whom these things are not going to work. Everything has a failure rate—everything.”
In addition to patient considerations, Matose asked for consideration of obligations to all stakeholders in the space, including healthcare providers and families. When discussing AI and its implementation into a system, it is necessary to account for many stakeholders’ needs and roles.
AI products and tools are often developed independently, with different functions and outcomes. How they are connected and integrated into the system for patient and healthcare provider use must be considered.
He proposed two approaches for using AI in healthcare settings: vertical integration, which centralizes access to reduce exposure points, or interoperability, which disperses control and responsibility so that multiple actors can access the systems and data. Each approach has advantages and disadvantages to be conscious of. “Whatever decisions we make, there are going to be tradeoffs,” he said.
What’s next?
There are multiple approaches to the two main challenges Matose identified: dealing with data and dealing with probabilism.
Recognizing that data are only useful in aggregate, and that social facts can be de-identified with the right tools, means that new regulatory and conceptual frameworks are needed moving forward.
AI tools use data to generate probabilities through probabilistic reasoning. While AI’s reasoning ability exceeds that of humans, he pointed out, it remains prone to error and uncertainty. Communicating algorithmic uncertainty is key when explaining what findings mean, how tools are developed and what their limitations are, and why particular research questions and methods are being used.
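What such communication might look like in practice, in a minimal sketch (ours, with assumed validation counts): report a point estimate together with its uncertainty rather than a bare number.

```python
# A minimal sketch of communicating algorithmic uncertainty: report a
# validated accuracy with a confidence interval, not as a point estimate.
import math

correct, total = 460, 500  # assumed validation counts, purely illustrative
acc = correct / total
se = math.sqrt(acc * (1 - acc) / total)
lo, hi = acc - 1.96 * se, acc + 1.96 * se  # ~95% normal-approximation interval

print(f"accuracy {acc:.1%} (95% CI {lo:.1%}-{hi:.1%}, n={total})")
# -> accuracy 92.0% (95% CI 89.6%-94.4%, n=500)
# The interval tells a clinician far more than "92%" alone.
```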
Matose encouraged communication with patients and other stakeholders. “Patients are plugged in,” he added. They have many resources outside of their doctor’s office.
“The consequences are much greater than at any point prior,” he declared. “In this sense, I think it’s more important to think about human discretion. Discretion in this sense might be more important than having humans ‘in the loop’.”
Putting humans at the center of design was Matose’s summary recommendation. “Everything either starts from humans or eventually it will touch humans.”
“Ultimately, this will, I think, lead to better clinical experiences and outcomes,” Matose concluded with optimism.
