Study Pinpoints Challenges for AI Explainability in Digital Pathology


Artificial intelligence (AI), and machine learning in particular, is transforming pathology workflows. By automating steps in tissue analysis, AI algorithms accelerate diagnosis and improve diagnostic accuracy. However, AI algorithms are difficult for humans to interpret, which complicates their regulatory approval for clinical use and limits their widespread clinical adoption. To overcome these interpretability challenges, researchers have developed explainable AI (xAI) models. Nevertheless, there are no widely accepted criteria for determining the explainability of AI algorithms, and studies assessing clinicians' experience with AI models are lacking.
In a recent study, researchers from the Distributed Artificial Intelligence Laboratory at the Technical University of Berlin conducted a first-of-its-kind mixed-methods study assessing how pathologists interpret and use AI-assisted image analysis.1 The study shows that cognitive biases influence how pathologists interpret state-of-the-art xAI approaches and that expectations of AI assistance in pathology are often unrealistic.
“Although AI assistance in pathology offers incredible benefits for patients and diagnosticians, it is important for AI vendors to consider both the social and psychological aspects of building explainability for their solutions. To adequately do so, explainable AI solutions must be developed with feedback and validation from real user studies,” said Theodore Evans, researcher at the Distributed Artificial Intelligence Laboratory and first author of the study.

Commenting on the implications of their findings on the development of AI models for pathology, Evans noted: “It is important for stakeholders in the development of regulatory aspects of clinical AI certification to carefully define the requirements for transparency and explainability, and to be aware of the second-order effects that mandating these as components of clinical AI systems may have.”
The study will appear in the August 2022 issue of Future Generation Computer Systems.

Study Rationale: Understanding the xAI Explainability Paradox
“While identifying promising research directions in explainable AI for medical imaging applications, we discovered that little work in this domain was grounded in an understanding of the potential user interactions with xAI systems,” Evans said. “Instead, the majority of state-of-the-art research came solely from the machine learning domain, based only upon the intuitions of researchers on what information about internal model workings might be valuable to any given stakeholder,” he added.
Evans also noted that, although studies have investigated explainability requirements for AI systems in pathology, none have directly assessed the impact of existing xAI approaches on target users in this domain.
He explained that rather than adding yet another algorithmic approach to this already extensive body of work, they set out to better understand the implications of integrating existing methods into interactions between humans and AI in the context of digital and AI-assisted pathology.
