The continued development of new and expensive therapies, increasing life expectancy, and rising rates of chronic diseases all contribute to physician overload. Artificial intelligence (AI) promises to partially alleviate the impact of these problems by improving medical care and making it more cost-effective. In radiology, it is well known that many routine tasks can be automated, and AI may provide new insights that support radiologists in several of their tasks, improve radiology department workflow and, of course, enhance patient care [1].
In radiological practice, AI often takes the form of clinical decision support systems (CDSS), which assist radiologists in disease diagnosis and follow-up recommendations [2]. Where conventional CDSS combine individual patient characteristics with an existing knowledge base using logic-based approaches, AI-based CDSS apply models trained on large patient datasets that gather the information required to answer a specific question, e.g., anatomic segmentation, lesion detection, tissue characterization, or image generation. These applications can also be improved by integrating biometric data (e.g., age, sex, weight, blood tests) with image-based data.
However, AI is not yet fully integrated into radiology departments: the percentage of radiology departments and clinics that have AI-based tools and use them in their daily work is very small, owing to the limited availability of annotated data and to challenges in technical validation and integration [3]. Another drawback hindering the implementation of AI is the “black box” effect: the AI system provides useful predictions without revealing any information about its internal workings. In the clinical use of AI-based models, where errors can lead to dangerous outcomes, the black-box effect is not acceptable to most physicians and patients. For this reason, radiologists, like all other physicians, are reasonably reticent to adopt or accept AI results [4]. The use of explainable artificial intelligence (XAI) aims to overcome the opaque nature of machine learning [5]. It should be noted, however, that traditional machine learning algorithms generally provide better explainability than the latest deep learning models [5].
XAI is a set of techniques that aims to make algorithms understandable, so that the logic behind their predictions can be followed [6]. XAI rests on basic principles such as transparency, comprehensibility, intelligibility, demonstrability, explicability, and interpretability [7]. The main objective of XAI is to understand how a model works, which translates into reducing errors, anticipating the strengths and weaknesses of the model, and avoiding unexpected behavior once the model is in production.
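To give a concrete, minimal illustration of this idea on tabular (biometric) data, the sketch below uses permutation feature importance with a scikit-learn classifier: shuffling one feature at a time and measuring the drop in accuracy reveals which inputs the model actually relies on. The feature names and synthetic data are purely hypothetical placeholders and do not correspond to any specific radiological AI product.

```python
# Minimal sketch: permutation feature importance as a simple XAI technique.
# Hypothetical feature names and synthetic stand-in data (illustration only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "weight", "lesion_diameter_mm", "blood_marker"]

# Synthetic data: the label mostly depends on lesion diameter and age.
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean_drop in zip(feature_names, result.importances_mean):
    print(f"{name}: {mean_drop:.3f}")
```

Such a ranking does not open the model itself, but it gives clinicians and developers a first, inspectable account of which inputs drive the predictions.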
Clinical validation is currently the first widely discussed requirement for an AI system to be accepted. Radiologists not only expect to receive predictions, such as malignancy vs. benignity, but also need to know, at least at a basic level, why and how the decision was made. If the AI system can answer these questions, expert confidence in the system will improve significantly [8].
Explainability results are usually represented visually (on radiological images) or by natural language explanations (in textual reports). Both show radiologists how different factors contribute to the final recommendation. It is important to note that explainability helps clinicians evaluate the recommendations provided by a system based on their experience and clinical judgment. This allows them to make an authoritative decision about whether to trust the system’s recommendations. In short, with XAI, false positives and false negatives are more easily identified [9].
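As an example of how a visual explanation can be produced, the sketch below implements occlusion sensitivity, one simple way to build a saliency map over an image: regions whose masking causes the largest drop in the model's output are the regions the model relied on. The `predict_proba` callable and the dummy data are hypothetical placeholders standing in for any trained image classifier; this is a sketch of the general technique, not the method of any particular product.

```python
# Minimal sketch: occlusion sensitivity for a visual explanation (saliency map).
import numpy as np

def occlusion_map(image, predict_proba, patch=16, stride=8, fill=0.0):
    """Slide a blank patch over the image and record how much the model's
    prediction drops; large drops mark regions the model depends on."""
    h, w = image.shape
    baseline = predict_proba(image)
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            drop = baseline - predict_proba(occluded)
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heatmap / np.maximum(counts, 1)

# Dummy model that only "looks at" the image centre, to show the idea.
def dummy_predict(img):
    return float(img[48:80, 48:80].mean())

saliency = occlusion_map(np.random.rand(128, 128), dummy_predict)
print(saliency.shape, saliency.max())
```

Overlaying such a heatmap on the original radiological image lets the radiologist check whether the model's attention coincides with the suspicious finding, which is exactly the kind of verification that helps expose false positives and false negatives.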
Explainability also serves to improve AI systems: when radiologists identify instances where the system performs poorly, they can report them to developers, promoting quality assurance and product improvement [8, 10].
The omission of explainability in clinical decision support systems is a threat to the confident use of AI. Further work is needed to sensitize developers and healthcare professionals to the challenges and limitations of black-box algorithms in radiological AI and to enhance multidisciplinary collaboration to meet these issues with combined strengths [11]. The future of XAI is promising: many advances are being made in the field, and AI is expected to become increasingly explainable [12].
Finally, the real challenge is not to oppose the incorporation of AI into professional life, but to accept the inevitable change in radiological practice by incorporating XAI into AI models, as shown in Fig. 1. Collaborative work between engineers and radiologists is essential to reach that level of confidence in AI tools and to avoid the mistakes previously made with other AI-based solutions [13, 14]. In this way, we will avoid the real danger: accepting results offered by machines without any supporting reasoning.
Fig. 1 The future goal of XAI in clinical practice