Leveraging Explainable Artificial Intelligence for Transparent and Trustworthy Cancer Detection Systems


Date

2025

Publisher

Elsevier

Green Open Access

No

Publicly Funded

No

Impulse

Average

Influence

Average

Popularity

Top 10%

Abstract

Timely detection of cancer is essential for enhancing patient outcomes. Artificial Intelligence (AI), especially Deep Learning (DL), demonstrates significant potential in cancer diagnostics; however, its opaque nature presents notable concerns. Explainable AI (XAI) mitigates these issues by improving transparency and interpretability. This study provides a systematic review of recent applications of XAI in cancer detection, categorizing the techniques according to cancer type, including breast, skin, lung, colorectal, brain, and others. It emphasizes interpretability methods, dataset utilization, simulation environments, and security considerations. The results indicate that Convolutional Neural Networks (CNNs) account for 31% of model usage, SHAP is the predominant interpretability framework at 44.4%, and Python is the leading programming language at 32.1%. Only 7.4% of studies address security issues. The review identifies significant challenges and gaps, guiding future research in trustworthy and interpretable AI within oncology.
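As an illustration of the kind of interpretability workflow the review surveys (this is a minimal sketch, not code from the paper), the Python snippet below applies SHAP to a simple classifier trained on the public scikit-learn breast-cancer dataset; the random-forest model and dataset are assumptions chosen for demonstration only.

# Illustrative sketch only (not from the paper): SHAP feature attributions
# for a simple tabular cancer classifier. Assumes the `shap` and
# `scikit-learn` packages; the model and dataset are stand-ins.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Public Wisconsin breast-cancer dataset: 30 numeric features, binary label.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
sv = explainer(X)  # Explanation of shape (samples, features, classes) in recent shap versions

# Rank features by mean |SHAP| for the positive class: a global view of
# which inputs drive the model's predictions.
mean_abs = np.abs(sv.values[:, :, 1]).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")

For the image-based CNN pipelines the review highlights, gradient-based attribution (e.g., shap.GradientExplainer or Grad-CAM-style saliency maps) plays the analogous role, highlighting image regions rather than tabular features.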

Keywords

Explainable Artificial Intelligence, Cancer Detection, Machine Learning, Deep Learning, Black-Box

WoS Q

Q1

Scopus Q

Q1

Source

Artificial Intelligence in Medicine

Volume

169

Start Page

103243

PlumX Metrics

Citations

Scopus: 4

PubMed: 1

Captures

Mendeley Readers: 24

OpenAlex FWCI

41.70569625
