Authors: Toumaj, Shiva; Heidari, Arash; Navimipour, Nima Jafari
Date Available: 2025-09-15
Year: 2025
ISSN: 0933-3657
eISSN: 1873-2860
DOI: https://doi.org/10.1016/j.artmed.2025.103243
Handle: https://hdl.handle.net/20.500.12469/7482
Title: Leveraging Explainable Artificial Intelligence for Transparent and Trustworthy Cancer Detection Systems
Type: Article
Language: en
Access Rights: info:eu-repo/semantics/closedAccess
Keywords: Explainable Artificial Intelligence; Cancer Detection; Machine Learning; Deep Learning; Black-Box
Scopus ID: 2-s2.0-105013515395

Abstract: Timely detection of cancer is essential for enhancing patient outcomes. Artificial Intelligence (AI), especially Deep Learning (DL), demonstrates significant potential in cancer diagnostics; however, its opaque nature presents notable concerns. Explainable AI (XAI) mitigates these issues by improving transparency and interpretability. This study provides a systematic review of recent applications of XAI in cancer detection, categorizing the techniques according to cancer type, including breast, skin, lung, colorectal, brain, and others. It emphasizes interpretability methods, dataset utilization, simulation environments, and security considerations. The results indicate that Convolutional Neural Networks (CNNs) account for 31% of model usage, SHAP is the predominant interpretability framework at 44.4%, and Python is the leading programming language at 32.1%. Only 7.4% of studies address security issues. This study identifies significant challenges and gaps, guiding future research in trustworthy and interpretable AI within oncology.