Despite their predictive capabilities and rapid advancement, the black-box nature of Artificial Intelligence (AI) models, particularly in healthcare, has sparked debate regarding their trustworthiness and accountability. In response, the field of Explainable AI (XAI) has emerged, aiming to create transparent AI technologies. We present a novel approach to enhancing AI interpretability through texture analysis, with a focus on cancer datasets. By extracting specific texture features from medical images and correlating them with prediction outcomes, our methodology aims to elucidate the underlying mechanics of AI models, improve their trustworthiness, and facilitate human understanding. The code is available at https://github.com/xrai-lib/xai-texture.
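As a rough illustration of the idea of correlating a texture feature with a prediction outcome, the sketch below computes a simple gray-level co-occurrence (GLCM) contrast feature and its correlation with binary predictions on synthetic patches. The data, the choice of contrast as the feature, and the hand-rolled GLCM are illustrative assumptions, not the paper's actual pipeline (see the linked repository for that).

```python
# Hedged sketch: correlating a texture feature (GLCM contrast) with
# model predictions. All data and choices here are illustrative.
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of the gray-level co-occurrence matrix for horizontal neighbors."""
    q = (img.astype(float) / 256 * levels).astype(int)  # quantize to `levels` gray bins
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                                  # count co-occurring pairs
    glcm /= glcm.sum()                                   # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())            # contrast = E[(i - j)^2]

rng = np.random.default_rng(0)
# Synthetic "images": smooth vs. noisy 16x16 patches stand in for two classes.
smooth = [128 + rng.integers(-5, 5, (16, 16)) for _ in range(20)]
noisy = [rng.integers(0, 256, (16, 16)) for _ in range(20)]
features = np.array([glcm_contrast(x) for x in smooth + noisy])
preds = np.array([0] * 20 + [1] * 20)  # hypothetical model outputs per patch

# Pearson correlation between the texture feature and the predictions:
# a high value would flag this feature as relevant to the model's decisions.
r = np.corrcoef(features, preds)[0, 1]
print(round(r, 3))
```

Noisy patches have far higher co-occurrence contrast than smooth ones, so the correlation with the class labels comes out strongly positive, which is the kind of signal such an analysis would surface.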