Comprehensive explainable AI approach for audit opinion classification using feed-forward neural networks


Cil A. E., Buyuktanir T., YILDIZ K.

Knowledge-Based Systems, vol.339, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article
  • Volume: 339
  • Publication Date: 2026
  • DOI Number: 10.1016/j.knosys.2026.115606
  • Journal Name: Knowledge-Based Systems
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Library, Information Science & Technology Abstracts (LISTA)
  • Keywords: Audit opinion classification, Explainable artificial intelligence (XAI), Feature importance / feature selection, Feed-Forward neural network (FFNN), Financial data analytics, SFIX framework
  • Marmara University Affiliated: Yes

Abstract

In this study, a comprehensive explainable artificial intelligence (XAI) framework for audit opinion classification is proposed by integrating the Scalable Financial-oriented Interpretable eXplanation (SFIX) model with a Feed-Forward Neural Network (FFNN). Unlike traditional post-hoc explainability methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), the SFIX framework provides multidimensional and intrinsic feature evaluation through dynamic importance, confidence, pattern, anomaly, and risk components. Using a real-world financial dataset, it is demonstrated that the top features selected by SFIX enable FFNN models to achieve high predictive performance while significantly improving interpretability. Notably, an FFNN trained with only 20 SFIX-selected features achieves nearly the same accuracy as the full-feature model, with a reduction of only 0.7%. The findings indicate that SFIX offers a more reliable and transparent feature selection mechanism and enhances model performance in low-dimensional feature spaces. Overall, a novel XAI-based approach tailored to financial auditing is presented, supporting more explainable and trustworthy decision support systems.