Knowledge-Based Systems, vol.339, 2026 (SCI-Expanded, Scopus)
In this study, a comprehensive explainable artificial intelligence (XAI) framework for audit opinion classification is proposed by integrating the Scalable Financial-oriented Interpretable eXplanation (SFIX) model with a Feed-Forward Neural Network (FFNN). Unlike traditional post-hoc explainability methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), the SFIX framework provides multidimensional, intrinsic feature evaluation through dynamic importance, confidence, pattern, anomaly, and risk components. Using a real-world financial dataset, it is demonstrated that the top features selected by SFIX enable FFNN models to achieve high predictive performance while substantially improving interpretability. Notably, an FFNN trained with only the 20 SFIX-selected features achieves nearly the same accuracy as the full-feature model, with an accuracy reduction of only 0.7%. The findings indicate that SFIX offers a more reliable and transparent feature selection mechanism and enhances model performance in low-dimensional feature spaces. Overall, a novel XAI-based approach tailored to financial auditing is presented, supporting more explainable and trustworthy decision support systems.
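The selection step described above can be sketched as follows. The abstract does not specify how SFIX aggregates its five components, so this minimal illustration assumes a simple equal-weight composite score over randomly generated per-feature component values; the component names follow the abstract, but the aggregation rule and all numbers are placeholders, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 100

# Hypothetical per-feature scores for the five SFIX components named in
# the abstract (dynamic importance, confidence, pattern, anomaly, risk).
# Real SFIX computes these from the financial data; random values are
# used here purely for illustration.
components = {
    "importance": rng.random(n_features),
    "confidence": rng.random(n_features),
    "pattern":    rng.random(n_features),
    "anomaly":    rng.random(n_features),
    "risk":       rng.random(n_features),
}

# Assumed composite score: unweighted mean of the five components.
score = np.mean(np.stack(list(components.values())), axis=0)

# Keep the 20 highest-scoring features, mirroring the reduced FFNN
# input used in the study.
top20 = np.argsort(score)[::-1][:20]
print(len(top20))  # 20
```

The indices in `top20` would then define the reduced feature set on which the FFNN is trained and compared against the full-feature model.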