IFAGGING: Semantically weighted intuitionistic fuzzy aggregation for interpretable cascaded forecasting


Baltaci A. Z., CAĞCAĞ YOLCU Ö., YOLCU U.

Information Sciences, vol. 745, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 745
  • Publication Date: 2026
  • DOI: 10.1016/j.ins.2026.123365
  • Journal Name: Information Sciences
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, Library, Information Science & Technology Abstracts (LISTA), MLA - Modern Language Association Database, zbMATH
  • Keywords: Cascade Forward Neural Networks, Genetic Algorithm-Based Hyperparameter Tuning, Interpretable Probabilistic Forecasting, Intuitionistic Fuzzy Clustering, Nonlinear and Linear Dependency Modelling, Semantically Aggregated Forecasting
  • Marmara University Affiliated: Yes

Abstract

This study proposes a novel forecasting framework, IFAGGING (semantically weighted intuitionistic fuzzy aggregation for interpretable cascaded forecasting), which integrates intuitionistic fuzzy clustering, cascade forward neural networks (CFNNs), and semantic aggregation. Unlike traditional bootstrap-based bagging approaches, IFAGGING leverages semantic diversity by partitioning lagged time series data using intuitionistic fuzzy c-means clustering. Each semantic partition is used to train multiple CFNNs, enabling the model to capture both linear and nonlinear fuzzy dependencies within distinct subspaces. The outputs are semantically weighted using intuitionistic fuzzy membership and non-membership degrees, and further regularized through the hesitation degree within an aggregation strategy, ensuring interpretability and uncertainty awareness. Hyperparameters of the entire architecture are optimized via a genetic algorithm. Performance evaluations on diverse time series demonstrate that IFAGGING consistently outperforms conventional benchmarks in terms of accuracy, generalizability, and interpretability. Sensitivity analysis across 50 runs shows stable RMSE distributions and narrow confidence intervals, confirming the model's consistency. Additional performance gains (up to 83% over the best and 68% over the second-best benchmarks) highlight the effectiveness of the proposed approach. Extensive residual analyses confirm the model's reliability, with absolute relative errors remaining mostly below 1% and residuals exhibiting no pattern or structure that could be further modelled.
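The aggregation step described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: the scoring rule below (redistributing the hesitation degree in proportion to membership before normalizing) is one common intuitionistic-fuzzy weighting convention, used here purely as an assumption to show how membership, non-membership, and hesitation degrees could combine cluster-specific CFNN forecasts into one interpretable output.

```python
import numpy as np

def if_weighted_aggregate(forecasts, mu, nu):
    """Combine per-cluster forecasts with intuitionistic fuzzy weights.

    forecasts : per-cluster model predictions for the current input
    mu, nu    : intuitionistic fuzzy membership / non-membership degrees
                of the input w.r.t. each cluster (mu + nu <= 1 elementwise)

    The hesitation degree pi = 1 - mu - nu quantifies residual uncertainty;
    here it is redistributed in proportion to membership (an illustrative
    choice, not necessarily the rule used in the paper).
    """
    forecasts = np.asarray(forecasts, dtype=float)
    mu = np.asarray(mu, dtype=float)
    nu = np.asarray(nu, dtype=float)
    pi = 1.0 - mu - nu                 # hesitation degree per cluster
    score = mu * (1.0 + pi)            # membership boosted by resolved hesitation
    weights = score / score.sum()      # normalize to a convex combination
    return float(np.dot(weights, forecasts))

# Example: three cluster-specific forecasts; the input belongs mostly
# to the first semantic cluster, so the aggregate leans toward it.
agg = if_weighted_aggregate(
    forecasts=[10.0, 20.0, 30.0],
    mu=[0.7, 0.2, 0.1],
    nu=[0.2, 0.6, 0.8],
)
```

Because the weights form a convex combination, the aggregate always stays within the range of the individual cluster forecasts, which is part of what keeps the scheme interpretable.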