Ensemble Deep Learning: A State-Of-The-Art Comprehensive Review
Abstract
Ensemble learning has long been a cornerstone of machine learning, improving predictive performance and robustness by combining multiple models. In the era of deep learning, however, the landscape of ensemble techniques has evolved rapidly, shaped by advances in neural architectures, training methods, and practical application requirements. This review provides a state-of-the-art survey of ensemble deep learning approaches, focusing on recent developments in ensemble methods. We introduce a classification of ensemble strategies based on model diversity, fusion mechanisms, and task alignment, and highlight emerging techniques such as attention-based ensemble fusion, neural-architecture-search-based ensembles, and ensembles of large language or vision models. The review also examines theoretical foundations, practical trade-offs, and domain-specific adaptations. Compiling state-of-the-art benchmarks, we evaluate ensemble performance in terms of accuracy, efficiency, robustness, and interpretability. We further identify key challenges, including scalability, overfitting, and deployment limitations, and present open research directions, including ensemble learning for continual learning, federated learning, and learning from scratch. By connecting key insights with current trends, this review aims to guide researchers and practitioners in designing and implementing ensemble deep learning systems to address the next generation of AI challenges.
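The core mechanism the abstract describes, combining the outputs of multiple models through a fusion rule, can be illustrated with a minimal sketch of probability averaging ("soft voting"), one of the simplest fusion mechanisms covered by the surveyed taxonomy. The three probability vectors below are hypothetical stand-ins for the softmax outputs of trained base models; they are not from any system described in the article.

```python
def soft_vote(prob_vectors):
    """Fuse per-model class probabilities by averaging, then return the argmax class.

    prob_vectors: list of per-model probability vectors, one entry per class.
    """
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    # Average each class's probability across all base models.
    avg = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    # The ensemble prediction is the class with the highest mean probability.
    return max(range(n_classes), key=avg.__getitem__)

# Example: two of three hypothetical models favour class 1, one favours class 0;
# the averaged probabilities (0.37, 0.63) make the ensemble predict class 1.
preds = [
    [0.2, 0.8],
    [0.3, 0.7],
    [0.6, 0.4],
]
print(soft_vote(preds))  # -> 1
```

Averaging probabilities rather than counting hard votes lets confident models outweigh uncertain ones, which is one reason soft voting is a common default fusion rule for deep ensembles.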
Article Details

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.