Comparative Analysis of Lightweight CNNs for Tomato Leaf Disease Classification Using Grad-CAM Visualization
Abstract
Tomato leaf disease classification based on digital imagery has become an important approach in supporting smart agriculture, particularly for early detection of plant diseases. This study compares the performance of several lightweight Convolutional Neural Network (CNN) architectures, namely MobileNetV3-Small, MobileNetV2, and EfficientNet-B0, in classifying tomato leaf diseases using the PlantVillage dataset. The dataset consists of 3,628 images distributed across 10 classes (9 disease classes and 1 healthy class), split 80% for training and 20% for validation. Performance was evaluated using classification reports, confusion matrices, and interpretability analysis through Grad-CAM and feature-map visualization. The experimental results show that all models achieved very high accuracy, exceeding 99%. EfficientNet-B0 obtained the best performance with a validation accuracy of 99.59%, followed by MobileNetV2 at 99.45% and MobileNetV3-Small at 99.04%. However, model complexity increased with accuracy: EfficientNet-B0 had the largest number of parameters and FLOPs. Grad-CAM analysis revealed that higher-accuracy models focused their activations more precisely on leaf lesion regions. This study confirms that lightweight CNN architectures can deliver excellent classification performance while offering strong potential for deployment in plant disease detection systems on resource-limited devices.
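As a minimal sketch of the Grad-CAM computation referenced in the abstract (using randomly generated feature maps and gradients rather than the paper's actual models), the core channel-weighting step can be written as follows; the function name and the toy tensor shapes are illustrative assumptions, not taken from the study:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Combine a conv layer's feature maps (C, H, W) with the gradients of the
    target class score w.r.t. those maps into a class activation heatmap."""
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))                       # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random activations standing in for a last conv layer.
rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(maps, grads)
print(heatmap.shape)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the leaf image, which is how the lesion-focus comparison between the three architectures would be visualized.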
Copyright (c) 2026 Sayuti Rahman, Hartono Hartono, Arnes Sembiring, Muhammad Khahfi Zuhanda, Bayu Aditya Pratama, Dewi Martini

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors retain copyright and grant EXPLORER the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0) that allows others to share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material) the work for any purpose, even commercially, with an acknowledgement of the work's authorship and initial publication in EXPLORER.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in EXPLORER.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.