Authors:
Banupriya M, Kalirajan M, Mithunraj Muneeswaran M P, Naveen M, Mukesh Kumar M
Published in:
Journal of Science Technology and Research (Volume 7, Issue 1)
Abstract
The increasing demand for precision and reliability in industrial manufacturing calls for efficient quality-inspection mechanisms, particularly for critical components such as blades used in turbines and cutting systems. Traditional manual inspection methods are time-consuming, error-prone, and lack scalability. This paper addresses the challenge of accurate, real-time detection of blade surface defects by proposing an intelligent automated inspection system. The proposed approach integrates computer vision techniques with a deep-learning-based Convolutional Neural Network (CNN) model to identify and classify defects such as cracks, corrosion, and surface irregularities. Image acquisition is performed with high-resolution cameras, followed by preprocessing and feature extraction to enhance detection accuracy. The system is trained and validated on a labeled dataset of blade images, achieving superior performance compared to conventional image-processing methods. Experimental results demonstrate an accuracy of 96.8%, with improved precision and recall, ensuring reliable defect identification in real-time industrial environments. The proposed system significantly reduces human intervention, increases inspection speed, and improves quality assurance. This work contributes to the advancement of smart manufacturing by enabling scalable, cost-effective, and high-precision automated inspection solutions.
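The abstract does not give implementation details for the preprocessing and feature-extraction stage. As a minimal illustrative sketch only (the function names, the Sobel-kernel choice, and the valid-mode convolution are assumptions, not the authors' code), the kind of edge-emphasizing feature map that helps highlight crack-like surface defects before classification might look like this:

```python
import numpy as np

def normalize(img):
    """Min-max contrast normalization of a grayscale image to [0, 1]."""
    img = img.astype(np.float64)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

def conv2d(img, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels respond strongly to sharp intensity transitions,
# such as those produced by cracks on a blade surface.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def edge_map(img):
    """Gradient-magnitude feature map from a grayscale blade image."""
    img = normalize(img)
    gx = conv2d(img, SOBEL_X)
    gy = conv2d(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

In a real pipeline the CNN learns its own convolutional filters; a hand-crafted map like this only illustrates the preprocessing idea described above.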
Keywords
Automated Blade Inspection, Deep Learning, Computer Vision, Defect Detection, Convolutional Neural Network.
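The abstract reports accuracy (96.8%) together with precision and recall. As a reminder of how these standard metrics are defined for a binary defective/non-defective decision (the function name and the label convention 1 = defective are illustrative, not from the paper):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = defective)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged blades, how many were defective
    recall = tp / (tp + fn) if tp + fn else 0.0     # of defective blades, how many were flagged
    return accuracy, precision, recall
```

For inspection tasks, high recall is usually the priority, since a missed defect (false negative) is costlier than a false alarm.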