ABSTRACT
This project presents an integrated real-time plant leaf disease detection system that combines deep learning, embedded vision, and web-based monitoring to support precision agriculture. The proposed system uses the ESP32-CAM module as an edge-level camera device to continuously capture live leaf images in the field. These frames are streamed over a wireless network and processed by a custom-trained YOLOv5 object detection model. The model is trained on a curated dataset of healthy and diseased leaf samples covering symptoms such as rust, blight, spots, and discoloration. The dataset captures variations in shape, texture, and infection severity, enabling robust classification under real-world environmental conditions. The YOLOv5 model is deployed on a GPU-accelerated computer, and a Flask-based web interface provides a real-time visual display of detection results. The system performs frame-by-frame disease inference, draws bounding boxes, and labels detected disease types with corresponding confidence scores. A dynamic frame-skipping mechanism and a resolution-scaling strategy reduce inference latency, enabling smooth detection even with continuous ESP32-CAM streaming. The Flask application also reconnects to the stream automatically for uninterrupted field monitoring. This solution addresses key agricultural challenges, particularly the delay inherent in manual diagnosis and the impact of undetected diseases on crop yield. By automating early identification of symptoms, farmers can take timely corrective measures such as targeted pesticide application, irrigation adjustments, or nutrient balancing. The system’s low-cost design, ease of deployment, and compatibility with IoT frameworks make it suitable for both small-scale farms and large agricultural monitoring setups.
Overall, the project demonstrates how deep learning and embedded systems can be integrated to create an efficient, scalable, and intelligent disease detection platform that enhances productivity and supports sustainable agriculture.
INTRODUCTION
Agriculture remains the cornerstone of global food security, economic stability, and rural development. As the world’s population continues to increase, the demand for higher agricultural productivity has intensified significantly. However, crop diseases represent one of the most persistent threats to agricultural output, often resulting in substantial yield losses and reduced crop quality. Traditional methods of identifying plant diseases rely heavily on manual inspection by farmers or agricultural experts, a process that can be slow, subjective, and error-prone. Furthermore, early symptoms of disease are often subtle and overlooked, leading to delayed intervention and extensive crop damage. In response to these challenges, technological innovations in computer vision, machine learning, and embedded systems have introduced new possibilities for automated, real-time plant disease monitoring. The rapid advancement of deep learning, particularly convolutional neural networks (CNNs), has significantly improved the accuracy and speed of image-based classification tasks. Object detection models, especially the You Only Look Once (YOLO) family of architectures, have demonstrated exceptional performance in real-time applications due to their optimized inference pipelines. These models are especially well-suited for agricultural scenarios where multiple disease symptoms must be detected in complex, dynamic environments. Unlike traditional classification-based methods, YOLO can localize and identify various disease patterns simultaneously, making it a powerful tool for field-level diagnostics.
Parallel to these advancements, the emergence of low-cost IoT-enabled camera modules such as the ESP32-CAM has transformed remote agricultural monitoring. The ESP32-CAM’s Wi-Fi connectivity, compact form factor, and affordability make it ideal for continuous image capture in farm environments. When paired with edge- or cloud-based deep learning models, it enables farmers to implement real-time monitoring systems without the need for expensive hardware. This integration of embedded vision and artificial intelligence has sparked a new generation of smart agriculture solutions capable of transforming traditional farming practices. This project builds upon these developments by designing and implementing a real-time leaf disease detection system using the ESP32-CAM for image acquisition and a YOLOv5 deep learning model for disease classification and localization. The system employs a Flask-based web interface that streams live video from the ESP32-CAM and overlays detection results in real time. By training YOLOv5 on a carefully curated dataset containing healthy leaves and multiple categories of diseased leaves, the system is capable of detecting symptoms such as rust, blight, leaf spot, and other common infections. Each detection is displayed with bounding boxes and confidence scores, enabling users to visually identify affected areas immediately. The use of GPU acceleration and frame-skipping optimization ensures low-latency inference, allowing the system to operate smoothly even under continuous streaming conditions.
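The frame-skipping optimization described above can be illustrated with a small scheduler that runs the detector only on every Nth frame and reuses the latest detections for the frames in between (a minimal sketch; the class name, the default interval of 3, and the injected `infer` callback are illustrative, not taken from the project code):

```python
class FrameSkipper:
    """Run expensive inference only on every `interval`-th frame;
    reuse the most recent detections for the frames in between."""

    def __init__(self, interval: int = 3):
        self.interval = max(1, interval)
        self.count = 0
        self.last_result = None

    def process(self, frame, infer):
        # Run the detector on frames 0, interval, 2*interval, ...
        if self.count % self.interval == 0:
            self.last_result = infer(frame)
        self.count += 1
        # Skipped frames are overlaid with the cached detections,
        # which stay visually accurate because consecutive frames
        # from a stationary field camera change little.
        return self.last_result
```

In the live system, `infer` would be the YOLOv5 forward pass and `frame` an OpenCV image; because the scheduling logic is independent of both, it can be tuned (or made adaptive to measured inference time) without touching the model code.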
The relevance of such a system extends far beyond academic interest. Modern agriculture faces multiple challenges, including climate change, increased pest activity, and limited farmer awareness of early disease symptoms. In developing regions, the lack of agricultural experts and diagnostic resources further complicates timely intervention. As a result, a large percentage of crop diseases go undetected until later stages, making treatment costly and sometimes ineffective. Automating disease detection helps overcome these issues by providing consistent, objective, and rapid analysis. Early identification enables prompt mitigation strategies such as selective pesticide application, improved irrigation scheduling, or nutrient corrections, resulting in minimized losses and improved crop yield. Another major strength of this system lies in its scalability and cost-effectiveness. Because ESP32-CAM modules are inexpensive and operate over Wi-Fi, they can be deployed throughout large farms, greenhouses, or nurseries. The YOLOv5 model, once trained, does not require extremely high-end hardware to operate and can even be optimized for edge devices using model compression techniques if needed. This makes the solution suitable for farmers, agricultural researchers, government agencies, and agritech startups seeking to implement smart monitoring solutions without heavy financial investment. In summary, this project demonstrates how the combination of YOLOv5 deep learning, ESP32-CAM video streaming, and Flask-based web visualization can create a robust, real-time, and scalable leaf disease detection system. The approach directly addresses the limitations of manual plant disease identification by providing automated, high-accuracy, and real-time diagnostics. By leveraging affordable hardware and state-of-the-art deep learning models, the system bridges the technological gap between small-scale farmers and advanced precision agriculture methods.
The project ultimately showcases the potential of AI-driven agricultural automation to enhance crop health monitoring, reduce production losses, and contribute to more sustainable farming practices. As agriculture continues to evolve toward more data-driven and technology-enabled methodologies, solutions like this will play a crucial role in shaping the future of food production.
OBJECTIVES
The primary objective of this project is to design and develop a comprehensive, real-time leaf disease detection system that leverages deep learning, computer vision, and low-cost IoT hardware to support precision agriculture through early disease diagnosis and continuous plant health monitoring. A central aim is to integrate the ESP32-CAM module as an affordable imaging device that streams live leaf visuals from the field to a Flask-based web interface, so that farmers, researchers, or agricultural technicians can remotely view plant conditions without expensive equipment. Another major objective is to build, train, and optimize a YOLOv5 object detection model that accurately identifies, classifies, and localizes disease symptoms across diverse leaf types under real-world conditions such as varying light exposure, leaf textures, orientations, and backgrounds. This includes assembling a sufficiently large and diverse dataset, producing high-quality annotations, and applying sound training techniques to achieve high precision, few false positives, and strong generalization. Beyond detecting disease presence, the system should provide clear visualization through bounding boxes, confidence scores, and real-time overlays, giving users actionable insight into the severity and specific regions of infection. A further objective is efficient real-time operation: techniques such as frame-skipping, resolution scaling, GPU acceleration, and resource-optimized inference are applied to reduce latency while maintaining accuracy. Finally, the system should demonstrate robustness by automatically reconnecting the ESP32-CAM stream after network interruptions and delivering a stable video feed suitable for continuous agricultural monitoring.
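The resolution-scaling technique listed among these objectives can be sketched as a helper that caps the longer side of each frame before inference while preserving aspect ratio (a sketch: the 640-pixel cap matches YOLOv5's default input size, but the function name, the exact cap used in the project, and the even-number rounding are assumptions):

```python
def scale_for_inference(width: int, height: int, max_side: int = 640):
    """Return (new_width, new_height) with the longer side capped at
    max_side and the aspect ratio preserved. Frames that are already
    small enough are returned unchanged."""
    longer = max(width, height)
    if longer <= max_side:
        return width, height
    ratio = max_side / longer
    # Round down to even dimensions, which video codecs generally prefer.
    return (int(width * ratio) // 2 * 2, int(height * ratio) // 2 * 2)
```

For example, a 1600x1200 ESP32-CAM frame would be downscaled to 640x480 before being passed to the detector, roughly quartering the pixel count the model must process per frame.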
In addition to these technical goals, the project aims to make AI-driven agricultural tools more accessible by creating a low-cost, scalable, and user-friendly solution that can be deployed in farms, nurseries, greenhouses, or research labs without advanced technical expertise. An extended objective is to investigate the feasibility of automated alerts or notifications that inform users when disease symptoms are detected, enabling quicker interventions. The system also aims to support scalability and potential integration with cloud platforms for long-term data storage, historical disease-pattern analysis, and predictive analytics. Finally, the project strives to contribute to academic research by demonstrating an effective fusion of low-cost IoT devices with state-of-the-art deep learning techniques for real-time agricultural problem-solving.
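The automated-alert objective could be prototyped with a simple debouncer that fires only when a high-confidence detection persists across several consecutive frames, suppressing alerts caused by single-frame false positives (a sketch; the threshold, window size, and class name are illustrative assumptions, not project parameters):

```python
class AlertDebouncer:
    """Signal an alert only when a detection with confidence >=
    threshold persists for `window` consecutive frames."""

    def __init__(self, threshold: float = 0.6, window: int = 5):
        self.threshold = threshold
        self.window = window
        self.streak = 0  # consecutive high-confidence frames seen

    def update(self, confidence: float) -> bool:
        if confidence >= self.threshold:
            self.streak += 1
        else:
            # Any low-confidence frame resets the streak.
            self.streak = 0
        return self.streak >= self.window
```

Each processed frame would feed its top per-class confidence into `update()`; when it returns True, the Flask backend could dispatch a notification (e-mail, SMS, or dashboard banner) to the user.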
SYSTEM REQUIREMENTS
1. Hardware Requirements
1. ESP32-CAM module with OV2640 camera sensor
2. USB-to-Serial programmer (FTDI or CP2102) for flashing ESP32-CAM
3. Stable 5V power supply for ESP32-CAM
4. Computer/laptop with an Intel Core i3 processor or higher
5. Minimum 8 GB RAM (16 GB recommended for faster model training)
6. GPU-enabled system (optional but recommended for YOLOv5 training)
7. Wi-Fi router for network connectivity
8. Smartphone or laptop for accessing the live web interface
9. SD card (optional) for ESP32 onboard storage
10. Power cables, jumpers, and connectors for ESP32-CAM setup
2. Software Requirements
1. Windows / Linux / macOS operating system
2. Python 3.8 or above
3. PyTorch framework (CPU or GPU version)
4. YOLOv5 model and repository
5. Flask web framework
6. OpenCV library for frame processing
7. Torchvision and NumPy dependencies
8. Arduino IDE or ESP-IDF for ESP32 firmware programming
9. Required Python packages: Flask, opencv-python, torch, torchvision, numpy (pathlib is part of the Python standard library and needs no separate install)
10. Browser (Chrome/Firefox/Edge) for live monitoring
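The Python packages listed above can be verified before launching the Flask app with a short, stdlib-only check (a sketch; note that the pip package opencv-python is imported as cv2, and `find_spec` only confirms a module is importable, not that its version is compatible):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported
    in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names, which differ from pip package names for opencv-python.
required = ["flask", "cv2", "torch", "torchvision", "numpy"]

if __name__ == "__main__":
    missing = missing_packages(required)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All required packages are installed.")
```

Running this once on a fresh machine catches installation gaps early, which is easier to debug than an ImportError in the middle of the streaming pipeline.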
3. Network Requirements
1. Stable Wi-Fi network for ESP32-CAM streaming
2. Router with 2.4 GHz support (ESP32-CAM requirement)
3. Local IP address configuration for ESP32-CAM
4. Firewall access for Flask server communication
5. Low-latency connection for real-time video streaming
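Because field Wi-Fi links drop intermittently, the automatic stream reconnection mentioned in the abstract depends on this network layer. Its retry logic can be sketched independently of OpenCV by injecting a stream-opening factory and backing off exponentially between attempts (a sketch; the function name, retry budget, and backoff schedule are illustrative, and in the real system `open_stream` would wrap `cv2.VideoCapture` on the ESP32-CAM URL):

```python
import time

def read_with_reconnect(open_stream, max_retries=5, base_delay=0.5,
                        sleep=time.sleep):
    """Yield frames from a stream, reopening it with exponential
    backoff whenever a read fails. `open_stream()` must return an
    object with read() -> (ok, frame) and release()."""
    retries = 0
    while retries <= max_retries:
        cap = open_stream()
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break          # stream dropped; try to reconnect
                retries = 0        # a healthy read resets the budget
                yield frame
        finally:
            cap.release()
        retries += 1
        sleep(base_delay * (2 ** (retries - 1)))
```

The injectable `sleep` parameter exists only so the backoff can be tested without real delays; the generator shape lets the Flask streaming route iterate over frames without caring whether a reconnection happened underneath.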