Abstract
Operator fatigue is a significant factor contributing to accidents in industrial environments, transportation systems, and safety-critical operations. Continuous monitoring of an operator’s alertness can help reduce risks and improve workplace safety. This project presents a real-time Operator Fatigue Monitoring System based on computer vision techniques. The system captures live video using a webcam and analyzes facial features to detect signs of fatigue. A deep learning-based object detection model, YOLOv8, is used to detect the presence of a person in the frame. Once detected, the facial region is processed using MediaPipe Face Mesh to extract detailed facial landmarks. Eye landmarks are utilized to compute the Eye Aspect Ratio (EAR), which indicates whether the eyes are open or closed. To assess fatigue levels over time, the system calculates the Percentage of Eye Closure (PERCLOS) within a fixed frame window. If the PERCLOS value exceeds a predefined threshold, the system identifies a fatigue condition and generates an alert. A graphical user interface developed using Tkinter displays the real-time video feed along with fatigue indicators such as EAR value, PERCLOS percentage, system frame rate, and alert status. The proposed system operates at approximately 10 frames per second and provides a practical software-based solution for real-time fatigue monitoring. The implementation demonstrates how computer vision and lightweight deep learning models can be integrated to improve safety in operator monitoring applications.
Keywords
Operator Fatigue Detection, Computer Vision, YOLOv8, MediaPipe Face Mesh, Eye Aspect Ratio (EAR), PERCLOS, Real-Time Monitoring, Human Alertness Detection.
Introduction
Fatigue is one of the major causes of accidents in industries, transportation systems, and other safety-critical environments where continuous human attention is required. Operators who work for long durations often experience reduced alertness, slower reaction time, and decreased concentration. These conditions significantly increase the probability of operational errors and accidents. In many real-world scenarios such as industrial machinery operation, heavy equipment control, and vehicle driving, a momentary lapse in attention due to fatigue can lead to severe consequences including equipment damage, financial loss, or even loss of human life. Therefore, monitoring the alertness level of operators has become an important aspect of modern safety systems.
Traditionally, fatigue monitoring has relied on manual supervision or physiological sensors attached to the human body. Methods such as electroencephalography (EEG), electrocardiography (ECG), and wearable sensors can measure physiological signals related to fatigue. However, these approaches are often intrusive, expensive, and inconvenient for long-term use in industrial environments. Operators may feel uncomfortable wearing multiple sensors during work, and maintaining such systems can be complex. Due to these limitations, researchers have increasingly focused on non-intrusive monitoring techniques that rely on computer vision and artificial intelligence.
Recent advancements in computer vision and deep learning technologies have made it possible to develop automated fatigue detection systems that analyze facial features and eye movements in real time. Vision-based monitoring systems use cameras to capture facial expressions and detect behavioral patterns associated with fatigue, such as eye closure, blinking frequency, head movement, and yawning. Among these indicators, eye behavior has proven to be one of the most reliable signs of fatigue. When a person becomes tired, the duration of eye closure increases and blinking patterns change significantly. By analyzing these patterns, fatigue levels can be estimated effectively.
The proposed Operator Fatigue Monitoring System utilizes computer vision techniques to monitor the alertness level of an operator in real time. The system captures live video frames using a webcam and processes them using advanced image processing and deep learning algorithms. A lightweight object detection model is used to identify the presence of a person in the camera frame. Detecting the operator first ensures that further analysis is performed only when a human subject is present, thereby improving system efficiency and reducing unnecessary computations.
Once the person is detected, the system focuses on extracting the facial region for further analysis. Facial landmark detection techniques are then applied to identify key facial features such as the eyes, nose, and mouth. In this system, the MediaPipe Face Mesh model is used to detect detailed facial landmarks. The model provides a dense set of facial landmark points, allowing precise identification of eye regions and other facial features. These landmarks enable accurate analysis of eye behavior, which is essential for fatigue detection.
The system specifically analyzes the eye region to determine the operator’s alertness level. Eye landmarks obtained from the facial landmark detection model are used to compute the Eye Aspect Ratio (EAR). The Eye Aspect Ratio is a geometric measurement that represents the relationship between the vertical and horizontal distances of the eye. When the eye is open, the EAR value remains relatively high and stable. However, when the eye closes, the vertical distance decreases significantly, causing the EAR value to drop. By continuously monitoring the EAR value across consecutive frames, the system can determine whether the operator's eyes are open or closed.
To further improve the reliability of fatigue detection, the system uses the Percentage of Eye Closure (PERCLOS) metric. PERCLOS is widely used in fatigue monitoring systems and is considered one of the most effective indicators of drowsiness. It measures the percentage of time that the eyes remain closed within a specified observation window. If the percentage of closed eyes exceeds a predefined threshold, the system interprets it as a sign of fatigue. This approach helps reduce false detections and provides a more stable fatigue estimation over time.
The entire fatigue detection process operates in real time, allowing continuous monitoring of the operator. The system processes video frames sequentially and updates fatigue indicators for each frame. To provide clear visualization and monitoring, a graphical user interface is implemented using the Tkinter library. The interface displays the live video feed along with important parameters such as the Eye Aspect Ratio (EAR), PERCLOS percentage, frames per second (FPS), and the current fatigue status. The fatigue status changes dynamically depending on the detected alertness level of the operator.
Objectives
The objectives of the proposed Operator Fatigue Monitoring System are:
To develop a real-time operator fatigue monitoring system using computer vision techniques.
To detect the presence of an operator in the video frame using the YOLOv8 object detection model.
To extract facial landmarks accurately using the MediaPipe Face Mesh model.
To identify eye regions from facial landmarks for monitoring eye movement and blinking patterns.
To compute the Eye Aspect Ratio (EAR) to determine whether the eyes are open or closed.
To implement the PERCLOS algorithm for measuring the percentage of eye closure over a specific frame window.
To detect fatigue conditions automatically when the PERCLOS value exceeds the predefined threshold.
To display real-time monitoring results including EAR value, PERCLOS percentage, FPS, and fatigue status through a graphical user interface.
To design a lightweight and efficient system capable of performing real-time fatigue detection.
Requirement Specification
Hardware Requirements
| Sl. No | Hardware Component | Specification | Purpose |
|--------|--------------------|---------------|---------|
| 1 | Computer / Laptop | Intel i5 / Ryzen 5 or higher | Runs the fatigue detection system and processes video frames |
| 2 | RAM | Minimum 8 GB | Required for running deep learning models and image processing |
| 3 | Webcam | HD webcam (720p or higher) | Captures real-time video of the operator |
| 4 | Storage | Minimum 256 GB | Stores project files, libraries, and models |
| 5 | Processor | Multi-core CPU | Handles real-time processing of computer vision algorithms |
Software Requirements
| Sl. No | Software / Tool | Version | Purpose |
|--------|-----------------|---------|---------|
| 1 | Operating System | Windows / Linux | Platform used to run the fatigue detection system |
| 2 | Python | 3.8 or higher | Programming language used for system development |
| 3 | OpenCV | Latest version | Video capture and image processing |
| 4 | YOLOv8 (Ultralytics) | Latest version | Real-time person detection |
| 5 | MediaPipe | Latest version | Facial landmark detection |
| 6 | SciPy | Latest version | Distance calculations in EAR computation |
| 7 | Tkinter | Built-in Python library | Creating the graphical user interface |
| 8 | Pillow (PIL) | Latest version | Image handling for the GUI display |
| 9 | IDE / Code Editor | VS Code / PyCharm | Writing and running the Python code |