
AI Exam Proctor System Using Computer Vision and Deep Learning

Category: Python Projects

Price: ₹ 3,780 ₹ 9,000 (58% OFF)

Abstract
The rapid transition toward online examinations has highlighted the need for reliable, automated, and intelligent proctoring solutions capable of detecting and preventing malpractice in real time. This project presents an AI-based Automated Exam Proctoring System that integrates multimodal monitoring techniques to ensure exam integrity without the need for human invigilators. The system combines computer vision, deep learning, audio analysis, and behavior monitoring to detect suspicious activities such as the presence of unauthorized objects, multiple faces, screen switching, whispering, abnormal movements, and loss of attention. A custom-trained YOLO object detection model identifies cheating-related items, while a Convolutional Neural Network (CNN) performs real-time emotion classification to analyze stress or irregular behavior. Additionally, MediaPipe-based eye-gaze estimation and pose detection track the candidate’s focus, body posture, and seat presence. The system also includes an audio-based whisper detection mechanism that triggers escalating warnings followed by exam termination upon repeated violations. A Flask-based web interface provides seamless exam management, live video streaming, logging, and administrative control. Experimental evaluation demonstrates that the combined multimodal framework significantly enhances accuracy, robustness, and reliability compared to traditional online invigilation approaches. The solution is scalable, cost-effective, and applicable to academic institutions and certification platforms requiring secure online assessments.

Introduction
The rapid digital transformation across all domains of education has significantly increased the adoption of online examinations as an alternative to traditional classroom-based assessments. While online exams offer flexibility, accessibility, and scalability, they also introduce substantial challenges related to academic integrity and candidate authentication. Institutions worldwide face issues such as impersonation, unauthorized resource usage, communication between candidates, and exploitation of system loopholes. Human invigilators, who traditionally supervise physical exam environments, are often unable to effectively monitor multiple remote learners simultaneously. This gap has driven the need for a dependable, automated, intelligent proctoring system capable of ensuring fairness, reliability, and transparency in virtual assessments.
Artificial Intelligence (AI), computer vision, and audio processing technologies have emerged as effective tools to address these challenges by enabling machines to perceive, analyze, and respond to human behavior. The proposed system leverages a multimodal combination of deep learning models, object detection algorithms, and behavioral monitoring techniques to deliver a real-time, automated proctoring experience. Unlike conventional methods that rely solely on webcam feeds or screen-recording tools, this solution integrates multiple layers of detection—visual, auditory, behavioral, and contextual—to produce a more accurate assessment of candidate activity. By incorporating AI-driven monitoring, institutions can significantly reduce manual workload, eliminate subjectivity, and improve the overall security of online examinations.
One of the core components of the system is the YOLO-based object detection framework, which identifies cheating-related objects such as mobile phones, paper slips, and electronic devices from the candidate’s surroundings. YOLO (You Only Look Once) is widely recognized for its speed and real-time performance, making it suitable for continuous monitoring throughout the examination. The system flags suspicious objects with bounding boxes and logs all detections as evidence, enabling post-exam review by administrators. This level of automation ensures that unauthorized materials are detected promptly without human intervention.
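As a rough illustration of this detection step, the sketch below shows how a webcam frame might be passed through a custom-trained Ultralytics YOLO model and how flagged objects could be drawn and logged as evidence. The weights file name, class list, and confidence threshold are assumptions for illustration, not the project's shipped artefacts.

```python
# Minimal sketch of the YOLO object-detection step, assuming an Ultralytics
# YOLO model fine-tuned on cheating-related classes (weights file name and
# class names are hypothetical placeholders).
import cv2
from ultralytics import YOLO

model = YOLO("proctor_yolo.pt")           # hypothetical custom weights
BANNED = {"cell phone", "book", "paper"}  # assumed class names

def scan_frame(frame, evidence_log):
    """Run detection on one webcam frame and log any banned objects."""
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        label = model.names[int(box.cls)]
        if label in BANNED and float(box.conf) > 0.5:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
            evidence_log.append({"object": label, "confidence": float(box.conf)})
    return frame
```

In a loop over the live video feed, each annotated frame can be streamed to the administrator view while the evidence log is persisted for post-exam review.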
In addition to object detection, emotional stability and behavioral cues provide essential insight into candidate authenticity and potential misconduct. A custom-trained Convolutional Neural Network (CNN) is integrated into the system to classify facial emotions such as anger, fear, sadness, happiness, and neutrality. Unusual emotional patterns during an exam may indicate stress, anxiety, or attempts to cheat, allowing the system to correlate behavioral anomalies with other warning signals. The CNN-based emotion recognition module is optimized for real-time inference, ensuring that the monitoring process remains seamless and uninterrupted.
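A minimal sketch of this inference step is given below, assuming a Keras CNN trained on 48x48 grayscale face crops; the model file name, input size, and label order are assumptions rather than the project's actual artefacts.

```python
# Illustrative sketch of real-time emotion inference with a Keras CNN,
# assuming a model trained on 48x48 grayscale face crops (file name and
# input size are assumptions).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["Angry", "Fear", "Happy", "Neutral", "Sad"]
emotion_model = load_model("emotion_cnn.h5")  # hypothetical trained model

def classify_emotion(face_bgr):
    """Return the predicted emotion label for a cropped face image."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    batch = gray.reshape(1, 48, 48, 1)            # (batch, H, W, channels)
    probs = emotion_model.predict(batch, verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]
```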
Human gaze direction is another crucial indicator of attention and focus during an examination. The system incorporates MediaPipe-based eye-gaze estimation to determine whether the candidate is looking at the screen, frequently glancing sideways, or focusing outside the visible range. Repeated gaze deviations may suggest the presence of unauthorized assistance or external notes. The system dynamically tracks iris movements and estimates gaze direction with high precision, issuing warnings or terminating the exam based on predefined thresholds. Similarly, pose and body movement detection verify whether the student remains seated and within the camera frame. If the candidate stands up, leaves the seat, or moves away from the camera, the system records the incident and triggers escalating warnings.
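One possible shape for the gaze check is sketched below using MediaPipe Face Mesh with iris refinement: the horizontal position of one iris centre is compared against that eye's corner landmarks to classify the gaze as left, right, or centre. The landmark indices and thresholds are typical tutorial values, not the project's tuned parameters.

```python
# Minimal sketch of gaze estimation with MediaPipe Face Mesh iris landmarks;
# the landmark indices and thresholds are assumed, illustrative values.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True,
                                            max_num_faces=1)

EYE_OUTER, EYE_INNER, IRIS_CENTER = 33, 133, 468  # one eye's corners and iris

def gaze_direction(frame_bgr):
    """Classify horizontal gaze as 'left', 'right', 'center', or 'no_face'."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = face_mesh.process(rgb)
    if not result.multi_face_landmarks:
        return "no_face"
    lm = result.multi_face_landmarks[0].landmark
    outer, inner, iris = lm[EYE_OUTER], lm[EYE_INNER], lm[IRIS_CENTER]
    ratio = (iris.x - outer.x) / (inner.x - outer.x + 1e-6)
    if ratio < 0.35:
        return "left"
    if ratio > 0.65:
        return "right"
    return "center"
```

A counter over consecutive "left"/"right"/"no_face" results can then drive the warning thresholds mentioned above; the same MediaPipe pipeline's Pose solution covers the seat-presence check.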
Audio surveillance is also essential in maintaining exam integrity. The system integrates a whisper detection module that captures short audio samples using the system’s microphone and calculates their amplitude to determine whether the candidate is speaking softly. Whispering often indicates communication with individuals off-screen, which violates examination rules. The system responds to detected whispers with progressively stricter warnings, ultimately terminating the exam upon repeated violations. This proactive approach mitigates attempts to verbally exchange answers or receive verbal assistance.
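The amplitude-based whisper check can be summarised by the short sketch below: a brief clip is recorded with the SoundDevice library and its RMS amplitude is compared against two thresholds that separate silence, soft speech, and normal speech. The clip length and cut-off values are assumptions chosen for illustration.

```python
# Hedged sketch of whisper detection: record a short clip with sounddevice,
# compute its RMS amplitude, and compare it against assumed thresholds.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000
CLIP_SECONDS = 2
WHISPER_RMS = 0.01   # assumed lower bound for soft speech
SPEECH_RMS = 0.05    # assumed upper bound before it counts as normal speech

def detect_whisper():
    """Record a short audio sample and return True if it looks like whispering."""
    clip = sd.rec(int(CLIP_SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                  channels=1, dtype="float32")
    sd.wait()                               # block until the clip is captured
    rms = float(np.sqrt(np.mean(clip ** 2)))
    return WHISPER_RMS < rms < SPEECH_RMS
```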

Objectives
1. To develop an automated, AI-driven online exam proctoring system that accurately monitors candidates in real time using computer vision, audio processing, and behavioral analysis without requiring human invigilators.
2. To implement YOLO-based object detection for identifying cheating-related items such as mobile phones, books, papers, or electronic devices, and to capture evidence automatically whenever such objects appear in the candidate’s surroundings.
3. To integrate a CNN-based facial emotion recognition model capable of classifying student emotions (Angry, Fear, Happy, Neutral, Sad) to identify behavioral anomalies, stress indicators, or irregular patterns during the examination.
4. To incorporate eye-gaze tracking using MediaPipe Face Mesh for detecting loss of focus, side glancing, or looking away from the screen, and to generate warnings or terminate the exam based on repeated gaze violations.
5. To implement pose and movement monitoring using MediaPipe Pose to detect if the candidate leaves the seat, moves out of camera view, or exhibits abnormal body movements that indicate potential cheating.
6. To design an audio-based whisper detection mechanism that continuously monitors sound levels, identifies low-volume speech, and triggers escalating warnings culminating in exam termination if violations persist.
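The escalating-warning policy referred to in objectives 4 to 6 can be captured by a small state-tracking helper shared by all detectors: each module reports violations, the candidate is warned (for example via gTTS audio prompts), and the exam is terminated once a limit is exceeded. The sketch below is one possible shape for this logic; the three-strike limit is an assumption, not the project's configured value.

```python
# Sketch of the escalating-warning policy; the limit is assumed for illustration.
MAX_VIOLATIONS = 3

class ViolationTracker:
    def __init__(self):
        self.counts = {}          # violation type -> number of occurrences
        self.terminated = False

    def report(self, kind):
        """Record one violation and decide whether to warn or terminate."""
        self.counts[kind] = self.counts.get(kind, 0) + 1
        total = sum(self.counts.values())
        if total >= MAX_VIOLATIONS:
            self.terminated = True
            return "terminate"
        return f"warning {total} of {MAX_VIOLATIONS}"

# Example: tracker.report("whisper") -> "warning 1 of 3", and so on.
```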

Block Diagram

• Demo video
• Complete project
• Full project report
• Source code
• Complete online project support
• Lifetime access
• Execution guidelines
• Immediate download

Requirement Specification
Software Requirements
Python
TensorFlow
Keras
PyTorch
OpenCV
MediaPipe
Dlib
SoundDevice
NumPy
gTTS (Google Text-to-Speech)
Flask Web Framework
SQLite
Python standard library: threading, queue, pathlib, tempfile, os

Hardware Requirements
1. Raspberry Pi
2. Pi Camera

Immediate Download:
1. Synopsis
2. Rough Report
3. Software code
4. Technical support
