Multimodal Biometric System Using Human Signature Palm Recognition

Category: Image Processing

Price: ₹2560 (original price ₹8000, 68% off)

Abstract
This project proposes a multimodal biometric system that uses both a person's palm image and handwritten signature to verify identity. Two separate datasets, one containing palm images and the other containing signature samples, are used to train a Convolutional Neural Network (CNN). The CNN model learns to extract important features from both modalities and associates them with individual identities during training. In the testing phase, a user uploads both their palm image and signature. The system compares these inputs with the trained data, and if both match the same person, it displays "Successful Match"; otherwise, it shows "Unsuccessful Match". By combining two types of biometric traits (one physical and one behavioral), the system increases the reliability and accuracy of authentication. This method is useful for applications such as secure access systems, identity verification, and digital authentication platforms.
Keywords: dataset, deep learning algorithm

Introduction
In an era where digital security and identity verification have become increasingly critical, traditional methods such as passwords, ID cards, and PINs no longer provide sufficient protection against fraud and unauthorized access. These conventional techniques are easily compromised and fail to guarantee the authenticity of a person's identity. As a result, biometric systems have gained significant attention as a more reliable and secure means of authentication. Biometrics involves the use of unique physical or behavioral characteristics, such as fingerprints, face, iris, voice, signature, or palmprint, to identify and verify individuals.
Most existing biometric systems are unimodal, relying on a single trait for authentication. While unimodal systems are effective in many cases, they often face limitations like poor quality input data, variations due to age or environment, and vulnerability to spoofing or imitation. To overcome these challenges, multimodal biometric systems have been introduced, which integrate two or more biometric traits to improve accuracy, reduce error rates, and enhance resistance to fraudulent attempts.
This project focuses on the development of a multimodal biometric authentication system based on two traits: the human signature (a behavioral trait) and the palm image (a physiological trait). These traits are chosen because they are distinctive, non-intrusive, and commonly accepted by users. The proposed system uses Convolutional Neural Networks (CNNs) to learn and extract deep features from both datasets. Separate datasets for palmprints and signatures are used to train the model to recognize individuals based on the combination of these two traits.
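As a rough, non-authoritative sketch of what such a per-modality network could look like, the Keras example below builds one small CNN classifier that might be trained separately on the palm dataset and on the signature dataset. The 128x128 grayscale input size, layer widths, and number of enrolled identities (NUM_CLASSES) are illustrative assumptions, not the project's exact architecture.

# Minimal per-modality CNN sketch in Keras (assumed architecture, not the project's exact model).
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # assumed number of enrolled individuals

def build_modality_cnn(input_shape=(128, 128, 1), num_classes=NUM_CLASSES):
    # One CNN per modality (palm or signature) that learns identity-specific features.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),             # deep feature vector
        layers.Dense(num_classes, activation="softmax"),  # identity scores
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

palm_cnn = build_modality_cnn()       # to be trained on the palm-image dataset
signature_cnn = build_modality_cnn()  # to be trained on the signature dataset

Each model would then be trained with model.fit on its own labelled dataset; the fusion of the two decisions is sketched after the next paragraph.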
During the testing phase, users are required to submit both their palm image and handwritten signature. The system compares these inputs against the trained data and determines whether they belong to the same registered individual. If a match is found for both traits, the system confirms identity with a "Successful Match" message; otherwise, it displays "Unsuccessful Match". This fusion-based approach increases the reliability of the authentication process and makes the system more secure and robust against spoofing.
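The sketch below illustrates how this test-time comparison could be wired up, assuming the two trained CNNs from the sketch above and a simple AND-style fusion rule: both modalities must point to the same registered identity with sufficient confidence. The OpenCV preprocessing, the file paths, and the confidence threshold are assumptions for illustration, not the project's exact decision logic.

# Verification sketch: both modalities must agree on one identity (assumed fusion rule).
import cv2
import numpy as np

THRESHOLD = 0.8  # assumed minimum softmax confidence per modality

def preprocess(image_path, size=(128, 128)):
    # Load the uploaded image, convert to grayscale, resize, and scale to [0, 1].
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size).astype("float32") / 255.0
    return img.reshape(1, size[0], size[1], 1)

def verify(palm_model, signature_model, palm_path, signature_path):
    # Assumes both models were trained with the same identity label ordering.
    palm_scores = palm_model.predict(preprocess(palm_path))[0]
    sig_scores = signature_model.predict(preprocess(signature_path))[0]
    palm_id, sig_id = int(np.argmax(palm_scores)), int(np.argmax(sig_scores))
    same_person = palm_id == sig_id
    confident = palm_scores[palm_id] >= THRESHOLD and sig_scores[sig_id] >= THRESHOLD
    return "Successful Match" if same_person and confident else "Unsuccessful Match"

# Hypothetical upload paths, used only for illustration.
print(verify(palm_cnn, signature_cnn, "uploads/palm.png", "uploads/signature.png"))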
By combining physiological and behavioral traits, this system demonstrates the advantages of multimodal biometrics in providing enhanced accuracy, stronger security, and greater confidence in identity verification. It is suitable for a wide range of applications, including secure access control, banking systems, digital identity verification, and government services.

Objective
The main objective of this project is to develop a multimodal biometric authentication system that enhances the accuracy, security, and robustness of identity verification by combining two biometric traits: human palm image (physiological) and handwritten signature (behavioral).
The specific objectives are:
1. To collect and preprocess separate datasets for palm images and handwritten signatures of individuals.
2. To train a Convolutional Neural Network (CNN) model for each modality to extract and learn unique features for accurate identification.
3. To implement a matching system that verifies the identity of a person based on the combination of both palm and signature inputs.
4. To perform real-time testing, where a user uploads both their palm and signature, and the system returns “Successful Match” or “Unsuccessful Match” based on verification.
5. To evaluate the system's performance in terms of accuracy, false acceptance rate (FAR), and false rejection rate (FRR), and to demonstrate its advantages over unimodal biometric systems (a minimal FAR/FRR sketch follows this list).
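As a minimal illustration of the evaluation in objective 5, the sketch below computes FAR and FRR from a list of labelled verification outcomes; the example pairs and the resulting numbers are assumed for illustration only, not measured results.

# FAR/FRR sketch: each entry is (is_genuine_pair, was_accepted).
def far_frr(results):
    impostor_decisions = [accepted for is_genuine, accepted in results if not is_genuine]
    genuine_decisions = [accepted for is_genuine, accepted in results if is_genuine]
    far = sum(impostor_decisions) / max(len(impostor_decisions), 1)        # impostor pairs wrongly accepted
    frr = sum(not d for d in genuine_decisions) / max(len(genuine_decisions), 1)  # genuine pairs wrongly rejected
    return far, frr

# Assumed example: 3 genuine pairs (1 wrongly rejected), 2 impostor pairs (1 wrongly accepted).
results = [(True, True), (True, True), (True, False), (False, False), (False, True)]
far, frr = far_frr(results)
print(f"FAR = {far:.2f}, FRR = {frr:.2f}")  # FAR = 0.50, FRR = 0.33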

Block Diagram

• Demo Video
• Complete project
• Full project report
• Source code
• Complete online project support
• Lifetime access
• Execution Guidelines
• Immediate download

Software Requirement
Python IDLE 3.8
Libraries
TensorFlow
OpenCV
scikit-learn
Keras
Hardware Requirement
PC

