ABSTRACT
Depression is one of the most widespread and severe mental health disorders worldwide, often remaining undiagnosed due to stigma, lack of awareness, and limited clinical resources. With the rise of digital health technologies and large-scale patient data, machine learning has emerged as a promising tool for automatic depression detection. However, conventional centralized training methods raise serious privacy and security concerns, since sensitive health data must be collected and stored in one location. To overcome these challenges, this project proposes a Federated Learning (FL) framework for depression detection, which allows multiple institutions or devices to collaboratively train a shared model without exposing their local datasets.
The proposed system integrates privacy-preserving techniques to ensure secure model training. On the client side, local preprocessing, including noise injection, class balancing with the Synthetic Minority Over-sampling Technique (SMOTE), and dimensionality reduction via Principal Component Analysis (PCA), improves model quality and robustness. Each client trains a Multi-Layer Perceptron (MLP) classifier for binary classification (depressed vs. healthy) while protecting sensitive updates with two complementary mechanisms: (1) Homomorphic Encryption with the CKKS scheme via TenSEAL, which encrypts local model weights before transmission, and (2) Differential Privacy via Opacus, which injects statistical noise into the training process to provide formal privacy guarantees.
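The client-side pipeline described above can be sketched as follows. This is a minimal illustration in NumPy/scikit-learn: the SMOTE-style interpolation is a simplified stand-in for the full SMOTE algorithm, and all function names and parameter values are illustrative, not taken from the project's code.

```python
import numpy as np
from sklearn.decomposition import PCA

def smote_like_oversample(X_min, n_new, rng):
    """Generate synthetic minority samples by interpolating random pairs (SMOTE-style)."""
    i = rng.integers(0, len(X_min), n_new)
    j = rng.integers(0, len(X_min), n_new)
    lam = rng.random((n_new, 1))
    return X_min[i] + lam * (X_min[j] - X_min[i])

def preprocess(X, y, n_components=5, noise_std=0.01, seed=0):
    """Client-side pipeline: noise injection, class balancing, PCA."""
    rng = np.random.default_rng(seed)
    X = X + rng.normal(0.0, noise_std, X.shape)          # noise injection
    classes, counts = np.unique(y, return_counts=True)   # find the minority class
    minority = classes[np.argmin(counts)]
    n_new = int(counts.max() - counts.min())
    if n_new > 0:                                        # oversample until balanced
        X_syn = smote_like_oversample(X[y == minority], n_new, rng)
        X = np.vstack([X, X_syn])
        y = np.concatenate([y, np.full(n_new, minority)])
    X = PCA(n_components=n_components).fit_transform(X)  # dimensionality reduction
    return X, y
```

The balanced, reduced features would then be fed to the local MLP classifier.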
The central server, implemented with the Flower federated learning framework, securely aggregates the encrypted model parameters using a modified FedAvg strategy. Encrypted updates are decrypted only inside a secure context, averaged, and re-encrypted before being redistributed to clients, so the server never sees raw client data and individual contributions are not exposed in plaintext outside that context. After each round, the server evaluates the global model, records per-round metrics such as loss and accuracy, and automatically saves the best-performing global model.
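Stripped of the encryption layer, the aggregation at the heart of FedAvg reduces to a sample-size-weighted average of client parameters. A minimal sketch (the function name is illustrative; in the actual system this runs inside the secure decrypt/re-encrypt context):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg).

    client_weights: list of 1-D numpy arrays (flattened model parameters)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    # clients with more data contribute proportionally more to the global model
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

For example, two clients holding 1 and 3 samples contribute with weights 0.25 and 0.75 respectively.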
INTRODUCTION
Depression is one of the most prevalent mental health disorders worldwide, affecting millions of people across different age groups and demographics. According to the World Health Organization (WHO), depression is a leading cause of disability and can lead to severe consequences such as reduced quality of life, increased healthcare costs, and even suicidal tendencies if left untreated. Early and accurate detection of depression plays a crucial role in providing timely medical intervention and improving patient outcomes. With the rapid growth of digital health data collected from hospitals, wearable devices, mobile applications, and electronic health records (EHRs), machine learning has emerged as a promising approach to automate depression detection. However, training effective models in healthcare poses significant challenges due to data sensitivity, privacy concerns, and strict regulations such as HIPAA and GDPR.
Traditional machine learning methods rely on centralized data collection, where all patient data is transferred to a central server for model training. While this approach benefits from access to a large dataset, it exposes highly sensitive personal health information to potential security breaches, unauthorized access, and misuse. In mental health applications, privacy concerns are even more critical, since patients may be reluctant to share data that reflects their emotional, psychological, and behavioral states. This highlights the urgent need for privacy-preserving machine learning techniques that can provide accurate predictions while safeguarding sensitive user data.
To address these challenges, this project employs Federated Learning (FL), a decentralized machine learning paradigm that allows multiple clients (e.g., hospitals, clinics, or user devices) to collaboratively train a global model without sharing raw data. Instead of sending datasets to a central server, each client trains a local model on its private dataset and only shares model updates (parameters or gradients) with the server. The server then aggregates these updates to improve the global model and redistributes it back to the clients for further training. This approach ensures that raw data never leaves the client’s device, significantly reducing privacy risks. However, while federated learning mitigates direct data exposure, it is still vulnerable to indirect information leakage. Research has shown that malicious attackers may reconstruct parts of the training data or infer sensitive attributes by analyzing the shared gradients or model parameters. To strengthen privacy protection, this project integrates two additional layers of security: Homomorphic Encryption (HE) and Differential Privacy (DP).
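The round structure just described (local training on private data, sharing only parameters, server-side aggregation, redistribution) can be illustrated with a toy least-squares model. All names here are illustrative, and the real system trains an MLP rather than a linear model:

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's private data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w_global, clients, lr=0.1):
    """Each client trains locally; only updated parameters leave the device.

    The server then forms a sample-size-weighted average (FedAvg) and the
    result is redistributed as the next global model.
    """
    updates = [local_update(w_global, X, y, lr) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    total = sum(sizes)
    return sum(u * (n / total) for u, n in zip(updates, sizes))
```

Iterating `federated_round` drives the global model toward a solution fitted to all clients' data, even though no raw dataset ever leaves its owner.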
Homomorphic encryption, implemented using the CKKS scheme via the TenSEAL library, allows mathematical operations to be performed directly on encrypted data. In this project, each client encrypts its model parameters before sending them to the server. The server performs secure aggregation on the encrypted updates, decrypts them only within a controlled context, and re-encrypts the aggregated global model before redistributing it. This prevents the server from accessing individual client parameters in plain form, protecting client updates from unauthorized access.

Differential privacy, implemented using Opacus for PyTorch, provides a complementary guarantee by introducing controlled noise into the training process, which prevents adversaries from determining whether a particular data sample contributed to training. Each client trains its local model under differential privacy constraints, so individual patient data points remain indistinguishable in the final model. Together, federated learning, homomorphic encryption, and differential privacy form a robust privacy-preserving framework tailored to sensitive healthcare applications.
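Under the hood, Opacus implements the DP-SGD mechanism: each per-sample gradient is clipped to bound any one patient's influence, and Gaussian noise scaled to the clipping bound is added before the update. The mechanism can be sketched in plain NumPy as follows; this is a simplified illustration of the idea, not the Opacus API, and all names and defaults are assumptions:

```python
import numpy as np

def dp_sgd_step(w, per_sample_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step: clip each per-sample gradient, sum, add Gaussian noise, average."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # clip every individual's gradient so no single sample dominates the update
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    # noise scaled to the clipping bound masks any one sample's contribution
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, noise_mult * clip_norm, w.shape)
    return w - lr * noisy_sum / len(per_sample_grads)
```

The ratio `noise_mult` controls the privacy/utility trade-off: larger values give stronger privacy guarantees at the cost of noisier training.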
HARDWARE REQUIREMENTS
Laptop / Desktop computer
Processor: Intel i3 or above
RAM: Minimum 4 GB
Hard Disk: 20 GB free space
Internet connection
SOFTWARE REQUIREMENTS
Operating System: Windows / Linux / macOS
Programming Language: Python 3
Machine Learning Framework: PyTorch (with Opacus and TenSEAL)
Federated Learning Framework: Flower
Libraries: NumPy, Pandas, Scikit-learn
Development Tool: Jupyter Notebook or VS Code