Overview:
This project leverages deep learning to classify facial emotions in real time from both images and videos. Built on the FER2013 dataset, the system recognizes emotions such as Angry, Happy, Neutral, Sad, and Fear using a CNN inspired by the VGG architecture.
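For orientation, here is a minimal sketch of what a VGG-style CNN for 48×48 grayscale FER2013 inputs might look like; the block depths, filter counts, and dropout rate are illustrative assumptions, not this repository's exact architecture:

```python
# VGG-style CNN sketch for 48x48 grayscale FER2013 inputs.
# Layer depths and filter counts are illustrative assumptions.
from tensorflow.keras import layers, models

def build_model(num_classes=5):
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        # VGG-style blocks: stacked 3x3 convolutions, then pooling
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # regularization; rate is an assumption
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```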

The emotion labels were further grouped into two categories, “Okay” and “Not Okay”, which makes the system well suited to safety, well-being, and user-experience monitoring.
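One plausible reading of that grouping, as a minimal sketch (the actual label-to-category assignment in the code may differ):

```python
# Hypothetical mapping from the five emotion labels to the two
# project categories; the repository's exact grouping may differ.
EMOTION_TO_GROUP = {
    "Happy": "Okay",
    "Neutral": "Okay",
    "Angry": "Not Okay",
    "Sad": "Not Okay",
    "Fear": "Not Okay",
}

def to_group(emotion: str) -> str:
    # Default to "Not Okay" for any unmapped label (conservative choice).
    return EMOTION_TO_GROUP.get(emotion, "Not Okay")
```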
Key Features:
- Real-Time Detection: Detects facial emotions from webcam feeds using `FER_video.py` (see the detection-loop sketch after this list).
- Image-Based Classification: Classifies emotions in static images and logs each prediction, with its facial coordinates, to a text file.
- Model Architecture: Convolutional Neural Network (CNN) with data augmentation via `ImageDataGenerator` (an augmentation sketch follows the list).
- Accuracy: Achieved a final test accuracy of ~92.5%, a competitive result on the challenging FER2013 dataset.
- Database Integration: Stores predictions and face metadata (coordinates, timestamps, labels) in a MySQL database for later reference or analytics (see the database sketch after this list).
- Technology Stack: Python · TensorFlow · OpenCV · MySQL · CNN · FER2013 Dataset
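A minimal sketch of a webcam detection loop in the spirit of `FER_video.py`; the model filename, label order, and cascade choice are assumptions rather than the repository's exact values:

```python
# Webcam detection loop sketch: detect faces with a Haar cascade,
# classify each 48x48 crop with the trained CNN, and draw the label.
# "fer_model.h5" and the LABELS order are hypothetical.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["Angry", "Fear", "Happy", "Neutral", "Sad"]  # assumed order
model = load_model("fer_model.h5")                     # hypothetical path
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)
        label = LABELS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("FER", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Haar cascades run comfortably in real time on CPU, which is why they pair well with a lightweight CNN classifier in pipelines like this.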
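The augmentation step can be sketched with Keras's `ImageDataGenerator`; the directory layout and parameter values below are assumptions, not the repository's settings:

```python
# Augmentation sketch for FER2013-style training data.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,       # small random rotations
    width_shift_range=0.1,   # horizontal jitter
    height_shift_range=0.1,  # vertical jitter
    zoom_range=0.1,
    horizontal_flip=True,    # faces are roughly left-right symmetric
)

train_flow = train_gen.flow_from_directory(
    "data/train",            # hypothetical path: one subfolder per emotion
    target_size=(48, 48),
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=64,
)
```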
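Database logging might look like the following sketch using MySQL Connector/Python; the table name, columns, and credentials are hypothetical:

```python
# Sketch of persisting a prediction (label + face box + timestamp)
# to MySQL; schema and credentials are assumptions.
from datetime import datetime
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="fer_user",
    password="secret", database="fer_db")   # hypothetical credentials
cursor = conn.cursor()

def log_prediction(label, x, y, w, h):
    cursor.execute(
        "INSERT INTO predictions (label, x, y, w, h, created_at) "
        "VALUES (%s, %s, %s, %s, %s, %s)",
        (label, x, y, w, h, datetime.now()),
    )
    conn.commit()

log_prediction("Happy", 120, 80, 96, 96)
```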

Use Cases:
- Mental health monitoring
- Human-computer interaction
- Smart surveillance and sentiment-aware automation
- Classroom or audience emotion tracking