Emotion Detection Project

This project captures a live webcam feed and identifies and returns the user's emotions in real time.




About The Project

This project served as a proof of concept and aimed to investigate the capabilities of several current artificial intelligence techniques, including computer vision, image segmentation, and machine learning. The idea was to allow a user to stand in front of a camera and receive, in real time, a prediction of the emotion they were currently experiencing based on their facial expressions. The project was developed by a research team of three under Dr. Joseph Boutros, as part of Texas A&M University at Qatar’s Dean’s Research Initiative in Artificial Intelligence and Machine Learning.


The program was built in Python and used libraries such as OpenCV, NumPy, and Google MediaPipe to isolate a user’s facial expressions in real time so they could be further processed by our neural network.
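The sketch below illustrates this kind of capture-and-detect step; it is not the project's exact code, and the camera index, confidence threshold, and single-frame structure are illustrative assumptions.

```python
# Minimal sketch: grab one webcam frame with OpenCV and locate the face
# region with MediaPipe's face-detection solution (assumed parameters).
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection

cap = cv2.VideoCapture(0)  # default webcam (assumed device index)
with mp_face.FaceDetection(model_selection=0, min_detection_confidence=0.5) as detector:
    ok, frame_bgr = cap.read()
    if ok:
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        results = detector.process(frame_rgb)
        if results.detections:
            # Relative bounding box, with values in [0, 1] of the frame size.
            box = results.detections[0].location_data.relative_bounding_box
            print(box.xmin, box.ymin, box.width, box.height)
cap.release()
```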


Once the user’s facial expression had been isolated and captured, a neural network classified it and returned the predicted emotion along with its confidence percentage. The network was built with Google TensorFlow and trained on a dataset of approximately 20,000 images.
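As an illustration, a small Keras classifier of this kind might look like the following. The architecture, input size, and emotion labels here are assumptions for the sketch; the project's actual network and label set are not reproduced.

```python
# Illustrative sketch: a small TensorFlow/Keras CNN that maps a cropped face
# image to an emotion label and a softmax confidence percentage.
import numpy as np
import tensorflow as tf

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]  # assumed labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),          # assumed grayscale 48x48 crops
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
])

def predict_emotion(face_crop: np.ndarray) -> tuple[str, float]:
    """Return the most likely emotion and its confidence as a percentage."""
    batch = face_crop.reshape(1, 48, 48, 1).astype("float32") / 255.0
    probs = model.predict(batch, verbose=0)[0]
    idx = int(np.argmax(probs))
    return EMOTIONS[idx], float(probs[idx]) * 100.0
```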


My role touched all of the major topics above but focused on the computer vision and image segmentation aspects of the program. I was responsible for developing a memory-efficient pipeline capable of capturing approximately two images per second and identifying the user’s facial region with a pre-trained MediaPipe model. Using the returned coordinate values, I then cropped each image to the specified dimensions before storing it in a circular buffer of predetermined size, as sketched below.
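The following sketch shows one way such a capture-crop-buffer loop can be structured; the buffer capacity, capture interval, and output crop size are assumptions, and a `collections.deque` with `maxlen` stands in for the circular buffer.

```python
# Rough sketch of the capture, crop, and circular-buffer steps described above.
import time
from collections import deque

import cv2
import mediapipe as mp

BUFFER_SIZE = 16            # assumed circular-buffer capacity
CAPTURE_INTERVAL = 0.5      # roughly two frames per second

face_buffer = deque(maxlen=BUFFER_SIZE)   # oldest crops are evicted automatically
cap = cv2.VideoCapture(0)

with mp.solutions.face_detection.FaceDetection(min_detection_confidence=0.5) as detector:
    last_capture = 0.0
    while len(face_buffer) < BUFFER_SIZE:
        ok, frame = cap.read()
        if not ok or time.time() - last_capture < CAPTURE_INTERVAL:
            continue
        last_capture = time.time()
        results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.detections:
            continue
        # Convert the relative bounding box to pixel coordinates and crop the face.
        box = results.detections[0].location_data.relative_bounding_box
        h, w = frame.shape[:2]
        x, y = max(0, int(box.xmin * w)), max(0, int(box.ymin * h))
        crop = frame[y:y + int(box.height * h), x:x + int(box.width * w)]
        face_buffer.append(cv2.resize(crop, (48, 48)))  # fixed dimensions assumed

cap.release()
```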