This project aims to analyse large volumes of emotional videos and images to recognise human facial expressions and/or emotions using deep learning methods. The videos and images are drawn from databases in the literature (such as UvA-NEMO, Cohn-Kanade, AFEW, MMI, and/or MAHNOB). Initially, a Convolutional Neural Network (CNN) combined with a Recurrent Neural Network (RNN) was applied to distinguish between posed and genuine smiles in the UvA-NEMO smile database, and a 3D mesh model combined with a CNN was applied to recognise emotions in the Cohn-Kanade (CK and CK+) databases. Building on these preliminary results [1, 2], further experiments are needed to develop high-performing models for recognising or classifying human emotions and/or facial expressions from emotional videos and images.
- [1] Liwei Hou (2019). Distinguishing genuine and posed smiles using computer vision deep learning approaches. Link: http://courses.cecs.anu.edu.au/courses/CSPROJECTS/19S2/reports/u6343089_report.pdf
- [2] Jialin Yang (2019). Detecting Human Emotions From 3D Mesh. Link: http://courses.cecs.anu.edu.au/courses/CSPROJECTS/19S2/reports/u5894100_report.pdf
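A CNN-plus-RNN pipeline of the kind described above is typically structured as a per-frame CNN feature extractor followed by a recurrent layer over the frame sequence and a binary classification head. The sketch below is a minimal, hypothetical PyTorch illustration of that structure; the layer sizes, input resolution, and clip length are illustrative assumptions, not the architecture used in the cited reports.

```python
import torch
import torch.nn as nn

class SmileClassifier(nn.Module):
    """Illustrative CNN+RNN video classifier: per-frame CNN features,
    an LSTM over time, and a posed-vs-genuine classification head.
    All sizes are assumptions for illustration only."""

    def __init__(self, hidden=64):
        super().__init__()
        # Small per-frame CNN: grayscale frame -> 256-dim feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),   # -> (16, 4, 4) regardless of input size
            nn.Flatten(),              # -> 16 * 4 * 4 = 256 features per frame
        )
        # LSTM aggregates the per-frame features over the clip.
        self.rnn = nn.LSTM(input_size=256, hidden_size=hidden, batch_first=True)
        # Two logits: posed vs. genuine smile.
        self.head = nn.Linear(hidden, 2)

    def forward(self, clips):
        # clips: (batch, time, channels=1, height, width)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # (b, t, 256)
        _, (h, _) = self.rnn(feats)       # h: (num_layers, b, hidden)
        return self.head(h[-1])           # (b, 2) logits

model = SmileClassifier()
logits = model(torch.randn(2, 10, 1, 64, 64))  # 2 clips of 10 frames each
print(logits.shape)  # torch.Size([2, 2])
```

In practice, the CNN stage would usually be a pretrained backbone applied to aligned face crops, and the logits would be trained with cross-entropy against posed/genuine labels from the UvA-NEMO annotations.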