FaceGest: A Comprehensive Facial Gesture Dataset for Human-Computer Interaction

Yaseen¹,* and Sonain Jamil²,*
¹Sejong University, Seoul, South Korea (email: yaseen@sju.ac.kr)
²University of Eastern Finland (UEF), Joensuu, Finland (email: sonainjamil@ieee.org)
CVPR 2025 Workshop on CV4Metaverse

*Indicates Equal Contribution

Figure: Output of MediaPipe for different face gestures.
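As a rough illustration of the pipeline behind this figure, the sketch below (not from the paper) uses MediaPipe's Face Mesh solution to extract the 3D face landmarks that gesture classifiers of this kind typically consume. The function name and image path are placeholders, not part of the FaceGest release.

```python
# Minimal sketch: extract face-mesh landmarks as features for gesture
# classification. Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

def extract_landmarks(image_path):  # hypothetical helper, for illustration only
    """Return a list of (x, y, z) face-mesh landmarks, or None if no face."""
    image = cv2.imread(image_path)
    if image is None:
        return None
    with mp_face_mesh.FaceMesh(static_image_mode=True,
                               max_num_faces=1,
                               refine_landmarks=True) as face_mesh:
        # MediaPipe expects RGB input; OpenCV loads images as BGR.
        results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    landmarks = results.multi_face_landmarks[0].landmark
    return [(lm.x, lm.y, lm.z) for lm in landmarks]
```

The normalized landmark coordinates returned here could then be fed to any downstream classifier; this is one plausible front end, not the paper's specified method.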

Abstract

Human-Computer Interaction (HCI) has evolved significantly with the integration of facial gesture recognition, offering intuitive and hands-free control mechanisms. This paper presents FaceGest (Facial Gesture), a comprehensive dataset designed to facilitate research and development in facial gesture recognition systems. The dataset comprises 13 distinct facial gesture classes, including eye-based, mouth-based, head-based, and combined gestures, captured from a diverse group of participants under various lighting conditions, angles, and environments. FaceGest contains approximately 15,000 labeled samples in both video and image formats, providing a robust foundation for training and evaluating machine learning models. Potential applications include hands-free accessibility solutions, automotive systems, smart home automation, AR/VR interaction, security authentication, and gaming controls. By offering this open-access dataset along with baseline models and evaluation metrics, FaceGest aims to bridge existing gaps in HCI datasets and to promote the development of inclusive, efficient, and versatile interaction systems.
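To make the intended usage concrete, here is a minimal, hypothetical loading sketch in PyTorch. It assumes the image portion of FaceGest is organized as one sub-directory per gesture class under a local `FaceGest/images` folder; this layout and path are assumptions for illustration, not the documented release structure.

```python
# Hypothetical sketch: load the image split of FaceGest for baseline training.
# Assumes a class-per-folder layout, e.g. FaceGest/images/<gesture_name>/*.jpg
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # typical input size for CNN baselines
    transforms.ToTensor(),
])

# "FaceGest/images" is an assumed path; adjust to the actual release layout.
dataset = datasets.ImageFolder("FaceGest/images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(f"{len(dataset)} samples across {len(dataset.classes)} gesture classes")
```

With roughly 15,000 samples over 13 classes, a loader like this would support standard train/validation splits for the baseline models mentioned above.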

FaceGest Dataset

Figure: The FaceGest dataset.

Dataset Release Agreement

BibTeX

@InProceedings{--_2025_CVPR,
    author    = {--, Yaseen and Jamil, Sonain},
    title     = {FaceGest: A Comprehensive Facial Gesture Dataset for Human-Computer Interaction},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {337-347}
}