With much learning shifting online, many students turn their cameras off, making it impossible for teachers to gauge whether their students are engaged. Teachers feel they are teaching a vacant wall of icons, while students cannot show they are engaged without inviting discomfort. This application gives teachers a way to encourage and communicate engagement without requiring students to turn their video on. Students who are uncomfortable turning on their cameras can still convey relevant emotions and engage with the lesson, helping all participants in a video conference build mutual engagement and trust.
The application builds on facial landmark learning to detect and communicate engagement. It consists of four components: calibration, detection, communication, and reporting. The application first calibrates detection and display to the specific user, adjusting detection parameters and the avatar that will be displayed to communicate engagement. It then detects specific actions such as smiling or raising a hand and conveys them through the avatar, e.g., if the user smiles, the avatar smiles. It also provides each student with an aggregated engagement score and a graph covering the duration of the lesson. Optionally, students can contribute to the teacher's report, which averages all students' engagement while keeping individual scores anonymous, so teachers know whether large sections of the class found certain parts of the lesson harder.
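As a minimal sketch of how the detection step could work, the snippet below checks for a smile from facial landmark positions using a mouth width-to-height ratio compared against a per-user threshold set during calibration. The landmark names, coordinates, and threshold here are illustrative assumptions, not the application's actual implementation; a real system would obtain the points from a landmark model such as MediaPipe or dlib.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mouth_ratio(left_corner, right_corner, top_lip, bottom_lip):
    """Mouth width divided by height; smiling stretches the mouth wider."""
    width = distance(left_corner, right_corner)
    height = distance(top_lip, bottom_lip)
    return width / max(height, 1e-6)  # guard against a closed mouth

def is_smiling(landmarks, threshold):
    """landmarks: dict of named (x, y) points; threshold: calibrated ratio."""
    ratio = mouth_ratio(landmarks["mouth_left"], landmarks["mouth_right"],
                        landmarks["lip_top"], landmarks["lip_bottom"])
    return ratio > threshold

# Hypothetical calibration frames (pixel coordinates):
# neutral mouth: width 20, height 10 -> ratio 2.0
neutral = {"mouth_left": (40, 60), "mouth_right": (60, 60),
           "lip_top": (50, 55), "lip_bottom": (50, 65)}
# smiling mouth: corners pulled outward, lips closer -> ratio 5.0
smiling = {"mouth_left": (35, 58), "mouth_right": (65, 58),
           "lip_top": (50, 58), "lip_bottom": (50, 64)}
```

Calibration would record the user's neutral ratio and set the threshold between the neutral and smiling values (e.g., 3.5 for the frames above), so the avatar only smiles when the detected expression clearly departs from the user's baseline.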
The application is a step toward communicating real-time engagement and measuring overall engagement, even without the connection of communicating in person. It could also be useful for ad testing, initial screenings, and flipped learning. Meanwhile, the changing avatar can bring interest and fun not only to classes but also to regular video calls among friends.