This project seeks to mitigate the loss of emotional information caused by the video-quality degradation that bandwidth-limited compression introduces. It applies selective compression to a prerecorded video, producing a modified video that compresses the background while preserving the regions that carry important emotional information. The effect of this selective compression was assessed by collecting users' emotional and visual responses. The final goal was to publish a paper summarizing the conclusions drawn from all of the collected lab data.
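The core idea of background-selective compression can be sketched in a few lines. The following is a minimal illustration, not the project's actual codec: it fakes "compression" by average-style downsampling the whole frame and then restoring a face region at full resolution. The `face_box` coordinates and the `factor` parameter are hypothetical.

```python
import numpy as np

def selective_compress(frame, face_box, factor=4):
    """Illustrative region-of-interest compression (hypothetical sketch):
    coarsen the whole frame, then paste back the face region untouched."""
    h, w = frame.shape[:2]
    # Crude stand-in for lossy compression: subsample, then upsample.
    small = frame[::factor, ::factor]
    coarse = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)[:h, :w]
    # Preserve full detail inside the (hypothetical) face bounding box.
    x0, y0, x1, y1 = face_box
    out = coarse.copy()
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
result = selective_compress(frame, face_box=(16, 16, 48, 48))
# The face region is bit-identical to the original frame.
assert np.array_equal(result[16:48, 16:48], frame[16:48, 16:48])
```

A real system would use an actual video codec with region-of-interest rate control rather than subsampling, but the principle is the same: spend the limited bitrate where the emotional signal is.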
The importance of nonverbal communication has been well established through several theories, including Albert Mehrabian's 7-38-55 rule, which proposes the respective contributions of semantics, tonality, and facial expressions to communication. Although several studies have examined how emotions are expressed and perceived in communication, little research has investigated the relationship between the semantic and facial-expression channels of emotional expression. Using facial expression analysis software to deconstruct facial expressions into features, together with a K-Nearest-Neighbor (KNN) machine learning classifier, we explored whether facial expressions can be clustered based on semantics. Our findings indicate that facial expressions can be clustered based on semantics and that there is an inherent congruence between facial expressions and semantics. These results are novel and significant in the context of nonverbal communication and are applicable to several areas of research, including the vast field of emotion AI and machine emotional communication.
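The KNN step described above can be sketched as follows. This is a minimal, self-contained illustration under assumed inputs: the feature vectors and semantic labels are hypothetical toy data, whereas the study extracted features with facial expression analysis software not reproduced here.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Minimal K-Nearest-Neighbor classifier: assign the query the
    majority semantic label among its k closest feature vectors."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Hypothetical 2-D facial-feature vectors (e.g. brow raise, lip-corner
# pull intensities) labeled by the semantic valence of the utterance.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = ["positive", "positive", "negative", "negative"]

print(knn_predict(X, y, np.array([0.85, 0.85])))  # → positive
```

If facial-expression features cluster by semantic label in this way, a held-out expression lands near neighbors sharing its semantic category, which is the congruence the study reports.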