Virtual reality places users in a simulated computer-generated environment. These environments are accurately simulated in that they provide the appearance of, and allow users to interact with, the simulated environment. Using head-mounted displays, controllers, and auditory feedback, virtual reality provides a convincing simulation of interactable virtual worlds (Wikipedia, “Virtual reality”). The many worlds of virtual reality are often expansive, colorful, and detailed. However, these worlds share one great flaw, an emotion evoked in many users through the exploration of such worlds: loneliness.
The content in these worlds is impressive, immersive, and entertaining. Without other people to share in these experiences, however, users can find themselves lonely: no matter how many objects and colors surround them across countless virtual worlds, every world feels empty. As humans are social beings by nature, they feel lost without a sense of human connection and interaction. Multiplayer experiences supply this missing element in virtual reality worlds, giving users the opportunity to interact with other real people in a virtual simulation, which creates lasting memories and deeper, more meaningful immersion.
HackerHero is an educational game designed to teach children, especially those from marginalized backgrounds, the computational thinking skills needed for STEAM fields; it also teaches children about social injustice. This project focused on creating an audio visualization for an AI character, Wave, within the HackerHero game. The visualization consists of a static silhouette of a face with a wave-like form representing the mouth. Audio content analysis was performed on audio sampled from the character’s voice lines, and the pitch and amplitude derived from that analysis were used to animate the character’s visual features, such as its brightness, color, and mouth movement. The mouth’s movement and color were driven by the audio’s pitch, while Wave’s lights were controlled by the audio’s amplitude. Design considerations were made to accommodate users with visual sensitivities such as color blindness and epilepsy. Overall, the final audio visualization satisfied the project sponsor and built upon existing audio visualization work. User feedback will be essential for improving the audio visualization in the future.
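As a rough illustration of the per-frame analysis described above, the sketch below estimates amplitude (via RMS) and pitch (via zero-crossing rate) from one frame of audio samples, then maps them to normalized brightness and mouth-openness values. This is a minimal sketch under stated assumptions: the function names, the zero-crossing pitch estimator, and the mapping ranges are all illustrative, not the project's actual implementation.

```python
import math

def analyze_frame(samples, sample_rate):
    """Estimate amplitude (RMS) and pitch (zero-crossing rate) for one audio frame."""
    # Root-mean-square amplitude of the frame.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Count sign changes; a pure tone crosses zero twice per cycle.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    duration = len(samples) / sample_rate
    pitch_hz = crossings / (2 * duration)
    return rms, pitch_hz

def map_to_visuals(rms, pitch_hz, max_rms=1.0, min_hz=80.0, max_hz=400.0):
    """Map amplitude to light brightness and pitch to mouth openness (both 0..1).

    max_rms, min_hz, and max_hz are hypothetical tuning ranges.
    """
    brightness = max(0.0, min(rms / max_rms, 1.0))
    mouth_openness = max(0.0, min((pitch_hz - min_hz) / (max_hz - min_hz), 1.0))
    return brightness, mouth_openness
```

In a game engine such as Unity, a driver script would run this analysis on each buffered frame of the voice line and feed the two normalized values into the character's light intensity and mouth animation each update.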