Matching Items (10)

Description

This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering facial expressions of an interaction partner to an individual who is blind using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Unit Coding System. A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
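To make the idea of a visual-to-tactile mapping concrete, the following minimal Python sketch shows one way action-unit-to-vibration patterns could be structured. The motor grid, the chosen action units, and the timings are hypothetical illustrations and are not taken from the thesis.

```python
# Illustrative sketch only: a hypothetical mapping from facial action units
# (FACS) to spatiotemporal vibration patterns on a back-mounted motor grid.
import time

# MOTOR_GRID documents the assumed 3x3 layout of pancake vibration motors
# on the chair back (assumption, not the thesis's actual layout).
MOTOR_GRID = [[0, 1, 2],
              [3, 4, 5],
              [6, 7, 8]]

# Hypothetical patterns: each entry is a sequence of (motor_ids, duration_s) steps.
AU_PATTERNS = {
    "AU12_lip_corner_puller": [([6, 8], 0.2), ([7], 0.2)],   # e.g., an upward smile sweep
    "AU4_brow_lowerer":       [([0, 2], 0.2), ([1], 0.3)],   # e.g., a brow-furrow pulse
}

def play_pattern(action_unit: str, drive_motor=None) -> None:
    """Step through a spatiotemporal pattern; drive_motor stands in for real
    motor-driver code (here it just prints which motors switch on and off)."""
    drive_motor = drive_motor or (lambda ids, on: print(f"motors {ids} {'ON' if on else 'OFF'}"))
    for motor_ids, duration in AU_PATTERNS[action_unit]:
        drive_motor(motor_ids, True)
        time.sleep(duration)
        drive_motor(motor_ids, False)

if __name__ == "__main__":
    play_pattern("AU12_lip_corner_puller")
```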
Contributors: Bala, Shantanu (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description

This project explores how web applications can structure their User Interfaces to best accommodate users who may not be able to use standard input devices like a mouse and keyboard, who cannot differentiate subtle color differences in text, or who may be overwhelmed by heavy animation or auto-play videos. This project serves as a proof-of-concept of an accessible Virtual Learning Environment to be used by students of online classes, particularly at younger grade levels. It is a functional application that handles user login, lecture presentations and materials, and quizzes. The front end is developed with the React JS library, an open-source library from Facebook used for building UIs. This project finds that React has strong capabilities for building accessible UIs that are consistent with modern web accessibility standards. As React is one of the most popular emerging JavaScript libraries and is already being incorporated into large-scale web pages and applications, this project hopes to inform other developers about some of the tools and techniques that can make their work accessible to all users.
Contributors: Terzic, Philip Mico (Author) / Balasooriya, Janaka (Thesis director) / Tadayon-Navabi, Farideh (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description

Elections in the United States are highly decentralized, with vast powers given to the states to control laws surrounding voter registration, primary procedures, and polling places, even in elections of federal officials. Many individual factors predict a person's likelihood of voting, including race, education, and age. Historically disenfranchised groups are still disproportionately affected by restrictive voter registration and ID laws, which can suppress their turnout. Less understood is how election-day polling place accessibility affects turnout. Absentee and early voting increase accessibility for all voters, but 47 states still rely on election-day polling places. I study how the geographic allocation of polling places and the number of voters assigned to each (polling place load) in Maricopa County, Arizona have affected turnout in primary and general elections between 2006 and 2016, controlling for the demographics of voting precincts. This represents a significant data problem: voting precincts changed three times during the period studied, and polling places themselves can change every election. To aid in analysis, I created a visualization that allows for the exploration of polling place load, precinct demographics, and polling place accessibility metrics in a map view of the county. Through a spatial regression model, I find that increasing the load on a polling place can decrease election-day turnout and that prohibitively large distances to the polling place have a similar effect. The effect is more pronounced during general elections and is present at varying levels during each of the 12 elections studied. Finally, I discuss how early voting options appear to have little positive effect on overall turnout and may in fact decrease it.
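As a rough illustration of the kind of model described here, the sketch below fits a simplified, non-spatial ordinary least squares regression of precinct turnout on polling place load and distance using statsmodels. The column names and data are invented placeholders; the thesis itself uses a spatial regression model with demographic controls.

```python
# Simplified, non-spatial sketch: regress precinct-level election-day turnout
# on polling place load and distance. Data and coefficients are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "load": rng.uniform(500, 4000, n),        # voters assigned per polling place (hypothetical)
    "distance_km": rng.uniform(0.1, 15, n),   # distance from precinct centroid (hypothetical)
})
# Synthetic turnout that falls with load and distance, plus noise.
df["turnout"] = 0.6 - 0.00003 * df["load"] - 0.005 * df["distance_km"] + rng.normal(0, 0.03, n)

X = sm.add_constant(df[["load", "distance_km"]])
model = sm.OLS(df["turnout"], X).fit()
print(model.summary())
```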
Contributors: Hansen, Brett Joseph (Author) / Maciejewski, Ross (Thesis director) / Grubesic, Anthony (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description

Games have traditionally had a high barrier to entry because they necessitate unique input devices, fast reaction times, high motor skills, and more. There has recently been a push to change the design process of these games to include people with disabilities so they, too, can interact with the medium. This thesis examines the current guiding principles of accessible design, who is developing them, and how they might guide future accessible design and development. It also looks at modern games with accessibility features and classifies them in terms of the Game Accessibility Guidelines. Then, drawing on an interview with a lead developer at a game studio, it examines modern game industry practices and what might be holding developers or studios back when it comes to accessible design. Finally, it offers suggestions to help these developers, studios, and others make their games more accessible to people with disabilities.

Contributors: Davis, Robert (Author) / McDaniel, Troy (Thesis director) / Selgrad, Justin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School for the Future of Innovation in Society (Contributor)
Created: 2023-05
Description

Over the past decade, advancements in neural networks have been instrumental in achieving remarkable breakthroughs in the field of computer vision. One application is assistive technology that improves the lives of visually impaired people by making the world around them more accessible. Research in convolutional neural networks has led to human-level performance on different vision tasks including image classification, object detection, instance segmentation, semantic segmentation, panoptic segmentation, and scene text recognition. All of the aforementioned tasks, individually or in combination, have been used to create assistive technologies that improve accessibility for people who are blind.

This dissertation outlines various applications to improve accessibility and independence for visually impaired people during shopping by helping them identify products in retail stores. The dissertation includes the following contributions: (i) a dataset containing images of breakfast-cereal products and a classifier based on a deep neural network (ResNet); (ii) a dataset for training a text detection and scene-text recognition model; (iii) a model for text detection and scene-text recognition to identify products from images captured with a user-controlled camera; (iv) a dataset of twenty thousand products with product information and related images that can be used to train and test a system designed to identify products.
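For orientation, the following sketch shows a generic torchvision recipe for fine-tuning a ResNet image classifier along the lines of contribution (i). The dataset path, ResNet depth, number of classes, and hyperparameters are placeholders, not the dissertation's actual setup.

```python
# Generic sketch of fine-tuning a ResNet image classifier with torchvision.
# Paths, class count, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 20  # hypothetical number of cereal products

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects an ImageFolder-style directory: cereal_images/train/<class_name>/*.jpg (hypothetical path)
train_set = datasets.ImageFolder("cereal_images/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the final layer

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # a single pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```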
Contributors: Patel, Akshar (Author) / Panchanathan, Sethuraman (Thesis advisor) / Venkateswara, Hemanth (Thesis advisor) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Individuals with voice disorders experience challenges communicating daily. These challenges lead to a significant decrease in the quality of life for individuals with dysphonia. While voice amplification systems are often employed as a voice-assistive technology, individuals with voice disorders generally still experience difficulties being understood while using them. With the goal of developing systems that help improve the quality of life of individuals with dysphonia, this work outlines the landscape of voice-assistive technology, the inaccessibility of state-of-the-art voice-based technology, and the need for intelligibility-improving voice-assistive technologies designed both with and for individuals with voice disorders. As voice-based technologies become increasingly prevalent, individuals with voice disorders must be included both in the data used to train these systems and in the design process if everyone is to participate in their use. An important and necessary step toward better voice-assistive technology, as well as more inclusive voice-based systems, is the creation of a large, publicly available dataset of dysphonic speech. To this end, a web-based platform was developed to crowdsource voice-disorder speech and create such a dataset. The dataset will be released freely and publicly to stimulate research in the field of voice-assistive technologies. Future work includes building a robust intelligibility estimation model, as well as employing that model to measure, and therefore enhance, the intelligibility of a given utterance. The hope is that this model will lead to voice-assistive technology that uses state-of-the-art machine learning models to help individuals with voice disorders be better understood.
Contributors: Moore, Meredith Kay (Author) / Panchanathan, Sethuraman (Thesis advisor) / Berisha, Visar (Committee member) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Although many data visualization diagrams can be made accessible for individuals who are blind or visually impaired, they often do not present the information in a way that intuitively allows readers to easily discern patterns in the data. In particular, accessible node graphs tend to use speech to describe the transitions between nodes. While the speech is easy to understand, readers can be overwhelmed by too much speech and may not be able to discern any structural patterns which occur in the graphs. Considering these limitations, this research seeks to find ways to better present transitions in node graphs.

This study aims to gain knowledge on how sequence patterns in node graphs can be perceived through speech and nonspeech audio. Users listened to short audio clips describing a sequence of transitions occurring in a node graph. User study results were evaluated based on accuracy and user feedback. Five common techniques were identified through the study, and the results will be used to help design a node graph tool that improves the accessibility of node graph creation and exploration for individuals who are blind or visually impaired.
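As a minimal illustration of a nonspeech alternative to spoken transition descriptions, the sketch below assigns each node a pitch and renders a transition A to B as two consecutive tones, saved as a WAV file with SciPy. The pitch mapping and tone lengths are hypothetical, not the cues evaluated in the study.

```python
# Minimal nonspeech-audio sketch: each node gets a pitch; a transition is
# rendered as two consecutive tones. Mapping and durations are assumptions.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
NODE_PITCH_HZ = {"A": 262, "B": 330, "C": 392, "D": 523}  # hypothetical mapping

def tone(freq_hz: float, duration_s: float = 0.25) -> np.ndarray:
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq_hz * t)

def transition_audio(src: str, dst: str) -> np.ndarray:
    gap = np.zeros(int(0.05 * SAMPLE_RATE))
    return np.concatenate([tone(NODE_PITCH_HZ[src]), gap, tone(NODE_PITCH_HZ[dst])])

# Render a short transition sequence A -> B -> D and save it as a WAV file.
sequence = np.concatenate([transition_audio("A", "B"), transition_audio("B", "D")])
wavfile.write("transitions.wav", SAMPLE_RATE, (sequence * 32767).astype(np.int16))
```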
Contributors: Darmawaskita, Nicole (Author) / McDaniel, Troy (Thesis director) / Duarte, Bryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description

Artistic expression can be made more accessible through the use of technological interfaces such as auditory analysis, generative artificial intelligence models, and the simplification of complicated systems, providing a way for human-driven creativity to serve as an input that allows users to express themselves creatively. Studies and testing were done with industry-standard performance technology and protocols to create an accessible interface for creative expression. Artificial intelligence models were created to generate art based on simple text inputs. Users were then invited to display their creativity using the software, and a comprehensive performance showcased the potential of the system for artistic expression.
Contributors: Pardhe, Joshua (Author) / Lim, Kang Yi (Co-author) / Meuth, Ryan (Thesis director) / Brian, Jennifer (Committee member) / Hermann, Kristen (Committee member) / Barrett, The Honors College (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Watts College of Public Service & Community Solutions (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

Artistic expression can be made more accessible through the use of technological interfaces such as auditory analysis, generative artificial intelligence models, and the simplification of complicated systems, providing a way for human-driven creativity to serve as an input that allows users to express themselves creatively. Studies and testing were done with industry-standard performance technology and protocols to create an accessible interface for creative expression. Artificial intelligence models were created to generate art based on simple text inputs. Users were then invited to display their creativity using the software, and a comprehensive performance showcased the potential of the system for artistic expression.
Contributors: Lim, Kang Yi (Author) / Pardhe, Joshua (Co-author) / Meuth, Ryan (Thesis director) / Brian, Jennifer (Committee member) / Hermann, Kristen (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

American Sign Language (ASL) is used by Deaf and Hard of Hearing (DHH) individuals to communicate and learn in a classroom setting. In ASL, fingerspelling and gestures are two primary components used for communication. Fingerspelling is commonly used for words that do not have a specifically designated sign or gesture. In technical contexts, such as the Computer Science curriculum, many technical terms fall under this category. Most of the field's jargon does not have standardized ASL gestures; therefore, students, educators, and interpreters alike have relied on fingerspelling, which poses challenges for all parties. This study investigates the efficacy of both fingerspelling and gestures for fifteen technical terms that do have standardized gestures. Each term's fingerspelling and gesture are assessed on preference, ease of use, ease of learning, and time by research subjects, all of whom are DHH individuals familiar with ASL.

The data were collected as a series of video recordings made by the research subjects, together with a post-participation questionnaire. Each research subject produced thirty videos in total: two per technical term, one fingerspelling it and one gesturing it. Afterwards, they completed a post-participation questionnaire indicating their preference and how easy both fingerspelling and gestures were to learn and use. Additionally, the videos were analyzed to determine the time difference between fingerspelling and gestures. Analysis reveals that gestures are favored over fingerspelling: they are generally preferred, considered easier to learn and use, and faster. These results underscore the need for standardized gestures in the Computer Science curriculum to support accessible learning that enhances communication and promotes inclusion.
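The sketch below illustrates the kind of paired timing comparison described here: for each term, compare how long the fingerspelled and gestured versions took. The terms and times are invented placeholders, not study data, and the Wilcoxon test is shown only as one plausible paired analysis.

```python
# Sketch of a paired timing comparison between fingerspelling and gestures.
# All terms and durations below are hypothetical placeholders.
import pandas as pd
from scipy.stats import wilcoxon

data = pd.DataFrame({
    "term":          ["algorithm", "variable", "compiler", "array"],
    "fingerspell_s": [4.1, 3.6, 4.0, 2.9],   # hypothetical seconds per video
    "gesture_s":     [1.8, 1.5, 2.1, 1.3],
})
data["difference_s"] = data["fingerspell_s"] - data["gesture_s"]
print(data)
print("Mean time saved by gesturing:", data["difference_s"].mean(), "s")

# Paired nonparametric test of whether gestures are systematically faster.
stat, p_value = wilcoxon(data["fingerspell_s"], data["gesture_s"])
print(f"Wilcoxon statistic={stat}, p={p_value:.3f}")
```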

Contributors: Karim, Bushra (Author) / Gupta, Sandeep (Thesis director) / Hossain, Sameena (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2024-05