Matching Items (13)
Description
Some disabled users of assistive technologies (AT) have expressed concerns that their use of those AT devices brings particular attention to their disability and, in doing so, stigmatizes them in the eyes of their peers. This research studies how a wide range of design factors influence how positively or negatively users of wearable technologies are perceived by others. These factors are studied by asking survey respondents to estimate the degree to which they perceive disabilities in users of various products. The survey was given to 34 undergraduate Product Design students and employed 40 pictures, each of which showed one person using a product. Some of these products were assistive technology devices, and some were not. Respondents used a five-bubble Likert scale to indicate the level of disability that they perceived in this person. Data analysis was done using SPSS software. The results showed that the gender of the respondent was not a significant factor in the respondent's estimation of the level of disability. However, the cultural background of the respondent was found to be significant in the respondent's estimates of disability for seven of the 40 pictures. The results also indicated that the size of the AT, its familiarity to the mainstream population, its location on the user's body, the perceived power of the user, the degree to which the AT device seemed to empower the user, the degree to which the AT device was seen as a vehicle for assertion of the user's individuality, and the success of attempts to disguise the AT as some mainstream product reduced the perceived disability of the user. In contrast, symbols or stereotypes of disability, obstructed visibility of the face, an awkward, complex design, a mismatch between the product's design and its context of use, and covering of the head were factors that focused attention on, and increased the perception of, the user's disability. These factors are summarized in a set of guidelines to help AT designers develop products that minimize the perceived disability and the resulting stigmatization of the user.
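
The abstract reports the analysis only at a high level (SPSS, per-picture significance of respondent gender and cultural background). As a hedged illustration of what such a per-picture group comparison can look like, the sketch below runs a nonparametric test on hypothetical Likert ratings; the file name, column names, and the choice of a Mann-Whitney U test are assumptions, not details from the thesis.

```python
# Hedged illustration (not the thesis's SPSS analysis): per-picture
# nonparametric comparison of Likert ratings between two respondent groups.
# The CSV file, column names, and test choice are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical layout: one row per (respondent, picture) rating.
df = pd.read_csv("survey_responses.csv")

ALPHA = 0.05
significant_pictures = []
for picture_id, ratings in df.groupby("picture_id"):
    group_a = ratings.loc[ratings["background"] == "A", "perceived_disability"]
    group_b = ratings.loc[ratings["background"] == "B", "perceived_disability"]
    _, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
    if p < ALPHA:
        significant_pictures.append(picture_id)

print(f"{len(significant_pictures)} of 40 pictures differ significantly by background")
```
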
ContributorsValamanesh, Ronak (Author) / Velasquez, Joseph (Thesis advisor) / Black, John (Committee member) / Herring, Donald (Committee member) / Arizona State University (Publisher)
Created2014
Description
Typically, the complete loss or severe impairment of a sense such as vision and/or hearing is compensated through sensory substitution, i.e., the use of an alternative sense for receiving the same information. For individuals who are blind or visually impaired, the alternative senses have predominantly been hearing and touch. For movies, visual content has been made accessible to visually impaired viewers through audio descriptions -- an additional narration that describes scenes, the characters involved, and other pertinent details. However, as audio descriptions should not overlap with dialogue, sound effects, and musical scores, there is limited time to convey information, often resulting in stunted and abridged descriptions that leave out many important visual cues and concepts. This work proposes a promising multimodal approach to sensory substitution for movies: providing complementary information through haptics about the positions and movements of actors, in addition to a film's audio description and audio content. In a ten-minute presentation of five movie clips to ten individuals who were visually impaired or blind, the novel methodology was found to provide a nearly two-fold increase in the perception of actors' movements in scenes. Moreover, participants appreciated the overall concept of providing a visual perspective to film through haptics and found it useful.
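
The abstract does not specify how actor positions were rendered haptically. Purely as an illustrative sketch of one plausible mapping, the code below spreads an actor's horizontal screen position across a small row of vibration motors; the motor count and falloff constant are invented for the example and are not from the thesis.

```python
# Illustrative sketch (assumed mapping, not the thesis's): spread an actor's
# normalized horizontal screen position across a row of vibration motors so
# the strongest vibration tracks the actor as they move.
NUM_MOTORS = 5   # assumed number of motors in the wearable row
FALLOFF = 4.0    # assumed constant; larger values give a sharper spatial focus

def motor_intensities(actor_x: float) -> list[float]:
    """Return an intensity in [0, 1] for each motor, peaked nearest actor_x."""
    centers = [(i + 0.5) / NUM_MOTORS for i in range(NUM_MOTORS)]
    return [max(0.0, 1.0 - FALLOFF * abs(actor_x - c)) for c in centers]

# An actor walking left to right produces a sweeping vibration pattern.
for x in (0.1, 0.5, 0.9):
    print(x, [round(v, 2) for v in motor_intensities(x)])
```
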
ContributorsViswanathan, Lakshmie Narayan (Author) / Panchanathan, Sethuraman (Thesis advisor) / Hedgpeth, Terri (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
Description
Unmanned aerial vehicles have received increased attention in the last decade due to their versatility, as well as the availability of inexpensive sensors (e.g. GPS, IMU) for their navigation and control. Multirotor vehicles, specifically quadrotors, have formed a fast-growing field in robotics, with the range of applications spanning from surveillance and reconnaissance to agriculture and large-area mapping. Although in most applications single quadrotors are used, there is an increasing interest in architectures controlling multiple quadrotors executing a collaborative task. This thesis introduces a new concept of control involving more than one quadrotor, according to which two quadrotors can be physically coupled in mid-flight. This concept equips the quadrotors with new capabilities, e.g. increased payload capacity or the pursuit and capture of other quadrotors. A comprehensive simulation environment is built to model coupled quadrotors. The dynamics and modeling of the coupled system are presented together with a discussion of the coupling mechanism, impact modeling, and additional considerations that have been investigated. Simulation results are presented for cases of static coupling as well as enemy quadrotor pursuit and capture, together with an analysis of control methodology and gain tuning. Practical implementations are introduced, as the results show the feasibility of this design.
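
One claimed benefit of mid-flight coupling is increased payload. A back-of-envelope sketch, assuming a rigid coupling and equal load sharing (assumptions not taken from the thesis), shows how the required hover thrust per vehicle changes once two quadrotors share a payload:

```python
# Back-of-envelope sketch of the increased-payload benefit, assuming a rigid
# coupling and equal load sharing (masses below are assumed values).
G = 9.81  # gravitational acceleration, m/s^2

def hover_thrust_per_vehicle(vehicle_masses_kg, payload_kg):
    """Thrust in newtons each coupled vehicle must produce to hover."""
    total_mass = sum(vehicle_masses_kg) + payload_kg
    return total_mass * G / len(vehicle_masses_kg)

single = hover_thrust_per_vehicle([1.2], payload_kg=0.8)        # one quadrotor
coupled = hover_thrust_per_vehicle([1.2, 1.2], payload_kg=0.8)  # coupled pair
print(f"single: {single:.1f} N per vehicle, coupled: {coupled:.1f} N per vehicle")
```
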
ContributorsLarsson, Daniel (Author) / Artemiadis, Panagiotis (Thesis advisor) / Marvi, Hamidreza (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created2016
Description
Social situational awareness, or the attentiveness to one's social surroundings, including the people, their interactions and their behaviors, is a complex sensory-cognitive-motor task that requires one to be engaged thoroughly in understanding their social interactions. These interactions are formed out of the elements of human interpersonal communication, including both verbal and non-verbal cues. While the verbal cues are instructive and delivered through speech, the non-verbal cues are mostly interpretive and require the full attention of the participants to understand, comprehend and respond to them appropriately. Unfortunately, certain situations are not conducive for a person to have complete access to their social surroundings, especially the non-verbal cues. For example, a person who is blind or visually impaired may find that the non-verbal cues like smiling, head nods, eye contact, body gestures and facial expressions of their interaction partners are not accessible due to their sensory deprivation. The same could be said of people who are remotely engaged in a conversation and too physically separated to have visual access to one another's body and facial mannerisms. This dissertation describes novel multimedia technologies to aid situations where it is necessary to mediate social situational information between interacting participants. As an example of the proposed system, an evidence-based model for understanding the accessibility problem faced by people who are blind or visually impaired is described in detail. From the derived model, a suite of sensing and delivery technologies that use state-of-the-art computer vision algorithms in combination with novel haptic interfaces are developed towards a) a Dyadic Interaction Assistant, capable of helping individuals who are blind to access important head- and face-based non-verbal communicative cues during one-on-one dyadic interactions, and b) a Group Interaction Assistant, capable of providing situational awareness about the interaction partners and their dynamics to a user who is blind, while also providing important social feedback about their own body mannerisms. The goal is to increase the effective social situational information that one has access to, with the conjecture that a good awareness of one's social surroundings gives them the ability to understand and empathize with their interaction partners better. Extending the work from an important social interaction assistive technology, the need for enriched social situational awareness in everyday professional situations is also discussed, including a) enriched remote interactions between physically separated interaction partners, and b) enriched communication between medical professionals during critical care procedures, towards enhanced patient safety. In the concluding remarks, this dissertation engages the reader in a science and technology policy discussion on the potential effect of a new technology like the social interaction assistant on society. Along these policy lines, social disability is highlighted as an important area that requires special attention from researchers and policy makers. Given that the proposed technology relies on wearable inconspicuous cameras, the discussion of privacy policies is extended to encompass newly evolving interpersonal interaction recorders, like the one presented in this dissertation.
ContributorsKrishna, Sreekar (Author) / Panchanathan, Sethuraman (Thesis advisor) / Black, John A. (Committee member) / Qian, Gang (Committee member) / Li, Baoxin (Committee member) / Shiota, Michelle (Committee member) / Arizona State University (Publisher)
Created2011
Description
The Caregiver Autism Residential E-health (CARE) system, composed of low-cost, end-user-deployable smart home technology and accompanying heuristics for rule-based models of human behavior, has been evaluated for its potential as an empowering assistive technology with the capacity to enhance the well-being of people living with autism, their caregivers, and family members. It allows adults living with autism to create personalized smart home interventions that provide motivational support, and is accompanied by guidelines for a safe and effective means of behavioral change. This investigation contributes a participatory co-design approach that addresses the role of flexibility in meeting the dynamic needs of the individual while offering strategies for dealing with the challenges of designing assistive smart home technologies for the needs of individuals across the wide range of autism spectrum disorders.
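
As a rough sketch of what an end-user-authored, rule-based smart home intervention could look like (the rule format, sensor names, and action below are hypothetical illustrations, not the CARE system's actual design):

```python
# Rough sketch of an end-user-authored, rule-based smart home intervention
# (rule format, sensor names, and action are hypothetical, not CARE's design).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # checked against the current sensor state
    action: Callable[[], None]         # e.g., a motivational prompt or light cue

def evaluate(rules: list[Rule], state: dict) -> None:
    """Fire the action of every rule whose condition holds in this state."""
    for rule in rules:
        if rule.condition(state):
            rule.action()

# Example: a personalized morning-routine prompt.
rules = [
    Rule(
        name="morning reminder",
        condition=lambda s: s["hour"] == 8 and not s["bathroom_motion_seen"],
        action=lambda: print("Gentle prompt: time to start the morning routine"),
    )
]
evaluate(rules, {"hour": 8, "bathroom_motion_seen": False})
```
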
ContributorsNewman, Naomi Catton (Author) / Burleson, Winslow (Thesis director) / Brotman, Ryan (Co-author) / Lozano, Cecil (Co-author) / Adams, Jim (Committee member) / Zautra, Alex (Committee member) / Barrett, The Honors College (Contributor) / Graduate College (Contributor) / School of Community Resources and Development (Contributor) / Department of Psychology (Contributor) / School of Human Evolution and Social Change (Contributor)
Created2013-05
Description

When the average person uses a computer, they interact with two main groups of devices: the Computer Input, which consists of a keyboard and a mouse, and the Computer Output, which consists of a monitor and speakers. For those with physical disabilities, traditional Computer Input and Output methods can be difficult or uncomfortable to use. I believe VR technology can make using computers much more accessible for those individuals, and my application demonstrates that belief.
ContributorsGarcia, Mario (Author) / Johnson-Glenberg, Mina (Thesis director) / Bunch, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description
There has been a vast increase in applications of Unmanned Aerial Vehicles (UAVs) in civilian domains. To operate in the civilian airspace, a UAV must be able to sense and avoid both static and moving obstacles for flight safety. While indoor and low-altitude environments are mainly occupied by static obstacles, risks at higher altitudes primarily come from moving obstacles such as other aircraft or flying vehicles in the airspace. Therefore, the ability to avoid moving obstacles becomes a necessity for Unmanned Aerial Vehicles.

Towards enabling a UAV to autonomously sense and avoid moving obstacles, this thesis makes the following contributions. Initially, an image-based reactive motion planner is developed for a quadrotor to avoid a fast-approaching obstacle. Furthermore, a Dubins-curve-based geometric method is developed as a global path planner for a fixed-wing UAV to avoid collisions with aircraft. The image-based method is unable to produce an optimal path, and the geometric method uses a simplified UAV model. To compensate for these two disadvantages, a series of algorithms built upon the Closed-Loop Rapidly-exploring Random Tree are developed as global path planners to generate collision-avoidance paths in real time. The algorithms are validated in Software-in-the-Loop (SITL) and Hardware-in-the-Loop (HIL) simulations using a fixed-wing UAV model and in real flight experiments using quadrotors. It is observed that the algorithms enable a UAV to avoid moving obstacles approaching it from different directions and at different speeds.
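
For readers unfamiliar with the planner family named above, the following is a minimal, toy 2D RRT sketch; the thesis's Closed-Loop RRT additionally simulates the closed-loop vehicle dynamics and handles moving obstacles, none of which is modeled here. The scene bounds, step size, and obstacle are assumed values.

```python
# Minimal, toy 2D RRT sketch (assumed scene values; the thesis's Closed-Loop
# RRT also simulates vehicle dynamics and moving obstacles, omitted here).
import math
import random

STEP = 0.5
GOAL = (9.0, 9.0)
OBSTACLES = [((5.0, 5.0), 1.5)]  # (center, radius) of circular no-fly zones

def collision_free(p):
    return all(math.dist(p, center) > radius for center, radius in OBSTACLES)

def rrt(start, iters=5000):
    tree = {start: None}  # maps each node to its parent
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        # Steer one step from the nearest tree node toward the sample.
        new = tuple(n + STEP * (s - n) / d for n, s in zip(nearest, sample))
        if collision_free(new):
            tree[new] = nearest
            if math.dist(new, GOAL) < STEP:  # goal reached: walk back to start
                path = [new]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1]
    return None

path = rrt((1.0, 1.0))
print(f"found path with {len(path)} waypoints" if path else "no path found")
```
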
ContributorsLin, Yucong (Author) / Saripalli, Srikanth (Thesis advisor) / Scowen, Paul (Committee member) / Fainekos, Georgios (Committee member) / Thangavelautham, Jekanthan (Committee member) / Youngbull, Cody (Committee member) / Arizona State University (Publisher)
Created2015
Description
Access to real-time situational information, including the relative position and motion of surrounding objects, is critical for safe and independent travel. Object or obstacle (OO) detection at a distance is primarily a task of the visual system due to the high-resolution information the eyes are able to receive from afar. As a sensory organ in particular, the eyes have an unparalleled ability to adjust to varying degrees of light, color, and distance. Therefore, in the case of a non-visual traveler, someone who is blind or has low vision, access to visual information is unattainable if it is positioned beyond the reach of the preferred mobility device or outside the path of travel. Although the area of assistive technology, in terms of electronic travel aids (ETAs), has received considerable attention over the last two decades, the field has surprisingly seen little work focused on augmenting rather than replacing current non-visual travel techniques, methods, and tools. Consequently, this work describes the design of an intuitive tactile language and a series of wearable tactile interfaces (the Haptic Chair, HaptWrap, and HapBack) to deliver real-time spatiotemporal data. The overall intuitiveness of the haptic mappings conveyed through the tactile interfaces is evaluated using a combination of absolute identification accuracy of a series of patterns and subjective feedback through post-experiment surveys. Two types of spatiotemporal representations are considered: static patterns representing object location at a single time instance, and dynamic patterns, added in the HaptWrap, which represent object movement over a time interval. Results support the viability of multi-dimensional haptics applied to the body to yield an intuitive understanding of dynamic interactions occurring around the navigator during travel. Lastly, it is important to point out that the guiding principle of this work centered on providing the navigator with spatial knowledge otherwise unattainable through current mobility techniques, methods, and tools, thus providing the navigator with the information necessary to make informed navigation decisions independently, at a distance.
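
As a hedged illustration of a static spatiotemporal haptic mapping of the kind the tactile language conveys, the sketch below maps one object detection (bearing and distance) to a motor index and an intensity; the motor layout, sensing range, and the mapping itself are assumptions for the example, not the thesis's actual encoding.

```python
# Hedged illustration of a static haptic pattern (assumed encoding, not the
# thesis's): an object's bearing selects a motor, its distance sets intensity.
NUM_MOTORS = 8      # assumed ring of motors worn around the torso
MAX_RANGE_M = 10.0  # assumed sensing range in meters

def static_pattern(bearing_deg: float, distance_m: float) -> dict:
    """Map one object detection to a motor index and an intensity in [0, 1]."""
    motor = int((bearing_deg % 360) / 360 * NUM_MOTORS)
    intensity = max(0.0, 1.0 - distance_m / MAX_RANGE_M)  # nearer means stronger
    return {"motor": motor, "intensity": round(intensity, 2)}

# An object ahead and to the right at 3 m; a dynamic pattern (as in the
# HaptWrap) would be a timed sequence of such frames as the object moves.
print(static_pattern(bearing_deg=45.0, distance_m=3.0))
```
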
ContributorsDuarte, Bryan Joiner (Author) / McDaniel, Troy (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created2020
Description
Societal infrastructure is built with vision at the forefront of daily life. For those with severe visual impairments, this creates countless barriers to the participation in and enjoyment of life's opportunities. Technological progress has been both a blessing and a curse in this regard. Digital text, together with screen readers and refreshable Braille displays, has made whole libraries readily accessible, and rideshare technology has made independent mobility more attainable. Simultaneously, screen-based interactions and experiences have only grown in pervasiveness and importance, excluding many of those with visual impairments.

Sensory Substitution, the process of substituting an unavailable modality with another one, has shown promise as an alternative to accommodation, but in recent years meaningful strides in Sensory Substitution for vision have declined in frequency. Given recent advances in Computer Vision, this stagnation is especially disconcerting. Designing Sensory Substitution Devices (SSDs) for vision for use in interactive settings that leverage modern Computer Vision techniques presents a variety of challenges, including perceptual bandwidth, human-computer interaction, and person-centered machine learning considerations. To surmount these barriers, an approach called Personal Foveated Haptic Gaze (PFHG) is introduced. PFHG consists of two primary components: a human-visual-system-inspired interaction paradigm that is intuitive and flexible enough to generalize to a variety of applications, called Foveated Haptic Gaze (FHG), and a person-centered learning component to address the expressivity limitations of most SSDs. This component is called One-Shot Object Detection by Data Augmentation (1SODDA), a one-shot object detection approach that allows a user to specify the objects they are interested in locating visually and, with minimal effort, realize an object detection model that does so effectively.

The Personal Foveated Haptic Gaze framework was realized in a virtual and a real-world application: playing a 3D, interactive, first-person video game (DOOM) and finding user-specified real-world objects. User study results found Foveated Haptic Gaze to be an effective and intuitive interface for interacting with a dynamic visual world using solely haptics. Additionally, 1SODDA achieves competitive performance among few-shot object detection methods and high-framerate many-shot object detectors. The combination of the two paves the way for modern Sensory Substitution Devices for vision.
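
As a hedged sketch of the general one-shot-by-augmentation idea (not the thesis's 1SODDA pipeline), the code below synthesizes labeled training samples from a single user-provided exemplar by compositing randomized transforms of it onto background images; all file paths and transform ranges are hypothetical.

```python
# Hedged sketch of one-shot object detection by data augmentation: composite
# randomized transforms of a single exemplar onto backgrounds to synthesize a
# labeled training set. File paths are hypothetical; assumes backgrounds are
# larger than the transformed exemplar.
import random
from PIL import Image

def augment_once(exemplar, background):
    """Paste a randomly rotated/scaled exemplar onto a background copy;
    return the composite image and its bounding-box label."""
    obj = exemplar.rotate(random.uniform(-25, 25), expand=True)
    scale = random.uniform(0.3, 1.0)
    obj = obj.resize((int(obj.width * scale), int(obj.height * scale)))
    x = random.randint(0, background.width - obj.width)
    y = random.randint(0, background.height - obj.height)
    sample = background.copy()
    mask = obj if obj.mode == "RGBA" else None  # respect transparency if present
    sample.paste(obj, (x, y), mask)
    return sample, (x, y, x + obj.width, y + obj.height)

exemplar = Image.open("my_mug.png")  # the user's single exemplar (hypothetical path)
backgrounds = [Image.open(f"bg_{i}.jpg") for i in range(10)]
dataset = [augment_once(exemplar, random.choice(backgrounds)) for _ in range(200)]
# `dataset` now holds (image, box) pairs suitable for training a detector.
```
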
ContributorsFakhri, Bijan (Author) / Panchanathan, Sethuraman (Thesis advisor) / McDaniel, Troy L (Committee member) / Venkateswara, Hemanth (Committee member) / Amor, Heni (Committee member) / Arizona State University (Publisher)
Created2020
Description
This project was completed as part of the InnovationSpace collaborative thesis, an entrepreneurial joint venture program that allows students to develop products that create market value while serving societal needs. This collaborative thesis was done in a team of students from various disciplines under the sponsorship of Cisco Systems, and the goal was to develop an assistive technology product for people with disabilities that incorporated the Internet of Things (IoT). The project was broken out into several different phases. Initially, the team came up with a variety of ideas based on our market research. We narrowed the ideas down to a list of three potential products and built a business model and prototype for each of them, as seen in phase 5. After reviewing them further, we ultimately selected the MecX, an assistive technology designed to increase physical activity for a disabled person. We built a working prototype for this product and created a full design with all stakeholders in mind. Once this was done, we ran surveys to test the feasibility of our product with its target demographic. Finally, we presented the product to a panel of judges and sponsors.

The attached files show the business write-up from phases 5, 6, and 7 of the project, followed by a personal reflection.
ContributorsPorter, Oscar Garfield (Author) / Trujillo, Rhett (Thesis director) / Hedges, Craig (Committee member) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05