Matching Items (16)

Description

Cyber Physical Systems (CPSs) comprise computational systems that interact with the physical world to perform sensing, communication, computation, and actuation. Common examples include Body Area Networks (BANs), Autonomous Vehicles (AVs), and power distribution systems. The close coupling between the cyber and physical worlds in a CPS manifests in two types of interactions between computing systems and the physical world: intentional and unintentional. Unintentional interactions result from the physical characteristics of the computing systems and often cause harm to the physical world; if the computing nodes are close to each other, these interactions may overlap, increasing the chances of a safety hazard. Similarly, due to the mobile nature of computing nodes in a CPS, planned and unplanned interactions with the physical world occur. These interactions represent the behavior of a computing node while it is following a planned path and during faulty operation. Both of these interactions change over time due to the dynamics (motion) of the computing node and may overlap, thereby causing harm to the physical world. The lack of proper modeling and analysis frameworks for these systems causes system designers to use ad-hoc techniques, further increasing design and development time. The thesis addresses these problems by taking a holistic approach to modeling the Computational, Physical, and Cyber Physical Interaction (CPI) aspects of a CPS and proposes modeling constructs for them. These constructs are analyzed using a safety analysis algorithm developed as part of the thesis. The algorithm computes the intersection of CPIs for both mobile and static computing nodes and determines the safety of the physical system. A framework is developed by extending AADL to support these modeling constructs, and the safety analysis algorithm is implemented as an OSATE plug-in. The applicability of the proposed approach is demonstrated by considering the safety of human tissue during the operation of a BAN and the safety of passengers traveling in an Autonomous Vehicle.
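
As a purely illustrative sketch of the kind of check such a safety analysis performs (Python, with hypothetical names and a simple circular-region model; the thesis's actual algorithm operates on AADL models via an OSATE plug-in), each node's cyber-physical interaction can be approximated as a region around its possibly time-varying position, and pairwise region intersections flag potential safety hazards:

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    # Hypothetical model: each node's cyber-physical interaction (CPI) is
    # approximated by a disc of radius `reach` centred on the node's position.
    # `position(t)` returns the node's (x, y) location at time t, so both
    # static nodes (constant position) and mobile nodes are covered.
    @dataclass
    class Node:
        name: str
        reach: float
        position: Callable[[float], Tuple[float, float]]

    def cpis_intersect(a: Node, b: Node, t: float) -> bool:
        """Two CPI regions overlap at time t if the distance between the
        node centres is smaller than the sum of their interaction radii."""
        (xa, ya), (xb, yb) = a.position(t), b.position(t)
        return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < a.reach + b.reach

    def safety_hazards(nodes: List[Node], times: List[float]) -> List[Tuple[float, str, str]]:
        """Return (time, node, node) triples at which two CPI regions overlap."""
        hazards = []
        for t in times:
            for i, a in enumerate(nodes):
                for b in nodes[i + 1:]:
                    if cpis_intersect(a, b, t):
                        hazards.append((t, a.name, b.name))
        return hazards

    # Example: a static sensor and a node moving towards it along the x-axis.
    static = Node("ban_sensor", reach=1.0, position=lambda t: (0.0, 0.0))
    mobile = Node("vehicle", reach=2.0, position=lambda t: (10.0 - 2.0 * t, 0.0))
    print(safety_hazards([static, mobile], times=[0.0, 1.0, 2.0, 3.0, 4.0]))

In this toy run the static sensor and the approaching vehicle are flagged only at t = 4.0, when their interaction regions first overlap.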
Contributors: Kandula, Sailesh Umamaheswara (Author) / Gupta, Sandeep (Thesis advisor) / Lee, Yann Hang (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

In this work, we present approximate adders and multipliers to reduce data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency, and power consumption compared to their accurate counterparts and produce fairly accurate results. We build upon the work on approximate adders and multipliers presented in [23] and [24]. First, we show how the choice of algorithm and parallel adder design can be used to implement the 2D Discrete Cosine Transform (DCT) algorithm with good performance but low area. Our implementation of the 2D DCT has PSNR performance comparable to the algorithm presented in [23], with a ~35-50% reduction in area. Next, we use the approximate 2x2 multiplier presented in [24] to implement parallel approximate multipliers. We demonstrate that if some of the 2x2 multipliers in the design of the parallel multiplier are accurate, the accuracy of the multiplier improves significantly, especially when two large numbers are multiplied. We choose Gaussian FIR Filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of our proposed approximate multiplier. We show that application of the proposed approximate multiplier improves the PSNR performance of a 32x32 FFT implementation by 4.7 dB compared to the implementation using the approximate multiplier described in [24]. We also implement a state-of-the-art image enlargement algorithm, namely Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks, and the design is synthesized in 90 nm technology. We show that a 64x64 image can be processed in 496.48 µs when clocked at 100 MHz. The average PSNR performance of our implementation using accurate parallel adders and multipliers is 31.33 dB, and that using approximate parallel adders and multipliers is 30.86 dB, when evaluated against the original image. The PSNR performance of both designs is comparable to that of the double-precision floating-point MATLAB implementation of the algorithm.
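
As a hedged illustration of why mixing accurate and approximate 2x2 blocks helps for large operands (Python; this uses the commonly studied approximate 2x2 multiplier that returns 7 instead of 9 for the single input pair 3x3, which may not be the exact design of [24]):

    def mult2x2_exact(a: int, b: int) -> int:
        """Accurate 2-bit x 2-bit multiplier (a, b in 0..3)."""
        return a * b

    def mult2x2_approx(a: int, b: int) -> int:
        """Approximate 2x2 multiplier: identical to the exact one except that
        3 x 3 yields 7 (binary 111) instead of 9, so the output fits in 3 bits.
        This is the classic under-designed 2x2 block; it may differ from [24]."""
        return 7 if (a == 3 and b == 3) else a * b

    def mult4x4(a: int, b: int, msb_exact: bool = False) -> int:
        """Compose a 4x4 multiplier out of four 2x2 blocks.
        With a = 4*ah + al and b = 4*bh + bl:
        a*b = 16*ah*bh + 4*(ah*bl + al*bh) + al*bl.
        If msb_exact is True the ah*bh partial product uses the accurate block,
        removing the largest error contribution when both operands are large."""
        ah, al = a >> 2, a & 0b11
        bh, bl = b >> 2, b & 0b11
        hh = mult2x2_exact(ah, bh) if msb_exact else mult2x2_approx(ah, bh)
        mid = mult2x2_approx(ah, bl) + mult2x2_approx(al, bh)
        return (hh << 4) + (mid << 2) + mult2x2_approx(al, bl)

    # Worst-case input 15 x 15: fully approximate vs. accurate MSB block vs. exact.
    print(mult4x4(15, 15), mult4x4(15, 15, msb_exact=True), 15 * 15)

For the worst-case input 15 x 15 the fully approximate multiplier returns 175, using an accurate block for the most significant partial product returns 207, and the exact product is 225, which illustrates the accuracy gain from keeping some 2x2 blocks accurate.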
Contributors: Vasudevan, Madhu (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Languages, especially gestural and sign languages, are best learned in immersive environments with rich feedback. Computer-Aided Language Learning (CALL) solutions for spoken languages have successfully incorporated some feedback mechanisms, but no such solution exists for signed languages. Computer Aided Sign Language Learning (CASLL) is a recent and promising field of research made feasible by advances in Computer Vision and Sign Language Recognition (SLR). Leveraging existing SLR systems for feedback-based learning is not feasible because their decision processes are not human interpretable and do not facilitate conceptual feedback to learners. Thus, fundamental research is needed towards designing systems that are modular and explainable. The explanations from these systems can then be used to produce feedback to aid in the learning process.

In this work, I present novel approaches for the recognition of location, movement, and handshape, which are components of American Sign Language (ASL), using both wrist-worn sensors and webcams. Finally, I present Learn2Sign (L2S), a chatbot-based AI tutor that can provide fine-grained conceptual feedback to learners of ASL using the modular recognition approaches. L2S is designed to provide feedback directly relating to the fundamental concepts of ASL using explainable AI. I present the system performance results in terms of precision, recall, and F-1 scores as well as validation results towards the learning outcomes of users. Retention and execution test results for 26 participants across 14 different ASL words learned using Learn2Sign are presented. Finally, I also present the results of a post-usage usability survey for all the participants. In this work, I found that learners who received live feedback on their executions improved both their execution and retention performance. The average increase in execution performance was 28 percentage points, and that for retention was 4 percentage points.
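
For reference, the reported metrics can be computed as in the short sketch below (Python; the class labels and predictions are made-up placeholders, not data from the thesis):

    from collections import Counter

    def precision_recall_f1(y_true, y_pred, positive):
        """Precision, recall and F-1 for one class (e.g. one ASL handshape)."""
        counts = Counter(zip(y_true, y_pred))
        tp = counts[(positive, positive)]
        fp = sum(c for (t, p), c in counts.items() if p == positive and t != positive)
        fn = sum(c for (t, p), c in counts.items() if t == positive and p != positive)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # Toy example with three invented handshape classes.
    truth = ["flat", "fist", "flat", "open", "fist", "flat"]
    preds = ["flat", "flat", "flat", "open", "fist", "open"]
    print(precision_recall_f1(truth, preds, positive="flat"))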
Contributors: Paudyal, Prajwal (Author) / Gupta, Sandeep (Thesis advisor) / Banerjee, Ayan (Committee member) / Hsiao, Ihan (Committee member) / Azuma, Tamiko (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Wardriving is the practice in which prospective malicious hackers drive around with a portable computer to sniff out and map potentially vulnerable networks. With the advent of smart homes and other Internet of Things devices, this raises the possibility of more insecure targets. The hardware available to the public has also become smaller and more powerful; one no longer needs to carry a complete laptop to carry out network mapping. With this miniaturization and the growing popularity of quadcopter technology, the two can be combined to create a more efficient wardriving setup in a potentially more target-rich environment. Thus, we set out to create a prototype as a proof of concept of this combination. By creating a bracket for a Raspberry Pi to be mounted to a drone with other wireless sniffing equipment, we demonstrate that one can use off-the-shelf components to create a powerful network detection device. In this write-up, we also outline some of the challenges encountered in combining these two technologies, as well as the solutions to those challenges. Adding payload weight to drones that are not designed for it detrimentally affects characteristics such as flight behavior and power consumption. Less computing power is available due to the miniaturization required for a drone-mounted solution. Communication between the miniature computer and a ground control computer is also essential to overall system operation. Below, we highlight solutions to these various problems as well as improvements that can be implemented for maximum system effectiveness.
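
As a rough, hypothetical illustration of the wireless sniffing side of such a prototype (not the authors' actual code), a Raspberry Pi with a wireless card in monitor mode could log nearby access points from 802.11 beacon frames with a short Scapy script; the interface name and setup steps below are assumptions:

    # Hypothetical beacon-frame logger for a Pi-mounted sniffer.
    # Assumes Scapy is installed and wlan1 has been put into monitor mode
    # (e.g. with airmon-ng); run as root.
    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

    seen = {}

    def log_beacon(pkt):
        if not pkt.haslayer(Dot11Beacon):
            return
        bssid = pkt[Dot11].addr2
        if bssid in seen:
            return
        # The first Dot11Elt in a beacon carries the SSID.
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
        seen[bssid] = ssid
        print(f"{bssid}  {ssid}")

    # Sniff indefinitely on the monitor-mode interface (name is an assumption).
    sniff(iface="wlan1mon", prn=log_beacon, store=False)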
Contributors: Her, Zachary (Author) / Walker, Elizabeth (Co-author) / Gupta, Sandeep (Thesis director) / Wang, Ruoyu (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description

Wardriving is the practice in which prospective malicious hackers drive around with a portable computer to sniff out and map potentially vulnerable networks. With the advent of smart homes and other Internet of Things devices, this raises the possibility of more insecure targets. The hardware available to the public has also become smaller and more powerful; one no longer needs to carry a complete laptop to carry out network mapping. With this miniaturization and the growing popularity of quadcopter technology, the two can be combined to create a more efficient wardriving setup in a potentially more target-rich environment. Thus, we set out to create a prototype as a proof of concept of this combination. By creating a bracket for a Raspberry Pi to be mounted to a drone with other wireless sniffing equipment, we demonstrate that one can use off-the-shelf components to create a powerful network detection device. In this write-up, we also outline some of the challenges encountered in combining these two technologies, as well as the solutions to those challenges. Adding payload weight to drones that are not designed for it detrimentally affects characteristics such as flight behavior and power consumption. Less computing power is available due to the miniaturization required for a drone-mounted solution. Communication between the miniature computer and a ground control computer is also essential to overall system operation. Below, we highlight solutions to these various problems as well as improvements that can be implemented for maximum system effectiveness.

Contributors: Walker, Elizabeth (Author) / Her, Zachary (Co-author) / Gupta, Sandeep (Thesis director) / Wang, Ruoyu (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2022-05
Description

American Sign Language (ASL) is used by Deaf and Hard of Hearing (DHH) individuals to communicate and learn in a classroom setting. In ASL, fingerspelling and gestures are the two primary components used for communication. Fingerspelling is commonly used for words that do not have a specifically designated sign or gesture. In technical contexts, such as the Computer Science curriculum, many terms fall into this category: most of the jargon does not have standardized ASL gestures, so students, educators, and interpreters alike have been reliant on fingerspelling, which poses challenges for all parties. This study investigates the efficacy of both fingerspelling and gestures for fifteen technical terms that do have standardized gestures. Each term's fingerspelling and gesture are assessed on preference, ease of use, ease of learning, and time by research subjects who were selected as DHH individuals familiar with ASL.

The data were collected through a series of video recordings made by the research subjects as well as a post-participation questionnaire. Each research subject produced thirty videos in total: two per technical term, one fingerspelling it and one gesturing it. Afterwards, they completed a post-participation questionnaire in which they indicated their preference and how easy it was to learn and use both fingerspelling and gestures. Additionally, the videos were analyzed to determine the time difference between fingerspelling and gestures. The analysis reveals that gestures are favored over fingerspelling: they are generally preferred, considered easier to learn and use, and faster. These results underscore the need for standardized gestures in the Computer Science curriculum to enable accessible learning that enhances communication and promotes inclusion.
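
A hedged sketch of the kind of timing comparison described above (Python; the per-term durations are invented placeholders, not the study's data, and SciPy is assumed to be available):

    # Illustrative only: invented per-term durations in seconds, one pair per term.
    from statistics import mean
    from scipy.stats import ttest_rel

    fingerspelling_s = [4.1, 3.8, 5.2, 4.6, 3.9, 4.4, 5.0, 4.8, 4.2, 3.7, 4.9, 5.1, 4.0, 4.5, 4.3]
    gesture_s        = [1.6, 1.9, 2.1, 1.7, 1.5, 2.0, 1.8, 2.2, 1.6, 1.4, 2.3, 1.9, 1.7, 1.8, 1.6]

    # Per-term (paired) differences and a paired t-test across the fifteen terms.
    diffs = [f - g for f, g in zip(fingerspelling_s, gesture_s)]
    t_stat, p_value = ttest_rel(fingerspelling_s, gesture_s)

    print(f"mean time saved by gesturing: {mean(diffs):.2f} s per term")
    print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")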

Contributors: Karim, Bushra (Author) / Gupta, Sandeep (Thesis director) / Hossain, Sameena (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2024-05