Matching Items (11)
Description
Thanks to continuous technology scaling, intelligent, fast, and smaller digital systems are now available at affordable costs. As a result, digital systems have found use in a wide range of application areas that were not even imagined before, including medical (e.g., MRI, remote or post-operative monitoring devices), automotive (e.g., adaptive cruise control, anti-lock brakes), security systems (e.g., residential security gateways, surveillance devices), and in- and out-of-body sensing (e.g., capsules swallowed by patients to measure digestive-system pH, heart monitors). Such computing systems, which are completely embedded within the application, are called embedded systems, as opposed to general-purpose computing systems. In the design of such embedded systems, power consumption and reliability are indispensable system requirements. In battery-operated portable devices, the battery is the single largest factor contributing to device cost, weight, recharging time and frequency, and ultimately usability. For example, in the Apple iPhone 4 smartphone, the battery accounts for 40% of the device weight, occupies 36% of its volume, and allows only 7 hours of talk time over 3G. As embedded systems find use in a range of sensitive applications, from biomedical applications to safety and security systems, the reliability of the computations performed becomes a crucial factor. At the current technology node, portable embedded systems can be expected to fail due to soft errors about once per year; with aggressive technology scaling, this rate is predicted to increase exponentially, to as often as once per hour.

Over the years, researchers have developed techniques, implemented at different layers of the design spectrum, to improve system power efficiency and reliability. Among the layers of design abstraction, I observe that the interface between the compiler and the processor micro-architecture possesses a unique potential for efficient design optimizations. The compiler designer can observe and analyze the application software at a fine granularity, while the processor architect analyzes the system output (power, performance, etc.) for each executed instruction. If the system knowledge available at these two design layers is integrated at the compiler/micro-architecture interface, the design optimizations at each layer can be modified to utilize available resources efficiently and thereby achieve appreciable system-level benefits.

To this effect, the thesis statement is that "by merging system design information at the compiler and micro-architecture design layers, smart compilers can be developed that achieve reliable and power-efficient embedded computing through: i) pure compiler techniques, ii) hybrid compiler/micro-architecture techniques, and iii) compiler-aware architectures." This dissertation demonstrates, through contributions in each of the three compiler-based techniques, the effectiveness of smart compilers in achieving power efficiency and reliability in embedded systems.
Contributors: Jeyapaul, Reiley (Author) / Shrivastava, Aviral (Thesis advisor) / Vrudhula, Sarma (Committee member) / Clark, Lawrence (Committee member) / Colbourn, Charles (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Reducing device dimensions, increasing transistor densities, and shrinking timing windows increase the vulnerability of processors to soft errors induced by charge-carrying particles. Since these factors are inevitable in the advancement of processor technology, the industry has been forced to improve reliability on general-purpose Chip Multiprocessors (CMPs). With the availability of increased hardware resources, redundancy-based techniques are the most promising methods for eradicating soft-error failures in CMP systems. This work proposes a novel customizable and redundant CMP architecture (UnSync) that utilizes hardware-based detection mechanisms (most of which are readily available in the processor) to reduce overheads during error-free executions. In the presence of errors (which are infrequent), an always-forward-execution recovery mechanism provides resilience in the system. The inherent nature of the UnSync architecture framework supports customization of the redundancy and thereby provides a means to achieve performance-reliability trade-offs in many-core systems. This work designs a detailed RTL model of the UnSync architecture and performs hardware synthesis to compare the hardware (power/area) overheads incurred with those of Reunion, a state-of-the-art redundant multi-core architecture. This work also performs cycle-accurate simulations over a wide range of SPEC2000 and MiBench benchmarks to evaluate the performance efficiency achieved over that of the Reunion architecture. Experimental results show that the UnSync architecture reduces power consumption by 34.5% and improves performance by up to 20% with 13.3% less area overhead, compared to the Reunion architecture for the same level of reliability.
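The abstract describes a hardware (RTL) design; purely as a rough software analogy of the core idea, redundant execution compared before results become visible, with re-execution rather than rollback on the rare mismatch, here is a minimal Python sketch. The function names, fault model, and retry policy are illustrative assumptions, not part of UnSync.

```python
import random

def run_task(inputs, fault_rate=0.0):
    """Toy 'core': compute a result, occasionally flipping a bit to model a particle-induced soft error."""
    result = sum(x * x for x in inputs)
    if random.random() < fault_rate:
        result ^= 1 << random.randrange(16)  # single-bit flip in the result
    return result

def redundant_execute(inputs, max_retries=3):
    """Run the task on two redundant 'cores' and compare before the result becomes visible.

    On a mismatch the work is simply re-executed (errors are rare) instead of rolling
    back to a checkpoint.
    """
    for _ in range(max_retries):
        a = run_task(inputs, fault_rate=0.01)
        b = run_task(inputs, fault_rate=0.01)
        if a == b:
            return a
    raise RuntimeError("persistent mismatch: escalate to a higher-level recovery mechanism")

print(redundant_execute([1, 2, 3, 4]))
```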
Contributors: Hong, Fei (Author) / Shrivastava, Aviral (Thesis advisor) / Bazzi, Rida (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The goal of this project is to use an open-source solution to implement a drone Cyber-Physical System that can fly autonomously and accurately. The proof-of-concept used to analyze the drone's flight capabilities is to fly in a pattern corresponding to the outline of an image, a process that requires both stability and precision to accurately depict the image. In this project, we found that building a Cyber-Physical System is difficult because of the tedious and complex nature of designing and testing its hardware and software components. Furthermore, we reflect on the difficulties that arose from using open-source hardware and software.
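The thesis does not detail its flight stack here, but the image-outline-to-flight-path idea can be sketched independently of any particular drone platform: convert a normalized outline into local-frame waypoints for a flight controller to visit. The coordinate frame, scale, and the circle standing in for an image contour are assumptions made for illustration.

```python
import math

def outline_to_waypoints(outline_pts, scale_m=10.0, altitude_m=5.0):
    """Map normalized outline points (x, y in [0, 1]) to local-frame waypoints (north, east, up) in meters."""
    return [(y * scale_m, x * scale_m, altitude_m) for x, y in outline_pts]

# A coarse circle stands in for a contour extracted from an image.
circle = [(0.5 + 0.4 * math.cos(2 * math.pi * t / 24),
           0.5 + 0.4 * math.sin(2 * math.pi * t / 24)) for t in range(25)]

for north, east, up in outline_to_waypoints(circle)[:3]:
    print(f"fly to (N, E, U) = ({north:.2f}, {east:.2f}, {up:.2f})")
```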
Contributors: Dedinsky, Rachel (Co-author) / Lubbers, Harrison James (Co-author) / Shrivastava, Aviral (Thesis director) / Dougherty, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Autonomous Vehicles (AVs) are inevitable entities in future mobility systems that demand safety and adaptability as two critical factors in replacing or assisting human drivers. Safety arises in defining, standardizing, quantifying, and monitoring requirements for all autonomous components. Adaptability, on the other hand, involves efficient handling of uncertainty and inconsistencies in models and data. First, I address safety by presenting a search-based test-case generation framework that can be used in training and testing deep-learning components of AVs. Next, to address adaptability, I propose a framework based on multi-valued linear temporal logic syntax and semantics that allows autonomous agents to perform model checking on systems with uncertainties. The search-based test-case generation framework provides safety assurance guarantees through formalizing and monitoring Responsibility Sensitive Safety (RSS) rules. I use the RSS rules, expressed in signal temporal logic, as qualification specifications for monitoring and screening the quality of generated test-drive scenarios. Furthermore, to extend the expressivity of existing temporal formal languages, I propose a new spatio-temporal perception logic that enables formalizing qualification specifications for perception systems. Taken together, my test-generation framework can be used for reasoning about the quality of the perception, prediction, and decision-making components in AVs. Finally, my efforts resulted in publicly available software: one is an offline monitoring algorithm based on the proposed logic to reason about the quality of perception systems; the other is an optimal planner (model checker) that accepts mission specifications and model descriptions in the form of multi-valued logic and multi-valued sets, respectively. My monitoring framework is distributed with the publicly available S-TaLiRo and Sim-ATAV tools.
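As a small, self-contained illustration of the kind of offline monitoring described above (not the actual S-TaLiRo or Sim-ATAV API), the sketch below evaluates an RSS-style safe-following-distance rule over a recorded trace and reports a worst-case robustness margin; the response-time and acceleration parameters are assumed values.

```python
def rss_safe_gap(v_rear, v_front, rho=0.5, a_max=3.0, b_min=4.0, b_max=8.0):
    """Simplified RSS minimum safe longitudinal gap (m) for a rear vehicle following a front vehicle."""
    v_after_rho = v_rear + rho * a_max
    return max(0.0,
               v_rear * rho + 0.5 * a_max * rho ** 2
               + v_after_rho ** 2 / (2 * b_min)
               - v_front ** 2 / (2 * b_max))

def offline_monitor(trace):
    """Robustness of 'always (gap >= safe gap)': the worst-case margin over the recorded trace."""
    return min(gap - rss_safe_gap(v_rear, v_front) for gap, v_rear, v_front in trace)

# Each sample: (measured gap in m, rear-vehicle speed in m/s, front-vehicle speed in m/s)
trace = [(30.0, 15.0, 14.0), (24.0, 15.0, 13.0), (17.0, 16.0, 12.0)]
print("robustness:", round(offline_monitor(trace), 2))  # negative means the rule was violated
```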
Contributors: Hekmatnejad, Mohammad (Author) / Fainekos, Georgios (Thesis advisor) / Deshmukh, Jyotirmoy V (Committee member) / Karam, Lina (Committee member) / Pedrielli, Giulia (Committee member) / Shrivastava, Aviral (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Autonomous Vehicles (AVs) have the potential to significantly evolve transportation. AVs are expected to make transportation safer by avoiding accidents caused by human error. When AVs become connected, they can exchange information with the infrastructure or with other Connected Autonomous Vehicles (CAVs) to efficiently plan their future motion and thereby increase road throughput and reduce energy consumption. Cooperative algorithms for CAVs will not be deployed in real life unless they are proved to be safe, robust, and resilient to different failure models. Since intersections are crucial areas where most accidents happen, this dissertation first focuses on making existing intersection management algorithms safe and resilient against network and computation-time delays, bounded model mismatches and external disturbances, and the existence of a rogue vehicle. Then, a generic algorithm for conflict resolution and cooperation of CAVs is proposed that ensures the safety of vehicles even when other vehicles suddenly change their plans. The proposed approach can also detect deadlock situations among CAVs and resolve them through a negotiation process. A testbed consisting of 1/10th-scale model CAVs was built to evaluate the proposed algorithms. In addition, a simulator was developed to perform tests at a large scale. Results from the conducted experiments indicate the robustness and resilience of the proposed approaches.
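A common building block of such cooperative schemes is an intersection manager that separates, in time, the vehicles that need conflicting parts of the intersection. The sketch below is a generic, heavily simplified reservation check, not the dissertation's algorithm; the cell names, time buffer, and scheduling policy are assumptions made for illustration.

```python
def conflicts(res_a, res_b, buffer_s=0.5):
    """Two reservations conflict if their paths share a cell and their time intervals (plus a buffer) overlap."""
    (cells_a, t_in_a, t_out_a), (cells_b, t_in_b, t_out_b) = res_a, res_b
    share_cells = bool(set(cells_a) & set(cells_b))
    overlap_in_time = t_in_a < t_out_b + buffer_s and t_in_b < t_out_a + buffer_s
    return share_cells and overlap_in_time

def admit(request, accepted, step_s=0.5, horizon_s=20.0):
    """Delay the requested crossing until it conflicts with no already-accepted reservation."""
    cells, t_in, t_out = request
    delay = 0.0
    while delay <= horizon_s:
        candidate = (cells, t_in + delay, t_out + delay)
        if not any(conflicts(candidate, r) for r in accepted):
            return candidate
        delay += step_s
    return None  # could not be scheduled within the horizon

first = admit((("c1", "c2"), 0.0, 2.0), [])
second = admit((("c2", "c3"), 0.5, 2.5), [first])
print(first, second)  # the second vehicle is pushed back until the shared cell is free
```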
Contributors: Khayatian, Mohammad (Author) / Shrivastava, Aviral (Thesis advisor) / Fainekos, Georgios (Committee member) / Ben Amor, Heni (Committee member) / Yang, Yezhou (Committee member) / Lou, Yingyan (Committee member) / Iannucci, Bob (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
With the advent of new advanced analysis tools and access to related published data, it is getting more difficult for data owners to suppress private information from published data while still providing useful information. This dual problem of providing useful, accurate information and protecting it at the same time has been challenging, especially in healthcare. Data owners lack an automated resource that provides layers of protection on a published dataset along with validated statistical values for usability. Differential privacy (DP) has gained a lot of attention in the past few years as a solution to this dual problem. DP is a statistical anonymity model that can protect data from adversarial observation while still supporting the intended usage. This dissertation introduces a novel DP protection mechanism called Inexact Data Cloning (IDC), which simultaneously protects and preserves information in published data while conveying the source data's intent. IDC preserves the privacy of the records by converting the raw data records into clonesets. The clonesets then pass through a classifier that removes potentially compromising clonesets, retaining only good inexact clonesets. The IDC mechanism depends on a set of privacy protection metrics, called differential privacy protection metrics (DPPM), which represent the overall protection level. IDC uses two novel performance values, the differential privacy protection score (DPPS) and the clone classifier selection percentage (CCSP), to estimate the privacy level of protected data. In support of using IDC as a viable data-security product, a software tool-chain prototype, the differential privacy protection architecture (DPPA), was developed to utilize the IDC mechanism. DPPA is a hub that facilitates a market for DP data-security mechanisms: it incorporates standalone IDC mechanisms and provides automation, IDC-protected published datasets, and a statistically verified IDC dataset diagnostic report. DPPA currently performs functional and operational benchmark processes that quantify the DP protection of a given published dataset. The DPPA tool was recently used to test a couple of health datasets, and the test results further validate the feasibility of the IDC mechanism.
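IDC and DPPA are the dissertation's own mechanisms; for readers new to DP, the textbook Laplace mechanism below illustrates the basic guarantee that such mechanisms build on. The dataset, query, and epsilon value are invented for illustration.

```python
import random

def laplace_noise(scale):
    """Zero-mean Laplace(scale) sample: the difference of two independent exponential samples."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon=1.0):
    """epsilon-differentially-private count: a counting query has sensitivity 1, so add Laplace(1/epsilon) noise."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

patient_ages = [34, 41, 29, 58, 63, 45]
print("noisy count of patients over 40:", round(dp_count(patient_ages, lambda a: a > 40, epsilon=0.5), 1))
```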
Contributors: Thomas, Zelpha (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Banerjee, Ayan (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Advances in semiconductor technology have brought computer-based systems into virtually all aspects of human life. This unprecedented integration of semiconductor-based systems in our lives has significantly increased the domain and the number of safety-critical applications – applications with unacceptable consequences of failure. Software-level error resilience schemes are attractive because they can provide commercial-off-the-shelf microprocessors with adaptive and scalable reliability. Among all software-level error resilience solutions, in-application instruction-replication-based approaches have been widely used and are deemed to be the most effective. However, existing instruction-replication schemes protect only part of the computation, i.e., arithmetic and logical instructions, and leave the rest unprotected. To improve the efficacy of instruction-level redundancy-based approaches, we developed several error detection and error correction schemes. nZDC (near Zero silent Data Corruption) is an instruction duplication scheme which protects the execution of the whole application. Rather than detecting errors on the register operands of memory and control-flow operations, nZDC checks the results of such operations. nZDC ensures the correct execution of memory write instructions by reloading the stored value and checking it against the redundantly computed value. nZDC also introduces a novel control-flow checking mechanism which replicates compare and branch instructions and detects both wrong-direction branches and unwanted jumps. Fault injection experiments show that nZDC can improve the error coverage of state-of-the-art schemes by more than 10x, without incurring any additional performance penalty.

Furthermore, we introduced two error recovery solutions. InCheck is our backward recovery solution, which makes lightweight error-free checkpoints at basic-block granularity. In the case of an error, InCheck reverts the program execution to the beginning of the last executed basic block and resumes execution with the aid of the preserved information. NEMESIS is our forward recovery scheme, which runs three versions of the computation and detects errors by checking the results of all memory write and branch operations. In the case of a mismatch, the NEMESIS diagnosis routine decides whether the error is recoverable; if so, the NEMESIS recovery routine reverts the effect of the error from the program state and resumes normal program execution from the error detection point.
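nZDC operates at the instruction level inside the compiler; as a high-level illustration of its "check results rather than operands" idea, the Python sketch below compares redundant results before a store and then re-checks the stored value by loading it back. The names and structure are illustrative only, not the actual compiler transformation.

```python
def checked_store(memory, addr, compute, recompute):
    """Sketch of duplicate-and-compare checking around a store, in the spirit of nZDC.

    The 'master' and 'shadow' computations stand in for the original and duplicated
    instruction streams; the load-back check guards the memory write itself.
    """
    master = compute()
    shadow = recompute()
    if master != shadow:
        raise RuntimeError("mismatch detected before the store became architecturally visible")
    memory[addr] = master
    if memory[addr] != shadow:
        raise RuntimeError("mismatch detected on the stored value (memory write corrupted)")
    return master

mem = {}
checked_store(mem, 0x10, lambda: 6 * 7, lambda: 6 * 7)
print(mem)  # {16: 42}
```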
Contributors: Didehban, Moslem (Author) / Shrivastava, Aviral (Thesis advisor) / Wu, Carole-Jean (Committee member) / Clark, Lawrence (Committee member) / Mahlke, Scott (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Automated driving systems are in an intensive research and development stage, and the companies developing these systems aim to deploy them on public roads in the very near future. Guaranteeing safe operation of these systems is crucial, as they are planned to carry passengers and share the road with other vehicles and pedestrians. Yet, there is no agreed-upon approach on how, and in what detail, these systems should be tested. Different organizations have different testing approaches, and one common approach is to combine simulation-based testing with real-world driving.

One of the expectations from fully automated vehicles is that they never cause an accident. However, an automated vehicle may not be able to avoid all collisions, e.g., collisions caused by other road occupants. Hence, it is important for system designers to understand the boundary-case scenarios where an autonomous vehicle can no longer avoid a collision. Besides safety, there are other expectations from automated vehicles, such as comfortable driving and minimal fuel consumption. All safety and functional expectations from an automated driving system should be captured with a set of system requirements. It is challenging to create requirements that are unambiguous and usable for the design, testing, and evaluation of automated driving systems. Another challenge is to define useful metrics for assessing testing quality because, in general, it is impossible to test every possible scenario.

The goal of this dissertation is to formalize the theory for testing automated vehicles. Various methods for automatic test generation for automated-driving systems in simulation environments are presented and compared. The contributions presented in this dissertation include (i) new metrics that can be used to discover the boundary cases between safe and unsafe driving conditions, (ii) a new approach that combines combinatorial testing and optimization-guided test generation methods, (iii) approaches that utilize global optimization methods and random exploration to generate critical vehicle and pedestrian trajectories for testing purposes, (iv) a publicly-available simulation-based automated vehicle testing framework that enables application of the existing testing approaches in the literature, including the new approaches presented in this dissertation.
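To give a flavor of the optimization-guided and random-exploration test generation mentioned in items (ii) and (iii) above, the sketch below replaces the real simulator and optimizer with a toy robustness function and plain random search; the scenario parameters and margins are assumptions, not results from the dissertation.

```python
import random

def robustness(scenario):
    """Stand-in for a full simulation run: a toy safety margin (m) between the ego vehicle and a pedestrian."""
    ped_speed, ped_start_offset = scenario
    return ped_start_offset - 0.8 * ped_speed - 1.5  # negative values correspond to a collision

def falsify(n_trials=200, seed=0):
    """Test generation reduced to its simplest form: sample scenarios and keep the one with the worst margin."""
    rng = random.Random(seed)
    worst = None
    for _ in range(n_trials):
        scenario = (rng.uniform(0.5, 2.5), rng.uniform(1.0, 6.0))  # (pedestrian speed, start offset)
        margin = robustness(scenario)
        if worst is None or margin < worst[1]:
            worst = (scenario, margin)
    return worst

scenario, margin = falsify()
print("worst scenario found:", scenario, "margin:", round(margin, 2))
```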
Contributors: Tuncali, Cumhur Erkan (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Heni (Committee member) / Kapinski, James (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Sports activities have been a cornerstone in the evolution of humankind through the ages, from the ancient Roman empire to the Olympics in the 21st century. These activities have been used as a benchmark to evaluate how humans have progressed through the sands of time. In the 21st century, machines, with the help of powerful computing and relatively new computing paradigms, have made a good case for taking up the mantle. Even though machines have been able to perform complex tasks and maneuvers, they have struggled to match the dexterity, coordination, manipulability, and acuteness displayed by humans. Bi-manual tasks are more complex and bring additional variables, such as coordination, into the task, making them harder to evaluate.

A task capable of demonstrating the above skill set would be a good measure of the progress in the field of robotic technology. Therefore, a dual-armed robot has been built and taught to handle the ball and make the basket successfully, thus demonstrating the capability of using both arms. A combination of machine learning techniques, namely reinforcement learning and imitation learning, has been used along with advanced optimization algorithms to accomplish the task.
Contributors: Kalige, Nikhil (Author) / Amor, Heni Ben (Thesis advisor) / Shrivastava, Aviral (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
At modern-day intersections, traffic lights and stop signs help human drivers cross the intersection safely. Traffic congestion in urban road networks is a costly problem that affects all major cities. Efficient operation of intersections depends largely on the accuracy and precision of human drivers, leaving a lingering uncertainty about attaining both safety and high throughput. To improve the efficiency of the existing traffic network and mitigate the effects of human error in the intersection, many studies have proposed autonomous, intelligent transportation systems. These studies often involve utilizing connected autonomous vehicles, implementing a supervisory system, or both. Implementing a supervisory system is relatively more popular due to the security concerns of vehicle-to-vehicle communication. Even though supervisory systems are a step in the right direction for security, many supervisory systems' safe operation relies solely on the promise that the connected data are correct, making system reliability difficult to achieve. To increase fault tolerance and decrease the effects of position uncertainty, this thesis proposes the Reliable and Robust Intersection Manager (R2IM), a supervisory system that uses a separate surveillance system to dependably detect vehicles present in the intersection in order to create data redundancy for more accurate scheduling of connected autonomous vehicles. Adding the surveillance system ensures that the temporal safety buffers between the arrival times of connected autonomous vehicles are maintained. This guarantees that connected autonomous vehicles can traverse the intersection safely in the event of large vehicle controller error, a single rogue car entering the intersection, or a Sybil attack. To test the proposed system under these fault models, MATLAB® was used to create simulations in order to observe the functionality of R2IM compared to the state-of-the-art supervisory system, the Robust Intersection Manager. Though R2IM is less efficient than the Robust Intersection Manager, it considers more fault models: the Robust Intersection Manager failed to maintain safety in the event of large vehicle controller errors and rogue cars, whereas R2IM resulted in zero collisions.
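The central idea, cross-checking what connected vehicles report against what the surveillance system actually observes, can be sketched as follows. The coordinates, identifiers, and tolerance are invented for illustration, and the actual R2IM scheduling logic is not shown.

```python
def cross_check(reported, detected, tol_m=0.5):
    """Compare self-reported CAV positions against an independent surveillance estimate."""
    rogue = [vid for vid, (rx, ry) in reported.items()
             if vid not in detected
             or abs(rx - detected[vid][0]) > tol_m
             or abs(ry - detected[vid][1]) > tol_m]
    unreported = [vid for vid in detected if vid not in reported]  # present but never announced itself
    return rogue, unreported

reported = {"cav1": (10.0, 2.0), "cav2": (4.0, 7.0)}
detected = {"cav1": (10.1, 2.0), "cav2": (9.0, 7.0), "car3": (1.0, 1.0)}
print(cross_check(reported, detected))  # (['cav2'], ['car3'])
```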
Contributors: Dedinsky, Rachel (Author) / Shrivastava, Aviral (Thesis advisor) / Sen, Arunabha (Committee member) / Syrotiuk, Violet (Committee member) / Arizona State University (Publisher)
Created: 2019