Matching Items (124)
Description
Manufacture of building materials requires significant energy, and as demand for these materials continues to increase, the energy requirement will as well. Offsetting this energy use will require increased focus on sustainable building materials. Further, the energy used in operating a building, particularly for heating and air conditioning, accounts for 40 percent of a building's energy use. Increasing the efficiency of building materials will reduce energy usage over the lifetime of the building. Current methods for maintaining the interior environment can be highly inefficient depending on the building materials selected. Materials such as concrete have low thermal efficiency and low heat capacity, meaning they provide little insulation. Phase change materials (PCMs) provide an opportunity to increase the environmental efficiency of buildings through their inherent latent heat storage as well as their increased heat capacity. Incorporating PCM into concrete via lightweight aggregates (LWA) by direct addition is seen as a viable option for increasing the thermal storage capabilities of concrete, thereby increasing building energy efficiency. As the PCM changes phase from solid to liquid, heat is absorbed from the surroundings, decreasing the demand on air conditioning systems on a hot day, or vice versa on a cold day. Further, these materials provide an additional insulating capacity above that of plain concrete. When the outside temperature drops, the PCM turns back into a solid and releases the energy stored during the day. PCM is a hydrophobic material and causes reductions in compressive strength when incorporated directly into concrete, as shown in previous studies. A proposed method for mitigating this detrimental effect while still incorporating PCM into concrete is to encapsulate the PCM in aggregate. This technique would, in theory, allow for the use of phase change materials directly in concrete, increasing the thermal efficiency of buildings while negating the negative effect on the compressive strength of the material.
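The latent-heat mechanism described above is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below compares stored heat in a plain panel versus a PCM-bearing one; all material properties (a paraffin-like latent heat of ~180 kJ/kg, typical specific heats, a 5% PCM loading) are assumed illustrative values, not data from this thesis.

```python
# Back-of-the-envelope comparison of thermal storage in plain concrete
# vs. concrete with PCM-bearing lightweight aggregate (LWA).
# All property values are illustrative assumptions, not thesis data.

CP_CONCRETE = 0.88   # kJ/(kg.K), sensible heat capacity of plain concrete (assumed)
CP_PCM      = 2.0    # kJ/(kg.K), sensible heat capacity of the PCM (assumed)
LATENT_PCM  = 180.0  # kJ/kg, latent heat absorbed across the phase change (assumed)

def stored_heat(mass_kg, pcm_fraction, delta_t_k, melts=True):
    """Heat (kJ) absorbed by a concrete element over a temperature swing.

    pcm_fraction -- mass fraction of PCM in the element (0 for plain concrete)
    melts        -- whether the swing crosses the PCM melting point,
                    unlocking the latent-heat term
    """
    m_pcm = mass_kg * pcm_fraction
    m_conc = mass_kg - m_pcm
    sensible = (m_conc * CP_CONCRETE + m_pcm * CP_PCM) * delta_t_k
    latent = m_pcm * LATENT_PCM if melts else 0.0
    return sensible + latent

plain = stored_heat(100.0, 0.00, 10.0)     # 100 kg panel, 10 K daily swing
with_pcm = stored_heat(100.0, 0.05, 10.0)  # same panel, 5% PCM by mass
print(f"plain: {plain:.0f} kJ, with PCM: {with_pcm:.0f} kJ "
      f"({with_pcm / plain:.2f}x)")        # latent term roughly doubles storage
```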
Contributors: Sharma, Breeann (Author) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The rapid growth of data generated by Internet of Things (IoT) devices such as smart phones and smart home devices presents new challenges to cloud computing in transferring, storing, and processing the data. With increasingly more powerful edge devices, edge computing, on the other hand, has the potential to improve responsiveness, privacy, and cost efficiency. However, resources across the cloud and edge are highly distributed and highly diverse. To address these challenges, this work proposes EdgeFaaS, a Function-as-a-Service (FaaS) based computing framework that supports the flexible, convenient, and optimized use of distributed and heterogeneous resources across IoT, edge, and cloud systems. EdgeFaaS allows cluster resources and individual devices to be managed under the same framework and to provide computational and storage resources for functions. It provides virtual function and virtual storage interfaces for consistent function and storage management across heterogeneous compute and storage resources. It automatically optimizes the scheduling of functions and the placement of data according to their performance and privacy requirements. EdgeFaaS is evaluated on two edge workflows: a video analytics workflow and a federated learning workflow, both of which are representative edge applications and involve large amounts of input data generated from edge devices.
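The performance- and privacy-aware scheduling idea lends itself to a compact illustration. The sketch below shows one way a FaaS scheduler might pick between edge and cloud resources under privacy and latency constraints; the `Resource`/`FunctionSpec` types and the selection rule are hypothetical stand-ins, not EdgeFaaS's actual interfaces.

```python
# Minimal sketch of privacy- and performance-aware function placement in
# the spirit of EdgeFaaS. Types and the scoring rule are illustrative
# assumptions, not the framework's real API.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    tier: str          # "edge" or "cloud"
    latency_ms: float  # round-trip latency from the data source
    capacity: float    # available compute, arbitrary units

@dataclass
class FunctionSpec:
    name: str
    privacy_sensitive: bool  # must stay on the edge if True
    demand: float            # compute demand, arbitrary units
    deadline_ms: float

def place(fn: FunctionSpec, resources: list[Resource]) -> Resource:
    """Pick the feasible resource with the lowest latency."""
    feasible = [
        r for r in resources
        if r.capacity >= fn.demand
        and r.latency_ms <= fn.deadline_ms
        and not (fn.privacy_sensitive and r.tier != "edge")
    ]
    if not feasible:
        raise RuntimeError(f"no placement satisfies {fn.name}'s requirements")
    return min(feasible, key=lambda r: r.latency_ms)

pool = [Resource("rpi-cluster", "edge", 5.0, 2.0),
        Resource("gpu-vm", "cloud", 60.0, 100.0)]
infer = FunctionSpec("face-blur", privacy_sensitive=True, demand=1.0, deadline_ms=50.0)
print(place(infer, pool).name)   # -> rpi-cluster (privacy pins it to the edge)
```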
Contributors: Jin, Runyu (Author) / Zhao, Ming (Thesis advisor) / Shrivastava, Aviral (Committee member) / Sarwat Abdelghany Aly Elsayed, Mohamed (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
With the breakdown of Dennard scaling, computer architects can no longer rely on integrated circuit energy efficiency to scale with transistor density, and must under-clock or power-gate parts of their designs in order to fit within given power budgets. Hardware accelerators may improve the energy efficiency of some compute-intensive tasks, but as more tasks are accelerated, the general-purpose portions of workloads account for a larger share of execution time while also leaving less instruction-, data-, or task-level parallelism to exploit. Adaptive computing systems have the potential to address these challenges by modifying their behavior at runtime. Adaptation requires runtime decision-making, which can be performed both in hardware and in software. While software-based decision-making is more flexible and can execute higher-complexity operations than hardware, it also incurs a significant latency and power overhead. Hardware designs are more limited in the space of decisions they can make, but have direct access to their own internal microarchitectural state and can make faster decisions, allowing for better-informed adaptation and extracting previously unobtainable performance and security benefits. In this dissertation I study (i) the viability and trade-offs of general-purpose adaptive systems, (ii) the difficulty and complexity of making adaptation decisions, and (iii) how time spent in the observation-analysis-adaptation cycle affects adaptation benefits. I introduce techniques for (a) modeling and understanding high performance computing systems and microarchitecture, (b) enabling hardware learning and decision-making through low-latency networks, and (c) securing hardware designs using runtime decision-making. I propose an always-awake, actively learning 'hardware nervous system' pervasive throughout the chip that can reason about individual hardware modules' performance, energy usage, and security. I present the design and implementation of (1) a reference architecture and (2) a microarchitecture-aware static binary instrumentation tool. Finally, I provide results showing (1) that runtime adaptation is necessary to continue improving performance on general-purpose tasks, (2) that significant performance loss and performance variation happen below the ISA level and are unobservable without hardware support, and (3) that hardware must possess decision-making and 'self-awareness' capabilities at the microarchitecture level in order to efficiently use its own faculties.
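The observation-analysis-adaptation cycle is concrete enough to sketch. The toy step below tunes a hypothetical prefetcher from counter readings; the counter names, thresholds, and knob are invented for illustration, and a software rendering only conveys the policy's shape, whereas the dissertation argues such loops must run in hardware to be fast enough.

```python
# Illustrative single observation-analysis-adaptation cycle for a
# hypothetical prefetcher. Counter names and thresholds are invented.

def adapt_step(counters, set_aggressiveness):
    """Observe counters, analyze prefetch accuracy, adapt the knob."""
    issued = max(counters["prefetches_issued"], 1)     # observe
    accuracy = counters["prefetch_hits"] / issued      # analyze
    if accuracy < 0.3:
        set_aggressiveness("low")    # adapt: wasteful -> throttle
    elif accuracy > 0.7:
        set_aggressiveness("high")   # adapt: accurate -> ramp up
    return accuracy

sample = {"prefetches_issued": 1000, "prefetch_hits": 150}
adapt_step(sample, lambda level: print(f"prefetcher -> {level}"))  # prints "low"
```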
Contributors: Isakov, Mihailo (Author) / Kinsy, Michel (Thesis advisor) / Shrivastava, Aviral (Committee member) / Rudd, Kevin (Committee member) / Gadepally, Vijay (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Pan Tilt Traffic Cameras (PTTC) are a vital component of traffic management systems for monitoring and surveillance. In a real-world scenario, if a vehicle is in pursuit of another vehicle, or an accident has occurred at an intersection causing traffic stoppages, accurate and reliable data from PTTC are necessary to quickly localize the cars on a map for effective emergency response, especially as more and more traffic systems are automated using machine learning. However, the position (orientation) of the PTTC with respect to the environment is often unknown, as most of them lack inertial measurement units or encoders. Current state-of-the-art systems (1) demand high-performance compute and use carbon-footprint-heavy deep neural networks (DNNs), (2) are only applicable to scenarios with appropriate lane markings or roundabouts, and (3) demand complex mathematical computations to determine focal length and optical center before determining the pose. A compute-light approach, TIPANGLE, is presented in this work. The approach uses the concept of Siamese neural networks (SNNs) built on simple mathematical functions, i.e., Euclidean distance and contrastive loss, to achieve the objective. The effectiveness of the approach is demonstrated through a thorough comparison with alternative approaches and by executing the approach on an embedded system, i.e., a Raspberry Pi 3.
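The two simple mathematical functions the abstract names can be written out directly. Below is a minimal sketch of the Euclidean distance and contrastive loss for a pair of Siamese-network embeddings; the margin value and toy embeddings are illustrative, not TIPANGLE's trained values.

```python
# Sketch of the Siamese-network distance and contrastive loss named in
# the abstract. Margin and embedding values are illustrative.
import numpy as np

def contrastive_loss(emb_a, emb_b, same_pose, margin=1.0):
    """Contrastive loss on a pair of embeddings.

    same_pose -- 1 if the two views share the same pan/tilt pose, else 0
    """
    d = np.linalg.norm(emb_a - emb_b)  # Euclidean distance between embeddings
    # Pull matching pairs together; push mismatched pairs at least
    # `margin` apart.
    return same_pose * d**2 + (1 - same_pose) * max(0.0, margin - d)**2

a, b = np.array([0.1, 0.9]), np.array([0.15, 0.88])
print(contrastive_loss(a, b, same_pose=1))  # small: embeddings agree
print(contrastive_loss(a, b, same_pose=0))  # large: pair should be far apart
```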
Contributors: Jagadeesha, Shreehari (Author) / Shrivastava, Aviral (Thesis advisor) / Gopalan, Nakul (Committee member) / Arora, Aman (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
5G Millimeter Wave (mmWave) technology holds great promise for Connected Autonomous Vehicles (CAVs) due to its ability to achieve data rates in the Gbps range. However, mmWave suffers from high beamforming overhead and requires line of sight (LOS) to maintain a strong connection. For Vehicle-to-Infrastructure (V2I) scenarios, where CAVs connect to roadside units (RSUs), these drawbacks become apparent. Because vehicles are dynamic, there is a large potential for link blockages, which in turn is detrimental to the connected applications running on the vehicle, such as cooperative perception and remote driver takeover. Existing RSU selection schemes base their decisions on signal strength and vehicle trajectory alone, which is not enough to prevent the blockage of links. Most recent CAV motion planning algorithms routinely use other vehicles' near-future plans, obtained either by explicit communication among vehicles or by prediction. In this thesis, I make use of this knowledge of other vehicles' near-future path plans to further improve the RSU association mechanism for CAVs. I solve the RSU association problem by converting it to a shortest path problem with the objective of maximizing the total communication bandwidth; I call the resulting scheme B-AWARE. Evaluations of B-AWARE in simulation using Simulation of Urban MObility (SUMO) and Digital twin for self-dRiving Intelligent VEhicles (DRIVE) on 12 highway and city street scenarios with varying traffic density and RSU placements show that B-AWARE results in a 1.05x improvement of the potential data rate in the average case and 1.28x in the best case vs. the state of the art. More impressively, B-AWARE reduces the time spent with no connection by 48% in the average case and 251% in the best case compared to state-of-the-art methods. This is partly a result of B-AWARE eliminating almost 100% of blockage occurrences in simulation.
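The shortest-path formulation can be illustrated with a small dynamic program over a time-expanded graph: layers are path-plan timesteps, nodes are candidate RSUs, and maximizing total bandwidth is equivalent to a shortest path over negated weights. The bandwidth table and handover penalty below are invented for illustration; B-AWARE's exact graph construction may differ.

```python
# Sketch of "RSU association as shortest path": dynamic programming over
# a time-expanded graph of (timestep, RSU) nodes. Bandwidth values and
# the handover penalty are illustrative assumptions.

HANDOVER = 2.0   # assumed cost (in bandwidth units) of switching RSUs

def best_association(bandwidth):
    """bandwidth[t][r]: predicted data rate to RSU r at timestep t
    (0 when the link is predicted blocked). Returns the RSU sequence
    maximizing total bandwidth and that total."""
    n_steps, n_rsus = len(bandwidth), len(bandwidth[0])
    best = list(bandwidth[0])          # best[r]: max total ending at RSU r
    choice = [list(range(n_rsus))]     # predecessor RSU per (t, r)
    for t in range(1, n_steps):
        new_best, picks = [], []
        for r in range(n_rsus):
            # staying on the same RSU is free; switching pays HANDOVER
            prev, src = max((best[p] - (0 if p == r else HANDOVER), p)
                            for p in range(n_rsus))
            new_best.append(prev + bandwidth[t][r])
            picks.append(src)
        best = new_best
        choice.append(picks)
    r = max(range(n_rsus), key=lambda x: best[x])
    path = [r]
    for t in range(n_steps - 1, 0, -1):  # backtrack the argmax
        r = choice[t][r]
        path.append(r)
    return path[::-1], max(best)

bw = [[10, 4], [0, 6], [0, 8], [9, 3]]   # RSU 0 predicted blocked at t=1..2
print(best_association(bw))              # -> ([0, 1, 1, 0], 29.0): switch and return
```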
Contributors: Szeto, Matthew (Author) / Shrivastava, Aviral (Thesis advisor) / LiKamWa, Robert (Committee member) / Meuth, Ryan (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Recent advances in autonomous vehicle (AV) technologies have ensured that autonomous driving will soon be present in real-world traffic. Despite the potential of AVs, many studies have shown that traffic accidents in hybrid traffic environments (where both AVs and human-driven vehicles (HVs) are present) are inevitable because of the unpredictability of human-driven vehicles. Given that eliminating accidents is impossible, an achievable goal in designing AVs is to design them in such a way that they will not be blamed for any accident in which they are involved. This work proposes BlaFT, a Blame-Free motion planning algorithm in hybrid Traffic. BlaFT is designed to be compatible with HVs and other AVs, and will not be blamed for accidents in a structured road environment. The work also proves that no accidents will happen if all AVs use the BlaFT motion planner, and that in hybrid traffic an AV using BlaFT will be blame-free even if it is involved in a collision. The work instantiated scores of BlaFT and HV vehicles in an urban roadscape loop in Simulation of Urban MObility (SUMO), ran the simulation for several hours, and observed that as the percentage of BlaFT vehicles increases, the traffic becomes safer. Adding BlaFT vehicles to HVs also increases the efficiency of traffic as a whole by up to 34%.
Contributors: Park, Sanggu (Author) / Shrivastava, Aviral (Thesis advisor) / Wang, Ruoyu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
High-temperature mechanical behaviors of metal alloys, and the underlying microstructural variations responsible for such behaviors, are essential areas of interest for many industries, particularly for applications such as jet engines. Anisotropic grain structures, changes of preferred grain orientation, and other grain transformations occur during metal powder bed fusion additive manufacturing processes, due to variations in thermal gradients and cooling rates, and afterward under the different thermomechanical loads that parts experience in their specific applications; these changes can impact mechanical properties at both room and high temperatures. In this study, an in-depth analysis is conducted of how different microstructural features, such as crystallographic texture, grain size, grain boundary misorientation angles, and inherent defects, as byproducts of the electron beam powder bed fusion (EB-PBF) AM process, impact anisotropic mechanical behaviors and softening behaviors due to interacting mechanisms. Mechanical testing is conducted on EB-PBF Ti6Al4V parts made at different build orientations at temperatures up to 600°C. Microstructural analysis using electron backscattered diffraction (EBSD) is conducted on samples before and after mechanical testing to understand the interacting impact that temperature and mechanical load have on the activation of certain mechanisms. The vertical samples showed larger grain sizes, with an average of 6.6 µm, a lower average misorientation angle, and subsequently lower strength values than the two horizontal samples. Among the three strong preferred grain orientations of the α phase, <1 1 2̄ 1> and <1 1 2̄ 0> were dominant in horizontally built samples, whereas <0 0 0 1> was dominant in vertically built samples. Thus, the strong microstructural variation observed among different EB-PBF Ti6Al4V samples mainly resulted in anisotropic behaviors. Furthermore, the α grains showed a significant increase in average grain size for all samples with increasing test temperature, especially from 400°C to 600°C, indicating grain growth and coarsening as potential softening mechanisms, along with temperature-induced dislocation motion. The severity of internal and external defects with respect to fatigue strength has been evaluated non-destructively using quantitative methods, i.e., Murakami's square root of area parameter model and Basquin's model, and the external surface defects were found to be more critical as potential crack initiation sites.
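Both fatigue models named above have compact closed forms. The sketch below uses the commonly cited Murakami coefficients (1.43 for surface defects, 1.56 for internal ones) and placeholder Basquin constants; none of these values are fitted to the EB-PBF Ti6Al4V data in this thesis.

```python
# Sketch of the two fatigue models named in the abstract. Murakami
# coefficients follow the commonly cited form; Basquin constants here
# are placeholders, not values fitted to the thesis data.

def murakami_fatigue_limit(hv, sqrt_area_um, location="surface"):
    """Fatigue limit (MPa) from Vickers hardness and defect size.

    sqrt_area_um -- square root of the defect's projected area, in microns
    """
    c = {"surface": 1.43, "internal": 1.56}[location]
    return c * (hv + 120) / sqrt_area_um ** (1 / 6)

def basquin_stress_amplitude(n_cycles, sigma_f=2000.0, b=-0.1):
    """Basquin relation: stress amplitude sustainable for n_cycles.
    sigma_f (fatigue strength coefficient, MPa) and b are assumed."""
    return sigma_f * (2 * n_cycles) ** b

# A surface defect of the same size yields a lower fatigue limit than an
# internal one, i.e., surface defects are the more critical initiators:
print(murakami_fatigue_limit(350, 50, "surface"))   # ~350 MPa
print(murakami_fatigue_limit(350, 50, "internal"))  # ~382 MPa
print(basquin_stress_amplitude(1e6))                # ~469 MPa at 1M cycles
```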
Contributors: Mian, Md Jamal (Author) / Ladani, Leila (Thesis advisor) / Razmi, Jafar (Committee member) / Shuaib, Abdelrahman (Committee member) / Mobasher, Barzin (Committee member) / Nian, Qiong (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Many companies face pressure to deploy flexible compute infrastructures to manage their operations. However, current developments in cloud and edge computing have created a data processing asymmetry challenge. On the edge, workloads frequently require low-latency responses, contend with connectivity and bandwidth instabilities, may require privacy guarantees, and may perform under limited or high-variance compute resources. In the cloud, workloads tolerate longer latency, expect highly available infrastructure, access high-performance compute resources, and have more power available, but may be further from where the processing results are needed. This compute asymmetry challenge requires a new computational paradigm. In this work, I advance a new computing architecture model, called the Continuum Computing Architecture (CCA), and validate this model with a candidate architecture. CCA is a unifying edge-fog-cloud computing model that provides the following capabilities: (i) a continuum of compute that spans from network-connected edge devices to the cloud, from very low power consumption to high-performance compute; (ii) the same architecture with different micro-architectures along this compute continuum, a single RISC-V instruction set architecture with reconfigurable processing units; (iii) portability across all scales, so the same program can be run across the continuum with different latencies and power utilizations; and (iv) fully supported secure shared memory features, with physical memories along the continuum abstracted to allow edge and cloud to share data in a transparent fashion. The validating architecture has three micro-architectures. The edge micro-architecture, Parmenides, targets accelerator-based edge processing systems-on-chip (SoCs). Parmenides includes security features to protect the SoC in uncontrolled environments while adapting its power usage and processing to ambient events. The fog and cloud micro-architectures, Melissus and Zeno, must support application data distribution across the memory of many compute nodes to achieve the desired scale and performance. As a solution, I introduce the Eleatic Memory Model (EMM): a global shared memory architecture with hardware-supported global memory access permissions. All memory accesses are made with a namespace-based capability scheme that supports improved scalability and memory security. The CCA model addresses several memory-centric security challenges, including the misuse of resources, risks to application and data integrity, and concerns over authorization and confidentiality.
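The namespace-based capability scheme can be modeled in a few lines. The toy check below gates each access on namespace, address range, and permission bits; the data layout and permission encoding are invented for illustration, and in EMM this check is performed by hardware rather than software.

```python
# Toy model of a namespace-based capability check like the one EMM's
# hardware performs on global memory accesses. Layout and permission
# encoding are invented; the real check happens in hardware.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    namespace: str      # which global namespace the capability grants
    base: int           # first address covered
    length: int         # extent of the grant
    perms: frozenset    # subset of {"r", "w", "x"}

def check_access(cap: Capability, namespace: str, addr: int, op: str) -> bool:
    """Gate a load/store the way a capability-checked memory port might."""
    return (cap.namespace == namespace
            and cap.base <= addr < cap.base + cap.length
            and op in cap.perms)

cap = Capability("sensor-frames", 0x1000, 0x800, frozenset({"r"}))
assert check_access(cap, "sensor-frames", 0x1400, "r")      # permitted read
assert not check_access(cap, "sensor-frames", 0x1400, "w")  # write denied
assert not check_access(cap, "model-weights", 0x1400, "r")  # wrong namespace
```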
Contributors: Ehret, Alan (Author) / Kinsy, Michel A (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Rudd, Kevin (Committee member) / Gettings, Karen (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Coarse-grain reconfigurable architectures (CGRAs) have shown significant improvements as hardware accelerators while demanding low power. Such acceleration derives from instruction-level parallelism and is exploited by many techniques. Modulo scheduling is a popular software pipelining technique that provides an efficient heuristic for accelerating loops, the repetitive regions of an application. Existing modulo scheduling algorithms suffer from loop-exit problems that limit CGRA acceleration to loops with a known trip count and no exit statements. Another notable limitation is the early-exit problem, where loops can only terminate after a certain number of iterations once the CGRA moves to the kernel stage. To circumvent these obstacles, COMSAT introduces a modified modulo scheduling technique that acts as an external module and can be applied to any existing scheduling/mapping algorithm with minimal hardware changes. Experiments on the MiBench and Rodinia benchmark suites show that COMSAT achieved an average speedup of 3x across benchmarks and 10x in kernel regions. Without COMSAT's techniques, only 25% of said loops would have been able to accelerate, reducing benchmark and kernel speedups to 1.25x and 3.63x respectively.
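Modulo scheduling's core invariant is small enough to sketch: an operation scheduled at cycle t occupies its processing element at slot t mod II, so a new loop iteration can be initiated every II cycles. The tiny list scheduler below illustrates that invariant and the resource lower bound on II; it is not COMSAT's algorithm, and the example loop body is invented.

```python
# Illustrative modulo scheduler: ops placed at cycle t occupy a PE at
# slot (t mod II). Not COMSAT's algorithm; the loop body is invented.
from math import ceil

def modulo_schedule(deps, n_pes):
    """deps[i] = predecessor op indices for op i (a DAG, topologically
    ordered). Returns (II, schedule) with schedule[i] = op i's start cycle."""
    n_ops = len(deps)
    ii = ceil(n_ops / n_pes)              # ResMII: resource lower bound on II
    while True:
        slots = [0] * ii                  # PEs already used at each slot mod II
        sched = {}
        ok = True
        for i in range(n_ops):
            t = max((sched[p] + 1 for p in deps[i]), default=0)
            while slots[t % ii] >= n_pes:  # find a free modulo slot
                t += 1
                if t > n_ops * ii:         # give up at this II
                    ok = False
                    break
            if not ok:
                break
            sched[i] = t
            slots[t % ii] += 1
        if ok:
            return ii, sched
        ii += 1                            # retry with a larger II

# diamond dependence 0 -> {1, 2} -> 3 on 2 PEs: II=2, ops 0 and 3 share slot 0
print(modulo_schedule([[], [0], [0], [1, 2]], n_pes=2))
```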
Contributors: Ta, Vinh (Author) / Shrivastava, Aviral (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kinsy, Michel (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Autonomous Vehicles (AVs) are inevitable entities in future mobility systems that demand safety and adaptability as two critical factors in replacing/assisting human drivers. Safety arises in defining, standardizing, quantifying, and monitoring requirements for all autonomous components. Adaptability, on the other hand, involves efficient handling of uncertainty and inconsistencies in models and data. First, I address safety by presenting a search-based test-case generation framework that can be used in training and testing deep-learning components of AVs. Next, to address adaptability, I propose a framework based on multi-valued linear temporal logic syntax and semantics that allows autonomous agents to perform model checking on systems with uncertainties. The search-based test-case generation framework provides safety assurance guarantees by formalizing and monitoring Responsibility Sensitive Safety (RSS) rules. I use the RSS rules, expressed in signal temporal logic, as qualification specifications for monitoring and screening the quality of generated test-drive scenarios. Furthermore, to extend the expressivity of existing temporal-based formal languages, I propose a new spatio-temporal perception logic that enables formalizing qualification specifications for perception systems. Altogether, my test-generation framework can be used for reasoning about the quality of perception, prediction, and decision-making components in AVs. Finally, my efforts resulted in two pieces of publicly available software. One is an offline monitoring algorithm based on the proposed logic for reasoning about the quality of perception systems. The other is an optimal planner (model checker) that accepts mission specifications and model descriptions in the form of multi-valued logic and multi-valued sets, respectively. My monitoring framework is distributed with the publicly available S-TaLiRo and Sim-ATAV tools.
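One of the monitored RSS rules, the safe longitudinal distance, makes a compact example. The sketch below checks a trace against the standard RSS gap formula; the parameter values (reaction time, acceleration and braking bounds) are assumed, and this simple pass/fail check stands in for the full signal temporal logic semantics.

```python
# Sketch of offline monitoring of one RSS rule: at every trace step the
# rear vehicle must keep at least the RSS safe longitudinal distance.
# Parameter values are assumed; real monitors use full STL semantics.

def rss_safe_distance(v_rear, v_front, rho=0.5, a_max=3.0,
                      b_min=4.0, b_max=8.0):
    """RSS minimum longitudinal gap (m) between a rear and front vehicle."""
    d = (v_rear * rho + 0.5 * a_max * rho**2
         + (v_rear + rho * a_max)**2 / (2 * b_min)
         - v_front**2 / (2 * b_max))
    return max(d, 0.0)

def monitor(trace):
    """trace: list of (gap_m, v_rear, v_front) samples. Returns indices
    where the RSS rule is violated (empty list = specification satisfied)."""
    return [i for i, (gap, vr, vf) in enumerate(trace)
            if gap < rss_safe_distance(vr, vf)]

trace = [(50.0, 20.0, 20.0), (12.0, 25.0, 15.0)]
print(monitor(trace))   # -> [1]: the second sample is too close for its speeds
```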
Contributors: Hekmatnejad, Mohammad (Author) / Fainekos, Georgios (Thesis advisor) / Deshmukh, Jyotirmoy V (Committee member) / Karam, Lina (Committee member) / Pedrielli, Giulia (Committee member) / Shrivastava, Aviral (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021