Matching Items (174)
Description
Manufacture of building materials requires significant energy, and as demand for these materials continues to increase, the energy requirement will as well. Offsetting this energy use will require increased focus on sustainable building materials. Further, the energy used in buildings, particularly for heating and air conditioning, accounts for 40 percent of a building's energy use. Increasing the efficiency of building materials will reduce energy usage over the lifetime of the building. Current methods for maintaining the interior environment can be highly inefficient depending on the building materials selected. Materials such as concrete have low thermal efficiency and low heat capacity, meaning they provide little insulation. Use of phase change materials (PCM) provides the opportunity to increase the environmental efficiency of buildings by exploiting their inherent latent heat storage as well as their increased heat capacity. Incorporating PCM into concrete via lightweight aggregates (LWA) by direct addition is seen as a viable option for increasing the thermal storage capabilities of concrete, thereby increasing building energy efficiency. As PCM changes phase from solid to liquid, it absorbs heat from the surroundings, decreasing the demand on air conditioning systems on a hot day, or vice versa on a cold day. Further, these materials provide additional insulating capacity above that of plain concrete. When the outside temperature drops, the PCM solidifies again and releases the energy stored during the day. PCM is a hydrophobic material and, as previous studies have shown, causes reductions in compressive strength when incorporated directly into concrete. A proposed method for mitigating this detrimental effect, while still incorporating PCM into concrete, is to encapsulate the PCM in aggregate. This technique would, in theory, allow phase change materials to be used directly in concrete, increasing the thermal efficiency of buildings while mitigating the negative effect on the compressive strength of the material.
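To make the latent-heat argument concrete, the back-of-the-envelope calculation below compares the heat stored over a daily temperature swing by one cubic meter of plain concrete with the same concrete carrying PCM-filled lightweight aggregate. All property values (paraffin latent heat, PCM dosage, concrete density and specific heat, temperature swing) are typical handbook figures assumed for illustration, not results from this thesis.

```python
# Illustrative comparison of sensible vs. latent heat storage in concrete.
# All property values are typical/assumed, not taken from this thesis.

CONCRETE_DENSITY = 2300.0      # kg/m^3, typical normal-weight concrete
CONCRETE_SPECIFIC_HEAT = 0.88  # kJ/(kg*K)
PCM_LATENT_HEAT = 180.0        # kJ/kg, typical paraffin PCM (assumed)
PCM_DOSAGE = 50.0              # kg of PCM per m^3 of concrete (assumed)
TEMPERATURE_SWING = 10.0       # K, assumed daily swing across the melting range

def sensible_storage_kj(volume_m3: float) -> float:
    """Heat stored by plain concrete over the temperature swing (kJ)."""
    return CONCRETE_DENSITY * volume_m3 * CONCRETE_SPECIFIC_HEAT * TEMPERATURE_SWING

def latent_storage_kj(volume_m3: float) -> float:
    """Extra heat absorbed or released by the PCM as it changes phase (kJ)."""
    return PCM_DOSAGE * volume_m3 * PCM_LATENT_HEAT

if __name__ == "__main__":
    plain = sensible_storage_kj(1.0)
    with_pcm = plain + latent_storage_kj(1.0)
    print(f"Plain concrete: {plain:,.0f} kJ per m^3 over a {TEMPERATURE_SWING:.0f} K swing")
    print(f"With PCM-LWA:   {with_pcm:,.0f} kJ per m^3 ({latent_storage_kj(1.0):,.0f} kJ from latent heat)")
```

With these assumed numbers, the PCM adds roughly 9,000 kJ of latent storage on top of about 20,000 kJ of sensible storage per cubic meter, which is the mechanism by which such a material flattens indoor temperature swings.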
Contributors: Sharma, Breeann (Author) / Neithalath, Narayanan (Thesis advisor) / Mobasher, Barzin (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This dissertation considers the question of how convenient access to copious networked observational data impacts our ability to learn causal knowledge. It investigates in what ways learning causality from such data is different from -- or the same as -- traditional causal inference, which often deals with small-scale i.i.d. data collected from randomized controlled trials. For example, how can we exploit network information for a series of tasks in the area of learning causality? To answer this question, the dissertation develops a suite of novel causal learning algorithms that offer actionable insights for a series of causal inference tasks with networked observational data. The work aims to benefit real-world decision-making across a variety of highly influential applications. The first part of this dissertation investigates the task of inferring individual-level causal effects from networked observational data. First, it presents a representation-balancing framework for handling the influence of hidden confounders to achieve accurate estimates of causal effects. Second, it extends the framework with an adversarial learning approach to properly combine two types of existing heuristics: representation balancing and treatment prediction. The second part of the dissertation describes a framework for counterfactual evaluation of treatment assignment policies with networked observational data. A novel framework that captures patterns of hidden confounders is developed to provide more informative input for downstream counterfactual evaluation methods. The third part presents a framework for debiasing two-dimensional, grid-based e-commerce search with observational search log data, where an implicit network connects neighboring products on a search result page. A novel inverse propensity scoring framework that models user behavior patterns for two-dimensional display on e-commerce websites is developed, aiming to optimize the online performance of ranking algorithms with offline log data.
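As a point of reference for the debiasing task in the third part, the standard inverse propensity scoring (IPS) objective for learning to rank from logged clicks can be sketched as follows; this is the generic textbook form with the examination propensity left abstract, whereas the dissertation's contribution lies in modeling that propensity for a two-dimensional grid display.

$$\hat{R}_{\mathrm{IPS}}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n}\ \sum_{d:\,c_i(d)=1} \frac{\Delta\big(d,\ \pi(q_i)\big)}{P\big(o_i(d)=1 \mid q_i,\ \mathrm{pos}_i(d)\big)},$$

where $c_i(d)=1$ marks a clicked product, $\Delta$ is a rank-based loss for where the new ranker $\pi$ places $d$, and the denominator is the probability that the user examined the grid position in which $d$ was displayed. Dividing each click by its examination propensity corrects for products being clicked less often merely because they sat in poorly examined positions of the grid.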
Contributors: Guo, Ruocheng (Author) / Liu, Huan (Thesis advisor) / Candan, K. Selcuk (Committee member) / Xue, Guoliang (Committee member) / Kiciman, Emre (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
With the proliferation of mobile computing and the Internet-of-Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating enormous volumes of data at the network edge. Driven by this trend, there is an urgent need to push the artificial intelligence (AI) frontiers to the network edge to fully unleash the potential of edge big data. This dissertation aims to comprehensively study collaborative learning and optimization algorithms to build a foundation for edge intelligence. Under this common theme, the dissertation is broadly organized into three parts. The first part focuses on model learning with limited data and limited computing capability at the network edge. A global model initialization is first obtained by running federated learning (FL) across many edge devices; based on this initialization, a semi-supervised algorithm is devised for an edge device to carry out quick adaptation, aiming to address the insufficiency of labeled data and to learn a personalized model efficiently. In the second part, collaborative learning between the edge and the cloud is studied to achieve real-time edge intelligence. More specifically, a distributionally robust optimization (DRO) approach is proposed to enable the synergy between local data processing and cloud knowledge transfer. Two attractive uncertainty models are investigated for the cloud knowledge transfer: a distribution uncertainty set based on the cloud data distribution, and a prior distribution of the edge model conditioned on the cloud model. Collaborative learning algorithms are developed along this line. The final part focuses on developing an offline, model-based, safe Inverse Reinforcement Learning (IRL) algorithm for connected Autonomous Vehicles (AVs). A reward penalty is introduced to penalize unsafe states, and a risk-measure-based approach is proposed to mitigate the model uncertainty introduced by offline training. Experimental results demonstrate the improvement of the proposed algorithm over existing baselines in terms of cumulative rewards.
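The federated initialization step in the first part can be pictured with the standard FedAvg aggregation rule: each edge device trains locally, and the server averages the resulting weights, weighted by local sample counts. The sketch below is a minimal, framework-agnostic illustration of that rule only; the local training and the semi-supervised personalization developed in this dissertation are not reproduced here.

```python
# Minimal sketch of FedAvg-style weight aggregation (illustrative only;
# local training and semi-supervised adaptation are omitted).
from typing import Dict, List
import numpy as np

def fedavg(client_weights: List[Dict[str, np.ndarray]],
           client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Average client model parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    global_weights: Dict[str, np.ndarray] = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Example: three edge devices sharing a single-layer model.
clients = [{"w": np.random.randn(4, 2), "b": np.zeros(2)} for _ in range(3)]
sizes = [120, 80, 200]
global_model = fedavg(clients, sizes)
print({k: v.shape for k, v in global_model.items()})
```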
Contributors: Zhang, Zhaofeng (Author) / Zhang, Junshan (Thesis advisor) / Zhang, Yanchao (Thesis advisor) / Dasarathy, Gautam (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Recent advances in cyber-physical systems, artificial intelligence, and cloud computing have driven the widespread deployment of Internet-of-Things (IoT) devices in smart homes. However, the spate of cyber attacks exploiting the vulnerabilities and weak security management of smart home IoT devices has highlighted the urgency and challenges of designing efficient mechanisms for detecting, analyzing, and mitigating security threats against them. In this dissertation, I seek to address the security and privacy issues of smart home IoT devices from the perspectives of traffic measurement, pattern recognition, and security applications. I first propose an efficient multidimensional smart home network traffic measurement framework, which enables me to deeply understand the smart home IoT ecosystem and detect various vulnerabilities and flaws. I further design intelligent schemes to efficiently extract security-related IoT device event and user activity patterns from encrypted smart home network traffic. Based on this knowledge of how smart homes operate, different systems for securing smart home networks are proposed and implemented, including abnormal network traffic detection across multiple IoT networking protocol layers, smart home safety monitoring with extracted spatial information about IoT device events, and system-level IoT vulnerability analysis and network hardening.
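Because the traffic is encrypted, device events have to be inferred from metadata such as packet directions and sizes rather than payloads. The sketch below shows that general idea in miniature, matching a per-flow size/direction signature against known device-event fingerprints; it is an illustration of the approach with hypothetical fingerprint values, not the measurement framework or pattern-recognition schemes developed in this dissertation.

```python
# Toy inference of device events from encrypted-traffic metadata (packet
# directions and sizes). Illustrative only; not the framework built here.
from typing import Dict, List, Tuple

# A "fingerprint" is a short sequence of (direction, approximate packet size
# in bytes); the entries below are hypothetical example values.
FINGERPRINTS: Dict[str, List[Tuple[str, int]]] = {
    "plug_on":    [("out", 556), ("in", 136), ("out", 100)],
    "plug_off":   [("out", 612), ("in", 136), ("out", 100)],
    "cam_motion": [("out", 1024), ("out", 1024), ("in", 60)],
}

def match_event(observed: List[Tuple[str, int]], tolerance: int = 8) -> str:
    """Return the device event whose size/direction fingerprint matches the
    observed flow, or 'unknown' if none does."""
    for event, fp in FINGERPRINTS.items():
        if len(fp) == len(observed) and all(
            d1 == d2 and abs(s1 - s2) <= tolerance
            for (d1, s1), (d2, s2) in zip(fp, observed)
        ):
            return event
    return "unknown"

print(match_event([("out", 554), ("in", 140), ("out", 97)]))  # -> plug_on
```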
Contributors: Wan, Yinxin (Author) / Xue, Guoliang (Thesis advisor) / Xu, Kuai (Thesis advisor) / Yang, Yezhou (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Software Defined Networking has been the primary component for Quality of Service provisioning in the last decade. The key idea in such networks is decoupling the control plane from the data plane. The control plane provides the decision-making logic to the data plane, which in turn is only responsible for moving packets from source to destination based on the flow-table entries and actions. In this thesis, an in-depth design and analysis of a Software Defined Networking control plane architecture for Next Generation Networks is provided. Typically, Next Generation Networks are those that need to satisfy Quality of Service restrictions (such as time bounds, priority, and hop counts, to name a few) before the packets are in transit. For instance, applications that depend on prediction, popularly known as ML/AI applications, have heavy resource requirements and require completion of tasks within their time bounds; otherwise, the scheduling is rendered useless. The bottleneck could be on essentially any layer of the network stack; however, in this thesis the focus is on layer-2 and layer-3 scheduling. To that end, the design of an intelligent control plane is proposed, paying attention to the scheduling, routing, and admission strategies necessary to meet the requirements of the aforementioned applications. Simulation evaluations and comparisons with state-of-the-art approaches are provided, with reasons corroborating the design choices. Finally, quantitative metrics are defined and measured to justify the benefits of the designs.
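As a toy illustration of the kind of admission decision such a control plane must make (and not the scheduling or admission strategy designed in this thesis), the sketch below admits a flow only if every link on the chosen path has spare capacity and the path's estimated delay fits within the flow's time bound.

```python
# Toy deadline-aware admission check for an SDN controller (illustrative only;
# the delay model and all numeric values are assumptions).
from dataclasses import dataclass
from typing import List

@dataclass
class Link:
    capacity_mbps: float     # link capacity
    reserved_mbps: float     # bandwidth already reserved by admitted flows
    prop_delay_ms: float     # propagation delay

@dataclass
class FlowRequest:
    rate_mbps: float         # requested rate
    deadline_ms: float       # end-to-end time bound

def admit(flow: FlowRequest, path: List[Link], per_hop_queue_ms: float = 0.5) -> bool:
    """Admit the flow only if residual bandwidth and the deadline both hold."""
    delay = 0.0
    for link in path:
        if link.reserved_mbps + flow.rate_mbps > link.capacity_mbps:
            return False                      # not enough residual bandwidth
        delay += link.prop_delay_ms + per_hop_queue_ms
    if delay > flow.deadline_ms:
        return False                          # time bound would be violated
    for link in path:                         # reserve resources on success
        link.reserved_mbps += flow.rate_mbps
    return True

path = [Link(1000, 600, 0.2), Link(1000, 950, 0.3)]
print(admit(FlowRequest(rate_mbps=100, deadline_ms=5.0), path))  # False: 2nd link lacks capacity
```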
Contributors: Balasubramanian, Venkatraman (Author) / Reisslein, Martin (Thesis advisor) / Suppappola, Antonia Papandreou (Committee member) / Zhang, Yanchao (Committee member) / Thyagaturu, Akhilesh (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Although the increasing penetration of electric vehicles (EVs) has reduced greenhouse gas emissions from vehicles, it can lead to serious congestion on roads and at charging stations. Strategic coordination of EV charging would benefit the transportation system. However, it is difficult to model a congestion game that includes the choice of charging routes and stations. Furthermore, conventional algorithms cannot balance System Optimization and User Equilibrium, which can cause substantial waste for society as a whole. To solve these problems, this thesis presents (1) a congestion game setup to optimize and reveal the relationships between EV users, (2) the use of the ε-Nash Equilibrium to reduce the inefficiency caused by the self-interested behavior of EV users, and (3) a relatively optimal solution that approaches the Pareto-optimal solution. The proposed method reduces total EV charging time, and the charging time of most individual EV users, more than existing methods. Numerical simulations demonstrate the advantages of the new method compared to current methods.
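For reference, the ε-Nash Equilibrium condition can be stated in its standard cost-minimization form, written here with generic notation rather than this thesis's specific cost model: no EV user can lower their own cost (e.g., charging time) by more than ε by unilaterally switching routes or stations.

$$c_i\big(s_i^{*}, s_{-i}^{*}\big) \;\le\; c_i\big(s_i, s_{-i}^{*}\big) + \varepsilon, \qquad \forall\, i,\ \forall\, s_i \in S_i,$$

where $S_i$ is user $i$'s set of feasible route and station choices, $s_{-i}^{*}$ denotes the other users' equilibrium choices, and $\varepsilon \ge 0$ bounds the gain from any unilateral deviation; setting $\varepsilon = 0$ recovers the exact Nash Equilibrium.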
Contributors: Yu, Hao (Author) / Weng, Yang (Thesis advisor) / Yu, Hongbin (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The problem of monitoring complex networks for the detection of anomalous behavior is well known. Sensors are usually deployed for the purpose of monitoring these networks for anomalies, and Sensor Placement Optimization (SPO) is the problem of determining where these sensors should be placed (deployed) in the network. Prior works have utilized the well-known Set Cover formulation to determine the locations where sensors should be placed in the network so that anomalies can be effectively detected. However, such works cannot address the problem when the objective is not only to detect the presence of anomalies, but also to detect (distinguish) the source(s) of the detected anomalies, i.e., to uniquely monitor the network. In this dissertation, I attempt to fill this gap by utilizing the mathematical concept of Identifying Codes and illustrating how it not only overcomes the aforementioned limitation, but also how it and its variants can be utilized to monitor complex networks modeled from multiple domains. Over the course of this dissertation, I make key contributions that further enhance the efficacy and applicability of Identifying Codes as a monitoring strategy. First, I show how Identifying Codes are superior to not only the Set Cover formulation but also standard graph centrality metrics for the purpose of uniquely monitoring complex networks. Second, I study novel problems such as the budget-constrained Identifying Code, the scalable Identifying Code, and the robust Identifying Code, and present algorithms and results for the respective problems. Third, I present useful Identifying Code results for restricted graph classes such as Unit Interval Bigraphs and Unit Disc Bigraphs. Finally, I show the universality of Identifying Codes by applying them to multiple domains.
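For readers unfamiliar with the concept: a set C of vertices is an identifying code if every vertex's closed neighborhood intersected with C is non-empty and distinct from that of every other vertex, so a sensor set C both detects an anomaly and pinpoints which vertex produced it. The small brute-force checker below (written purely for illustration, not one of the dissertation's algorithms) makes the definition concrete.

```python
# Brute-force check of the identifying-code property (illustration only).
from itertools import combinations
from typing import Dict, FrozenSet, List, Set

def closed_neighborhood(graph: Dict[int, Set[int]], v: int) -> Set[int]:
    return graph[v] | {v}

def is_identifying_code(graph: Dict[int, Set[int]], code: Set[int]) -> bool:
    """C is an identifying code iff N[v] ∩ C is non-empty and unique for all v."""
    signatures: List[FrozenSet[int]] = []
    for v in graph:
        sig = frozenset(closed_neighborhood(graph, v) & code)
        if not sig:                        # some vertex would go undetected
            return False
        signatures.append(sig)
    return len(signatures) == len(set(signatures))   # signatures must be distinct

def minimum_identifying_code(graph: Dict[int, Set[int]]) -> Set[int]:
    """Exhaustive search for a smallest identifying code (tiny graphs only)."""
    vertices = list(graph)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_identifying_code(graph, set(cand)):
                return set(cand)
    return set()   # graphs with "twin" vertices admit no identifying code

# Path graph 0-1-2-3: the smallest identifying code has three vertices.
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(minimum_identifying_code(path4))   # -> {0, 1, 2}
```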
Contributors: Basu, Kaustav (Author) / Sen, Arunabha (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The Internet-of-Things (IoT) paradigm is reshaping the way we interact with the physical space. Many emerging IoT applications need to acquire, process, gain insights from, and act upon the massive amount of data continuously produced by ubiquitous IoT sensors. It is nevertheless technically challenging and economically prohibitive for each IoT application to deploy and maintain a dedicated large-scale sensor network over wide, distributed geographic areas. Built upon the Sensing-as-a-Service paradigm, cloud-sensing service providers are emerging to provide heterogeneous sensing data to various IoT applications through a shared sensing substrate. Cyber threats are among the biggest obstacles to the faster development of cloud-sensing services. This dissertation presents novel solutions to achieve trustworthy IoT sensing-as-a-service. Chapter 1 introduces the cloud-sensing system architecture and the outline of this dissertation. Chapter 2 presents MagAuth, a secure and usable two-factor authentication scheme that explores commercial off-the-shelf wrist wearables with magnetic strap bands to enhance the security and usability of password-based authentication for touchscreen IoT devices. Chapter 3 presents SmartMagnet, a novel scheme that combines smartphones and cheap magnets to achieve proximity-based access control for IoT devices. Chapter 4 proposes SpecKriging, a new spatial-interpolation technique based on graph neural networks for secure cooperative spectrum sensing, an important application of cloud-sensing systems. Chapter 5 proposes a trustworthy multi-transmitter localization scheme based on SpecKriging. Chapter 6 discusses future work.
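As background for Chapter 4, classical (ordinary) kriging predicts the signal value at an unobserved location as a weighted combination of nearby sensing reports, with weights derived from an assumed spatial covariance model; SpecKriging can be read as replacing that fixed covariance model with a learned, graph-neural-network interpolator. The formula below is the generic kriging predictor, included here only as background rather than as the dissertation's method.

$$\hat{z}(x_0) = \sum_{i=1}^{n} \lambda_i\, z(x_i), \qquad \sum_{i=1}^{n} \lambda_i = 1,$$

where $z(x_i)$ is the measurement reported by sensor $i$ and the weights $\lambda_i$ minimize the prediction variance under the assumed covariance (variogram) model. In cooperative spectrum sensing, falsified or faulty reports are precisely what makes a trustworthy, learning-based interpolation necessary.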
Contributors: Zhang, Yan (Author) / Zhang, Yanchao YZ (Thesis advisor) / Fan, Deliang (Committee member) / Xue, Guoliang (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This dissertation investigates the problem of efficiently and effectively prioritizing vulnerability risks in a computer networking system. Vulnerability prioritization is one of the most challenging issues in vulnerability management, and it affects the allocation of preventive and defensive resources in a computer networking system. Due to the large number of identified vulnerabilities, it is very challenging to remediate them all in a timely fashion; thus, an efficient and effective vulnerability prioritization framework is required. To deal with this challenge, this dissertation proposes a novel risk-based vulnerability prioritization framework that integrates recent artificial intelligence techniques (i.e., neuro-symbolic computing and logic reasoning). The proposed work enhances the vulnerability management process by prioritizing high-risk vulnerabilities, refining the initial risk assessment with network constraints. This dissertation is organized as follows. The first part presents an overview of the proposed risk-based vulnerability prioritization framework, which consists of two stages. The second part investigates vulnerability risk features in a computer networking system. The third part proposes the first stage of the framework, a vulnerability risk assessment model, which captures the patterns of vulnerability risk features to provide a more comprehensive risk assessment for a vulnerability. The fourth part proposes the second stage of the framework, a vulnerability prioritization reasoning engine, which derives network constraints from interactions between vulnerabilities and network environment elements based on network and system setups. The proposed framework assesses a vulnerability in a computer networking system based on its actual security impact, refining the initial risk assessment with the network constraints.
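As a toy illustration of the two-stage idea (and not the neuro-symbolic assessment model or the logic reasoning engine actually developed here), the sketch below first takes a base risk score for each vulnerability and then refines it with a single, simple network constraint: whether the vulnerable host is reachable from an untrusted zone. All identifiers and values are hypothetical.

```python
# Toy two-stage prioritization: a base risk score refined by a network
# constraint. Illustrative only; not this dissertation's framework.
from dataclasses import dataclass
from typing import Set

@dataclass
class Vulnerability:
    vuln_id: str              # hypothetical identifier
    base_risk: float          # stage-1 output, e.g. a score in [0, 10]
    host: str

def refined_risk(vuln: Vulnerability, exposed_hosts: Set[str],
                 unreachable_discount: float = 0.3) -> float:
    """Stage 2 (toy): down-weight vulnerabilities on hosts that the network
    setup keeps unreachable from untrusted zones."""
    if vuln.host in exposed_hosts:
        return vuln.base_risk
    return vuln.base_risk * unreachable_discount

exposed = {"web-01", "vpn-gw"}                      # hosts reachable from outside
vulns = [Vulnerability("V-1", 9.1, "web-01"),
         Vulnerability("V-2", 9.8, "db-internal"),
         Vulnerability("V-3", 6.5, "vpn-gw")]
ranked = sorted(vulns, key=lambda v: refined_risk(v, exposed), reverse=True)
print([(v.vuln_id, round(refined_risk(v, exposed), 2)) for v in ranked])
# -> [('V-1', 9.1), ('V-3', 6.5), ('V-2', 2.94)]
```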
Contributors: Zeng, Zhen (Author) / Xue, Guoliang (Thesis advisor) / Liu, Huan (Committee member) / Zhao, Ming (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
High-temperature mechanical behaviors of metal alloys, and the underlying microstructural variations responsible for such behaviors, are essential areas of interest for many industries, particularly for applications such as jet engines. Anisotropic grain structures, changes in preferred grain orientation, and other grain transformations occur during metal powder bed fusion additive manufacturing, due to variations in thermal gradients and cooling rates, and afterward under the thermomechanical loads that parts experience in their specific applications; these changes can impact mechanical properties at both room and high temperatures. In this study, an in-depth analysis is conducted of how different microstructural features, such as crystallographic texture, grain size, grain boundary misorientation angles, and inherent defects, produced as byproducts of the electron beam powder bed fusion (EB-PBF) AM process, impact anisotropic mechanical behavior and softening behavior through interacting mechanisms. Mechanical testing is conducted on EB-PBF Ti6Al4V parts made at different build orientations, at temperatures up to 600°C. Microstructural analysis using electron backscatter diffraction (EBSD) is conducted on samples before and after mechanical testing to understand the interacting impact that temperature and mechanical load have on the activation of certain mechanisms. The vertical samples showed larger grain sizes, with an average of 6.6 µm, a lower average misorientation angle, and consequently lower strength values than the two horizontally built samples. Among the three strong preferred grain orientations of the α phase, <1 1 2̄ 1> and <1 1 2̄ 0> were dominant in horizontally built samples, whereas <0 0 0 1> was dominant in vertically built samples. Thus, the strong microstructural variation observed among the different EB-PBF Ti6Al4V samples mainly resulted in anisotropic behaviors. Furthermore, the α grains showed a significant increase in average grain size for all samples with increasing test temperature, especially from 400°C to 600°C, indicating grain growth and coarsening as potential softening mechanisms, along with possible temperature-induced dislocation motion. The severity of internal and external defects with respect to fatigue strength has been evaluated non-destructively using quantitative methods, i.e., Murakami's square-root-of-area parameter model and Basquin's model, and the external surface defects were found to be more critical as potential crack initiation sites.
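For reference, the two quantitative models named above are commonly written as follows; the constants shown are the values usually quoted in the literature, and the parameters fitted in this study may differ.

$$\sigma_w = \frac{C\,(HV + 120)}{\left(\sqrt{\mathrm{area}}\right)^{1/6}}, \qquad \sigma_a = \sigma_f'\,(2N_f)^{b},$$

In Murakami's model (left), $\sigma_w$ is the fatigue limit in MPa, $HV$ the Vickers hardness, $\sqrt{\mathrm{area}}$ the square root (in µm) of the defect area projected onto the plane normal to the loading axis, and $C \approx 1.43$ for surface defects versus $C \approx 1.56$ for interior defects, which is why surface defects of a given size are the more critical crack initiation sites. In Basquin's model (right), the stress amplitude $\sigma_a$ is related to the number of load reversals $2N_f$ through the fatigue strength coefficient $\sigma_f'$ and the exponent $b$.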
Contributors: Mian, Md Jamal (Author) / Ladani, Leila (Thesis advisor) / Razmi, Jafar (Committee member) / Shuaib, Abdelrahman (Committee member) / Mobasher, Barzin (Committee member) / Nian, Qiong (Committee member) / Arizona State University (Publisher)
Created: 2022