Matching Items (31)
Description

The energy consumed by public drinking water and wastewater utilities represents up to 30%-40% of a municipality's energy bill. The largest share of this energy is used to operate motors for pumping. In response, the engineering and control communities developed Variable Speed Pumps (VSPs), which allow valves in the network to be regulated continuously instead of relying on traditional binary ON/OFF pumps. VSPs can potentially save up to 90% of annual energy cost compared to binary pumps. The control problem has been tackled in the literature as “Pump Scheduling Optimization” (PSO), with a main focus on cost minimization. Nonetheless, the engineering literature is mostly concerned with understanding “healthy working conditions” (e.g., leakages, breakages) of a water infrastructure rather than its costs. This is critical because a network operated under stress may satisfy demand at present but will likely suffer hindered functionality in the future.

This research addresses the problem of analyzing the working conditions of large water systems by means of a detailed hydraulic simulation model (e.g., EPANet) to gain insight into feasibility with respect to pressure, tank levels, etc. This work presents a new framework, called Feasible Set Approximation – Probabilistic Branch and Bound (FSA-PBnB), for the definition and determination of feasible solutions in terms of pump regulation. We propose the concept of feasibility distance, measured as the distance of the current solution from the feasibility frontier, to estimate the distribution of feasibility values across the solution space. Based on this estimate, we prune the infeasible regions and maintain the feasible regions to identify the desired feasible solutions. We test the proposed algorithm on both theoretical and real water networks. The results demonstrate that FSA-PBnB can identify the feasibility profile efficiently. Additionally, the feasibility distance lets us assess the quality of each sub-region in terms of feasibility.

The present work provides a basic feasibility determination framework for low-dimensional problems. To extend FSA-PBnB to large-scale constrained optimization problems, a more intelligent sampling method may be developed to further reduce the computational effort.
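
As a rough illustration of the prune/maintain/branch idea behind FSA-PBnB, the Python sketch below samples candidate pump regulations in each sub-region, estimates their feasibility distances, prunes sub-regions whose best sample still violates the constraints, keeps those whose samples are all feasible, and splits the rest. It is a minimal sketch, not the thesis implementation: the box representation, the thresholds, and the toy constraint standing in for a hydraulic-simulator evaluation are all illustrative assumptions.

```python
import random

def feasibility_distance(x, constraints):
    # Signed distance proxy to the feasibility frontier: the largest
    # constraint violation. Negative means strictly feasible; positive
    # measures how far x lies outside the feasible set.
    return max(g(x) for g in constraints)

def fsa_pbnb_sketch(lo, hi, constraints, n_samples=50, rounds=4, eps=0.05):
    # Prune / maintain / branch loop over axis-aligned boxes (lo, hi).
    feasible, undecided = [], [(list(lo), list(hi))]
    for _ in range(rounds):
        next_round = []
        for box_lo, box_hi in undecided:
            xs = [[random.uniform(l, h) for l, h in zip(box_lo, box_hi)]
                  for _ in range(n_samples)]
            ds = sorted(feasibility_distance(x, constraints) for x in xs)
            if ds[0] > eps:              # even the best sample violates: prune
                continue
            if ds[-1] <= 0.0:            # every sample feasible: maintain
                feasible.append((box_lo, box_hi))
                continue
            # Otherwise branch: split the box along its widest dimension.
            k = max(range(len(box_lo)), key=lambda i: box_hi[i] - box_lo[i])
            m = (box_lo[k] + box_hi[k]) / 2
            next_round.append((box_lo, box_hi[:k] + [m] + box_hi[k + 1:]))
            next_round.append((box_lo[:k] + [m] + box_lo[k + 1:], box_hi))
        undecided = next_round
    return feasible, undecided

# Toy usage: pump-setting box [0,1]^2 with one illustrative constraint
# g(x) <= 0 standing in for a hydraulic-simulator feasibility check.
feas, und = fsa_pbnb_sketch([0, 0], [1, 1], [lambda x: x[0] + x[1] - 1.0])
print(len(feas), "maintained boxes,", len(und), "still undecided")
```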
Contributors: Tsai, Yi-An (Author) / Pedrielli, Giulia (Thesis advisor) / Mirchandani, Pitu (Committee member) / Mascaro, Giuseppe (Committee member) / Zabinsky, Zelda (Committee member) / Candelieri, Antonio (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Researchers and practitioners have widely studied road network traffic data in areas such as urban planning, traffic prediction, and spatio-temporal databases. For instance, researchers use such data to evaluate the impact of road network changes. Unfortunately, collecting large-scale, high-quality urban traffic data requires tremendous effort because participating vehicles must install Global Positioning System (GPS) receivers and administrators must continuously monitor these devices. Several urban traffic simulators attempt to generate such data with different features. However, they suffer from two critical issues: (1) scalability: most offer only single-machine solutions, which are inadequate for producing large-scale data, and those that can generate traffic in parallel do not balance the load well among the machines in a cluster; (2) granularity: many simulators do not model microscopic traffic behavior, including traffic lights, lane changing, and car following. This paper proposes GeoSparkSim, a scalable traffic simulator that extends Apache Spark to generate large-scale road network traffic datasets with microscopic traffic simulation. The proposed system seamlessly integrates with a Spark-based spatial data management system, GeoSpark, to deliver a holistic approach that allows data scientists to simulate, analyze, and visualize large-scale urban traffic data. To implement microscopic traffic models, GeoSparkSim employs a simulation-aware vehicle partitioning method that partitions vehicles among machines so that each machine has a balanced workload. The experimental analysis shows that GeoSparkSim can simulate the movements of 200 thousand cars over an extensive road network (250 thousand road junctions and 300 thousand road segments).
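
The balancing idea can be made concrete with a small sketch. The code below is not GeoSparkSim's actual partitioner (which runs inside Spark); it only illustrates, in plain Python, how weighting spatial grid cells by vehicle count and greedily assigning each cell to the least-loaded machine evens out the per-machine simulation workload. The cell names and counts are made up.

```python
from collections import defaultdict

def balanced_vehicle_partition(cell_counts, n_machines):
    """Greedy load balancing: take grid cells heaviest-first and assign
    each to the machine with the smallest current vehicle load."""
    loads = [0] * n_machines
    assignment = defaultdict(list)
    for cell, count in sorted(cell_counts.items(), key=lambda kv: -kv[1]):
        m = min(range(n_machines), key=loads.__getitem__)
        assignment[m].append(cell)
        loads[m] += count
    return dict(assignment), loads

# Six grid cells with uneven vehicle counts spread over three machines.
cells = {"c0": 900, "c1": 450, "c2": 420, "c3": 300, "c4": 60, "c5": 40}
parts, loads = balanced_vehicle_partition(cells, 3)
print(parts)
print(loads)   # [900, 550, 720]: far more even than naive round-robin
```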
Contributors: Fu, Zishan (Author) / Sarwat, Mohamed (Thesis advisor) / Pedrielli, Giulia (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Mathematical modeling and decision-making within the healthcare industry provide means to quantitatively evaluate the impact of decisions on the diagnosis, screening, and treatment of diseases. In this work, we look into a specific, yet very important, disease: Alzheimer's. In the United States, Alzheimer's Disease (AD) is the 6th leading cause of death. A diagnosis of AD cannot be confidently confirmed until after death. This has made early diagnosis of AD, based upon symptoms of cognitive decline, all the more important. A symptom of early cognitive decline and an indicator of AD is Mild Cognitive Impairment (MCI). In addition to this qualitative assessment, biomarker tests have been proposed in the medical field, including p-Tau, FDG-PET, and hippocampal volume. These tests can be administered to patients as early detectors of AD, improving patients' quality of life and potentially reducing healthcare costs. Preliminary work has been conducted on a Sequential Tree Based Classifier (STC), which helps medical providers predict whether a patient will contract AD by sequentially administering these biomarker tests. The STC model, however, has its limitations, and a more complex, robust model is needed. In fact, the STC assumes a general linear model for the status of the patient based upon the test results. We take a simulation perspective and define a more complex model that represents the patient's evolution over time.

Specifically, this thesis focuses on the formulation of a Markov Chain model that is complex and robust. This Markov Chain model emulates the evolution of MCI patients through doctor visits and the sequential administration of biomarker tests. The data used to build this Markov Chain model came from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The data lacked detailed information on the sequential administration of the biomarker tests; therefore, different analytical approaches were tried in order to calibrate the model. The resulting Markov Chain model made it possible to experiment with different parameters of the chain, yielding different proportions of patients that contracted AD and those that did not, and leading to important insights into the effect of thresholds and test sequence on prediction capability as well as on healthcare cost reduction.
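
As a toy illustration of this kind of model, the sketch below walks simulated MCI patients through doctor visits governed by a small discrete-time Markov chain with absorbing AD and non-AD states. The states and transition probabilities are illustrative placeholders, not the values calibrated from ADNI data.

```python
import random

# Illustrative states and per-visit transition probabilities; the
# calibrated values in the thesis were estimated from ADNI data.
P = {
    "MCI":   {"MCI": 0.80, "AD": 0.12, "NonAD": 0.08},
    "AD":    {"AD": 1.0},       # absorbing: diagnosed with AD
    "NonAD": {"NonAD": 1.0},    # absorbing: cognitively stable
}

def simulate_patient(max_visits=20, seed=None):
    """Walk one MCI patient through visits until absorption or the
    visit budget runs out."""
    rng = random.Random(seed)
    state, path = "MCI", ["MCI"]
    for _ in range(max_visits):
        r, cum = rng.random(), 0.0
        for nxt, p in P[state].items():
            cum += p
            if r < cum:
                state = nxt
                break
        path.append(state)
        if state != "MCI":
            break
    return path

# Fraction of simulated patients who eventually convert to AD.
paths = [simulate_patient(seed=i) for i in range(10_000)]
print(sum(p[-1] == "AD" for p in paths) / len(paths))
```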



The data in this thesis were provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). ADNI investigators did not contribute to any analysis or writing of this thesis. A list of the ADNI investigators can be found at: http://adni.loni.usc.edu/about/governance/principal-investigators/.
Contributors: Camarena, Raquel (Author) / Pedrielli, Giulia (Thesis advisor) / Li, Jing (Thesis advisor) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

With globalization on the rise, the predominant share of trade happens by sea, and experts have predicted an increase in trade volumes over the next few years. To meet this demand, container ships are being upsized. The problem with upsizing container ships is that seaport terminals must be adequately equipped to improve turnaround time; otherwise, the upsizing will not yield the anticipated benefits. This thesis focuses on a special type of double automated crane setup with a finite inter-operational buffer capacity. The buffer is placed between the cranes, and the idea behind this research is to analyze the performance of the crane operations when this technology is adopted. This thesis proposes an approximation of this complex system, thereby addressing the computational time issue and allowing the performance of the system to be analyzed efficiently. The system is modeled in two phases. The first phase is the development of a discrete event simulation model that makes the system evolve over time. The challenge of this model is its high processing time, since it requires a large number of experimental runs; this motivates the second phase, the development of an analytical model of the system, for which a continuous-time Markov process approach has been adopted. Further, to improve the efficiency of the analytical model, a state aggregation approach is proposed. This thesis thus gives insight into the outcomes of the two approaches and the behavior of the error space; the performance of the models for varying buffer capacities reflects the scope for improvement in these kinds of operational setups.
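
To make the continuous-time Markov approach concrete, the sketch below models the inter-crane buffer as a simple birth-death chain (an M/M/1/K queue) under the simplifying assumption of exponential crane times: the first crane deposits containers at rate lam, the second removes them at rate mu, and the buffer holds at most K containers. The thesis model is considerably richer; the closed-form steady state here only illustrates the flavor of the analysis.

```python
def buffer_steady_state(lam, mu, K):
    """Steady state of a birth-death chain on {0, ..., K}: the balance
    equations give pi_{n+1} = (lam / mu) * pi_n, solved in closed form
    and then normalized."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

pi = buffer_steady_state(lam=0.9, mu=1.0, K=5)
print("P(buffer full, first crane blocked):   %.3f" % pi[-1])
print("P(buffer empty, second crane starved): %.3f" % pi[0])
```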
Contributors: Rengarajan, Sundaravaradhan (Author) / Pedrielli, Giulia (Thesis advisor) / Ju, Feng (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

With the development of technology, there has been a dramatic increase in the number of machine learning programs. These complex programs draw conclusions and can predict or perform actions based on models from previous runs or input information. However, such programs require storing very large amounts of data. Queries allow users to extract only the information relevant to their investigation. The purpose of this thesis was to create a system with two important components: querying and visualization. Metadata was stored in Sedna as XML and time series data was stored in OpenTSDB as JSON. In order to connect the two databases, the time series ID was stored as a metric in the XML metadata. Queries should be simple and flexible and should return all data that fits the query parameters. The query language used was an extension of XQuery FLWOR that added time series parameters. Visualizations should be easily understood and organized so that important information and details are easy to find. Because a query may return a large amount of data, a multivariate heat map was used to visualize the time series results. The two programs the system performed queries on were EnergyPlus and the Epidemic Simulation Data Management System. With such a system, researchers in these projects' fields can more easily find the relationships between metadata and the desired results over time. Over the course of the thesis project, the overall software was completed; however, it must be optimized in order to handle the enormous amount of data expected from the system.
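
The visualization half of the system can be sketched briefly. The snippet below renders a multivariate heat map of several returned time series with numpy and matplotlib; the data here is synthetic, whereas in the actual system the rows would be fetched from OpenTSDB using the time series IDs stored in the Sedna XML metadata.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
runs = [f"run-{i}" for i in range(8)]               # one row per matching run
series = np.cumsum(rng.normal(size=(len(runs), 96)), axis=1)  # 96 time steps

fig, ax = plt.subplots(figsize=(8, 3))
im = ax.imshow(series, aspect="auto", cmap="viridis")
ax.set_yticks(range(len(runs)))
ax.set_yticklabels(runs)
ax.set_xlabel("time step")
fig.colorbar(im, ax=ax, label="queried metric")
plt.show()
```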
Contributors: Tse, Adam Yusof (Author) / Candan, Selcuk (Thesis director) / Chen, Xilun (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description

Covid-19 is unlike any coronavirus we have seen before, characterized mostly by the ease with which it spreads. This analysis utilizes an SEIR model, built to accommodate various populations, to understand how different testing and infection rates may affect hospitalization and death. The analysis finds that infection rates have a significant effect on Covid-19 outcomes regardless of the population, whereas the effect of testing rates in this simulation is not as pronounced. Thus, policy-makers should focus on decreasing infection rates through targeted lockdowns and vaccine rollout to contain the virus and decrease its spread.
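
For reference, a minimal SEIR sketch of the kind used in this analysis is shown below (using scipy's ODE integrator); the population size, contact rate, incubation period, and recovery rate are illustrative values, not the ones fitted in the thesis. Lowering beta, the infection-rate parameter, visibly flattens the infectious curve.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    new_exposures = beta * S * I / N
    return (-new_exposures,                 # dS/dt
            new_exposures - sigma * E,      # dE/dt: exposure minus incubation
            sigma * E - gamma * I,          # dI/dt: onset minus recovery
            gamma * I)                      # dR/dt

t = np.linspace(0, 180, 181)                # days
y0 = (9_990, 0, 10, 0)                      # S, E, I, R
beta, sigma, gamma = 0.35, 1 / 5.2, 1 / 10  # contact, 1/incubation, 1/recovery
S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
print("peak infectious:", int(I.max()), "on day", int(t[I.argmax()]))
```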

Created: 2021-05
Description

Efforts to enhance the quality of life and promote better health have led to improved water quality standards. Adequate daily fluid intake, primarily from tap water, is crucial for human health. By improving drinking water quality, negative health effects associated with consuming inadequate water can be mitigated. Although the United States Environmental Protection Agency (EPA) sets and enforces federal water quality limits at water treatment plants, the quality of the water reaching end users degrades during delivery, emphasizing the need for proactive control systems in buildings to ensure safe drinking water.

Future commercial and institutional buildings are anticipated to feature real-time water quality sensors, automated flushing and filtration systems, temperature control devices, and chemical boosters. Integrating these technologies with a reliable water quality control system that optimizes the use of chemical additives, filtration, flushing, and temperature adjustments ensures users consistently have access to water of adequate quality. Additionally, existing buildings can be retrofitted with these technologies at a reasonable cost, guaranteeing user safety. In the absence of smart buildings with the required technology, Chapter 2 describes the development of an EPANET-MSX (a multi-species extension of EPA's water simulation tool) model for a typical 5-story building. Chapter 3 involves creating accurate nonlinear approximation models of EPANET-MSX's complex fluid dynamics and chemical reactions, and developing an open-loop water quality control system that can regulate water quality based on its approximated state. To address potential sudden changes in water quality, improve predictions, and reduce the gap between the approximated and true state of the water quality, a feedback control loop is developed in Chapter 4. Lastly, this dissertation includes the development of a reinforcement learning (RL) based water quality control system for cases where the approximation models prove inadequate and cause instability when implemented on a real building water network. The RL-based control system can be deployed in various buildings without the need to develop new hydraulic models and can handle the stochastic nature of water demand, ensuring the proactive control system's effectiveness in maintaining water quality within safe limits for consumption.
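
As a toy illustration of the Chapter 4 feedback idea, the sketch below closes the loop between a chlorine residual sensor reading and a booster dose, with a crude first-order decay model standing in for the EPANET-MSX building hydraulics. Every quantity in it is an illustrative assumption; it also exhibits the steady-state offset typical of purely proportional control, the kind of limitation richer controllers avoid.

```python
def simulate_feedback(hours=48, setpoint=0.8, k_decay=0.15, k_p=0.6):
    """Hourly loop: read the residual, compute the error against the
    setpoint, apply a proportional booster dose, then let the residual
    decay in the pipes (a first-order stand-in for the real hydraulics)."""
    c, trace = 0.2, []                # chlorine residual, mg/L
    for _ in range(hours):
        error = setpoint - c          # sensor feedback
        dose = max(0.0, k_p * error)  # booster can only add chlorine
        c = c + dose - k_decay * c
        trace.append(c)
    return trace

trace = simulate_feedback()
# Settles near 0.64 mg/L, below the 0.8 mg/L setpoint: the classic
# steady-state offset of proportional-only control.
print(f"residual after 48 h: {trace[-1]:.2f} mg/L")
```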
Contributors: Ghasemzadeh, Kiarash (Author) / Mirchandani, Pitu (Thesis advisor) / Boyer, Treavor (Committee member) / Ju, Feng (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Automated driving systems (ADS) have come a long way since their inception. These systems rely heavily on stochastic deep learning techniques for perception, planning, and prediction, as it is impossible to construct every possible driving scenario to generate driving policies. Moreover, these systems need to be trained and validated extensively on typical and abnormal driving situations before they can be trusted with human life. However, most publicly available driving datasets consist only of typical driving behaviors. On the other hand, there is a plethora of videos available on the internet that capture abnormal driving scenarios, but they are unusable for ADS training or testing as they lack important information such as camera calibration parameters and annotated vehicle trajectories. This thesis proposes a new toolbox, DeepCrashTest-V2, that is capable of reconstructing high-quality simulations from monocular dashcam videos found on the internet. The toolbox not only estimates crucial parameters such as camera calibration, ego-motion, and surrounding road user trajectories but also creates a virtual world in Car Learning to Act (CARLA) using data from OpenStreetMap to simulate the estimated trajectories. The toolbox is open source and is made available as a Python package on GitHub at https://github.com/C-Aniruddh/deepcrashtest_v2.
Contributors: Chandratre, Aniruddh Vinay (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Hani (Thesis advisor) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Generative Adversarial Networks (GANs) have emerged as a powerful framework for generating realistic and high-quality data. In the original ``vanilla'' GAN formulation, two models -- the generator and discriminator -- are engaged in a min-max game and optimize the same value function. Despite offering an intuitive approach, vanilla GANs often face stability challenges such as vanishing gradients and mode collapse. Addressing these common failures, recent work has proposed the use of tunable classification losses in place of traditional value functions. Although parameterized robust loss families, e.g., $\alpha$-loss, have shown promising characteristics as value functions, this thesis argues that the generator and discriminator require separate objective functions to achieve their different goals. As a result, this thesis introduces the $(\alpha_{D}, \alpha_{G})$-GAN, a parameterized class of dual-objective GANs, as an alternative approach to the standard vanilla GAN. The $(\alpha_{D}, \alpha_{G})$-GAN formulation, inspired by $\alpha$-loss, allows practitioners to tune the parameters $(\alpha_{D}, \alpha_{G}) \in [0,\infty)^{2}$ to provide a more stable training process. The objectives for the generator and discriminator in the $(\alpha_{D}, \alpha_{G})$-GAN are derived, and the advantages of using these objectives are investigated. In particular, the optimization trajectory of the generator is found to be influenced by the choice of $\alpha_{D}$ and $\alpha_{G}$. Empirical evidence is presented through experiments conducted on various datasets, including the 2D Gaussian Mixture Ring, Celeb-A image dataset, and LSUN Classroom image dataset. Performance metrics such as mode coverage and Fréchet Inception Distance (FID) are used to evaluate the effectiveness of the $(\alpha_{D}, \alpha_{G})$-GAN compared to the vanilla GAN and the state-of-the-art Least Squares GAN (LSGAN). The experimental results demonstrate that tuning $\alpha_{D} < 1$ leads to improved stability, robustness to hyperparameter choice, and competitive performance compared to LSGAN.
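
To give a flavor of what dual tunable objectives look like, the sketch below reconstructs plausible $(\alpha_{D}, \alpha_{G})$ objectives from the standard definition of $\alpha$-loss; at $\alpha_{D} = \alpha_{G} = 1$ it recovers the vanilla GAN value function. The exact objectives are derived in the thesis, so treat this as an assumption-laden illustration rather than the authors' definitions.

```python
import numpy as np

def alpha_loss(p, alpha):
    """alpha-loss of assigning probability p to the true class; recovers
    log-loss as alpha -> 1 (standard definition of the alpha-loss family)."""
    if np.isclose(alpha, 1.0):
        return -np.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

def discriminator_objective(d_real, d_fake, alpha_d):
    # Maximized by the discriminator. A plausible reconstruction, not a
    # quotation of the thesis: at alpha_d = 1 it reduces to
    # E[log D(x)] + E[log(1 - D(G(z)))], the vanilla GAN value function.
    return np.mean(-alpha_loss(d_real, alpha_d)) + \
           np.mean(-alpha_loss(1.0 - d_fake, alpha_d))

def generator_objective(d_fake, alpha_g):
    # Minimized by the generator with its own tunable alpha_g; only the
    # fake-sample term depends on the generator's parameters.
    return np.mean(-alpha_loss(1.0 - d_fake, alpha_g))

# Toy batch of discriminator outputs (probabilities of "real").
d_real, d_fake = np.array([0.9, 0.8, 0.95]), np.array([0.3, 0.1, 0.2])
print(discriminator_objective(d_real, d_fake, alpha_d=0.8))
print(generator_objective(d_fake, alpha_g=1.0))
```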
Contributors: Otstot, Kyle (Author) / Sankar, Lalitha (Thesis advisor) / Kosut, Oliver (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

The introduction of parameterized loss functions for robustness in machine learning has raised the question of how the hyperparameter(s) of these loss functions can be tuned. This thesis explores how Bayesian methods can be leveraged to tune such hyperparameters. Specifically, a modified Gibbs sampling scheme is used to generate a distribution over the loss parameters of tunable loss functions. The modified Gibbs sampler is a two-block sampler that alternates between sampling the loss parameter and optimizing the other model parameters. The sampling step is performed using slice sampling, while the optimization step is performed using gradient descent. This thesis explores the application of the modified Gibbs sampler to $\alpha$-loss, a tunable loss function with a single parameter $\alpha \in (0,\infty]$ designed for the classification setting. Theoretically, it is shown that the Markov chain generated by the modified Gibbs sampling scheme is ergodic; that is, the chain has, and converges to, a unique stationary (posterior) distribution. Further, the modified Gibbs sampler is implemented in two experiments: a synthetic dataset and a canonical image dataset. The results show that the modified Gibbs sampler performs well under label noise, generating a distribution indicating a preference for larger values of $\alpha$, matching the outcomes of previous experiments.
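
The two-block structure is easy to sketch. Below, a univariate slice sampler updates $\alpha$ against a log-posterior while a user-supplied step optimizes the remaining parameters. The bounded $\alpha$ domain, the toy posterior, and the do-nothing optimizer in the usage line are all assumptions for illustration, not the thesis's setup.

```python
import math, random

def slice_sample(logp, x, lo=1e-3, hi=10.0):
    """One slice-sampling step for a scalar parameter on [lo, hi]: draw a
    height under the density at x, then shrink the bracket until a point
    under the slice is found."""
    y = logp(x) + math.log(1.0 - random.random())   # log height of the slice
    left, right = lo, hi
    while True:
        xp = random.uniform(left, right)
        if logp(xp) > y:
            return xp
        if xp < x:
            left = xp      # shrink the bracket toward the current point
        else:
            right = xp

def modified_gibbs(logp, grad_step, theta, alpha=1.0, iters=200):
    """Alternate the two blocks: (1) slice-sample alpha given theta,
    (2) take an optimization step on theta given the sampled alpha."""
    chain = []
    for _ in range(iters):
        alpha = slice_sample(lambda a: logp(a, theta), alpha)
        theta = grad_step(theta, alpha)
        chain.append(alpha)
    return chain, theta

# Toy usage: a posterior over alpha peaked near 2 and a do-nothing optimizer.
chain, _ = modified_gibbs(lambda a, th: -0.5 * (a - 2.0) ** 2,
                          lambda th, a: th, theta=0.0)
print(sum(chain[100:]) / len(chain[100:]))   # posterior mean, near 2.0
```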
Contributors: Cole, Erika Lingo (Author) / Sankar, Lalitha (Thesis advisor) / Lan, Shiwei (Thesis advisor) / Pedrielli, Giulia (Committee member) / Hahn, Paul (Committee member) / Arizona State University (Publisher)
Created: 2022