Matching Items (15)
Description

Swarms of animals such as fish, birds, and locusts are a common occurrence, but their coherence and method of organization pose major questions for mathematics and biology. The Vicsek and Attraction-Repulsion models are two models that have been proposed to explain the emergence of collective motion. A major issue for the Vicsek Model is that its particles are not attracted to each other, leaving the swarm with alignment in velocity but without spatial coherence. Restricting the particles to a bounded domain generates global spatial coherence of swarms while maintaining velocity alignment. While individual particles are specularly reflected at the boundary, the swarm as a whole is not. As a result, new dynamical swarming solutions are found.
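The dynamics described above can be sketched as a minimal 2D Vicsek-type simulation in a bounded box with specular reflection at the walls. All parameter values (counts, radii, noise level) are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, R, v0, eta, steps = 100, 10.0, 1.0, 0.3, 0.2, 200

pos = rng.uniform(0, L, (N, 2))            # particle positions in [0, L]^2
theta = rng.uniform(-np.pi, np.pi, N)      # headings

for _ in range(steps):
    # Vicsek alignment: steer toward the mean heading of neighbors within R
    # (each particle counts as its own neighbor), plus angular noise.
    dx = pos[:, None, :] - pos[None, :, :]
    neighbors = (dx ** 2).sum(-1) < R ** 2
    mean_sin = (neighbors * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbors * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    vel = v0 * np.column_stack([np.cos(theta), np.sin(theta)])
    pos += vel
    # Specular reflection: mirror the position and flip the normal velocity
    # component at each wall.
    for d in range(2):
        low, high = pos[:, d] < 0, pos[:, d] > L
        pos[low, d] *= -1
        pos[high, d] = 2 * L - pos[high, d]
        vel[low | high, d] *= -1
    theta = np.arctan2(vel[:, 1], vel[:, 0])

# Polar order parameter: 1 means perfect velocity alignment.
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

The walls supply the spatial confinement that the standard (periodic) Vicsek model lacks, which is the modification the thesis studies.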

The Attraction-Repulsion Model, set with a long-range attraction and short-range repulsion interaction potential, typically stabilizes to a well-studied flock steady-state solution. The particles of a flock remain spatially coherent, but the flock itself is unconfined and explores all of space. A bounded domain with specularly reflecting walls traps the particles within a specific region. A fundamental refraction law for a swarm impacting on a planar boundary is derived. The swarm reflection varies from specular for a swarm dominated by kinetic energy to inelastic for a swarm dominated by potential energy. Inelastic collisions lead to alignment with the wall and to damped pulsating oscillations of the swarm. The fundamental refraction law provides a one-dimensional iterative map that allows for prediction and analysis of the trajectory of the center of mass of a flock in a channel and in a square domain.
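A minimal sketch of such an attraction-repulsion swarm, assuming a Morse-type potential (long-range attraction, short-range repulsion) and a simple self-propulsion term; the potential form and all parameters here are illustrative stand-ins, not the thesis's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 50, 0.02, 500
Ca, la, Cr, lr = 1.0, 2.0, 2.0, 0.5   # attraction/repulsion strengths and ranges

pos = rng.normal(0, 1, (N, 2))
vel = rng.normal(0, 0.1, (N, 2))

for _ in range(steps):
    dx = pos[None, :, :] - pos[:, None, :]        # dx[i, j] = x_j - x_i
    r = np.linalg.norm(dx, axis=-1) + np.eye(N)   # pad diagonal to avoid /0
    # Morse potential U(r) = Cr*exp(-r/lr) - Ca*exp(-r/la); the pairwise force
    # on i is -dU/dr along the unit vector toward j (self term is zero).
    mag = Ca / la * np.exp(-r / la) - Cr / lr * np.exp(-r / lr)
    force = ((mag / r)[:, :, None] * dx).sum(axis=1)
    # Rayleigh-type self-propulsion drives each speed toward 1.
    speed2 = (vel ** 2).sum(-1, keepdims=True)
    vel += dt * (force + (1.0 - speed2) * vel)
    pos += dt * vel

# Spatial coherence: maximum distance of any particle from the center of mass.
com = pos.mean(axis=0)
spread = np.linalg.norm(pos - com, axis=1).max()
```

With these (hypothetical) parameters the swarm stays spatially coherent while translating, which is the flock state that the reflecting walls then confine.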

The wall-collision analysis is extended to a scattering experiment in which two identical flocks are set to collide. The two-particle dynamics is studied analytically and shows a transition from scattering (diverging flocks) to bound states in the form of oscillations or parallel motions. Numerical studies of colliding flocks show the same transition, where the bound states become either a single translating flock or a rotating mill.
Contributors: Thatcher, Andrea (Author) / Armbruster, Hans (Thesis advisor) / Motsch, Sebastien (Committee member) / Ringhofer, Christian (Committee member) / Platte, Rodrigo (Committee member) / Gardner, Carl (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

In this work, I present a Bayesian inference computational framework for the analysis of widefield microscopy data that addresses three challenges: (1) counting and localizing stationary fluorescent molecules; (2) inferring a spatially dependent effective fluorescence profile that describes the spatially varying rate at which fluorescent molecules emit subsequently detected photons (due to different illumination intensities or different local environments); and (3) inferring the camera gain. My general theoretical framework utilizes the Bayesian nonparametric Gaussian and beta-Bernoulli processes with a Markov chain Monte Carlo sampling scheme, which I further specify and implement for Total Internal Reflection Fluorescence (TIRF) microscopy data, benchmarking the method on synthetic data. The three components are self-contained and can be used concurrently, so that the fluorescence profile and emitter locations are both treated as unknown and, under some conditions, learned simultaneously. The framework I present is flexible and may be adapted to accommodate the inference of other parameters, such as emission photophysical kinetics and the trajectories of moving molecules. My TIRF-specific implementation may find use in the study of structures on cell membranes, or in studying local sample properties that affect fluorescent molecule photon emission rates.
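The flavor of the Bayesian inference involved can be sketched with a toy Metropolis-Hastings sampler that recovers a single emitter's photon-emission rate from Poisson-distributed photon counts. This is a deliberately simplified stand-in for the full Gaussian / beta-Bernoulli process framework; the rate, priors, and sampler settings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
true_rate = 5.0
counts = rng.poisson(true_rate, size=200)   # synthetic photon counts per frame

def log_post(rate):
    # Poisson log-likelihood with a flat prior on rate > 0 (up to a constant).
    if rate <= 0:
        return -np.inf
    return np.sum(counts * np.log(rate) - rate)

samples, rate = [], 1.0
for _ in range(5000):
    # Random-walk proposal; accept with the Metropolis ratio.
    prop = rate + rng.normal(0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(rate):
        rate = prop
    samples.append(rate)

posterior_mean = np.mean(samples[1000:])    # discard burn-in
```

The posterior mean should land near the true rate; the thesis's framework extends this idea to many emitters, a spatially varying profile, and the camera gain simultaneously.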
Contributors: Wallgren, Ross (Author) / Presse, Steve (Thesis advisor) / Armbruster, Hans (Thesis advisor) / McCulloch, Robert (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

The listing price of residential rental real estate depends on property-specific attributes. These attributes involve data that can be tabulated as categorical and continuous predictors. The forecasting model presented in this paper is developed using publicly available, property-specific information sourced from the Zillow and Trulia online real estate databases. The following fifteen predictors were tracked for forty-eight rental listings in the 85281 ZIP code: housing type, square footage, number of baths, number of bedrooms, distance to Arizona State University's Tempe Campus, crime level of the neighborhood, median age range of the neighborhood population, percentage of the neighborhood population that is married, median year of construction of the neighborhood, percentage of the population commuting longer than thirty minutes, percentage of neighborhood homes occupied by renters, percentage of the population commuting by transit, and the number of restaurants, grocery stores, and nightlife venues within a one-mile radius of the property. Through regression analysis, the significant predictors of the listing price of a rental property in the 85281 ZIP code were discerned and used to form a forecasting model. This model explains 75.5% of the variation in listing prices of residential rental real estate in the 85281 ZIP code.
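The regression workflow described above can be sketched with synthetic data (not the thesis's listings): fit listing price on a few numeric predictors by ordinary least squares and compute R². The predictors, coefficients, and noise level here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 48                                             # listings, as in the study
sqft = rng.uniform(400, 2000, n)
beds = rng.integers(1, 5, n).astype(float)
dist = rng.uniform(0.1, 5.0, n)                    # miles to campus (hypothetical)
# Hypothetical true relationship plus noise.
price = 500 + 0.8 * sqft + 150 * beds - 60 * dist + rng.normal(0, 100, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), sqft, beds, dist])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
resid = price - X @ beta
r_squared = 1 - resid.var() / price.var()          # fraction of variance explained
```

The thesis's reported 75.5% corresponds to exactly this kind of R² statistic, computed over its fifteen-predictor model.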
Contributors: Schuchter, Grant (Author) / Clough, Michael (Thesis director) / Escobedo, Adolfo (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Carbon Capture and Storage (CCS) is a climate stabilization strategy that prevents CO2 emissions from entering the atmosphere. Despite its benefits, impactful CCS projects require large investments in infrastructure, which could deter governments from implementing this strategy. In this sense, the development of innovative tools to support large-scale, cost-efficient CCS deployment decisions is critical for climate change mitigation. This thesis proposes an improved mathematical formulation for the scalable infrastructure model for CCS (SimCCS), whose main objective is to design a minimum-cost pipe network to capture, transport, and store a target amount of CO2. Model decisions include source, reservoir, and pipe selection, as well as the CO2 amounts to capture, store, and transport. By studying the SimCCS optimal solution and the underlying network topology, new valid inequalities (VI) are proposed to strengthen the existing mathematical formulation. These constraints seek to improve the quality of the linear-relaxation solutions in the branch-and-bound algorithm used to solve SimCCS. Each VI is explained with an intuitive description, its mathematical structure, and examples of the resulting improvements. Further, all VIs are validated by assessing the impact of their elimination from the new formulation. The validated new formulation solves the 72-node Alberta problem up to 7 times faster than the original model. The upgraded model reduces the computation time required to solve SimCCS in 72% of randomly generated test instances, solving SimCCS up to 200 times faster. These formulations can be tested and then applied to enhance variants of SimCCS and general fixed-charge network flow problems. Finally, the experience of testing a Benders decomposition approach for SimCCS is discussed, and the scope for promising efficient solution methods is outlined.
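The fixed-charge structure at the heart of SimCCS can be illustrated with a toy brute-force version: choose which source-to-reservoir pipes to open (paying a fixed cost) and how much CO2 to route through each (variable cost, capacity-limited) so that a capture target is met at minimum cost. The pipe data and target below are invented, and exhaustive enumeration stands in for the MILP solver.

```python
from itertools import combinations

# (fixed_cost, unit_cost, capacity) for each candidate pipe
pipes = [(50, 2.0, 30), (80, 1.0, 60), (40, 3.0, 20), (70, 1.5, 40)]
target = 70.0   # CO2 amount that must be captured, transported, and stored

best_cost, best_open = float("inf"), None
for k in range(1, len(pipes) + 1):
    for subset in combinations(range(len(pipes)), k):
        if sum(pipes[i][2] for i in subset) < target:
            continue   # the opened pipes cannot carry the target amount
        fixed = sum(pipes[i][0] for i in subset)
        # Given the open pipes, routing is a trivial LP: fill the cheapest
        # unit-cost pipes first.
        flow_cost, remaining = 0.0, target
        for i in sorted(subset, key=lambda j: pipes[j][1]):
            q = min(remaining, pipes[i][2])
            flow_cost += q * pipes[i][1]
            remaining -= q
        if fixed + flow_cost < best_cost:
            best_cost, best_open = fixed + flow_cost, subset
```

The fixed-versus-variable cost trade-off shown here is what the valid inequalities exploit: the linear relaxation tends to "fractionally open" pipes, and the VIs cut those solutions off.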
Contributors: Lobo, Loy Joseph (Author) / Sefair, Jorge A (Thesis advisor) / Escobedo, Adolfo (Committee member) / Kuby, Michael (Committee member) / Middleton, Richard (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Monitoring a system for deviations from standard or reference behavior is essential for many data-driven tasks. Whether it is monitoring sensor data or the interactions between system elements, such as edges in a path or transactions in a network, the goal is to detect significant changes from a reference. As technological advancements allow more data to be collected from systems, monitoring approaches should evolve to accommodate high-dimensional data and complex system settings. This dissertation introduces system-level models for monitoring tasks characterized by changes in a subset of system components, utilizing component-level information and relationships. A change may affect only a portion of the data or system (a partial change). The first three parts of this dissertation present applications and methods for detecting partial changes. The first part introduces a methodology for partial change detection in a simple, univariate setting. Changes are detected with posterior probabilities and statistical mixture models that allow only a fraction of the data to change. The second and third parts center on monitoring more complex multivariate systems modeled through networks, where the goal is to detect partial changes in the underlying network attributes and topology. Their contributions are two nonparametric, system-level monitoring techniques that consider relationships between network elements. The first algorithm, Supervised Network Monitoring (SNetM), leverages graph neural networks to transform the problem into supervised learning. The second, Supervised Network Monitoring for Partial Temporal Inhomogeneity (SNetMP), generates a network embedding and then transforms the problem into supervised learning. Finally, both SNetM and SNetMP construct measures and convert them to pseudo-probabilities, which are monitored for changes.
The last topic addresses predicting and monitoring system-level delays on paths in a transportation/delivery system. For each item, the risk of delay is quantified. Machine learning is used to build a system-level model for delay risk that integrates edge-level models, given the information available (such as environmental conditions) on the edges of a path. The outputs can then be used in a system-wide monitoring framework, and the items most at risk are identified for potential corrective actions.
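The univariate partial-change idea from the first part can be sketched with a two-component mixture fit by EM: a fraction of observations shifts to a new mean, and each point receives a posterior probability of belonging to the changed component. The data, reference mean, and shift below are illustrative, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(4)
# 80 reference observations N(0,1) plus a partial change: 20 points at N(3,1).
data = np.concatenate([rng.normal(0, 1, 80), rng.normal(3, 1, 20)])

def normal_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# EM for the changed fraction pi and changed mean mu; the reference
# component is fixed at mean 0 with unit variance.
pi, mu = 0.5, 1.0
for _ in range(100):
    # E-step: posterior probability each point came from the changed component.
    p_changed = pi * normal_pdf(data, mu)
    p_ref = (1 - pi) * normal_pdf(data, 0.0)
    post = p_changed / (p_changed + p_ref)
    # M-step: update the changed fraction and the changed-component mean.
    pi = post.mean()
    mu = (post * data).sum() / post.sum()
```

The fitted fraction `pi` plays the role of "how much of the data changed", and the per-point posteriors `post` are the quantities one would monitor.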
Contributors: Kasaei Roodsari, Maziar (Author) / Runger, George (Thesis advisor) / Escobedo, Adolfo (Committee member) / Pan, Rong (Committee member) / Shinde, Amit (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

In recent years, the drive for increased sustainability within large corporations has grown drastically. One critical measure of sustainability is the diversion rate, or the proportion of waste diverted from landfills to recycling, repurposing, or reselling. There are a variety of ways in which a company can improve its diversion rate, such as repurposing paper. A conventional method would be to simply have a recycling bin for collecting all paper, but for large companies this becomes a security issue, as confidential papers may not be safe in a traditional recycling bin. Salt River Project (SRP) has tackled this issue by hiring a third-party vendor (TPV) and having all paper placed into designated, secure shredding bins whose content is shredded upon collection and ultimately recycled into new material. However, while this effort is improving SRP's diversion rate, the question has arisen of how to make the program viable in the long term given the costs required to sustain it. To tackle this issue, this thesis focuses on creating a methodology and sampling plan to determine the appropriate level of third-party recycling service required and to guide efficient bin-sizing solutions. This will in turn allow SRP to understand how much paper waste is being produced and how accurately it is being charged for TPV services.
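One building block of such a sampling plan is a standard sample-size calculation: how many bins to audit to estimate the mean paper waste per bin within a target margin of error. The pilot standard deviation and margin below are hypothetical placeholders, not SRP figures.

```python
import math

z = 1.96          # z-score for 95% confidence
pilot_sd = 4.0    # lbs per bin, hypothetical pilot-study standard deviation
margin = 1.0      # desired margin of error, lbs

# Classic sample-size formula for estimating a mean: n = (z * s / E)^2,
# rounded up to the next whole bin.
n = math.ceil((z * pilot_sd / margin) ** 2)
```

With these placeholder values the plan would call for auditing 62 bins; tightening the margin or a noisier pilot estimate would grow the sample quadratically.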
Contributors: Holladay, Amy E. (Author) / Escobedo, Adolfo (Thesis director) / Kucukozyigit, Ali (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Collecting accurate collective decisions via crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying subjective scales. This work investigates new ways to determine collective decisions by prompting users to provide input in multiple formats. A crowdsourced task is created that aims to determine ground truth by collecting information in two different ways: rankings and numerical estimates. Results indicate that accurate collective decisions can be achieved with fewer people when ordinal and cardinal information is collected and aggregated together using consensus-based, multimodal models. We also show that presenting users with larger problems produces more valuable ordinal information and is a more efficient way to collect an aggregate ranking. As a result, we suggest that this form of input elicitation be more widely considered for future work in crowdsourcing and incorporated into future platforms to improve accuracy and efficiency.
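A minimal sketch of combining ordinal (rankings) and cardinal (numerical estimates) crowd inputs into one aggregate ranking: Borda scores from the rankings are blended with normalized estimates. The tiny data set and equal-weight blend are illustrative, not the thesis's consensus model.

```python
import numpy as np

# Each row: one worker's ranking of 4 items, listed best-first by item index.
rankings = np.array([[0, 1, 2, 3],
                     [0, 2, 1, 3],
                     [1, 0, 2, 3]])
# Each row: one worker's numerical estimates for the same 4 items (higher = better).
estimates = np.array([[9.0, 7.0, 4.0, 2.0],
                      [8.0, 5.0, 6.0, 1.0]])

n_items = rankings.shape[1]
# Ordinal side (Borda): an item in rank position r earns (n_items - 1 - r) points.
borda = np.zeros(n_items)
for row in rankings:
    for r, item in enumerate(row):
        borda[item] += n_items - 1 - r
borda /= borda.max()

# Cardinal side: average the estimates and normalize to [0, 1].
cardinal = estimates.mean(axis=0)
cardinal /= cardinal.max()

# Equal-weight blend of the two information types, then rank best-first.
combined = 0.5 * borda + 0.5 * cardinal
aggregate_ranking = [int(i) for i in np.argsort(-combined)]
```

Blending the two modalities is the key idea: each corrects weaknesses of the other (rankings wash out magnitudes; raw estimates suffer from subjective scales).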
Contributors: Kemmer, Ryan Wyeth (Author) / Escobedo, Adolfo (Thesis director) / Maciejewski, Ross (Committee member) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

The outbreak of the coronavirus has impacted retailers and the food industry, which were forced to switch to delivery services due to social distancing measures. During these times, online sales and local deliveries started to see an increase in demand, making these methods the new way of staying in business. For this reason, this research seeks to identify strategies that delivery service companies could implement to improve their operations by comparing two types of p-median models (node-based and edge-based). To simulate demand, geographical data are analyzed for the cities of San Diego and Paris. The use of districting models allows a determination of how balanced and compact the service regions within the districts are. After analyzing the variability of each demand simulation run, conclusions are made on whether one model is better than the other.
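The node-based p-median problem referenced above can be sketched by brute force on a tiny instance: choose p facility nodes minimizing the total demand-weighted distance from every node to its nearest chosen facility. The coordinates and demands are made up, not San Diego or Paris data.

```python
from itertools import combinations

nodes = [(0, 0), (1, 0), (4, 0), (5, 1), (0, 3), (2, 2)]   # demand locations
demand = [3, 1, 2, 4, 2, 1]                                # weight per node
p = 2                                                      # facilities to open

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Enumerate every size-p set of medians; each node is served by its
# nearest median, and we sum the demand-weighted distances.
best_cost, best_medians = float("inf"), None
for medians in combinations(range(len(nodes)), p):
    cost = sum(w * min(dist(nodes[i], nodes[m]) for m in medians)
               for i, w in enumerate(demand))
    if cost < best_cost:
        best_cost, best_medians = cost, medians
```

The edge-based variant studied in the thesis allows facilities along edges rather than only at nodes; the objective structure is the same.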
Contributors: Aguilar, Sarbith Anabella (Author) / Escobedo, Adolfo (Thesis director) / Juarez, Joseph (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description

Global optimization (programming) has been attracting the attention of researchers for almost a century. Linear programming (LP) and mixed integer linear programming (MILP) were well studied in the early stages of the field, and MILP methods and software tools have improved greatly in efficiency in the past few years. They are now fast and robust even for problems with millions of variables. It is therefore desirable to use MILP software to solve mixed integer nonlinear programming (MINLP) problems. For an MINLP problem to be solved by an MILP solver, its nonlinear functions must be transformed into linear ones. The most common method for this transformation is piecewise linear approximation (PLA). This dissertation will summarize the types of optimization and the most important tools and methods, and will discuss the PLA tool in depth. PLA will be done using nonuniform partitioning of the domain of the variables involved in the function to be approximated. Partial PLA models, which approximate only parts of a complicated optimization problem, will also be introduced. Computational experiments will be done, and the results will show that nonuniform partitioning and partial PLA can be beneficial.
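The benefit of nonuniform partitioning can be illustrated on a single function: approximate f(x) = sqrt(x) on [0, 4] with the same number of breakpoints placed uniformly versus concentrated where the curvature is large, and compare the worst-case errors. The breakpoint placement below is a hand-picked illustration, not the dissertation's partitioning scheme.

```python
import numpy as np

f = np.sqrt
uniform = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # equal-width partition
nonuniform = np.array([0.0, 0.0625, 0.25, 1.0, 4.0])  # dense where curvature is high

def max_error(bp):
    # Evaluate the piecewise linear interpolant through (bp, f(bp)) on a
    # fine grid and return the worst-case deviation from f.
    x = np.linspace(bp[0], bp[-1], 2001)
    approx = np.interp(x, bp, f(bp))
    return np.abs(approx - f(x)).max()

uniform_err = max_error(uniform)        # dominated by the steep segment near 0
nonuniform_err = max_error(nonuniform)  # same breakpoint count, smaller error
```

Both partitions use five breakpoints, yet the nonuniform one cuts the worst-case error by roughly a factor of three here, which is the effect the dissertation's experiments quantify at scale.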
Contributors: Alkhalifa, Loay (Author) / Mittelmann, Hans (Thesis advisor) / Armbruster, Hans (Committee member) / Escobedo, Adolfo (Committee member) / Renaut, Rosemary (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

In conventional supervised learning tasks, information retrieval from extensive collections of labeled data happens automatically at low cost, whereas in many real-world problems obtaining labeled data can be hard, time-consuming, and expensive. Consider healthcare systems, for example, where unlabeled medical images are abundant while labeling requires a considerable amount of knowledge from experienced physicians. Active learning addresses this challenge with an iterative process that selects instances from the unlabeled data to annotate and improve the supervised learner. At each step, the choice of examples to query for labels can be considered a dilemma between exploitation of the supervised learner's current knowledge and exploration of the unlabeled input features.

Motivated by the need for efficient active learning strategies, this dissertation proposes new algorithms for batch-mode, pool-based active learning. The research considers the following questions: how can unsupervised knowledge of the input features (exploration) improve learning when incorporated with supervised learning (exploitation)? How to characterize exploration in active learning when data is high-dimensional? Finally, how to adaptively make a balance between exploration and exploitation?

The first contribution proposes a new active learning algorithm, Cluster-based Stochastic Query-by-Forest (CSQBF), which provides a batch-mode strategy that accelerates learning with added value from exploration and improved exploitation scores. CSQBF balances exploration and exploitation using a probabilistic scoring criterion based on classification probabilities from a tree-based ensemble model within each data cluster.

The second contribution introduces two more query strategies, Double Margin Active Learning (DMAL) and Cluster Agnostic Active Learning (CAAL), that combine consistent exploration and exploitation modules into a coherent and unified measure for label query. Instead of assuming a fixed clustering structure, CAAL and DMAL adopt a soft-clustering strategy which provides a new approach to formalize exploration in active learning.

The third contribution addresses the challenge of dynamically making a balance between exploration and exploitation criteria throughout the active learning process. Two adaptive algorithms are proposed based on feedback-driven bandit optimization frameworks that elegantly handle this issue by learning the relationship between exploration-exploitation trade-off and an active learner's performance.
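The exploration-exploitation idea in batch-mode, pool-based active learning can be sketched generically: score unlabeled points by a blend of model uncertainty (exploitation) and distance to already-labeled points (exploration), then query the top batch. This is a generic illustration with a stand-in classifier, not CSQBF, DMAL, or CAAL.

```python
import numpy as np

rng = np.random.default_rng(5)
pool = rng.uniform(-3, 3, (200, 2))      # unlabeled pool
labeled = rng.uniform(-3, 3, (10, 2))    # points already labeled

def predict_proba(x):
    # Stand-in classifier: logistic score along a hypothetical fixed boundary.
    return 1 / (1 + np.exp(-(x[:, 0] + x[:, 1])))

# Exploitation: uncertainty is highest where the predicted probability is 0.5.
p = predict_proba(pool)
uncertainty = 1 - np.abs(2 * p - 1)

# Exploration: distance to the nearest labeled point, normalized to [0, 1].
dists = np.linalg.norm(pool[:, None, :] - labeled[None, :, :], axis=-1)
exploration = dists.min(axis=1)
exploration /= exploration.max()

# Blend the two criteria and query the top batch. An adaptive method, like
# the bandit-based algorithms above, would tune alpha from feedback instead
# of fixing it.
alpha = 0.5
scores = alpha * uncertainty + (1 - alpha) * exploration
batch = np.argsort(-scores)[:5]
```

The dissertation's contributions replace each piece of this sketch: tree-ensemble probabilities within clusters (CSQBF), soft-clustering exploration terms (DMAL/CAAL), and bandit-learned trade-off weights.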
Contributors: Shams, Ghazal (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Escobedo, Adolfo (Committee member) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2020