Matching Items (20)
Description

The novel Coronavirus Disease 2019 exposed issues in the supply chain for N95 face masks. The demand for protective face masks spiked globally and domestically due to the unexpected outbreak of the pandemic. An important issue was the dependency on N95 mask production in countries abroad. The focus on face masks in this thesis accounts for all models of the N95 mask.

This thesis focuses on onshore and offshore production of N95 face masks before and during the pandemic. Specifically, it examines (1) the production of masks in 2019; (2) production changes at 3M, Honeywell, and Prestige Ameritech; (3) the observations made by All The Things LLC, a broker for face masks; (4) the rise of counterfeit masks and actions taken to stop counterfeit production; (5) actions taken by the federal government to aid in production and distribution; and (6) future research opportunities on this topic. Research for this project concluded in February 2021.

This thesis defends the critical need for more domestically produced N95 masks. The U.S. needs to increase the number of N95 masks produced domestically, manage the Strategic National Stockpile to eliminate masks past their shelf life, and create a plan to replenish the stockpile to reduce the possibility of a shortage when the next public health emergency takes place.

Contributors: Parr, Jacqueline Elizabeth (Author) / Keane, Katy (Thesis director) / Rogers, Dale (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Department of Management and Entrepreneurship (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning of the relevant patterns. This dissertation proposes TS representations and methods for supervised TS analysis. The approaches combine new representations that handle translations and dilations of patterns with bag-of-features strategies and tree-based ensemble learning. This provides flexibility in handling time-warped patterns in a computationally efficient way. The ensemble learners provide a classification framework that can handle high-dimensional feature spaces, multiple classes and interaction between features. The proposed representations are useful for classification and interpretation of the TS data of varying complexity. The first contribution handles the problem of time warping with a feature-based approach. An interval selection and local feature extraction strategy is proposed to learn a bag-of-features representation. This is distinctly different from common similarity-based time warping. This allows for additional features (such as pattern location) to be easily integrated into the models. The learners have the capability to account for the temporal information through the recursive partitioning method. The second contribution focuses on the comprehensibility of the models. A new representation is integrated with local feature importance measures from tree-based ensembles, to diagnose and interpret time intervals that are important to the model. Multivariate time series (MTS) are especially challenging because the input consists of a collection of TS and both features within TS and interactions between TS can be important to models.
Another contribution uses a different representation to produce computationally efficient strategies that learn a symbolic representation for MTS. Relationships between the multiple TS, nominal and missing values are handled with tree-based learners. Applications such as speech recognition, medical diagnosis and gesture recognition are used to illustrate the methods. Experimental results show that the TS representations and methods provide better results than competitive methods on a comprehensive collection of benchmark datasets. Moreover, the proposed approaches naturally provide solutions to similarity analysis, predictive pattern discovery and feature selection.
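As a toy illustration of the interval-based bag-of-features idea described above, the sketch below summarizes fixed intervals of a series by (mean, slope) features. The function names, data, and the nearest-neighbor rule are all invented for illustration; the dissertation pairs such representations with tree-based ensemble learners, not reproduced here.

```python
# Toy sketch of an interval-based bag-of-features time-series representation.

def interval_features(series, n_intervals=4):
    """Summarize fixed intervals of a series by (mean, slope) pairs."""
    step = len(series) // n_intervals
    feats = []
    for i in range(n_intervals):
        window = series[i * step:(i + 1) * step]
        n = len(window)
        mean = sum(window) / n
        # Least-squares slope over the window.
        xbar = (n - 1) / 2
        denom = sum((x - xbar) ** 2 for x in range(n))
        slope = sum((x - xbar) * (y - mean) for x, y in zip(range(n), window)) / denom
        feats.extend([mean, slope])
    return feats

def nearest_neighbor(train_feats, labels, query_feats):
    """Classify by squared Euclidean distance in feature space."""
    dists = [sum((a - b) ** 2 for a, b in zip(f, query_feats)) for f in train_feats]
    return labels[dists.index(min(dists))]

# Rising vs. falling patterns become separable in feature space.
rising = [i * 0.5 for i in range(16)]
falling = [8 - i * 0.5 for i in range(16)]
train = [interval_features(rising), interval_features(falling)]
query = interval_features([i * 0.4 + 1 for i in range(16)])  # another rising series
print(nearest_neighbor(train, ["rising", "falling"], query))  # -> rising
```

Because each interval contributes its own features, pattern location is retained, unlike purely similarity-based time-warping comparisons.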
Contributors: Baydogan, Mustafa Gokce (Author) / Runger, George C. (Thesis advisor) / Atkinson, Robert (Committee member) / Gel, Esma (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The outbreak of the coronavirus has impacted retailers and the food industry after they were forced to switch to delivery services due to social distancing measures. During these times, online sales and local deliveries started to see an increase in their demand, making these methods the new way of staying in business. For this reason, this research seeks to identify strategies that could be implemented by delivery service companies to improve their operations by comparing two types of p-median models (node-based and edge-based). To simulate demand, geographical data will be analyzed for the cities of San Diego and Paris. The usage of districting models will allow the determination of how balanced and compact the service regions are within the districts. After analyzing the variability of each demand simulation run, conclusions will be made on whether one model is better than the other.
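The node-based p-median model mentioned above can be sketched by brute force on a toy instance: choose p facility nodes to minimize total demand-to-nearest-facility distance. The distance matrix and function name below are hypothetical, and the thesis applies such models to real geographic data for San Diego and Paris rather than this small enumeration.

```python
# Minimal node-based p-median sketch, assuming unit demand at every node.
from itertools import combinations

def p_median(dist, p):
    """Return the p facility nodes minimizing total distance to nearest facility."""
    n = len(dist)
    best_cost, best_set = float("inf"), None
    for facilities in combinations(range(n), p):
        cost = sum(min(dist[i][j] for j in facilities) for i in range(n))
        if cost < best_cost:
            best_cost, best_set = cost, facilities
    return best_set, best_cost

# 5-node toy instance (symmetric distances between demand points).
dist = [
    [0, 2, 3, 7, 8],
    [2, 0, 2, 6, 7],
    [3, 2, 0, 4, 5],
    [7, 6, 4, 0, 2],
    [8, 7, 5, 2, 0],
]
print(p_median(dist, 2))  # -> ((1, 3), 6)
```

An edge-based variant would instead allow facilities to sit along network edges; realistic instances require integer-programming solvers rather than enumeration over all subsets.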
Contributors: Aguilar, Sarbith Anabella (Author) / Escobedo, Adolfo (Thesis director) / Juarez, Joseph (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description
Collecting accurate collective decisions via crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying subjective scales. This work investigates new ways to determine collective decisions by prompting users to provide input in multiple formats. A crowdsourced task is created that aims to determine ground truth by collecting information in two different ways: rankings and numerical estimates. Results indicate that accurate collective decisions can be achieved with fewer people when ordinal and cardinal information is collected and aggregated together using consensus-based, multimodal models. We also show that presenting users with larger problems produces more valuable ordinal information and is a more efficient way to collect an aggregate ranking. As a result, we suggest input elicitation be more widely considered for future work in crowdsourcing and incorporated into future platforms to improve accuracy and efficiency.
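As a rough illustration of combining ordinal and cardinal inputs, the sketch below merges Borda scores from worker rankings with rankings induced from numerical estimates. The equal weighting, function names, and data are assumptions made for illustration; the thesis's consensus-based multimodal models are considerably more sophisticated.

```python
# Simplified ordinal + cardinal aggregation via Borda-style scoring.

def borda_scores(ranking):
    """Ranking is a best-to-worst list of items; higher score = better."""
    n = len(ranking)
    return {item: n - 1 - pos for pos, item in enumerate(ranking)}

def aggregate(rankings, estimates):
    """Combine Borda scores from rankings with rankings induced from estimates."""
    totals = {}
    for r in rankings:
        for item, s in borda_scores(r).items():
            totals[item] = totals.get(item, 0) + s
    for est in estimates:  # est maps item -> numeric value, larger = better
        induced = sorted(est, key=est.get, reverse=True)
        for item, s in borda_scores(induced).items():
            totals[item] = totals.get(item, 0) + s
    return sorted(totals, key=totals.get, reverse=True)

rankings = [["a", "b", "c"], ["a", "c", "b"]]       # ordinal inputs
estimates = [{"a": 9.1, "b": 4.0, "c": 6.5}]        # cardinal input
print(aggregate(rankings, estimates))  # -> ['a', 'c', 'b']
```

Here the single numerical estimate breaks the tie between b and c that the two rankings alone leave unresolved, which is the intuition behind mixing input formats.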
Contributors: Kemmer, Ryan Wyeth (Author) / Escobedo, Adolfo (Thesis director) / Maciejewski, Ross (Committee member) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
The listing price of residential rental real estate is dependent upon property-specific attributes. These attributes involve data that can be tabulated as categorical and continuous predictors. The forecasting model presented in this paper is developed using publicly available, property-specific information sourced from the Zillow and Trulia online real estate databases. The following fifteen predictors were tracked for forty-eight rental listings in the 85281 ZIP code: housing type, square footage, number of baths, number of bedrooms, distance to Arizona State University’s Tempe Campus, crime level of the neighborhood, median age range of the neighborhood population, percentage of the neighborhood population that is married, median year of construction of the neighborhood, percentage of the population commuting longer than thirty minutes, percentage of neighborhood homes occupied by renters, percentage of the population commuting by transit, and the number of restaurants, grocery stores, and nightlife venues within a one-mile radius of the property. Through regression analysis, the significant predictors of the listing price of a rental property in the 85281 ZIP code were discerned. These predictors were used to form a forecasting model. This forecasting model explains 75.5% of the variation in listing prices of residential rental real estate in the 85281 ZIP code.
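The kind of regression described above can be illustrated with a single predictor. The square-footage data and resulting coefficients below are made up for the sketch, and the actual thesis model uses fifteen predictors fit to Zillow and Trulia data rather than this toy fit.

```python
# Single-predictor least-squares sketch of a listing-price regression.

def fit_simple_ols(xs, ys):
    """Least-squares fit y = b0 + b1*x, returning (b0, b1, r_squared)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
        (x - xbar) ** 2 for x in xs
    )
    b0 = ybar - b1 * xbar
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return b0, b1, 1 - ss_res / ss_tot  # R^2 = explained share of variance

# Hypothetical (square footage, monthly rent) pairs.
sqft = [600, 750, 900, 1100, 1300]
rent = [950, 1100, 1250, 1500, 1700]
b0, b1, r2 = fit_simple_ols(sqft, rent)
print(round(b1, 3), round(r2, 3))
```

The reported 75.5% figure in the thesis is exactly this R-squared quantity, computed for the full fifteen-predictor model instead of one predictor.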
Contributors: Schuchter, Grant (Author) / Clough, Michael (Thesis director) / Escobedo, Adolfo (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The purpose of this creative project was to investigate the process a start-up or small business must complete to have a sellable apparel product manufactured. The initial goal of the project was to experience the manufacturing process from start to finish and complete a full production run with a professional manufacturer. The conclusion was that start-ups and small businesses will have to begin production within the United States.
Contributors: Bour, Melissa (Author) / Sewell, Dennita (Thesis director) / Rogers, Dale (Committee member) / Ellis, Naomi (Committee member) / Dean, Herberger Institute for Design and the Arts (Contributor) / Department of Supply Chain Management (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Amazon Prime Air is the innovative new service that promises automated drone delivery in thirty minutes or less. The platform has not yet been brought to market, but there is a plethora of compelling data available that suggests it will be a unique and highly disruptive business segment for Amazon. The aim of this thesis is to analyze the framework laid out by Amazon.com, Inc. for their anticipated Prime Air drone delivery platform, and offer our recommendations for what steps the e-commerce giant should take moving forward. Following a brief recap of the company's founding and a breakdown of its various business segments, we will begin our analysis by examining past strategic decisions that Amazon has made which have directly contributed to their current market position. It is our goal to construct a narrative of what events led the company to begin developing a fleet of automated delivery vehicles. Following this history lesson, we will review and critique the existing elements of Amazon's Prime Air platform, and explore any possible alternatives that they could have taken to optimize the development of this exciting new technology. Criticisms will touch upon elements such as cost efficiencies, brand management, and utilization of infrastructure, to name but a few. These criticisms will be based upon data sourced from Amazon's available material as well as comments from market analysts and journalists. The culminating element of our analysis will be to offer our professional recommendations as to what we believe are the next logical steps that Amazon should take for their Prime Air platform. These recommendations will be informed by our criticisms and our understanding of Amazon as a corporation. This chapter will be largely concerned with guiding Amazon towards a fully optimized drone delivery platform. Our recommendations will be based upon our extensive experience concerning cost and logistical efficiencies, as well as our knowledge of Amazon as a corporation.
We will offer succinct suggestions for Amazon's immediate needs as well as long-term solutions to lingering obstacles that they may face.
Contributors: McCaleb, Nicholas (Co-author) / Glynn, Reagan (Co-author) / Choi, Thomas (Thesis director) / Rogers, Dale (Committee member) / Department of Supply Chain Management (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The rank aggregation problem has ubiquitous applications in operations research, artificial intelligence, computational social choice, and various other fields. Generally, rank aggregation is utilized whenever a set of judges (human or non-human) express their preferences over a set of items, and it is necessary to find a consensus ranking that best represents these preferences collectively. Many real-world instances of this problem involve a very large number of items, include ties, and/or contain partial information, which poses a challenge to decision-makers. This work makes several contributions to overcoming these challenges. Most attention on this problem has focused on an NP-hard distance-based variant known as Kemeny aggregation, for which solution approaches with provable guarantees that can handle difficult large-scale instances remain elusive. Firstly, this work introduces exact and approximate methodologies inspired by the social choice foundations of the problem, namely the Condorcet criterion, to decompose the problem. To deal with instances where exact partitioning does not yield many subsets, it proposes Approximate Condorcet Partitioning, which is a scalable solution technique capable of handling large-scale instances while providing provable guarantees. Secondly, this work delves into the rank aggregation problem under the generalized Kendall-tau distance, which contains Kemeny aggregation as a special case. This new problem provides a robust and highly flexible framework for handling ties. First, it derives exact and heuristic solution methods for the generalized problem. Second, it introduces a novel social choice property that encloses existing variations of the Condorcet criterion as special cases. Thirdly, this work focuses on top-k list aggregation. Top-k lists are a special form of item orderings wherein out of n total items only a small number of them, k, are explicitly ordered.
Top-k lists are being increasingly utilized in various fields including recommendation systems, information retrieval, and machine learning. This work introduces exact and inexact methods for consolidating a collection of heterogeneous top-k lists. Furthermore, the strength of the proposed exact formulations is analyzed from a polyhedral point of view. Finally, this work identifies the top-100 U.S. universities by consolidating four prominent university rankings to assess the computational implications of this problem.
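Kemeny aggregation, as discussed above, seeks the ranking minimizing the total Kendall-tau distance to all input rankings. The brute-force sketch below works only for tiny, tie-free instances with complete rankings; it is an illustration of the objective, not the dissertation's scalable exact or Approximate Condorcet Partitioning methods.

```python
# Brute-force Kemeny aggregation over all permutations (tiny instances only).
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of item pairs ordered differently by the two rankings."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def kemeny(rankings):
    """Ranking minimizing total Kendall-tau distance to all input rankings."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda cand: sum(kendall_tau(list(cand), r) for r in rankings),
    )

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(kemeny(votes))  # -> ('a', 'b', 'c')
```

In this instance a beats b and c, and b beats c, in pairwise majorities, so the Kemeny ranking agrees with the Condorcet order; the Condorcet criterion is exactly the structure the partitioning methods above exploit.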
Contributors: Akbari, Sina (Author) / Escobedo, Adolfo (Thesis advisor) / Byeon, Geunyeong (Committee member) / Sefair, Jorge (Committee member) / Wu, Shin-Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Matching or stratification is commonly used in observational studies to remove bias due to confounding variables. Analyzing matched data sets requires specific methods which handle dependency among observations within a stratum. Also, modern studies often include hundreds or thousands of variables. Traditional methods for matched data sets are challenged by high-dimensional settings, mixed-type variables (numerical and categorical), and nonlinear and interaction effects. Furthermore, machine learning research for such structured data is quite limited. This dissertation addresses this important gap and proposes machine learning models for identifying informative variables from high-dimensional matched data sets. The first part of this dissertation proposes a machine learning model to identify informative variables from high-dimensional matched case-control data sets. The outcome of interest in this study design is binary (case or control), and each stratum is assumed to have one unit from each outcome level. The proposed method, which is referred to as Matched Forest (MF), is effective for large numbers of variables and for identifying interaction effects. The second part of this dissertation proposes three enhancements of the MF algorithm. First, a regularization framework is proposed to improve variable selection performance in excessively high-dimensional settings. Second, a classification method is proposed to classify unlabeled pairs of data. Third, two metrics are proposed to estimate the effects of important variables identified by MF. The third part proposes a machine learning model based on Neural Networks to identify important variables from a more generalized matched case-control data set where each stratum has one unit from the case outcome level and more than one unit from the control outcome level. This method, which is referred to as Matched Neural Network (MNN), performs better than current algorithms at identifying variables with interaction effects.
Lastly, a generalized machine learning model is proposed to identify informative variables from high-dimensional matched data sets where the outcome has more than two levels. This method outperforms existing algorithms in the literature in identifying variables with complex nonlinear and interaction effects.
Contributors: Shomal Zadeh, Nooshin (Author) / Runger, George (Thesis advisor) / Montgomery, Douglas (Committee member) / Shinde, Shilpa (Committee member) / Escobedo, Adolfo (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Monitoring a system for deviations from standard or reference behavior is essential for many data-driven tasks. Whether it is monitoring sensor data or the interactions between system elements, such as edges in a path or transactions in a network, the goal is to detect significant changes from a reference. As technological advancements allow for more data to be collected from systems, monitoring approaches should evolve to accommodate the greater collection of high-dimensional data and complex system settings. This dissertation introduces system-level models for monitoring tasks characterized by changes in a subset of system components, utilizing component-level information and relationships. A change may affect only a portion of the data or system (a partial change). The first three parts of this dissertation present applications and methods for detecting partial changes. The first part introduces a methodology for partial change detection in a simple, univariate setting: changes are detected with posterior probabilities and statistical mixture models which allow only a fraction of the data to change. The second and third parts of this dissertation center around monitoring more complex multivariate systems modeled through networks, where the goal is to detect partial changes in the underlying network attributes and topology. The contributions of these parts are two non-parametric system-level monitoring techniques that consider relationships between network elements. The first algorithm, Supervised Network Monitoring (SNetM), leverages Graph Neural Networks to transform the problem into supervised learning; the second, Supervised Network Monitoring for Partial Temporal Inhomogeneity (SNetMP), first generates a network embedding and then transforms the problem in the same way. Both SNetM and SNetMP construct measures that are transformed into pseudo-probabilities and monitored for changes.
The last topic addresses predicting and monitoring system-level delays on paths in a transportation/delivery system. For each item, the risk of delay is quantified. Machine learning is used to build a system-level model for delay risk, given the information available (such as environmental conditions) on the edges of a path, which integrates edge models. The outputs can then be used in a system-wide monitoring framework, and items most at risk are identified for potential corrective actions.
Contributors: Kasaei Roodsari, Maziar (Author) / Runger, George (Thesis advisor) / Escobedo, Adolfo (Committee member) / Pan, Rong (Committee member) / Shinde, Amit (Committee member) / Arizona State University (Publisher)
Created: 2021