This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, The Honors College theses submitted by undergraduate students.


Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system, or full-system level. Two issues considered in this work are: (1) information about design ideas is incomplete, informal, and sketchy; and (2) designers often work at multiple levels, so different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis looks at the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: (1) the typical types of information available after an ideation session; (2) the typical types of technical evaluations done in early stages; and (3) how to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and each entry is expressed as some combination of a sketch, text, and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables, and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created and users were given instructions about incorporating them. To support quick analysis, differential equations are transformed into algebraic equations (by replacing differential terms with steady-state differences), only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool was implemented in MATLAB, and a number of case studies are presented to show how the tool works.
ContributorsKhorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created2014
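To make the evaluation strategy above concrete, here is a minimal Python sketch of interval-arithmetic propagation through chained physical effects, assuming a toy two-effect chain (motor torque, then lever force) networked through a shared variable; the effect equations, variable names, and numeric ranges are illustrative, not the thesis's actual 110-effect library.

```python
# Illustrative sketch: physical effects as algebraic equations over
# interval-valued variables, chained through a shared variable (`torque`).
# All effects and numbers below are invented for demonstration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

current = Interval(1.8, 2.2)         # amps, with design uncertainty
k_t     = Interval(0.09, 0.11)       # N*m per amp (motor constant)
inv_arm = Interval(1/0.26, 1/0.24)   # 1/m, reciprocal of lever arm length

torque = k_t * current               # motor effect: torque = k_t * current
force  = torque * inv_arm            # lever effect: force = torque / arm
print(f"feasible force range: [{force.lo:.2f}, {force.hi:.2f}] N")
```

The interval output answers a feasibility question directly: if the required force lies outside the computed range for every design alternative, the concept can be rejected without high-fidelity simulation.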
Description
This dissertation presents methods for addressing research problems that currently can only adequately be solved using Quality Reliability Engineering (QRE) approaches, especially accelerated life testing (ALT) of electronic printed wiring boards, with applications to avionics circuit boards. The methods presented in this research are generally applicable to circuit boards, but the data generated and their analysis are for high-performance avionics. Aircraft equipment manufacturers typically require a 20-year expected life for avionics equipment, and therefore ALT is the only practical way of performing life test estimates. Both thermal and vibration ALT are performed and the induced failures analyzed to resolve industry questions relating to the introduction of lead-free solder products and processes into high-reliability avionics. In chapter 2, thermal ALT using an industry-standard failure machine implementing the Interconnect Stress Test (IST), which simulates circuit board life data, is compared to real production failure data by likelihood ratio tests to arrive at a mechanical theory. This mechanical theory results in a statistically equivalent energy bound such that failure distributions below a specific energy level are considered to be from the same distribution, thus allowing testers to quantify parameter settings in IST prior to life testing. In chapter 3, vibration ALT comparing tin-lead and lead-free circuit board solder designs involves the use of the likelihood ratio (LR) test to assess both complete failure data and S-N curves, presenting methods for analyzing data. Failure data are analyzed using regression and two-way analysis of variance (ANOVA) and reconciled with the LR test results, indicating that a costly aging pre-process may be eliminated in certain cases. In chapter 4, side-by-side tin-lead and lead-free solder black-box designs are life tested under vibration ALT. Commercial models from strain data do not exist at the low levels associated with life testing and need to be developed, because the testing performed and presented here indicates that tin-lead and lead-free solders behave similarly. In addition, earlier failures due to vibration, such as connector failure modes, will occur before solder interconnect failures.
ContributorsJuarez, Joseph Moses (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Gel, Esma (Committee member) / Mignolet, Marc (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2012
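As a rough illustration of the likelihood ratio comparison described above, the Python sketch below fits a Weibull model to each solder group and to the pooled data, then tests whether one common distribution suffices. The failure-cycle data are fabricated placeholders, and the Weibull choice is an assumption for the sketch, not necessarily the dissertation's model.

```python
# Hedged sketch: LR test of "two separate Weibulls" vs. "one pooled Weibull".
# Placeholder data only; a small LR statistic (large p) would suggest the
# two groups' failures come from the same distribution.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
tin_lead  = rng.weibull(2.0, 40) * 900.0   # invented failure cycles
lead_free = rng.weibull(2.0, 40) * 950.0

def weibull_loglik(data):
    # fit shape and scale with location fixed at zero
    c, loc, scale = stats.weibull_min.fit(data, floc=0)
    return stats.weibull_min.logpdf(data, c, loc, scale).sum()

ll_separate = weibull_loglik(tin_lead) + weibull_loglik(lead_free)
ll_pooled   = weibull_loglik(np.concatenate([tin_lead, lead_free]))

lr_stat = 2.0 * (ll_separate - ll_pooled)  # chi-square, df = 2 extra params
p_value = stats.chi2.sf(lr_stat, df=2)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3f}")
```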
Description
For more than twenty years, clinical researchers have been publishing data regarding the incidence and risk of adverse events (AEs) incurred during hospitalizations. Hospitals have standard operating policies and procedures (SOPP) to protect patients from AEs. The AE specifics (rates, SOPP failures, timing, and risk factors) during heart failure (HF) hospitalizations are unknown. There were 1,722 patients discharged with a primary diagnosis of HF from an academic hospital between January 2005 and December 2007. Three hundred eighty-one patients experienced 566 AEs, classified into four categories: medication (43.9%), infection (18.9%), patient care (26.3%), or procedural (10.9%). Three distinct analyses were performed: 1) the patient's perspective of SOPP reliability, including cumulative distribution and hazard functions of time to AEs; 2) a Cox proportional hazards model to determine independent patient-specific risk factors for AEs; and 3) the hospital administration's perspective of SOPP reliability through the three years of the study, including cumulative distribution and hazard functions of time between AEs and moving range statistical process control (SPC) charts for days between failures of each type. This is the first study, to our knowledge, to consider the reliability of SOPP from both the patient's and the hospital administration's perspectives. AE rates in hospitalized patients are similar to other recently published reports and did not improve during the study period. Operations research methodologies will be necessary to improve the reliability of care delivered to hospitalized patients.
ContributorsHuddleston, Jeanne (Author) / Fowler, John (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Gel, Esma (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created2012
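A minimal sketch of the moving-range control chart the study applies to days between adverse events appears below; the event spacings are invented placeholders, and 3.267 is the standard D4 constant for moving ranges of size two (D3 = 0, so the lower limit is zero).

```python
# Hedged sketch: moving-range SPC chart for days between failures.
# A moving range above the UCL would signal unusual clustering of events.

import numpy as np

days_between_aes = np.array([3, 1, 4, 2, 6, 1, 2, 5, 3, 1, 7, 2], float)

moving_range = np.abs(np.diff(days_between_aes))
mr_bar = moving_range.mean()

ucl = 3.267 * mr_bar   # D4 * MR-bar for subgroups of size 2
lcl = 0.0              # D3 = 0 for n = 2

signals = np.where(moving_range > ucl)[0]
print(f"MR-bar = {mr_bar:.2f}, UCL = {ucl:.2f}")
print(f"signals at transitions: {signals.tolist() or 'none'}")
```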
Description
Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science, and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning of the relevant patterns. This dissertation proposes TS representations and methods for supervised TS analysis. The approaches combine new representations that handle translations and dilations of patterns with bag-of-features strategies and tree-based ensemble learning. This provides flexibility in handling time-warped patterns in a computationally efficient way. The ensemble learners provide a classification framework that can handle high-dimensional feature spaces, multiple classes, and interactions between features. The proposed representations are useful for classification and interpretation of TS data of varying complexity. The first contribution handles the problem of time warping with a feature-based approach. An interval selection and local feature extraction strategy is proposed to learn a bag-of-features representation. This is distinctly different from common similarity-based time warping, and it allows additional features (such as pattern location) to be easily integrated into the models. The learners can account for temporal information through the recursive partitioning method. The second contribution focuses on the comprehensibility of the models. A new representation is integrated with local feature importance measures from tree-based ensembles to diagnose and interpret the time intervals that are important to the model. Multivariate time series (MTS) are especially challenging because the input consists of a collection of TS, and both features within a TS and interactions between TS can be important to models. Another contribution uses a different representation to produce computationally efficient strategies that learn a symbolic representation for MTS. Relationships between the multiple TS, nominal values, and missing values are handled with tree-based learners. Applications such as speech recognition, medical diagnosis, and gesture recognition are used to illustrate the methods. Experimental results show that the TS representations and methods provide better results than competitive methods on a comprehensive collection of benchmark datasets. Moreover, the proposed approaches naturally provide solutions to similarity analysis, predictive pattern discovery, and feature selection.
ContributorsBaydogan, Mustafa Gokce (Author) / Runger, George C. (Thesis advisor) / Atkinson, Robert (Committee member) / Gel, Esma (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2012
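The interval-based bag-of-features idea can be sketched in a few lines: summarize intervals of each series with local statistics, then learn with a tree ensemble. The Python toy below is a simplification under stated assumptions (synthetic two-class data, fixed rather than randomly selected intervals), not the dissertation's exact algorithm.

```python
# Hedged sketch: interval features (mean, std, slope per interval) feed a
# random forest, handling a class pattern that appears at random locations.
# Data and interval scheme are illustrative inventions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, length = 200, 100
X_raw = rng.normal(size=(n, length))
y = rng.integers(0, 2, size=n)
for i in np.where(y == 1)[0]:          # class 1 carries a bump at a
    start = rng.integers(20, 60)       # random (time-warped) location
    X_raw[i, start:start + 20] += 1.5

def interval_features(series, intervals):
    feats = []
    for lo, hi in intervals:           # which interval a feature comes
        seg = series[lo:hi]            # from encodes pattern location
        slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
        feats += [seg.mean(), seg.std(), slope]
    return feats

intervals = [(lo, lo + 10) for lo in range(0, 90, 10)]
X = np.array([interval_features(s, intervals) for s in X_raw])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"bag-of-features CV accuracy: {scores.mean():.3f}")
```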
Description
The reliability assessment of future distribution networks is an important issue in power engineering for both utilities and customers, due to the increasing demand for more reliable service with less interruption frequency and duration. This research consists of two main parts related to the evaluation of future distribution system reliability. An innovative algorithm named the encoded Markov cut set (EMCS) is proposed to evaluate the reliability of the networked power distribution system. The proposed algorithm is based on the identification of circuit minimal tie sets using the concept of Petri nets. Prime number encoding and unique prime factorization are then utilized to add more flexibility in communicating between the system states, and to classify the states as tie sets, cut sets, or minimal cut sets. Different reduction and truncation techniques are proposed to reduce the size of the state space. The Markov model is used to compute the availability, mean time to failure, and failure frequency of the network. A well-known test bed, the Roy Billinton Test System (RBTS), is used to illustrate the analysis, and different load and system reliability indices are calculated. The method shown is algorithmic and appears suitable for off-line comparison of alternative secondary distribution system designs on the basis of their reliability. The second part assesses the impact of conventional and renewable distributed generation (DG) on the reliability of the future distribution system. This takes into account the variability of the power output of renewable DG, such as wind and solar DGs, and the chronological nature of the load demand. The stochastic nature of the renewable resources and its influence on the reliability of the system are modeled and studied by computing the adequacy transition rate. Then, an integrated Markov model that incorporates the DG adequacy transition rate, DG mechanical failure, and starting and switching probability is proposed and utilized to give accurate results for the DG reliability impact. The main focus in this research is on conventional, solar, and wind DG units; however, the technique used appears to be applicable to any renewable energy source.
ContributorsAlmuhaini, Mohammad (Author) / Heydt, Gerald (Thesis advisor) / Ayyanar, Raja (Committee member) / Gel, Esma (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created2012
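The prime-encoding step described above can be illustrated compactly: assign each component a distinct prime so any component set maps to a unique product, and set containment reduces to divisibility. The four-component network and its minimal cut sets below are invented for illustration, not the RBTS model analyzed in the dissertation.

```python
# Hedged sketch: prime number encoding of component sets. By unique prime
# factorization, a failure state contains a minimal cut set exactly when
# the cut set's code divides the state's code.

from math import prod

components = ["line1", "line2", "xfmr", "breaker"]
primes = dict(zip(components, [2, 3, 5, 7]))

def encode(component_set):
    return prod(primes[c] for c in component_set)

# Illustrative minimal cut sets: losing all members disconnects the load.
minimal_cut_sets = [{"line1", "line2"}, {"xfmr"}]
mcs_codes = [encode(s) for s in minimal_cut_sets]

def is_failed_state(failed_components):
    state = encode(failed_components)
    return any(state % code == 0 for code in mcs_codes)

print(is_failed_state({"line1"}))            # False: parallel line survives
print(is_failed_state({"line1", "line2"}))   # True: both lines down
print(is_failed_state({"xfmr", "breaker"}))  # True: contains {xfmr}
```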
Description
Every year, more than 11 million maritime containers and 11 million commercial trucks arrive in the United States, carrying all types of imported goods. As it would be costly to inspect every container, only a fraction of them are inspected before being allowed to proceed into the United States. This dissertation proposes a decision support system that aims to allocate the scarce inspection resources at a land port of entry (L-POE) to minimize the different costs associated with the inspection process, including those associated with delaying the entry of legitimate imports. Given the ubiquity of sensors in all aspects of the supply chain, it is necessary to have automated decision systems that incorporate the information provided by these sensors and other possible channels into the inspection planning process. The inspection planning system proposed in this dissertation decomposes the inspection effort allocation process into two phases: primary and detailed inspection planning. The former helps decide what to inspect, and the latter how to conduct the inspections. A multi-objective optimization (MOO) model is developed for primary inspection planning. This model tries to balance the costs of conducting inspections, direct and expected, and the waiting time of the trucks. The resulting model is exploited in two different ways: one is to construct a complete or partial efficient frontier for the MOO model with the diversity of Pareto-optimal solutions maximized; the other is to evaluate a given inspection plan and provide possible suggestions for improvement. The methodologies are described in detail and case studies are provided. The case studies show that this MOO-based primary planning model can effectively pick out the non-conforming trucks to inspect, while balancing the costs and waiting time.
ContributorsXue, Liangjie (Author) / Villalobos, Jesus René (Thesis advisor) / Gel, Esma (Committee member) / Runger, George C. (Committee member) / Maltz, Arnold (Committee member) / Arizona State University (Publisher)
Created2012
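As a hedged toy version of the trade-off the MOO model balances, the sketch below scans the fraction of trucks inspected and records two conflicting objectives: operating cost (inspection plus queueing delay, here modeled as an M/M/1 queue) versus the expected cost of non-conforming trucks slipping through. All rates, costs, and the queueing model are illustrative assumptions, not the dissertation's formulation.

```python
# Hedged sketch: enumerate inspection fractions and report the efficient
# frontier between operating cost and missed-truck risk. Numbers invented.

import numpy as np

arrival_rate = 40.0            # trucks/hour at the port of entry
service_rate = 60.0            # inspection lane throughput, trucks/hour
p_nonconforming = 0.02         # prior probability a truck is non-conforming
penalty_missed = 5000.0        # cost of a non-conforming truck getting through
cost_per_inspection = 25.0
cost_per_wait_hour = 80.0

plans = []
for f in np.linspace(0.05, 0.95, 19):      # fraction of trucks inspected
    lam = f * arrival_rate
    if lam >= service_rate:
        continue                           # lane saturated, infeasible
    wq = lam / (service_rate * (service_rate - lam))   # M/M/1 queue wait (h)
    ops_cost = lam * cost_per_inspection + lam * wq * cost_per_wait_hour
    risk_cost = (1 - f) * arrival_rate * p_nonconforming * penalty_missed
    plans.append((f, ops_cost, risk_cost))

# keep Pareto-efficient plans: no other plan is at least as good on both
# objectives and strictly better on one
pareto = [p for p in plans
          if not any((q[1] <= p[1] and q[2] < p[2]) or
                     (q[1] < p[1] and q[2] <= p[2]) for q in plans)]
for f, ops_cost, risk_cost in pareto:
    print(f"inspect {f:.0%}: ops ${ops_cost:.0f}/h, risk ${risk_cost:.0f}/h")
```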
Description
In recent years, Operations Research (OR) has had a significant impact on improving the performance of hospital Emergency Departments (EDs). This includes improving a wide range of processes involving patient flow from the initial call to the ED through disposition, discharge home, or admission to the hospital. We mainly seek to illustrate the benefit of OR in EDs, and provide an overview of research performed in this vein to assist both researchers and practitioners. We also elaborate on possibilities for future researchers by shedding light on some less studied aspects that can have valuable impacts on practice.
ContributorsAustin, Garrett Alexander (Author) / Saghafian, Soroush (Thesis director) / Gel, Esma (Committee member) / Traub, Stephen (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2013-12
Description
Public health surveillance is a special case of the general problem where counts (or rates) of events are monitored for changes. Modern data complement event counts with many additional measurements (such as geographic, demographic, and others) that comprise high-dimensional covariates. This leads to an important challenge: to detect a change that occurs only within a region, initially unspecified, defined by these covariates. Current methods are typically limited to spatial and/or temporal covariate information and often fail to use all the information available in modern data, which can be paramount in unveiling these subtle changes. Additional complexities associated with modern health data that are often not accounted for by traditional methods include covariates of mixed type, missing values, and high-order interactions among covariates. This work proposes a transform of public health surveillance to supervised learning, so that an appropriate learner can inherently address all the complexities described previously. At the same time, quantitative measures from the learner can be used to define signal criteria to detect changes in rates of events. A Feature Selection (FS) method is used to identify covariates that contribute to a model and to generate a signal. A measure of statistical significance is included to control false alarms. An alternative Percentile method identifies the specific cases that lead to changes using class probability estimates from tree-based ensembles. This second method is intended to be less computationally intensive and significantly simpler to implement. Finally, a third method, labeled Rule-Based Feature Value Selection (RBFVS), is proposed for identifying the specific regions in high-dimensional space where the changes are occurring. Results on simulated examples are used to compare the FS method and the Percentile method. Note that this work emphasizes the application of the proposed methods to public health surveillance; nonetheless, these methods can easily be extended to a variety of applications where counts (or rates) of events are monitored for changes. Such problems commonly occur in domains such as manufacturing, economics, environmental systems, and engineering, as well as in public health.
ContributorsDavila, Saylisse (Author) / Runger, George C. (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Young, Dennis (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created2010
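The core reframing above, surveillance as supervised learning, can be sketched directly: label baseline-window records 0 and current-window records 1, train a tree ensemble, and signal when it separates the windows better than chance, with feature importances pointing to the covariates that define the changed region. The synthetic covariates and the injected change below are illustrative inventions.

```python
# Hedged sketch: change detection via classification. Cross-validated AUC
# near 0.5 means the windows are indistinguishable (no change); a higher
# AUC is a signal, and importances localize it to specific covariates.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(0, 90, 2 * n)
region = rng.integers(0, 5, 2 * n).astype(float)
window = np.repeat([0, 1], n)               # 0 = baseline, 1 = current

# in the current window, cases shift toward an older-age subgroup
age[window == 1] += (rng.random(n) < 0.3) * rng.uniform(10, 30, n)

X = np.column_stack([age, region])
clf = RandomForestClassifier(n_estimators=200, random_state=1)
auc = cross_val_score(clf, X, window, cv=5, scoring="roc_auc").mean()
print(f"baseline-vs-current AUC: {auc:.3f}")

clf.fit(X, window)
for name, imp in zip(["age", "region"], clf.feature_importances_):
    print(f"importance[{name}] = {imp:.3f}")
```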
Description
The findings of this project show that, through the use of principal component analysis and K-Means clustering, NBA players can be algorithmically classified into distinct clusters, each representing a player archetype. Individual player data for the 2018-2019 regular season were collected for 150 players, including regular per-game statistics, such as rebounds, assists, and field goals, and advanced statistics, such as usage percentage, win shares, and value over replacement player. The analysis was performed in the statistical programming language R using the integrated development environment RStudio. The principal component analysis was computed first to produce a set of five principal components, which explain roughly 82.20% of the total variance within the player data. These five principal components were then used as the features against which the players were clustered in the K-Means algorithm implemented in R. It was determined that eight clusters would best represent the groupings of the players, and eight clusters were created, each with a unique set of players. Each cluster was analyzed based on the players making it up, and a player archetype was established to define each cluster. The reasoning behind the player archetypes given to each cluster is explained, providing details as to why the players were clustered together and the main data features that influenced the clustering results. Apart from two of the clusters, the archetypes proved to be independent of player position. The clustering results can be expanded in the future to include a larger sample size of players, and they can be used to make inferences regarding NBA roster construction. The clustering can highlight key weaknesses in rosters and show which combinations of player archetypes lead to team success.
ContributorsElam, Mason Matthew (Author) / Armbruster, Dieter (Thesis director) / Gel, Esma (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
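The project's pipeline ran in R; the sketch below is an analogous Python version of the same two steps, standardize the stats, reduce with PCA to roughly the 82% variance level reported above, then cluster with K-Means at k = 8. The random matrix stands in for the real 150-player stat table.

```python
# Hedged sketch (Python analogue of the R pipeline): scale, PCA to ~82%
# explained variance, K-Means with 8 clusters. Placeholder data only.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
stats = rng.normal(size=(150, 20))        # stand-in player-by-stat matrix

Z = StandardScaler().fit_transform(stats)
pca = PCA(n_components=0.82)              # fewest components covering >= 82%
components = pca.fit_transform(Z)
print(f"{pca.n_components_} components explain "
      f"{pca.explained_variance_ratio_.sum():.1%} of variance")

labels = KMeans(n_clusters=8, n_init=10, random_state=3).fit_predict(components)
print(np.bincount(labels))                # size of each archetype cluster
```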
Description
In the past, Industrial Engineering/Engineering Management Capstone groups have not provided adequate documentation of their project data, results, and conclusions to both the course instructor and their project sponsors. The goal of this project is to mitigate these issues by instituting a knowledge management system with one of ASU’s cloud storage tools, OSF, and by updating course rubrics to reflect knowledge sharing best practices. This project used existing research to employ tactics that promote the long-term use of this system. In addition, data specialists from ASU Library’s Research and Data Management department were involved.
ContributorsWade, Alexis Nicole (Author) / Juarez, Joseph (Thesis director) / Gel, Esma (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created2019-12