Matching Items (84)
Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system, or full-system level. Two issues considered in this work are: 1. Information about design ideas is incomplete, informal, and sketchy. 2. Designers often work at multiple levels; different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis examines the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: 1. The typical types of information available after an ideation session. 2. The typical types of technical evaluations done in early stages. 3. How to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and each entry is expressed as some combination of a sketch, text, and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables, and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created and users were given instructions about incorporating them. To support quick analysis, differential equations are transformed into algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies show how the tool works.
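To make the evaluation strategy concrete, the following is a minimal sketch of interval-arithmetic propagation through two networked physical effects, written in Python for illustration (the thesis tool itself is in MATLAB; the example effects, variable names, and numeric ranges are invented, not taken from the thesis):

```python
# Minimal sketch: interval-arithmetic evaluation of networked physical
# effects. The effects and values below are illustrative assumptions.

class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for algebraic effects."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Two effects share the variable `torque`, so they can be networked:
# motor effect: torque = k_t * current (a simple algebraic physical effect).
k_t = Interval(0.09, 0.11)      # motor torque constant, N*m/A (uncertain)
current = Interval(1.8, 2.2)    # steady-state current, A (uncertain)
torque = k_t * current          # shared variable links downstream effects
print("torque range:", torque)  # feasibility check on the resulting range
```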
Contributors: Khorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Vehicles powered by electricity and alternative fuels are becoming a more popular form of transportation since they have less of an environmental impact than standard gasoline vehicles. Unfortunately, their success is currently inhibited by the sparseness of locations where the vehicles can refuel as well as the fact that many of the vehicles have a shorter range than gasoline-powered vehicles. These factors together create a "range anxiety" in drivers, which causes the drivers to worry about the utility of alternative-fuel and electric vehicles and makes them less likely to purchase these vehicles. For the new vehicle technologies to thrive, it is critical that range anxiety is minimized and performance is increased as much as possible through proper routing and scheduling. In the case of long-distance trips taken by individual vehicles, the routes must be chosen such that the vehicles take the shortest routes while not running out of fuel on the trip. When many vehicles are to be routed during the day and the refueling stations have limited capacity, care must be taken to avoid having too many vehicles arrive at the stations at any time. If the vehicles that will need to be routed in the future are unknown, then this problem is stochastic. For fleets of vehicles serving scheduled operations, switching to alternative fuels requires ensuring the schedules do not cause the vehicles to run out of fuel. This is especially problematic since the locations where the vehicles may refuel are limited due to the technology being new. This dissertation covers three related optimization problems: routing a single electric or alternative-fuel vehicle on a long-distance trip, routing many electric vehicles in a network where the stations have limited capacity and the arrivals into the system are stochastic, and scheduling fleets of electric or alternative-fuel vehicles with limited locations to refuel. Different algorithms are proposed to solve each of the three problems, some exact and some heuristic. The algorithms are tested on both random data and data relating to the State of Arizona.
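A minimal sketch of the single-vehicle case: if the vehicle refuels at every station it visits, feasibility reduces to restricting travel to legs within the vehicle's range, after which a standard shortest-path search applies. The station network and range value below are invented for illustration; the dissertation's actual algorithms differ.

```python
# Sketch: shortest route over refueling stations with a range constraint.
import heapq

def shortest_feasible_route(legs, source, dest, vehicle_range):
    """legs: dict mapping station -> list of (neighbor_station, distance).
    Only legs no longer than vehicle_range are usable (the vehicle refuels
    at every station it visits)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in legs.get(u, []):
            if w <= vehicle_range and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return None  # no feasible route: some leg always exceeds the range

legs = {"A": [("B", 90), ("C", 140)], "B": [("D", 80)], "C": [("D", 60)]}
print(shortest_feasible_route(legs, "A", "D", vehicle_range=100))  # 170.0 via B
```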
Contributors: Adler, Jonathan D (Author) / Mirchandani, Pitu B. (Thesis advisor) / Askin, Ronald (Committee member) / Gel, Esma (Committee member) / Xue, Guoliang (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Accelerated life testing (ALT) is the process of subjecting a product to stress conditions (temperature, voltage, pressure, etc.) in excess of its normal operating levels to accelerate failures. Product failure typically results from multiple stresses acting on it simultaneously. Multi-stress-factor ALTs are challenging as they increase the number of experiments due to the stress factor-level combinations resulting from the increased number of factors. Chapter 2 provides an approach for designing ALT plans with multiple stresses utilizing Latin hypercube designs that reduces the simulation cost without loss of statistical efficiency. A comparison to full-grid and large-sample approximation methods illustrates the approach's computational cost gain and its flexibility in determining optimal stress settings with fewer assumptions and more intuitive unit allocations.
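For concreteness, here is a minimal Latin hypercube sampler of stress-factor settings of the kind such a plan would draw from. The factor ranges are invented, and this is a generic LHD construction rather than the chapter's optimized design.

```python
# Sketch: Latin hypercube sample of two stress factors (temperature, voltage).
import numpy as np

def latin_hypercube(n_runs, bounds, rng=None):
    """One point per row; each factor's range is cut into n_runs strata and
    each stratum is used exactly once (the defining LHD property)."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    u = (rng.permuted(np.tile(np.arange(n_runs), (d, 1)), axis=1).T
         + rng.random((n_runs, d))) / n_runs          # stratified U(0,1)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

design = latin_hypercube(8, bounds=[(60.0, 120.0), (10.0, 14.0)], rng=1)
print(design)  # 8 candidate (temperature, voltage) stress settings
```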

Implicit in the design criteria of current ALT designs is the assumption that the form of the acceleration model is correct. This is an unrealistic assumption in many real-world problems. Chapter 3 provides an approach to optimum ALT design for model discrimination, utilizing the Hellinger distance measure between predictive distributions. The optimal ALT plan at three stress levels was determined and its performance was compared to a good compromise plan, the best traditional plan, and the well-known 4:2:1 compromise test plan. In the case of linear versus quadratic ALT models, the proposed method increased the test plan's ability to distinguish among competing models and provided better guidance as to which model is appropriate for the experiment.
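The discrimination criterion's core quantity has a closed form when the predictive distributions are normal. The sketch below computes that Hellinger distance; the parameter values are invented, and the normality assumption is ours, not necessarily the chapter's.

```python
# Sketch: Hellinger distance between two normal predictive distributions.
import math

def hellinger_normal(mu1, s1, mu2, s2):
    """Hellinger distance H between N(mu1, s1^2) and N(mu2, s2^2)."""
    h2 = 1.0 - math.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2)) * \
         math.exp(-((mu1 - mu2) ** 2) / (4.0 * (s1**2 + s2**2)))
    return math.sqrt(h2)

# Predicted log-life at a candidate stress level under the linear vs. the
# quadratic acceleration model: a large distance means the point discriminates.
print(hellinger_normal(mu1=4.2, s1=0.30, mu2=4.6, s2=0.35))
```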

Chapter 4 extends the approach of Chapter 3 to sequential ALT model discrimination. An initial experiment is conducted to provide the maximum possible information with respect to model discrimination. The follow-on experiment is planned by leveraging the most current information to allow for Bayesian model comparison through posterior model probability ratios. Results showed that the plan's performance is adversely impacted by the amount of censoring in the data. In the case of linear versus quadratic model forms at three levels of constant stress, sequential testing can improve the model recovery rate by approximately 8% when data are complete, but no apparent advantage in adopting sequential testing was found for right-censored data once censoring exceeds a certain amount.
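A hedged sketch of the model-comparison step: posterior model probability ratios, approximated here with BIC under equal prior model probabilities. This is a stand-in for the chapter's Bayesian computation, and the log-likelihood values are fabricated.

```python
# Sketch: posterior odds of two models via the BIC approximation (an
# assumption; the dissertation's exact Bayesian computation may differ).
import math

def bic(loglik, n_params, n_obs):
    return -2.0 * loglik + n_params * math.log(n_obs)

def posterior_odds(loglik_a, k_a, loglik_b, k_b, n_obs):
    """Approximate P(model A | data) / P(model B | data), equal priors."""
    return math.exp(-0.5 * (bic(loglik_a, k_a, n_obs) - bic(loglik_b, k_b, n_obs)))

# Linear (3 params) vs. quadratic (4 params) acceleration model, n = 60 units:
print(posterior_odds(loglik_a=-101.2, k_a=3, loglik_b=-99.8, k_b=4, n_obs=60))
```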
Contributors: Nasir, Ehab (Author) / Pan, Rong (Thesis advisor) / Runger, George C. (Committee member) / Gel, Esma (Committee member) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In recent years, service oriented computing (SOC) has become a widely accepted paradigm for the development of distributed applications such as web services, grid computing, and cloud computing systems. In service-based systems (SBS), multiple service requests with specific performance requirements make services compete for system resources. IT service providers need to allocate resources to services so the performance requirements of customers can be satisfied. Workload and performance models are required for efficient resource management and service performance assurance in SBS. This dissertation develops two methods to understand and model the cause-effect relations of service-related activities with resource workload and service performance. Part one presents an empirical method that requires the collection of system dynamics data and the application of statistical analyses. The results show that the method can: 1) uncover the impacts of services on resource workload and service performance, 2) identify interaction effects of multiple services running concurrently, 3) gain insights about resource and performance tradeoffs of services, and 4) build service workload and performance models. In part two, the empirical method is used to investigate the impacts of services, security mechanisms, and cyber attacks on resource workload and service performance. The information obtained is used to: 1) uncover interaction effects of services, security mechanisms, and cyber attacks, 2) identify tradeoffs within limits of system resources, and 3) develop general and specific strategies for system survivability. Finally, part three presents a framework based on the usage profiles of services competing for resources and the resource-sharing schemes. The framework is used to: 1) uncover the impacts of service parameters (e.g., arrival distribution, execution time distribution, priority, workload intensity, and scheduling algorithm) on workload and performance, and 2) build service workload and performance models at individual resources. The estimates obtained from service workload and performance models at individual resources can be aggregated to obtain overall estimates of services through multiple system resources. The workload and performance models of services obtained through both methods can be used for efficient resource management and service performance assurance in SBS.
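As a minimal illustration of a workload/performance model at a single resource, the sketch below uses M/M/1 formulas as a stand-in; the dissertation's framework covers general arrival and execution-time distributions, priorities, and scheduling algorithms, which this toy model does not.

```python
# Sketch: single-resource service performance model with M/M/1 assumptions.
def mm1_metrics(arrival_rate, service_rate):
    """Utilization, mean number in system, and mean response time."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("resource is overloaded; no steady state")
    L = rho / (1.0 - rho)      # mean requests at the resource
    W = L / arrival_rate       # mean response time (Little's law)
    return rho, L, W

# A service issuing 40 req/s to a CPU that completes 50 req/s:
rho, L, W = mm1_metrics(40.0, 50.0)
print(f"utilization={rho:.0%}, in-system={L:.1f}, response={W*1000:.1f} ms")
```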
Contributors: Martinez Aranda, Billibaldo (Author) / Ye, Nong (Thesis advisor) / Wu, Tong (Committee member) / Sarjoughian, Hessam S. (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
For more than twenty years, clinical researchers have been publishing data regarding incidence and risk of adverse events (AEs) incurred during hospitalizations. Hospitals have standard operating policies and procedures (SOPP) to protect patients from AEs. The AE specifics (rates, SOPP failures, timing, and risk factors) during heart failure (HF) hospitalizations are unknown. There were 1,722 patients discharged with a primary diagnosis of HF from an academic hospital between January 2005 and December 2007. Three hundred eighty-one patients experienced 566 AEs, classified into four categories: medication (43.9%), infection (18.9%), patient care (26.3%), or procedural (10.9%). Three distinct analyses were performed: 1) the patient's perspective of SOPP reliability, including cumulative distribution and hazard functions of time to AEs; 2) a Cox proportional hazards model to determine independent patient-specific risk factors for AEs; and 3) the hospital administration's perspective of SOPP reliability across the three years of the study, including cumulative distribution and hazard functions of time between AEs and moving range statistical process control (SPC) charts for days between failures of each type. This is the first study, to our knowledge, to consider reliability of SOPP from both the patient's and the hospital administration's perspective. AE rates in hospitalized patients are similar to other recently published reports and did not improve during the study period. Operations research methodologies will be necessary to improve the reliability of care delivered to hospitalized patients.
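The moving range SPC charts mentioned above can be sketched as follows; the days-between-events data are fabricated, and the constants are the standard individuals/moving-range chart factors.

```python
# Sketch: moving-range chart on days between adverse events of one type.
import numpy as np

days_between = np.array([3, 7, 2, 9, 4, 6, 12, 5, 8, 3], dtype=float)

mr = np.abs(np.diff(days_between))            # moving ranges of consecutive gaps
mr_bar = mr.mean()
ucl_mr = 3.267 * mr_bar                       # D4 = 3.267 for a span of 2
ucl_x = days_between.mean() + 2.66 * mr_bar   # individuals-chart limits
lcl_x = max(days_between.mean() - 2.66 * mr_bar, 0.0)

print(f"MR-bar={mr_bar:.2f}, MR UCL={ucl_mr:.2f}")
print(f"individuals limits: [{lcl_x:.2f}, {ucl_x:.2f}]")
# A gap above ucl_x signals an unusually long AE-free run (improvement);
# a run of short gaps signals SOPP reliability degradation.
```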
Contributors: Huddleston, Jeanne (Author) / Fowler, John (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Gel, Esma (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Every year, more than 11 million maritime containers and 11 million commercial trucks arrive in the United States, carrying all types of imported goods. As it would be costly to inspect every container, only a fraction of them are inspected before being allowed to proceed into the United States. This dissertation proposes a decision support system that aims to allocate the scarce inspection resources at a land port of entry (L-POE) to minimize the different costs associated with the inspection process, including those associated with delaying the entry of legitimate imports. Given the ubiquity of sensors in all aspects of the supply chain, it is necessary to have automated decision systems that incorporate the information provided by these sensors and other possible channels into the inspection planning process. The inspection planning system proposed in this dissertation decomposes the inspection effort allocation process into two phases: primary and detailed inspection planning. The former helps decide what to inspect, and the latter how to conduct the inspections. A multi-objective optimization (MOO) model is developed for primary inspection planning. This model tries to balance the costs of conducting inspections, direct and expected, and the waiting time of the trucks. The resulting model is exploited in two different ways: one is to construct a complete or partial efficient frontier for the MOO model with the diversity of Pareto-optimal solutions maximized; the other is to evaluate a given inspection plan and provide possible suggestions for improvement. The methodologies are described in detail and case studies provided. The case studies show that this MOO-based primary planning model can effectively pick out the non-conforming trucks to inspect while balancing the costs and waiting time.
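A small sketch of the trade-off at the heart of the primary planning model: filtering candidate inspection plans to the Pareto-efficient set over inspection cost and expected truck waiting time. The candidate plans are invented, and the dissertation's MOO model is far richer than this two-objective filter.

```python
# Sketch: Pareto-efficient set of candidate inspection plans.
def pareto_front(plans):
    """plans: list of (cost, wait) tuples; both objectives are minimized."""
    front = []
    for p in plans:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in plans):
            front.append(p)
    return sorted(front)

candidates = [(10.0, 42.0), (12.0, 30.0), (15.0, 31.0), (18.0, 22.0), (11.0, 42.0)]
print(pareto_front(candidates))  # -> [(10.0, 42.0), (12.0, 30.0), (18.0, 22.0)]
```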
Contributors: Xue, Liangjie (Author) / Villalobos, Jesus René (Thesis advisor) / Gel, Esma (Committee member) / Runger, George C. (Committee member) / Maltz, Arnold (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Access control is one of the most fundamental security mechanisms used in the design and management of modern information systems. However, there still exists an open question of how formal access control models can be automatically analyzed and fully realized in secure system development. Furthermore, specifying and managing access control policies are often error-prone due to the lack of effective analysis mechanisms and tools. In this dissertation, I present an Assurance Management Framework (AMF) that is designed to cope with various assurance management requirements from both access control system development and policy-based computing. On one hand, the AMF framework facilitates comprehensive analysis and thorough realization of formal access control models in secure system development. I demonstrate how this method can be applied to build role-based access control systems by adopting the NIST/ANSI RBAC standard as an underlying security model. On the other hand, the AMF framework ensures the correctness of access control policies in policy-based computing through automated reasoning techniques and anomaly management mechanisms. A systematic method is presented to formulate XACML in Answer Set Programming (ASP), which allows users to leverage off-the-shelf ASP solvers for a variety of analysis services. In addition, I introduce a novel anomaly management mechanism, along with a grid-based visualization approach, which enables systematic and effective detection and resolution of policy anomalies. I further evaluate the AMF framework through modeling and analyzing multiparty access control in Online Social Networks (OSNs). A MultiParty Access Control (MPAC) model is formulated to capture the essence of multiparty authorization requirements in OSNs. In particular, I show how AMF can be applied to OSNs for identifying and resolving privacy conflicts, and for representing and reasoning about the MPAC model and policies. To demonstrate the feasibility of the proposed methodology, a suite of proof-of-concept prototype systems is implemented as well.
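One of the analysis services can be illustrated with a toy conflict detector: flag pairs of rules whose conditions overlap but whose effects differ. The rule structure below is a simplification invented for illustration, not the dissertation's XACML-to-ASP encoding.

```python
# Sketch: detect conflict anomalies between overlapping access-control rules.
def overlaps(rule_a, rule_b):
    """Rules overlap if every attribute they share allows a common value."""
    shared = set(rule_a["match"]) & set(rule_b["match"])
    return all(rule_a["match"][k] & rule_b["match"][k] for k in shared)

def find_conflicts(rules):
    return [(a["id"], b["id"])
            for i, a in enumerate(rules) for b in rules[i + 1:]
            if overlaps(a, b) and a["effect"] != b["effect"]]

rules = [
    {"id": "r1", "effect": "permit",
     "match": {"role": {"nurse", "doctor"}, "action": {"read"}}},
    {"id": "r2", "effect": "deny",
     "match": {"role": {"nurse"}, "action": {"read", "write"}}},
]
print(find_conflicts(rules))  # [('r1', 'r2')]: both match (nurse, read)
```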
Contributors: Hu, Hongxin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Dasgupta, Partha (Committee member) / Ye, Nong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The reliability assessment of future distribution networks is an important issue in power engineering for both utilities and customers. This is due to the increasing demand for more reliable service with lower interruption frequency and duration. This research consists of two main parts related to the evaluation of future distribution system reliability. First, an innovative algorithm named the encoded Markov cut set (EMCS) is proposed to evaluate the reliability of the networked power distribution system. The proposed algorithm is based on the identification of circuit minimal tie sets using the concept of Petri nets. Prime number encoding and unique prime factorization are then utilized to add more flexibility in communicating between the system states, and to classify the states as tie sets, cut sets, or minimal cut sets. Different reduction and truncation techniques are proposed to reduce the size of the state space. The Markov model is used to compute the availability, mean time to failure, and failure frequency of the network. A well-known test bed, the Roy Billinton Test System (RBTS), is used to illustrate the analysis, and different load and system reliability indices are calculated. The method shown is algorithmic and appears suitable for off-line comparison of alternative secondary distribution system designs on the basis of their reliability. The second part assesses the impact of conventional and renewable distributed generation (DG) on the reliability of the future distribution system. This takes into account the variability of the power output of renewable DG, such as wind and solar DGs, and the chronological nature of the load demand. The stochastic nature of the renewable resources and its influence on the reliability of the system are modeled and studied by computing the adequacy transition rate. Then, an integrated Markov model that incorporates the DG adequacy transition rate, DG mechanical failure, and starting and switching probability is proposed and utilized to give accurate results for the DG reliability impact. The main focus in this research is on conventional, solar, and wind DG units; however, the technique used appears to be applicable to any renewable energy source.
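The Markov step can be illustrated on the smallest possible case, a single repairable component: solve pi Q = 0 for the steady-state distribution, then read off availability, MTTF, and failure frequency. The failure and repair rates below are illustrative values, not RBTS data.

```python
# Sketch: steady-state reliability indices of one repairable component.
import numpy as np

lam, mu = 0.01, 0.5          # failure rate, repair rate (per hour; invented)
Q = np.array([[-lam,  lam],  # state 0 = up, state 1 = down
              [  mu,  -mu]])

# Solve pi Q = 0 with sum(pi) = 1 for the steady-state distribution.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"availability = {pi[0]:.6f}")   # equals mu / (lam + mu)
print(f"MTTF = {1.0 / lam:.1f} hr, failure frequency = {pi[0] * lam:.4f}/hr")
```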
Contributors: Almuhaini, Mohammad (Author) / Heydt, Gerald (Thesis advisor) / Ayyanar, Raja (Committee member) / Gel, Esma (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science, and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning of the relevant patterns. This dissertation proposes TS representations and methods for supervised TS analysis. The approaches combine new representations that handle translations and dilations of patterns with bag-of-features strategies and tree-based ensemble learning. This provides flexibility in handling time-warped patterns in a computationally efficient way. The ensemble learners provide a classification framework that can handle high-dimensional feature spaces, multiple classes, and interactions between features. The proposed representations are useful for classification and interpretation of TS data of varying complexity. The first contribution handles the problem of time warping with a feature-based approach. An interval selection and local feature extraction strategy is proposed to learn a bag-of-features representation. This is distinctly different from common similarity-based time warping, and it allows additional features (such as pattern location) to be easily integrated into the models. The learners have the capability to account for the temporal information through the recursive partitioning method. The second contribution focuses on the comprehensibility of the models. A new representation is integrated with local feature importance measures from tree-based ensembles to diagnose and interpret time intervals that are important to the model. Multivariate time series (MTS) are especially challenging because the input consists of a collection of TS, and both features within TS and interactions between TS can be important to models. Another contribution uses a different representation to produce computationally efficient strategies that learn a symbolic representation for MTS. Relationships between the multiple TS, nominal values, and missing values are handled with tree-based learners. Applications such as speech recognition, medical diagnosis, and gesture recognition are used to illustrate the methods. Experimental results show that the TS representations and methods provide better results than competitive methods on a comprehensive collection of benchmark datasets. Moreover, the proposed approaches naturally provide solutions to similarity analysis, predictive pattern discovery, and feature selection.
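A hedged sketch of the interval bag-of-features idea: summarize fixed random intervals of each series by mean, slope, variance, and location, then train a tree ensemble on those features. The data are synthetic, and the pipeline is a simplification of the dissertation's methods (no codebook stage, for instance).

```python
# Sketch: interval features + random forest for time-series classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_intervals(series_len, n_intervals=10, min_len=8):
    """Sample the intervals once, so each feature column means the same
    thing for every series."""
    ivals = []
    for _ in range(n_intervals):
        start = int(rng.integers(0, series_len - min_len))
        end = int(rng.integers(start + min_len, series_len + 1))
        ivals.append((start, end))
    return ivals

def interval_features(series, ivals):
    feats = []
    for start, end in ivals:
        seg = series[start:end]
        slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
        feats += [seg.mean(), slope, seg.var(), start]  # location is a feature
    return feats

# Two synthetic classes: flat noise vs. noise with a mid-series bump.
bump = np.r_[np.zeros(40), np.ones(20), np.zeros(40)]
series = [rng.normal(size=100) + (bump if label else 0.0)
          for label in (0, 1) * 50]
labels = np.array((0, 1) * 50)

ivals = make_intervals(100)
X = np.array([interval_features(s, ivals) for s in series])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```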
Contributors: Baydogan, Mustafa Gokce (Author) / Runger, George C. (Thesis advisor) / Atkinson, Robert (Committee member) / Gel, Esma (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This dissertation presents methods for addressing research problems that currently can only be adequately solved using Quality Reliability Engineering (QRE) approaches, especially accelerated life testing (ALT) of electronic printed wiring boards, with applications to avionics circuit boards. The methods presented in this research are generally applicable to circuit boards, but the data generated and their analysis are for high-performance avionics. Aircraft equipment manufacturers typically require an expected life of 20 years for avionics equipment, and therefore ALT is the only practical way of performing life test estimates. Both thermal and vibration ALT-induced failures are generated and analyzed to resolve industry questions relating to the introduction of lead-free solder products and processes into high-reliability avionics. In Chapter 2, thermal ALT using an industry-standard failure machine implementing the Interconnect Stress Test (IST), which simulates circuit board life data, is compared to real production failure data by likelihood ratio tests to arrive at a mechanical theory. This mechanical theory results in a statistically equivalent energy bound such that failure distributions below a specific energy level are considered to be from the same distribution, thus allowing testers to quantify parameter settings in IST prior to life testing. In Chapter 3, vibration ALT comparing tin-lead and lead-free circuit board solder designs involves the use of the likelihood ratio (LR) test to assess both complete failure data and S-N curves, presenting methods for analyzing the data. Failure data are analyzed using regression and two-way analysis of variance (ANOVA) and reconciled with the LR test results, indicating that a costly aging pre-process may be eliminated in certain cases. In Chapter 4, side-by-side tin-lead and lead-free solder black-box designs are life tested under vibration ALT. Commercial models based on strain data do not exist at the low levels associated with life testing and need to be developed, because the testing performed and presented here indicates that tin-lead and lead-free solders behave similarly. In addition, earlier vibration-induced failures, such as connector failure modes, will occur before solder interconnect failures.
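The recurring tool here, the likelihood ratio test, can be sketched as a test of whether two failure-time samples can be pooled into one Weibull population. The data below are simulated, and the dissertation's censored-data likelihoods are more involved than this complete-data version.

```python
# Sketch: LR test of pooling two failure-time samples under a Weibull model.
import numpy as np
from scipy import stats

def weibull_loglik(data):
    shape, loc, scale = stats.weibull_min.fit(data, floc=0)  # 2-parameter fit
    return np.sum(stats.weibull_min.logpdf(data, shape, loc, scale))

rng = np.random.default_rng(2)
sample_a = stats.weibull_min.rvs(2.0, scale=1000, size=30, random_state=rng)
sample_b = stats.weibull_min.rvs(2.0, scale=1300, size=30, random_state=rng)

lr = 2.0 * (weibull_loglik(sample_a) + weibull_loglik(sample_b)
            - weibull_loglik(np.concatenate([sample_a, sample_b])))
p_value = stats.chi2.sf(lr, df=2)        # 2 extra parameters in the split fit
print(f"LR = {lr:.2f}, p = {p_value:.4f}")  # small p: do not pool the samples
```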
Contributors: Juarez, Joseph Moses (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie M. (Thesis advisor) / Gel, Esma (Committee member) / Mignolet, Marc (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created: 2012