Matching Items (82)
Description
The main objective of this research is to develop an integrated method to study emergent behavior and consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach including behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system-of-systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, the ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and by offering new optimized actions and modeling paradigms in agent-based simulation.
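As an illustration of the kind of platform described above, the following minimal sketch couples a toy consumer-utility rule with an agent-based diffusion loop on a small social network. The network, adoption threshold, and price path are hypothetical stand-ins, not the dissertation's DOE/FERC-based consumer decision model.

```python
import random

# Minimal illustrative sketch of an agent-based diffusion loop on a social
# network. The network, adoption threshold, and price path are hypothetical
# stand-ins, not the dissertation's DOE/FERC-based consumer decision model.
random.seed(1)

N_AGENTS = 50
neighbors = {i: random.sample([j for j in range(N_AGENTS) if j != i], 4)
             for i in range(N_AGENTS)}                          # toy social network
adopted = {i: random.random() < 0.05 for i in range(N_AGENTS)}  # seed adopters

def utility(price, peer_share):
    """Toy consumer utility: cheaper prices and more adopting peers raise it."""
    return 0.7 * peer_share + 0.3 * (1.0 - price)

for step in range(20):
    price = 0.8 - 0.02 * step                  # illustrative declining price path
    for agent in range(N_AGENTS):
        if adopted[agent]:
            continue
        peers = neighbors[agent]
        peer_share = sum(adopted[p] for p in peers) / len(peers)
        if utility(price, peer_share) > 0.2:   # hypothetical adoption threshold
            adopted[agent] = True
    print(step, sum(adopted.values()))
```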
Contributors: Haghnevis, Moeed (Author) / Askin, Ronald G. (Thesis advisor) / Armbruster, Dieter (Thesis advisor) / Mirchandani, Pitu (Committee member) / Wu, Tong (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Ionizing radiation used in patient diagnosis or therapy has negative short-term and long-term effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information on the amount of organ dose received is available to the patient or to Quality Assurance. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we developed a mathematical model which determines a set of geometry settings for the equipment and a level for the energy during a patient exam. The goal is to minimize the amount of absorbed dose in the critical organs while maintaining the image quality required for the diagnosis. The model is a large-scale mixed integer program. We performed polyhedral analysis and derived several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize angle and table location while minimizing dose in the critical organs with respect to the image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and the corresponding X-ray tube energy. In the computational experiments, results show a further reduction (up to 80%) of the absorbed dose in comparison with the previous method. Last, there are uncertainties in the medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for organ motions within a radiology procedure. We minimize the absorbed dose for the critical organs across all input data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
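A minimal sketch of the binary selection structure used in the first part might look like the following, assuming the PuLP library is available. The candidate settings, dose coefficients, and image-quality scores are made-up illustrative values, and the full geometry constraints and polyhedral cuts are omitted.

```python
import pulp

# Toy sketch of selecting one beam angle/energy setting to minimize critical-organ
# dose subject to a minimum image-quality requirement. Coefficients are
# illustrative, not clinical data; the dissertation's full geometry model and
# polyhedral cuts are not reproduced here.
settings = ["a0_low", "a0_high", "a30_low", "a30_high"]
organ_dose = {"a0_low": 1.0, "a0_high": 2.5, "a30_low": 0.4, "a30_high": 1.1}
image_quality = {"a0_low": 0.55, "a0_high": 0.9, "a30_low": 0.5, "a30_high": 0.8}
MIN_QUALITY = 0.75  # hypothetical diagnostic threshold

model = pulp.LpProblem("dose_minimization", pulp.LpMinimize)
x = pulp.LpVariable.dicts("use", settings, cat="Binary")

model += pulp.lpSum(organ_dose[s] * x[s] for s in settings)           # minimize dose
model += pulp.lpSum(x[s] for s in settings) == 1                      # pick one setting
model += pulp.lpSum(image_quality[s] * x[s] for s in settings) >= MIN_QUALITY

model.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [s for s in settings if x[s].value() > 0.5]
print("chosen setting:", chosen, "dose:", pulp.value(model.objective))
```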
Contributors: Khodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Facility location models are usually employed to assist decision processes in urban and regional planning. The focus of this research is extensions of a classic location problem, the Weber problem, to address continuously distributed demand as well as multiple facilities. Addressing continuous demand and multiple facilities represents a major challenge. Given advances in geographic information systems (GIS), computational science and associated technologies, spatial optimization provides a possibility for improved problem solution. Essential here is how to represent facilities and demand in geographic space. In one respect, spatial abstraction as discrete points is generally assumed as it simplifies model formulation and reduces computational complexity. However, errors in derived solutions are likely not negligible, especially when demand varies continuously across a region. In another respect, although mathematical functions describing continuous distributions can be employed, such theoretical surfaces are generally approximated in practice using finite spatial samples due to a lack of complete information. To this end, the dissertation first investigates the implications of continuous surface approximation and explicitly shows errors in solutions obtained from fitted demand surfaces through empirical applications. The dissertation then presents a method to improve the spatial representation of continuous demand. This is based on infill asymptotic theory, which indicates that errors in fitted surfaces tend to zero as the number of sample points increases to infinity. The implication for facility location modeling is that a solution to the discrete problem with greater demand point density will approach the theoretical optimum for the continuous counterpart. Therefore, in this research discrete points are used to represent continuous demand to explore this theoretical convergence, which is less restrictive and less problem altering compared to existing alternatives. The proposed continuous representation method is further extended to develop heuristics to solve the continuous Weber and multi-Weber problems, where one or more facilities can be sited anywhere in continuous space to best serve continuously distributed demand. Two spatial optimization approaches are proposed for the two extensions of the Weber problem, respectively. A special characteristic of these approaches is that they integrate optimization techniques and GIS functionality. Empirical results highlight the advantages of the developed approaches and the importance of solution integration within GIS.
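The convergence argument relies on solving discrete-demand Weber instances at increasing sample density. One standard solver for the single-facility discrete instance is Weiszfeld's iterative algorithm, sketched below on randomly generated demand points; the dissertation's GIS-integrated heuristics are not reproduced here.

```python
import numpy as np

# Weiszfeld's algorithm for the single-facility Weber problem with discrete
# weighted demand points. Demand points and weights below are illustrative;
# increasing their density is what the infill-asymptotics argument relies on.
rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(500, 2))   # sampled demand locations
weights = rng.uniform(1, 5, size=500)         # demand intensity at each sample

x = np.average(points, axis=0, weights=weights)   # start at weighted centroid
for _ in range(200):
    d = np.linalg.norm(points - x, axis=1)
    d = np.maximum(d, 1e-9)                   # guard against division by zero
    w = weights / d
    x_new = (w[:, None] * points).sum(axis=0) / w.sum()
    if np.linalg.norm(x_new - x) < 1e-7:
        break
    x = x_new

print("approximate Weber point:", x)
```

Re-running such a sketch with progressively denser samples illustrates the convergence toward the continuous optimum that infill asymptotic theory predicts.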
Contributors: Yao, Jing (Author) / Murray, Alan T. (Thesis advisor) / Mirchandani, Pitu B. (Committee member) / Kuby, Michael J (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Vehicles powered by electricity and alternative fuels are becoming a more popular form of transportation since they have less of an environmental impact than standard gasoline vehicles. Unfortunately, their success is currently inhibited by the sparseness of locations where the vehicles can refuel as well as the fact that many of the vehicles have a range that is less than that of gasoline-powered vehicles. These factors together create a "range anxiety" in drivers, which causes drivers to worry about the utility of alternative-fuel and electric vehicles and makes them less likely to purchase these vehicles. For the new vehicle technologies to thrive, it is critical that range anxiety is minimized and performance is increased as much as possible through proper routing and scheduling. In the case of long distance trips taken by individual vehicles, the routes must be chosen such that the vehicles take the shortest routes while not running out of fuel on the trip. When many vehicles are to be routed during the day, if the refueling stations have limited capacity, then care must be taken to avoid having too many vehicles arrive at the stations at any time. If the vehicles that will need to be routed in the future are unknown, then this problem is stochastic. For fleets of vehicles serving scheduled operations, switching to alternative fuels requires ensuring the schedules do not cause the vehicles to run out of fuel. This is especially problematic since the locations where the vehicles may refuel are limited because the technology is new. This dissertation covers three related optimization problems: routing a single electric or alternative-fuel vehicle on a long distance trip, routing many electric vehicles in a network where the stations have limited capacity and the arrivals into the system are stochastic, and scheduling fleets of electric or alternative-fuel vehicles with limited locations to refuel. Different algorithms are proposed to solve each of the three problems, of which some are exact and some are heuristic. The algorithms are tested on both random data and data relating to the State of Arizona.
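For the single-vehicle case, the problem can be viewed as a shortest-path search whose state also tracks remaining range. The sketch below is a minimal label-setting (Dijkstra-style) search over (node, remaining range) states on a toy network with hypothetical refueling stations; it illustrates the idea rather than the dissertation's exact algorithms.

```python
import heapq

# Toy network: edges as (neighbor, distance). Stations refill range to full.
# All data are invented; the dissertation's algorithms are not reproduced here.
edges = {
    "A": [("B", 80), ("C", 120)],
    "B": [("D", 90)],
    "C": [("D", 60)],
    "D": [],
}
stations = {"B", "C"}
RANGE = 150  # hypothetical full driving range in distance units

def shortest_feasible_path(src, dst):
    # State: (total distance traveled, node, remaining range).
    heap = [(0, src, RANGE)]
    best = {}
    while heap:
        dist, node, fuel = heapq.heappop(heap)
        if node == dst:
            return dist
        if best.get((node, fuel), float("inf")) <= dist:
            continue
        best[(node, fuel)] = dist
        for nxt, d in edges[node]:
            if d <= fuel:                        # only traverse reachable edges
                new_fuel = RANGE if nxt in stations else fuel - d
                heapq.heappush(heap, (dist + d, nxt, new_fuel))
    return None                                  # no feasible route

print(shortest_feasible_path("A", "D"))
```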
Contributors: Adler, Jonathan D (Author) / Mirchandani, Pitu B. (Thesis advisor) / Askin, Ronald (Committee member) / Gel, Esma (Committee member) / Xue, Guoliang (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Quantum resilience is a pragmatic theory that allows systems engineers to formally characterize the resilience of systems. As a generalized theory, it not only clarifies resilience in the literature, but also can be applied to all disciplines and domains of discourse. Operationalizing resilience in this manner permits decision-makers to compare and contrast system deployment options for suitability in a variety of environments and allows for consistent treatment of resilience across domains. Systems engineers, whether planning future infrastructures or managing ecosystems, are increasingly asked to deliver resilient systems. Quantum resilience provides a way forward that allows specific resilience requirements to be specified, validated, and verified.

Quantum resilience makes two very important claims. First, resilience cannot be characterized without recognizing both the system and the valued function it provides. Second, resilience is not about disturbances, insults, threats, or perturbations. To avoid crippling infinities, characterization of resilience must be possible without specific disturbances in mind. In light of this, quantum resilience defines resilience as the extent to which a system delivers its valued functions, and characterizes resilience as a function of system productivity and complexity. System productivity vis-à-vis specified “valued functions” involves (1) the quanta of the valued function delivered, and (2) the number of systems (within the greater system) which deliver it. System complexity is defined structurally and relationally and is a function of a variety of items, including (1) system-of-systems hierarchical decomposition, (2) interfaces and connections between systems, and (3) inter-system dependencies.

Among the important features of quantum resilience is that it can be implemented in any systems engineering tool that provides sufficient design and specification rigor (i.e., one that supports standards like the Lifecycle and Systems Modeling languages and frameworks like the DoD Architecture Framework). Further, this can be accomplished with minimal software development and has been demonstrated in three model-based systems engineering tools, two of which are commercially available, well-respected, and widely used. This pragmatic approach assures transparency and consistency in the characterization of resilience in any discipline.
Contributors: Roberts, Thomas Wade (Author) / Allenby, Braden (Thesis advisor) / Chester, Mikhail (Committee member) / Anderies, John M (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Comparative life cycle assessment (LCA) evaluates the relative performance of multiple products, services, or technologies with the purpose of selecting the least impactful alternative. Nevertheless, characterized results are seldom conclusive. When one alternative performs best in some aspects, it may also perform worse in others. These tradeoffs among different impact categories make it difficult to identify environmentally preferable alternatives. To help reconcile this dilemma, LCA analysts have the option to apply normalization and weighting to generate comparisons based upon a single score. However, these approaches can be misleading because they suffer from problems of reference dataset incompletion, linear and fully compensatory aggregation, masking of salient tradeoffs, weight insensitivity, and difficulties incorporating uncertainty in performance assessment and weights. Consequently, most LCA studies truncate impact assessment at characterization, which leaves decision-makers to confront highly uncertain multi-criteria problems without the aid of analytic guideposts. This study introduces Stochastic Multi-attribute Analysis (SMAA), a novel approach to the normalization and weighting of characterized life-cycle inventory data for use in comparative LCA. The proposed method avoids the bias introduced by external normalization references and is capable of exploring high uncertainty in both the input parameters and the weights.
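In essence, SMAA replaces a fixed external reference and fixed weights with Monte Carlo sampling over weight and performance distributions, summarized as rank-acceptability indices. The following is a minimal sketch of that computation using made-up characterized impact scores for three alternatives, not the study's case data.

```python
import numpy as np

# Minimal SMAA-style sketch: sample weights uniformly from the simplex and
# performance scores from assumed distributions, then tally how often each
# alternative ranks first. Impact data are illustrative, not case-study values.
rng = np.random.default_rng(42)

# Mean characterized impacts (rows: alternatives, columns: impact categories);
# lower is better. A 10% relative standard deviation is assumed for uncertainty.
means = np.array([[3.2, 40.0, 0.11],
                  [2.6, 55.0, 0.09],
                  [3.0, 45.0, 0.14]])
sds = 0.10 * means

n_runs = 10000
n_alt, n_crit = means.shape
first_place = np.zeros(n_alt)

for _ in range(n_runs):
    weights = rng.dirichlet(np.ones(n_crit))      # random weights on the simplex
    scores = rng.normal(means, sds)               # sampled performances
    # Within-run min-max scaling so categories are comparable without an
    # external normalization reference.
    spread = scores.max(axis=0) - scores.min(axis=0)
    norm = (scores - scores.min(axis=0)) / (spread + 1e-12)
    total = norm @ weights                        # lower aggregate = better
    first_place[total.argmin()] += 1

print("rank-1 acceptability:", first_place / n_runs)
```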
Contributors: Prado, Valentina (Author) / Seager, Thomas P (Thesis advisor) / Chester, Mikhail V (Committee member) / Kullapa Soratana (Committee member) / Tervonen, Tommi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This research develops heuristics for scheduling electric power production amid uncertainty. Reliability is becoming more difficult to manage due to growing uncertainty from renewable resources. This challenge is compounded by the risk of resource outages, which can occur any time and without warning. Stochastic optimization is a promising tool but remains computationally intractable for large systems. The models used in industry instead schedule for the forecast and withhold generation reserve for scenario response, but they are blind to how this reserve may be constrained by network congestion. This dissertation investigates more effective heuristics to improve economics and reliability in power systems where congestion is a concern.

Two general approaches are developed. Both approximate the effects of recourse decisions without actually solving a stochastic model. The first approach procures more reserve whenever approximate recourse policies stress the transmission network. The second approach procures reserve at prime locations by generalizing the existing practice of reserve disqualification. The latter approach is applied for feasibility and is later extended to limit scenario costs. Testing demonstrates expected cost improvements of around 0.5%-1.0% for the IEEE 73-bus test case, which can translate to millions of dollars per year even for modest systems. The heuristics developed in this dissertation perform somewhere between established deterministic and stochastic models, providing an economic benefit over current practices without substantially increasing computational times.
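The congestion concern can be illustrated with a toy two-zone deliverability check: reserve may exist system-wide yet be undeliverable because of a tie-line limit. The sketch below uses invented capacities and loads and is not the dissertation's reserve-disqualification model.

```python
# Toy two-zone check of whether procured reserve is deliverable under each
# single-generator outage, given a tie-line limit. All numbers are invented;
# this is not the dissertation's reserve-disqualification model.
zones = {"A": {"load": 250.0}, "B": {"load": 150.0}}
units = {  # zone, scheduled output (MW), spinning reserve held (MW)
    "g1": {"zone": "A", "output": 150.0, "reserve": 50.0},
    "g2": {"zone": "A", "output": 100.0, "reserve": 120.0},
    "g3": {"zone": "B", "output": 150.0, "reserve": 120.0},
}
TIE_LIMIT = 60.0  # hypothetical maximum transfer between the zones (MW)

def scenario_ok(outaged):
    """Can remaining capacity cover each zone's load, importing at most TIE_LIMIT?"""
    surplus = {}
    for zone in zones:
        cap = sum(u["output"] + u["reserve"] for name, u in units.items()
                  if u["zone"] == zone and name != outaged)
        surplus[zone] = cap - zones[zone]["load"]
    if sum(surplus.values()) < 0:
        return False                 # not enough capacity system-wide
    # Any zonal deficit must fit across the limited tie line.
    return all(-s <= TIE_LIMIT for s in surplus.values())

for g in units:
    print(g, "outage covered:", scenario_ok(g))
```

In this toy case the loss of g3 cannot be covered even though ample reserve exists in zone A, which is exactly the congestion-blind gap in current practice that the proposed heuristics target.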
Contributors: Lyon, Joshua Daniel (Author) / Zhang, Muhong (Thesis advisor) / Hedman, Kory W (Thesis advisor) / Askin, Ronald G. (Committee member) / Mirchandani, Pitu (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Services outsourcing is a prevalent yet problematic phenomenon. On the one hand, more and more firms are outsourcing their services functions. On the other hand, we are faced with many services outsourcing failures. This research attempts to uncover some of the omitted causes of services outsourcing failure. It extends a conceptual paper that used social network theory to examine the shifting of the triadic relationship structures among the service buyer, the service supplier, and the buyer's customers at different stages of services outsourcing arrangements and its performance implications. This study empirically examines these performance implications. Specifically, this research defines the concept of bridge transfer, which denotes the weakening and dissolution of operational ties between the service buyer firms and their end customers and the emergence and strengthening of operational ties between the service supplier firms and the end customers. It also empirically derives a measurement scale for this new construct. Further, the effects of bridge transfer on the supplier's appropriation behavior, the buyer's cost of quality, and the end customers' quality perception are examined in the context of customer-facing services and are contrasted with services that entail little or no customer interaction. In addition, the moderating role of the buyer-supplier relationship on the effects of bridge transfer is also examined. An Internet-based survey administered to firms affiliated with CAPS Research and the Institute of Supply Management served as the primary data source (n=137). Principal component analyses were used to derive a composite score for each model construct. Linear regressions were then used to detect the effects of bridge transfer on services outsourcing outcomes and the moderating role of buyer-supplier relationships on these effects. The results show that bridge transfer is positively correlated with suppliers' appropriation behavior and negatively correlated with end customers' quality perception in the context of customer-facing services. The effects of bridge transfer are not found for services that entail little or no interaction with the end customers. Instead, the buyer-supplier relationship is found to be a key factor influencing services outsourcing outcomes in this context. This study helps to pinpoint some of the omitted causes of services outsourcing failures and sheds light on how to manage services outsourcing for success.
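A minimal sketch of the analysis pipeline, composite construct scores from a first principal component followed by a regression with an interaction term to test moderation, is shown below on simulated survey responses; the variable names and data are hypothetical, not the CAPS/ISM sample.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch of the analysis pipeline: build a composite score for a
# multi-item construct from its first principal component, then test whether
# the buyer-supplier relationship moderates the effect of bridge transfer.
# Responses are simulated; names are hypothetical, not the study's survey items.
rng = np.random.default_rng(7)
n = 137  # same sample size as the study, but simulated data

# Three hypothetical Likert-type items measuring the bridge-transfer construct.
latent = rng.normal(size=n)
items = latent[:, None] + rng.normal(scale=0.5, size=(n, 3))

# Composite score = projection onto the first principal component of the
# standardized items (a simple stand-in for composite scoring).
z = (items - items.mean(axis=0)) / items.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
bridge = z @ vt[0]

relationship = rng.normal(size=n)   # composite for the hypothetical moderator
appropriation = 0.5 * bridge - 0.3 * bridge * relationship + rng.normal(size=n)

df = pd.DataFrame({"bridge": bridge, "relationship": relationship,
                   "appropriation": appropriation})
model = smf.ols("appropriation ~ bridge * relationship", data=df).fit()
print(model.params)   # the bridge:relationship term captures the moderating effect
```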
Contributors: Li, Mei (Author) / Choi, Thomas Y. (Thesis advisor) / Dooley, Kevin J (Committee member) / Bitner, Mary-Jo (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A low temperature amorphous oxide thin film transistor (TFT) backplane technology for flexible organic light emitting diode (OLED) displays has been developed to create 4.1-in. diagonal backplanes. The critical steps in the evolution of the backplane process include the qualification and optimization of the low temperature (200 °C) metal oxide process, the stability of the devices under forward and reverse bias stress, the transfer of the process to flexible plastic substrates, and the fabrication of white OLED displays. Mixed oxide semiconductor TFTs on flexible plastic substrates typically suffer from performance and stability issues related to the maximum processing temperature limitation of the polymer. A novel device architecture based upon a dual active layer enables significant improvements in both the performance and stability. Devices are directly fabricated below 200 °C on a polyethylene naphthalate (PEN) substrate using mixed metal oxides of either zinc indium oxide (ZIO) or indium gallium zinc oxide (IGZO) as the active semiconductor. The dual active layer architecture allows for adjustment in the saturation mobility and threshold voltage stability without the requirement of high temperature annealing, which is not compatible with flexible colorless plastic substrates like PEN. The device performance and stability are strongly dependent upon the composition of the mixed metal oxide; this dependency provides a simple route to improving the threshold voltage stability and drive performance. By switching from a single to a dual active layer, the saturation mobility increases from 1.2 cm2/V-s to 18.0 cm2/V-s, while the rate of the threshold voltage shift decreases by an order of magnitude. This approach could assist in enabling the production of devices on flexible substrates using amorphous oxide semiconductors.
Contributors: Marrs, Michael (Author) / Raupp, Gregory B (Thesis advisor) / Vogt, Bryan D (Thesis advisor) / Allee, David R. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A good production schedule in a semiconductor back-end facility is critical for the on-time delivery of customer orders. Compared to the front-end process, which is dominated by re-entrant product flows, the back-end process is linear and therefore more suitable for scheduling. However, production scheduling of the back-end process is still very difficult due to the wide product mix, large number of parallel machines, product-family-related setups, machine-product qualification, and weekly demand consisting of thousands of lots. In this research, a novel mixed-integer linear programming (MILP) model is proposed for the batch production scheduling of a semiconductor back-end facility. In the MILP formulation, the manufacturing process is modeled as a flexible flow line with bottleneck stages, unrelated parallel machines, product-family-related sequence-independent setups, and product-machine qualification considerations. However, this MILP formulation is difficult to solve for real-size problem instances. In a semiconductor back-end facility, production scheduling usually needs to be done every day while considering an updated demand forecast over a medium-term planning horizon. Due to the limitation on the solvable size of the MILP model, a deterministic scheduling system (DSS), consisting of an optimizer and a scheduler, is proposed to provide sub-optimal solutions in a short time for real-size problem instances. The optimizer generates a tentative production plan. The scheduler then sequences each lot on each individual machine according to the tentative production plan and scheduling rules. Customized factory rules and additional resource constraints are included in the DSS, such as the preventive maintenance schedule, setup crew availability, and carrier limitations. Small problem instances are randomly generated to compare the performance of the MILP model and the deterministic scheduling system. Experimental design is then applied to understand the behavior of the DSS and identify the best configuration of the DSS under different demand scenarios. Product-machine qualification decisions have a long-term and significant impact on production scheduling. A robust product-machine qualification matrix is critical for meeting demand when demand quantity or mix varies. In the second part of this research, a stochastic mixed integer programming model is proposed to balance the tradeoff between current machine qualification costs and future backorder costs under uncertain demand. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to compare the performance of different solution methods.
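A toy sketch of the core assignment structure in such a MILP, lots restricted to qualified machines with a family setup charged on any machine that runs the family, is given below using the PuLP library and invented data; the full flexible-flow-line formulation and the stochastic qualification model are not reproduced.

```python
import pulp

# Toy sketch of assigning lots to qualified parallel machines while charging a
# family setup on any machine that runs lots from that family. Data are
# invented; the full flexible-flow-line MILP and the stochastic qualification
# model are not reproduced here.
lots = {"L1": {"family": "F1", "time": 4}, "L2": {"family": "F1", "time": 3},
        "L3": {"family": "F2", "time": 5}, "L4": {"family": "F2", "time": 2}}
machines = ["M1", "M2"]
families = {"F1", "F2"}
qualified = {("L1", "M1"), ("L2", "M1"), ("L2", "M2"),
             ("L3", "M2"), ("L4", "M1"), ("L4", "M2")}
SETUP = 2  # hypothetical sequence-independent family setup time

m = pulp.LpProblem("backend_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", [(l, k) for l in lots for k in machines], cat="Binary")
y = pulp.LpVariable.dicts("setup", [(f, k) for f in families for k in machines], cat="Binary")
cmax = pulp.LpVariable("makespan", lowBound=0)
m += cmax  # objective: minimize the makespan

for l in lots:
    m += pulp.lpSum(x[(l, k)] for k in machines) == 1        # assign every lot once
    for k in machines:
        if (l, k) not in qualified:
            m += x[(l, k)] == 0                               # qualification restriction
        m += x[(l, k)] <= y[(lots[l]["family"], k)]           # setup if family is run

for k in machines:   # machine load (processing plus setups) bounds the makespan
    m += (pulp.lpSum(lots[l]["time"] * x[(l, k)] for l in lots)
          + pulp.lpSum(SETUP * y[(f, k)] for f in families)) <= cmax

m.solve(pulp.PULP_CBC_CMD(msg=False))
for l in lots:
    for k in machines:
        if x[(l, k)].value() > 0.5:
            print(l, "->", k)
print("makespan:", cmax.value())
```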
Contributors: Fu, Mengying (Author) / Askin, Ronald G. (Thesis advisor) / Zhang, Muhong (Thesis advisor) / Fowler, John W (Committee member) / Pan, Rong (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011