This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 221

Description
The main objective of this research is to develop an integrated method to study emergent behavior and consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach including behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system of systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behavior in ECASs. Demonstrated on the U.S. electricity markets, ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and by offering new optimized actions and modeling paradigms in agent-based simulation.
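The consumer-level diffusion dynamics described in the abstract above can be illustrated with a minimal agent-based sketch. Everything here (the random neighborhood structure, the heterogeneous adoption thresholds, and the threshold-based update rule) is an invented stand-in for illustration, not the dissertation's actual platform or the DOE/FERC-recommended behavior models:

```python
import random

random.seed(42)

N_AGENTS = 200
N_NEIGHBORS = 6
N_STEPS = 30

# Random neighborhood structure: a stand-in for a social network.
neighbors = {i: random.sample([j for j in range(N_AGENTS) if j != i], N_NEIGHBORS)
             for i in range(N_AGENTS)}

# Each agent adopts (e.g., a demand-response program) once a large enough
# fraction of its neighbors has, modulated by an individual threshold that
# models heterogeneous consumer preferences.
threshold = {i: random.uniform(0.1, 0.6) for i in range(N_AGENTS)}
adopted = {i: (random.random() < 0.05) for i in range(N_AGENTS)}  # early adopters

for _ in range(N_STEPS):
    current = dict(adopted)  # synchronous update from a snapshot
    for i in range(N_AGENTS):
        if not current[i]:
            frac = sum(current[j] for j in neighbors[i]) / N_NEIGHBORS
            if frac >= threshold[i]:
                adopted[i] = True

adoption_rate = sum(adopted.values()) / N_AGENTS
print(f"final adoption rate: {adoption_rate:.2f}")
```

Running a sweep over the threshold distribution or network density is the usual way such a toy model exposes emergent adoption patterns.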
ContributorsHaghnevis, Moeed (Author) / Askin, Ronald G. (Thesis advisor) / Armbruster, Dieter (Thesis advisor) / Mirchandani, Pitu (Committee member) / Wu, Tong (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created2013
Description
Ionizing radiation used in patient diagnosis or therapy has negative short-term and long-term effects on the patient's body, depending on the amount of exposure. More than 700,000 examinations are performed every day on interventional radiology modalities [1]; however, no patient-centric information about the organ dose received is available to the patient or to Quality Assurance. In this study, we explore methodologies to systematically reduce the absorbed radiation dose in fluoroscopically guided interventional radiology procedures. In the first part of this study, we develop a mathematical model that determines a set of geometry settings for the equipment and an energy level during a patient exam. The goal is to minimize the absorbed dose in the critical organs while maintaining the image quality required for diagnosis. The model is a large-scale mixed-integer program. We perform polyhedral analysis and derive several sets of strong inequalities to improve the computational speed and the quality of the solution. Results show that the absorbed dose in the critical organ can be reduced by up to 99% for a specific set of angles. In the second part, we apply an approximate gradient method to simultaneously optimize angle and table location while minimizing dose in the critical organs subject to the image quality. In each iteration, we solve a sub-problem as a MIP to determine the radiation field size and corresponding X-ray tube energy. In computational experiments, results show a further reduction (up to 80%) of the absorbed dose compared with the previous method. Last, there are uncertainties in medical procedures that result in imprecision of the absorbed dose. We propose a robust formulation to hedge against the worst-case absorbed dose while ensuring feasibility. In this part, we investigate a robust approach for organ motion within a radiology procedure.
We minimize the absorbed dose for the critical organs across all input-data scenarios, which correspond to the positioning and size of the organs. The computational results indicate up to a 26% increase in the absorbed dose calculated for the robust approach, which ensures feasibility across scenarios.
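The robust idea sketched in the abstract above, minimizing the worst-case critical-organ dose across scenarios subject to an image-quality requirement, can be illustrated as a small min-max linear program. This is a hedged simplification: the dose matrix `D`, quality vector `q`, and threshold `q_min` are made-up illustrative numbers (not clinical data), and the dissertation's actual model is a large-scale mixed-integer program, not this toy LP:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n_settings = 5      # candidate (angle, energy) settings
n_scenarios = 4     # organ position/size scenarios

D = rng.uniform(0.5, 2.0, size=(n_scenarios, n_settings))  # dose per unit exposure
q = rng.uniform(1.0, 3.0, size=n_settings)                 # image-quality contribution
q_min = 4.0                                                # required quality level

# Decision vector z = [x_1..x_n, t]; minimize t, the worst-case dose.
c = np.r_[np.zeros(n_settings), 1.0]

# Constraints: D_s @ x - t <= 0 for every scenario s, and -q @ x <= -q_min.
A_ub = np.vstack([np.c_[D, -np.ones(n_scenarios)],
                  np.r_[-q, 0.0]])
b_ub = np.r_[np.zeros(n_scenarios), -q_min]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_settings + 1))
x, worst_dose = res.x[:n_settings], res.x[-1]
print(f"worst-case dose: {worst_dose:.3f}, quality: {q @ x:.3f}")
```

The epigraph variable `t` upper-bounds the dose in every scenario, so minimizing `t` minimizes the worst case, which is the standard LP encoding of a min-max objective.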
ContributorsKhodadadegan, Yasaman (Author) / Zhang, Muhong (Thesis advisor) / Pavlicek, William (Thesis advisor) / Fowler, John (Committee member) / Wu, Tong (Committee member) / Arizona State University (Publisher)
Created2013
Description
This philosophical inquiry explores the work of philosophers Gilles Deleuze and Félix Guattari and posits applications to music education. Through the concepts of multiplicities, becoming, bodies without organs, smooth spaces, maps, and nomads, Deleuze and Guattari challenge prior and current understandings of existence. In their writings on art, education, and how one might live, they assert a world consisting of variability and motion. Drawing on Deleuze and Guattari's emphasis on time and difference, I posit the following questions: Who and when are we? Where are we? When is music? When is education? Throughout this document, their philosophical figuration of a rhizome serves as a recurring theme, highlighting the possibilities of complexity, diverse connections, and continual processes. I explore the question "When and where are we?" by combining the work of Deleuze and Guattari with that of other authors. Drawing on these ideas, I posit an ontology of humans as inseparably cognitive, embodied, emotional, social, and striving multiplicities. Investigating the question "Where are we?" using Deleuze and Guattari's writings as well as those of contemporary place philosophers and other writers reveals that humans exist at the continually changing confluence of local and global places. In order to engage with the questions "When is music?" and "When is education?" I inquire into how humans as cognitive, embodied, emotional, social, and striving multiplicities emplaced in a glocalized world experience music and education. In the final chapters, a philosophy of music education consisting of the ongoing, interconnected processes of complicating, considering, and connecting is proposed. Complicating involves continually questioning how humans' multiple inseparable qualities and places integrate during musical and educative experiences.
Considering includes imagining the multiple directions in which connections might occur as well as contemplating the quality of potential connections. Connecting involves assisting students in forming variegated connections between themselves, their multiple qualities, and their glocal environments. Considering a rhizomatic philosophy of music education includes continually engaging in the integrated processes of complicating, considering, and connecting. Through such ongoing practices, music educators can promote flourishing in the lives of students and the experiences of their multiple communities.
ContributorsRicherme, Lauren Kapalka (Author) / Stauffer, Sandra (Thesis advisor) / Gould, Elizabeth (Committee member) / Schmidt, Margaret (Committee member) / Sullivan, Jill (Committee member) / Tobias, Evan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Product reliability is now a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products can last years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal designs for ALT with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show the effects of the parameters on the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples. Several graphical tools are also developed to evaluate different candidate designs. Finally, model-checking designs for cases where more than one model is available are discussed.
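As a rough illustration of the ALT setting described above (not the dissertation's PH/GLM design methodology), the following sketch simulates right-censored exponential lifetimes at elevated stress levels, fits a log-linear stress-life relationship, and extrapolates to a use condition. The coefficients, stress levels, sample sizes, and censoring time are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

beta0, beta1 = 8.0, -0.05                   # assumed true log-mean-life coefficients
stresses = np.array([80.0, 100.0, 120.0])   # accelerated stress levels
n_units, censor_time = 50, 60.0             # units per stress, right-censoring time

est_rates = []
for s in stresses:
    mean_life = np.exp(beta0 + beta1 * s)
    t = rng.exponential(mean_life, n_units)
    failed = t <= censor_time
    t_obs = np.minimum(t, censor_time)      # right-censored observations
    # Exponential MLE with censoring: failures / total time on test.
    est_rates.append(failed.sum() / t_obs.sum())

# Log-linear fit of log(failure rate) vs. stress, then extrapolate to use stress.
b1, b0 = np.polyfit(stresses, np.log(est_rates), 1)
use_stress = 50.0
pred_mean_life = 1.0 / np.exp(b0 + b1 * use_stress)
print(f"predicted mean life at use stress: {pred_mean_life:.0f}")
```

The point of testing at elevated stress is visible here: failures arrive quickly at high stress, and the fitted stress-life line carries that information down to the use condition.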
ContributorsYang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created2013
Description
Nonregular screening designs can be an economical alternative to traditional resolution IV 2^(k-p) fractional factorials. Recently 16-run nonregular designs, referred to as no-confounding designs, were introduced in the literature. These designs have the property that no pair of main effect (ME) and two-factor interaction (2FI) estimates is completely confounded. In this dissertation, orthogonal arrays were evaluated with many popular design-ranking criteria in order to identify optimal 20-run and 24-run no-confounding designs. Monte Carlo simulation was used to empirically assess the model-fitting effectiveness of the recommended no-confounding designs. The results of the simulation demonstrated that these new designs, particularly the 24-run designs, are successful at detecting active effects over 95% of the time, given sufficient model effect sparsity. The final chapter presents a screening design selection methodology, based on decision trees, to aid in the selection of a screening design from a list of published options. The methodology determines which of a candidate set of screening designs has the lowest expected experimental cost.
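The Monte Carlo assessment described above can be sketched in a few lines. As a stand-in for the 20- and 24-run nonregular arrays (which are not reproduced here), this example uses a full 2^4 factorial with two active main effects under effect sparsity and a fixed detection cutoff; the effect size, noise level, and cutoff are illustrative assumptions:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# 16-run full 2^4 factorial in coded (-1, +1) units: a stand-in design.
X = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

n_trials = 500
detect = 0
for _ in range(n_trials):
    # Effect sparsity: two active main effects out of four.
    beta = np.zeros(4)
    active = rng.choice(4, size=2, replace=False)
    beta[active] = 3.0
    y = X @ beta + rng.normal(0.0, 1.0, size=len(X))
    # Least-squares fit of the intercept + main-effects model.
    est, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)
    # Declare an effect "detected" if its estimate exceeds a fixed cutoff.
    found = set(np.flatnonzero(np.abs(est[1:]) > 1.5))
    detect += set(active) <= found

power = detect / n_trials
print(f"empirical detection rate: {power:.3f}")
```

Repeating this loop for each candidate array, and for models that include two-factor interactions, is how such a simulation ranks competing screening designs.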
ContributorsStone, Brian (Author) / Montgomery, Douglas C. (Thesis advisor) / Silvestrini, Rachel T. (Committee member) / Fowler, John W (Committee member) / Borror, Connie M. (Committee member) / Arizona State University (Publisher)
Created2013
Description
In this work, I provide two novel pieces of evidence in favor of the view that there is pragmatic encroachment on knowledge. First, I present an empirical case via the results of a series of recent experiments to show that folk knowledge attributions may be sensitive to time constraints even when the latter are construed in a non-truth-relevant manner. Along the way, I consider some comments made by Jonathan Schaffer (2006) as they pertain to interpreting time-constraint sensitivity in a manner that supports contextualism, before offering reasons to resist such a treatment. I proceed by applying interest-relative invariantism to adjudicate a conflict in the epistemology of testimony, namely the positive reasons requirement (reductionism vs. non-reductionism). In particular, I highlight how whether an epistemic subject H needs positive non-testimonial reasons to be justified in accepting S's testimony that p depends on what is at stake for H in believing that p and how much time H has to deliberate about p.
ContributorsShin, Joseph Ellis (Author) / Pinillos, N. Angel (Thesis advisor) / Reynolds, Steven L (Committee member) / White, Michael J. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Emergentism offers a promising compromise in the philosophy of mind between Cartesian substance dualism and reductivistic physicalism. The ontological emergentist holds that conscious mental phenomena supervene on physical phenomena, but that they have a nature over and above the physical. However, emergentist views have been subjected to a variety of powerful objections: they are alleged to be self-contradictory, incompatible with mental causation, justified by unreliable intuitions, and in conflict with our contemporary scientific understanding of the world. I defend the emergentist position against these objections. I clarify the concepts of supervenience and of ontological novelty in a way that ensures the emergentist position is coherent, while remaining distinct from physicalism and traditional dualism. Making note of the equivocal way in which the concept of sufficiency is used in Jaegwon Kim's arguments against emergent mental causation, I argue that downward causation does not entail widespread overdetermination. I argue that considerations of ideal a priori deducibility from some physical base, or "Cosmic Hermeneutics", will not themselves provide answers to where the cuts in the structure of nature lie. Instead, I propose reconsidering the question of Cosmic Hermeneutics in terms of which cognitive resources would be required for the ideal reasoner to perform the deduction. Lastly, I respond to the objection that emergence in the philosophy of mind is in conflict with our contemporary scientific understanding of the world. I suggest that a kind of weak ontological emergence is a viable form of explanation in many fields, and discuss current applications of emergence in biology, sociology, and the study of complex systems.
ContributorsWatson, Jeffrey (Author) / Kobes, Bernard W (Thesis advisor) / Pinillos, Nestor (Committee member) / Horgan, Terence (Committee member) / Reynolds, Steven (Committee member) / Arizona State University (Publisher)
Created2013
Description
This thesis explores the conceptual span and plausibility of emergence and its applicability to the problem of mental causation. The early parts of the project explicate a distinction between weak and strong emergence as described by Jaegwon Kim. They also consider Kim's objections regarding the conceptual incoherence of strong emergence and the otiose nature of weak emergence. The paper then explores Mark Bedau's in-between conception of emergence and ultimately finds that middle conception to be both coherent and useful. With these three emergence distinctions in hand, the thesis goes on to explore Evan Thompson's recent work, Mind in Life (2010). In that work, Thompson advances a strong emergence approach to mind, whereby he concludes that the incipient stages of cognition are found at the most basic levels of life, namely biologic cells. Along the way, Thompson embraces holism and a nonfundamental, nonhierarchical physics in order to counter Jaegwon Kim's objections to the notion of downward causation needed for strong emergence. The thesis presents arguments against Thompson's holism and nonfundamental physics, while supporting his assertion regarding the incipient stages of cognition. It then combines an important distinction between mental causation and the experience of mental causation with Thompson's notion of incipient cognition to arrive at a dual-realms approach to understanding mental causation.
ContributorsFournier, Thomas (Author) / Kobes, Bernard W (Thesis advisor) / Reynolds, Steven L (Committee member) / Armendt, Brad (Committee member) / Arizona State University (Publisher)
Created2013
Description
Saying, "if Mary had watered Sam's plant, it wouldn't have died," is an ordinary way to identify Mary not watering Sam's plant as the cause of its death. But there are problems with this statement. If we identify Mary's omitted action as the cause, we seemingly admit an inordinate number of omissions as causes, for any counterfactual statement containing the omitted action is true (e.g. if Hillary Clinton had watered Sam's plant, it wouldn't have died). The statement, moreover, is mysterious because it is not clear why one protasis is more salient than any of the alternatives, such as "if Sam hadn't gone to Bismarck." In the burgeoning field of experimental metaphysics, some theorists have tried to account for these intuitions about omissive causes. By synthesizing this data and providing a few experiments, I will suggest that judgments - and maybe metaphysics - about omissive causes necessarily have a normative feature. This understanding of omissive causes may be able to adequately resolve the problems above.
ContributorsHenne, Paul (Author) / Kobes, Bernard W (Thesis advisor) / Pinillos, Nestor A (Thesis advisor) / Reynolds, Steven (Committee member) / Arizona State University (Publisher)
Created2013
Description
With the number of internationally run clinical drug trials increasing, the double standards between those in developed nations and those in developing nations are being scrutinized under the ethical microscope. Many argue that several pharmaceutical companies and researchers are exploiting developing-nation participants. Two issues of concern are the use of a placebo control when an effective alternative treatment exists and the lack of drug availability to the country that hosted the clinical trial should the experimental drug prove effective. Though intuitively this seems like an instance of exploitation, philosophically, exploitation theories cannot adequately account for the wrongdoing in these cases. My project has two parts. First, after explaining why the theories of Alan Wertheimer, John Lawrence Hill, and Ruth Sample fail to explain the exploitation in clinical drug research, I provide an alternative account of exploitation that can explain why the double standard in clinical research is harmful. Rather than craft a single theory encompassing all instances of exploitation, I offer an account of a type, or subset, of exploitation that I refer to as comparative exploitation. The double standards in clinical research fall under the category of comparative exploitation. Furthermore, while many critics maintain that cases of comparative exploitation, including clinical research, are mutually beneficial, they are actually harmful to their victims. I explain the harm of comparative exploitation using Ben Bradley's counterfactual account of harm and Larry May's theory of sharing responsibility. The second part of my project focuses on the "standard of care" argument, which most defenders use to justify the double standard in clinical research.
I elaborate on Ruth Macklin's view that advocates of the "standard of care" position make three faulty assumptions: that placebo-controlled trials are the gold standard, that the only relevant question responsive to the host country's health needs is "Is the experimental product being studied better than the 'nothing' now available to the population?", and that the only way of obtaining affordable products is to test cheap alternatives to replace the expensive ones. In the end, I advocate moving towards a universalizing of standards in order to avoid exploitation.
ContributorsFundora, Danielle (Author) / McGregor, Joan (Thesis advisor) / Brake, Elizabeth (Committee member) / Portmore, Douglas (Committee member) / Arizona State University (Publisher)
Created2013