This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system, or full-system level. Two issues considered in this work are: (1) information about design ideas is incomplete, informal, and sketchy; (2) designers often work at multiple levels, so different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis examines the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: (1) identifying the typical types of information available after an ideation session; (2) identifying the typical types of technical evaluations done in early stages; and (3) determining how to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart in which each entry is expressed as some combination of a sketch, text, and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables, and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created, and users were given instructions about incorporating them. To support quick analysis, differential equations are transformed to algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies are presented to show how it works.
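To make the networking of physical effects through shared variables concrete, here is a minimal Python sketch (the thesis tool itself was written in MATLAB). The effect functions, the tiny variable ontology, and the propagation loop are hypothetical illustrations of the approach, not the thesis implementation.

```python
# Illustrative sketch (not the thesis's MATLAB tool): physical effects
# networked through shared variables, evaluated with interval arithmetic.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# Each "physical effect" computes one output variable from shared inputs.
# The variable names form a tiny common ontology, so effects chain
# automatically whenever they share a variable.
def ohms_law(state):       # V = I * R
    return "V", state["I"] * state["R"]

def joule_heating(state):  # P = V * I
    return "P", state["V"] * state["I"]

def propagate(effects, variables):
    """Fire effects until all outputs are computed (no fixed sequence)."""
    pending = list(effects)
    while pending:
        progressed = False
        for eff in list(pending):
            try:
                name, value = eff(variables)
            except KeyError:          # inputs not yet available
                continue
            variables[name] = value
            pending.remove(eff)
            progressed = True
        if not progressed:
            raise ValueError("under-determined effect network")
    return variables

# Sketchy design knowledge: current and resistance known only as ranges.
vars0 = {"I": Interval(0.8, 1.2), "R": Interval(4.0, 6.0)}
result = propagate([joule_heating, ohms_law], vars0)
print(result["P"])   # interval bounds on dissipated power
```

A designer could then compare the resulting power interval against a component rating to answer a well-defined feasibility question without committing to point values.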
Contributors: Khorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Identifying important variation patterns is a key step to identifying root causes of process variability. This gives rise to a number of challenges. First, the variation patterns might be non-linear in the measured variables, while the existing research literature has focused on linear relationships. Second, it is important to remove noise from the dataset in order to visualize the true nature of the underlying patterns. Third, in addition to visualizing the pattern (preimage), it is also essential to understand the relevant features that define the process variation pattern. This dissertation addresses these three challenges. A base kernel principal component analysis (KPCA) algorithm transforms the measurements to a high-dimensional feature space where non-linear patterns in the original measurements can be handled through linear methods. However, the principal component subspace in feature space might not be well estimated, especially from noisy training data. An ensemble procedure is constructed in which the final preimage is estimated as the average over bagged samples drawn from the original dataset, attenuating noise in kernel subspace estimation; this improves the robustness of any base KPCA algorithm. In a second method, successive iterations of denoising a convex combination of the training data and the corresponding denoised preimage are used to produce a more accurate estimate of the actual denoised preimage for noisy training data. The number of primary eigenvectors chosen in each iteration is also decreased at a constant rate, and an efficient stopping criterion is used to reduce the number of iterations. A feature selection procedure for KPCA is constructed to find the set of relevant features from noisy training data. Data points are projected onto sparse random vectors; pairs of such projections are then matched, and the differences in variation patterns within pairs are used to identify the relevant features. This approach provides robustness to irrelevant features by calculating the final variation pattern from an ensemble of feature subsets. Experiments are conducted using several simulated as well as real-life data sets, and the proposed methods show significant improvement over competing methods.
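A minimal sketch of the bagged-preimage idea follows, using scikit-learn's KernelPCA as the base algorithm. The dissertation's base KPCA and preimage method may differ (scikit-learn approximates the inverse map via kernel ridge regression), and the circular test pattern, kernel parameters, and bag count are illustrative assumptions.

```python
# Sketch: average denoised preimages from KPCA models fit on bootstrap
# resamples of noisy training data, to attenuate subspace-estimation noise.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Noisy training data on a non-linear (circular) variation pattern.
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.15 * rng.normal(size=(200, 2))

def bagged_preimage(X, n_bags=25, n_components=2, gamma=2.0):
    preimages = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(X), len(X))          # bootstrap sample
        kpca = KernelPCA(n_components=n_components, kernel="rbf",
                         gamma=gamma, fit_inverse_transform=True)
        kpca.fit(X[idx])
        # Project every point onto the bagged kernel subspace and map back.
        preimages.append(kpca.inverse_transform(kpca.transform(X)))
    return np.mean(preimages, axis=0)                  # ensemble average

X_denoised = bagged_preimage(X)
print("mean distance to unit circle before:",
      np.abs(np.linalg.norm(X, axis=1) - 1).mean())
print("mean distance to unit circle after: ",
      np.abs(np.linalg.norm(X_denoised, axis=1) - 1).mean())
```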
Contributors: Sahu, Anshuman (Author) / Runger, George C. (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This research studies the computational performance of four different mixed integer programming (MIP) formulations for single machine scheduling problems with varying complexity. These formulations are based on (1) start and completion time variables, (2) time index variables, (3) linear ordering variables, and (4) assignment and positional date variables. The objective functions studied are total weighted completion time, maximum lateness, number of tardy jobs, and total weighted tardiness. Based on the computational results, discussion and recommendations are made on which MIP formulation might work best for these problems. The performance of these formulations depends strongly on the objective function, the number of jobs, and the sum of the processing times of all the jobs. Two sets of inequalities are presented that can be used to improve the performance of the formulation with assignment and positional date variables. Further, this research is extended to single machine bicriteria scheduling problems in which jobs belong to either of two disjoint sets, each set having its own performance measure. These problems have been referred to as interfering job sets in the scheduling literature and have also been called multi-agent scheduling, where each agent's objective function is to be minimized. In the first single machine interfering problem (P1), the criteria of minimizing total completion time and number of tardy jobs for the two sets of jobs are studied. A Forward SPT-EDD heuristic is presented that attempts to generate a set of non-dominated solutions. This specific problem is NP-hard, and the computational efficiency of the heuristic is compared against the pseudo-polynomial algorithm proposed by Ng et al. [2006]. In the second single machine interfering job sets problem (P2), the criteria of minimizing total weighted completion time and maximum lateness are studied. This is an established NP-hard problem, for which a Forward WSPT-EDD heuristic is presented that attempts to generate a set of supported points, and the solution quality is compared with MIP formulations. For both of these problems, all jobs are available at time zero and the jobs are not allowed to be preempted.
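As an illustration of one of the four families, the sketch below writes the linear ordering formulation (3) for the total weighted completion time objective, using PuLP as a convenient modeling layer. The job data and solver choice are illustrative assumptions, not taken from this research.

```python
# Minimal linear-ordering MIP for single-machine total weighted completion
# time: y[i, j] = 1 if job i is sequenced before job j. Illustrative data.
import pulp

p = {1: 3, 2: 5, 3: 2, 4: 4}        # processing times (example)
w = {1: 2, 2: 1, 3: 4, 4: 3}        # weights (example)
jobs = list(p)

m = pulp.LpProblem("single_machine_sum_wjCj", pulp.LpMinimize)
y = pulp.LpVariable.dicts(
    "y", [(i, j) for i in jobs for j in jobs if i != j], cat="Binary")

# C_j = p_j + total processing time of all predecessors of j (linear in y).
C = {j: p[j] + pulp.lpSum(p[i] * y[(i, j)] for i in jobs if i != j)
     for j in jobs}
m += pulp.lpSum(w[j] * C[j] for j in jobs)

for i in jobs:
    for j in jobs:
        if i < j:
            m += y[(i, j)] + y[(j, i)] == 1              # one order per pair
for i in jobs:
    for j in jobs:
        for k in jobs:
            if len({i, j, k}) == 3:
                m += y[(i, j)] + y[(j, k)] + y[(k, i)] <= 2   # no 3-cycles

m.solve(pulp.PULP_CBC_CMD(msg=False))
seq = sorted(jobs, key=lambda j: pulp.value(C[j]))
print("sequence:", seq, "objective:", pulp.value(m.objective))
```

For this objective the optimal sequence follows the weighted shortest processing time (WSPT) rule, which provides a quick sanity check on the model's output.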
Contributors: Khowala, Ketan (Author) / Fowler, John (Thesis advisor) / Keha, Ahmet (Thesis advisor) / Balasubramanian, Hari J (Committee member) / Wu, Teresa (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The problem of detecting the presence of a known signal in multiple channels of additive white Gaussian noise, such as occurs in active radar with a single transmitter and multiple geographically distributed receivers, is addressed via coherent multiple-channel techniques. A replica of the transmitted signal is treated as one channel in an M-channel detector, with the remaining M-1 channels comprised of data from the receivers. It is shown that the distribution of the eigenvalues of a Gram matrix is invariant to the presence of the signal replica on one channel, provided the other M-1 channels are independent and contain only white Gaussian noise. Thus, the thresholds representing false alarm probabilities for detectors based on functions of these eigenvalues remain valid when one channel is known not to contain only noise. The derivation is supported by results from Monte Carlo simulations. The performance of the largest eigenvalue as a detection statistic in the active case is examined and compared to the normalized matched filter detector in two- and three-channel cases.
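A small Monte Carlo sketch of this detector is given below: the threshold for the largest Gram matrix eigenvalue is set from noise-only receiver channels (with the replica channel present, per the invariance result), and detection probability is then estimated. The signal model, SNR, and false alarm level are illustrative assumptions.

```python
# Monte Carlo sketch of the largest-eigenvalue Gram matrix detector.
# Channel 0 holds the known signal replica; the rest hold receiver data.
import numpy as np

rng = np.random.default_rng(1)
M, N, trials = 3, 128, 2000          # channels, samples, Monte Carlo runs

def largest_eig_stat(X):
    """Largest eigenvalue of the normalized Gram matrix of the channels."""
    G = (X @ X.conj().T) / X.shape[1]
    return np.linalg.eigvalsh(G)[-1]

s = np.exp(1j * 2 * np.pi * 0.1 * np.arange(N))   # known replica, unit power

def run(signal_present, snr=0.1):
    stats = []
    for _ in range(trials):
        X = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
        X[0] = s                                  # replica channel
        if signal_present:
            X[1:] += np.sqrt(snr) * s             # signal reaches receivers
        stats.append(largest_eig_stat(X))
    return np.array(stats)

h0 = run(signal_present=False)
threshold = np.quantile(h0, 0.99)                 # ~1% false alarm rate
pd = (run(signal_present=True) > threshold).mean()
print(f"threshold={threshold:.3f}  empirical detection prob={pd:.2f}")
```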
Contributors: Beaudet, Kaitlyn Elizabeth (Author) / Cochran, Douglas (Thesis director) / Wu, Teresa (Committee member) / Howard, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2013-05
Description
Surgery is one of the most important functions in a hospital with respect to operational cost, patient flow, and resource utilization. Planning and scheduling the Operating Room (OR) is important for hospitals to improve efficiency and achieve high quality of service. At the same time, it is a complex task due to the conflicting objectives and the uncertain nature of surgeries. In this dissertation, three different methodologies are developed to address the OR planning and scheduling problem. First, a simulation-based framework is constructed to analyze the factors that affect the utilization of a catheterization lab and to provide decision support for improving the efficiency of operations in a hospital with different priorities of patients. Both operational costs and patient satisfaction metrics are considered, and detailed parametric analysis is performed to provide generic recommendations. Overall, it is found that the 75th percentile of process duration is always on the efficient frontier and is a good compromise between the two objectives. Next, the general OR planning and scheduling problem is formulated as a mixed integer program. The objectives include reducing staff overtime, OR idle time, and patient waiting time, as well as satisfying surgeon preferences and regulating patient flow from the OR to the Post Anesthesia Care Unit (PACU). Exact solutions are obtained using real data. Heuristics and a random keys genetic algorithm (RKGA) are used in the scheduling phase and compared with the optimal solutions. Interacting effects between planning and scheduling are also investigated. Lastly, a multi-objective simulation optimization approach is developed, which relaxes the deterministic assumption in the second study by integrating an optimization module, an RKGA implementation of the Non-dominated Sorting Genetic Algorithm II (NSGA-II) that searches for Pareto optimal solutions, with a simulation module that evaluates the performance of a given schedule. It is experimentally shown to be an effective technique for finding Pareto optimal solutions.
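The random keys encoding at the heart of the RKGA can be sketched briefly: a chromosome of continuous keys decodes to a surgery sequence by sorting, and a simulation module scores each chromosome on the two competing criteria. The durations, costs, and the simple dominance-based search below are illustrative stand-ins for the full NSGA-II implementation.

```python
# Random-keys sketch: keys -> sequence by argsort; Monte Carlo simulation
# scores a schedule on (expected overtime, expected waiting). All data
# (durations, session length) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
mean_dur = np.array([60, 90, 45, 120, 75], dtype=float)   # minutes
session_len = 8 * 60                                      # OR day length

def decode(keys):
    """Random keys -> permutation: sort surgeries by their key values."""
    return np.argsort(keys)

def evaluate(keys, n_rep=500):
    order = decode(keys)
    overtime = waiting = 0.0
    for _ in range(n_rep):
        durations = rng.lognormal(np.log(mean_dur), 0.3)  # uncertain durations
        t = 0.0
        for j in order:
            waiting += t              # each patient waits until the OR is free
            t += durations[j]
        overtime += max(0.0, t - session_len)
    return overtime / n_rep, waiting / n_rep              # two objectives

# A simple dominance-based local search stands in for NSGA-II here.
best_keys = rng.random(len(mean_dur))
best = evaluate(best_keys)
for _ in range(200):
    cand = np.clip(best_keys + 0.2 * rng.normal(size=len(mean_dur)), 0, 1)
    obj = evaluate(cand)
    if obj[0] <= best[0] and obj[1] <= best[1]:           # Pareto-dominates
        best_keys, best = cand, obj
print("sequence:", decode(best_keys), "objectives:", best)
```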
Contributors: Li, Qing (Author) / Fowler, John W (Thesis advisor) / Mohan, Srimathy (Thesis advisor) / Gopalakrishnan, Mohan (Committee member) / Askin, Ronald G. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Optimization of surgical operations is a challenging managerial problem for surgical suite directors. This dissertation presents modeling and solution techniques for operating room (OR) planning and scheduling problems. First, several sequencing and patient appointment time setting heuristics are proposed for scheduling an Outpatient Procedure Center. A discrete event simulation model is used to evaluate how the scheduling heuristics perform with respect to the competing criteria of expected patient waiting time and expected surgical suite overtime for a single day compared to current practice. Next, a bi-criteria Genetic Algorithm is used to determine if better solutions can be obtained for this single day scheduling problem, and its efficacy when surgeries are allowed to be moved to other days is also investigated. Numerical experiments based on real data from a large health care provider are presented. The analysis provides insight into the best scheduling heuristics and the tradeoff between patient-based and provider-based criteria. Second, a multi-stage stochastic mixed integer programming formulation for the allocation of surgeries to ORs over a finite planning horizon is studied, where the demand for surgery and the surgical durations are random variables. The objective is to minimize two competing criteria: expected surgery cancellations and OR overtime. A decomposition method, Progressive Hedging, is implemented to find near-optimal surgery plans. Finally, properties of the model are discussed and methods are proposed to improve the performance of the algorithm based on the special structure of the model. It is found that simple rules can improve schedules used in practice: sequencing surgeries from the longest to the shortest mean duration causes high expected overtime and should be avoided, while sequencing from the shortest to the longest mean duration performed quite well in the experiments. Expending greater computational effort with more sophisticated optimization methods does not lead to substantial improvements; however, controlling the daily procedure mix may achieve substantial improvements in performance. A novel stochastic programming model for a dynamic surgery planning problem is proposed in the dissertation, and the efficacy of the Progressive Hedging algorithm is investigated. It is found that there is a significant correlation between the performance of the algorithm and the type and number of scenario bundles in a problem instance. The computational time spent solving scenario subproblems is among the most significant factors affecting the performance of the algorithm. The quality of the solutions can be improved by detecting and preventing cyclical behaviors.
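The mechanics of Progressive Hedging can be sketched on a toy one-dimensional version of the capacity problem: scenario subproblems are solved independently with an augmented penalty term, and multiplier updates drive the scenario-specific decisions toward consensus. The demand scenarios, costs, and penalty parameter below are illustrative assumptions, not the dissertation's model.

```python
# Toy Progressive Hedging sketch: choose a first-stage OR capacity x under
# scenario-dependent demand, trading overtime cost against idle-time cost.
import numpy as np

scenarios = [(0.3, 6.0), (0.4, 8.0), (0.3, 11.0)]   # (probability, demand)
c_over, c_idle, rho = 10.0, 3.0, 2.0
grid = np.linspace(0.0, 15.0, 1501)                 # 1-D decision grid

def scenario_cost(x, d):
    return c_over * np.maximum(d - x, 0) + c_idle * np.maximum(x - d, 0)

x = np.array([d for _, d in scenarios])             # per-scenario decisions
w = np.zeros(len(scenarios))                        # PH multipliers
xbar = sum(p * d for p, d in scenarios)             # initial consensus

for it in range(100):
    for s, (p, d) in enumerate(scenarios):
        # Augmented scenario subproblem, solved by brute-force grid search.
        obj = scenario_cost(grid, d) + w[s] * grid + (rho / 2) * (grid - xbar) ** 2
        x[s] = grid[np.argmin(obj)]
    xbar = sum(p * xi for (p, _), xi in zip(scenarios, x))
    w += rho * (x - xbar)                           # multiplier update
    if np.ptp(x) < 1e-2:                            # scenarios agree
        break

print(f"consensus capacity = {xbar:.2f} after {it + 1} iterations")
```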
Contributors: Gul, Serhat (Author) / Fowler, John W. (Thesis advisor) / Denton, Brian T. (Thesis advisor) / Wu, Teresa (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
This thesis presents a process by which a controller used for collective transport tasks is qualitatively studied and probed for the presence of undesirable equilibrium states that could entrap the system and prevent it from converging to a target state. Fields of study relevant to this project include dynamic system modeling, modern control theory, script-based system simulation, and autonomous systems design. The simulation and computation software MATLAB and Simulink® were used in this thesis.
To achieve this goal, a model of a swarm performing a collective transport task in a bounded domain featuring convex obstacles was simulated in MATLAB/Simulink®. The closed-loop dynamic equations of this model were linearized about an equilibrium state with angular acceleration and linear acceleration set to zero. The simulation was run over 30 times to confirm the system's ability to transport the payload to a goal point without colliding with obstacles, and to determine ideal operating conditions by testing various orientations of objects in the bounded domain. An additional pure MATLAB simulation was run to identify local minima of the navigation-like potential function via its Hessian. By calculating this Hessian periodically throughout the system's progress and determining the signs of its eigenvalues, the system can check whether it is trapped in a local minimum and potentially dislodge itself through the addition of a stochastic term in the robot controllers. The eigenvalues of the Hessian calculated in this research suggested the model's local minima were degenerate, indicating an error in the mathematical model for this system, which was likely introduced during linearization of this highly nonlinear system.
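The Hessian-based trap check can be sketched numerically: estimate the Hessian of a potential function by central finite differences and classify the probed state from the signs of its eigenvalues. The potential, probe point, and escape rule below are illustrative stand-ins for the thesis's MATLAB model.

```python
# Numerical sketch of the trap check: finite-difference Hessian of a
# navigation-like potential, with eigenvalue signs classifying the point.
import numpy as np

def potential(q, goal=np.array([5.0, 0.0]), obstacle=np.array([2.5, 0.0])):
    """Quadratic attraction to the goal plus a repulsive bump at an obstacle."""
    return 0.5 * np.sum((q - goal) ** 2) + 4.0 * np.exp(-np.sum((q - obstacle) ** 2))

def hessian(f, q, h=1e-4):
    """Central finite-difference Hessian of scalar field f at point q."""
    n = len(q)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(q + ei + ej) - f(q + ei - ej)
                       - f(q - ei + ej) + f(q - ei - ej)) / (4 * h * h)
    return H

def classify(q, tol=1e-6):
    eig = np.linalg.eigvalsh(hessian(potential, q))
    if np.all(eig > tol):
        return "local minimum (a trap if this is not the goal)"
    if np.any(np.abs(eig) <= tol):
        return "degenerate critical point"   # the case observed in the thesis
    return "saddle point"

q = np.array([1.4, 0.0])   # probe a state between the start and the obstacle
print(classify(q), np.linalg.eigvalsh(hessian(potential, q)))

# Escape heuristic: when trapped, a stochastic kick can dislodge the system.
rng = np.random.default_rng(3)
if "local minimum" in classify(q):
    q = q + 0.5 * rng.normal(size=2)
```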
Created: 2020-12
Description
Chemoreception is an important method for an octopus to sense and react to its surroundings. However, the density of chemoreceptors within different areas of the skin of the octopus arm is poorly documented. In order to assess the relative sensitivity of various regions and the degree to which chemoreception is locally controlled, octopus arms were amputated and exposed to acetic acid, a noxious chemical stimulus that has previously been shown to elicit movement responses in amputated arms (Hague et al., 2013). To test this, 11 wild-caught Octopus bimaculoides (6 females, 5 males) were obtained. Acetic acid vapor was introduced in the distal oral, distal aboral, proximal oral, and proximal aboral regions of amputated arms. The frequency of the occurrence of movement was first analyzed. For those trials in which movement occurred, the latency (delay between the stimulus and the onset of movement) and the duration of movement were analyzed. The distal aboral and distal oral regions were both more likely to move than either the proximal oral or proximal aboral regions (p < 0.0001), and when they did move, were more likely to move for longer periods of time (p < 0.05). In addition, the proximal oral region was more likely to exhibit a delay in the onset of movement compared to the distal oral or distal aboral regions (p < 0.0001). These findings provide evidence that the distal arm is most sensitive to noxious chemical stimuli. However, there were no significant differences between the distal oral and distal aboral regions, or between the proximal oral and proximal aboral regions. This suggests that there may not be a significant difference in the density of chemoreceptors in the aboral versus oral regions of the arm, contrary to claims in the literature. The other independent variables analyzed, including sex, body mass, arm length, anterior versus posterior arm identity, and left versus right arm identity, did not have a significant effect on any of the three dependent variables analyzed. Further analysis of the relative density of chemoreceptors in different regions of the octopus arm is merited.
Contributors: Casleton, Rachel Marie (Author) / Fisher, Rebecca (Thesis director) / Marvi, Hamidreza (Committee member) / Gire, David (Committee member) / School of International Letters and Cultures (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Octopus arms employ a complex three-dimensional array of musculature, called a muscular hydrostat, which allows for nearly infinite degrees of freedom of movement without the structure of a skeletal system. This study employed Magnetic Resonance Imaging with a Gadoteridol-based contrast agent to image the octopus arm and view the internal tissues. Muscle layering was mapped and area was measured using AMIRA image processing, and the trends in these layers at the proximal, middle, and distal portions of the arms were analyzed. A total of 39 arms from 6 specimens were scanned to give 112 total imaged sections (38 proximal, 37 middle, 37 distal) from which to ascertain and study possible differences in musculature. The images revealed significant increases in the internal longitudinal muscle layer percentages between the proximal and middle, proximal and distal, and middle and distal sections of the arms. These structural differences are hypothesized to be used for rapid retraction of the distal segment when encountering predators or noxious stimuli. In contrast, a significant decrease in the transverse muscle layer was found when comparing the same sections; these structural differences are hypothesized to be a result of bending behaviors during retraction. Additionally, the internal longitudinal layer was separately studied orally, toward the sucker, and aborally, away from the sucker. The significant differences in oral and aboral internal longitudinal musculature in the proximal, middle, and distal sections are hypothesized to support the pseudo-joint functionality displayed in octopus fetching behaviors. The results indicate that individual octopus arm morphology is more variable than previously thought and support the existence of internal structural differences that underlie behavioral functionality.
Contributors: Cummings, Sheldon Daniel (Author) / Fisher, Rebecca (Thesis director) / Marvi, Hamidreza (Committee member) / Cherry, Brian (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Rapid advancements in Artificial Intelligence (AI), Machine Learning, and Deep Learning technologies are widening the playing field for automated decision assistants in healthcare. The field of radiology offers a unique platform for this technology due to its repetitive work structure, its ability to leverage large data sets, and its strong potential for clinical and social impact. Several technologies in cancer screening, such as Computer Aided Detection (CAD), have crossed from research into practice through successful outcomes with patient data (Morton, Whaley, Brandt, & Amrami, 2006; Patel et al., 2018). Technologies such as the IBM Medical Sieve are generating excitement over the potential for increased impact through the addition of medical record information ("Medical Sieve Radiology Grand Challenge", 2018). As the capabilities of automation increase and become part of expert decision-making jobs, however, the careful consideration of their integration into human systems is often overlooked. This paper aims to identify how healthcare professionals and system engineers implementing and interacting with automated decision-making aids in radiology should take bureaucratic, legal, professional, and political accountability concerns into consideration. This Accountability Framework is modeled after Romzek and Dubnick's (1987) public administration framework and is expanded through an analysis of literature on accountability definitions and examples in the military, healthcare, and research sectors. A cohesive understanding of this framework and the human concerns it raises helps drive the questions that, if fully addressed, create the potential for successful integration and adoption of AI in radiology and, ultimately, the care environment.
Contributors: Gilmore, Emily Anne (Author) / Chiou, Erin (Thesis director) / Wu, Teresa (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05