This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 154
Description
Locomotion of microorganisms is commonly observed in nature. Although microorganism locomotion is commonly attributed to mechanical deformation of solid appendages, in 1956 Nobel Laureate Peter Mitchell proposed that an asymmetric ion flux on a bacterium's surface could generate electric fields that drive locomotion via self-electrophoresis. Recent advances in nanofabrication have enabled the engineering of synthetic analogues, bimetallic colloidal particles, that swim due to the asymmetric ion flux mechanism originally proposed by Mitchell. Bimetallic colloidal particles swim through aqueous solutions by converting chemical fuel to fluid motion through asymmetric electrochemical reactions. This dissertation presents novel bimetallic motor fabrication strategies, motor functionality, and a study of the motors' collective behavior in chemical concentration gradients. Brownian dynamics simulations and experiments show that the motors exhibit chemokinesis, a motile response to chemical gradients that results in net migration and concentration of particles. Chemokinesis is typically observed in living organisms and is distinct from chemotaxis in that there is no directional sensing by the particle. The synthetic motor chemokinesis observed in this work is due to variation in the motors' velocity and effective diffusivity as a function of the fuel and salt concentration. Static concentration fields are generated in microfluidic devices fabricated with porous walls. The development of nanoscale particles that swim autonomously and collectively in chemical concentration gradients can be leveraged for a wide range of applications such as directed drug delivery, self-healing materials, and environmental remediation.
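The kinesis mechanism described in this abstract (net accumulation driven by a concentration-dependent speed and effective diffusivity, with no directional sensing) can be illustrated with a minimal one-dimensional Brownian dynamics sketch. Everything below is an assumption for illustration: the step-function diffusivity, the domain, and all numerical values stand in for the dissertation's actual motor parameters and simulations.

```python
import numpy as np

def chemokinesis_walk(n_particles=2000, n_steps=20000, dt=5e-4, seed=0):
    """1-D Brownian walk whose effective diffusivity depends on position.

    Particles are fast (D = 1.0) for x < 0 and slow (D = 0.1) for x >= 0,
    a crude stand-in for a fuel/salt concentration gradient; walls at
    x = -1 and x = +1 reflect. Under Ito dynamics the stationary density
    scales like 1/D(x), so particles pile up on the slow side -- net
    migration without any directional sensing (kinesis, not taxis).
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_particles)
    for _ in range(n_steps):
        D = np.where(x < 0.0, 1.0, 0.1)                 # position-dependent D
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
        x = np.where(x > 1.0, 2.0 - x, x)               # reflect at +1
        x = np.where(x < -1.0, -2.0 - x, x)             # reflect at -1
    return x

positions = chemokinesis_walk()
frac_slow = float(np.mean(positions >= 0.0))  # fraction in the slow region
```

Run long enough to near equilibrium, most particles end up in the low-diffusivity region even though each step is unbiased, which is the essence of chemokinesis as opposed to chemotaxis.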
Contributors: Wheat, Philip Matthew (Author) / Posner, Jonathan D (Thesis advisor) / Phelan, Patrick (Committee member) / Chen, Kangping (Committee member) / Buttry, Daniel (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A low-cost expander/combustor device that takes compressed air, adds thermal energy, and then expands the gas to drive an electrical generator is to be designed by modifying an existing reciprocating spark-ignition engine. The engine used is the 6.5 hp Briggs and Stratton series 122600 engine. Compressed air that is stored in a tank at a particular pressure will be introduced during the compression stage of the engine cycle to reduce pump work. In the modified design the intake and exhaust valve timings are altered to achieve this process. The time required to fill the combustion chamber with compressed air to the storage pressure immediately before spark, and the state of the air with respect to crank angle, are modeled numerically using a crank-step energy and mass balance model. The results are used to complete the engine cycle analysis based on air-standard assumptions and an air-to-fuel ratio of 15 for gasoline. It is found that at the baseline storage conditions (280 psi, 70 °F) the modified engine does not meet the imposed constraint of staying below the maximum pressure of the unmodified engine. A new storage pressure of 235 psi is recommended. This provides only a 7.7% increase in thermal efficiency for the same work output. The modification of this engine for this low efficiency gain is not recommended.
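For readers unfamiliar with the air-standard analysis this abstract refers to, the sketch below works through an idealized Otto cycle with a constant-volume heat addition set by an air-to-fuel ratio of 15, as in the abstract. The compression ratio, inlet state, and fuel heating value are illustrative assumptions, not the thesis's engine data.

```python
def otto_air_standard(r, T1=300.0, p1=101.325, afr=15.0,
                      lhv=44.0e3, cv=0.718, gamma=1.4):
    """Air-standard Otto-cycle state points and thermal efficiency.

    r: compression ratio; p in kPa, T in K; lhv in kJ/kg fuel,
    cv in kJ/(kg K); heat input per kg of air comes from the
    air-to-fuel ratio. All values here are generic textbook numbers.
    """
    # 1-2: isentropic compression
    T2 = T1 * r ** (gamma - 1.0)
    p2 = p1 * r ** gamma
    # 2-3: constant-volume heat addition, q_in = LHV / AFR per kg of air
    q_in = lhv / afr
    T3 = T2 + q_in / cv
    p3 = p2 * T3 / T2                    # constant volume: p scales with T
    # air-standard Otto efficiency depends only on r and gamma
    eta = 1.0 - r ** (1.0 - gamma)
    return {"T2": T2, "p2": p2, "T3": T3, "p_max": p3, "eta": eta}

cycle = otto_air_standard(r=8.0)
```

The same state-point bookkeeping, extended with the modified valve timing and the compressed-air charging step, is what the thesis's crank-step energy and mass balance model carries out in detail.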
Contributors: Joy, Lijin (Author) / Trimble, Steve (Thesis advisor) / Davidson, Joseph (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Passive cooling designs and technologies offer great promise to lower energy use in buildings. Though the working principles of these designs and technologies are well understood, simplified tools to quantitatively evaluate their performance are lacking. Cooling by night ventilation, the topic of this research, is one of the well-known passive cooling technologies. The building's thermal mass can be cooled at night by ventilating the space with the relatively cooler outdoor air, thereby maintaining lower indoor temperatures during the warmer daytime period. Numerous studies, both experimental and theoretical, have demonstrated the effectiveness of the method in significantly reducing air conditioning loads or improving comfort levels in those climates where the nighttime ambient air temperature drops below that of the indoor air. The impact of widespread adoption of night ventilation cooling can be substantial, given the large fraction of energy consumed by air conditioning of buildings (about 12-13% of the total electricity use in U.S. buildings). Night ventilation is relatively easy to implement with minimal design changes to existing buildings. Contemporary mathematical models to evaluate the performance of night ventilation are embedded in detailed whole-building simulation tools, which require a certain amount of expertise and are time-consuming to use. This research proposes a methodology incorporating two models, a heat transfer model and a thermal network model, to evaluate the effectiveness of night ventilation. This methodology is easier to use and faster to run. Both models are approximations of the thermal coupling between thermal mass and night ventilation in buildings, and are modifications of existing approaches for modeling the dynamic thermal response of buildings subject to natural ventilation.
Effectiveness of night ventilation was quantified by a parameter called the Discomfort Reduction Factor (DRF), an index of the reduction in occupant discomfort levels during the daytime due to night ventilation. Daily and monthly DRFs are calculated for two climate zones and three building heat capacities. It is verified that night ventilation is effective in seasons and regions where day temperatures are between 30 °C and 36 °C and night temperatures are below 20 °C. The accuracy of these models may be lower than that of a detailed simulation program, but this loss in accuracy is more than compensated for by the insights provided and the greater transparency of the analysis approach and results.
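A single-node thermal network of the kind this abstract describes can be sketched in a few lines: one lumped capacitance for the building mass, an envelope conductance, and an extra conductance switched in at night whenever the outdoor air is cooler. All parameter values below are invented for illustration; the thesis's heat transfer and thermal network models are more detailed.

```python
import math

def simulate_zone(night_vent, hours=240.0, dt=0.1, C=2.0e6, UA=50.0,
                  UA_vent=400.0, T_comfort=28.0):
    """Single-node (lumped capacitance) zone model.

    C: thermal mass (J/K); UA: envelope conductance (W/K); UA_vent:
    extra conductance while night ventilation runs. Outdoor temperature
    is a 24 h sinusoid; discomfort is accumulated as degree-hours above
    T_comfort during the daytime (08:00-20:00). Illustrative values only.
    """
    T, discomfort, t = 28.0, 0.0, 0.0
    while t < hours:
        hour = t % 24.0
        T_out = 30.0 + 8.0 * math.sin(2.0 * math.pi * (hour - 9.0) / 24.0)
        ua = UA
        # ventilate only at night, and only when the outdoor air is cooler
        if night_vent and (hour >= 22.0 or hour < 6.0) and T_out < T:
            ua += UA_vent
        T += dt * 3600.0 * ua * (T_out - T) / C      # explicit Euler step
        if 8.0 <= hour < 20.0 and T > T_comfort:
            discomfort += (T - T_comfort) * dt       # degree-hours
        t += dt
    return discomfort

base = simulate_zone(night_vent=False)
vent = simulate_zone(night_vent=True)
drf = 1.0 - vent / base   # Discomfort Reduction Factor, as defined above
```

Cooling the mass at night lowers the daytime indoor temperature trajectory, so the ventilated case accrues fewer discomfort degree-hours and the DRF comes out between 0 and 1.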
Contributors: Endurthy, Akhilesh Reddy (Author) / Reddy, T Agami (Thesis advisor) / Phelan, Patrick (Committee member) / Addison, Marlin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There is increasing interest in the medical and behavioral health communities in developing effective strategies for the treatment of chronic diseases. Among these lie adaptive interventions, which consider adjusting treatment dosages over time based on participant response. Control engineering offers a broad-based solution framework for optimizing the effectiveness of such interventions. In this thesis, an approach is proposed to develop dynamical models and, subsequently, hybrid model predictive control schemes for assigning optimal dosages of naltrexone, an opioid antagonist, as treatment for a chronic pain condition known as fibromyalgia. System identification techniques are employed to model the dynamics from the daily diary reports completed by participants of a blind naltrexone intervention trial. These self-reports include assessments of outcomes of interest (e.g., general pain symptoms, sleep quality) and additional external variables (disturbances) that affect these outcomes (e.g., stress, anxiety, and mood). Using prediction-error methods, a multi-input model describing the effect of drug, placebo, and other disturbances on outcomes of interest is developed. This discrete-time model is approximated by a continuous-time second-order model with a zero, which was found to be adequate to capture the dynamics of this intervention. Data from 40 participants in two clinical trials were analyzed, and participants were classified as responders and non-responders based on the models obtained from system identification. The dynamical models can be used by a model predictive controller for automated dosage selection of naltrexone using feedback/feedforward control actions in the presence of external disturbances. The clinical requirement for categorical (i.e., discrete-valued) drug dosage levels creates a need for hybrid model predictive control (HMPC).
The controller features a multiple degree-of-freedom formulation that enables the user to adjust the speed of setpoint tracking, measured disturbance rejection and unmeasured disturbance rejection independently in the closed loop system. The nominal and robust performance of the proposed control scheme is examined via simulation using system identification models from a representative participant in the naltrexone intervention trial. The controller evaluation described in this thesis gives credibility to the promise and applicability of control engineering principles for optimizing adaptive interventions.
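The categorical-dosage requirement can be made concrete with a toy receding-horizon loop. The first-order response model, the dose grid, and the cost weights below are all invented for illustration (the work described here identifies second-order participant-specific models); exhaustive enumeration over a short horizon stands in for the mixed-integer optimization a real hybrid MPC solver would perform.

```python
import itertools

# Hypothetical discrete-time model: symptom score y(k+1) = A*y(k) + B*u(k),
# where u is the dose.  A, B, and the dose grid are illustrative only.
A, B = 0.8, -0.05
DOSES = (0.0, 2.25, 4.5)        # categorical (discrete-valued) dosage levels
HORIZON = 3

def hmpc_dose(y, y_ref=0.0):
    """Return the first dose of the best discrete dosage sequence.

    Exhaustive enumeration over DOSES**HORIZON plays the role of the
    hybrid (mixed-integer) MPC optimization; only the first move of the
    winning sequence is applied, receding-horizon style.
    """
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product(DOSES, repeat=HORIZON):
        yk, cost = y, 0.0
        for u in seq:
            yk = A * yk + B * u                      # predicted response
            cost += (yk - y_ref) ** 2 + 1e-3 * u ** 2
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]

# receding-horizon simulation: drive the symptom score from 5 toward 0
y, doses, trajectory = 5.0, [], []
for _ in range(30):
    u = hmpc_dose(y)
    doses.append(u)
    y = A * y + B * u
    trajectory.append(y)
```

Every applied dose lies on the discrete grid, which is precisely the constraint that rules out a conventional (continuous-input) MPC formulation.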
Contributors: Deśapāṇḍe, Sunīla (Author) / Rivera, Daniel E. (Thesis advisor) / Si, Jennie (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Sparse learning is a technique in machine learning for feature selection and dimensionality reduction that finds a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating relevant information from irrelevant information has been a topic of focus. In supervised learning such as regression, the data consist of many features, and only a subset of the features may be responsible for the result. The features might also carry special structural requirements, which introduces additional complexity for feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported by the package: features may be grouped together, hierarchies and overlapping groups may exist among them, and the most relevant groups can be selected. Despite yielding sparse solutions, these methods are not guaranteed to be robust. For the selection to be robust, techniques exist that provide theoretical justification for why certain features are selected. Stability selection is one such method: it allows the use of existing sparse learning methods to select the stable set of features for a given training sample. This is done by assigning selection probabilities to the features: the training data are sub-sampled, a specific sparse learning technique is used to learn the relevant features, this process is repeated a large number of times, and the selection probability of a feature is estimated as the fraction of runs in which it is selected. Cross-validation further allows selection of the best parameter value over a range of values, by choosing the value that gives the maximum accuracy score.
With such a combination of algorithms, with good convergence guarantees, stable feature selection properties, and support for various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear-algebra subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable features of the SLEP package, and they can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease that gradually leads to dementia. The SLEP package is used for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
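Stability selection as described here (sub-sample, fit a sparse learner, count selection frequencies) is easy to sketch without SLEP itself. Below, a minimal ISTA lasso solver written in NumPy plays the role of the sparse learning step; all data and regularization values are synthetic and illustrative.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Minimal ISTA solver for the lasso: 0.5/n*||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2          # 1/L for the smooth part
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
    return w

def stability_selection(X, y, lam, n_rounds=50, seed=0):
    """Per-feature selection probability from lasso fits on random halves."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n // 2, replace=False)   # sub-sample half
        w = ista_lasso(X[idx], y[idx], lam)
        counts += np.abs(w) > 1e-6                        # selected support
    return counts / n_rounds

# synthetic regression with a known sparse support {0, 3, 7}
rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[[0, 3, 7]] = 2.0
y = X @ w_true + 0.1 * rng.standard_normal(n)
probs = stability_selection(X, y, lam=0.2)
```

Features in the true support are selected on nearly every sub-sample while noise features are selected rarely, which is the robustness guarantee stability selection adds on top of a single lasso fit.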
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Processed pyro-gel contains castor oil with a solid component of boehmite (AlOOH). The pyro-gel is heated to convert the boehmite into gamma-Al2O3 (and, to a certain extent, alpha-Al2O3) nanoparticles and the castor oil into carbon residue. The effect of heat on the pyro-gel is analyzed in a series of experiments using two burning chambers, with the initial temperature as the main factor. The obtained temperature distribution profiles are studied, and it is observed that under heat the gel behaves very close to the theoretical prediction. The carbon residue with Al2O3 is then processed for twelve hours and analyzed to obtain the pore distribution of the Al2O3 nanoparticles, and the relation between pore volume and pre-heat temperature is examined. The obtained pore distribution shows that the pore volume of the Al2O3 nanoparticles is directly related to the pre-heat temperature. The experimental process involving the cylindrical reactor is simulated using a finite-rate chemistry eddy-dissipation model on both a non-premixed and a porous mesh. The temperature distribution profile of the processed gel is obtained for both meshes and compared with the data from the experimental analysis. The simulated temperature distributions follow a profile very similar to the experimental one, confirming the accuracy of both models. The variation in numerical values between the experimental and simulation analyses is discussed. A physical model is proposed to determine the pore formation based on the temperature distribution obtained from experimental analysis and simulation.
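The temperature-distribution modeling at the heart of these experiments can be illustrated with a bare-bones conduction calculation. This is only a sketch: a 1-D slab with fixed wall temperatures, constant properties, and made-up values, not the finite-rate chemistry eddy-dissipation CFD simulation used in the work.

```python
import numpy as np

def heat_profile(n=50, L=0.05, alpha=1e-6, T_wall=900.0, T0=300.0,
                 t_end=60.0):
    """Explicit finite-difference solution of 1-D transient conduction.

    A slab of gel of thickness L (m), initially at T0 (K), with both
    walls held at T_wall; alpha is the thermal diffusivity (m^2/s).
    All values are illustrative, not measured material properties.
    """
    dx = L / (n - 1)
    dt = 0.4 * dx ** 2 / alpha        # below the explicit stability limit 0.5
    T = np.full(n, T0)
    T[0] = T[-1] = T_wall             # fixed-temperature boundary condition
    for _ in range(int(t_end / dt)):
        # central-difference Laplacian, forward-Euler time step
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

T = heat_profile()
```

The resulting profile is hottest at the walls and coolest at the center, the same qualitative shape one compares between the burning-chamber measurements and the reactor simulation.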
Contributors: Sagi, Varun (Author) / Lee, Taewoo (Thesis advisor) / Phelan, Patrick (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Due to restructuring and open access to the transmission system, modern electric power systems are being operated closer to their operational limits. Additionally, the secure operational limits of modern power systems have become increasingly difficult to evaluate as the scale of the network and the number of transactions between utilities increase. To account for these challenges associated with the rapid expansion of electric power systems, dynamic equivalents have been widely applied for the purpose of reducing the computational effort of simulation-based transient security assessment. Dynamic equivalents are commonly developed using a coherency-based approach in which a retained area and an external area are first demarcated. The coherent generators in the external area are then aggregated and replaced by equivalenced models, followed by network reduction and load aggregation. In this process, an improperly defined retained area can have detrimental impacts on the effectiveness of the equivalents in preserving the dynamic characteristics of the original unreduced system. In this dissertation, a comprehensive approach is proposed to determine an appropriate retained-area boundary by including the critical generators in the external area that are tightly coupled with the initial retained area. Furthermore, a systematic approach is investigated to efficiently predict the variation in generator slow-coherency behavior when the system operating condition is subject to change. Based on this determination, the critical generators in the external area that are tightly coherent with the generators in the initial retained area are retained, resulting in a new retained-area boundary. Finally, a novel hybrid dynamic equivalent, consisting of both a coherency-based equivalent and an artificial neural network (ANN)-based equivalent, is proposed and analyzed.
The ANN-based equivalent complements the coherency-based equivalent at all the retained area boundary buses, and it is designed to compensate for the discrepancy between the full system and the conventional coherency-based equivalent. The approaches developed have been validated on a large portion of the Western Electricity Coordinating Council (WECC) system and on a test case including a significant portion of the eastern interconnection.
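The slow-coherency idea underlying this abstract (generators that swing together under the slow inter-area modes) can be illustrated with a small spectral computation. The six-generator stiffness matrix below is invented; real coherency identification works on the linearized swing equations of the full system.

```python
import numpy as np

def coherent_groups(K):
    """Split generators into two slow-coherent groups.

    K is a symmetric connection-stiffness matrix. The eigenvector of
    the slowest non-trivial mode of the associated graph Laplacian
    (the Fiedler vector) is nearly constant within each tightly
    coupled area, so its sign pattern identifies the coherent areas.
    """
    L = np.diag(K.sum(axis=1)) - K          # graph Laplacian
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # slowest inter-area mode
    return fiedler >= 0.0

# two tightly coupled areas {0,1,2} and {3,4,5} joined by one weak tie
K = np.zeros((6, 6))
for i, j, k in [(0, 1, 10), (1, 2, 10), (0, 2, 10),
                (3, 4, 10), (4, 5, 10), (3, 5, 10),
                (2, 3, 0.5)]:
    K[i, j] = K[j, i] = k
groups = coherent_groups(K)
```

The sign split recovers the two areas; a retained-area boundary drawn across the weak tie keeps each coherent group intact, which is the property the boundary-selection approach is designed to preserve as operating conditions change.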
Contributors: Ma, Feng (Author) / Vittal, Vijay (Thesis advisor) / Tylavsky, Daniel (Committee member) / Heydt, Gerald (Committee member) / Si, Jennie (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Proportional-Integral-Derivative (PID) controllers are a versatile category of controllers commonly used in industry due to their ease of implementation and low cost. One problem that continues to intrigue control designers is finding a good combination of the three parameters - P, I and D - so that system stability and optimum performance are achieved, along with a certain amount of robustness to process variations. In the past, many different methods for tuning PID parameters have been developed; notable techniques include the Ziegler-Nichols, Cohen-Coon, and Astrom methods. All these techniques share a basic limitation: for a particular system there can be only one set of tuned parameters, i.e., there are no degrees of freedom to readjust the parameters for a given system to achieve, for instance, higher bandwidth. Another limitation in most cases is that a controller is designed in continuous time and then converted to discrete time for computer implementation; the drawback is that some robustness, in terms of phase and gain margin, is lost in the process. In this work a method of tuning PID controllers using a loop-shaping approach has been developed where the bandwidth of the system can be chosen within an acceptable range. The loop-shaping is done against a Glover-McFarlane type ℋ∞ controller, which is widely accepted as a robust control design method. The numerical computations are carried out entirely in discrete time, so there is no loss of robustness due to conversion and approximations near Nyquist frequencies. Some extra degrees of freedom, owing to the choice of bandwidth and the capability of choosing loop-shapes, are also involved and are discussed in detail. Finally, comparisons of this method against existing techniques for tuning PID controllers, both in continuous and in discrete time, are shown.
The results tell us that our design performs well for loop-shapes that are achievable through a PID controller.
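As a point of reference for the discrete-time argument above, here is a minimal fully discrete PID loop on a toy first-order plant. The plant and gains are invented for illustration and hand-picked rather than obtained from any loop-shaping procedure; the point is only that the controller is implemented directly in discrete time, with no continuous-to-discrete conversion step.

```python
def simulate_pid(kp, ki, kd, ts=0.1, n=200, setpoint=1.0):
    """Discrete-time PID controlling the plant x(k+1) = 0.9*x(k) + 0.1*u(k).

    ts is the sample time; the integral is a running rectangular sum and
    the derivative a backward difference. Gains here are hand-picked for
    this toy plant, not the product of the loop-shaping method described.
    """
    x, integ, e_prev = 0.0, 0.0, 0.0
    ys = []
    for k in range(n):
        e = setpoint - x
        integ += e * ts                                  # integral term
        deriv = (e - e_prev) / ts if k > 0 else 0.0      # derivative term
        u = kp * e + ki * integ + kd * deriv
        e_prev = e
        x = 0.9 * x + 0.1 * u                            # plant update
        ys.append(x)
    return ys

ys = simulate_pid(kp=1.0, ki=0.5, kd=0.0)
```

The integral action drives the steady-state error to zero; in a loop-shaped design the same three gains would instead be chosen so the resulting loop matches a target ℋ∞ loop shape at a user-selected bandwidth.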
Contributors: Shafique, Md. Ashfaque Bin (Author) / Tsakalis, Konstantinos S. (Thesis advisor) / Rodriguez, Armando A. (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Multi-task learning (MTL) aims to improve the generalization performance of the resulting classifiers by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across tasks, thus facilitating individual task learning. Sharing domain knowledge among tasks is particularly desirable when there are a number of related tasks but only limited training data are available for each one. Modeling the relationship of multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed to deal with different scenarios and settings; they are respectively formulated as mathematical optimization problems of minimizing the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solutions efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches to different real-world applications: (1) automated annotation of Drosophila gene expression pattern images; (2) categorization of Yahoo web pages. The experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
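A compact way to see the shared low-dimensional feature-space assumption at work: fit each task independently, extract a shared basis from the stacked solutions via an SVD, and refit every task inside that basis. This two-step heuristic is only a simple stand-in for the regularized optimization formulations described here, and all data are synthetic.

```python
import numpy as np

def mtl_shared_subspace(Xs, ys, dim=2, reg=1e-3):
    """Fit task weights W = U @ V constrained to a shared dim-D subspace.

    Xs, ys: per-task design matrices and targets; U (d x dim) is the
    shared basis, V (dim x T) the per-task coefficients. A heuristic
    SVD step replaces the joint regularized optimization of real MTL.
    """
    d = Xs[0].shape[1]
    # step 1: independent ridge solution for every task
    W = np.column_stack([
        np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)
        for X, y in zip(Xs, ys)])
    # step 2: shared basis = top singular vectors of the stacked solutions
    U = np.linalg.svd(W, full_matrices=False)[0][:, :dim]
    # step 3: refit every task inside the shared subspace
    V = np.column_stack([
        np.linalg.solve((X @ U).T @ (X @ U) + reg * np.eye(dim),
                        (X @ U).T @ y)
        for X, y in zip(Xs, ys)])
    return U @ V                         # d x T matrix of task weights

# synthetic tasks whose true weights live in a shared 2-D subspace
rng = np.random.default_rng(0)
d, T, n, dim = 30, 8, 40, 2
U_true = rng.standard_normal((d, dim))
W_true = U_true @ rng.standard_normal((dim, T))
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
ys = [X @ W_true[:, t] + 0.1 * rng.standard_normal(n)
      for t, X in enumerate(Xs)]
W_hat = mtl_shared_subspace(Xs, ys, dim=dim)
```

Because every task's weight vector is forced through the shared basis, information pooled across the eight tasks compensates for the limited per-task samples, which is exactly the motivation given in the abstract.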
Contributors: Chen, Jianhui (Author) / Ye, Jieping (Thesis advisor) / Kumar, Sudhir (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study focuses on state estimation of nonlinear discrete-time systems with constraints. Physical processes have inherent constraints on inputs, outputs, states, and disturbances, and these constraints can provide additional information to the estimator when estimating states from the measured output. Recursive filters such as Kalman Filters or Extended Kalman Filters are commonly used in state estimation; however, they do not allow inclusion of constraints in their formulation. On the other hand, the computational complexity of full information estimation (using all measurements) grows with each iteration and becomes intractable. One way of formulating the recursive state estimation problem with constraints is the Moving Horizon Estimation (MHE) approximation, in which estimates of states are calculated from the solution of a constrained optimization problem of fixed size. A detailed formulation of this strategy is studied and the properties of this estimation algorithm are discussed in this work. The drawback of the MHE formulation is that an optimization problem must be solved at each iteration, which is computationally intensive. State estimation with constraints can alternatively be formulated as an Extended Kalman Filter (EKF) with a projection applied to the estimates: the states are estimated from the measurements using the standard EKF algorithm, and the estimated states are projected onto a constrained set. A detailed formulation of this estimation strategy is studied and its properties are discussed. Both state estimation strategies (MHE and EKF with projection) are tested with examples from the literature. The average estimation time and the sum of squared estimation errors are used to compare the performance of these estimators. Results of the case studies are analyzed and trade-offs are discussed.
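The EKF-with-projection idea can be shown in a few lines with a scalar example. The model, noise levels, and the clip-style projection below are illustrative choices, not the constrained examples from the literature that this work actually benchmarks.

```python
import numpy as np

def ekf_projected(measurements, q=1e-3, r=0.04, x0=1.0, p0=1.0):
    """Scalar EKF with a projection step enforcing the constraint x >= 0.

    Model: x(k+1) = 0.95*x(k) + w,  y(k) = x(k)^2 + v (nonlinear), so
    the measurement Jacobian H = 2*x is relinearized every step. For a
    scalar nonnegativity constraint, projecting the updated estimate
    onto the constraint set reduces to a simple clip.
    """
    x, P = x0, p0
    estimates = []
    for y in measurements:
        # predict
        x = 0.95 * x
        P = 0.95 ** 2 * P + q
        # update with the linearized measurement
        H = 2.0 * x
        K = P * H / (H * P * H + r)
        x = x + K * (y - x ** 2)
        P = (1.0 - K * H) * P
        # projection onto the constraint set {x >= 0}
        x = max(x, 0.0)
        estimates.append(x)
    return estimates

# simulate a decaying nonnegative state and noisy squared measurements
rng = np.random.default_rng(0)
truth = [1.0]
for _ in range(99):
    truth.append(max(0.95 * truth[-1] + 0.02 * rng.standard_normal(), 0.0))
measurements = [t ** 2 + 0.2 * rng.standard_normal() for t in truth]
est = ekf_projected(measurements)
```

Every estimate satisfies the constraint by construction, at the cost of one clip per step; an MHE formulation would instead enforce the same constraint inside a fixed-size optimization solved at every iteration, trading estimation time for a richer use of the constraint information.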
Contributors: Joshi, Rakesh (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2013