This collection includes most ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses and dissertations in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 165

Description
Wind measurements are fundamental inputs for the evaluation of the potential energy yield and performance of wind farms. Three-dimensional scanning coherent Doppler lidar (CDL) may provide a new basis for wind farm site selection, design, and control. In this research, CDL measurements obtained from multiple wind energy developments are analyzed and a novel wind farm control approach is modeled. The possibility of using lidar measurements to more fully characterize the wind field is discussed, specifically with respect to terrain effects, spatial variation of winds, power density, and the effect of shear at different layers within the rotor-swept area. Various vector retrieval methods are applied to the lidar data, and results are presented on an elevated terrain-following surface at hub height. The vector retrieval estimates are compared with tower measurements after interpolation to the appropriate level. CDL data are used to estimate the spatial power density at hub height. Since CDL can measure winds at different vertical levels, an approach for estimating wind power density over the wind turbine rotor-swept area is explored. Sample optimized wind farm layouts, using lidar data and global optimization algorithms and accounting for wake interaction effects, are presented. An approach to evaluate spatial wind speed and direction estimates from a standard nested Coupled Ocean and Atmosphere Mesoscale Prediction System (COAMPS) model and CDL is presented, and the magnitude of the spatial difference between observations and simulation for wind energy assessment is investigated. Diurnal effects and ramp events as estimated by CDL and COAMPS are inter-compared. Novel wind farm control based on incoming wind speed and direction inputs from CDLs is developed; both yaw and pitch control using scanning CDL for efficient wind farm control are analyzed. The wind farm control optimizes power production and reduces loads on wind turbines for various lidar wind speed and direction inputs, accounting for wind farm wake losses and wind speed evolution. Several wind farm control configurations were developed for enhanced integrability into the electrical grid. Finally, the value proposition of CDL for a wind farm development, based on uncertainty reduction and return on investment, is analyzed.
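
The abstract above mentions estimating wind power density over the rotor-swept area from lidar winds measured at several vertical levels. As a minimal illustration of that idea (not the dissertation's actual method), the sketch below area-weights the kinetic power flux 0.5·rho·U^3 across the rotor disk; all heights, speeds, and rotor dimensions are hypothetical.

```python
import numpy as np

# Illustrative sketch (hypothetical values): rotor-equivalent wind power
# density from winds measured at several heights within the rotor-swept
# area, e.g., by a scanning Doppler lidar.
rho = 1.225                                  # air density, kg/m^3
hub_height, rotor_radius = 80.0, 40.0        # m

heights = np.array([50.0, 65.0, 80.0, 95.0, 110.0])   # measurement heights, m
speeds = np.array([7.2, 7.8, 8.3, 8.7, 9.0])          # mean wind speed, m/s

# Chord width of the rotor disk at each height (zero outside the disk).
dz = heights - hub_height
chord = 2.0 * np.sqrt(np.maximum(rotor_radius**2 - dz**2, 0.0))

# Area-weighted average of the kinetic power flux 0.5 * rho * U^3.
weights = chord / chord.sum()
power_density = 0.5 * rho * np.sum(weights * speeds**3)   # W/m^2
print(f"Rotor-equivalent wind power density: {power_density:.1f} W/m^2")
```
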
Contributors: Krishnamurthy, Raghavendra (Author) / Calhoun, Ronald J (Thesis advisor) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Fraser, Matthew (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Product reliability has become a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show the effects of the parameters on the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples. Several graphical tools are also developed for evaluating candidate designs. Finally, when more than one model is available, different model-checking designs are discussed.
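
As a hedged illustration of the PH/GLM connection mentioned above, the sketch below evaluates two hypothetical interval-censored ALT plans with a D-optimality criterion under a binary GLM with a complementary log-log link. The stress levels, unit allocations, and model coefficients are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

# Sketch: compare two hypothetical ALT plans by a D-optimality criterion
# under a GLM formulation (binary "failed by inspection time" response with
# a complementary log-log link), which connects to the proportional hazards
# model for interval-censored data. Coefficients and plans are invented.
beta = np.array([-4.0, 3.0])      # assumed intercept and (standardized) stress effect

def d_criterion(stress_levels, n_units):
    X = np.column_stack([np.ones_like(stress_levels), stress_levels])
    eta = X @ beta
    mu = 1.0 - np.exp(-np.exp(eta))                 # failure probability, cloglog link
    dmu_deta = np.exp(eta - np.exp(eta))
    w = n_units * dmu_deta**2 / (mu * (1.0 - mu))   # GLM weights
    info = X.T @ (w[:, None] * X)                   # Fisher information matrix
    return np.linalg.det(info)

plan_a = d_criterion(np.array([0.4, 1.0]), np.array([30, 10]))
plan_b = d_criterion(np.array([0.6, 0.8, 1.0]), np.array([20, 10, 10]))
print("D-criterion, plan A vs. plan B:", plan_a, plan_b)
```
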
Contributors: Yang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With the rapid development of mobile sensing technologies such as GPS, RFID, and sensors in smartphones, capturing position data in the form of trajectories has become easy. Moving object trajectory analysis is a growing area of interest owing to its applications in domains such as marketing, security, and traffic monitoring and management. To better understand movement behaviors from raw mobility data, this doctoral work provides analytic models for analyzing trajectory data. As a first contribution, a model is developed to detect changes in trajectories over time. If the taxis moving in a city are viewed as sensors that provide real-time information about traffic in the city, a change in these trajectories over time can reveal that the road network has changed. To detect changes, trajectories are modeled with a Hidden Markov Model (HMM). A modified training algorithm for parameter estimation in HMMs, called m-BaumWelch, is used to develop likelihood estimates under assumed changes and to detect changes in trajectory data over time. Data from vehicles are used to test the method for change detection. Secondly, sequential pattern mining is used to develop a model to detect changes in the frequent patterns occurring in trajectory data. The aim is to answer two questions: Are the frequent patterns still frequent in the new data? If they are, has the time interval distribution in the pattern changed? Two different approaches are considered for change detection: a frequency-based approach and a distribution-based approach. The methods are illustrated with vehicle trajectory data. Finally, a model is developed for clustering and outlier detection in semantic trajectories. A challenge with clustering semantic trajectories is that both numeric and categorical attributes are present. Another problem to be addressed while clustering is that trajectories can be of different lengths and can have missing values. A tree-based ensemble is used to address these problems. The approach is extended to outlier detection in semantic trajectories.
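
To make the likelihood-based change-detection idea concrete, the sketch below trains a plain Gaussian HMM (via the hmmlearn package, an assumed dependency) on baseline trajectory points and flags a new batch whose average log-likelihood drops below a threshold. It does not implement the modified m-BaumWelch algorithm described above; all data and the threshold are synthetic.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed dependency, for illustration only

# Sketch of likelihood-based change detection (not the m-BaumWelch variant
# described above): train an HMM on baseline trajectory points, then flag a
# new batch whose average log-likelihood under that model drops sharply.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 2))   # synthetic (x, y) positions
new_batch = rng.normal(1.0, 1.0, size=(200, 2))  # possibly changed behavior

model = GaussianHMM(n_components=3, covariance_type="full", n_iter=50)
model.fit(baseline)

ll_baseline = model.score(baseline) / len(baseline)   # average log-likelihood
ll_new = model.score(new_batch) / len(new_batch)

threshold = 0.5                                       # tuning parameter
if ll_baseline - ll_new > threshold:
    print("Change detected: new trajectories fit the baseline HMM poorly.")
else:
    print("No change detected.")
```
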
Contributors: Kondaveeti, Anirudh (Author) / Runger, George C. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Tesla turbo-machinery offers a robust, easily manufactured, extremely versatile prime mover with inherent capabilities that make it perhaps the best, if not the only, solution for certain niche applications. The goal of this thesis is not to optimize the performance of the Tesla turbine but to compare its performance with various working fluids. Theoretical and experimental analyses of a turbine-generator assembly utilizing compressed air, saturated steam, and water as the working fluids were performed and are presented in this work. A brief background and explanation of the technology is provided along with potential applications. A theoretical thermodynamic analysis is outlined, resulting in turbine and rotor efficiencies, power outputs, and Reynolds numbers calculated for the turbine for various combinations of working fluids and inlet nozzles. The results indicate the turbine is capable of achieving a turbine efficiency of 31.17 ± 3.61% and an estimated rotor efficiency of 95 ± 9.32%. These efficiencies are promising considering the numerous losses still present in the current design. Calculation of the Reynolds number provided some capability to determine the flow behavior and how that behavior impacts the performance and efficiency of the Tesla turbine. It was determined that turbulence in the flow is essential to achieving high power outputs and high efficiency. Although the efficiency, after peaking, begins to taper off slightly as the flow becomes increasingly turbulent, the power output maintains a steady linear increase.
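
As a rough illustration of how efficiencies and Reynolds numbers of this kind could be computed, the sketch below divides a measured shaft power by the isentropic power available to compressed air and forms a simple nozzle Reynolds number. Every quantity is hypothetical, and the definitions (particularly the Reynolds number) are one of several possible choices, not necessarily those used in the thesis.

```python
import numpy as np

# Sketch with hypothetical measurements: turbine efficiency as measured shaft
# power divided by the isentropic power available to compressed air, plus one
# possible nozzle Reynolds number. Definitions vary; this is illustrative only.
m_dot = 0.02                      # mass flow rate, kg/s
cp, gamma = 1005.0, 1.4           # air properties
T_in = 300.0                      # inlet temperature, K
p_in, p_out = 500e3, 101e3        # inlet / exhaust pressure, Pa

torque = 0.4                              # measured shaft torque, N*m
omega = 2 * np.pi * 15000 / 60.0          # rotor speed, rad/s (15,000 rpm)

shaft_power = torque * omega
ideal_power = m_dot * cp * T_in * (1 - (p_out / p_in) ** ((gamma - 1) / gamma))
turbine_efficiency = shaft_power / ideal_power

rho, mu = 2.0, 1.8e-5                     # gas density (kg/m^3), viscosity (Pa*s)
v_nozzle, d_nozzle = 150.0, 3e-3          # nozzle exit velocity (m/s), diameter (m)
reynolds = rho * v_nozzle * d_nozzle / mu

print(f"Turbine efficiency: {turbine_efficiency:.1%}, Re = {reynolds:.0f}")
```
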
Contributors: Peshlakai, Aaron (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Wang, Liping (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
With the increase in computing power and the availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexities of the information hidden within it. Thus, knowledge discovery through machine learning techniques is necessary if we want to better understand information from data. In this dissertation, we explore the topics of asymmetric loss and asymmetric data in machine learning and propose new algorithms as solutions to some of the problems in these topics. We also study variable selection for matched data sets and propose a solution when there is non-linearity in the matched data. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy. aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets, where variables are predictive for only a subset of the predictor classes. Asymmetric Random Forest (ARF) is proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. Matched Random Forest (MRF) is proposed to find variables that are able to distinguish case from control without the restrictions that exist in linear models. MRF detects variables that can distinguish case from control even in the presence of interaction and qualitative variables.
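
The aSVM formulation itself is not given in the abstract; as a hedged stand-in, the sketch below shows one common way to trade recall for precision on a target class by re-weighting classes in a standard scikit-learn SVM. It illustrates the asymmetric-loss idea only and is not the proposed aSVM.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sketch only: the aSVM above has its own formulation; here a standard SVM is
# simply biased toward high precision on class 1 by up-weighting class 0
# errors, which makes positive predictions more conservative.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = SVC(kernel="rbf").fit(X_tr, y_tr)
weighted = SVC(kernel="rbf", class_weight={0: 3.0, 1: 1.0}).fit(X_tr, y_tr)

for name, clf in [("plain SVM", plain), ("class-weighted SVM", weighted)]:
    p = precision_score(y_te, clf.predict(X_te), zero_division=0)
    print(f"{name}: precision on class 1 = {p:.3f}")
```
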
Contributors: Koh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler LIDAR (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique results in a significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and the analysis increment calculation. It is observed that the modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and the vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and vector retrieval is carried out. The error of representativeness as a function of scales of motion, and the sensitivity of the vector retrieval to look angle, are quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is used to identify uncharacteristic and extreme flow events at a wind development site. Estimates of turbulence and shear from this technique are compared to tower measurements. A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine wind energy content in the presence of turbulence.
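
For readers unfamiliar with OI, the toy sketch below implements the standard analysis step, x_a = x_b + K(y - H x_b) with K = B H^T (H B H^T + R)^(-1), on made-up wind components; it shows only the baseline technique, not the modified version developed in this thesis.

```python
import numpy as np

# Toy sketch of the standard OI analysis step (not the modified technique
# developed in this thesis):
#   x_a = x_b + K (y - H x_b),   K = B H^T (H B H^T + R)^(-1)
rng = np.random.default_rng(1)

n, m = 6, 3                              # state size and number of observations
x_b = rng.normal(5.0, 1.0, n)            # background wind components (synthetic)
H = np.zeros((m, n))
H[0, 0] = H[1, 2] = H[2, 4] = 1.0        # observation operator: 3 observed components
y = H @ x_b + rng.normal(0.0, 0.5, m)    # synthetic observations

# Background- and observation-error covariances (simple stand-ins).
B = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 2.0)
R = 0.25 * np.eye(m)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y - H @ x_b)                  # analysis (retrieved state)
print("analysis increment:", x_a - x_b)
```
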
Contributors: Choukulkar, Aditya (Author) / Calhoun, Ronald (Thesis advisor) / Mahalov, Alex (Committee member) / Kostelich, Eric (Committee member) / Huang, Huei-Ping (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Accelerated life testing (ALT) is the process of subjecting a product to stress conditions (temperature, voltage, pressure, etc.) in excess of its normal operating levels in order to accelerate failures. Product failure typically results from multiple stresses acting on it simultaneously. Multi-stress-factor ALTs are challenging because the stress factor-level combinations resulting from the increased number of factors increase the number of experiments. Chapter 2 provides an approach for designing ALT plans with multiple stresses utilizing Latin hypercube designs, which reduces the simulation cost without loss of statistical efficiency. A comparison to full-grid and large-sample approximation methods illustrates the approach's computational cost gain and flexibility in determining optimal stress settings with fewer assumptions and more intuitive unit allocations.
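
As a small, hedged example of the Latin hypercube idea referenced in Chapter 2, the sketch below draws candidate multi-stress test conditions with scipy's quasi-Monte Carlo module and scales them to invented temperature, voltage, and humidity ranges; the factors, bounds, and sample size are illustrative, not the dissertation's.

```python
from scipy.stats import qmc

# Sketch: draw candidate multi-stress ALT test conditions with a Latin
# hypercube and scale them to invented engineering ranges for temperature,
# voltage, and relative humidity. Factors, bounds, and size are illustrative.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_sample = sampler.random(n=20)               # 20 points in [0, 1)^3

l_bounds = [40.0, 5.0, 30.0]                     # degC, volts, %RH
u_bounds = [120.0, 15.0, 90.0]
stress_settings = qmc.scale(unit_sample, l_bounds, u_bounds)

print(stress_settings[:5])                       # first few candidate conditions
```
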

Implicit in the design criteria of current ALT designs is the assumption that the form of the acceleration model is correct. This is an unrealistic assumption in many real-world problems. Chapter 3 provides an approach to optimum ALT design for model discrimination. We utilize the Hellinger distance measure between predictive distributions. The optimal ALT plan at three stress levels was determined, and its performance was compared to a good compromise plan, the best traditional plan, and the well-known 4:2:1 compromise test plan. In the case of linear versus quadratic ALT models, the proposed method increased the test plan's ability to distinguish among competing models and provided better guidance as to which model is appropriate for the experiment.
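
The Hellinger distance between two normal predictive distributions has a closed form, which the sketch below evaluates for two hypothetical model predictions of log-life at one candidate stress level; the numbers are invented, and the sketch shows only the distance measure, not the full optimal-design procedure.

```python
import numpy as np

# Sketch: closed-form Hellinger distance between two normal predictive
# distributions, e.g., the log-life predicted by a linear and a quadratic
# acceleration model at one stress condition. Values are hypothetical.
def hellinger_normal(mu1, sig1, mu2, sig2):
    s1, s2 = sig1**2, sig2**2
    h2 = 1.0 - np.sqrt(2.0 * sig1 * sig2 / (s1 + s2)) * \
        np.exp(-(mu1 - mu2)**2 / (4.0 * (s1 + s2)))
    return np.sqrt(h2)

print(hellinger_normal(mu1=6.2, sig1=0.4, mu2=5.8, sig2=0.5))
```
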

Chapter 4 extends the approach of Chapter 3 to sequential model discrimination for ALT. An initial experiment is conducted to provide the maximum possible information with respect to model discrimination. The follow-on experiment is then planned by leveraging the most current information to allow for Bayesian model comparison through posterior model probability ratios. Results showed that the plan's performance is adversely impacted by the amount of censoring in the data. In the case of a linear versus quadratic model form at three levels of constant stress, sequential testing can improve the model recovery rate by approximately 8% when data are complete, but no apparent advantage in adopting sequential testing was found for right-censored data once censoring exceeds a certain amount.
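
As a minimal illustration of Bayesian model comparison through posterior model probability ratios, the sketch below converts hypothetical marginal log-likelihoods of first-stage data under two rival models into posterior model probabilities; the values are invented and the sketch is not the dissertation's planning procedure.

```python
import numpy as np

# Sketch: posterior model probabilities from hypothetical marginal
# log-likelihoods of first-stage data under two rival ALT models; the
# posterior probability ratio would guide the follow-on experiment.
log_ml = {"linear": -152.3, "quadratic": -150.1}    # invented values
prior = {"linear": 0.5, "quadratic": 0.5}

log_post = {m: np.log(prior[m]) + log_ml[m] for m in log_ml}
norm = np.logaddexp(*log_post.values())
posterior = {m: np.exp(lp - norm) for m, lp in log_post.items()}

print(posterior)
```
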
Contributors: Nasir, Ehab (Author) / Pan, Rong (Thesis advisor) / Runger, George C. (Committee member) / Gel, Esma (Committee member) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In the three phases of the engineering design process (conceptual design, embodiment design, and detailed design), traditional reliability information is scarce. However, there are different sources of information that provide reliability inputs while designing a new product. This research considered the following sources for further analysis: reliability information from similar existing products, denominated as parents; elicited expert opinions; initial testing; and the customer voice used to create design requirements. These sources were integrated with three novel approaches to produce reliability insights in the engineering design process, all under the Design for Reliability (DFR) philosophy. First, an enhanced parenting process to assess reliability was presented. Using reliability information from parents, it was possible to create a failure structure (the parent matrix) to be compared against the new product. Then, expert opinions were elicited to provide the effects of the new design changes (the parent factor). Combining those two elements resulted in a reliability assessment early in the design process. Extending this approach into the conceptual design phase, a methodology was created to obtain a graphical reliability insight into a new product's concept. The approach can be summarized in three sequential steps: functional analysis, cognitive maps, and Bayesian networks. These tools integrated the available information, created a graphical representation of the concept, and provided quantitative reliability assessments. Lastly, to optimize resources when product testing is viable (e.g., in detailed design), a type of accelerated life testing was recommended: accelerated degradation tests. The potential for robust design engineering with this type of test was exploited. Robust design was then achieved by setting the design factors at levels such that the impact of stress-factor variation on the degradation rate is minimized. Finally, to validate the proposed approaches and methods, different case studies were presented.
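
As a hedged numeric illustration of the parent matrix / parent factor combination described above, the sketch below scales parent failure rates by expert-elicited change factors and rolls them up into a mission reliability estimate; the failure modes, rates, and factors are invented, not taken from the dissertation's case studies.

```python
import numpy as np

# Illustrative sketch of the parent-based assessment idea: failure rates
# observed on a similar existing product (the parent) are adjusted by
# expert-elicited factors reflecting the new design changes. All numbers
# are invented, not taken from the dissertation's case studies.
failure_modes = ["seal leak", "motor wear", "electronics fault"]
parent_rates = np.array([2.0e-6, 5.0e-6, 1.0e-6])   # failures per hour
change_factors = np.array([0.7, 1.2, 1.0])          # expert-judged effects

new_rates = parent_rates * change_factors
mission_time = 10_000.0                                # hours
reliability = np.exp(-new_rates.sum() * mission_time)  # series system, exponential

print(dict(zip(failure_modes, new_rates)))
print(f"Estimated mission reliability: {reliability:.3f}")
```
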
Contributors: Mejia Sanchez, Luis (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Villalobos, Jesus R (Committee member) / See, Tung-King (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Multi-touch tablets and smartphones are now widely used in both workplace and consumer settings. Interacting with these devices requires hand and arm movements that are potentially complex and poorly understood. Experimental studies have revealed differences in performance that could potentially be associated with injury risk. However, the underlying causes of performance differences are often difficult to identify. For example, many patterns of muscle activity can potentially result in similar behavioral output. Muscle activity is one factor contributing to forces in tissues that could contribute to injury. However, experimental measurements of muscle activity and force in humans are extremely challenging. Models of the musculoskeletal system can be used to make specific estimates of neuromuscular coordination and musculoskeletal forces. However, existing models cannot easily be used to describe complex, multi-finger gestures such as those used for multi-touch human-computer interaction (HCI) tasks. We therefore seek to develop a dynamic musculoskeletal simulation capable of estimating internal musculoskeletal loading during multi-touch tasks involving multiple digits of the hand, and to use the simulation to better understand complex multi-touch and gestural movements and potentially guide the design of technologies that reduce injury risk. To accomplish this, we focused on three specific tasks. First, we aimed to determine the optimal index finger muscle attachment points within the context of the established, validated OpenSim arm model, using measured moment arm data taken from the literature. Second, we aimed to derive moment arm values from experimentally measured muscle attachments and to use these values to determine muscle-tendon paths for both extrinsic and intrinsic muscles of the middle, ring, and little fingers. Finally, we aimed to explore differences in hand muscle activation patterns during zooming and rotating tasks on a tablet computer in twelve subjects. Toward this end, our musculoskeletal hand model will help better address neuromuscular coordination, safe gesture performance, and internal loading for multi-touch applications.
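
One relationship underlying the moment-arm work described above is the tendon-excursion method, in which a muscle's moment arm equals the negative derivative of musculotendon length with respect to joint angle. The sketch below applies that relationship to a made-up length function standing in for an OpenSim muscle path; it is illustrative only, not the dissertation's fitted model.

```python
import numpy as np

# Sketch of the tendon-excursion relationship often used when fitting muscle
# paths: the moment arm about a joint equals the negative derivative of
# musculotendon length with respect to joint angle, r(theta) = -dL/dtheta.
# The length function below is a made-up stand-in for an OpenSim muscle path.
def tendon_length(theta):
    # hypothetical musculotendon length (m) vs. finger joint flexion (rad)
    return 0.30 - 0.011 * theta + 0.002 * np.sin(theta)

theta = np.linspace(0.0, np.pi / 2, 50)
L = tendon_length(theta)
moment_arm = -np.gradient(L, theta)      # numerical -dL/dtheta, in meters

print("moment arm range (mm): "
      f"{1000 * moment_arm.min():.1f} to {1000 * moment_arm.max():.1f}")
```
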
Contributors: Yi, Chong-hwan (Author) / Jindrich, Devin L. (Thesis advisor) / Artemiadis, Panagiotis K. (Thesis advisor) / Phelan, Patrick (Committee member) / Santos, Veronica J. (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A municipal electric utility in Mesa, Arizona with a peak load of approximately 85 megawatts (MW) was analyzed to determine how the implementation of renewable resources (both wind and solar) would affect the overall cost of energy purchased by the utility. The utility currently purchases all of its energy through long-term energy supply contracts and does not own any generation assets, so optimization was achieved by minimizing the overall cost of energy while adhering to specific constraints on how much energy the utility could purchase from the short-term energy market. Scenarios were analyzed for five percent and ten percent penetrations of renewable energy in the years 2015 and 2025. Demand Side Management measures (through thermal storage in the City's district cooling system, electric vehicles, and customers' air conditioning improvements) were evaluated to determine whether they would mitigate some of the cost increases that resulted from the addition of renewable resources.

In the 2015 simulation, wind energy was less expensive than solar to integrate into the supply mix. When five percent of the utility's energy requirements in 2015 were met by wind, the overall cost of energy increased by 3.59%; when that five percent was met by solar, the overall cost of energy was estimated to increase by 3.62%. A mix of wind and solar in 2015 caused a smaller increase in the overall cost of energy of 3.57%. At the ten percent implementation level in 2015, solar, wind, and a mix of solar and wind caused increases of 7.28%, 7.51%, and 7.27%, respectively, in the overall cost of energy.

In 2025, at the five percent implementation level, wind and solar caused increases in the overall cost of energy of 3.07% and 2.22%, respectively. At the ten percent implementation level in 2025, wind and solar caused increases in the overall cost of energy of 6.23% and 4.67%, respectively.

Demand Side Management reduced the overall cost of energy by approximately 0.6%, mitigating some of the cost increase from adding renewable resources.
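
For readers who want a feel for the arithmetic behind percentages like those above, the sketch below shows a simple blended-cost calculation for a given renewable share using entirely hypothetical prices and load; it omits the study's hourly dispatch and its limits on short-term market purchases, so the result is not expected to match the reported values.

```python
# Sketch of the blended-cost arithmetic behind percentages like those above,
# using entirely hypothetical prices and load; it omits the study's hourly
# dispatch and its limits on short-term market purchases, so the result is
# not expected to match the reported values.
annual_load_mwh = 400_000.0
base_price = 45.0          # $/MWh under existing supply contracts (invented)
renewable_price = 80.0     # $/MWh for a wind or solar contract (invented)
share = 0.05               # five percent renewable penetration

base_cost = annual_load_mwh * base_price
blended_cost = annual_load_mwh * ((1 - share) * base_price + share * renewable_price)
increase = (blended_cost - base_cost) / base_cost

print(f"Increase in overall cost of energy: {increase:.2%}")
```
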
Contributors: Cadorin, Anthony (Author) / Phelan, Patrick (Thesis advisor) / Calhoun, Ronald (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2014