This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 59

Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing various key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the resulting pose features extracted using the proposed methods exhibit excellent invariance properties to changes in view angles and body shapes. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these proposed algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns to improve gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and the decision-level camera fusion scheme using the product rule has been found to be optimal for gesture recognition using multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed. Experimental results obtained from a real-life gesture recognition dataset have shown that the optimal camera combinations identified according to the proposed complementary measure always lead to the best gesture recognition results.
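As an illustration of the decision-level fusion mentioned in this abstract, the sketch below combines per-camera class posteriors with the product rule; the camera count, class count, and posterior values are hypothetical, and this is not the dissertation's own code.

```python
import numpy as np

def product_rule_fusion(camera_posteriors):
    """Fuse per-camera class posteriors with the product rule.

    camera_posteriors: (n_cameras, n_classes) array; each row is one
    camera's posterior probability over the gesture classes.
    Returns the index of the class with the largest fused score.
    """
    # Summing logs is equivalent to the product rule but avoids underflow.
    log_post = np.log(np.clip(camera_posteriors, 1e-12, 1.0))
    return int(np.argmax(log_post.sum(axis=0)))

# Hypothetical example: three uncalibrated cameras, four gesture classes.
posteriors = np.array([[0.70, 0.10, 0.10, 0.10],
                       [0.40, 0.30, 0.20, 0.10],
                       [0.55, 0.25, 0.10, 0.10]])
print(product_rule_fusion(posteriors))  # -> 0 (the class all cameras favor)
```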
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria onto manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, via screening to improve the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
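As a rough illustration of the hyperplane classifier and the input-parameter weighting factors described in this abstract, the sketch below trains a linear support vector machine on synthetic upstream test data; scikit-learn, the feature dimensions, and the failure mechanism are all assumptions, and the dissertation's actual methodology (including L-moment and WECO-rule features) is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical upstream test data: 200 units x 5 measured parameters,
# with downstream field failure (label 1) driven mostly by parameters 0 and 3.
X = rng.normal(size=(200, 5))
y = (0.9 * X[:, 0] - 0.8 * X[:, 3] + 0.2 * rng.normal(size=200) > 0).astype(int)

clf = SVC(kernel="linear")   # linear hyperplane classifier
clf.fit(X, y)

# Hyperplane weights: a larger |w_i| suggests parameter i has more influence
# on separating predicted failures from passes.
print("weights:", clf.coef_.ravel())
print("training accuracy:", clf.score(X, y))
```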
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In the last few years, significant advances in nanofabrication have allowed structures and materials to be tailored at the molecular level, with precise control of dimensions and organization at molecular length scales, a development leading to major advances in nanoscale systems. Although the direction of progress seems to follow the path of microelectronics, the fundamental physics of a nanoscale system changes more rapidly than that of microelectronics as the size scale is decreased. The changes in length, area, and volume ratios due to the reduction in size alter the relative influence of various physical effects determining the overall operation of a system in unexpected ways. Nanopores are one such category of nanofluidic structures demonstrating unique ionic and molecular transport characteristics. Nanopores derive their unique transport characteristics from the electrostatic interaction of the nanopore surface charge with aqueous ionic solutions. In this doctoral research, cylindrical nanopores, in single and array configurations, were fabricated in silicon-on-insulator (SOI) using a combination of electron beam lithography (EBL) and reactive ion etching (RIE). The fabrication method presented is compatible with standard semiconductor foundries and allows fabrication of nanopores with desired geometries and precise dimensional control, providing near-ideal and isolated physical modeling systems to study ion transport at the nanometer level. Ion transport through nanopores was characterized by measuring the ionic conductances of arrays of nanopores of various diameters for a wide range of concentrations of aqueous hydrochloric acid (HCl) ionic solutions. The measured ionic conductances demonstrated two distinct regimes, governed by surface charge interactions at low ionic concentrations and by nanopore geometry at high ionic concentrations. Field-effect modulation of ion transport through nanopore arrays, in a fashion similar to semiconductor transistors, was also studied. Using ionic conductance measurements, it was shown that the concentration of ions in the nanopore volume was significantly changed when a gate voltage was applied to the nanopore arrays, hence controlling their transport. Based on the ion transport results, single nanopores were used to demonstrate their application as nanoscale particle counters, using polystyrene nanobeads monodispersed in aqueous HCl solutions of different molarities. Effects of field-effect modulation on particle transition events were also demonstrated.
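For context on the two conductance regimes reported in this abstract, a widely used first-order model for a cylindrical nanopore of diameter d and length L is sketched below; it is drawn from the general nanofluidics literature and is not necessarily the model adopted in this dissertation.

```latex
G \;\approx\;
\underbrace{\frac{\pi d^{2}}{4L}\,\sigma_{b}}_{\text{bulk term}}
\;+\;
\underbrace{\frac{\pi d}{L}\,\mu\,\sigma_{s}}_{\text{surface term}}
```

Here \sigma_b is the bulk conductivity of the solution, \mu the counterion mobility, and \sigma_s the nanopore surface charge density. At high ionic concentration the bulk term (scaling with d^2) dominates, consistent with the geometry-controlled regime; at low concentration the surface term (scaling with d) dominates, consistent with the surface-charge-controlled regime.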
Contributors: Joshi, Punarvasu (Author) / Thornton, Trevor J (Thesis advisor) / Goryll, Michael (Thesis advisor) / Spanias, Andreas (Committee member) / Saraniti, Marco (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Due to restructuring and open access to the transmission system, modern electric power systems are being operated closer to their operational limits. Additionally, the secure operational limits of modern power systems have become increasingly difficult to evaluate as the scale of the network and the number of transactions between utilities increase. To account for these challenges associated with the rapid expansion of electric power systems, dynamic equivalents have been widely applied for the purpose of reducing the computational effort of simulation-based transient security assessment. Dynamic equivalents are commonly developed using a coherency-based approach in which a retained area and an external area are first demarcated. Then the coherent generators in the external area are aggregated and replaced by equivalenced models, followed by network reduction and load aggregation. In this process, an improperly defined retained area can result in detrimental impacts on the effectiveness of the equivalents in preserving the dynamic characteristics of the original unreduced system. In this dissertation, a comprehensive approach has been proposed to determine an appropriate retained area boundary by including the critical generators in the external area that are tightly coupled with the initial retained area. Furthermore, a systematic approach has also been investigated to efficiently predict the variation in generator slow coherency behavior when the system operating condition is subject to change. Based on this determination, the critical generators in the external area that are tightly coherent with the generators in the initial retained area are retained, resulting in a new retained area boundary. Finally, a novel hybrid dynamic equivalent, consisting of both a coherency-based equivalent and an artificial neural network (ANN)-based equivalent, has been proposed and analyzed. The ANN-based equivalent complements the coherency-based equivalent at all the retained area boundary buses, and it is designed to compensate for the discrepancy between the full system and the conventional coherency-based equivalent. The approaches developed have been validated on a large portion of the Western Electricity Coordinating Council (WECC) system and on a test case including a significant portion of the eastern interconnection.
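The slow-coherency idea behind the retained-area determination can be sketched as follows: generators whose rows in the slow eigensubspace of the linearized electromechanical model point in similar directions tend to swing together and can be grouped. The snippet below is a minimal, hypothetical illustration with a synthetic state matrix, not the dissertation's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def slow_coherency_groups(A, n_groups):
    """Group generators by the slow modes of a linearized swing model.

    A: (n x n) electromechanical state matrix (synthetic here).
    Generators whose rows of the slow eigensubspace are nearly parallel
    are assigned to the same coherent group.
    """
    eigvals, eigvecs = np.linalg.eig(A)
    slow = np.argsort(np.abs(eigvals))[:n_groups]   # slowest modes
    V = np.real(eigvecs[:, slow])
    # Normalize rows so clustering compares directions, not magnitudes.
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(V)

# Tiny synthetic example with 6 "generators".
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
A = -(A @ A.T)                  # stable, symmetric surrogate system matrix
print(slow_coherency_groups(A, n_groups=2))
```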
Contributors: Ma, Feng (Author) / Vittal, Vijay (Thesis advisor) / Tylavsky, Daniel (Committee member) / Heydt, Gerald (Committee member) / Si, Jennie (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This dissertation presents a new hybrid fault current limiter (FCL) topology that is primarily intended to protect single-phase power equipment. It can, however, be extended to protect three-phase systems, but would need three devices to protect each individual phase. In comparison with existing fault current limiter technology, the salient features of the proposed topology are: a) a variable impedance that provides a 50% reduction in the prospective fault current; b) a near-instantaneous response time, within the first half cycle (1-4 ms); c) the use of semiconductor switches as the commutating switch, which produces reduced leakage current, reduced losses, improved reliability, and a faster switching time (ns-µs); d) zero losses in steady-state operation; e) the use of a Neodymium (NdFeB) permanent magnet as the limiting impedance, which reduces size, cost, and weight and eliminates DC biasing and cooling costs; f) the use of Pulse Width Modulation (PWM) to control the magnitude of the fault current to a user's desired level; and g) an experimental test system developed and tested to prove the concepts of the proposed FCL. This dissertation presents the proposed topology and its working principle, backed up with numerical verifications, simulation results, and hardware implementation results. Conclusions and future work are also presented.
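As a simplified, hypothetical illustration of how PWM duty-ratio control can set the fault-current magnitude, the sketch below assumes the average inserted limiting impedance scales linearly with the duty ratio; the numbers and the linear relationship are assumptions for illustration, not the proposed topology's circuit model.

```python
# Simplified illustration (assumption): the average impedance inserted by the
# limiter scales with the PWM duty ratio D, so the steady fault current is
# roughly V / (Z_source + D * Z_limit). Teaching sketch only.
V = 240.0        # source voltage (V, rms), hypothetical
Z_source = 0.5   # source/line impedance magnitude (ohm), hypothetical
Z_limit = 1.5    # fully inserted limiting impedance (ohm), hypothetical

for D in (0.0, 0.25, 0.5, 0.75, 1.0):
    I_fault = V / (Z_source + D * Z_limit)
    print(f"duty ratio {D:.2f} -> approx. fault current {I_fault:6.1f} A")
```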
Contributors: Prigmore, Jay (Author) / Karady, George G. (Thesis advisor) / Ayyanar, Raja (Committee member) / Holbert, Keith E. (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points and the specifics of how they are used to define the interpolated values influence how effectively the interpolation algorithm is able to estimate the underlying, continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitative assessment of the new, single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications, including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that more accurately reflect the underlying signals than less computationally demanding approaches, and do so with lower processing requirements and fewer restrictions than methods with comparable accuracy.
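To illustrate the idea of treating interpolation as a registration problem, the sketch below synthesizes a row halfway between two adjacent image rows by first estimating the shift that aligns them and then averaging along that displacement, rather than averaging the rows directly. It is a one-dimensional caricature of the concept, with hypothetical data, not the dissertation's algorithm.

```python
import numpy as np

def midway_row(r0, r1, max_shift=4):
    """Synthesize a row halfway between two adjacent image rows.

    Instead of averaging r0 and r1 directly (plain linear interpolation),
    estimate the integer shift that best aligns them and average along
    that displacement - a 1-D sketch of registration-based interpolation.
    """
    shifts = range(-max_shift, max_shift + 1)
    # Pick the shift that minimizes the misalignment error.
    best = min(shifts, key=lambda s: np.sum((np.roll(r1, s) - r0) ** 2))
    half = best // 2
    return 0.5 * (np.roll(r0, -half) + np.roll(r1, best - half))

# An edge that moves by 2 pixels between rows ends up halfway in between
# (the wrap-around at the array boundary is an artifact of np.roll).
r0 = np.array([0, 0, 0, 1, 1, 1, 1, 1], float)
r1 = np.roll(r0, 2)
print(midway_row(r0, r1))
```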
Contributors: Zwart, Christine M. (Author) / Frakes, David H (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Asymptotic comparisons of ergodic channel capacity at high and low signal-to-noise ratios (SNRs) are provided for several adaptive transmission schemes over fading channels with general distributions, including optimal power and rate adaptation, rate adaptation only, channel inversion and its variants. Analysis of the high-SNR pre-log constants of the ergodic capacity reveals the existence of constant capacity difference gaps among the schemes with a pre-log constant of 1. Closed-form expressions for these high-SNR capacity difference gaps are derived, which are proportional to the SNR loss between these schemes in dB scale. The largest one of these gaps is found to be between the optimal power and rate adaptation scheme and the channel inversion scheme. Based on these expressions it is shown that the presence of space diversity or multi-user diversity makes channel inversion arbitrarily close to achieving optimal capacity at high SNR with sufficiently large number of antennas or users. A low-SNR analysis also reveals that the presence of fading provably always improves capacity at sufficiently low SNR, compared to the additive white Gaussian noise (AWGN) case. Numerical results are shown to corroborate our analytical results. This dissertation derives high-SNR asymptotic average error rates over fading channels by relating them to the outage probability, under mild assumptions. The analysis is based on the Tauberian theorem for Laplace-Stieltjes transforms which is grounded on the notion of regular variation, and applies to a wider range of channel distributions than existing approaches. The theory of regular variation is argued to be the proper mathematical framework for finding sufficient and necessary conditions for outage events to dominate high-SNR error rate performance. It is proved that the diversity order being d and the cumulative distribution function (CDF) of the channel power gain having variation exponent d at 0 imply each other, provided that the instantaneous error rate is upper-bounded by an exponential function of the instantaneous SNR. High-SNR asymptotic average error rates are derived for specific instantaneous error rates. Compared to existing approaches in the literature, the asymptotic expressions are related to the channel distribution in a much simpler manner herein, and related with outage more intuitively. The high-SNR asymptotic error rate is also characterized under diversity combining schemes with the channel power gain of each branch having a regularly varying CDF. Numerical results are shown to corroborate our theoretical analysis. This dissertation studies several problems concerning channel inclusion, which is a partial ordering between discrete memoryless channels (DMCs) proposed by Shannon. Specifically, majorization-based conditions are derived for channel inclusion between certain DMCs. Furthermore, under general conditions, channel equivalence defined through Shannon ordering is shown to be the same as permutation of input and output symbols. The determination of channel inclusion is considered as a convex optimization problem, and the sparsity of the weights related to the representation of the worse DMC in terms of the better one is revealed when channel inclusion holds between two DMCs. For the exploitation of this sparsity, an effective iterative algorithm is established based on modifying the orthogonal matching pursuit algorithm. 
The extension of channel inclusion to continuous channels and its application in ordering phase noises are briefly addressed.
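For reference, the standard ergodic-capacity expressions for two of the adaptation schemes named above are sketched below, together with the resulting constant high-SNR gap; the setting (instantaneous SNR \gamma = \bar{\gamma} h with E[h] = 1) and the particular pair of schemes are chosen for illustration, and the dissertation's exact expressions may differ.

```latex
% Ergodic capacities with rate adaptation only (ra) and channel inversion (ci),
% instantaneous SNR \gamma = \bar{\gamma} h, \mathbb{E}[h] = 1 (illustrative setting):
C_{\mathrm{ra}} = \mathbb{E}\!\left[\log_{2}(1+\gamma)\right],
\qquad
C_{\mathrm{ci}} = \log_{2}\!\left(1 + \frac{1}{\mathbb{E}[1/\gamma]}\right).
% Both have pre-log constant 1, so their difference tends to a constant gap:
C_{\mathrm{ra}} - C_{\mathrm{ci}}
\;\xrightarrow{\;\bar{\gamma}\to\infty\;}\;
\mathbb{E}\!\left[\log_{2} h\right] + \log_{2}\mathbb{E}\!\left[1/h\right]
\;\;\text{bits/s/Hz}.
```

By Jensen's inequality this gap is non-negative, and it is proportional to the SNR penalty of channel inversion expressed in dB.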
Contributors: Zhang, Yuan (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Reisslein, Martin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Interictal spikes, together with seizures, have been recognized as the two hallmarks of epilepsy, a brain disorder from which 1% of the world's population suffers. Even though the presence of spikes in the brain's electromagnetic activity has diagnostic value, their dynamics are still elusive. It was an objective of this dissertation to formulate a mathematical framework within which the dynamics of interictal spikes could be thoroughly investigated. A new epileptic spike detection algorithm was developed by employing data-adaptive morphological filters. The performance of the spike detection algorithm compared favorably with others in the literature. A novel spike spatial synchronization measure was developed and tested on coupled spiking neuron models. Application of this measure to individual epileptic spikes in EEG from patients with temporal lobe epilepsy revealed long-term trends of increased synchronization between pairs of brain sites before seizures and desynchronization after seizures, in the same patient as well as across patients, thus supporting the hypothesis that seizures may occur to break (reset) the abnormal spike synchronization in the brain network. Furthermore, based on these results, a separate spatial analysis of spike rates was conducted that shed light on conflicting results in the literature about the variability of spike rate before and after seizures. The ability to automatically classify seizures into clinical and subclinical was a result of the above findings. A novel method for epileptogenic focus localization from interictal periods based on spike occurrences was also devised, combining concepts from graph theory, such as eigenvector centrality, with the developed spike synchronization measure; it compared very favorably against the gold standard used in clinical practice for focus localization from seizure onset. Finally, in another application of the resetting of brain dynamics at seizures, it was shown that it is possible to differentiate with high accuracy between patients with epileptic seizures (ES) and patients with psychogenic nonepileptic seizures (PNES). The above studies of spike dynamics have elucidated many unknown aspects of ictogenesis and are expected to contribute significantly to further understanding of the basic mechanisms that lead to seizures, and to the diagnosis and treatment of epilepsy.
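As a minimal sketch of how eigenvector centrality can be computed from a pairwise spike-synchronization matrix, the snippet below takes the principal eigenvector of a symmetric synchronization matrix as a per-site score; the matrix values are hypothetical, and the dissertation's actual localization rule is not reproduced here.

```python
import numpy as np

def eigenvector_centrality(S):
    """Eigenvector centrality of a weighted synchronization graph.

    S: symmetric (n x n) matrix of pairwise spike-synchronization values
    between recording sites (non-negative, zero diagonal). The centrality
    is the principal eigenvector of S; in this sketch, sites with high
    centrality are candidates for the epileptogenic focus.
    """
    vals, vecs = np.linalg.eigh(S)
    v = np.abs(vecs[:, np.argmax(vals)])   # principal eigenvector (sign-free)
    return v / v.sum()

# Hypothetical 4-site example: site 0 is strongly synchronized with all others.
S = np.array([[0.0, 0.9, 0.8, 0.7],
              [0.9, 0.0, 0.2, 0.1],
              [0.8, 0.2, 0.0, 0.1],
              [0.7, 0.1, 0.1, 0.0]])
print(eigenvector_centrality(S))           # site 0 gets the largest score
```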
Contributors: Krishnan, Balu (Author) / Iasemidis, Leonidas (Thesis advisor) / Tsakalis, Kostantinos (Committee member) / Spanias, Andreas (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In the deregulated power system, locational marginal prices are used in transmission engineering predominantly as near real-time pricing signals. This work extends this concept to distribution engineering so that a distribution-class locational marginal price (DLMP) might be used for real-time pricing and control of advanced control systems in distribution circuits. A formulation for the DLMP signal is presented that is based on power flow sensitivities in a distribution system. A Jacobian-based sensitivity analysis has been developed for application in the distribution pricing method. Increasing deployment of distributed energy sources is being seen at the distribution level, and this trend is expected to continue. To facilitate an optimal use of the distributed infrastructure, the control of the energy demand on a feeder node in the distribution system has been formulated as a multiobjective optimization problem, and a solution algorithm has been developed. In multiobjective problems the Pareto optimality criterion is generally applied, and commonly used solution algorithms are decision-based and heuristic. In contrast, a mathematically robust technique called normal boundary intersection has been modeled for use in this work, and the control variable is solved via separable programming. The Roy Billinton Test System (RBTS) has predominantly been used to demonstrate the application of the formulation in distribution system control. A parallel processing environment has been used to replicate the distributed nature of controls at many points in the distribution system. Interactions between the real-time prices in a distribution feeder and the nodal prices at the aggregated load bus have been investigated. The application of the formulations in an islanded operating condition has also been demonstrated. The DLMP formulation has been validated using the test bed systems, and a practical framework for its application in distribution engineering has been presented. The multiobjective optimization yields excellent results and is found to be robust for finer time resolutions. The work shown in this report is applicable to, and has been researched under the aegis of, the Future Renewable Electric Energy Delivery and Management (FREEDM) center, which is a generation III National Science Foundation engineering research center headquartered at North Carolina State University.
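A minimal sketch of the Jacobian-based sensitivity idea mentioned above: at a solved power-flow operating point, the inverse of the Newton-Raphson Jacobian gives the first-order sensitivity of the state variables to nodal injection changes. The 2x2 Jacobian below is hypothetical, and this illustrates only the sensitivity computation, not the DLMP formulation itself.

```python
import numpy as np

def state_sensitivities(J):
    """Sensitivities of the power-flow state to nodal injections.

    J is the Newton-Raphson Jacobian d[P; Q]/d[theta; V] evaluated at the
    solved operating point. Column i of J^{-1} gives the first-order change
    in all angles/voltage magnitudes per unit change in the i-th injection -
    the kind of sensitivity a distribution-level price signal can build on.
    """
    return np.linalg.inv(J)

# Tiny hypothetical 2x2 Jacobian (one angle state, one voltage state).
J = np.array([[10.0, -2.0],
              [-1.5,  8.0]])
print("d(state)/d(injection):\n", state_sensitivities(J))
```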
Contributors: Ranganathan Sathyanarayana, Bharadwaj (Author) / Heydt, Gerald T (Thesis advisor) / Vittal, Vijay (Committee member) / Ayyanar, Raja (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Photovoltaic (PV) power generation has the potential to cause a significant impact on power system reliability, since its total installed capacity is projected to increase at a significant rate. PV generation can be described as an intermittent and variable resource because its production is influenced by ever-changing environmental conditions. The study in this dissertation focuses on the influence of PV generation on transmission system reliability. This is a concern because PV generation output is integrated into present power systems at various voltage levels and may significantly affect the power flow patterns. This dissertation applies a probabilistic power flow (PPF) algorithm to evaluate the influence of PV generation uncertainty on transmission system performance. A cumulant-based PPF algorithm suitable for large systems is used. Correlation among adjacent PV resources is considered. Three types of approximation expansions based on cumulants, namely the Gram-Charlier expansion, the Edgeworth expansion, and the Cornish-Fisher expansion, are compared, and their properties, advantages, and deficiencies are discussed. Additionally, a novel probabilistic model of PV generation is developed to obtain the probability density function (PDF) of the PV generation production based on environmental conditions. In addition, this dissertation proposes a novel PPF algorithm that considers the conventional generation dispatching operation used to balance PV generation uncertainties. It is prudent to include generation dispatch in the PPF algorithm, since the dispatching strategy compensates for PV generation injections and influences the uncertainty results. Furthermore, this dissertation also proposes a probabilistic optimal power dispatching strategy which considers uncertainty problems in the economic dispatch and optimizes the expected value of the total cost with the overload probability as a constraint. The proposed PPF algorithm with the three expansions is compared with Monte Carlo simulations (MCS), with results for a 2497-bus representation of the Arizona area of the Western Electricity Coordinating Council (WECC) system. The PDFs of the bus voltages, line flows, and slack bus production are computed and are used to identify the confidence interval, the over-limit probability, and the expected over-limit time of the objective variables. The proposed algorithm is of significant relevance to operating and planning studies of transmission systems with PV generation installed.
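The Gram-Charlier expansion named above can be illustrated with a short sketch that reconstructs an approximate PDF from the first four cumulants of an output quantity; the cumulant values are hypothetical, and this shows only the expansion step, not the full PPF algorithm.

```python
import numpy as np

def gram_charlier_pdf(x, cumulants):
    """Approximate a PDF from its first four cumulants (Gram-Charlier A series).

    cumulants = (k1, k2, k3, k4): mean, variance, and the 3rd/4th cumulants
    of the output quantity (e.g., a line flow) produced by the cumulant method.
    """
    k1, k2, k3, k4 = cumulants
    sigma = np.sqrt(k2)
    z = (x - k1) / sigma
    phi = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
    he3 = z**3 - 3.0 * z                 # probabilists' Hermite polynomials
    he4 = z**4 - 6.0 * z**2 + 3.0
    skew_term = k3 / (6.0 * sigma**3) * he3
    kurt_term = k4 / (24.0 * sigma**4) * he4
    return phi * (1.0 + skew_term + kurt_term)

# Example: a mildly skewed line-flow distribution (hypothetical cumulants).
x = np.linspace(-4.0, 6.0, 5)
print(gram_charlier_pdf(x, cumulants=(1.0, 1.5, 0.4, 0.2)))
```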
Contributors: Fan, Miao (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald Thomas (Committee member) / Ayyanar, Raja (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012