Matching Items (18)
Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria for manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, to improve, via screening, the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data.
Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
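The L-moment features mentioned above can be illustrated with a short sketch. This is a generic computation of the first four sample L-moments, not the thesis implementation; robust order-statistic summaries like these can augment or replace raw test measurements as classifier inputs.

```python
from math import comb

def sample_l_moments(data):
    """First four sample L-moments (l1..l4) of a batch of measurements.

    L-moments are linear combinations of order statistics; they are more
    robust to outliers than conventional moments, which makes them useful
    summary features for upstream test data fed to a classifier.
    """
    x = sorted(data)
    n = len(x)
    # Probability-weighted moments b0..b3 from the sorted sample.
    b = [sum(comb(i, r) * x[i] for i in range(r, n)) / (n * comb(n - 1, r))
         for r in range(4)]
    l1 = b[0]                                   # L-location (the mean)
    l2 = 2 * b[1] - b[0]                        # L-scale
    l3 = 6 * b[2] - 6 * b[1] + b[0]             # proportional to L-skewness
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]  # proportional to L-kurtosis
    return l1, l2, l3, l4
```

A per-unit feature vector might then be (l1, l2, l3/l2, l4/l2), i.e. L-location, L-scale, L-skewness, and L-kurtosis.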
ContributorsMosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
Composite materials are increasingly being used in aircraft, automobiles, and other applications due to their high strength-to-weight and stiffness-to-weight ratios. However, the presence of damage, such as delamination or matrix cracks, can significantly compromise the performance of these materials and result in premature failure. Structural components are often manually inspected to detect the presence of damage. This technique, known as schedule-based maintenance, is, however, expensive, time-consuming, and often limited to easily accessible structural elements. Therefore, there is an increased demand for robust and efficient Structural Health Monitoring (SHM) techniques that can support condition-based monitoring, in which structural components are inspected based on damage metrics rather than flight hours. SHM relies on in situ frameworks for detecting early signs of damage in exposed and unexposed structural elements, offering not only a reduced number of scheduled inspections but also better estimates of useful life. SHM frameworks require the development of sensing technologies, algorithms, and procedures to detect, localize, quantify, and characterize overall damage in aerospace structures so that reliable estimates of the remaining useful life can be made. The use of piezoelectric transducers with guided Lamb waves is a method that has received considerable attention due to the weight, cost, and functionality of systems based on these elements. The research in this thesis investigates the ability of Lamb waves to detect damage in feature-dense anisotropic composite panels. Most current research negates the effects of experimental variability by performing tests on structurally simple isotropic plates that serve as the baseline and damaged specimens.
However, in actual applications, variability cannot be negated, and therefore there is a need to research the effects of complex sample geometries, environmental operating conditions, and the effects of variability in material properties. This research is based on experiments conducted on a single blade-stiffened anisotropic composite panel that localizes delamination damage caused by impact. The overall goal was to utilize a correlative approach that used only the damage feature produced by the delamination as the damage index. This approach was adopted because it offered a simplistic way to determine the existence and location of damage without having to conduct a more complex wave propagation analysis or having to take into account the geometric complexities of the test specimen. Results showed that even in a complex structure, if the damage feature can be extracted and measured, then an appropriate damage index can be associated to it and the location of the damage can be inferred using a dense sensor array. The second experiment presented in this research studies the effects of temperature on damage detection when using one test specimen for a benchmark data set and another for damage data collection. This expands the previous experiment into exploring not only the effects of variable temperature, but also the effects of high experimental variability. Results from this work show that the damage feature in the data is not only extractable at higher temperatures, but that the data from one panel at one temperature can be directly compared to another panel at another temperature for baseline comparison due to linearity of the collected data.
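The correlative approach described above can be sketched in a few lines. The exact index used in the thesis is not reproduced here; one common formulation compares the baseline and damaged waveforms at each sensor via a normalized cross-correlation.

```python
from math import sqrt

def damage_index(baseline, measured):
    """1 minus the normalized correlation coefficient of two sensor waveforms.

    Near 0 for an undamaged path (signals nearly identical); grows toward 1
    as delamination distorts the guided wave arriving at the sensor, so a
    dense sensor array can infer the damage location from which sensors
    report large indices.
    """
    n = len(baseline)
    mb = sum(baseline) / n
    mm = sum(measured) / n
    cov = sum((b - mb) * (m - mm) for b, m in zip(baseline, measured))
    vb = sum((b - mb) ** 2 for b in baseline)
    vm = sum((m - mm) ** 2 for m in measured)
    return 1.0 - cov / sqrt(vb * vm)
```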
ContributorsVizzini, Anthony James, II (Author) / Chattopadhyay, Aditi (Thesis advisor) / Fard, Masoud (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2012
Description
Non-Destructive Testing (NDT) is integral to preserving the structural health of materials. Techniques that fall under the NDT category are able to evaluate integrity and condition of a material without permanently altering any property of the material. Additionally, they can typically be used while the material is in active use instead of needing downtime for inspection.
The two general categories of structural health monitoring (SHM) systems include passive and active monitoring. Active SHM systems utilize an input of energy to monitor the health of a structure (such as sound waves in ultrasonics), while passive systems do not. As such, passive SHM tends to be more desirable. A system could be permanently fixed to a critical location, passively accepting signals until it records a damage event, then localize and characterize the damage. This is the goal of acoustic emissions testing.
When certain types of damage occur, such as matrix cracking or delamination in composites, the corresponding release of energy creates sound waves, or acoustic emissions, that propagate through the material. Audio sensors fixed to the surface can pick up data from both the time and frequency domains of the wave. With proper data analysis, a time of arrival (TOA) can be calculated for each sensor allowing for localization of the damage event. The frequency data can be used to characterize the damage.
In traditional acoustic emissions testing, the TOA combined with wave velocity and information about signal attenuation in the material is used to localize events. However, in instances of complex geometries or anisotropic materials (such as carbon fibre composites), velocity and attenuation can vary widely with the direction of propagation. In these cases, localization can instead be based on the differences in arrival time for each sensor pair. This technique is called Delta T mapping, and is the main focus of this study.
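Delta T mapping can be sketched for an idealized plate. In practice the delta-T maps are built empirically from artificial-source training events precisely because velocity varies with direction; the uniform velocity and sensor layout below are illustrative assumptions that keep the sketch self-contained.

```python
from math import hypot

SENSORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
VELOCITY = 5.0  # assumed uniform wave speed (plate units per time unit)

def delta_ts(x, y):
    """Arrival-time differences for every sensor pair, source at (x, y)."""
    t = [hypot(x - sx, y - sy) / VELOCITY for sx, sy in SENSORS]
    return [t[j] - t[i] for i in range(len(t)) for j in range(i + 1, len(t))]

def locate(measured, step=0.05):
    """Grid search: the grid point whose delta-T map best matches the data."""
    best, best_err = None, float("inf")
    steps = int(round(1.0 / step))
    for gx in range(steps + 1):
        for gy in range(steps + 1):
            x, y = gx * step, gy * step
            err = sum((m - d) ** 2 for m, d in zip(measured, delta_ts(x, y)))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

In real Delta T mapping, `delta_ts` would be replaced by interpolation over measured training maps rather than a velocity model.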
ContributorsBriggs, Nathaniel (Author) / Chattopadhyay, Aditi (Thesis director) / Papandreou-Suppappola, Antonia (Committee member) / Skinner, Travis (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
In the field of electronic music, haptic feedback is a crucial feature of digital musical instruments (DMIs) because it gives the musician a more immersive experience. This feedback might come in the form of a wearable haptic device that vibrates in response to music. Such advancements in the electronic music field are applicable to the field of speech and hearing. More specifically, wearable haptic feedback devices can enhance the musical listening experience for people who use cochlear implant (CI) devices.
This Honors Thesis is a continuation of Prof. Lauren Hayes’s and Dr. Xin Luo’s research initiative, Haptic Electronic Audio Research into Musical Experience (HEAR-ME), which investigates how to enhance the musical listening experience for CI users using a wearable haptic system. The goals of this Honors Thesis are to port Prof. Hayes’s system code from the Max visual programming language to the C++ object-oriented programming language and to evaluate the resulting C++ implementation. This adaptation allows the system to operate in real time and independently of a computer.
Towards these goals, two signal processing algorithms were developed and programmed in C++. The first algorithm is a thresholding method, which outputs a pulse of a predefined width when the input signal falls below some threshold in amplitude. The second algorithm is a root-mean-square (RMS) method, which outputs a pulse-width modulation signal with a fixed period and with a duty cycle dependent on the RMS of the input signal. The thresholding method was found to work best with speech, and the RMS method was found to work best with music. Future work entails the design of adaptive signal processing algorithms to allow the system to work more effectively on speech in a noisy environment and to emphasize a variety of elements in music.
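The two algorithms can be sketched as follows. The thesis implementations are in C++ and operate on streaming audio; the parameter names and the unit RMS scaling here are illustrative assumptions.

```python
from math import sqrt

def threshold_pulses(samples, threshold, pulse_width):
    """Thresholding method: emit a pulse of fixed width whenever the input
    signal falls below the amplitude threshold (1 = haptic driver on)."""
    out, remaining = [], 0
    for s in samples:
        if s < threshold and remaining == 0:
            remaining = pulse_width      # trigger a new fixed-width pulse
        out.append(1 if remaining > 0 else 0)
        if remaining:
            remaining -= 1
    return out

def rms_pwm_duty(frame):
    """RMS method: duty cycle for one fixed-period PWM frame, proportional
    to the RMS amplitude of the input frame (clipped to [0, 1])."""
    rms = sqrt(sum(s * s for s in frame) / len(frame))
    return min(1.0, rms)
```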
ContributorsBonelli, Dominic Berlage (Author) / Papandreou-Suppappola, Antonia (Thesis director) / Hayes, Lauren (Thesis director, Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-12
Description
The continuous time-tagging of photon arrival times for high count rate sources is necessary for applications such as optical communications, quantum key encryption, and astronomical measurements. Detection of Hanbury Brown and Twiss (HBT) single photon correlations from thermal sources, such as stars, requires a combination of high dynamic range, long integration times, and low systematics in the photon detection and time-tagging system. The continuous nature of the measurements and the need for highly accurate timing resolution require a customized time-to-digital converter (TDC). A custom-built, two-channel, field programmable gate array (FPGA)-based TDC capable of continuously time-tagging single photons with sub-clock-cycle timing resolution was characterized. Auto-correlation and cross-correlation measurements were used to constrain spurious systematic effects in the pulse count data as a function of system variables. These variables included, but were not limited to, incident photon count rate, incoming signal attenuation, and measurements of fixed signals. Additionally, a generalized likelihood ratio test (GLRT) using maximum likelihood estimators (MLEs) was derived as a means to detect and estimate correlated photon signal parameters. The derived GLRT was capable of detecting correlated photon signals in a laboratory setting with a high degree of statistical confidence. A proof is presented in which the MLE for the amplitude of the correlated photon signal is shown to be the minimum variance unbiased estimator (MVUE). The fully characterized TDC was used in preliminary measurements of astronomical sources using ground-based telescopes. Finally, preliminary theoretical groundwork is established for the deep space optical communications system of the proposed Breakthrough Starshot project, in which low-mass craft will travel to the Alpha Centauri system to collect scientific data from Proxima B.
This theoretical groundwork utilizes recent and upcoming space based optical communication systems as starting points for the Starshot communication system.
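The HBT-style correlation measurement the TDC enables can be sketched as a coincidence histogram over two channels of photon time tags. This is an illustrative g(2)-style estimator, not the thesis pipeline; a peak near zero lag indicates correlated (bunched) arrivals from a thermal source.

```python
def coincidence_histogram(tags_a, tags_b, bin_width, max_lag):
    """Histogram of pairwise delays (tb - ta) between two channels of
    photon time tags, for lags in [-max_lag, +max_lag)."""
    nbins = 2 * int(max_lag / bin_width)
    hist = [0] * nbins
    for ta in tags_a:
        for tb in tags_b:
            lag = tb - ta
            if -max_lag <= lag < max_lag:
                hist[int((lag + max_lag) / bin_width)] += 1
    return hist
```

The brute-force double loop is O(n²); a continuous high-count-rate system would instead merge the two sorted tag streams and only examine pairs within the lag window.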
ContributorsHodges, Todd Michael William (Author) / Mauskopf, Philip (Thesis advisor) / Trichopoulos, George (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created2022
Description
The propagation of waves in solids, especially when characterized by dispersion, remains a topic of profound interest in the field of signal processing. Dispersion is a phenomenon in which wave speed becomes a function of frequency, resulting in multiple oscillatory modes. Such signals find application in structural health monitoring for identifying potential damage-sensitive features in complex materials. Consequently, it becomes important to find matched time-frequency representations for characterizing the properties of the multiple frequency-dependent modes of propagation in dispersive materials. Various time-frequency representations have been used for dispersive signal analysis. However, some of them suffer from poor time-frequency localization, are designed to match only specific dispersion modes with known characteristics, or cannot reconstruct individual dispersive modes. This thesis proposes a new time-frequency representation, the nonlinear synchrosqueezing transform (NSST), designed to offer high localization for signals with nonlinear time-frequency group delay signatures. The NSST follows the technique used by reassignment and synchrosqueezing methods to reassign time-frequency points of the short-time Fourier transform and wavelet transform to specific localized regions in the time-frequency plane. As the NSST is designed to match signals with third-order polynomial phase functions in the frequency domain, we derive matched group delay estimators for the time-frequency point reassignment. This leads to a highly localized representation of nonlinear time-frequency characteristics that also allows for the reconstruction of individual dispersive modes from multicomponent signals. For the reconstruction process, we propose a novel unsupervised learning approach that does not require prior information on the variation or number of modes in the signal.
We also propose a Bayesian group delay mode merging approach for reconstructing modes that overlap in time and frequency. In addition to using simulated signals, we demonstrate the performance of the new NSST, together with mode extraction, using real experimental data of ultrasonic guided waves propagating through a composite plate.
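The group delay model at the heart of the NSST can be stated concretely. For a mode whose frequency-domain phase is a third-order polynomial (the class of signals the transform is designed to match), the reassignment target is the phase derivative; the coefficients below are generic placeholders, not values from the thesis, and the sign convention is the one stated in the comment.

```latex
% Frequency-domain model of one dispersive mode, phase written as -2\pi\psi(f):
X(f) = A(f)\, e^{-j 2\pi \psi(f)}, \qquad
\psi(f) = a_0 + a_1 f + a_2 f^2 + a_3 f^3
% Group delay: the time-frequency signature along which STFT/wavelet
% coefficients are reassigned in the time-frequency plane
\tau_g(f) = \frac{d\psi}{df} = a_1 + 2 a_2 f + 3 a_3 f^2
```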
ContributorsIkram, Javaid (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chattopadhyay, Aditi (Thesis advisor) / Bertoni, Mariana (Committee member) / Sinha, Kanu (Committee member) / Arizona State University (Publisher)
Created2023
Description
Predicting nonlinear dynamical systems has been a long-standing challenge in science. This field is currently witnessing a revolution with the advent of machine learning methods. Concurrently, the analysis of dynamics in various nonlinear complex systems continues to be crucial. Guided by these directions, I conduct the following studies. Predicting critical transitions and transient states in nonlinear dynamics is a complex problem. I developed a solution called parameter-aware reservoir computing, which uses machine learning to track how system dynamics change with a driving parameter. I show that the transition point can be accurately predicted while trained in a sustained functioning regime before the transition. Notably, it can also predict if the system will enter a transient state, the distribution of transient lifetimes, and their average before a final collapse, which are crucial for management. I introduce a machine-learning-based digital twin for monitoring and predicting the evolution of externally driven nonlinear dynamical systems, where reservoir computing is exploited. Extensive tests on various models, encompassing optics, ecology, and climate, verify the approach’s effectiveness. The digital twins can extrapolate unknown system dynamics, continually forecast and monitor under non-stationary external driving, infer hidden variables, adapt to different driving waveforms, and extrapolate bifurcation behaviors across varying system sizes. Integrating engineered gene circuits into host cells poses a significant challenge in synthetic biology due to circuit-host interactions, such as growth feedback. I conducted systematic studies on hundreds of circuit structures exhibiting various functionalities, and identified a comprehensive categorization of growth-induced failures. I discerned three dynamical mechanisms behind these circuit failures. 
Moreover, my comprehensive computations reveal a scaling law between the circuit robustness and the intensity of growth feedback. A class of circuits with optimal robustness is also identified. Chimera states, a phenomenon of symmetry-breaking in oscillator networks, traditionally have transient lifetimes that grow exponentially with system size. However, my research on high-dimensional oscillators leads to the discovery of “short-lived” chimera states. Their lifetime increases logarithmically with system size and decreases logarithmically with random perturbations, indicating a unique fragility. To understand these states, I use a transverse stability analysis supported by simulations.
ContributorsKong, Lingwei (Author) / Lai, Ying-Cheng (Thesis advisor) / Tian, Xiaojun (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Alkhateeb, Ahmed (Committee member) / Arizona State University (Publisher)
Created2023
Description
In recent years, there has been an increased interest in sharing available bandwidth to avoid spectrum congestion. With an ever-increasing number of wireless users, it is critical to develop signal processing based spectrum sharing algorithms to achieve cooperative use of the allocated spectrum among multiple systems in order to reduce interference between systems. This work studies the radar and communications systems coexistence problem using two main approaches. The first approach develops methodologies to increase radar target tracking performance under low signal-to-interference-plus-noise ratio (SINR) conditions due to the coexistence of strong communications interference. The second approach jointly optimizes the performance of both systems by co-designing a common transmit waveform.

When concentrating on improving radar tracking performance, a pulsed radar that is tracking a single target coexisting with high powered communications interference is considered. Although the Cramer-Rao lower bound (CRLB) on the covariance of an unbiased estimator of deterministic parameters provides a bound on the estimation mean squared error (MSE), there exists an SINR threshold at which estimator covariance rapidly deviates from the CRLB. After demonstrating that different radar waveforms experience different estimation SINR thresholds using the Barankin bound (BB), a new radar waveform design method is proposed based on predicting the waveform-dependent BB SINR threshold under low SINR operating conditions.

A novel method of predicting the SINR threshold value for maximum likelihood estimation (MLE) is proposed. A relationship is shown to exist between the formulation of the BB kernel and the probability of selecting sidelobes for the MLE. This relationship is demonstrated as an accurate means of threshold prediction for the radar target parameter estimation of frequency, time-delay and angle-of-arrival.
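As a concrete instance of the threshold phenomenon under discussion, the CRLB for the frequency of a single sinusoid in white Gaussian noise is a standard closed-form result (stated here for illustration; it is not the specific bound derived in the thesis). Above the SINR threshold the MLE's mean squared error tracks this bound; below it, sidelobe selection causes the MSE to depart sharply from it.

```latex
% CRLB for the frequency f_0 (cycles/sample) of a single sinusoid in white
% Gaussian noise, with N samples and SNR \eta = A^2 / (2\sigma^2):
\operatorname{var}(\hat{f}_0) \;\ge\; \frac{12}{(2\pi)^2 \, \eta \, N (N^2 - 1)}
```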

For the co-design radar and communications system problem, the use of a common transmit waveform for a pulse-Doppler radar and a multiuser communications system is proposed. The signaling scheme for each system is selected from a class of waveforms with nonlinear phase function by optimizing the waveform parameters to minimize interference between the two systems and interference among communications users. Using multi-objective optimization, a trade-off in system performance is demonstrated when selecting waveforms that minimize both system interference and tracking MSE.
ContributorsKota, John S (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Berisha, Visar (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created2016
Description
This work examines two main areas in model-based time-varying signal processing, with an emphasis on speech processing applications. The first area concentrates on improving speech intelligibility and on increasing the applicability of the proposed methodologies to clinical practice in speech-language pathology. The second area concentrates on signal expansions matched to physical-based models but without requiring independent basis functions; the significance of this work is demonstrated with speech vowels.

A fully automated Vowel Space Area (VSA) computation method is proposed that can be applied to any type of speech. It is shown that the VSA provides an efficient and reliable measure and is correlated with speech intelligibility. A clinical tool that incorporates the automated VSA is proposed for use by speech-language pathologists in evaluation and treatment. Two exploratory studies are performed using two databases by analyzing mean formant trajectories in healthy speech for a wide range of speakers, dialects, and coarticulation contexts. It is shown that phonemes crowded in formant space can often have distinct trajectories, possibly due to accurate perception.
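Once vowel formant centroids are available, the VSA itself reduces to a polygon area in the (F1, F2) plane. A minimal sketch using the shoelace formula follows; the thesis automates the harder part, extracting the formants from running speech, and the corner-vowel centroids below are illustrative values, not data from the work.

```python
def vowel_space_area(formants):
    """Area of the polygon whose vertices are (F1, F2) vowel centroids in Hz,
    listed in order around the vowel quadrilateral (shoelace formula)."""
    n = len(formants)
    s = 0.0
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Illustrative corner-vowel centroids (F1, F2): /i/, /ae/, /a/, /u/
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
```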

A theory for analyzing time-varying signal models with amplitude modulation and frequency modulation is developed. Examples are provided that demonstrate other possible signal model decompositions with independent basis functions and corresponding physical interpretations. The Hilbert transform (HT) and the use of the analytic form of a signal are motivated, and a proof is provided to show that a signal can still preserve desirable mathematical properties without the use of the HT. A visualization of the Hilbert spectrum is proposed to aid in interpretation. A signal demodulation procedure is proposed and used to develop a modified Empirical Mode Decomposition (EMD) algorithm.
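The analytic-signal construction that motivates the Hilbert transform discussion can be sketched with a naive DFT. This is illustrative only (a real implementation would use an FFT); the standard recipe zeroes the negative-frequency half of the spectrum and doubles the positive half.

```python
from cmath import exp, pi

def dft(x, sign=-1):
    """Naive discrete Fourier transform (sign=-1 forward, +1 unnormalized inverse)."""
    n = len(x)
    return [sum(x[k] * exp(sign * 2j * pi * m * k / n) for k in range(n))
            for m in range(n)]

def analytic_signal(x):
    """Discrete analytic signal of a real sequence: zero the negative
    frequencies, double the positive ones, keep DC and Nyquist, invert."""
    n = len(x)
    X = dft(x)
    for m in range(n):
        if 0 < m < n // 2:
            X[m] *= 2
        elif m > n // 2:
            X[m] = 0
    return [v / n for v in dft(X, sign=+1)]
```

The magnitude of the analytic signal gives the amplitude envelope and its phase the instantaneous phase, which is exactly the AM-FM decomposition the passage refers to.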
ContributorsSandoval, Steven, 1984- (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Liss, Julie M (Committee member) / Turaga, Pavan (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created2016
Description
Analysis of social networks has the potential to provide insights into a wide range of applications. As datasets continue to grow, a key challenge is the lack of a widely applicable algorithmic framework for the detection of statistically anomalous networks and network properties. Unlike traditional signal processing, where models of ground truth or empirical verification and background data exist and are often well defined, these features are commonly lacking in social and other networks. Here, a novel algorithmic framework for statistical signal processing on graphs is presented. The framework is based on the analysis of the spectral properties of the residuals matrix. It is applied to the detection of innovation patterns in publication networks, leveraging well-studied empirical knowledge from the history of science. Both the framework itself and the application constitute novel contributions, advancing algorithmic and mathematical techniques for graph-based data and the understanding of the patterns of emergence of novel scientific research. Results indicate the efficacy of the approach and highlight a number of fruitful future directions.
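A residuals-matrix analysis of the kind described can be sketched with one standard construction (an assumption here; the thesis's exact residuals model is not reproduced): Newman's modularity residual B = A - k kᵀ/2m, observed adjacency minus the expected edges under a degree-preserving null model, whose leading eigenvector separates a graph into anomalous or community substructure.

```python
def residuals_matrix(adj):
    """B = A - k k^T / (2m): observed adjacency minus expected edges under
    a degree-preserving null model (Newman's modularity matrix)."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    two_m = sum(deg)
    return [[adj[i][j] - deg[i] * deg[j] / two_m for j in range(n)]
            for i in range(n)]

def leading_eigvec(mat, iters=500):
    """Power iteration for the eigenvector of the most positive eigenvalue
    of a symmetric matrix (a Gershgorin diagonal shift keeps it dominant)."""
    n = len(mat)
    shift = max(sum(abs(v) for v in row) for row in mat)
    m = [[mat[i][j] + (shift if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v = [1.0 + 0.01 * i for i in range(n)]  # fixed, slightly asymmetric start
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return v
```

On a graph of two triangles joined by a single edge, the signs of the leading eigenvector split the nodes into the two triangles.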
ContributorsBliss, Nadya Travinin (Author) / Laubichler, Manfred (Thesis advisor) / Castillo-Chavez, Carlos (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2015