This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Displaying 1 - 10 of 33

Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems. It is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely, directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam to flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. The simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing which includes envelope detection, log compression and scan conversion. Three different envelope detection methods are compared. Among them, FIR based Hilbert Transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides comparable contrast-to-noise ratio (CNR) performance with Gaussian interpolation and has lower computational complexity. Thus, bilinear interpolation is chosen for our system.
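A minimal sketch (not the thesis implementation) of the bilinear interpolation step as it might appear in scan conversion, mapping envelope-detected samples from an assumed polar (depth, angle) grid onto a Cartesian display grid; the array shapes, sector geometry, and data are illustrative assumptions.

```python
import numpy as np

def bilinear_sample(img, r, c):
    """Sample a 2D array at fractional (row, col) coordinates using
    bilinear interpolation -- the scheme chosen for scan conversion."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    r1, c1 = min(r0 + 1, img.shape[0] - 1), min(c0 + 1, img.shape[1] - 1)
    fr, fc = r - r0, c - c0
    top = (1 - fc) * img[r0, c0] + fc * img[r0, c1]
    bot = (1 - fc) * img[r1, c0] + fc * img[r1, c1]
    return (1 - fr) * top + fr * bot

# Toy scan conversion: polar (depth, angle) data mapped onto a Cartesian grid.
env = np.abs(np.random.randn(256, 64))           # stand-in for envelope-detected data
out = np.zeros((128, 128))
angles = np.linspace(-np.pi / 6, np.pi / 6, 64)  # assumed sector of +/- 30 degrees
for iy in range(out.shape[0]):
    for ix in range(out.shape[1]):
        x, y = (ix - 64) / 64.0, iy / 128.0
        depth, theta = np.hypot(x, y), np.arctan2(x, y)
        if 0 <= depth < 1 and angles[0] <= theta <= angles[-1]:
            r = depth * (env.shape[0] - 1)
            c = (theta - angles[0]) / (angles[-1] - angles[0]) * (env.shape[1] - 1)
            out[iy, ix] = bilinear_sample(env, r, c)
```

Each output pixel needs only the four neighbouring samples and a handful of multiplies, which is the source of the lower computational cost relative to Gaussian interpolation noted above.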
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This research examines the current challenges of using Lamb wave interrogation methods to localize fatigue crack damage in a complex metallic structural component subjected to unknown temperatures. The goal of this work is to improve damage localization results for a structural component interrogated at an unknown temperature, by developing a probabilistic and reference-free framework for estimating Lamb wave velocities and the damage location. The methodology for damage localization at unknown temperatures includes the following key elements: i) a model that can describe the change in Lamb wave velocities with temperature; ii) the extension of an advanced time-frequency based signal processing technique for enhanced time-of-flight feature extraction from a dispersive signal; iii) the development of a Bayesian damage localization framework incorporating data association and sensor fusion. The technique requires no additional transducers to be installed on a structure, and allows for the estimation of both the temperature and the wave velocity in the component. Additionally, the framework of the algorithm allows it to function completely in an unsupervised manner by probabilistically accounting for all measurement origin uncertainty. The novel algorithm was experimentally validated using an aluminum lug joint with a growing fatigue crack. The lug joint was interrogated using piezoelectric transducers at multiple fatigue crack lengths, and at temperatures between 20°C and 80°C. The results showed that the algorithm could accurately predict the temperature and wave speed of the lug joint. The localization results for the fatigue damage were found to correlate well with the true locations at long crack lengths, but loss of accuracy was observed in localizing small cracks due to time-of-flight measurement errors. To validate the algorithm across a wider range of temperatures the electromechanically coupled LISA/SIM model was used to simulate the effects of temperatures. The numerical results showed that this approach would be capable of experimentally estimating the temperature and velocity in the lug joint for temperatures from -60°C to 150°C. The velocity estimation algorithm was found to significantly increase the accuracy of localization at temperatures above 120°C when error due to incorrect velocity selection begins to outweigh the error due to time-of-flight measurements.
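The Bayesian data-association framework itself is not reproduced here; as a much simpler illustration of the time-of-flight geometry it builds on, the sketch below grid-searches for the damage location that best explains a set of actuator-sensor scatter times, given an estimated wave velocity. All coordinates, time-of-flight values, and the velocity are hypothetical.

```python
import numpy as np

# Hypothetical transducer coordinates (metres) on a plate and measured
# scatter times of flight (seconds) for actuator -> damage -> sensor paths.
actuator = np.array([0.00, 0.00])
sensors = np.array([[0.20, 0.00], [0.20, 0.15], [0.00, 0.15]])
tof_meas = np.array([1.05e-4, 1.32e-4, 9.8e-5])   # illustrative values only
velocity = 2900.0                                  # assumed group velocity (m/s)

def predicted_tof(p):
    """Time of flight actuator -> candidate damage location p -> each sensor."""
    return (np.linalg.norm(p - actuator) + np.linalg.norm(sensors - p, axis=1)) / velocity

# Brute-force grid search for the location that best explains the measurements.
xs, ys = np.meshgrid(np.linspace(0, 0.2, 201), np.linspace(0, 0.15, 151))
best, best_err = None, np.inf
for x, y in zip(xs.ravel(), ys.ravel()):
    err = np.sum((predicted_tof(np.array([x, y])) - tof_meas) ** 2)
    if err < best_err:
        best, best_err = (x, y), err
print("estimated damage location (m):", best)
```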
Contributors: Hensberry, Kevin (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
There is a lack of music therapy services for college students who have problems with depression and/or anxiety. Even among universities and colleges that offer music therapy degrees, there are no known programs offering music therapy to the institution's students. Female college students are particularly vulnerable to depression and anxiety symptoms compared to their male counterparts. Many students who experience mental health problems do not receive treatment, because of lack of knowledge, lack of services, or refusal of treatment. Music therapy is proposed as a reliable and valid complement or even an alternative to traditional counseling and pharmacotherapy because of the appeal of music to young women and the potential for a music therapy group to help isolated students form supportive networks. The present study recruited 14 female university students to participate in a randomized controlled trial of short-term group music therapy to address symptoms of depression and anxiety. The students were randomly divided into either the treatment group or the control group. Over 4 weeks, each group completed surveys related to depression and anxiety. Results indicate that the treatment group's depression and anxiety scores gradually decreased over the span of the treatment protocol. The control group showed either maintenance or slight worsening of depression and anxiety scores. Although none of the results were statistically significant, the general trend indicates that group music therapy was beneficial for the students. A qualitative analysis was also conducted for the treatment group. Common themes were financial concerns, relationship problems, loneliness, and time management/academic stress. All participants indicated that they benefited from the sessions. The group progressed in its cohesion and the participants bonded to the extent that they formed a supportive network which lasted beyond the end of the protocol. The results of this study are by no means conclusive, but do indicate that colleges with music therapy degree programs should consider adding music therapy services for their general student bodies.
Contributors: Ashton, Barbara (Author) / Crowe, Barbara J. (Thesis advisor) / Rio, Robin (Committee member) / Davis, Mary (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sometimes difficult life events challenge our existing resources in such a way that routinized responses are inadequate to handle the challenge. Some individuals will persist in habitual, automatic behavior, regardless of environmental cues that indicate a mismatch between coping strategy and the demands of the stressor. Other individuals will marshal adaptive resources to construct new courses of action and reconceptualize the problem, associated goals and/or values. A mixed methods approach was used to describe and operationalize cognitive shift, a relatively unexplored construct in existing literature. The study was conducted using secondary data from a parent multi-year cross-sectional study of resilience with eight hundred mid-aged adults from the Phoenix metro area. Semi-structured telephone interviews were analyzed using a purposive sample (n=136) chosen by type of life event. Participants' beliefs, assumptions, and experiences were examined to understand how they shaped adaptation to adversity. An adaptive mechanism, "cognitive shift," was theorized as the transition from automatic coping to effortful cognitive processes aimed at novel resolution of issues. Aims included understanding when and how cognitive shift emerges and manifests. Cognitive shift was scored as a binary variable and triangulated through correlational and logistic regression analyses. Interaction effects revealed that positive personality attributes influence cognitive shift most when people suffered early adversity. This finding indicates that a certain complexity, self-awareness and flexibility of mind may lead to a greater capacity to find meaning in adversity. This work bridges an acknowledged gap in literature and provides new insights into resilience.
Contributors: Rivers, Crystal T (Author) / Zautra, Alex (Thesis advisor) / Davis, Mary (Committee member) / Kurpius, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Continuous monitoring of sensor data from smart phones to identify human activities and gestures puts a heavy load on the smart phone's power consumption. In this research study, the non-Euclidean geometry of the rich sensor data obtained from the user's smart phone is utilized to perform compressive analysis and efficient classification of human activities by employing machine learning techniques. We are interested in the generalization of classical tools for signal approximation to newer spaces, such as rotation data, which is best studied in a non-Euclidean setting, and its application to activity analysis. Because feature extraction in the non-linear rotation data space imposes a heavy load on the smart phone's processor and memory compared to feature extraction in Euclidean space, indexing and compaction of the acquired sensor data are performed prior to feature extraction, reducing CPU overhead and thereby increasing battery lifetime with little loss in activity recognition accuracy. The sensor data, represented as unit quaternions, is a more intrinsic representation of the orientation of the smart phone than Euler angles (which suffer from the gimbal lock problem) or the computationally intensive rotation matrices. Classification algorithms are employed to classify these manifold sequences in the non-Euclidean space. By performing customized indexing (using the K-means algorithm) of the evolved manifold sequences before feature extraction, considerable energy savings are achieved in terms of the smart phone's battery life.
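A rough sketch of the indexing-before-feature-extraction idea: the quaternion stream is vector-quantised with K-means so that downstream features operate on compact codeword indices rather than raw samples. The data, codebook size, and histogram feature are assumptions for illustration, and plain Euclidean K-means ignores the manifold structure the thesis exploits.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for smartphone orientation data: each row is a unit
# quaternion (w, x, y, z); a real system would read these from the IMU.
rng = np.random.default_rng(0)
raw = rng.normal(size=(5000, 4))
quats = raw / np.linalg.norm(raw, axis=1, keepdims=True)   # project onto the unit sphere

# Indexing step: vector-quantise the quaternion stream so that each sample
# is replaced by a small codeword index before feature extraction.
k = 32
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(quats)
indices = codebook.predict(quats)          # compacted sequence of symbols in [0, k)

# A cheap feature for a window of samples: the histogram of codeword usage.
window = indices[:200]
feature = np.bincount(window, minlength=k) / len(window)
print(feature.shape)   # (32,) -- handed to a downstream classifier
```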
Contributors: Sivakumar, Aswin (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the classification error rate by registering new images against previously obtained images before performing classification. The motivation behind this is the fact that images obtained in the same region that need to be classified will not differ significantly in characteristics. Hence, registration will provide an image that matches the previously obtained image more closely, thus providing better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and using a real-life data set, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does improve naïve Bayes classification, reducing the error rate by an average of about 10% on the synthetic data and by about 7% on the actual datasets used.
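A toy version of the register-then-classify chain. ICP operates on point sets and is not reproduced here; a simple cross-correlation shift alignment stands in for the registration stage, and a Gaussian naive Bayes classifier stands in for the classification stage. All data, features, and labels are fabricated placeholders.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def register_by_translation(img, ref):
    """Small stand-in for the registration stage: align img to ref by the
    integer shift that maximises circular cross-correlation (the thesis uses
    ICP on point sets; this only shows where registration sits in the chain)."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))

# Toy training data: per-pixel features/labels from previously classified imagery.
X_train = rng.random((500, 3))
y_train = rng.integers(0, 2, size=500)      # e.g. traversable vs. non-traversable
clf = GaussianNB().fit(X_train, y_train)

# New image: register against the previous image first, then classify per pixel.
new_img = np.roll(ref, 5, axis=0) + 0.05 * rng.standard_normal((64, 64))
aligned = register_by_translation(new_img, ref)
X_new = np.stack([aligned.ravel(),
                  np.gradient(aligned, axis=0).ravel(),
                  np.gradient(aligned, axis=1).ravel()], axis=1)
labels = clf.predict(X_new).reshape(aligned.shape)
```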
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms, modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation.
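The two modified algorithms are not specified in this abstract; as a generic illustration of sparse decomposition applied to a sparsely sampled aperture, the sketch below runs orthogonal matching pursuit on a 1-D partial-Fourier measurement model. Scene size, sparsity, and the sampling pattern are arbitrary assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y ~= A @ x.
    A generic sparse-decomposition solver, used here only to illustrate
    image formation from a sparsely sampled aperture."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                       # scene size, retained samples, sparsity
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# Sparsely sampled Fourier aperture: keep only m of n phase-history samples.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
rows = rng.choice(n, m, replace=False)
A = F[rows]
y = A @ x_true
print("reconstruction error:", np.linalg.norm(omp(A, y, k) - x_true))
```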
Contributors: Werth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the increased usage of green energy, the number of photovoltaic arrays used in power generation is increasing rapidly. Many of the arrays are located at remote locations where faults that occur within the array often go unnoticed and unattended for long periods of time. Technicians sent to rectify the faults have to spend a large amount of time determining the location of the fault manually. Automated monitoring systems are needed to obtain information about the performance of the array and detect faults. Such systems must monitor the DC side of the array in addition to the AC side to identify non-catastrophic faults. This thesis focuses on two of the requirements for DC-side monitoring of an automated PV array monitoring system. The first part of the thesis quantifies the advantages of obtaining higher-resolution data from a PV array for the detection of faults. Data for the monitoring system can be gathered for the array as a whole or from additional places within the array such as individual modules and ends of strings. The fault detection rate and the false positive rates are compared for array-level, string-level and module-level PV data. Monte Carlo simulations are performed using PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) that can be used to visualize the PV array for module-level monitoring system information. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations are implemented to display information about the array as a whole or about individual modules and to locate faults in the array.
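A toy Monte Carlo comparison of detection rates at module, string, and array granularity, in the spirit of the first part of the thesis. The array dimensions, fault magnitude, noise level, and threshold are invented for illustration, and a single fixed relative-drop threshold is used at every level rather than the thesis's detector.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strings, n_per_string, trials = 8, 10, 5000
fault_loss, noise, threshold = 0.30, 0.02, 0.10   # assumed values, not from the thesis

def run(level):
    """Monte Carlo estimate of the detection rate for one faulted module when
    monitoring at 'module', 'string' or 'array' granularity."""
    detections = 0
    for _ in range(trials):
        power = 1.0 + noise * rng.standard_normal((n_strings, n_per_string))
        power[rng.integers(n_strings), rng.integers(n_per_string)] -= fault_loss
        if level == "module":
            measured, expected = power, 1.0
        elif level == "string":
            measured, expected = power.sum(axis=1), float(n_per_string)
        else:
            measured, expected = power.sum(), float(n_strings * n_per_string)
        detections += np.any((expected - measured) / expected > threshold)
    return detections / trials

for level in ("module", "string", "array"):
    print(level, run(level))   # coarser data dilutes the fault signature
```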
Contributors: Krishnan, Venkatachalam (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Ayyanar, Raja (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis aims to investigate the capacity and bit error rate (BER) performance of multi-user diversity systems with a random number of users and considers its application to cognitive radio systems. Ergodic capacity, normalized capacity, outage capacity, and average bit error rate metrics are studied. It has been found that randomization of the number of users will reduce the ergodic capacity. A stochastic ordering framework is adopted to order user distributions, for example, Laplace transform ordering. The ergodic capacity under different user distributions will follow their corresponding Laplace transform order. The scaling law of ergodic capacity with the mean number of users under Poisson and negative binomial user distributions is studied for a large mean number of users, and these two random distributions are ordered in the Laplace transform ordering sense. The ergodic capacity per user is defined and is shown to increase when the total number of users is randomized, which is the opposite of the case of the unnormalized ergodic capacity metric. Outage probability under slow fading is also considered and shown to decrease when the total number of users is randomized. The bit error rate (BER) in a general multi-user diversity system has a completely monotonic derivative, which implies that, according to Jensen's inequality, the randomization of the total number of users will decrease the average BER performance. The special case of a Poisson number of users and Rayleigh fading is studied. Combining this with the theory of regular variation, the average BER is shown to achieve tightness in Jensen's inequality. This is followed by an extension to a negative binomial number of users, for which the BER is derived and shown to be decreasing in the number of users. A single-primary-user cognitive radio system with multi-user diversity at the secondary users is proposed. Compared to the general multi-user diversity system, there exists an interference constraint between secondary and primary users, which is independent of the secondary users' transmission. The secondary user with the highest transmitted SNR which also satisfies the interference constraint is selected to communicate. The active number of secondary users is a binomial random variable. This is then followed by a derivation of the scaling law of the ergodic capacity with the mean number of users and a closed-form expression for the average BER under this situation. The ergodic capacity under the binomial user distribution is shown to outperform the Poisson case. Monte Carlo simulations are used to supplement our analytical results and compare the performance of different user distributions.
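A small Monte Carlo sketch of the first result above: with selection of the strongest user under Rayleigh fading, randomising the number of users (here Poisson) yields a slightly lower ergodic capacity than a fixed population with the same mean. The SNR, mean user count, and trial count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_users, snr, trials = 10, 1.0, 100_000

def capacity(n_users):
    """Multi-user diversity capacity for one realisation: the scheduler picks
    the user with the largest instantaneous Rayleigh-faded SNR."""
    if n_users == 0:
        return 0.0
    gains = rng.exponential(scale=1.0, size=n_users)   # |h|^2 under Rayleigh fading
    return np.log2(1.0 + snr * gains.max())

# Fixed number of users vs. a Poisson-distributed number with the same mean.
fixed = np.mean([capacity(mean_users) for _ in range(trials)])
poisson = np.mean([capacity(rng.poisson(mean_users)) for _ in range(trials)])
print(f"ergodic capacity, fixed N={mean_users}: {fixed:.3f} bits/s/Hz")
print(f"ergodic capacity, Poisson mean {mean_users}: {poisson:.3f} bits/s/Hz")
# Consistent with the stated result, the randomized case comes out slightly lower.
```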
Contributors: Zeng, Ruochen (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Duman, Tolga (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Composite materials are increasingly being used in aircraft, automobiles, and other applications due to their high strength to weight and stiffness to weight ratios. However, the presence of damage, such as delamination or matrix cracks, can significantly compromise the performance of these materials and result in premature failure. Structural components are often manually inspected to detect the presence of damage. This technique, known as schedule based maintenance, however, is expensive, time-consuming, and often limited to easily accessible structural elements. Therefore, there is an increased demand for robust and efficient Structural Health Monitoring (SHM) techniques that can be used for Condition Based Monitoring, which is the method in which structural components are inspected based upon damage metrics as opposed to flight hours. SHM relies on in situ frameworks for detecting early signs of damage in exposed and unexposed structural elements, offering not only reduced number of schedule based inspections, but also providing better useful life estimates. SHM frameworks require the development of different sensing technologies, algorithms, and procedures to detect, localize, quantify, characterize, as well as assess overall damage in aerospace structures so that strong estimations in the remaining useful life can be determined. The use of piezoelectric transducers along with guided Lamb waves is a method that has received considerable attention due to the weight, cost, and function of the systems based on these elements. The research in this thesis investigates the ability of Lamb waves to detect damage in feature dense anisotropic composite panels. Most current research negates the effects of experimental variability by performing tests on structurally simple isotropic plates that are used as a baseline and damaged specimen. However, in actual applications, variability cannot be negated, and therefore there is a need to research the effects of complex sample geometries, environmental operating conditions, and the effects of variability in material properties. This research is based on experiments conducted on a single blade-stiffened anisotropic composite panel that localizes delamination damage caused by impact. The overall goal was to utilize a correlative approach that used only the damage feature produced by the delamination as the damage index. This approach was adopted because it offered a simplistic way to determine the existence and location of damage without having to conduct a more complex wave propagation analysis or having to take into account the geometric complexities of the test specimen. Results showed that even in a complex structure, if the damage feature can be extracted and measured, then an appropriate damage index can be associated to it and the location of the damage can be inferred using a dense sensor array. The second experiment presented in this research studies the effects of temperature on damage detection when using one test specimen for a benchmark data set and another for damage data collection. This expands the previous experiment into exploring not only the effects of variable temperature, but also the effects of high experimental variability. Results from this work show that the damage feature in the data is not only extractable at higher temperatures, but that the data from one panel at one temperature can be directly compared to another panel at another temperature for baseline comparison due to linearity of the collected data.
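A minimal sketch of a correlation-style damage index of the kind described above: the current signal on an actuator-sensor path is compared against a baseline, and the loss of correlation serves as the scalar index. The synthetic tone burst, echo, and noise levels are invented stand-ins for measured Lamb wave data.

```python
import numpy as np

def damage_index(baseline, current):
    """Correlation-based damage index: 1 - normalised correlation between the
    baseline and current signals, so 0 means 'unchanged' and larger values
    mean the scattered (damage) feature increasingly dominates."""
    b = (baseline - baseline.mean()) / baseline.std()
    c = (current - current.mean()) / current.std()
    return 1.0 - float(np.dot(b, c)) / len(b)

# Synthetic stand-in for one actuator-sensor path: a windowed tone burst plus,
# in the "damaged" case, a small delayed echo from the delamination.
t = np.linspace(0, 1e-4, 2000)
burst = np.sin(2 * np.pi * 200e3 * t) * np.exp(-((t - 2e-5) / 5e-6) ** 2)
echo = 0.2 * np.sin(2 * np.pi * 200e3 * t) * np.exp(-((t - 6e-5) / 5e-6) ** 2)

baseline = burst + 0.01 * np.random.default_rng(0).standard_normal(t.size)
damaged = burst + echo + 0.01 * np.random.default_rng(1).standard_normal(t.size)

# One index per transducer pair; in a dense array, the pairs with the largest
# indices bracket the damage, and its location can be inferred from their geometry.
print(f"damage index: {damage_index(baseline, damaged):.3f}")
```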
Contributors: Vizzini, Anthony James, II (Author) / Chattopadhyay, Aditi (Thesis advisor) / Fard, Masoud (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012