This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Displaying 1 - 10 of 125
Description

Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems; it provides blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation and has lower computational complexity. Thus, bilinear interpolation is chosen for our system.
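The FIR-based Hilbert transform envelope detector compared in this abstract can be sketched as follows. This is a minimal illustration, not the thesis implementation: the tap count, window choice and signal parameters are all assumed for demonstration.

```python
import numpy as np

def fir_hilbert_taps(num_taps=31):
    """Hamming-windowed ideal Hilbert transformer (odd-length, type III FIR)."""
    assert num_taps % 2 == 1
    n = np.arange(num_taps) - num_taps // 2
    h = np.zeros(num_taps)
    odd = n % 2 != 0
    h[odd] = 2.0 / (np.pi * n[odd])      # ideal Hilbert impulse response
    return h * np.hamming(num_taps)

def envelope(rf, num_taps=31):
    """Envelope of an RF line: sqrt(I^2 + Q^2), with Q from the FIR Hilbert filter."""
    q = np.convolve(rf, fir_hilbert_taps(num_taps), mode="same")
    return np.sqrt(rf ** 2 + q ** 2)

# Toy RF line: a 5 MHz carrier sampled at 20 MHz with slow amplitude modulation.
fs = 20e6
t = np.arange(2048) / fs
a = 1.0 + 0.5 * np.sin(2 * np.pi * 0.2e6 * t)   # stand-in "tissue" modulation
rf = a * np.cos(2 * np.pi * 5e6 * t)
env = envelope(rf)                               # tracks |a(t)| away from the edges
```

The same envelope could be obtained from the FFT-based analytic signal; the FIR form trades a small passband ripple for streaming, fixed-latency operation.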
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Distributed inference has applications in a wide range of fields such as source localization, target detection, environment monitoring, and healthcare. In this dissertation, distributed inference schemes which use bounded transmit power are considered. The performance of the proposed schemes is studied for a variety of inference problems. In the first part of the dissertation, a distributed detection scheme where the sensors transmit with constant modulus signals over a Gaussian multiple access channel is considered. The deflection coefficient of the proposed scheme is shown to depend on the characteristic function of the sensing noise, and the error exponent for the system is derived using large deviation theory. Optimization of the deflection coefficient and error exponent is considered with respect to a transmission phase parameter for a variety of sensing noise distributions, including impulsive ones. The proposed scheme is also favorably compared with existing amplify-and-forward (AF) and detect-and-forward (DF) schemes. The effect of fading is shown to be detrimental to the detection performance, and simulations are provided to corroborate the analytical results. The second part of the dissertation studies a distributed inference scheme which uses bounded transmission functions over a Gaussian multiple access channel. The conditions on the transmission functions under which consistent estimation and reliable detection are possible are characterized. For the distributed estimation problem, an estimation scheme that uses bounded transmission functions is proved to be strongly consistent provided that the variance of the noise samples is bounded and that the transmission function is one-to-one. The proposed estimation scheme is compared with the amplify-and-forward technique and its robustness to impulsive sensing noise distributions is highlighted.
It is also shown that bounded transmissions suffer from inconsistent estimates if the sensing noise variance goes to infinity. For the distributed detection problem, similar results are obtained by studying the deflection coefficient. Simulations corroborate our analytical results. In the third part of this dissertation, the problem of estimating the average of samples distributed at the nodes of a sensor network is considered. A distributed average consensus algorithm in which every sensor transmits with bounded peak power is proposed. In the presence of communication noise, it is shown that the nodes reach consensus asymptotically to a finite random variable whose expectation is the desired sample average of the initial observations with a variance that depends on the step size of the algorithm and the variance of the communication noise. The asymptotic performance is characterized by deriving the asymptotic covariance matrix using results from stochastic approximation theory. It is shown that using bounded transmissions results in slower convergence compared to the linear consensus algorithm based on the Laplacian heuristic. Simulations corroborate our analytical findings. Finally, a robust distributed average consensus algorithm in which every sensor performs a nonlinear processing at the receiver is proposed. It is shown that non-linearity at the receiver nodes makes the algorithm robust to a wide range of channel noise distributions including the impulsive ones. It is shown that the nodes reach consensus asymptotically and similar results are obtained as in the case of transmit non-linearity. Simulations corroborate our analytical findings and highlight the robustness of the proposed algorithm.
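The bounded-transmission consensus idea in the third part of the dissertation can be illustrated with a small simulation. This is a hedged sketch, not the dissertation's algorithm: it uses tanh as the bounded transmit nonlinearity, a noiseless six-node ring network and a fixed step size, all chosen for illustration.

```python
import numpy as np

def bounded_consensus(x0, adj, step=0.05, iters=2000):
    """Average consensus where each node transmits tanh(x), a bounded signal."""
    x = np.asarray(x0, dtype=float).copy()
    deg = adj.sum(axis=1)
    for _ in range(iters):
        t = np.tanh(x)                       # bounded (peak-power-limited) transmissions
        x = x + step * (adj @ t - deg * t)   # sum of neighbor differences
    return x

# Six nodes on a ring, each holding one measurement.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = 1.0

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, n)
x = bounded_consensus(x0, adj)
# The symmetric update conserves the sum, so consensus lands on the sample average.
```

Because tanh is one-to-one and the update is symmetric over an undirected graph, the noiseless iteration preserves the sum of the states and converges to the average, at the cost of slower convergence than the linear Laplacian scheme.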
Contributors: Dasarathan, Sivaraman (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Reisslein, Martin (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This research examines the current challenges of using Lamb wave interrogation methods to localize fatigue crack damage in a complex metallic structural component subjected to unknown temperatures. The goal of this work is to improve damage localization results for a structural component interrogated at an unknown temperature by developing a probabilistic and reference-free framework for estimating Lamb wave velocities and the damage location. The methodology for damage localization at unknown temperatures includes the following key elements: i) a model that can describe the change in Lamb wave velocities with temperature; ii) the extension of an advanced time-frequency based signal processing technique for enhanced time-of-flight feature extraction from a dispersive signal; iii) the development of a Bayesian damage localization framework incorporating data association and sensor fusion. The technique requires no additional transducers to be installed on a structure, and allows for the estimation of both the temperature and the wave velocity in the component. Additionally, the framework of the algorithm allows it to function completely in an unsupervised manner by probabilistically accounting for all measurement origin uncertainty. The novel algorithm was experimentally validated using an aluminum lug joint with a growing fatigue crack. The lug joint was interrogated using piezoelectric transducers at multiple fatigue crack lengths, and at temperatures between 20°C and 80°C. The results showed that the algorithm could accurately predict the temperature and wave speed of the lug joint. The localization results for the fatigue damage were found to correlate well with the true locations at long crack lengths, but loss of accuracy was observed in localizing small cracks due to time-of-flight measurement errors. To validate the algorithm across a wider range of temperatures, the electromechanically coupled LISA/SIM model was used to simulate the effects of temperature.
The numerical results showed that this approach would be capable of experimentally estimating the temperature and velocity in the lug joint for temperatures from -60°C to 150°C. The velocity estimation algorithm was found to significantly increase the accuracy of localization at temperatures above 120°C when error due to incorrect velocity selection begins to outweigh the error due to time-of-flight measurements.
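The time-of-flight feature at the heart of this localization framework can be illustrated with a basic cross-correlation estimator. The thesis uses a more advanced time-frequency technique suited to dispersive signals; the sketch below is a simplified, non-dispersive stand-in with assumed signal parameters.

```python
import numpy as np

fs = 1e7                       # 10 MHz sampling rate (assumed)
t = np.arange(0, 2e-4, 1 / fs)

def tone_burst(t, f0=2e5, t0=2e-5, width=5e-6):
    """Gaussian-windowed tone burst centered at t0, a stand-in Lamb-wave packet."""
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2)) * np.cos(2 * np.pi * f0 * (t - t0))

tof_true = 3.7e-5              # true time of flight, seconds
tx = tone_burst(t)                         # excitation
rx = tone_burst(t, t0=2e-5 + tof_true)     # echo delayed by the time of flight

# Estimated time of flight = lag at which the cross-correlation peaks.
xc = np.correlate(rx, tx, mode="full")
lag = int(np.argmax(xc)) - (len(tx) - 1)
tof_est = lag / fs
```

With dispersion, temperature-dependent velocity, and noise, the correlation peak smears and shifts, which is exactly the error source the abstract attributes to small-crack localization.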
Contributors: Hensberry, Kevin (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Although high performance, light-weight composites are increasingly being used in applications ranging from aircraft and rotorcraft to weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking and delamination. An important element in achieving reliable composite systems is a strong capability of assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage; this limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used.
The effect of variation in this arrangement within the RUC has been studied and results indicate this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure. The model data was verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states such as fiber-matrix debonding in composite structures with surface bonded piezoelectric sensors.
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Whole genome sequencing (WGS) and whole exome sequencing (WES) are two comprehensive genomic tests which use next-generation sequencing technology to sequence most of the 3.2 billion base pairs in a human genome (WGS) or many of the estimated 22,000 protein-coding genes in the genome (WES). The promises offered by WGS/WES are: to identify suspected yet unidentified genetic diseases, to characterize the genomic mutations in a tumor to identify targeted therapeutic agents, and to predict future diseases with the hope of promoting disease prevention strategies and/or offering early treatment. Promises notwithstanding, sequencing a human genome presents several interrelated challenges: how to adequately analyze, interpret, store, reanalyze and apply an unprecedented amount of genomic data (with uncertain clinical utility) to patient care? In addition, genomic data has the potential to become integral for improving the medical care of an individual and their family, years after a genome is sequenced. Current informed consent protocols do not adequately address the unique challenges and complexities inherent to the process of WGS/WES. This dissertation constructs a novel informed consent process for individuals considering WGS/WES, capable of fulfilling both legal and ethical requirements of medical consent while addressing the intricacies of WGS/WES, ultimately resulting in a more effective consenting experience. To better understand components of an effective consenting experience, the first part of this dissertation traces the historical origin of the informed consent process to identify the motivations, rationales and institutional commitments that sustain our current consenting protocols for genetic testing. After understanding the underlying commitments that shape our current informed consent protocols, I discuss the effectiveness of the informed consent process from an ethical and legal standpoint.
I illustrate how WGS/WES introduces new complexities to the informed consent process and assess whether informed consent protocols proposed for WGS/WES address these complexities. The last section of this dissertation describes a novel informed consent process for WGS/WES, constructed from the original ethical intent of informed consent, analysis of existing informed consent protocols, and my own observations as a genetic counselor for what constitutes an effective consenting experience.
Contributors: Hunt, Katherine (Author) / Hurlbut, J. Benjamin (Thesis advisor) / Robert, Jason S. (Thesis advisor) / Maienschein, Jane (Committee member) / Northfelt, Donald W. (Committee member) / Marchant, Gary (Committee member) / Ellison, Karin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This dissertation introduces stochastic ordering of instantaneous channel powers of fading channels as a general method to compare the performance of a communication system over two different channels, even when a closed-form expression for the metric may not be available. Such a comparison is with respect to a variety of performance metrics such as error rates, outage probability and ergodic capacity, which share common mathematical properties such as monotonicity, convexity or complete monotonicity. Complete monotonicity of a metric, such as the symbol error rate, in conjunction with the stochastic Laplace transform order between two fading channels implies the ordering of the two channels with respect to the metric. While it has been established previously that certain modulation schemes have convex symbol error rates, there is no study of the complete monotonicity of the same, which helps in establishing stronger channel ordering results. Toward this goal, the current research proves for the first time that all 1-dimensional and 2-dimensional modulations have completely monotone symbol error rates. Furthermore, it is shown that the frequently used parametric fading distributions for modeling line of sight exhibit a monotonicity in the line-of-sight parameter with respect to the Laplace transform order. While the Laplace transform order can also be used to order fading distributions based on the ergodic capacity, there exist several distributions which are not Laplace transform ordered, although they have ordered ergodic capacities. To address this gap, a new stochastic order called the ergodic capacity order has been proposed herein, which can be used to compare channels based on the ergodic capacity. Using stochastic orders, the average performance of systems involving multiple random variables is compared over two different channels.
These systems include diversity combining schemes, relay networks, and signal detection over fading channels with non-Gaussian additive noise. This research also addresses the problem of unifying fading distributions. This unification is based on infinite divisibility, which subsumes almost all known fading distributions, and provides simplified expressions for performance metrics, in addition to enabling stochastic ordering.
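The two orders at the core of this framework can be stated compactly. These are the standard definitions, paraphrased here rather than quoted from the dissertation: the Laplace transform order on instantaneous channel powers, and complete monotonicity of a metric, which together order averaged performance.

```latex
% Laplace transform order on instantaneous channel powers X, Y:
X \leq_{\mathrm{Lt}} Y
  \;\iff\;
\mathbb{E}\!\left[e^{-sX}\right] \ge \mathbb{E}\!\left[e^{-sY}\right]
  \quad \text{for all } s > 0.

% Complete monotonicity of a performance metric g on (0,\infty):
(-1)^n\, g^{(n)}(x) \ge 0
  \quad \text{for all } x > 0,\; n = 0, 1, 2, \ldots

% By Bernstein's theorem, g is completely monotone iff it is a mixture of
% decaying exponentials, g(x) = \int_0^\infty e^{-xt}\,\mathrm{d}\mu(t), hence
X \leq_{\mathrm{Lt}} Y \;\implies\; \mathbb{E}\!\left[g(X)\right] \ge \mathbb{E}\!\left[g(Y)\right].
```

The last implication is the working principle: once a symbol error rate is shown to be completely monotone in the instantaneous SNR, Laplace-transform-ordered channels have ordered average error rates with no closed-form expression needed.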
Contributors: Rajan, Adithya (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Lung Cancer Alliance, a nonprofit organization, released the "No One Deserves to Die" advertising campaign in June 2012. The campaign visuals presented a clean, simple message to the public: the stigma associated with lung cancer drives marginalization of lung cancer patients. Lung Cancer Alliance (LCA) asserts that negative public attitude toward lung cancer stems from unacknowledged moral judgments that generate 'stigma.' The campaign materials are meant to expose and challenge these common public category-making processes that occur when subconsciously evaluating lung cancer patients. These processes involve comparison, perception of difference, and exclusion. The campaign implies that society sees suffering of lung cancer patients as indicative of moral failure, thus not warranting assistance from society, which leads to marginalization of the diseased. Attributing to society a morally laden view of the disease, the campaign extends this view to its logical end and makes it explicit: lung cancer patients no longer deserve to live because they themselves caused the disease (by smoking). This judgment and the resulting marginalization are, according to LCA, evident in the ways lung cancer patients are disadvantaged relative to patients with other diseases via minimal research funding, high mortality rates and low awareness of the disease. Therefore, society commits an injustice against those with lung cancer. This research analyzes the relationship between disease, identity-making, and responsibilities within society as represented by this stigma framework. LCA asserts that society understands lung cancer in terms of stigma, and advocates that society's understanding of lung cancer should be shifted from a stigma framework toward a medical framework. Analysis of identity-making and responsibility encoded in both frameworks contributes to evaluation of the significance of reframing this disease.
One aim of this thesis is to explore the relationship between these frameworks in medical sociology. The results show a complex interaction that suggests trading one frame for another will not destigmatize the lung cancer patient. These interactions cause tangible harms, such as high mortality rates, and there are important implications for other communities that experience a stigmatized disease.
Contributors: Calvelage, Victoria (Author) / Hurlbut, J. Benjamin (Thesis advisor) / Maienschein, Jane (Committee member) / Ellison, Karin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Continuous monitoring of sensor data from smart phones to identify human activities and gestures puts a heavy load on the smart phone's power consumption. In this research study, the non-Euclidean geometry of the rich sensor data obtained from the user's smart phone is utilized to perform compressive analysis and efficient classification of human activities by employing machine learning techniques. We are interested in the generalization of classical tools for signal approximation to newer spaces, such as rotation data, which is best studied in a non-Euclidean setting, and its application to activity analysis. Owing to the non-linear nature of the rotation data space, feature extraction imposes a heavier load on the smart phone's processor and memory than it does in Euclidean space; indexing and compaction of the acquired sensor data are therefore performed prior to feature extraction, to reduce CPU overhead and thereby increase the lifetime of the battery, with little loss in activity recognition accuracy. The sensor data, represented as unit quaternions, is a more intrinsic representation of the orientation of the smart phone than Euler angles (which suffer from the gimbal lock problem) or the computationally intensive rotation matrices. Classification algorithms are employed to classify these manifold sequences in the non-Euclidean space. By performing customized indexing (using the K-means algorithm) of the evolved manifold sequences before feature extraction, considerable energy savings are achieved in terms of the smart phone's battery life.
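The unit-quaternion representation of orientation described above can be sketched as follows. A minimal illustration with assumed sample values, not the thesis pipeline: quaternions q and -q describe the same rotation, so the geodesic distance on the rotation manifold uses the absolute inner product, and a nearest-codeword step stands in for the K-means indexing.

```python
import numpy as np

def normalize(q):
    return q / np.linalg.norm(q)

def quat_geodesic(q1, q2):
    """Geodesic distance between unit quaternions, identifying q with -q."""
    c = abs(float(np.dot(q1, q2)))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

# Orientations as unit quaternions (w, x, y, z).
identity = np.array([1.0, 0.0, 0.0, 0.0])
rot90_z = normalize(np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]))

d_same = quat_geodesic(identity, -identity)  # antipodal pair: same rotation
d_rot = quat_geodesic(identity, rot90_z)     # 90-degree rotation about z

# A K-means-style index step: assign a sample to its nearest codebook entry.
codebook = [identity, rot90_z]
sample = normalize(np.array([np.cos(0.1), 0.0, 0.0, np.sin(0.1)]))
nearest = min(range(len(codebook)), key=lambda i: quat_geodesic(sample, codebook[i]))
```

Working with this manifold distance rather than Euclidean distance on raw angles is what makes the quaternion representation free of gimbal-lock artifacts.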
Contributors: Sivakumar, Aswin (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Teaching evolution has been shown to be a challenge for faculty, in both K-12 and postsecondary education. Many of these challenges stem from perceived conflicts not only between religion and evolution, but also from faculty beliefs about religion, its compatibility with evolutionary theory, and its proper role in classroom curriculum. Studies suggest that if educators engage with students' religious beliefs and identity, this may help students have positive attitudes towards evolution. The aim of this study was to reveal the attitudes and beliefs professors have about addressing religion and providing religious scientist role models to students when teaching evolution. Fifteen semi-structured interviews of tenured biology professors were conducted at a large Midwestern university regarding their beliefs, experiences, and strategies for teaching evolution and, particularly, their willingness to address religion in a class section on evolution. Following a qualitative analysis of transcripts, professors did not agree on whether or not it is their job to help students accept evolution (although the majority said it is not), nor did they agree on a definition of "acceptance of evolution". Professors were willing to engage with students' religious beliefs if this would help their students accept evolution. Finally, professors perceived many challenges to engaging students' religious beliefs in a science classroom, such as the appropriateness of the material for a science class, large class sizes, and time constraints. Given the results of this study, the author concludes that instructors must come to a consensus about their goals as biology educators, as well as what "acceptance of evolution" means, before they can realistically apply the engagement of students' religious beliefs and identity as an educational strategy.
Contributors: Barnes, Maryann Elizabeth (Author) / Brownell, Sara E (Thesis advisor) / Brem, Sarah K. (Thesis advisor) / Lynch, John M. (Committee member) / Ellison, Karin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Damage detection in heterogeneous material systems is a complex problem and requires an in-depth understanding of the material characteristics and response under varying load and environmental conditions. A significant amount of research has been conducted in this field to enhance the fidelity of damage assessment methodologies, using a wide range of sensors and detection techniques, for both metallic materials and composites. However, detecting damage at the microscale is not possible with commercially available sensors. A probable way to approach this problem is through accurate and efficient multiscale modeling techniques, which are capable of tracking damage initiation at the microscale and propagation across the length scales. The output from these models will provide an improved understanding of damage initiation; the knowledge can be used in conjunction with information from physical sensors to reduce the size of detectable damage. In this research, effort has been dedicated to developing multiscale modeling approaches and associated damage criteria for the estimation of damage evolution across the relevant length scales. Important issues such as length and time scales, anisotropy and variability in material properties at the microscale, and response under mechanical and thermal loading are addressed. Two different material systems have been studied: a metallic material and a novel stress-sensitive epoxy polymer.

For metallic material (Al 2024-T351), the methodology initiates at the microscale where extensive material characterization is conducted to capture the microstructural variability. A statistical volume element (SVE) model is constructed to represent the material properties. Geometric and crystallographic features including grain orientation, misorientation, size, shape, principal axis direction and aspect ratio are captured. This SVE model provides a computationally efficient alternative to traditional techniques using representative volume element (RVE) models while maintaining statistical accuracy. A physics based multiscale damage criterion is developed to simulate the fatigue crack initiation. The crack growth rate and probable directions are estimated simultaneously.

Mechanically sensitive materials that exhibit specific chemical reactions upon external loading are currently being investigated for self-sensing applications. The "smart" polymer modeled in this research consists of epoxy resin, hardener, and a stress-sensitive material called a mechanophore. The mechanophore activation is based on covalent bond-breaking induced by external stimuli; this feature can be used for material-level damage detection. In this work, Tris-(Cinnamoyl oxymethyl)-Ethane (TCE) is used as the cyclobutane-based mechanophore (stress-sensitive) material in the polymer matrix. The TCE-embedded polymers have shown promising results in early damage detection through mechanically induced fluorescence. A spring-bead based network model, which bridges nanoscale information to higher length scales, has been developed to model this material system. The material is partitioned into discrete mass beads which are linked using linear springs at the microscale. A series of MD simulations were performed to define the spring stiffness in the statistical network model. By integrating multiple spring-bead models, a network model has been developed to represent the material properties at the mesoscale. The model captures the statistical distribution of the crosslinking degree of the polymer to represent the heterogeneous material properties at the microscale. The developed multiscale methodology is computationally efficient and provides a possible means to bridge multiple length scales (from 10 nm in MD simulation to 10 mm in the FE model) without significant loss of accuracy. Parametric studies have been conducted to investigate the influence of the crosslinking degree on the material behavior. The developed methodology has been used to evaluate damage evolution in the self-sensing polymer.
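The spring-bead idea, beads of mass linked by linear springs whose stiffness reflects the local crosslinking degree, can be sketched in one dimension. This is an illustrative toy, not the dissertation's MD-calibrated model: the spring constants below are assumed values, and the sanity check is the classical series-stiffness result.

```python
import numpy as np

def chain_stiffness(k_springs):
    """Assemble the global stiffness matrix of a 1-D chain of beads and springs."""
    n = len(k_springs) + 1                    # number of beads
    K = np.zeros((n, n))
    for e, k in enumerate(k_springs):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

# Three springs with unequal stiffness, standing in for varying crosslinking degree.
k = np.array([2.0, 1.0, 4.0])
K = chain_stiffness(k)

# Fix bead 0, apply a unit tensile force at the far end, solve the reduced system.
F = np.zeros(len(k))
F[-1] = 1.0
u = np.linalg.solve(K[1:, 1:], F)

# Effective stiffness of the chain equals the series combination of the springs.
k_eff = 1.0 / u[-1]
```

In the mesoscale network model described above, many such chains with statistically sampled stiffnesses are assembled into a 2-D/3-D graph, which is how microscale crosslinking variability propagates into effective properties.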
Contributors: Zhang, Jinjun (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Jiang, Hanqing (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2014