Matching Items (12,696)

158270-Thumbnail Image.png
Description
This work is concerned with how best to reconstruct images from limited angle tomographic measurements. An introduction to tomography and to limited angle tomography will be provided, and a brief overview will be given of the many fields to which this work may contribute.

The traditional tomographic image reconstruction approach involves Fourier domain representations. The classic Filtered Back Projection algorithm will be discussed and used for comparison throughout the work. Bayesian statistics and information entropy considerations will be described. The Maximum Entropy reconstruction method will be derived and its performance in limited angular measurement scenarios will be examined.
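For context on the Filtered Back Projection baseline described above, the following minimal sketch (not code from this dissertation) reconstructs a standard test image from full and limited angular coverage using scikit-image; the Shepp-Logan phantom, the 0–120 degree limited range, and the default ramp filter are illustrative assumptions.

```python
# Minimal sketch: Filtered Back Projection with full vs. limited angular coverage.
# The phantom, angular ranges, and filter choice are illustrative assumptions.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)  # downscale for speed

full_angles = np.arange(0.0, 180.0, 1.0)      # full coverage: 0-180 degrees
limited_angles = np.arange(0.0, 120.0, 1.0)   # limited coverage: 0-120 degrees

sino_full = radon(phantom, theta=full_angles)
sino_limited = radon(phantom, theta=limited_angles)

# Filtered Back Projection (default ramp filter) for both measurement scenarios.
fbp_full = iradon(sino_full, theta=full_angles)
fbp_limited = iradon(sino_limited, theta=limited_angles)

# The limited-angle reconstruction shows the missing-wedge artifacts that
# motivate the regularized approaches discussed below.
rmse_full = np.sqrt(np.mean((fbp_full - phantom) ** 2))
rmse_limited = np.sqrt(np.mean((fbp_limited - phantom) ** 2))
print(f"RMSE, full coverage:    {rmse_full:.4f}")
print(f"RMSE, limited coverage: {rmse_limited:.4f}")
```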

Many new approaches become available once the reconstruction problem is placed within an algebraic form of Ax=b, in which the measurement geometry and instrument response are defined as the matrix A, the measured object as the column vector x, and the resulting measurements as b. In principle, reconstruction then amounts to inverting A. However, for the limited angle measurement scenarios of interest in this work, the inversion is highly underconstrained: an infinite number of possible solutions x in a high-dimensional space are consistent with the measurements b.

The algebraic formulation leads to the need for high-performing regularization approaches, which add constraints based on prior information about what is being measured. These constraints, beyond the measurement matrix A, are added with the goal of selecting the best image from this vast uncertainty space. It is well established within this work that developing satisfactory regularization techniques is all but impossible except for the simplest pathological cases. There is a need to capture the "character" of the objects being measured.
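To make the algebraic formulation concrete, here is a minimal sketch (not one of the regularization approaches developed in this work) of an underconstrained system Ax=b solved with a simple Tikhonov regularizer; the toy system size, the random A, and the identity regularizer are illustrative assumptions.

```python
# Minimal sketch: underconstrained Ax = b with Tikhonov regularization.
# The system sizes and the identity regularizer are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64          # length of the vectorized object x
n_measurements = 32    # fewer measurements than unknowns: underconstrained

# A stands in for the measurement geometry and instrument response.
A = rng.normal(size=(n_measurements, n_pixels))
x_true = rng.normal(size=n_pixels)
b = A @ x_true

# Unregularized least squares: one of infinitely many x consistent with b.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Tikhonov regularization adds a prior constraint beyond A:
#   minimize ||Ax - b||^2 + lam * ||x||^2
# solved in closed form via the regularized normal equations.
lam = 0.1
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ b)

print("residual, least squares:", np.linalg.norm(A @ x_ls - b))
print("residual, Tikhonov:     ", np.linalg.norm(A @ x_tik - b))
```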

The novel result of this effort will be a reconstruction approach that matches whatever approach has proven best for the types of objects being measured given full angular coverage. However, when confronted with limited angle tomographic situations, or early in a series of measurements, the approach will rely on a prior understanding of the "character" of the objects measured. This understanding will be learned by a parallel Deep Neural Network from examples.
Contributors: Dallmann, Nicholas A. (Author) / Tsakalis, Konstantinos (Thesis advisor) / Hardgrove, Craig (Committee member) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2020
158271-Thumbnail Image.png
Description
Students across the United States of America are struggling to achieve college and career readiness in reading before they graduate from high school. The phenomenon of reading comprehension in older adolescent students plagues teachers because of its complexity and the perceived need for multiple solutions. However, close inspection of the research reveals that factors such as self-efficacy, motivation, and lack of skill in using reading strategies all contribute to the problem. The purpose of this study was to explore the effect of sketchnoting as a reading strategy on student self-efficacy for reading, motivation for reading, and reading comprehension in a high school classroom setting. With words, symbols, and pictures, sketchnoting as a reading strategy provides students with a platform to interact with their text while recording key ideas and details as well as connections they make to the text. While there are several theoretical frameworks that guide research on reading, this concurrent, mixed-methods action research study specifically focuses on Collaborative Learning Theory, Self-Determination Theory, and Schema Theory. These theoretical frameworks also establish a foundation for the study of methods to address the problem. This framework is rooted in the constructivist perspective in that each student brings to the learning environment their own levels of motivation and self-efficacy as well as their own perspectives on the truth to be learned. The participants of this study were juniors in a required English 11 class that I was teaching. Six instruments were used for this study: pre- and post-reading surveys, the Qualitative Reading Inventory (QRI), a Reading Skills Assessment, general observations, a sketchnote assessment, and interviews. Results of the semester-long study show that while there was no statistically significant evidence of a relationship between student use of sketchnoting as a reading strategy and an increase in reading motivation or self-efficacy for reading, there was evidence of a relationship between students' perception of sketchnoting as meaningful to their understanding of the text and their motivation and self-efficacy. Sketchnoting as a reading strategy did not have a statistically significant influence on student reading comprehension; however, the students reported that they remembered the details of the text they read better when using sketchnoting and that sketchnoting helped them make connections to the text they read. This research showed that sketchnoting as a reading strategy provided students with a tool to help them identify the key ideas and details of a text, and it also provided them with a platform to go beyond the key ideas and details by making connections.
Contributors: Treptow, Jennifer (Author) / Wylie, Ruth (Thesis advisor) / Peyton Marsh, Josephine (Committee member) / Droessler Mersch, Rebecca (Committee member) / Arizona State University (Publisher)
Created: 2020
158273-Thumbnail Image.png
Description
Traumatic injuries are the leading cause of death in children under 18, with head trauma being the leading cause of death in children below 5. A large but unknown number of traumatic injuries are non-accidental, i.e. inflicted. The lack of sensitivity and specificity required to diagnose Abusive Head Trauma (AHT) from radiological studies puts these children at risk of re-injury and death. Modern Deep Learning techniques can be utilized to detect Abusive Head Trauma using Computed Tomography (CT) scans. Training models using these techniques is only one part of building AI-driven Computer-Aided Diagnostic systems. There are challenges in deploying the models to make them highly available and scalable.

The thesis models the domain of Abusive Head Trauma using Deep Learning techniques and builds an AI-driven system at scale using software engineering best practices. It has been done in collaboration with Phoenix Children's Hospital (PCH). The thesis breaks AHT down into the sub-domains of medical knowledge, data collection, data pre-processing, image generation, image classification, building APIs, containers, and Kubernetes. Data collection and pre-processing were done at PCH with the help of trauma researchers and radiologists. Experiments are run using Deep Learning models such as DCGAN (for image generation) and pretrained 2D and custom 3D CNN classifiers for the classification tasks. The trained models are exposed as APIs using the Flask web framework, containerized using Docker, and deployed on a Kubernetes cluster.
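As an illustration of the deployment layer, the following is a minimal sketch of exposing a classifier behind a Flask endpoint; it is not the thesis's actual API, and the /predict route, the dummy model, and the JSON input format are assumptions made for the example.

```python
# Minimal sketch: serving a classifier through a Flask API.
# The route, model loader, and payload format are illustrative assumptions.
from flask import Flask, jsonify, request
import numpy as np

app = Flask(__name__)

def load_model(path="model.h5"):
    """Placeholder loader; a real system would load the trained CNN here."""
    class DummyModel:
        def predict(self, volume):
            # Returns a fake probability for illustration only.
            return float(np.clip(volume.mean(), 0.0, 1.0))
    return DummyModel()

model = load_model()

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body with a nested list representing the CT volume.
    payload = request.get_json(force=True)
    volume = np.asarray(payload["volume"], dtype=np.float32)
    probability = model.predict(volume)
    return jsonify({"aht_probability": probability})

if __name__ == "__main__":
    # In a deployed setting this would run behind a production WSGI server
    # inside a Docker container managed by Kubernetes.
    app.run(host="0.0.0.0", port=5000)
```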

The results are analyzed based on the accuracy of the models, the feasibility of their implementation as APIs, and load testing of the Kubernetes cluster. They suggest the need for data annotation at the slice level for CT scans and for an expanded data collection effort. Load testing demonstrates the cluster's ability to auto-scale and serve a high number of requests.
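A load test of the kind described could be sketched as below; this is not the thesis's testing setup, and the endpoint URL, payload, worker count, and request count are illustrative assumptions.

```python
# Minimal sketch: firing concurrent requests at a prediction endpoint to
# exercise auto-scaling. URL and payload are hypothetical stand-ins.
import concurrent.futures
import time
import requests

ENDPOINT = "http://cluster.example.com/predict"   # hypothetical service URL
PAYLOAD = {"volume": [[[0.0]]]}                   # trivial stand-in CT volume

def one_request(_):
    start = time.time()
    response = requests.post(ENDPOINT, json=PAYLOAD, timeout=30)
    return response.status_code, time.time() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(one_request, range(500)))

ok = sum(1 for status, _ in results if status == 200)
latencies = [latency for _, latency in results]
print(f"succeeded: {ok}/{len(results)}")
print(f"mean latency: {sum(latencies) / len(latencies):.3f}s")
```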
Contributors: Vikram, Aditya (Author) / Sanchez, Javier Gonzalez (Thesis advisor) / Gaffar, Ashraf (Thesis advisor) / Findler, Michael (Committee member) / Arizona State University (Publisher)
Created: 2020
158274-Thumbnail Image.png
Description
This dissertation consists of two chapters related to misallocation and economic development.

The first chapter studies the organization of production, as summarized by the number of managers per plant, the number of workers per manager, and the mean size of plants in terms of employment. First, I document that in the manufacturing sector, richer countries tend to have (i) more managers per plant, (ii) fewer workers per manager, and (iii) larger plants on average. I then extend a knowledge-based hierarchies model of the organization of production in which the communication technology depends on the managerial level in the hierarchy and the abilities of subordinates. I estimate model parameters so that the model jointly reproduces the plant size distribution and the number of managers per plant in the United States manufacturing sector. I find that when the largest, most complex plants face distortions that are twice as large as distortions faced by smaller plants, output declines by 33.4% and the number of managers per plant falls by 30%. Moreover, I find that a 10% increase in communication cost parameters can account for a 35% decrease in aggregate output without having a significant effect on the number of managers per plant.

The second chapter examines the relationship between bribery, plant size, and economic development. Using the Enterprise Survey, I document that small plants spend a higher fraction of their output on bribery than big plants do. I then develop a one-sector growth model in which size-dependent distortions, bribery opportunities, and different plant sizes coexist. I find that size-dependent distortions become less distortionary in the presence of bribery opportunities and that the effect of such distortions on plant size is reversed, since bigger plants are able to avoid distortions by paying larger bribes. My results indicate that changes in the distortion level do not affect output and size significantly because managers are able to circumvent the distortions by adjusting their bribery expenditures. However, the removal of distortions can have a substantial effect on both output and the mean plant size. Output in Turkey can increase by 12.3%, while the mean plant size can nearly double.
Contributors: Tamkoc, Mehmet Nazim (Author) / Ventura, Gustavo (Thesis advisor) / Herrendorf, Berthold (Committee member) / Ferraro, Domenico (Committee member) / Arizona State University (Publisher)
Created: 2020
158275-Thumbnail Image.png
Description
Proteins are a large collection of biomolecules that orchestrate the vital cellular processes of life. The last decade has witnessed dramatic advances in the field of proteomics, which broadly includes characterizing the composition, structure, functions, interactions, and modifications of numerous proteins in biological systems, and elucidating how the miscellaneous components collectively contribute to the phenotypes associated with various disorders. Such large-scale proteomics studies have steadily gained momentum with the evolution of diverse high-throughput technologies. This work illustrates the development of novel high-throughput proteomics platforms and their applications in translational and structural biology. In Chapter 1, nucleic acid programmable protein arrays displaying the human proteome were applied to immunoprofiling of paired serum and cerebrospinal fluid samples from patients with Alzheimer’s disease. This high-throughput immunoproteomic approach allows us to investigate the global antibody responses associated with Alzheimer’s disease and potentially identify diagnostic autoantibody biomarkers. In Chapter 2, a versatile proteomic pipeline based on the baculovirus-insect cell expression system was established to enable high-throughput gene cloning, protein production, in vivo crystallization, and sample preparation for X-ray diffraction. In conjunction with advanced crystallography methods, this end-to-end pipeline promises to substantially facilitate protein structural determination. In Chapter 3, modified nucleic acid programmable protein arrays were developed and used for probing protein-protein interactions at the proteome level. From the perspectives of biomarker discovery, structural proteomics, and protein interaction networks, this work demonstrated the power of high-throughput proteomics technologies in myriad applications for proteome-scale structural, functional, and biomedical research.
Contributors: Tang, Yanyang (Author) / LaBaer, Joshua (Thesis advisor) / Anderson, Karen S (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2020
158276-Thumbnail Image.png
Description
“The Mystery of Light” is the first movement of a yet to be completed larger work titled ...to melt into the sun for chamber choir and percussion quartet. The text of the work is an excerpt from Kahlil Gibran’s masterpiece, The Prophet. This book tells the story of a prophet-like man, Almustafa, who, before embarking on the journey back to his native land, stops in the city of Orphalese, where the townspeople, having known him for many years, entreat him to share his wisdom before he departs. The seeress, Almitra, urges him, “speak to us and give us of your truth.” Almustafa proceeds to philosophize on a range of topics including love, laws, pain, friendship, children, time, beauty, and self-knowledge. Just before his farewell to the people of Orphalese, he speaks of death, saying that it is not something to be feared, but rather, embraced as a necessary and beautiful part of life.

This interconnectedness of the life and death process, of which Almustafa speaks, is the subject of “The Mystery of Light.” Almitra’s aforementioned request returns directly and indirectly throughout the movement as a reference to humanity’s undying desire to understand the great mysteries of our own mortal condition. The choir shifts throughout the movement between the three following perspectives: 1) that of people who live in fear, whose anxious whispers grow into shouts of horror as they are faced with the threat of death, 2) that of people who share Almitra’s inquisitiveness and are inspired with wonder by the secret of death and 3) that of the prophet, as he speaks words of comfort and wisdom to those who look, either in terror or wonder, upon the face of death. My hope with this music is to share the comforting words which Gibran has spoken through the character, Almustafa, so that, as they have done for me, these words may provide comfort to those who will stand trembling in the presence of life’s most inevitable consequence.
Contributors: Stefans, Karl (Author) / Temple, Alex (Thesis advisor) / Knowles, Kristina (Committee member) / Rockmaker, Jody (Committee member) / Arizona State University (Publisher)
Created: 2020
158277-Thumbnail Image.png
Description
Stress in individuals presents in various forms and may accumulate across development to predict maladaptive physical and psychological outcomes, including greater risk for the onset of internalizing symptoms. Early life stress, daily life experiences, and the stress response of the hypothalamic-pituitary-adrenal (HPA) axis have all been examined as potential predictors of the development of psychopathology, but rarely have researchers attempted to understand the covariation or interaction among these stress domains using a longitudinal design when looking at the influence of stress on internalizing psychopathology. Further, most research has examined these processes in adulthood or adolescence with much less attention given to the influence of these dynamic stress pathways in childhood. Guided by the biopsychosocial model of stress, this study explored early life stress, daily life stress, diurnal cortisol (cortisol AM slope), and internalizing symptoms in a racially/ethnically and socioeconomically diverse sample of twins participating in an ongoing longitudinal study (N=970 children; Arizona Twin Project; Lemery-Chalfant et al. 2013). An additive model of stress and a stress sensitization framework model were considered as potential pathways of stress to internalizing symptoms in middle childhood. Based on a thorough review of relevant literature, it was expected that each stress indicator would individually predict internalizing symptoms. It was also predicted that early life stress would moderate the associations between diurnal cortisol and internalizing symptoms, as well as daily life stress and internalizing symptoms. Multilevel modeling analyses showed that early life stress and cortisol AM slope, but not daily life stress, predicted internalizing symptoms. Early life stress did not moderate the associations between daily life stress and internalizing symptoms or cortisol AM slope and internalizing symptoms. Results support independent additive contributions of both physiological stress processes and early life parental stressors in the development of internalizing symptoms in middle childhood. Future investigation is needed to better understand the sensitizing effects of early parental life stress during this developmental stage.
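To illustrate the kind of multilevel model used when children are nested within families, here is a minimal sketch with synthetic data; it is not the study's actual analysis code, and the variable names, the random-intercept structure, and the simulated effects are illustrative assumptions.

```python
# Minimal sketch: random-intercept multilevel model with statsmodels.
# Variable names and simulated data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_families, children_per_family = 200, 2
n_children = n_families * children_per_family

df = pd.DataFrame({
    "family_id": np.repeat(np.arange(n_families), children_per_family),
    "early_life_stress": rng.normal(size=n_children),
    "daily_stress": rng.normal(size=n_children),
    "cortisol_am_slope": rng.normal(size=n_children),
})
# Synthetic outcome: internalizing symptoms driven by early life stress and
# cortisol slope, plus a family-level random intercept and noise.
family_effect = np.repeat(rng.normal(size=n_families), children_per_family)
df["internalizing"] = (0.4 * df["early_life_stress"]
                       + 0.3 * df["cortisol_am_slope"]
                       + family_effect
                       + rng.normal(size=n_children))

# Random intercept for family; the interaction terms correspond to the
# moderation (stress sensitization) hypotheses described above.
model = smf.mixedlm(
    "internalizing ~ early_life_stress * daily_stress"
    " + early_life_stress * cortisol_am_slope",
    df,
    groups=df["family_id"],
)
print(model.fit().summary())
```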
Contributors: Lecarie, Emma (Author) / Doane, Leah (Thesis advisor) / Davis, Mary (Committee member) / Grimm, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2020
158278-Thumbnail Image.png
Description
Humans have a great ability to recognize objects in different environments irrespective of their variations. However, the same does not apply to machine learning models, which are unable to generalize to images of objects from different domains. The generalization of these models to new data is constrained by the domain gap. Many factors, such as image background, image resolution, color, camera perspective, and variations in the objects, are responsible for the domain gap between the training data (source domain) and testing data (target domain). Domain adaptation algorithms aim to overcome the domain gap between the source and target domains and learn robust models that can perform well across both domains.

This thesis provides solutions for the standard problem of unsupervised domain adaptation (UDA) and the more generic problem of generalized domain adaptation (GDA). The contributions of this thesis are as follows: (1) a Certain and Consistent Domain Adaptation model for closed-set unsupervised domain adaptation that aligns the features of the source and target domains using deep neural networks; (2) a multi-adversarial deep learning model for generalized domain adaptation; and (3) a gating model that detects out-of-distribution samples for generalized domain adaptation.
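As a generic illustration of aligning source and target features (not one of the specific models contributed in this thesis), the following minimal PyTorch sketch penalizes the discrepancy between the two feature distributions with a maximum mean discrepancy (MMD) term; the network sizes, kernel bandwidth, and toy data are illustrative assumptions.

```python
# Minimal sketch: source/target feature alignment with an MMD penalty.
# Architecture, bandwidth, and toy batches are illustrative assumptions.
import torch
import torch.nn as nn

def gaussian_mmd(x, y, bandwidth=1.0):
    """Squared MMD between two batches of features under a Gaussian kernel."""
    def kernel(a, b):
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
classifier = nn.Linear(64, 10)
optimizer = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-3)

# Toy batches standing in for labeled source data and unlabeled target data.
source_x, source_y = torch.randn(32, 256), torch.randint(0, 10, (32,))
target_x = torch.randn(32, 256)

for step in range(100):
    source_feat = feature_extractor(source_x)
    target_feat = feature_extractor(target_x)
    # Supervised loss on the source domain plus an alignment penalty that
    # pulls the source and target feature distributions together.
    loss = (nn.functional.cross_entropy(classifier(source_feat), source_y)
            + 0.5 * gaussian_mmd(source_feat, target_feat))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```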

The models were tested across multiple computer vision datasets for domain adaptation.

The dissertation concludes with a discussion on the proposed approaches and future directions for research in closed set and generalized domain adaptation.
Contributors: Nagabandi, Bhadrinath (Author) / Panchanathan, Sethuraman (Thesis advisor) / Venkateswara, Hemanth (Thesis advisor) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2020
158279-Thumbnail Image.png
Description
Organic compounds are influenced by hydrothermal conditions in both marine and terrestrial environments. Sedimentary organic reservoirs make up the largest share of organic carbon in the carbon cycle, leading to petroleum generation and to chemoautotrophic microbial communities. There have been numerous studies on the reactivity of organic compounds in water at elevated temperatures, but these studies rarely explore the consequences of inorganic solutes in hydrothermal fluids. The experiments in this thesis explore new reaction pathways of organic compounds mediated by aqueous and solid phase metals, mainly Earth-abundant copper. These experiments show that copper species have the potential to oxidize benzene and toluene, which are typically viewed as unreactive. These pathways add to the growing list of known organic transformations that are possible in natural hydrothermal systems. In addition to the characterization of reactions in natural systems, there has been recent interest in using hydrothermal conditions to facilitate organic transformations that would be useful in an applied, industrial or synthetic setting. This thesis identifies two sets of conditions that may serve as alternatives to commonplace industrial processes. The first process is the oxidation of benzene with copper to form phenol and chlorobenzene. The second is the copper mediated dehalogenation of aryl halides. Both of these processes apply the concepts of geomimicry by carrying out organic reactions under Earth-like conditions. Only water and copper are needed to implement these processes and there is no need for exotic catalysts or toxic reagents.
Contributors: Loescher, Grant (Author) / Shock, Everett (Thesis advisor) / Hartnett, Hilairy (Committee member) / Gould, Ian (Committee member) / Arizona State University (Publisher)
Created: 2020
158281-Thumbnail Image.png
Description
During the nineteenth century, it was common for pianists to publish their own editions of Beethoven’s piano sonatas. They did this to demonstrate their understanding of the pieces. Towards the end of the century, musicians focused their attention on critical editions in an effort to reproduce the composer’s original intention. Unfortunately, this caused interpretive editions such as those created in the nineteenth century to fade from attention. This research focuses on situating these interpretive editions within the greater discourse surrounding the editorial development of Beethoven’s piano sonatas. The study opens with the critical reception of Beethoven, his Sonata in C-sharp minor, Op. 27 No. 2, also known as the “Moonlight” Sonata, the organology of the nineteenth-century fortepianos and the editorial practices of subsequent editions of the piece. It also contextualizes the aesthetic and performance practice of nineteenth-century piano playing. I go on to analyze and demonstrate how the performance practices conveyed in the modern Henle edition (1976) differ from those in selected earlier interpretive editions. I will conclude with an assessment of the ways in which nineteenth-century performance practices were reflected by contemporary editions.

This study compares the first edition (1802) and seven selected editions of Beethoven’s “Moonlight” Sonata by Ignaz Moscheles (1814), Carl Czerny (1846), Franz Liszt (1857), Louis Köhler (1869), Hugo Riemann (1885), Sigmund Lebert and Hans von Bülow (1896), and Carl Krebs (1898) with the Henle edition. It covers tempo, rubato, articulations, phrasing, dynamics, fingerings, pedaling, ornamentation, note-stems and beaming, pitch, and rhythm. I evaluate these editorial changes and performance practices to determine that, compared to modern practice, the nineteenth century fostered a tendency toward applying rubato, longer slurs, diverse articulations, and an expanded dynamic range. Furthermore, the indications for fingering, pedaling, and ornamentation became more detailed towards the end of the century.
Contributors: Li, King Yue (Author) / Meir, Baruch (Thesis advisor) / Hamilton, Robert (Committee member) / Marshall, Kimberly (Committee member) / Norton, Kay (Committee member) / Arizona State University (Publisher)
Created: 2020