Matching Items (410)
Description
The end of the nineteenth century was an exhilarating and revolutionary era for the flute. This period, known as the Second Golden Age of the flute, was when players and teachers associated with the Paris Conservatory developed what is considered the birth of the modern flute school. In addition, the founding in 1871 of the Société Nationale de Musique by Camille Saint-Saëns (1835-1921) and Romain Bussine (1830-1899) made possible the promotion of contemporary French composers, and the founding of the Société des Instruments à Vent by Paul Taffanel (1844-1908) in 1879 invigorated a new era of chamber music for wind instruments. Within this groundbreaking environment, Mélanie Hélène Bonis (pen name Mel Bonis) entered the Paris Conservatory in 1876, under the tutelage of César Franck (1822-1890). Many flutists are dismayed by the scarcity of repertoire for the instrument in the Romantic and post-Romantic traditions; they compensate for this absence by borrowing the violin sonatas of Gabriel Fauré (1845-1924) and Franck. The flute and piano works of Mel Bonis help to fill this void with music composed originally for the flute. Bonis was a prolific composer with over 300 works to her credit, but her works for flute and piano had not been researched or professionally recorded in the United States before the present study. Although virtually unknown today in the American flute community, Bonis's music received much acclaim from her contemporaries and deserves a prominent place in the flutist's repertoire. After a brief biographical introduction, this document examines Mel Bonis's musical style and describes in detail her six works for flute and piano, while also offering performance suggestions.
Contributors: Daum, Jenna Elyse (Author) / Buck, Elizabeth (Thesis advisor) / Holbrook, Amy (Committee member) / Micklich, Albie (Committee member) / Schuring, Martin (Committee member) / Norton, Kay (Committee member) / Arizona State University (Publisher)
Created: 2013
Contributors: Matthews, Eyona (Performer) / Yoo, Katie Jihye (Performer) / Roubison, Ryan (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-25
Contributors: Hoeckley, Stephanie (Performer) / Lee, Juhyun (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-24
Description
This thesis describes an approach to system identification based on compressive sensing and demonstrates its efficacy on a challenging classical benchmark: a single-input, multiple-output (SIMO) mechanical system consisting of an inverted pendulum on a cart. Due to its inherent nonlinearity and unstable behavior, very few techniques currently exist that are capable of identifying this system. The challenge also lies in the coupled behavior of the system and in the difficulty of obtaining its full-range dynamics. The differential equations describing the system dynamics are determined from measurements of the system's input-output behavior. These equations are assumed to consist of the superposition, with unknown weights, of a small number of terms drawn from a large library of nonlinear terms. Under this assumption, compressed sensing allows the constituent library elements and their corresponding weights to be identified by decomposing a time-series signal of the system's outputs into a sparse superposition of corresponding time-series signals produced by the library components. The most popular techniques for nonlinear system identification entail the use of artificial neural networks (ANNs), which require a large number of measurements of the input and output data at high sampling frequencies. The method developed in this project requires very few samples, and the accuracy of reconstruction is extremely high. Furthermore, this method yields the ordinary differential equation (ODE) of the system explicitly. This is in contrast to some ANN approaches that produce only a trained network, which might lose fidelity with a change of initial conditions or when facing an input that wasn't used during training. This technique is expected to be of value in system identification of complex dynamic systems encountered in diverse fields such as biology, computation, statistics, mechanics, and electrical engineering.
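As a rough illustration of the sparse-regression idea described above (a sketch, not the thesis's actual code), the following Python snippet builds a library of candidate nonlinear terms from sampled state trajectories and recovers a sparse weight vector; the particular library terms, the synthetic pendulum-like data, and the use of scikit-learn's Lasso are all assumptions for illustration.

```python
# Hypothetical sketch of library-based sparse system identification.
import numpy as np
from sklearn.linear_model import Lasso

def build_library(x, v):
    """Candidate nonlinear terms for a pendulum-like state (assumed set)."""
    return np.column_stack([
        np.ones_like(x), x, v, np.sin(x), np.cos(x),
        v**2, v * np.cos(x), np.sin(x) * np.cos(x),
    ])

# x: angle samples, v: angular velocity, a: measured angular acceleration
t = np.linspace(0, 10, 200)
x = 0.5 * np.sin(t)
v = 0.5 * np.cos(t)
a = -9.8 * np.sin(x) - 0.1 * v            # synthetic "measured" dynamics

Theta = build_library(x, v)               # library matrix, one column per term
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Theta, a)

# Nonzero coefficients identify which library terms enter the ODE.
names = ["1", "x", "v", "sin(x)", "cos(x)", "v^2", "v*cos(x)", "sin(x)cos(x)"]
for name, w in zip(names, model.coef_):
    if abs(w) > 1e-2:
        print(f"{name}: {w:.3f}")
```

On this toy data the sparse fit keeps essentially only the sin(x) and v columns, which is the sense in which the decomposition yields the governing equation explicitly.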
Contributors: Naik, Manjish Arvind (Author) / Cochran, Douglas (Thesis advisor) / Kovvali, Narayan (Committee member) / Kawski, Matthias (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2011
Contributors: McClain, Katelyn (Performer) / Buringrud, Deanna (Contributor) / Lee, Juhyun (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-31
Description
Hardware implementation of deep neural networks is gaining significant importance. Deep neural networks are mathematical models that use learning algorithms inspired by the brain. Numerous deep learning algorithms such as multi-layer perceptrons (MLPs) have demonstrated human-level recognition accuracy in image and speech classification tasks. These networks are built from multiple layers of processing elements called neurons, with many connections between them called synapses. They therefore involve operations with a high degree of parallelism, making them computationally and memory intensive. Constrained by computing resources and memory, most applications require a neural network that uses less energy. Energy-efficient implementation of these computationally intense algorithms on neuromorphic hardware demands many architectural optimizations. One such optimization is reducing the network size through compression, and several studies have investigated compression by introducing element-wise or row-/column-/block-wise sparsity via pruning and regularization. Additionally, numerous recent works have concentrated on reducing the precision of activations and weights, some down to a single bit. However, combining various sparsity structures with binarized or very-low-precision (2-3 bit) neural networks has not been comprehensively explored. Output activations in these deep neural network algorithms are typically non-binary, making it difficult to exploit sparsity. On the other hand, biologically realistic models like spiking neural networks (SNNs) closely mimic the operations of biological nervous systems and open new avenues for brain-like cognitive computing. These networks operate on binary spikes, and they can exploit input-dependent sparsity or redundancy to dynamically scale the amount of computation, in turn leading to energy-efficient hardware implementations. This work discusses a configurable spiking neuromorphic architecture that supports multiple hidden layers by exploiting hardware reuse. It also presents design techniques for minimum-area/-energy DNN hardware with minimal degradation in accuracy. Area, performance, and energy results of the DNN and SNN hardware are reported for the MNIST dataset. The neuromorphic hardware designed for the SNN algorithm in 28nm CMOS demonstrates high classification accuracy (>98% on MNIST) and low energy (51.4-773 nJ per classification). The optimized DNN hardware designed in 40nm CMOS, combining 8X structured compression and 3-bit weight precision, achieves 98.4% accuracy at 33 nJ per classification.
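As a toy illustration of why binary spikes let hardware exploit input-dependent sparsity (a minimal sketch under assumed parameters, not the thesis's architecture), the following Python snippet implements one timestep of a leaky integrate-and-fire layer that only accumulates weights for inputs that actually fired:

```python
import numpy as np

def lif_layer_step(weights, spikes_in, v_mem, v_th=1.0, leak=0.9):
    """One timestep of a leaky integrate-and-fire (LIF) layer.

    Because inputs are binary spikes, the membrane update touches only
    the columns of `weights` whose inputs fired: the sparser the input,
    the less work, which is the property spiking hardware can exploit.
    All parameters here (threshold, leak) are illustrative assumptions.
    """
    active = np.flatnonzero(spikes_in)              # indices of input spikes
    v_mem = leak * v_mem + weights[:, active].sum(axis=1)
    spikes_out = (v_mem >= v_th).astype(np.uint8)
    v_mem[spikes_out == 1] = 0.0                    # reset neurons that fired
    return spikes_out, v_mem

# Toy usage: 64 inputs, 32 neurons, ~10% input spike rate (all assumed).
rng = np.random.default_rng(0)
W = rng.normal(0, 0.2, size=(32, 64))
v = np.zeros(32)
spikes = (rng.random(64) < 0.1).astype(np.uint8)
out, v = lif_layer_step(W, spikes, v)
```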
Contributors: Kolala Venkataramanaiah, Shreyas (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2018
Contributors: Hur, Jiyoun (Performer) / Lee, Juhyun (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-01
Description
Deep learning (DL) has proved itself to be one of the most important developments to date, with far-reaching impacts in numerous fields such as robotics, computer vision, surveillance, speech processing, machine translation, and finance. Deep networks are now widely used for countless applications because of their ability to generalize to real-world data, their robustness to noise in previously unseen data, and their high inference accuracy. With the ability to learn useful features from raw sensor data, deep learning algorithms have outperformed traditional AI algorithms and pushed the boundaries of what can be achieved with AI. In this work, we demonstrate the power of deep learning by developing a neural network to automatically detect cough instances from audio recorded in unconstrained environments. For this, 24-hour-long recordings from 9 different patients were collected and carefully labeled by medical personnel. A pre-processing algorithm is proposed to convert the event-based cough dataset into a more informative dataset with the start and end of each cough, and data augmentation is introduced to regularize the training procedure. The proposed neural network achieves 92.3% leave-one-out accuracy on data captured in the real world.
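As a hedged sketch of the kind of waveform augmentation such a training pipeline might use (the abstract does not specify the thesis's transforms; the random time shift and noise injection below are assumptions), one common recipe looks like this in Python:

```python
import numpy as np

def augment_clip(clip, rng, noise_db=-30.0, max_shift=0.1):
    """Simple audio augmentations for training regularization.

    Randomly time-shifts the clip and adds Gaussian noise at a level
    set relative to the signal RMS. Both are standard tricks; the
    thesis's exact augmentation is not described in the abstract.
    """
    n = len(clip)
    shift = rng.integers(-int(max_shift * n), int(max_shift * n) + 1)
    shifted = np.roll(clip, shift)
    rms = np.sqrt(np.mean(shifted**2) + 1e-12)
    noise = rng.normal(0, rms * 10 ** (noise_db / 20), size=n)
    return shifted + noise

rng = np.random.default_rng(42)
clip = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))  # stand-in audio
augmented = augment_clip(clip, rng)
```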

Deep neural networks are composed of multiple layers that are compute- and memory-intensive. This makes it difficult to execute these algorithms in real time with low power consumption on existing general-purpose computers. In this work, we propose hardware accelerators for a traditional AI algorithm based on random forest trees and for two representative deep convolutional neural networks (AlexNet and VGG). With the proposed acceleration techniques, ~30x performance improvement over a CPU was achieved for random forest trees. For deep CNNs, we demonstrate that much higher performance can be achieved through architecture space exploration: system-level performance and area models of the hardware primitives serve as inputs to an optimization algorithm that minimizes latency under given resource constraints. With this method, ~30 GOPS performance was achieved on Stratix V FPGA boards.
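A minimal sketch of that architecture-space-exploration loop (the cost models, design knobs, and budget below are invented for illustration; the thesis's models are far more detailed):

```python
from itertools import product

# Hypothetical design knobs for a CNN accelerator (illustrative only).
PE_COUNTS = [64, 128, 256, 512]     # processing elements
BUFFER_KB = [128, 256, 512]         # on-chip buffer size
AREA_BUDGET = 100.0                 # assumed area units

def area(pes, buf_kb):
    """Toy area model: linear in PEs and buffer size."""
    return 0.15 * pes + 0.05 * buf_kb

def latency(pes, buf_kb):
    """Toy latency model: more PEs help until memory (proxied by
    buffer size) becomes the bottleneck."""
    compute = 1e6 / pes
    memory = 5e5 / buf_kb
    return max(compute, memory)

# Exhaustive search: minimize latency subject to the area constraint.
best = min(
    (cfg for cfg in product(PE_COUNTS, BUFFER_KB) if area(*cfg) <= AREA_BUDGET),
    key=lambda cfg: latency(*cfg),
)
print("best config (PEs, buffer KB):", best, "latency:", latency(*best))
```

The abstract notes that any optimization algorithm can drive this search; exhaustive enumeration is used above only because the toy space is small.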

Hardware acceleration of DL algorithms alone is not always the most efficient way, nor is it sufficient, to achieve the desired performance. There is substantial headroom for performance improvement when algorithms are designed with hardware limitations and bottlenecks in mind. This work achieves hardware-software co-optimization for the Non-Maximal Suppression (NMS) algorithm through the proposed algorithmic changes and hardware architecture.
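For reference, here is the standard greedy NMS baseline in Python; this is the textbook algorithm, not the thesis's hardware-friendly variant, whose specific algorithmic changes are not detailed in this abstract:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximal suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]            # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with the remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop overlapping boxes
    return keep
```

The data-dependent loop and the repeated IoU computations are exactly the kind of bottleneck that motivates co-designing the algorithm with the hardware.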

With CMOS scaling coming to an end and memory bandwidth bottlenecks growing, CMOS-based systems might not scale well enough to accommodate the requirements of more complicated and deeper neural networks in the future. In this work, we explore RRAM crossbars and arrays as a compact, high-performance, and energy-efficient alternative to CMOS accelerators for deep learning training and inference. We propose and implement RRAM periphery read and write circuits, achieving ~3000x performance improvement in online dictionary learning compared to a CPU.
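Conceptually, an RRAM crossbar computes a matrix-vector product in one analog step: cell conductances store the weights, input voltages drive the rows, and each column current sums the products via Ohm's and Kirchhoff's laws. A toy functional model of the ideal case (conductance and voltage ranges are assumed values, and no device non-idealities are modeled):

```python
import numpy as np

def crossbar_mvm(G, v_in):
    """Ideal crossbar read: column current j = sum_i G[i, j] * v_in[i]."""
    return v_in @ G

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(64, 32))   # cell conductances (S), assumed range
v = rng.uniform(0.0, 0.3, size=64)           # row read voltages (V), assumed
currents = crossbar_mvm(G, v)                # every cell does one MAC in parallel
```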

This work also examines realistic RRAM devices and their non-idealities. We present an in-depth study of the effects of RRAM non-idealities on inference accuracy when a pretrained model is mapped onto RRAM-based accelerators. To mitigate this issue, we propose Random Sparse Adaptation (RSA), a novel scheme that tunes the model to compensate for the faults of the RRAM array onto which it is mapped. The proposed method achieves inference accuracy much higher than the traditional Read-Verify-Write (R-V-W) method, and RSA recovers lost inference accuracy 100x-1000x faster than R-V-W. Using 32-bit high-precision RSA cells, we achieved ~10% higher accuracy on faulty RRAM arrays than can be achieved by mapping a deep network onto a 32-level RRAM array with no variations.
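As a hedged sketch of how such non-idealities can be studied in simulation (the quantization scheme, the lognormal variation model, and all parameters below are assumptions for illustration, not the thesis's device models), one can map a pretrained weight matrix through the RRAM imperfections and measure the resulting error:

```python
import numpy as np

def quantize_to_levels(w, n_levels=32):
    """Map weights to discrete conductance levels (uniform grid, assumed)."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((w - lo) / step) * step

def add_device_variation(w, sigma=0.05, rng=None):
    """Multiplicative per-cell variation (lognormal, an assumed model)."""
    rng = rng if rng is not None else np.random.default_rng()
    return w * rng.lognormal(mean=0.0, sigma=sigma, size=w.shape)

rng = np.random.default_rng(7)
W = rng.normal(0, 1, size=(128, 128))        # stand-in pretrained layer weights
W_rram = add_device_variation(quantize_to_levels(W), sigma=0.05, rng=rng)
rel_err = np.linalg.norm(W_rram - W) / np.linalg.norm(W)
print(f"relative weight error after mapping: {rel_err:.3f}")
```

Running the perturbed weights through the network's test set then quantifies the accuracy loss that schemes like RSA are designed to recover.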
Contributors: Mohanty, Abinash (Author) / Cao, Yu (Thesis advisor) / Seo, Jae-Sun (Committee member) / Vrudhula, Sarma (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
With the advent of sophisticated computer technology, we increasingly see the use of computational techniques in the study of problems from a variety of disciplines, including the humanities. In a field such as poetry, where classic works are subject to frequent re-analysis over the course of years, decades, or even centuries, there is a certain demand for fresh approaches to familiar tasks, and such breaks from convention may even be necessary for the advancement of the field. Existing quantitative studies of poetry have employed computational techniques in their analyses; however, there remains work to be done with regard to deploying deep neural networks on large corpora of poetry to classify portions of the works based on particular features. While applications of neural networks to social media sites, consumer reviews, and other web-originated data are common within computational linguistics and natural language processing, comparatively little work has been done on the computational analysis of poetry using the same techniques. In this work, I lay out the first steps for the study of poetry using neural networks. Using a convolutional neural network to classify author birth date, I was able not only to extract a non-trivial signal from the data, but also to identify clustering in by-author model accuracy. While definitive conclusions about the cause of this clustering were not reached, investigating it reveals immense heterogeneity in the traits of accurately classified authors. Further study may unpack this clustering and reveal key insights about how temporal information is encoded in poetry. The study of poetry using neural networks remains very open but shows potential to be an interesting and deep area of work.
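As a minimal sketch of the kind of convolutional classifier described (the architecture, vocabulary size, and binning of birth dates below are assumptions; the study's actual model is not specified in this abstract), a 1-D text CNN in PyTorch might look like:

```python
import torch
import torch.nn as nn

class PoemCNN(nn.Module):
    """1-D convolution over word embeddings, max-pooled to a
    birth-date-bin prediction (all sizes are illustrative)."""
    def __init__(self, vocab_size=20000, embed_dim=100, n_bins=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2)
        self.head = nn.Linear(128, n_bins)   # birth date bucketed into bins

    def forward(self, tokens):               # tokens: (batch, seq_len) int64
        x = self.embed(tokens).transpose(1, 2)          # -> (batch, embed, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values  # global max pool
        return self.head(x)

model = PoemCNN()
logits = model(torch.randint(0, 20000, (4, 256)))  # toy batch of 4 poems
```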
Contributors: Goodloe, Oscar Laurence (Author) / Nishimura, Joel (Thesis director) / Broatch, Jennifer (Committee member) / School of Mathematical and Natural Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Contributors: Zaleski, Kimberly (Contributor) / Kazarian, Trevor (Performer) / Ryan, Russell (Performer) / IN2ATIVE (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-09-28