Description
To cope with the decreasing availability of symphony jobs and collegiate faculty positions, many musicians are pursuing less traditional career paths. To combat declining audiences, musicians are also exploring ways to cultivate new and enthusiastic listeners through relevant and engaging performances. In response to these challenges, many community-based chamber music ensembles have formed throughout the United States. These groups not only perform classical music but serve the needs of their communities as well. The problem, however, is that many musicians have not learned the business skills necessary to create these career opportunities. In this document I discuss the steps ensembles must take to develop sustainable careers. I first analyze how groups build a strong foundation by getting to know their communities and creating core values. I then discuss branding and marketing so ensembles can develop a public image and learn how to publicize themselves. This is followed by an investigation of how ensembles make and organize their money. I then examine the ways groups ensure long-lasting relationships with their communities and within the ensemble. I end by presenting three case studies of professional ensembles to show how groups create and maintain successful careers. Ensembles must develop entrepreneurship skills in addition to cultivating their artistry; these business concepts are crucial to the longevity of chamber groups. Through interviews with successful ensemble members and my own experiences in the Tetra String Quartet, I provide a guide for musicians creating a community-based ensemble.
Contributors: Dalbey, Jenna (Author) / Landschoot, Thomas (Thesis advisor) / McLin, Katherine (Committee member) / Ryan, Russell (Committee member) / Solis, Theodore (Committee member) / Spring, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
American Primitive is a composition for wind ensemble scored for flute, oboe, clarinet, bass clarinet, alto, tenor, and baritone saxophones, trumpet, horn, trombone, euphonium, tuba, piano, and percussion. The piece is approximately twelve minutes in duration and was written between September and December 2013. American Primitive is absolute music (i.e., it does not follow a specific narrative) comprising blocks of distinct, contrasting gestures that bookend a central region of delicate textural layering and minimal gestural contrast. Though three gestures (a descending interval followed by a smaller ascending interval, a dynamic swell, and a chordal "chop") were consciously employed throughout, it is the first of the three that gives the work its sense of unification and overall coherence. Additionally, the work challenges listeners' expectations of traditional wind ensemble music by featuring the trumpet as a quasi-soloist whose material is predominantly inspired by transcriptions of jazz solos. The ensemble at times mimics and further develops this jazz-inspired material, often in a similarly soloistic manner, while the trumpet maintains its role throughout. This dialogue between the "soloist" and the "ensemble" further skews listeners' conceptions of traditional wind ensemble music by featuring almost every instrument in the ensemble. Though the term "American Primitive" is usually associated with the "naïve art" movement, it bears no relation to the music presented in this work; instead, it refers to the author's own compositional attitudes, education, and aesthetic interests.
Contributors: Jandreau, Joshua (Composer) / Rockmaker, Jody D (Thesis advisor) / Rogers, Rodney I (Committee member) / Demars, James R (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This project is a practical annotated bibliography of original works for oboe trio with the specific instrumentation of two oboes and English horn. Presenting descriptions of 116 readily available oboe trios, this project is intended to promote awareness, accessibility, and performance of compositions within this genre.

The annotated bibliography focuses exclusively on original, published works for two oboes and English horn. Unpublished works, arrangements, works that are out of print and not available through interlibrary loan, or works that feature slightly altered instrumentation are not included.

Entries in this annotated bibliography are listed alphabetically by the composer's last name. Each entry includes the composer's dates and a brief biography, followed by the title of the work and its composition date, commission, and dedication. Also included are the names of publishers, the length of the entire piece in minutes and seconds, and an incipit of the first one to eight measures of each movement of the work.

In addition to providing a comprehensive and detailed bibliography of oboe trios, this document traces the history of the oboe trio and includes biographical sketches of each composer cited, allowing readers to place the genre of oboe trios, and each individual composition, into its historical context. Four appendices list the trios alphabetically by composer's last name, chronologically by date of composition, and by country of origin, and provide a list of publications of Ludwig van Beethoven's oboe trios from the 1940s and earlier.
Contributors: Sassaman, Melissa Ann (Author) / Schuring, Martin (Thesis advisor) / Buck, Elizabeth (Committee member) / Holbrook, Amy (Committee member) / Hill, Gary (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis introduces the background of QR decomposition and its applications. QR decomposition using Givens rotations is an efficient method of avoiding a direct matrix inverse when solving the least-squares minimization problem, a typical approach to weight calculation in adaptive beamforming. The thesis then presents the Givens rotations algorithm and two general VLSI (very large scale integration) architectures for numerical QR decomposition: the triangular systolic array and the linear systolic array. To this end, a four-input-channel triangular systolic array using a 16-bit fixed-point format and a five-input-channel linear systolic array are implemented on an FPGA (field-programmable gate array). Post-place-and-route static timing reports show that estimated clock frequencies of 65 MHz and 135 MHz, respectively, can be achieved on a Xilinx Virtex-6 xc6vlx240t chip. The report also proposes a new method for testing the dynamic range of the QR decomposition (QR-D); both architectures achieve a dynamic range of approximately 110 dB.
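
To make the least-squares connection concrete, the following is a minimal NumPy sketch of QR decomposition by Givens rotations applied to a least-squares solve. It is a floating-point software illustration of the algorithm only, not the 16-bit fixed-point systolic-array design described in the thesis; the function names are illustrative.

```python
import numpy as np

def givens(a, b):
    """Return c, s such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def qr_givens_solve(A, y):
    """Solve min ||A x - y|| by triangularizing A with Givens rotations,
    avoiding an explicit matrix inverse."""
    m, n = A.shape
    R = A.astype(float).copy()
    z = y.astype(float).copy()
    for j in range(n):                    # zero out column j below the diagonal
        for i in range(m - 1, j, -1):     # one plane rotation per subdiagonal entry
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, j:] = G @ R[i - 1:i + 1, j:]
            z[i - 1:i + 1] = G @ z[i - 1:i + 1]   # apply same rotation to RHS
    return np.linalg.solve(R[:n, :n], z[:n])      # back-substitution on R x = z

A = np.random.randn(8, 4)
y = np.random.randn(8)
assert np.allclose(qr_givens_solve(A, y), np.linalg.lstsq(A, y, rcond=None)[0])
```

In adaptive beamforming, the same routine would compute the weight vector from a data snapshot matrix and a desired response; in the systolic-array architectures, each rotation is mapped onto an array cell rather than executed sequentially.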
Contributors: Yu, Hanguang (Author) / Bliss, Daniel W (Thesis advisor) / Ying, Lei (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The multidimensional (MD) discrete Fourier transform (DFT) is a key kernel in many signal processing applications, such as radar imaging and medical imaging. Traditionally, a two-dimensional (2-D) DFT is computed using row-column (RC) decomposition, where one-dimensional (1-D) DFTs are computed along the rows followed by 1-D DFTs along the columns. However, architectures based on RC decomposition are not efficient for large inputs, which must be stored in external memory based on synchronous dynamic RAM (SDRAM).

In this dissertation, an efficient architecture to implement the 2-D DFT for large input sizes is first proposed. This architecture achieves very high throughput by exploiting the inherent parallelism of a novel 2-D decomposition and by utilizing the row-wise burst access pattern of the external SDRAM. In addition, an automatic IP generator is provided for mapping this architecture onto a reconfigurable platform of Xilinx Virtex-5 devices. For a 2048x2048 input, the proposed architecture is 1.96 times faster than an RC-decomposition-based implementation under the same memory constraints, and it also outperforms other existing implementations.

While the proposed 2-D DFT IP achieves high performance, its output is bit-reversed. For systems that require the output in natural order, using this DFT IP would incur timing overhead. To solve this problem, a new bandwidth-efficient MD DFT IP is proposed that is transpose-free and produces outputs in natural order. It is based on a novel decomposition algorithm that takes into account the output order, FPGA resources, and the characteristics of off-chip memory access. An IP generator is designed and integrated into an in-house FPGA development platform, AlgoFLEX, for easy verification and fast integration. The corresponding 2-D and 3-D DFT architectures are ported onto the BEE3 board and their performance measured and analyzed. The results show that the architecture maintains the maximum memory bandwidth throughout the whole procedure while avoiding the matrix transpose operations used in most other MD DFT implementations. The proposed architecture has also been ported onto the Xilinx ML605 board; when clocked at 100 MHz, 2048x2048 images of complex single-precision data can be processed in less than 27 ms.

Finally, transpose-free imaging flows for the range-Doppler algorithm (RDA) and the chirp-scaling algorithm (CSA) in SAR imaging are proposed. The corresponding implementations take advantage of the memory access patterns designed for the MD DFT IP and have superior timing performance. The RDA and CSA flows are mapped onto a unified architecture implemented on an FPGA platform. When clocked at 100 MHz, RDA and CSA computations on 4096x4096 data complete in 323 ms and 162 ms, respectively. This implementation outperforms existing SAR image accelerators based on FPGAs and GPUs.
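
For background on the RC baseline the dissertation improves upon, the following is a minimal NumPy sketch of row-column 2-D DFT decomposition. It also hints at why RC stresses SDRAM: the row pass makes contiguous (burst-friendly) accesses, while the column pass makes strided ones. The proposed hardware decomposition itself is not reproduced here.

```python
import numpy as np

def dft2_row_column(x):
    """2-D DFT via row-column (RC) decomposition."""
    X = np.fft.fft(x, axis=1)      # 1-D DFTs along rows: contiguous, burst-friendly
    return np.fft.fft(X, axis=0)   # 1-D DFTs along columns: strided, SDRAM-unfriendly

x = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
assert np.allclose(dft2_row_column(x), np.fft.fft2(x))
```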
Contributors: Yu, Chi-Li (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Karam, Lina (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2012
Contributors: Pagano, Caio, 1940- (Performer) / Mechetti, Fabio (Conductor) / Buck, Elizabeth (Performer) / Schuring, Martin (Performer) / Spring, Robert (Performer) / Rodrigues, Christiano (Performer) / Landschoot, Thomas (Performer) / Rotaru, Catalin (Performer) / Avanti Festival Orchestra (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-02
Description
The rapid improvement in computation capability has made deep convolutional neural networks (CNNs) a great success in recent years on many computer vision tasks, with significantly improved accuracy. During the inference phase, many applications demand low-latency processing of a single image under strict power-consumption requirements, which reduces the efficiency of GPUs and other general-purpose platforms and creates opportunities for specialized acceleration hardware, e.g. FPGAs, whose digital circuits can be customized for deep learning inference. However, deploying CNNs on portable and embedded systems remains challenging due to large data volumes, intensive computation, varying algorithm structures, and frequent memory accesses. This dissertation proposes a complete design methodology and framework to accelerate the inference process of various CNN algorithms on FPGA hardware with high performance, efficiency, and flexibility.

As convolution contributes most of the operations in CNNs, the convolution acceleration scheme significantly affects the efficiency and performance of a hardware CNN accelerator. Convolution involves multiply-and-accumulate (MAC) operations nested within four levels of loops. Without fully studying convolution loop optimization before the hardware design phase, the resulting accelerator can hardly exploit data reuse or manage data movement efficiently. This work overcomes these barriers by quantitatively analyzing and optimizing the design objectives (e.g. memory access) of the CNN accelerator over multiple design variables. An efficient dataflow and hardware architecture for CNN acceleration are proposed that minimize data communication while maximizing resource utilization to achieve high performance.
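
For reference, here is a naïve Python sketch of the four conceptual loop levels of convolution mentioned above; the ordering, unrolling, and tiling of exactly these loops form the design space the accelerator optimizes. Shapes and names are illustrative.

```python
import numpy as np

def conv_layer(inp, weights):
    """Naive convolution: MAC operations nested in four conceptual loop levels.
    inp: (C, Hin, Win) input feature maps; weights: (M, C, K, K) kernels."""
    C, Hin, Win = inp.shape
    M, _, K, _ = weights.shape
    H, W = Hin - K + 1, Win - K + 1          # 'valid' convolution, stride 1
    out = np.zeros((M, H, W))
    for m in range(M):                       # Loop-4: across output feature maps
        for c in range(C):                   # Loop-3: across input feature maps
            for h in range(H):               # Loop-2: across pixels of one map
                for w in range(W):
                    for kh in range(K):      # Loop-1: within one kernel window
                        for kw in range(K):
                            out[m, h, w] += inp[c, h + kh, w + kw] * weights[m, c, kh, kw]
    return out
```

Reordering or tiling these loops changes which operands stay in on-chip buffers and how often off-chip memory is touched, which is precisely the trade-off the quantitative analysis targets.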

Although great performance and efficiency can be achieved by customizing the FPGA hardware for each CNN model, doing so requires significant effort and expertise, leading to long development times that make it difficult to keep up with the rapid development of CNN algorithms. In this work, we present an RTL-level CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, enabling fast high-level prototyping of CNNs from software to FPGA while retaining the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model the different operations at each layer. The integration and dataflow of the physical modules are predefined in a top-level system template and reconfigured during compilation for a given CNN algorithm. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule, so that even highly irregular and complex network topologies, e.g. GoogLeNet and ResNet, can be compiled. The proposed methodology is demonstrated with various CNN algorithms, e.g. NiN, VGG, GoogLeNet, and ResNet, on two different standalone FPGAs, achieving state-of-the-art performance.
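
The compile flow can be pictured with a small sketch: a network description is bound to a fixed library of RTL module types, yielding the layer-by-layer schedule that the runtime controller executes. Everything below (descriptor fields, module names, the compile_schedule helper) is a hypothetical illustration, not the compiler's actual interface.

```python
# Hypothetical network description; the real compiler consumes trained CNN models.
network = [
    {"layer": "conv1", "op": "conv", "out_maps": 64,  "kernel": 7, "stride": 2},
    {"layer": "pool1", "op": "pool", "kernel": 3, "stride": 2},
    {"layer": "conv2", "op": "conv", "out_maps": 192, "kernel": 3, "stride": 1},
    {"layer": "fc1",   "op": "fc",   "out_maps": 1000},
]

RTL_LIBRARY = {"conv": "conv_unit", "pool": "pool_unit", "fc": "fc_unit"}

def compile_schedule(layers):
    """Bind each layer to an RTL module type and emit the sequential
    layer-by-layer schedule followed by the runtime controller."""
    return [(step, RTL_LIBRARY[l["op"]], l) for step, l in enumerate(layers)]

for step, module, cfg in compile_schedule(network):
    print(f"step {step}: {module} <- {cfg['layer']}")
```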

Even with an optimized acceleration strategy, many design options remain, e.g. the degree and dimension of computation parallelism, the size of on-chip buffers, and the external memory bandwidth, all of which affect the utilization of computation resources and the efficiency of data communication, and ultimately the performance and energy consumption of the accelerator. The accelerator's large design space makes it impractical to explore the optimal design choice during the real implementation phase. Therefore, a performance model is proposed in this work to quantitatively estimate accelerator performance and resource utilization. By this means, performance bottlenecks and design bounds can be identified and the optimal design option explored early in the design phase.
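
To give a flavor of what such a model computes, here is a minimal roofline-style sketch: runtime is bounded by whichever is slower, compute on the parallel MAC units or off-chip data movement. This is an intentionally crude simplification for illustration, not the dissertation's model, and all parameter values below are made up.

```python
def estimate_runtime(num_macs, pe_count, freq_hz, bytes_moved, bw_bytes_per_s):
    """Roofline-style bound: the accelerator is limited either by its
    MAC throughput or by external memory bandwidth."""
    t_compute = num_macs / (pe_count * freq_hz)   # assumes every PE busy each cycle
    t_memory = bytes_moved / bw_bytes_per_s       # assumes perfectly streamed traffic
    return max(t_compute, t_memory), ("compute" if t_compute >= t_memory else "memory")

# Example: a 3x3 conv layer, 256 -> 256 maps on 56x56 outputs (hypothetical numbers).
macs = 256 * 256 * 56 * 56 * 3 * 3
t, bound = estimate_runtime(macs, pe_count=1024, freq_hz=200e6,
                            bytes_moved=50e6, bw_bytes_per_s=12.8e9)
print(f"{t * 1e3:.2f} ms, {bound}-bound")
```

A real model would add terms for buffer capacity, DRAM burst efficiency, and per-layer loop tiling, which is how bottlenecks and design bounds are identified before implementation.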
Contributors: Ma, Yufei (Author) / Vrudhula, Sarma (Thesis advisor) / Seo, Jae-Sun (Thesis advisor) / Cao, Yu (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2018
Contributors: De La Cruz, Nathaniel (Performer) / LoGiudice, Rosa (Contributor) / Tallino, Michael (Performer) / McKinch, Riley (Performer) / Li, Yuhui (Performer) / Armenta, Tyler (Contributor) / Gonzalez, David (Performer) / Jones, Tarin (Performer) / Ryall, Blake (Performer) / Senseman, Stephen (Performer)
Created: 2018-10-10
Description
Error correcting systems have put increasing demands on system designers, due both to growing error correction requirements and to higher throughput targets. These requirements have led to greater silicon area and power consumption and have forced system designers to make trade-offs in Error Correcting Code (ECC) functionality. Solutions that increase the efficiency of ECC systems are therefore very important to system designers and have become a heavily researched area.

Many such systems incorporate the Bose-Chaudhuri-Hocquenghem (BCH) method of error correcting in a multi-channel configuration. BCH is a commonly used code because of its configurability, low storage overhead, and low decoding requirements when compared to other codes. Multi-channel configurations are popular with system designers because they offer a straightforward way to increase bandwidth: the ECC hardware is duplicated for each channel, and the throughput increases linearly with the number of channels. The combination of these two technologies provides a configurable, high-throughput ECC architecture.

This research proposes a new method to optimize a BCH error correction decoder in multi-channel configurations. In this thesis, I examine how error frequency affects the utilization of BCH hardware. Rather than implementing each decoder as a single pipeline of independent decoding stages, the channels are considered together and served by a pool of decoding stages. Modified hardware blocks for handling common cases are included, and the pool is sized based on an acceptable, but negligible, decrease in performance.
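
The sizing intuition can be illustrated with a toy model (my own sketch, not the thesis's analysis): if errors are rare, the probability that more channels simultaneously need a full decoding stage than a small shared pool provides is negligible. The error model and parameter values below are assumptions.

```python
import random

def stall_probability(channels=8, pool_size=2, p_error=0.01, trials=200_000):
    """Toy Monte Carlo estimate: a channel demands a heavyweight decoding
    stage only when its codeword actually contains errors; the pool stalls
    when demand exceeds pool_size in the same decode window."""
    stalls = 0
    for _ in range(trials):
        demand = sum(random.random() < p_error for _ in range(channels))
        if demand > pool_size:
            stalls += 1
    return stalls / trials

# With rare errors, a pool of 2 stages serves 8 channels with negligible stalling.
print(f"stall probability: {stall_probability():.6f}")
```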
Contributors: Dill, Russell (Author) / Shrivastava, Aviral (Thesis advisor) / Oh, Hyunok (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2015