Description
Process variations have become increasingly important for scaled technologies starting at 45nm. The increased variations are primarily due to random dopant fluctuations, line-edge roughness and oxide thickness fluctuation. These variations greatly impact all aspects of circuit performance and pose a grand challenge to future robust IC design. To improve robustness, an efficient methodology is required that considers the effect of variations in the design flow. Analyzing the timing variability of complex circuits with HSPICE simulations is very time consuming. This thesis proposes a quick and accurate analytical model to predict variability in CMOS circuits. Several analytical models exist to estimate nominal delay performance, but very little work has been done to accurately model delay variability. The proposed model is comprehensive and estimates nominal delay and variability as a function of transistor width, load capacitance and transition time. First, models are developed for library gates and their accuracy is verified with HSPICE simulations for the 45nm and 32nm technology nodes. The difference between predicted and simulated σ/μ for the library gates is less than 1%. Next, the accuracy of the nominal delay model is verified for larger circuits, including ISCAS'85 benchmark circuits. For 45nm technology, the model-predicted results are within 4% of HSPICE-simulated results and take a small fraction of the simulation time. Delay variability is analyzed for various paths, and it is observed that non-critical paths can become critical because of Vth variation. Variability analysis of the shortest paths shows that the rate of hold violations increases sharply with increasing Vth variation.
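The kind of analytical estimate described above can be illustrated with a generic alpha-power-law delay model and a first-order sensitivity to Vth variation. This is a sketch of the general technique, not the model developed in the thesis; the constants (k, alpha, sigma_Vth) are illustrative assumptions.

```python
import math

def gate_delay(w, c_load, vdd=1.0, vth=0.3, alpha=1.3, k=0.5):
    """Nominal gate delay from an alpha-power-law model (arbitrary units):
    delay ~ k * C_load * Vdd / (W * (Vdd - Vth)^alpha)."""
    return k * c_load * vdd / (w * (vdd - vth) ** alpha)

def delay_sigma_over_mu(sigma_vth=0.03, vdd=1.0, vth=0.3, alpha=1.3):
    """First-order sigma/mu of delay under Vth variation.
    Since d(ln delay)/d(Vth) = alpha / (Vdd - Vth), a small Vth spread
    maps linearly onto relative delay spread."""
    return (alpha / (vdd - vth)) * sigma_vth

nominal = gate_delay(w=1.0, c_load=10.0)
variability = delay_sigma_over_mu(sigma_vth=0.03)
```

Under this model, doubling the load capacitance doubles the nominal delay, while σ/μ depends only on the operating point, consistent with variability being reported as a ratio.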
Contributors: Gummalla, Samatha (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Delirium is a piece for large wind ensemble that synthesizes compositional techniques to generate unique juxtapositions of contrasting musical elements. The piece is about 8:30 long and uses the full complement of winds, brass, and percussion. Although the composition begins tonally, chromatic alterations gradually shift the melodic content outside of the tonal center. In addition to changes in the melody, octatonic, chromatic, and synthetic scales and quartal and quintal harmonies are progressively introduced throughout the piece to add color and create dissonance. Delirium contains four primary sections that are all related by chromatic mediant. The subdivisions of the first part create abrupt transitions between contrasting material, evocative of the symptoms of delirium. As each sub-section progresses, the A minor tonality of the opening gradually gives way to increased chromaticism and dissonance. The next area transitions to C minor and begins to feature octatonic scales, secundal harmonies, and chromatic flourishes more prominently. The full sound of the ensemble then drops to solo instruments in the third section, now in G# minor, where the elements of the previous section are built upon with the addition of synthetic scales and quartal harmonies. The last division, before the recapitulation of the opening material, provides a drastic change in atmosphere as the chromatic elements from before are removed and the tense sound of the quartal harmonies is replaced with quintal sonorities and a more tonal melody. The tonality of this final section is used to return to the opening material. After an incomplete recapitulation, the descending motive that is used throughout the piece, which can be found in measure 61 in the flutes, is inverted and layered by minor 3rds.
This inverted figure builds to the same sonority found in measure 138 before ending on an F# chord, a minor third away from the A minor tonal center of the opening, where the piece seems as though it should end.
Contributors: Bell, Jeremy, 1986- (Composer) / Rogers, Rodney (Thesis advisor) / Oldani, Robert (Committee member) / Levy, Benjamin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many of the works of Dominick Argento have been researched and analyzed, but his choral work Evensong: Of Love and Angels has received limited attention thus far. Written in memoriam for his wife Carolyn Bailey Argento, Evensong draws its musical material from her initials C.B.A. These letters, translated into note names, form a conspicuous head motive that is present in each movement of the work, and it serves multiple functions: as a melodic feature, as the foundation for a twelve-tone row, and as a harmonic base. This paper provides an overview of the work's conception with specific relation to Argento's biographical details, compositional style, and work habits; a brief review of the critical reception of the work; and a succinct analysis of the form and cyclical materials found in each movement.
Contributors: Page, Carrie Leigh, 1980- (Author) / Rogers, Rodney (Thesis advisor) / DeMars, James (Committee member) / Levy, Benjamin (Committee member) / Oldani, Robert (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Everyday Arias for soprano and orchestra was composed largely in Arizona and completed in February 2011. The text was taken from a small collection of the composer's own poetry referencing her memories of life in rural Mississippi. Everyday Arias endeavors to elevate these prosaic experiences and settings to art, expressing the everyday as beautiful and worthy of artistic treatment. The primary compositional model for this work was Samuel Barber's Knoxville: Summer of 1915, but other influences included Charles Ives, Aaron Copland, Benjamin Britten, and Dominick Argento. Barber's and Argento's musical treatment of prose style seemed particularly appropriate to the goals of Everyday Arias. Ives and Copland used hymn tunes both to evoke certain associations of worship and as sources of interesting material. The vocal writing of all five composers was influential, but the orchestration techniques for winds are largely a product of studying Ives and Argento, while many string gestures are more obviously tied to Britten and - more historically - Debussy. The primary motive that weaves through the work features an ascending major second followed by a descending perfect fourth, in a long-short-long rhythmic pattern. As a melodic fragment, the motive is often inverted to a descending-ascending pattern, or distorted slightly by expanding the second interval to a perfect fifth, or used in retrograde. The motive was derived from the first measure of the melody "Toplady" (1830) by Thomas Hastings, better known as the hymn "Rock of Ages." In the first movement, the motive is used most frequently in sequences. The second movement treats the motive as a melodic element and as a unit in ostinati. The final movement humorously transforms it into a syncopated gesture to evoke ragtime.
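The motive manipulations described above can be sketched as operations on an interval list. The semitone encoding (major second up = +2, perfect fourth down = -5, perfect fifth = 7 semitones) is standard, but the starting pitch and the way the transformations are combined here are illustrative, not drawn from the score.

```python
# The motive as semitone intervals: up a major second, down a perfect fourth.
MOTIVE = [2, -5]

def invert(intervals):
    """Mirror each interval: ascending-descending becomes descending-ascending."""
    return [-i for i in intervals]

def retrograde(intervals):
    """Reverse temporal order; each interval also flips direction."""
    return [-i for i in reversed(intervals)]

def expand_second_interval(intervals):
    """Stretch the perfect fourth (5 semitones) to a perfect fifth (7)."""
    return [intervals[0], -7 if intervals[1] == -5 else intervals[1]]

def realize(start_pitch, intervals):
    """Turn an interval list into MIDI pitches from a starting note."""
    pitches = [start_pitch]
    for i in intervals:
        pitches.append(pitches[-1] + i)
    return pitches
```

Realizing the motive from middle C (MIDI 60) gives the pitches 60, 62, 57; the inverted form descends a major second and then rises a perfect fourth.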
Contributors: Page, Carrie Leigh (Composer) / Rogers, Rodney (Thesis advisor) / DeMars, James (Committee member) / Levy, Benjamin (Committee member) / Oldani, Robert (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Redundant Binary (RBR) number representations have been extensively used in the past for high-throughput Digital Signal Processing (DSP) systems. Data-path components based on this number system have smaller critical path delay but larger area compared to conventional two's complement systems. This work explores the use of RBR number representation for implementing high-throughput DSP systems that are also energy-efficient. Data-path components such as adders and multipliers are evaluated with respect to critical path delay, energy and Energy-Delay Product (EDP). A new design for a RBR adder with very good EDP performance has been proposed. The corresponding RBR parallel adder has a much lower critical path delay and EDP compared to two's complement carry select and carry look-ahead adder implementations. Next, several RBR multiplier architectures are investigated and their performance compared to two's complement systems. These include two new multiplier architectures: a purely RBR multiplier where both the operands are in RBR form, and a hybrid multiplier where the multiplicand is in RBR form and the other operand is represented in conventional two's complement form. Both the RBR and hybrid designs are demonstrated to have better EDP performance compared to conventional two's complement multipliers. The hybrid multiplier is also shown to have a superior EDP performance compared to the RBR multiplier, with much lower implementation area. Analysis on the effect of bit-precision is also performed, and it is shown that the performance gain of RBR systems improves for higher bit precision. Next, in order to demonstrate the efficacy of the RBR representation at the system-level, the performance of RBR and hybrid implementations of some common DSP kernels such as Discrete Cosine Transform, edge detection using Sobel operator, complex multiplication, Lifting-based Discrete Wavelet Transform (9, 7) filter, and FIR filter, is compared with two's complement systems. 
It is shown that for relatively large computation modules, the RBR to two's complement conversion overhead gets amortized. For high-complexity systems, at iso-throughput, both the hybrid and RBR implementations are demonstrated to be superior, with lower average energy consumption. For low-complexity systems, the conversion overhead is significant and outweighs the EDP gain obtained from RBR computation.
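As a rough sketch of the number system itself (not of the adder or multiplier designs proposed in the work), a redundant binary number uses the digit set {-1, 0, 1}, so a single value has many encodings; converting back to two's complement requires subtracting the negative-digit vector from the positive-digit vector, the full-width operation behind the conversion overhead discussed above.

```python
def rbr_value(digits):
    """Value of a redundant binary number with digits in {-1, 0, 1}, LSB first."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

def rbr_to_twos_complement(digits):
    """Split into positive and negative digit vectors, then subtract.
    The subtraction ripples across the full width -- this is the
    RBR-to-two's-complement conversion overhead."""
    pos = sum(1 << i for i, d in enumerate(digits) if d == 1)
    neg = sum(1 << i for i, d in enumerate(digits) if d == -1)
    return pos - neg

# Redundancy: the value 3 has several encodings (LSB first).
a = rbr_value([1, 1, 0])    # 1 + 2
b = rbr_value([-1, 0, 1])   # -1 + 4
```

Because digits carry sign, position-wise addition can be made carry-free, which is where the shorter critical path of RBR adders comes from; amortizing the one-time conversion cost over a large computation is what makes the system-level comparison favorable.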
Contributors: Mahadevan, Rupa (Author) / Chakrabarti, Chaitali (Thesis advisor) / Kiaei, Sayfe (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In this work, we present approximate adders and multipliers to reduce the data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency and power consumption compared to their accurate counterparts and produce fairly accurate results. We build upon the work on approximate adders and multipliers presented in [23] and [24]. First, we show how the choice of algorithm and parallel adder design can be used to implement the 2D Discrete Cosine Transform (DCT) with good performance but low area. Our implementation of the 2D DCT has PSNR performance comparable to that of the algorithm presented in [23], with a ~35-50% reduction in area. Next, we use the approximate 2x2 multiplier presented in [24] to implement parallel approximate multipliers. We demonstrate that if some of the 2x2 multipliers in the design of the parallel multiplier are accurate, the accuracy of the multiplier improves significantly, especially when two large numbers are multiplied. We choose the Gaussian FIR Filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of our proposed approximate multiplier. We show that applying the proposed approximate multiplier improves the PSNR performance of the 32x32 FFT implementation by 4.7 dB compared to the implementation using the approximate multiplier described in [24]. We also implement a state-of-the-art image enlargement algorithm, namely Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks and the design is synthesized using 90 nm technology. We show that a 64x64 image can be processed in 496.48 µs when clocked at 100 MHz. The average PSNR performance of our implementation using accurate parallel adders and multipliers is 31.33 dB, and that using approximate parallel adders and multipliers is 30.86 dB, when evaluated against the original image.
The PSNR performance of both designs is comparable to the performance of the double precision floating point MATLAB implementation of the algorithm.
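The specific 2x2 block of [24] is not reproduced here; the sketch below assumes a commonly used approximation (encoding 3 x 3 as 7 so the product fits in three bits) and shows how a 4x4 multiplier is tiled from 2x2 blocks, and how making only the most significant block exact improves accuracy for large operands, mirroring the observation above.

```python
def approx2x2(a, b):
    """Approximate 2x2 multiplier: exact for all inputs except 3*3 -> 7,
    which lets the output use 3 bits instead of 4 (assumed design)."""
    return 7 if a == 3 and b == 3 else a * b

def approx4x4(a, b, msb_exact=False):
    """4x4 multiplier tiled from four 2x2 blocks:
    a*b = 16*(ah*bh) + 4*(ah*bl + al*bh) + al*bl."""
    ah, al = a >> 2, a & 3
    bh, bl = b >> 2, b & 3
    mul_hi = (lambda x, y: x * y) if msb_exact else approx2x2
    return ((mul_hi(ah, bh) << 4)
            + ((approx2x2(ah, bl) + approx2x2(al, bh)) << 2)
            + approx2x2(al, bl))
```

For the worst case 15 x 15 (exact product 225), the fully approximate version returns 175, while making only the most significant 2x2 block exact raises the result to 207, a much smaller error at the cost of a single accurate block.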
Contributors: Vasudevan, Madhu (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Gupta, Sandeep (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. 
With this approach, the real time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.
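The sequential Bayesian idea can be illustrated with a bootstrap particle filter driven by a Paris-law crack-growth model. This is a generic stand-in, not the estimator of the thesis: the Paris constants, noise levels, and Gaussian likelihood below are all illustrative assumptions.

```python
import math
import random

def paris_step(a, d_n=100.0, c=1e-10, m=3.0, d_sigma=75.0):
    """One block of fatigue cycles under the Paris law:
    da/dN = C * (dK)^m with dK = d_sigma * sqrt(pi * a)."""
    d_k = d_sigma * math.sqrt(math.pi * a)
    return a + c * (d_k ** m) * d_n

def particle_filter_step(particles, z, meas_sigma=0.05):
    """Bootstrap filter update: propagate each particle through the growth
    model with multiplicative process noise, weight by a Gaussian likelihood
    of the crack-length observation z, then resample."""
    pred = [paris_step(a) * math.exp(random.gauss(0.0, 0.01)) for a in particles]
    weights = [math.exp(-0.5 * ((z - a) / meas_sigma) ** 2) for a in pred]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(pred, weights=weights, k=len(pred))

random.seed(1)
particles = [0.01 + 0.001 * i for i in range(200)]   # prior spread of crack lengths
posterior = particle_filter_step(particles, z=0.1)   # one noisy observation
estimate = sum(posterior) / len(posterior)
```

Repeating the update as new observations arrive tracks the growing crack; switching among several such updates with different likelihood models would correspond to the multiple observation modes discussed above.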
Contributors: Huff, Daniel W (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Chakrabarti, Chaitali (Committee member) / Chattopadhyay, Aditi (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Adaptive processing and classification of electrocardiogram (ECG) signals are important in eliminating the strenuous process of manually annotating ECG recordings for clinical use. Such algorithms require robust models whose parameters can adequately describe the ECG signals. Although different dynamic statistical models describing ECG signals currently exist, they depend considerably on a priori information and user-specified model parameters. Also, ECG beat morphologies, which vary greatly across patients and disease states, cannot be uniquely characterized by a single model. In this work, sequential Bayesian based methods are used to appropriately model and adaptively select the corresponding model parameters of ECG signals. An adaptive framework based on a sequential Bayesian tracking method is proposed to adaptively select the cardiac parameters that minimize the estimation error, thus precluding the need for pre-processing. Simulations using real ECG data from the online Physionet database demonstrate the improvement in performance of the proposed algorithm in accurately estimating critical heart disease parameters. In addition, two new approaches to ECG modeling are presented using the interacting multiple model and the sequential Markov chain Monte Carlo technique with adaptive model selection. Both these methods can adaptively choose between different models for various ECG beat morphologies without requiring prior ECG information, as demonstrated using real ECG signals. A supervised Bayesian maximum-likelihood (ML) based classifier uses the estimated model parameters to classify different types of cardiac arrhythmias. However, the lack of sufficient representative training data and the large inter-patient variability pose a challenge to the existing supervised learning algorithms, resulting in poor classification performance.
In addition, recently developed unsupervised learning methods require a priori knowledge on the number of diseases to cluster the ECG data, which often evolves over time. In order to address these issues, an adaptive learning ECG classification method that uses Dirichlet process Gaussian mixture models is proposed. This approach does not place any restriction on the number of disease classes, nor does it require any training data. This algorithm is adapted to be patient-specific by labeling or identifying the generated mixtures using the Bayesian ML method, assuming the availability of labeled training data.
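The key property of the Dirichlet process prior — that the number of clusters is open-ended and grows with the data — can be seen in its Chinese-restaurant-process form. The sketch below illustrates only that prior, not the full ECG mixture model; the concentration parameter `alpha` and the seed are illustrative.

```python
import random

def crp_partition(n, alpha=2.0, seed=0):
    """Draw a partition of n items from the Chinese restaurant process:
    item i joins an existing cluster with probability proportional to its
    size, or opens a new cluster with probability proportional to alpha."""
    rng = random.Random(seed)
    counts = []   # cluster sizes, in order of creation
    labels = []
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        k = len(counts)           # default: open a new cluster
        for j, size in enumerate(counts):
            acc += size
            if r <= acc:
                k = j
                break
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        labels.append(k)
    return labels

labels = crp_partition(200)
num_clusters = max(labels) + 1
```

Unlike a fixed-k mixture, `num_clusters` here is a random outcome of the draw; in the full model each cluster would additionally carry Gaussian parameters fit to the ECG features, and labeled data would only be needed afterwards to name the discovered mixtures.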
Contributors: Edla, Shwetha Reddy (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems. It is used to provide blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely, directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. The simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert Transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides comparable contrast-to-noise ratio (CNR) performance with Gaussian interpolation and has lower computational complexity. Thus, bilinear interpolation is chosen for our system.
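Conventional velocity estimation of the kind compared above is typically based on the lag-one autocorrelation (Kasai) estimator. The sketch below shows that standard estimator on synthetic slow-time IQ data; the carrier frequency, PRF, and sound speed are illustrative, and sign conventions vary with the scan geometry.

```python
import cmath
import math

def kasai_velocity(iq, prf=5000.0, f0=5e6, c=1540.0):
    """Lag-one autocorrelation velocity estimate from slow-time IQ samples:
    the phase of R(1) gives the mean Doppler shift, which maps to axial
    velocity via v = c * fD / (2 * f0)."""
    r1 = sum(iq[n + 1] * iq[n].conjugate() for n in range(len(iq) - 1))
    return c * prf * cmath.phase(r1) / (4.0 * math.pi * f0)

# Synthetic slow-time signal with a known 500 Hz Doppler shift.
fd = 500.0
iq = [cmath.exp(2j * math.pi * fd * n / 5000.0) for n in range(32)]
v = kasai_velocity(iq)   # recovers c * fd / (2 * f0)
```

The estimator needs only one arctangent per estimate and no per-sample division, part of why CVE is cheap; directional estimation instead operates along the flow direction and involves the divisions and square-root operations noted above.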
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis presents a new arrangement of Richard Peaslee's trombone solo "Arrows of Time" for brass band. This arrangement adapts Peaslee's orchestration - and subsequent arrangement by Dr. Joshua Hauser for wind ensemble - for the modern brass band instrumentation and includes a full score. A brief biography of Richard Peaslee and his work accompanies this new arrangement, along with commentary on the orchestration of "Arrows of Time", and discussion of the evolution and adaptation of the work for wind ensemble by Dr. Hauser. The methodology used to adapt these versions for the brass band completes the background information.
Contributors: Malloy, Jason Patrick (Author) / Ericson, John (Thesis advisor) / Oldani, Robert (Committee member) / Rockmaker, Jody (Committee member) / Arizona State University (Publisher)
Created: 2013