Matching Items (307)
Description
Great advances have been made in the construction of photovoltaic (PV) cells and modules, but array-level management remains much the same as it has been in previous decades. Conventionally, the PV array is connected in a fixed topology, which is not always appropriate in the presence of faults in the array and varying weather conditions. With the introduction of smarter inverters and solar modules, the data obtained from the photovoltaic array can be used to dynamically modify the array topology and improve the array power output. This is especially beneficial when module mismatches such as shading, soiling, and aging occur in the photovoltaic array. This research focuses on the topology optimization of PV arrays under shading conditions using measurements obtained from a PV array set-up. A scheme known as the topology reconfiguration method is proposed to find the optimal array topology for a given weather condition and faulty-module information. Various topologies such as the series-parallel (SP), the total cross-tied (TCT), the bridge link (BL), and their bypassed versions are considered. The topology reconfiguration method compares the efficiencies of the topologies and evaluates the percentage gain in generated power that would be obtained by reconfiguring the array, among other factors, to find the optimal topology. This method is employed for various possible shading patterns to predict the best topology. The results demonstrate the benefit of having an electrically reconfigurable array topology. The effects of irradiance and shading on the array performance are also studied. The simulations are carried out using a SPICE simulator, and the simulation results are validated with experimental data provided by the PACECO Company.
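As a minimal illustration of the selection step this abstract describes (comparing candidate topologies under a shading pattern and reconfiguring only when the gain justifies it), the following Python sketch uses hypothetical power values, topology labels, and a gain threshold; it is not the thesis's SPICE-based evaluation.

# Hypothetical sketch of the topology-selection step: given per-topology power
# estimates for one shading pattern, switch only if the percentage gain over
# the currently connected topology exceeds a reconfiguration threshold.
def percentage_gain(p_candidate, p_current):
    return 100.0 * (p_candidate - p_current) / p_current

def select_topology(power_by_topology, current="SP", min_gain_pct=2.0):
    """Return the topology to switch to (or the current one) and its gain."""
    best, best_gain = current, 0.0
    for topo, power in power_by_topology.items():
        gain = percentage_gain(power, power_by_topology[current])
        if gain > best_gain and gain >= min_gain_pct:
            best, best_gain = topo, gain
    return best, best_gain

# Example with made-up array outputs (watts) under one shading pattern.
powers = {"SP": 812.0, "TCT": 869.0, "BL": 845.0, "TCT-bypass": 874.0}
print(select_topology(powers, current="SP"))   # selects 'TCT-bypass' here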
Contributors: Buddha, Santoshi Tejasri (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In this thesis, a Built-in Self Test (BiST) based testing solution is proposed to measure linear and non-linear impairments in the RF transmitter path using an analytical approach. Design issues and challenges in modeling and extracting impairments in the transmitter path are discussed. The transmitter is modeled for I/Q gain and phase mismatch, system non-linearity, and DC offset using Matlab. The BiST architecture includes a peak detector consisting of a self-mode mixer and a 200 MHz filter. The self-mode mixing operation, together with filtering, removes the high-frequency signal content and allows the analysis to be performed on baseband signals. Transmitter impairments were calculated analytically from spectral analysis of the BiST circuitry output. Matlab was used to simulate the system with known test impairments, and the impairment values were extracted from the simulations based on the system model developed in Mathematica. The extracted values correlate well with the input test data, while achieving very fast test times and high accuracy. The key contribution of this work is that system impairments are extracted from the transmitter response at baseband frequencies using an envelope detector, eliminating the need for expensive high-frequency ATE (automated test equipment).
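To make the extraction idea concrete, here is a small behavioral Python sketch of the baseband impairment model named in this abstract (I/Q gain and phase mismatch, DC offsets) followed by a square-law envelope (self-mixing) detector; the parameter values, sample rates, and NumPy implementation are illustrative assumptions, not the thesis's Matlab/Mathematica models.

import numpy as np

# Baseband model of the named impairments: gain mismatch g, phase mismatch
# phi, DC offsets (di, dq), followed by a square-law (self-mixing) envelope
# detector. In hardware the RF products are removed by the 200 MHz filter,
# so only the low-frequency content modeled here remains.
fs, f0, n = 10e6, 1e6, 4096
t = np.arange(n) / fs
i_in, q_in = np.cos(2*np.pi*f0*t), np.sin(2*np.pi*f0*t)

g, phi, di, dq = 1.05, np.deg2rad(3.0), 0.02, -0.01      # assumed impairments
i_tx = i_in + di
q_tx = g * (q_in*np.cos(phi) + i_in*np.sin(phi)) + dq

envelope = i_tx**2 + q_tx**2                              # self-mixing output
spec = np.abs(np.fft.rfft(envelope * np.hanning(n))) / n
freqs = np.fft.rfftfreq(n, 1/fs)

# Tones at f0 encode the DC offsets and tones at 2*f0 encode the gain/phase
# mismatch; fitting those bins analytically is the kind of extraction step
# the abstract refers to.
band = freqs > 100e3                                      # skip DC/window leakage
peak = freqs[band][np.argmax(spec[band])]
print("dominant impairment tone near", peak, "Hz")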
Contributors: Goyal, Nitin (Author) / Ozev, Sule (Thesis advisor) / Duman, Tolga (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A systematic approach to composition has been used by a variety of composers to control an assortment of musical elements in their pieces. This paper begins with a brief survey of some of the important systematic approaches that composers have employed in their compositions, devoting particular attention to Pierre Boulez's Structures Ia. The purpose of this survey is to examine several systematic approaches to composition by prominent composers and the philosophies behind adopting this type of approach. The next section of the paper introduces my own systematic approach to composition: the Take-Away System. The third section provides several musical applications of the system, citing my work Octulus for two pianos as an example. The appendix details theorems and observations within the system for further study.
Contributors: Harbin, Doug (Author) / Hackbarth, Glenn (Thesis advisor) / DeMars, James (Committee member) / Etezady, Roshanne, 1973- (Committee member) / Rockmaker, Jody (Committee member) / Rogers, Rodney (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A workload-aware low-power neuromorphic controller for dynamic power and thermal management in VLSI systems is presented. The neuromorphic controller predicts future workload and temperature values based on past values and CPU performance counters, and preemptively regulates supply voltage and frequency. System-level measurements from state-of-the-art commercial microprocessors are used to obtain workload, temperature, and CPU performance counter values. The controller is designed and simulated using circuit-design and synthesis tools. At the device level, on-chip planar inductors suffer from low inductance while occupying large chip area. On-chip inductors with integrated magnetic materials are designed, simulated, and fabricated to explore performance-efficiency trade-offs and potential applications such as resonant clocking and on-chip voltage regulation. A system-level study is conducted to evaluate the effect of an on-chip voltage regulator employing magnetic inductors as the output filter. It is concluded that the neuromorphic power controller is beneficial for fine-grained per-core power management in conjunction with on-chip voltage regulators utilizing scaled magnetic inductors.
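The prediction-plus-DVFS loop described here can be sketched in a few lines of Python; a simple linear extrapolator stands in for the neuromorphic predictor, and the voltage/frequency table and utilization trace are hypothetical.

from collections import deque

# Minimal predictive DVFS sketch: forecast the next workload sample from
# recent history and choose a voltage/frequency pair before the load arrives.
VF_TABLE = [(0.8, 1.0e9), (0.9, 1.6e9), (1.0, 2.2e9), (1.1, 2.8e9)]  # (V, Hz), assumed

def predict_next(history):
    # linear extrapolation from the last two samples (predictor stand-in)
    if len(history) < 2:
        return history[-1]
    return max(0.0, min(1.0, 2*history[-1] - history[-2]))

def pick_vf(predicted_util):
    # lowest V/F point whose relative capacity covers the predicted utilization
    for v, f in VF_TABLE:
        if predicted_util <= f / VF_TABLE[-1][1]:
            return v, f
    return VF_TABLE[-1]

history = deque(maxlen=8)
for util in [0.20, 0.25, 0.35, 0.55, 0.80]:   # made-up per-core utilization trace
    history.append(util)
    print(util, "->", pick_vf(predict_next(history)))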
Contributors: Sinha, Saurabh (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Yu, Hongbin (Committee member) / Christen, Jennifer B. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome the high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of the auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
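The frequency-pruning idea can be illustrated with a short Python sketch: compute cheap per-band energies first, then run the expensive per-band model stages only for bands within some margin of the strongest band. The 40-band split, the 50 dB margin, and the placeholder "expensive stage" are assumptions for illustration, not the dissertation's auditory model.

import numpy as np

# Illustrative frequency pruning: evaluate costly per-band stages only where
# the band energy is within `margin_db` of the frame maximum.
def band_energies(frame, n_bands=40):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))**2
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    return np.array([spec[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

def pruned_excitation(frame, margin_db=50.0, expensive_stage=np.sqrt):
    e = band_energies(frame)
    keep = 10*np.log10(e + 1e-12) > 10*np.log10(e.max() + 1e-12) - margin_db
    out = np.zeros_like(e)
    out[keep] = expensive_stage(e[keep])     # pruned bands are skipped entirely
    return out, int(keep.sum())

frame = np.sin(2*np.pi*1000*np.arange(1024)/16000)   # 1 kHz test tone, fs = 16 kHz
excitation, bands_used = pruned_excitation(frame)
print(bands_used, "of 40 bands evaluated")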
Contributors: Krishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Sensing and controlling current flow is a fundamental requirement for many electronic systems, including power management (DC-DC converters and LDOs), battery chargers, electric vehicles, solenoid positioning, motor control, and power monitoring. Current Shunt Monitor (CSM) systems have various applications for precise current monitoring in the aforementioned areas. CSMs enable current measurement across an external sense resistor (RS) placed in series with the current flow. Two different types of CSMs are designed and characterized in this work: the first design uses a direct current-reading method, and the other uses an indirect current-reading method. The proposed CSM systems can sense power-supply currents ranging from 1 mA to 200 mA for the direct current-reading topology and from 1 mA to 500 mA for the indirect current-reading topology across a typical board Cu-trace resistance of 1 ohm, with less than 10 µV input-referred offset, 0.3 µV/°C offset drift, and 0.1% accuracy for both topologies. The proposed systems avoid the costly zero-temperature-coefficient (TC) sense resistor normally used in typical CSM systems; instead, both designs use the existing Cu trace on the printed circuit board (PCB) in place of the costly resistor. The systems use chopper stabilization in the front-end amplifier signal path to suppress the input-referred offset to less than 10 µV. A switching current-mode (SI) FIR filtering technique is used at the instrumentation-amplifier output to filter out the chopping ripple caused by input offset and flicker noise, by averaging one half-period of the phase-1 signal with one half-period of the phase-2 signal. In addition, the residual offset, mainly caused by clock feed-through and charge injection of the chopper switches, is notched out at the chopping frequency and its multiples by the sinc response of the SI-FIR filter. A frequency-domain sigma-delta ADC used in the indirect current-reading design enables a digital interface to processor applications, requiring only minimal added circuitry to build a simple first-order sigma-delta ADC. The CSMs are fabricated in a 0.7 µm CMOS process with three levels of metal and a maximum Vds tolerance of 8 V, and they operate across a common-mode range of 0 to 26 V for the direct current-reading type and 0 to 30 V for the indirect current-reading type, achieving less than 10 nV/√Hz of flicker noise at 100 Hz for both approaches. By using a semi-digital SI-FIR filter, the residual chopper offset is suppressed from a baseline of 8 mVpp down to 0.5 mVpp, which is equivalent to 25 dB of suppression.
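The ripple-averaging idea in this abstract can be demonstrated behaviorally: a boxcar FIR spanning one full chopping period (half from each chopping phase) has a sinc response with nulls at the chopping frequency and its multiples, so the up-modulated offset ripple is notched out. The Python sketch below is a discrete-time behavioral model with illustrative numbers, not the switched-current circuit itself.

import numpy as np

# Behavioral model: a chopped amplifier output contains the wanted signal plus
# the input offset modulated to the chop frequency; a one-period boxcar FIR
# (the averaging described above) notches the ripple and its harmonics.
fs, f_chop, n = 1.0e6, 50e3, 20000
t = np.arange(n) / fs
chop = np.sign(np.sin(2*np.pi*f_chop*t))          # +/-1 chopping waveform
signal = 1e-3 * np.sin(2*np.pi*1e3*t)             # wanted 1 kHz, 1 mV signal
offset = 4e-3                                     # assumed input-referred offset
output = signal + offset*chop                     # offset appears as ripple at f_chop

taps = int(fs / f_chop)                           # one chop period -> 20-tap boxcar
filtered = np.convolve(output, np.ones(taps)/taps, mode="same")

ripple_before = np.std(output - signal)
ripple_after = np.std(filtered[taps:-taps] - signal[taps:-taps])
print(f"ripple: {ripple_before*1e3:.2f} mVrms -> {ripple_after*1e3:.3f} mVrms")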
Contributors: Yeom, Hyunsoo (Author) / Bakkaloglu, Bertan (Thesis advisor) / Kiaei, Sayfe (Committee member) / Ozev, Sule (Committee member) / Yu, Hongyu (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage, and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms and modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle, and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation.
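As a generic stand-in for the class of sparse decomposition algorithms this abstract refers to (not the two modified algorithms developed in the thesis), the Python sketch below recovers a sparse scene from a randomly subsampled Fourier aperture with iterative soft thresholding (ISTA); the scene size, sampling ratio, and regularization weight are arbitrary.

import numpy as np

# Sparse recovery from a sparsely sampled aperture: ISTA with a subsampled
# DFT as the measurement operator.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 6                          # scene length, kept samples, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
rows = np.sort(rng.choice(n, m, replace=False))       # the sparsely sampled aperture
A = np.fft.fft(np.eye(n), norm="ortho")[rows, :]      # m x n partial DFT operator
y = A @ x_true

def soft(z, tau):                                     # complex soft threshold
    mag = np.abs(z)
    return np.where(mag > tau, (1.0 - tau/np.maximum(mag, 1e-12)) * z, 0.0)

def ista(A, y, lam=0.02, iters=500):
    x = np.zeros(A.shape[1], dtype=complex)
    step = 1.0 / np.linalg.norm(A, 2)**2              # 1/L with L = ||A||_2^2
    for _ in range(iters):
        x = soft(x + step * (A.conj().T @ (y - A @ x)), lam*step)
    return x.real

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))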
Contributors: Werth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In the last few years, significant advances in nanofabrication have allowed structures and materials to be tailored at the molecular level, with precise control of dimensions and organization at molecular length scales, a development leading to significant advances in nanoscale systems. Although the direction of progress seems to follow the path of microelectronics, the fundamental physics of a nanoscale system changes more rapidly than in microelectronics as the size scale is decreased. The changes in length, area, and volume ratios due to the reduction in size alter the relative influence of various physical effects determining the overall operation of a system in unexpected ways. One category of nanofluidic structures demonstrating unique ionic and molecular transport characteristics is the nanopore. Nanopores derive their unique transport characteristics from the electrostatic interaction of the nanopore surface charge with aqueous ionic solutions. In this doctoral research, cylindrical nanopores, in single and array configurations, were fabricated in silicon-on-insulator (SOI) using a combination of electron beam lithography (EBL) and reactive ion etching (RIE). The fabrication method presented is compatible with standard semiconductor foundries and allows fabrication of nanopores with desired geometries and precise dimensional control, providing near-ideal and isolated physical modeling systems for studying ion transport at the nanometer level. Ion transport through nanopores was characterized by measuring the ionic conductances of arrays of nanopores of various diameters over a wide range of concentrations of aqueous hydrochloric acid (HCl) solutions. The measured ionic conductances demonstrated two distinct regimes, governed by surface charge interactions at low ionic concentrations and by nanopore geometry at high ionic concentrations. Field-effect modulation of ion transport through nanopore arrays, in a fashion similar to semiconductor transistors, was also studied. Using ionic conductance measurements, it was shown that the concentration of ions in the nanopore volume changed significantly when a gate voltage was applied to the nanopore arrays, thereby controlling their transport. Based on the ion transport results, single nanopores were used to demonstrate their application as nanoscale particle counters, using polystyrene nanobeads monodispersed in aqueous HCl solutions of different molarities. Effects of field-effect modulation on particle transition events were also demonstrated.
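The two conductance regimes described in this abstract are often summarized with a two-term model: a bulk (geometry-dominated) term proportional to ion concentration plus a surface-charge term that dominates at low concentration. The Python sketch below evaluates that textbook model for a cylindrical pore; the pore dimensions and surface charge density are assumed values, not the fabricated devices or measured data.

import numpy as np

# Two-term conductance model for a cylindrical nanopore in a 1:1 electrolyte
# (HCl): bulk term set by geometry, surface term set by surface charge.
e, NA = 1.602e-19, 6.022e23
mu_h, mu_cl = 36.2e-8, 7.9e-8          # approximate H+ and Cl- mobilities, m^2/(V*s)
sigma_s = 1e-3                          # assumed surface charge density, C/m^2
d, L = 50e-9, 100e-9                    # assumed pore diameter and length, m

def pore_conductance(c_molar):
    c = c_molar * 1e3 * NA                           # ion pairs per m^3
    bulk = (np.pi * d**2 / (4*L)) * (mu_h + mu_cl) * e * c
    surface = (np.pi * d / L) * mu_h * sigma_s       # counter-ion surface conduction
    return bulk + surface

for c in [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0]:        # mol/L
    print(f"{c:.0e} M -> {pore_conductance(c)*1e9:.2f} nS")

At high molarity the bulk term scales linearly with concentration, while at low molarity the output flattens onto the surface-charge term, qualitatively reproducing the two regimes.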
Contributors: Joshi, Punarvasu (Author) / Thornton, Trevor J (Thesis advisor) / Goryll, Michael (Thesis advisor) / Spanias, Andreas (Committee member) / Saraniti, Marco (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance the processing of SAR image formation is the fractional Fourier transform (FRFT). This transform has been recently introduced to the signal processing community, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association with the Wigner distribution and the ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band that provides imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. These results will then be compared with those of a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability and thereby increase the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
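For readers unfamiliar with the FRFT, the Python sketch below gives a compact O(N^2) kernel approximation of the transform and demonstrates the focusing property that makes it attractive for chirped SAR data: a linear FM pulse becomes maximally compact at the order matched to its chirp rate. The sampling convention and the test chirp are assumptions for illustration; this is not the implementation used in the thesis.

import numpy as np

# Discrete kernel approximation of the fractional Fourier transform (FRFT).
# Normalization conventions vary; this uses a dimensionless, centered grid.
def frft(x, a):
    x = np.asarray(x, dtype=complex)
    n = len(x)
    alpha = a * np.pi / 2.0
    if abs(np.sin(alpha)) < 1e-10:               # a = 0 (identity) or a = 2 (reversal)
        return x.copy() if np.cos(alpha) > 0 else x[::-1].copy()
    t = (np.arange(n) - n/2.0) / np.sqrt(n)      # dimensionless sample grid
    cot, csc = np.cos(alpha)/np.sin(alpha), 1.0/np.sin(alpha)
    A = np.sqrt(1 - 1j*cot)
    # kernel K(u, t) = A * exp(i*pi*(u^2*cot - 2*u*t*csc + t^2*cot)), u on the same grid
    K = A * np.exp(1j*np.pi*(np.add.outer(t**2, t**2)*cot - 2.0*csc*np.outer(t, t)))
    return (K @ x) / np.sqrt(n)

# A chirp focuses (peaks) at the FRFT order matched to its chirp rate.
n = 256
tt = np.arange(n) - n/2
chirp = np.exp(1j*np.pi*(tt/n)**2 * 60)          # made-up linear FM pulse
orders = np.linspace(0.1, 1.9, 37)
peaks = [np.max(np.abs(frft(chirp, a))) for a in orders]
print(f"most compact near order {orders[int(np.argmax(peaks))]:.2f}")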
Contributors: Northrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Études written for violin ensemble, which include violin duets, trios, and quartets, are less numerous than solo études. These works rarely go by the title "étude," and have not been the focus of much scholarly research. Ensemble études have much to offer students, teachers, and composers, however, because they add an extra dimension to the learning, teaching, and composing processes. This document establishes the value of ensemble études in pedagogy and explores applications of the repertoire currently available. Rather than focus on violin duets, the most common form of ensemble étude, it mainly considers works for three and four violins without accompaniment. Concentrating on the pedagogical possibilities of studying études in a group, this document introduces creative ways that works for violin ensemble can be used as both études and performance pieces. The first two chapters explore the history and philosophy of the violin étude and multiple-violin works, the practice of arranging solo études for multiple instruments, and the benefits of group learning and cooperative learning that distinguish ensemble étude study from solo étude study. The third chapter is an annotated survey of works for three and four violins without accompaniment, and serves as a pedagogical guide to some of the available repertoire. Representing a wide variety of styles, techniques, and levels, it illuminates an historical association between violin ensemble works and pedagogy. The fourth chapter presents an original composition by the author, titled Variations on a Scottish Folk Song: Étude for Four Violins, with an explanation of the process and techniques used to create this ensemble étude. This work is an example of the musical and technical integration essential to étude study, and demonstrates various compositional traits that promote cooperative learning. Ensemble études are valuable pedagogical tools that deserve wider exposure. It is my hope that the information and ideas about ensemble études in this paper and the individual descriptions of the works presented will increase interest in and application of violin trios and quartets at the university level.
Contributors: Lundell, Eva Rachel (Contributor) / Swartz, Jonathan (Thesis advisor) / Rockmaker, Jody (Committee member) / Buck, Nancy (Committee member) / Koonce, Frank (Committee member) / Norton, Kay (Committee member) / Arizona State University (Publisher)
Created: 2011