Matching Items (252)
Description
Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
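As a rough illustration of the kind of recording-based object model discussed in this abstract, the sketch below excites a small bank of second-order digital resonators with an impulse. It is a simplified stand-in for a banded waveguide or modal model, not the dissertation's implementation, and the mode frequencies, decay times, and gains are assumed purely for demonstration.

```python
# Minimal sketch: "striking" a modal resonator bank (a simplified stand-in for
# the banded-waveguide models discussed above).  All mode parameters are
# illustrative assumptions, not measured values.
import numpy as np
from scipy.signal import lfilter

fs = 44100                       # sample rate (Hz)
dur = 1.0                        # seconds of output
modes = [(440.0, 1.2, 1.0),      # (frequency Hz, T60 s, gain) -- hypothetical
         (1100.0, 0.8, 0.5),
         (2350.0, 0.4, 0.3)]

excitation = np.zeros(int(fs * dur))
excitation[0] = 1.0              # unit impulse: an idealized strike

out = np.zeros_like(excitation)
for f0, t60, gain in modes:
    r = 10 ** (-3.0 / (t60 * fs))           # pole radius giving the desired decay
    theta = 2 * np.pi * f0 / fs
    b = [gain * np.sin(theta)]              # two-pole digital resonator
    a = [1.0, -2 * r * np.cos(theta), r * r]
    out += lfilter(b, a, excitation)

out /= np.max(np.abs(out))                  # normalize before listening/saving
```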
Contributors: Fink, Alex M (Author) / Spanias, Andreas S (Thesis advisor) / Cook, Perry R. (Committee member) / Turaga, Pavan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Radio frequency (RF) transceivers require a disproportionately high effort in terms of test development time, test equipment cost, and test time. The relatively high test cost stems from two contributing factors. First, RF transceivers require the measurement of a diverse set of specifications, requiring multiple test set-ups and long test times, which complicates load-board design, debug, and diagnosis. Second, high frequency operation necessitates the use of expensive equipment, resulting in a higher per-second test cost compared with mixed-signal or digital circuits. Moreover, in terms of the non-recurring engineering cost, the need to measure complex specifications complicates the test development process and necessitates a long learning process for test engineers. Test time is dominated by the changing and settling time for each test set-up. Thus, single set-up test solutions are desirable. A loop-back configuration, where the transmitter output is connected to the receiver input, is used as the desired test set-up for RF transceivers, since it eliminates the reliance on expensive instrumentation for RF signal analysis and enables measuring multiple parameters at once. In-phase and Quadrature (IQ) imbalance, non-linearity, DC offset, and IQ time skews are some of the most detrimental imperfections in transceiver performance. Measurement of these parameters in the loop-back mode is challenging due to the coupling between the receiver (RX) and transmitter (TX) parameters. Loop-back based solutions are proposed in this work to resolve this issue. A calibration algorithm for a subset of the above mentioned impairments is also presented. Error Vector Magnitude (EVM) is a system-level parameter that is specified for most advanced communication standards. EVM measurement often takes extensive test development effort, tester resources, and long test times. EVM is analytically related to system impairments, which are typically measured in a production test environment. Thus, the EVM test can be eliminated from the test list if the relations between EVM and system impairments are derived independent of the circuit implementation and manufacturing process. In this work, the focus is on the WLAN standard and on deriving the relations between EVM and three of the most detrimental impairments for QAM/OFDM based systems (IQ imbalance, non-linearity, and noise). Having low-cost test techniques for measuring RF transceiver imperfections and being able to analytically compute EVM from the measured parameters constitutes a complete test solution for RF transceivers. These techniques, along with the proposed calibration method, can be used to improve yield by widening the pass/fail boundaries for transceiver imperfections. For all of the proposed methods, simulation and hardware measurements show that the proposed techniques provide accurate characterization of RF transceivers.
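As a minimal illustration of how EVM relates to the impairments named above, the sketch below applies an assumed IQ gain/phase imbalance and additive noise to a 16-QAM signal and measures EVM numerically. It is not the analytical derivation developed in the dissertation, and all impairment values and the imbalance model are hypothetical.

```python
# Illustrative only: simulate 16-QAM with assumed IQ imbalance plus AWGN and
# compute EVM numerically.
import numpy as np

rng = np.random.default_rng(0)
N = 20000
levels = np.array([-3.0, -1.0, 1.0, 3.0])
sym = rng.choice(levels, N) + 1j * rng.choice(levels, N)
sym /= np.sqrt(np.mean(np.abs(sym) ** 2))          # normalize to unit average power

g_db, phi_deg, snr_db = 0.5, 2.0, 30.0             # assumed impairment values
g = 10 ** (g_db / 20)
phi = np.deg2rad(phi_deg)

# Simple IQ imbalance model (assumption): gain error on I, phase skew on Q
rx = g * sym.real + 1j * (sym.imag * np.cos(phi) + sym.real * np.sin(phi))

noise_var = 10 ** (-snr_db / 10)
rx += np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

evm_rms = np.sqrt(np.mean(np.abs(rx - sym) ** 2) / np.mean(np.abs(sym) ** 2))
print(f"measured EVM = {100 * evm_rms:.2f}%  (noise-only floor ~ {100 * np.sqrt(noise_var):.2f}%)")
```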
Contributors: Nassery, Afsaneh (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Kiaei, Sayfe (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document, a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open- and closed-loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, a gain scheduled Glover McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant-gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.
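For context, the sketch below implements the conventional P-f / Q-V droop law that the Glover McFarlane designs are compared against; the nominal setpoints and droop slopes are illustrative assumptions rather than values from the hardware study.

```python
# Minimal sketch of conventional inverter droop control (assumed parameters).
def droop_setpoints(p_meas, q_meas,
                    f_nom=60.0, v_nom=1.0,        # nominal frequency (Hz), voltage (pu)
                    m_p=0.01, n_q=0.05,           # droop slopes (Hz/pu, pu/pu) -- assumed
                    p_ref=0.0, q_ref=0.0):
    """Return (frequency, voltage) commands for an inverter given measured P, Q."""
    f_cmd = f_nom - m_p * (p_meas - p_ref)        # P-f droop
    v_cmd = v_nom - n_q * (q_meas - q_ref)        # Q-V droop
    return f_cmd, v_cmd

# Example: an inverter picking up 0.5 pu real and 0.2 pu reactive power
print(droop_setpoints(0.5, 0.2))                  # -> (59.995, 0.99)
```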
Contributors: Steenis, Joel (Author) / Ayyanar, Raja (Thesis advisor) / Mittelmann, Hans (Committee member) / Tsakalis, Konstantinos (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Autonomous vehicle control systems utilize real-time kinematic Global Navigation Satellite Systems (GNSS) receivers to provide a position within two centimeters of truth. GNSS receivers utilize the satellite signal time-of-arrival estimates to solve for position, and multipath corrupts the time-of-arrival estimates with a time-varying bias. Time-of-arrival estimates are based upon accurate direct sequence spread spectrum (DSSS) code and carrier phase tracking. Current multipath-mitigating GNSS solutions include fixed radiation pattern antennas and windowed delay-lock loop code phase discriminators. A new multipath-mitigating code tracking algorithm is introduced that utilizes a non-symmetric correlation kernel to reject multipath. Independent parameters provide a means to trade off code tracking discriminant gain against multipath mitigation performance. The algorithm performance is characterized in terms of multipath phase error bias, phase error estimation variance, tracking range, tracking ambiguity, and implementation complexity. The algorithm is suitable for modernized GNSS signals including Binary Phase Shift Keyed (BPSK) and a variety of Binary Offset Keyed (BOC) signals. The algorithm compensates for unbalanced code sequences to ensure a code tracking bias does not result from the use of asymmetric correlation kernels. The algorithm does not require explicit knowledge of the propagation channel model. Design recommendations for selecting the algorithm parameters to mitigate precorrelation filter distortion are also provided.
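For comparison, the sketch below shows a conventional (symmetric) early-late code-phase discriminator of the kind the proposed asymmetric-kernel algorithm improves upon; the PRN sequence, correlator spacing, noise level, and delay are all assumed for illustration.

```python
# Sketch of a conventional early-late DSSS code-phase tracking loop
# (baseline only; not the proposed asymmetric-kernel algorithm).
import numpy as np

rng = np.random.default_rng(1)
chips = rng.choice([-1.0, 1.0], 1023)              # hypothetical PRN sequence
sps = 4                                            # samples per chip
code = np.repeat(chips, sps)

true_delay = 3                                     # unknown code phase (samples)
rx = np.roll(code, true_delay) + 0.5 * rng.standard_normal(code.size)

def corr(shift):
    """Correlate the received signal with a replica delayed by `shift` samples."""
    return float(np.dot(rx, np.roll(code, shift)))

est = 0                                            # initial code-phase estimate
d = sps // 2                                       # early-late spacing (samples)
for _ in range(8):                                 # a few tracking-loop iterations
    early, late = corr(est - d), corr(est + d)
    disc = (late - early) / (early + late + 1e-12) # normalized early-minus-late discriminator
    est += int(np.sign(disc)) if abs(disc) > 0.05 else 0   # one-sample correction per step
print("estimated delay:", est, "true delay:", true_delay)
```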
Contributors: Miller, Steven (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
One of the main challenges in planetary robotics is to traverse the shortest path through a set of waypoints. The shortest distance between any two waypoints is a direct linear traversal. Often, there are physical restrictions that prevent a rover from traversing straight to a waypoint. Thus, knowledge of the terrain is needed prior to traversal. The Digital Terrain Model (DTM) provides information about the terrain along with waypoints for the rover to traverse. However, traversing a set of waypoints linearly is burdensome, as the rovers would constantly need to modify their orientation as they successively approach waypoints. Although there are various solutions to this problem, this work proposes spline-based paths as a quick, easily implemented way for the rover to traverse a set of waypoints smoothly. In addition, a rover was used to compare the smoothness of linear traversal with that of the spline interpolations. The data collected illustrate that spline traversals exhibited a lower rate of change in velocity over time, indicating that the rover moved more smoothly than on linear paths.
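A minimal sketch of the spline-versus-linear idea, using SciPy's parametric spline routines on a handful of hypothetical waypoints (not the rover's actual waypoints or implementation):

```python
# Fit a smooth parametric spline through assumed waypoints and inspect heading changes.
import numpy as np
from scipy.interpolate import splprep, splev

waypoints = np.array([[0.0, 0.0], [2.0, 1.5], [4.0, 0.5], [6.0, 3.0], [8.0, 2.0]])

# Parametric cubic spline through the waypoints (s=0 forces interpolation)
tck, _ = splprep([waypoints[:, 0], waypoints[:, 1]], s=0, k=3)
u = np.linspace(0, 1, 200)
x, y = splev(u, tck)
dx, dy = splev(u, tck, der=1)
heading = np.unwrap(np.arctan2(dy, dx))

# The spline's heading changes continuously; a piecewise-linear path changes
# heading abruptly at every waypoint, forcing the rover to stop and re-orient.
print("max heading change per step along the spline (rad):",
      np.max(np.abs(np.diff(heading))))
```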
Contributors: Kamasamudram, Anurag (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sliding-Mode Control (SMC) has several benefits over traditional Proportional-Integral-Derivative (PID) control in terms of fast transient response, robustness to parameter and component variations, and low sensitivity to loop disturbances. An All-Digital Sliding-Mode (ADSM) controlled DC-DC converter, utilizing single-bit oversampled frequency-domain digitizers, is proposed. In the proposed approach, the feedback and reference digitizing Analog-to-Digital Converters (ADCs) are based on a single-bit, first-order Sigma-Delta frequency-to-digital converter running at a 32 MHz over-sampling rate. The ADSM regulator achieves 1% settling in less than 5 µs for a load variation of 600 mA. The sliding-mode controller utilizes a high-bandwidth hysteretic differentiator and an integrator to perform the sliding control law in the digital domain. The proposed approach overcomes steady-state error (or DC offset) and limits the switching frequency range, the two common problems associated with sliding-mode controllers. The IC is designed and fabricated on a 0.35 µm CMOS process, occupying an active area of 2.72 mm². Measured peak efficiency is 83%.
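As a small illustration of the single-bit oversampled digitizer mentioned above, the sketch below models a behavioral first-order sigma-delta loop running at the 32 MHz rate cited in the abstract; the test-tone frequency and decimation filter length are assumed, and this is not the fabricated circuit's architecture in detail.

```python
# Behavioral sketch of a first-order, single-bit sigma-delta modulator.
import numpy as np

fs_over = 32e6                       # over-sampling rate cited in the abstract (32 MHz)
f_sig = 100e3                        # assumed test-tone frequency
n = 4096

t = np.arange(n) / fs_over
x = 0.5 * np.sin(2 * np.pi * f_sig * t)   # "analog" input, kept well inside +/-1

integ, y = 0.0, 0.0
bits = np.empty(n)
for i in range(n):
    integ += x[i] - y                     # integrate input minus the fed-back bit
    y = 1.0 if integ >= 0 else -1.0       # single-bit quantizer
    bits[i] = y

# A simple moving-average decimation filter recovers the low-frequency signal;
# the loop shapes the quantization noise toward high frequencies.
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
```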
Contributors: Dashtestani, Ahmad (Author) / Bakkaloglu, Bertan (Thesis advisor) / Thornton, Trevor (Committee member) / Song, Hongjiang (Committee member) / Kiaei, Sayfe (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The main objective of this study is to develop an innovative system in the form of a sandwich panel type composite with textile reinforced skins and an aerated concrete core. Existing theoretical concepts along with extensive experimental investigations were utilized to characterize the behavior of cement-based systems in the presence of individual fibers and textile yarns. Part of this thesis is based on a material model developed at Arizona State University to simulate the experimental flexural response and back-calculate the tensile response. This concept is based on a constitutive law consisting of a tri-linear tension model with residual strength and a bilinear elastic-perfectly-plastic compression stress-strain model. This parametric model was used to characterize Textile Reinforced Concrete (TRC) with aramid, carbon, alkali-resistant glass, and polypropylene textiles, as well as hybrid systems of aramid and polypropylene. The same material model was also used to characterize long-term durability issues with glass fiber reinforced concrete (GFRC). Historical data associated with the effect of temperature dependency on the aging of GFRC composites were used. An experimental study was conducted to understand the behavior of aerated concrete systems under high strain rate impact loading. The test setup was based on a free-fall drop of an instrumented hammer in a three-point bending configuration. Two types of aerated concrete, autoclaved aerated concrete (AAC) and polymeric fiber-reinforced aerated concrete (FRAC), were tested and compared in terms of their impact behavior. The effect of impact energy on the mechanical properties was investigated for various drop heights and different specimen sizes. Both materials showed similar flexural load-carrying capacity under impact; however, the flexural toughness of fiber-reinforced aerated concrete proved to be substantially higher than that of plain autoclaved aerated concrete. The effect of specimen size and drop height on the impact response of AAC and FRAC was studied and discussed. The results obtained were compared to the performance of sandwich beams with AR glass textile skins and an aerated concrete core under similar impact conditions. After this extensive study, it was concluded that this type of sandwich composite could be used effectively in low-cost, sustainable infrastructure projects.
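A minimal sketch of a tri-linear tension law with residual strength, in the spirit of the back-calculation model described above; the modulus, cracking strain, post-crack slope, and residual factor are assumed for illustration, not fitted TRC parameters.

```python
# Illustrative tri-linear tensile constitutive law (assumed parameters).
import numpy as np

def trilinear_tension(strain, E=20e3, eps_cr=150e-6, eta=0.05, mu=0.4, eps_u=0.02):
    """Tensile stress (MPa) vs strain: linear elastic up to eps_cr, a post-crack
    branch with reduced slope eta*E up to eps_u, then a residual plateau of
    mu times the cracking stress."""
    sigma_cr = E * eps_cr
    strain = np.asarray(strain, dtype=float)
    return np.where(
        strain <= eps_cr,
        E * strain,                                        # elastic branch
        np.where(
            strain <= eps_u,
            sigma_cr + eta * E * (strain - eps_cr),        # post-crack branch
            mu * sigma_cr,                                 # residual strength plateau
        ),
    )

print(trilinear_tension([1e-4, 5e-3, 5e-2]))
```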
Contributors: Dey, Vikram (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Effective modeling of high dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better when compared to conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, it might be required in some applications to combine multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived and recovery performance is also demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
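As a concrete example of the sparse-coding building block underlying these methods, the sketch below recovers a sparse code with orthogonal matching pursuit over a random dictionary; the dictionary size and sparsity level are assumed, and the dissertation's graph-embedded and ensemble variants are not reproduced here.

```python
# Minimal sparse-coding sketch with a random (assumed) dictionary.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_features, n_atoms, sparsity = 64, 256, 5

D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms

true_code = np.zeros(n_atoms)
true_code[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
x = D @ true_code                                 # synthetic signal with a 5-sparse code

code = orthogonal_mp(D, x, n_nonzero_coefs=sparsity)   # sparse code via OMP
print("reconstruction error:", np.linalg.norm(D @ code - x))
```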
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed, and it is shown to outperform conventional sparse models in compressed recovery, and image denoising. Theoretical aspects of learning such as algorithmic stability and generalization are considered, and ensemble learning is incorporated for effective large scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relation between the image descriptors, and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed, and successfully applied for automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to build dictionaries for local sparse coding of image descriptors are presented, and applied to object recognition and image retrieval.
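For illustration, the sketch below learns a small dictionary from synthetic data and uses the resulting sparse codes as features, a plain scikit-learn stand-in for the multilevel and kernel dictionary learning approaches described above; the data and parameters are assumed.

```python
# Dictionary learning on synthetic data; sparse codes used as features.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))                # 500 samples standing in for image patches

dico = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0)
codes = dico.fit(X).transform(X)                  # sparse codes, shape (500, 32)
print("avg nonzeros per code:", np.mean(np.count_nonzero(codes, axis=1)))
```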
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Woven fabric composite materials are widely used in the construction of aircraft engine fan containment systems, mostly due to their high strength-to-weight ratios and ease of implementation. The development of a predictive model for fan blade containment would provide great benefit to engine manufacturers in the form of shortened development cycle time, less certification risk, and fewer dollars lost to redesign/recertification cycles. A mechanistic user-defined material model subroutine has been developed at Arizona State University (ASU) that captures the behavioral response of these fabrics, namely Kevlar® 49, under ballistic loading. Previously developed finite element models used to validate the consistency of this material model neglected the effects of the physical constraints imposed on the test setup during ballistic testing performed at NASA Glenn Research Center (NASA GRC). Part of this research was to explore the effects of these boundary conditions on the results of the numerical simulations. These effects were found to be negligible in most instances. Other material models for woven fabrics are available in the LS-DYNA finite element code. One of these models, MAT234: MAT_VISCOELASTIC_LOOSE_FABRIC (Ivanov & Tabiei, 2004), was studied and implemented in the finite element simulations of ballistic testing associated with the FAA ASU research. The results from these models are compared to results obtained from the ASU UMAT as part of this research. The results indicate an underestimation of the energy absorption characteristics of the Kevlar 49 fabric containment systems. More investigation needs to be performed into the implementation of MAT234 for Kevlar 49 fabric. Static penetrator testing of Kevlar® 49 fabric was performed at ASU in conjunction with this research. These experiments are designed to mimic the type of loading experienced during fan blade-out events. The resulting experimental strains were measured using a non-contact optical strain measurement system (ARAMIS).
Contributors: Fein, Jonathan (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012