Description
The processing power and storage capacity of portable devices have improved considerably over the past decade. This has motivated the implementation of sophisticated audio and other signal processing algorithms on such mobile devices. Of particular interest in this thesis is audio/speech processing based on perceptual criteria. Specifically, estimation of parameters from human auditory models, such as auditory patterns and loudness, involves computationally intensive operations which can strain device resources. Hence, strategies for implementing computationally efficient human auditory models for loudness estimation have been studied in this thesis. Existing algorithms for reducing computations in auditory pattern and loudness estimation have been examined, and improved algorithms have been proposed to overcome the limitations of these methods. In addition, real-time applications such as perceptual loudness estimation and loudness equalization using auditory models have also been implemented. A software implementation of loudness estimation on iOS devices is also reported in this thesis. In addition to the loudness estimation algorithms and software, this thesis project also created new illustrations of speech and audio processing concepts for research and education. As a result, a new suite of speech/audio DSP functions was developed and integrated into the award-winning educational iOS app 'iJDSP'. These functions are described in detail in this thesis. Several enhancements to the architecture of the application have also been introduced to provide the supporting framework for speech/audio processing. Frame-by-frame processing and visualization functionalities have been developed to facilitate speech/audio processing. In addition, facilities for easy sound recording, processing and audio rendering have been developed to provide students, practitioners and researchers with an enriched DSP simulation tool. Simulations and assessments have also been developed for use in classes and in the training of practitioners and students.
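As a rough illustration of the frame-by-frame processing the abstract describes, the sketch below segments a signal into overlapping frames and computes a per-frame RMS level. This is a minimal stand-in, not the thesis's auditory-model loudness (which involves excitation patterns and specific-loudness integration); the frame length and hop size are assumed values.

```python
import numpy as np

def frame_levels_db(x, fs, frame_ms=25.0, hop_ms=10.0):
    """Per-frame RMS level in dBFS for a mono signal x sampled at fs Hz.

    A simple stand-in for frame-by-frame analysis; true loudness models
    map each frame through an auditory filterbank first.
    """
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    levels = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame]
        rms = np.sqrt(np.mean(seg**2)) + 1e-12   # avoid log(0) on silence
        levels.append(20 * np.log10(rms))
    return np.array(levels)

# e.g. a 1 kHz tone at half scale sits near -9 dBFS (RMS 0.5/sqrt(2)) in every frame
fs = 16000
t = np.arange(fs) / fs
print(frame_levels_db(0.5 * np.sin(2 * np.pi * 1000 * t), fs)[:3])
```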
Contributors: Kalyanasundaram, Girish (Author) / Spanias, Andreas S (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Magnetic Resonance Imaging using spiral trajectories has many advantages in speed, efficiency of data acquisition, and robustness to motion- and flow-related artifacts. The increase in sampling speed, however, demands high performance from the gradient system. Hardware inaccuracies from system delays and eddy currents can cause spatial and temporal distortions in the encoding gradient waveforms. This causes sampling discrepancies between the actual and the ideal k-space trajectory. Reconstruction assuming an ideal trajectory can result in shading and blurring artifacts in spiral images. Current methods to estimate such hardware errors require many modifications to the pulse sequence, phantom measurements, or specialized hardware. This work presents a new method to estimate time-varying system delays for spiral-based trajectories. It requires a minor modification of a conventional stack-of-spirals sequence and analyzes data collected on three orthogonal cylinders. The method is fast, robust to off-resonance effects, requires no phantom measurements or specialized hardware, and estimates variable system delays for the three gradient channels over the data-sampling period. Initial results are presented for acquired phantom and in-vivo data, which show a substantial reduction in artifacts and improvement in image quality.
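To make the idea of a gradient-channel delay concrete, the sketch below estimates a sub-sample time shift between a nominal gradient waveform and its measured counterpart by locating the cross-correlation peak with parabolic interpolation. This is a generic delay estimator with invented test values, not the cylinder-based method the thesis develops.

```python
import numpy as np

def estimate_delay(nominal, measured, dt):
    """Sub-sample delay of `measured` relative to `nominal` (seconds),
    via the parabolically interpolated peak of their cross-correlation."""
    xc = np.correlate(measured, nominal, mode="full")
    k = np.argmax(xc)
    # Parabolic fit through the peak sample and its two neighbours
    y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return ((k + frac) - (len(nominal) - 1)) * dt

# Invented test: a smooth gradient lobe delayed by 2.7 us
dt = 1e-6
t = np.arange(1000) * dt
g = np.exp(-((t - 300e-6) / 60e-6)**2)
g_delayed = np.exp(-((t - 302.7e-6) / 60e-6)**2)
print(f"{estimate_delay(g, g_delayed, dt) * 1e6:.2f} us")   # ~2.70
```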
Contributors: Bhavsar, Payal (Author) / Pipe, James G (Thesis advisor) / Frakes, David (Committee member) / Kodibagkar, Vikram (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Coronary computed tomography angiography (CTA) has a high negative predictive value for ruling out coronary artery disease through non-invasive evaluation of the coronary arteries. My work has attempted to provide metrics that could increase the positive predictive value of coronary CTA through the use of dual energy CTA imaging. After developing an algorithm for obtaining calcium scores from a CTA exam, a dual energy CTA exam was performed on patients at dose levels equivalent to those of a single energy CTA with a calcium scoring exam. Calcium Agatston scores obtained from the dual energy CTA exam were within ±11% of scores obtained with conventional calcium scoring exams. In the presence of highly attenuating coronary calcium plaques, the virtual non-calcium images obtained with dual energy CTA were able to measure percent coronary stenosis within 5% of known stenosis values, which is not possible with single energy CTA images due to the calcium blooming artifact. After fabricating an anthropomorphic beating heart phantom with coronary plaques, characterization of soft plaque vulnerability to rupture or erosion was demonstrated with measurements of the distance from soft plaque to aortic ostium, percent stenosis, and percent lipid volume in soft plaque. A classification model using support vector machines was developed, with training data from the beating heart phantom and plaques, to classify coronary soft plaque pixels as lipid or fibrous. Lipid-versus-fibrous classification with single energy CTA images exhibited a 17% error, while dual energy CTA images in the classification model developed here exhibited only a 4% error. Combining the calcium blooming correction and the percent lipid volume methods developed in this work will provide physicians with metrics for increasing the positive predictive value of coronary CTA, as well as expanding the use of coronary CTA to patients with highly attenuating calcium plaques.
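For reference, a minimal sketch of conventional Agatston scoring, the quantity the dual energy exam is validated against: candidate lesions are thresholded at 130 HU and each lesion's area is weighted by its peak attenuation. The abstract does not specify slice spacing, minimum-lesion rules, or threshold variants, so the values below are the textbook defaults.

```python
import numpy as np
from scipy import ndimage

def agatston_score(slices_hu, pixel_area_mm2, min_area_mm2=1.0):
    """Simplified per-exam Agatston score.

    slices_hu: list of 2D arrays of CT numbers (HU), one per axial slice.
    pixel_area_mm2: in-plane area of one pixel in mm^2.
    """
    total = 0.0
    for img in slices_hu:
        mask = img >= 130                       # standard calcium threshold
        labels, n = ndimage.label(mask)         # connected candidate lesions
        for i in range(1, n + 1):
            lesion = labels == i
            area = lesion.sum() * pixel_area_mm2
            if area < min_area_mm2:             # reject sub-millimetre noise
                continue
            peak = img[lesion].max()            # density weight from peak HU
            w = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
            total += area * w
    return total
```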
Contributors: Boltz, Thomas (Author) / Frakes, David (Thesis advisor) / Towe, Bruce (Committee member) / Kodibagkar, Vikram (Committee member) / Pavlicek, William (Committee member) / Bouman, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis addresses the issue of making an economic case for energy storage in power systems. Bulk energy storage has often been suggested for large scale electric power systems in order to levelize load; store energy when it is inexpensive and discharge it when it is expensive; potentially defer transmission and generation expansion; and provide for generation reserve margins. As renewable energy resource penetration increases, the uncertainty and variability of wind and solar may be alleviated by bulk energy storage technologies. The quadratic programming function in MATLAB is used to simulate an economic dispatch that includes energy storage. A program is created that utilizes quadratic programming to analyze various cases using a 2010 summer peak load from the Arizona transmission system, part of the Western Electricity Coordinating Council (WECC). The MATLAB program is used first to test the Arizona test bed with a low level of energy storage, to study how the storage power limit affects several optimization outputs such as the system wide operating cost. Very high levels of energy storage are then added to see how they affect peak shaving, load factor, and other system applications. Finally, various constraint relaxations are made to analyze why the applications tested eventually approach a constant value. This research illustrates the use of energy storage to help minimize the system wide generator operating cost by "shaving" energy off of the peak demand.
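The thesis formulates dispatch-with-storage as a quadratic program in MATLAB; the sketch below states the same kind of problem in Python with cvxpy, under invented cost coefficients, load shape, and storage limits, to show how the storage power and energy constraints enter the optimization.

```python
import cvxpy as cp
import numpy as np

T = 24
load = 1000 + 300 * np.sin(np.linspace(0, 2 * np.pi, T))  # stand-in demand (MW)
a, b = 0.02, 25.0            # quadratic/linear cost coefficients (assumed)
p_max, e_max = 100.0, 400.0  # storage power (MW) and energy (MWh) limits

g = cp.Variable(T)           # conventional generation
s = cp.Variable(T)           # storage power: + discharge, - charge
e = cp.Variable(T + 1)       # stored energy

cost = cp.sum(a * cp.square(g) + b * g)
cons = [g + s == load,                         # power balance each hour
        cp.abs(s) <= p_max,                    # storage power limit
        e[1:] == e[:-1] - s,                   # ideal (lossless) storage dynamics
        e >= 0, e <= e_max,
        e[0] == e_max / 2, e[T] == e_max / 2,  # cycle-neutral schedule
        g >= 0]
cp.Problem(cp.Minimize(cost), cons).solve()
print(f"operating cost: ${cost.value:,.0f}")
```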
Contributors: Ruggiero, John (Author) / Heydt, Gerald T (Thesis advisor) / Datta, Rajib (Committee member) / Karady, George G. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Recent trends in the electric power industry have led to more attention to the optimal operation of power transformers. In a deregulated environment, optimal operation means minimizing maintenance and extending the life of this critical and costly equipment for the purpose of maximizing profits. Optimal utilization of a transformer can be achieved through the use of dynamic loading. A benefit of dynamic loading is that it allows better utilization of the transformer capacity, thus increasing the flexibility and reliability of the power system. This document presents the progress on a software application which can estimate the maximum time-varying loading capability of transformers. This information can be used to load devices closer to their limits without exceeding the manufacturer-specified operating limits. Maximally efficient dynamic loading of transformers requires a model that can accurately predict both top-oil temperatures (TOTs) and hottest-spot temperatures (HSTs). In previous work, two kinds of TOT and HST thermal models were studied and used in the application: the IEEE TOT/HST models and the ASU TOT/HST models. Several metrics were applied to evaluate model acceptability and determine the most appropriate models for use in the dynamic loading calculations. In this work, an investigation into improving the performance of the existing transformer thermal models is presented. Factors that may affect model performance, such as improper fan status and the error caused by the poor performance of the IEEE models, are discussed. Additional methods to determine the reliability of transformer thermal models using metrics such as the time constant and the model parameters are also provided. A new production grade application for real-time dynamic loading operation is introduced. This application is developed using an existing planning application, TTeMP, designed for dispatchers and load specialists, as a starting point. To overcome the limitations of TTeMP, the new application can perform dynamic loading under emergency conditions, such as the loss of a transformer. It also has the capability to determine the emergency rating of the transformers in real time.
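The IEEE TOT model referred to here is the first-order top-oil model of IEEE Std C57.91: the top-oil rise approaches a load-dependent ultimate rise exponentially with time constant τ. Below is a discrete-time sketch of that model; the rated rise, loss ratio, oil exponent, and time constant are placeholder values that a real study would fit from measured data.

```python
import numpy as np

def ieee_top_oil_rise(load_pu, dt_h, dtheta_r=45.0, R=4.5, n=0.9, tau_h=3.5,
                      theta0=None):
    """Top-oil rise over ambient (deg C) from the IEEE C57.91 first-order model.

    load_pu: per-unit load K at each step; dt_h: step size in hours.
    dtheta_r: rated top-oil rise; R: ratio of load to no-load loss;
    n: oil exponent; tau_h: top-oil time constant (all placeholder values).
    """
    theta = dtheta_r if theta0 is None else theta0
    out = []
    for K in load_pu:
        # ultimate rise for this load level
        ult = dtheta_r * ((K**2 * R + 1) / (R + 1))**n
        # first-order exponential response toward the ultimate rise
        theta += (ult - theta) * (1 - np.exp(-dt_h / tau_h))
        out.append(theta)
    return np.array(out)

# e.g. response to a step from 0.7 pu to 1.2 pu load, 15-minute steps
profile = [0.7] * 8 + [1.2] * 24
print(ieee_top_oil_rise(profile, dt_h=0.25)[-1])
```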
Contributors: Zhang, Ming (Author) / Tylavsky, Daniel J (Thesis advisor) / Ayyanar, Raja (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The objective of this research is to investigate the relationship among key process design variables associated with the development of nanoscale electrospun polymeric scaffolds capable of tissue regeneration. To date, there has been no systematic approach toward understanding the electrospinning process parameters responsible for the production of 3-D nanoscaffold architectures with the desired levels of quality assurance envisioned to satisfy emerging regenerative medicine market needs. As such, this study encompassed a more systematic, rational design of experiment (DOE) approach toward the identification of electrospinning process conditions responsible for the production of dextran-polyacrylic acid (DEX-PAA) nanoscaffolds with desired architectures and tissue engineering properties. The latter include scaffold fiber diameter, pore size, porosity, and degree of crosslinking, which together can provide a range of scaffold nanomechanical properties that closely mimics the cell microenvironment. The results obtained from this preliminary DOE study indicate that there exist electrospinning operating conditions capable of producing DEX-PAA nanoarchitectures having potential utility for regenerative medicine applications.
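As a small illustration of the DOE framing, the sketch below enumerates a two-level full factorial design over four electrospinning factors. The factor names and levels are invented placeholders; the abstract does not list the actual factors or ranges studied.

```python
from itertools import product

# Hypothetical two-level factors for an electrospinning DOE screen
factors = {
    "voltage_kV":      (10, 20),
    "flow_rate_mL_h":  (0.5, 2.0),
    "tip_distance_cm": (10, 20),
    "polymer_wt_pct":  (5, 15),
}

# 2^4 = 16 runs; each run is one combination of factor levels
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)
```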
Contributors: Espinoza, Roberta (Author) / Pizziconi, Vincent (Thesis advisor) / Massia, Stephen (Committee member) / Garcia, Antonio (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The overall goal of this research project was to assess the feasibility of investigating the effects of microgravity on mineralization systems in unit gravity environments. If these studies can be performed in unit gravity environments, such as earth, such systems can offer markedly less costly and more concerted research efforts to study these vitally important systems. Expected outcomes from easily accessible test environments and more tractable studies include the development of more advanced and adaptive material systems, including biological systems, particularly as humans contemplate exploration of deep space. The specific focus of the research was the design and development of a prototypical experimental test system that could preliminarily meet the challenging design specifications required of such test systems. Guided by a more unified theoretical foundation and building upon concept design and development heuristics, the feasibility of two experimental test systems was explored. Test System I was a rotating wall reactor experimental system that closely followed the specifications of a similar test system, Synthecon, designed by NASA contractors, and thus closely mimicked the microgravity conditions of the space shuttle and station. The latter include the terminal velocity conditions experienced both by inanimate material systems and by biological systems, including living tissue and humans, and can be extended to material test systems associated with mineralization processes. Test System II comprised a unique vertical column design that offered more easily controlled fluid mechanical test conditions over the much wider flow regime necessary for achieving terminal velocities under conditions free of natural convection, which are important in mineralization processes. Preliminary results indicate that Test System II offers distinct advantages in studying microgravity effects in test systems operating in unit gravity environments, particularly when investigating mineralization and related processes. Verification of Test System II was performed by validating microgravity effects on calcite mineralization processes reported earlier by others. Those studies were conducted on calcite mineralization in fixed-wing reduced gravity aircraft, known as the 'vomit comet', where reduced gravity conditions are attained only for very short (~20 second) periods. Preliminary results indicate that test systems such as Test System II can be devised to assess microgravity conditions in unit gravity environments such as earth. Furthermore, the preliminary data obtained on calcite formation suggest that strictly physicochemical mechanisms may be the dominant factors that control adaptation in materials processes, a theory first proposed by Liu et al. Thus the results of this study may also help shed light on the problem of early osteoporosis in astronauts and support the long term interest in deep space exploration.
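The terminal velocity condition at the heart of both test systems is, for small particles, the Stokes settling velocity; a rotating wall vessel (or counter-flow column) simulates microgravity by continuously offsetting this settling. A worked sketch with illustrative values follows; the particle size and fluid properties are assumptions, not the thesis's actual test conditions.

```python
def stokes_terminal_velocity(radius_m, rho_particle, rho_fluid,
                             mu_pa_s, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in Stokes flow.

    Valid for Reynolds number << 1; this is the settling rate a rotating
    wall vessel must continuously cancel to simulate microgravity.
    """
    return 2.0 * (rho_particle - rho_fluid) * g * radius_m**2 / (9.0 * mu_pa_s)

# e.g. a 10 um calcite particle (~2710 kg/m^3) in water at room temperature
v = stokes_terminal_velocity(5e-6, 2710.0, 1000.0, 1.0e-3)
print(f"{v * 1e6:.0f} um/s")   # on the order of 100 um/s
```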
Contributors: Seyedmadani, Kimia (Author) / Pizziconi, Vincent (Thesis advisor) / Towe, Bruce (Committee member) / Alford, Terry (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Developing new non-traditional device models is gaining popularity as silicon-based electrical devices approach their scaling limits. Membrane systems, also called P systems, are a new class of biological computation model inspired by the way cells process chemical signals. Spiking Neural P systems (SNP systems), a particular kind of membrane system, are inspired by the way neurons in the brain interact using electrical spikes. Compared to traditional Boolean logic, SNP systems not only perform similar functions but also provide a more promising solution for reliable computation. Two basic neuron types, Low Pass (LP) neurons and High Pass (HP) neurons, are introduced. These two basic types of neurons are capable of building an arbitrary SNP neuron. This leads to the conclusion that these two basic neuron types are Turing complete, since SNP systems have been proved Turing complete. These two basic types of neurons are further used as the elements to construct general-purpose arithmetic circuits, such as an adder, a subtractor and a comparator. In this thesis, erroneous behaviors of neurons are discussed. Transmission error (spike loss) is proved to be equivalent to threshold error, which makes the threshold error discussion more universal. To improve reliability, a new structure called a motif is proposed. Compared to a Triple Modular Redundancy improvement, the motif design demonstrates its efficiency and effectiveness in both single neuron and arithmetic circuit analyses. DRAM-based CMOS circuits are used to implement the two basic types of neurons. The functionality of the basic neuron types is verified using SPICE simulations. The motif-improved adder and comparator, as compared to conventional Boolean logic designs, are much more reliable with lower leakage and smaller silicon area. This leads to the conclusion that SNP systems could provide a more promising solution for reliable computation than conventional Boolean logic.
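A toy sketch of the two basic neuron types follows, assuming the natural reading of the names: an LP neuron fires when the incoming spike count stays at or below its threshold, and an HP neuron fires when the count reaches its threshold. The exact firing rules are defined in the thesis; this is only an assumed interpretation to show how Boolean functions can fall out of threshold behavior.

```python
def lp_neuron(spikes, threshold):
    """Assumed low-pass rule: fire when the incoming spike count
    is at or below the threshold (interpretation, not the thesis's spec)."""
    return 1 if sum(spikes) <= threshold else 0

def hp_neuron(spikes, threshold):
    """Assumed high-pass rule: fire when the spike count reaches the threshold."""
    return 1 if sum(spikes) >= threshold else 0

# Under these rules, an HP neuron with threshold 2 on two inputs acts as AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", hp_neuron([a, b], 2))
```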
Contributors: An, Pei (Author) / Cao, Yu (Thesis advisor) / Barnaby, Hugh (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Specificity and affinity towards a given ligand/epitope limit target-specific delivery. Companies can spend between $500 million and $2 billion attempting to discover a new drug or therapy; a significant portion of this expense funds high-throughput screening to find the most successful target-specific compound available. A more recent addition to discovering highly specific targets is the application of phage display utilizing single chain variable fragment (scFv) antibodies. The aim of this research was to employ phage display to identify pathologies related to traumatic brain injury (TBI), particularly astrogliosis. A unique biopanning method against viable astrocyte cultures activated with TGF-β achieved this aim. Four scFv clones of interest showed varying relative affinities toward astrocytes. One of the four was able to distinguish reactive astrocytes from basal astrocytes through maximum signal readings, while another showed a statistically significant preference in maximum signal reading toward basal astrocytes. Future studies will include further affinity characterization assays. This work contributes to the development of targeted therapeutics and diagnostics for TBI.
Contributors: Marsh, William (Author) / Stabenfeldt, Sarah (Thesis advisor) / Caplan, Michael (Committee member) / Sierks, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Radio frequency (RF) transceivers require a disproportionately high effort in terms of test development time, test equipment cost, and test time. The relatively high test cost stems from two contributing factors. First, RF transceivers require the measurement of a diverse set of specifications, requiring multiple test set-ups and long test times, which complicates load-board design, debug, and diagnosis. Second, high frequency operation necessitates the use of expensive equipment, resulting in a higher per-second test time cost compared with mixed-signal or digital circuits. Moreover, in terms of the non-recurring engineering cost, the need to measure complex specifications complicates the test development process and necessitates a long learning process for test engineers. Test time is dominated by the changing and settling time for each test set-up; thus, single set-up test solutions are desirable. The loop-back configuration, where the transmitter output is connected to the receiver input, is used as the desirable test set-up for RF transceivers, since it eliminates the reliance on expensive instrumentation for RF signal analysis and enables measuring multiple parameters at once. In-phase and quadrature (IQ) imbalance, non-linearity, DC offset, and IQ time skews are some of the most detrimental imperfections affecting transceiver performance. Measurement of these parameters in the loop-back mode is challenging due to the coupling between the receiver (RX) and transmitter (TX) parameters. Loop-back based solutions are proposed in this work to resolve this issue. A calibration algorithm for a subset of the above mentioned impairments is also presented. Error Vector Magnitude (EVM) is a system-level parameter that is specified for most advanced communication standards. EVM measurement often takes extensive test development effort, tester resources, and long test times. EVM is analytically related to system impairments, which are typically measured in a production test environment. Thus, the EVM test can be eliminated from the test list if the relations between EVM and system impairments are derived independent of the circuit implementation and manufacturing process. In this work, the focus is on the WLAN standard, deriving the relations between EVM and three of the most detrimental impairments for QAM/OFDM based systems (IQ imbalance, non-linearity, and noise). Having low cost test techniques for measuring RF transceiver imperfections, together with the ability to analytically compute EVM from the measured parameters, constitutes a complete test solution for RF transceivers. These techniques, along with the proposed calibration method, can be used to improve yield by widening the pass/fail boundaries for transceiver imperfections. For all of the proposed methods, simulations and hardware measurements show that the proposed techniques provide accurate characterization of RF transceivers.
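For context, EVM is the RMS error between received and ideal constellation points, normalized to the reference power. The minimal sketch below computes EVM for a QPSK burst under an invented receiver gain/phase (IQ) imbalance; the impairment values and the imbalance model are illustrative assumptions, not the thesis's derived EVM-impairment relations.

```python
import numpy as np

def evm_percent(received, ideal):
    """RMS error vector magnitude as a percentage of reference power.

    received, ideal: complex baseband symbol arrays of equal length.
    """
    err = received - ideal
    return 100.0 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(ideal)**2))

# Illustration: a 4-symbol QPSK burst with assumed gain/phase imbalance
ideal = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
g, phi = 1.05, np.deg2rad(2.0)                 # assumed impairment values
rx = ideal.real + 1j * g * (ideal.imag * np.cos(phi) + ideal.real * np.sin(phi))
print(f"EVM = {evm_percent(rx, ideal):.2f}%")
```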
Contributors: Nassery, Afsaneh (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Kiaei, Sayfe (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2013