Matching Items (12)
151846-Thumbnail Image.png
Description
Efficiency of components is an ever-increasing area of importance in portable applications, where a finite battery means finite operating time. Higher-efficiency devices need to be designed that do not compromise on the performance that the consumer has come to expect. Class D amplifiers deliver on the goal of increased efficiency, but at the cost of distortion. Class AB amplifiers have low efficiency, but high linearity. By modulating the supply voltage of a Class AB amplifier to form a Class H amplifier, the efficiency can be increased while still maintaining the Class AB level of linearity. A 92dB Power Supply Rejection Ratio (PSRR) Class AB amplifier and a Class H amplifier were designed in a 0.24um process for portable audio applications. Using a multiphase buck converter increased the efficiency of the Class H amplifier while maintaining a response time fast enough to track audio frequencies. The Class H amplifier's efficiency exceeded that of the Class AB amplifier by 5-7% from 5-30mW of output power without affecting the total harmonic distortion (THD) at the design specifications. The Class H amplifier met all design specifications and showed performance comparable to the designed Class AB amplifier across 1kHz-20kHz and 0.01mW-30mW. The Class H design was able to output 30mW into 16 Ohms without any increase in THD. This design shows that Class H amplifiers merit more research into their potential for increasing the efficiency of audio amplifiers, and that even simple designs can give significant increases in efficiency without compromising linearity.
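As background for the efficiency claim above, the textbook idealized Class B/AB analysis (not specific to the amplifier designed in this work) shows why tracking the supply voltage helps:

```latex
% Idealized Class B/AB output stage driving a sinusoid of peak V_p into a load R_L
% from a fixed supply V_DD (standard textbook result, illustrative only):
P_{out} = \frac{V_p^{2}}{2R_L}, \qquad
P_{supply} = \frac{2}{\pi}\,\frac{V_p V_{DD}}{R_L}, \qquad
\eta = \frac{P_{out}}{P_{supply}} = \frac{\pi}{4}\,\frac{V_p}{V_{DD}}.
```

With a fixed supply, efficiency falls linearly as the output level drops; a Class H stage keeps V_DD tracking V_p plus a small headroom, so the ratio V_p/V_DD, and hence the efficiency, stays near its peak across output levels, consistent with the 5-7% gain reported above.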
Contributors: Peterson, Cory (Author) / Bakkaloglu, Bertan (Thesis advisor) / Barnaby, Hugh (Committee member) / Kiaei, Sayfe (Committee member) / Arizona State University (Publisher)
Created: 2013
153334-Thumbnail Image.png
Description
Three dimensional (3-D) ultrasound is safe, inexpensive, and has been shown to drastically improve system ease-of-use, diagnostic efficiency, and patient throughput. However, its high computational complexity and resulting high power consumption have precluded its use in hand-held applications.

In this dissertation, algorithm-architecture co-design techniques that aim to make hand-held 3-D ultrasound a reality are presented. First, image enhancement methods to improve signal-to-noise ratio (SNR) are proposed. These include virtual source firing techniques and a low overhead digital front-end architecture using orthogonal chirps and orthogonal Golay codes.

Second, algorithm-architecture co-design techniques to reduce the power consumption of 3-D synthetic aperture ultrasound (SAU) imaging systems are presented. These include (i) a subaperture multiplexing strategy and the corresponding apodization method to alleviate the signal bandwidth bottleneck, and (ii) a highly efficient iterative delay calculation method to eliminate complex operations such as multiplications, divisions, and square roots in delay calculation during beamforming. These techniques were used to define Sonic Millip3De, a 3-D die-stacked architecture for digital beamforming in SAU systems. Sonic Millip3De produces 3-D high resolution images at 2 frames per second with system power consumption of 15W in 45nm technology.

Third, a new beamforming method based on separable delay decomposition is proposed to reduce the computational complexity of the beamforming unit in an SAU system. The method is based on minimizing the root-mean-square error (RMSE) due to delay decomposition. It reduces the beamforming complexity of a SAU system by 19x while providing high image fidelity that is comparable to non-separable beamforming. The resulting modified Sonic Millip3De architecture supports a frame rate of 32 volumes per second while maintaining power consumption of 15W in 45nm technology.

Next, a 3-D plane-wave imaging system that utilizes both separable beamforming and coherent compounding is presented. The resulting system has computational complexity comparable to that of a non-separable, non-compounding baseline system while significantly improving contrast-to-noise ratio and SNR. The modified Sonic Millip3De architecture is now capable of generating high resolution images at 1000 volumes per second with 9-fire-angle compounding.
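To make the delay-calculation bottleneck concrete, below is a minimal sketch of the conventional per-voxel delay-and-sum computation whose square roots the iterative method above is designed to eliminate; the geometry, element positions, and parameter values are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def delay_samples(focal_point, tx_element, rx_element, c=1540.0, fs=40e6):
    """Conventional per-voxel beamforming delay: transmit path + receive path,
    converted to samples. c is the speed of sound (m/s), fs the sampling rate (Hz).
    The square roots below are exactly the per-voxel operations that iterative
    delay-update schemes try to eliminate."""
    tx_dist = np.linalg.norm(focal_point - tx_element)   # transmit path length (m)
    rx_dist = np.linalg.norm(focal_point - rx_element)   # receive path length (m)
    return (tx_dist + rx_dist) / c * fs                  # round-trip delay in samples

# Example: one voxel and one transmit/receive element pair (positions in metres)
voxel = np.array([0.0, 0.0, 0.03])
tx = np.array([-0.005, 0.0, 0.0])
rx = np.array([0.005, 0.0, 0.0])
print(delay_samples(voxel, tx, rx))
```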
Contributors: Yang, Ming (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Karam, Lina (Committee member) / Frakes, David (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2015
156189-Thumbnail Image.png
Description
Static CMOS logic has remained the dominant design style of digital systems for more than four decades due to its robustness and near-zero standby current. Static CMOS logic circuits consist of a network of combinational logic cells and clocked sequential elements, such as latches and flip-flops, that are used for sequencing computations over time. The majority of the digital design techniques to reduce power, area, and leakage over the past four decades have focused almost entirely on optimizing the combinational logic. This work explores alternative flip-flop architectures for improving overall circuit performance, power, and area. It consists of three main sections.

First is the design of a multi-input configurable flip-flop structure with embedded logic. A conventional D-type flip-flop may be viewed as realizing an identity function, in which the output is simply the value of the input sampled at the clock edge. In contrast, the proposed multi-input flip-flop, named PNAND, can be configured to realize one of a family of Boolean functions called threshold functions. In essence, the PNAND is a circuit implementation of the well-known binary perceptron. Unlike other reconfigurable circuits, a PNAND can be configured by simply changing the assignment of signals to its inputs. Using a standard cell library of such gates, a technology mapping algorithm can be applied to transform a given netlist into one with an optimal mixture of conventional logic gates and threshold gates. This approach was used to fabricate a 32-bit Wallace tree multiplier and a 32-bit Booth multiplier in 65nm LP technology. Simulation and chip measurements show more than 30% improvement in dynamic power and more than 20% reduction in core area.

The functional yield of the PNAND reduces with geometry and voltage scaling. The second part of this research investigates the use of two mechanisms to improve the robustness of the PNAND circuit architecture: one is the use of forward and reverse body biases to change the device threshold, and the other is the use of RRAM devices for low-voltage operation.

The third part of this research focuses on the design of flip-flops with non-volatile storage. Spin-transfer torque magnetic tunnel junctions (STT-MTJ) are integrated with both the conventional D flip-flop and the PNAND circuits to implement non-volatile logic (NVL). These non-volatile-storage-enhanced flip-flops are able to save the state of the system locally when a power interruption occurs. However, manufacturing variations in the STT-MTJs and in the CMOS transistors significantly reduce the yield, leading to an overly pessimistic design and, consequently, higher energy consumption. A detailed analysis of the design trade-offs in the driver circuitry for performing backup and restore, and a novel method to design the energy-optimal driver for a given yield, are presented. Efficient designs of two nonvolatile flip-flop (NVFF) circuits are presented, in which the backup time is determined on a per-chip basis, minimizing energy wastage while satisfying the yield constraint. To achieve a yield of 98%, the conventional approach would have to expend nearly 5X more energy than the minimum required, whereas the proposed tunable approach expends only 26% more energy than the minimum. A non-volatile threshold gate architecture, NV-TLFF, is designed with the same backup and restore circuitry in 65nm technology. The embedded logic in NV-TLFF compensates for the performance overhead of NVL. This leads to the possibility of zero-overhead non-volatile datapath circuits. An 8-bit multiply-and-accumulate (MAC) unit is designed to demonstrate the performance benefits of the proposed architecture. Based on the results of HSPICE simulations, the MAC circuit with the proposed NV-TLFF cells is shown to consume at least 20% less power and area compared to the circuit designed with conventional DFFs, without sacrificing any performance.
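As an illustration of the threshold functions mentioned above, the sketch below models the binary perceptron that a PNAND realizes; the weight and threshold values are illustrative, not the actual PNAND circuit parameters.

```python
def threshold_gate(inputs, weights, T):
    """Binary perceptron / threshold function: output 1 iff the weighted sum
    of the binary inputs reaches the threshold T."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= T)

# A 2-input NAND is one member of this family: with weights (-1, -1) and
# threshold -1, the output is 0 only when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, threshold_gate((a, b), (-1, -1), -1))
```

Selecting a different weight/threshold combination (or, in the PNAND, a different assignment of signals to its inputs) picks out a different member of the threshold-function family.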
Contributors: Yang, Jinghua (Author) / Vrudhula, Sarma (Thesis advisor) / Barnaby, Hugh (Committee member) / Cao, Yu (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2018
156504-Thumbnail Image.png
Description
The Internet of Things (IoT) has become a more pervasive part of everyday life. IoT networks, such as wireless sensor networks, depend greatly on limiting unnecessary power consumption. As such, providing low-power, adaptable software can greatly improve network design. For streaming live video content, Wireless Video Sensor Network Platform compatible Dynamic Adaptive Streaming over HTTP (WVSNP-DASH) aims to revolutionize wireless segmented video streaming by providing a low-power, adaptable framework to compete with modern DASH players such as Moving Picture Experts Group (MPEG-DASH) and Apple’s Hypertext Transfer Protocol (HTTP) Live Streaming (HLS). Each segment is independently playable and does not depend on a manifest file, resulting in greatly improved power performance. My work was to show that WVSNP-DASH is capable of further power savings at the level of the wireless sensor node itself if a native capture program is implemented at the camera sensor node. I created a native capture program in the C language that fulfills the name-based segmentation requirements of WVSNP-DASH. I present this program with the intent of measuring its power consumption on a hardware test-bed in the future. To my knowledge, this is the first program to generate WVSNP-DASH playable video segments. The results show that the program could be utilized by WVSNP-DASH, but there are efficiency issues, so an outline of further improvements is also provided.
Contributors: Khan, Zarah (Author) / Reisslein, Martin (Thesis advisor) / Seema, Adolph (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2018
156845-Thumbnail Image.png
Description
The rapid improvement in computation capability has made deep convolutional neural networks (CNNs) a great success in recent years on many computer vision tasks with significantly improved accuracy. During the inference phase, many applications demand low-latency processing of one image with strict power consumption requirements, which reduces the efficiency of GPUs and other general-purpose platforms and creates opportunities for dedicated acceleration hardware, e.g. FPGAs, in which the digital circuit is customized for deep learning inference. However, deploying CNNs on portable and embedded systems is still challenging due to large data volume, intensive computation, varying algorithm structures, and frequent memory accesses. This dissertation proposes a complete design methodology and framework to accelerate the inference process of various CNN algorithms on FPGA hardware with high performance, efficiency and flexibility.

As convolution contributes most operations in CNNs, the convolution acceleration scheme significantly affects the efficiency and performance of a hardware CNN accelerator. Convolution involves multiply and accumulate (MAC) operations with four levels of loops. Without fully studying the convolution loop optimization before the hardware design phase, the resulting accelerator can hardly exploit the data reuse and manage data movement efficiently. This work overcomes these barriers by quantitatively analyzing and optimizing the design objectives (e.g. memory access) of the CNN accelerator based on multiple design variables. An efficient dataflow and hardware architecture of CNN acceleration are proposed to minimize the data communication while maximizing the resource utilization to achieve high performance.

Although great performance and efficiency can be achieved by customizing the FPGA hardware for each CNN model, significant efforts and expertise are required, leading to long development time, which makes it difficult to catch up with the rapid development of CNN algorithms. In this work, we present an RTL-level CNN compiler that automatically generates customized FPGA hardware for the inference tasks of various CNNs, in order to enable high-level fast prototyping of CNNs from software to FPGA while still keeping the benefits of low-level hardware optimization. First, a general-purpose library of RTL modules is developed to model different operations at each layer. The integration and dataflow of physical modules are predefined in the top-level system template and reconfigured during compilation for a given CNN algorithm. The runtime control of layer-by-layer sequential computation is managed by the proposed execution schedule so that even highly irregular and complex network topologies, e.g. GoogLeNet and ResNet, can be compiled. The proposed methodology is demonstrated with various CNN algorithms, e.g. NiN, VGG, GoogLeNet and ResNet, on two different standalone FPGAs, achieving state-of-the-art performance.

Based on the optimized acceleration strategy, there are still a lot of design options, e.g. the degree and dimension of computation parallelism, the size of on-chip buffers, and the external memory bandwidth, which impact the utilization of computation resources and data communication efficiency, and finally affect the performance and energy consumption of the accelerator. The large design space of the accelerator makes it impractical to explore the optimal design choice during the real implementation phase. Therefore, a performance model is proposed in this work to quantitatively estimate the accelerator performance and resource utilization. By this means, the performance bottleneck and design bound can be identified and the optimal design option can be explored early in the design phase.
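For reference, the multi-level loop structure mentioned above is the plain multiply-accumulate formulation of convolution sketched below; this naive version is the baseline that loop tiling, unrolling, and reordering in an accelerator are meant to improve, not the dissertation's optimized dataflow.

```python
import numpy as np

def conv_layer_naive(inputs, weights):
    """Naive CNN convolution as nested multiply-accumulate loops (stride 1, no padding).
    inputs: (Cin, H, W); weights: (Cout, Cin, K, K). The loop levels an accelerator
    optimizes map onto output feature maps, spatial positions, input feature maps,
    and the KxK kernel window."""
    Cin, H, W = inputs.shape
    Cout, _, K, _ = weights.shape
    out = np.zeros((Cout, H - K + 1, W - K + 1))
    for co in range(Cout):                      # loop over output feature maps
        for y in range(H - K + 1):              # loop over output rows
            for x in range(W - K + 1):          # loop over output columns
                acc = 0.0
                for ci in range(Cin):           # loop over input feature maps
                    for ky in range(K):         # kernel window rows
                        for kx in range(K):     # kernel window columns
                            acc += inputs[ci, y + ky, x + kx] * weights[co, ci, ky, kx]
                out[co, y, x] = acc
    return out
```

How these loops are tiled and which of them are unrolled in hardware determines data reuse and memory traffic, which is the design space the quantitative analysis above explores.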
Contributors: Ma, Yufei (Author) / Vrudhula, Sarma (Thesis advisor) / Seo, Jae-Sun (Thesis advisor) / Cao, Yu (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2018
187820-Thumbnail Image.png
Description
With the advent of new advanced analysis tools and access to related published data, it is getting more difficult for data owners to suppress private information from published data while still providing useful information. This dual problem of providing useful, accurate information and protecting it at the same time has been challenging, especially in healthcare. Data owners lack an automated resource that provides layers of protection on a published dataset with validated statistical values for usability. Differential privacy (DP) has gained a lot of attention in the past few years as a solution to the above-mentioned dual problem. DP is a statistical anonymity model that can protect the data from adversarial observation while still providing intended usage. This dissertation introduces a novel DP protection mechanism called Inexact Data Cloning (IDC), which simultaneously protects and preserves information in published data while conveying source data intent. IDC preserves the privacy of the records by converting the raw data records into clonesets. The clonesets then pass through a classifier that removes potentially compromising clonesets, retaining only good inexact clonesets. The mechanism of IDC depends on a set of privacy protection metrics called differential privacy protection metrics (DPPM), which represent the overall protection level. IDC uses two novel performance values, the differential privacy protection score (DPPS) and the clone classifier selection percentage (CCSP), to estimate the privacy level of protected data. In support of using IDC as a viable data security product, a software tool-chain prototype, the differential privacy protection architecture (DPPA), was developed to utilize IDC. DPPA employs the engineered IDC security mechanism and acts as a hub that facilitates a market for DP data-security mechanisms. DPPA works by incorporating standalone IDC mechanisms and provides automation, IDC-protected published datasets, and statistically verified IDC dataset diagnostic reports. DPPA is currently undergoing functional and operational benchmarking that quantifies the DP protection of a given published dataset. The DPPA tool was recently used to test a couple of health datasets, and the test results further validate the feasibility of the IDC mechanism.
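For readers unfamiliar with DP, the standard definition that any DP mechanism is measured against is reproduced below; this is textbook background, not the IDC mechanism itself.

```latex
% Standard \varepsilon-differential privacy definition: a randomized mechanism M is
% \varepsilon-DP if, for all datasets D, D' differing in a single record and all
% output sets S,
\Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\,\Pr[\,M(D') \in S\,].
```

Smaller ε means an observer can infer less about any single record from the published output.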
Contributors: Thomas, Zelpha (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Banerjee, Ayan (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2023
161843-Thumbnail Image.png
Description
In this thesis, the applications of deep learning in the analysis, detection and classification of medical imaging datasets were studied, with a focus on datasets having a limited sample size. A combined machine learning-deep learning model was designed to classify one small dataset, a prostate cancer dataset provided by Mayo Clinic, Arizona. A deep learning model was implemented to extract imaging features, followed by a machine learning classifier for prostate cancer diagnosis. The results were compared against models trained on texture-based features, namely gray level co-occurrence matrix (GLCM) and Gabor features. Some of the challenges of performing diagnosis on medical imaging datasets with limited sample sizes have been identified. Lastly, a set of future works has been proposed. Keywords: Deep learning, radiology, transfer learning, convolutional neural network.
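As a pointer to how the texture-based baseline features are commonly computed, here is a hedged scikit-image sketch of GLCM feature extraction; the distances, angles, and properties chosen are illustrative assumptions, not the exact configuration used in the thesis.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_8bit):
    """Gray level co-occurrence matrix (GLCM) texture features of the kind used
    as a hand-crafted baseline: contrast, homogeneity, energy, and correlation,
    averaged over four offset directions."""
    glcm = graycomatrix(image_8bit, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Example with a random 8-bit "image" standing in for a radiology slice
print(glcm_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8)))
```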
Contributors: Sarkar, Suryadipto (Author) / Wu, Teresa (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Silva, Alvin (Committee member) / Arizona State University (Publisher)
Created: 2021
156306-Thumbnail Image.png
Description
Software-defined radio provides users with a low-cost and flexible platform for implementing and studying advanced communications and remote sensing applications. Two such applications include unmanned aerial system-to-ground communications channel and joint sensing and communication systems. In this work, these applications are studied.

In the first part, unmanned aerial system-to-ground communications channel models are derived from empirical data collected from software-defined radio transceivers in residential and mountainous desert environments using a small (< 20 kg) unmanned aerial system during low-altitude flight (< 130 m). The Kullback-Leibler divergence measure was employed to characterize model mismatch from the empirical data. Using this measure, the derived models were shown to accurately describe the underlying data.

In the second part, an experimental joint sensing and communications system is implemented using a network of software-defined radio transceivers. A novel co-design receiver architecture is presented and demonstrated within a three-node joint multiple access system topology consisting of an independent radar and communications transmitter along with a joint radar and communications receiver. The receiver tracks an emulated target moving along a predefined path and simultaneously decodes a communications message. Experimental system performance bounds are characterized jointly using the communications channel capacity and novel estimation information rate.
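To make the model-validation step above concrete, below is a small, illustrative histogram-based estimate of the Kullback-Leibler divergence between measured samples and a candidate model density; the Rayleigh example and binning choices are assumptions, not the thesis's exact procedure.

```python
import numpy as np

def kl_divergence_from_samples(samples, model_pdf, bins=100):
    """Histogram-based estimate of D_KL(empirical || model): a simple measure of
    model mismatch between measured data and a fitted channel model."""
    counts, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    p = counts * width                          # empirical bin probabilities
    q = model_pdf(centers) * width              # model bin probabilities
    mask = (p > 0) & (q > 0)                    # avoid log(0)
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Example: measured fading amplitudes vs. a fitted Rayleigh model (scale = 1)
rng = np.random.default_rng(0)
data = rng.rayleigh(scale=1.0, size=10_000)
rayleigh_pdf = lambda x: x * np.exp(-x**2 / 2.0)
print(kl_divergence_from_samples(data, rayleigh_pdf))
```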
Contributors: Gutierrez, Richard (Author) / Bliss, Daniel W (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Ogras, Umit Y. (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2018
171654-Thumbnail Image.png
Description
The advancement and marked increase in the use of computing devices in health care for large-scale and personal medical use has transformed the field of medicine and health care into a data-rich domain. This surge in the availability of data has allowed domain experts to investigate, study and discover inherent patterns in diseases from new perspectives and, in turn, further the field of medicine. Storage and analysis of this data in real time aids in enhancing the response time and efficiency of doctors and health care specialists. However, due to the time-critical nature of most life-threatening diseases, there is a growing need to make informed decisions prior to the occurrence of any fatal outcome. Alongside time sensitivity, analyzing data specific to diseases and their effects on an individual basis leads to more efficient prognosis and rapid deployment of cures. The primary challenge in addressing both of these issues arises from the time-varying and time-sensitive nature of the data being studied and in the ability to successfully predict anomalous events using only observed data. This dissertation introduces adaptive machine learning algorithms that aid in the prediction of anomalous situations arising due to abnormalities present in patients diagnosed with certain types of diseases. Emphasis is given to the adaptation and development of algorithms on an individual basis to further the accuracy of all predictions made. The main objectives are to learn the underlying representation of the data using empirical methods and enhance it using domain knowledge. The learned model is then utilized as a guide for statistical machine learning methods to predict the occurrence of anomalous events in the near future. Further enhancement of the learned model is achieved by means of tuning the objective function of the algorithm to incorporate domain knowledge. Along with anomaly forecasting using multi-modal data, this dissertation also investigates the use of univariate time series data towards the prediction of onset of diseases using Bayesian nonparametrics.
Contributors: Das, Subhasish (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Indic, Premananda (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2022
157937-Thumbnail Image.png
Description
Signals compressed using classical compression methods can be recovered by brute force (i.e., searching for non-zero entries component-wise). However, such sparse solutions require combinatorial searches with high computational cost. In this thesis, instead, two Bayesian approaches are considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is, therefore, a different (i.e. misspecified) model. To estimate the posterior distribution for the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal in a Bayesian framework is one class of algorithms for solving the sparse problem; all such classes of algorithms aim to get around the high computational cost associated with combinatorial searches. Compressive sensing (CS) is the widely used term for this sparse recovery problem and its applications, such as magnetic resonance imaging (MRI), image acquisition in radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function using point estimation, or by recovering a distribution of the sparse vector using Bayesian estimation. Although the Bayesian framework provides an extra degree of freedom to assume a distribution that is directly applicable to the problem of interest, it is hard to find a theoretical guarantee of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution. In the simulation study, a general lower Bayesian Cramér-Rao bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated using the mean square error (MSE) performance of the aforementioned algorithms. Also, a quantification of the performance in terms of gains versus losses is introduced as one main finding of this report.
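For reference, the standard form of the underdetermined measurement model and the Bernoulli-Gaussian (spike-and-slab) prior described above is sketched below; the hyperparameter names are illustrative.

```latex
% Underdetermined noisy measurement model (standard compressive-sensing setup):
y = A x + n, \qquad A \in \mathbb{R}^{M \times N},\; M < N, \qquad n \sim \mathcal{N}(0, \sigma^{2} I),
% Bernoulli-Gaussian prior on each coefficient, with activity probability \lambda
% and slab variance \sigma_x^2 (illustrative hyperparameter names):
p(x_i) = (1-\lambda)\,\delta(x_i) + \lambda\,\mathcal{N}(x_i;\, 0, \sigma_x^{2}).
```

The Gamma-Normal prior used for the misspecified SBL model is the standard hierarchical Gaussian prior with Gamma-distributed precision, which has no point mass at zero and therefore differs from the BG generative model.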
Contributors: Alhowaish, Abdulhakim (Author) / Richmond, Christ D (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Sankar, Lalitha (Committee member) / Arizona State University (Publisher)
Created: 2019