Matching Items (8)
Description
With increasing transistor counts and shrinking feature sizes, reducing power consumption has become a major design constraint. This has given rise to aggressive architectural changes for on-chip power management and to the rapid development of energy-efficient hardware accelerators. Accordingly, the objective of this research is to help software developers leverage these hardware techniques and improve the energy efficiency of the system. To achieve this, I propose two solutions for the Linux kernel.

Optimal use of these architectural enhancements to achieve greater energy efficiency requires accurate modeling of processor power consumption. Although many models in the literature capture processor power consumption, there is a lack of models that capture power consumption at the task level. Task-level energy models are a requirement for an operating system (OS) to perform real-time power management, because the OS time-multiplexes tasks to enable sharing of hardware resources. I propose a detailed design methodology for constructing an architecture-agnostic task-level power model and incorporating it into a modern operating system to build an online task-level power profiler. The profiler is implemented inside the latest Linux kernel and validated for the Intel Sandy Bridge processor. It has a negligible overhead of less than 1% of hardware resource consumption. The profiler's power prediction was demonstrated for various application benchmarks from SPEC and PARSEC with less than 4% error. I also demonstrate the importance of the proposed profiler for emerging architectural techniques through use-case scenarios, including heterogeneous computing and fine-grained per-core DVFS.

Along with architectural enhancements in general-purpose processors to improve energy efficiency, hardware accelerators such as coarse-grained reconfigurable architectures (CGRA) are gaining popularity. Unlike vector processors, which rely on data parallelism, a CGRA can provide greater flexibility and compiler-level control, making it more suitable for the present SoC environment. To provide a streamlined development environment for CGRA, I propose a flexible framework in Linux for CGRA design space exploration. With accurate and flexible hardware models, fine-grained integration with an accurate architectural simulator, and Linux memory management and DMA support, a user can carry out a wide range of experiments on CGRA in a full-system environment.
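As a concrete illustration of the counter-based modeling idea described above, the following is a minimal sketch of a linear task-level power model driven by per-task performance-counter rates. The event names, coefficients, and idle-power value are illustrative assumptions, not the model actually constructed in the thesis.

```python
# Illustrative sketch only: a linear, counter-based task power model of the
# kind the abstract describes. Event names and coefficients are assumptions.

# Per-event power weights (W per event/sec), obtained offline by regressing
# measured package power against performance-counter rates.
COEFFS = {
    "instructions": 2.1e-9,
    "llc_misses":   1.5e-8,
    "branches":     4.0e-10,
}
IDLE_POWER = 3.0  # W, assumed static/uncore baseline


def task_power(counter_deltas, interval_s):
    """Estimate a task's average power over one scheduling interval.

    counter_deltas: per-task counter increments read at context switch
    interval_s:     time the task spent on the CPU in this interval (seconds)
    """
    rates = {evt: cnt / interval_s for evt, cnt in counter_deltas.items()}
    dynamic = sum(COEFFS[evt] * rates.get(evt, 0.0) for evt in COEFFS)
    return IDLE_POWER + dynamic


# Example: counters sampled for one task over a 10 ms slice.
print(task_power({"instructions": 2.5e7, "llc_misses": 1.2e5,
                  "branches": 5.0e6}, 0.010))
```

In an OS-level profiler of this kind, the counter deltas would be read at each context switch so that energy can be attributed to individual tasks.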
Contributors: Desai, Digant Pareshkumar (Author) / Vrudhula, Sarma (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Tracking a time-varying number of targets is a challenging dynamic state estimation problem whose complexity is intensified under low signal-to-noise ratio (SNR) or high clutter conditions. This is important, for example, when tracking multiple, closely spaced targets moving in the same direction, such as a convoy of low-observable vehicles moving through a forest, or multiple targets moving in a crisscross pattern. The SNR in these applications is usually low, as the reflected signals from the targets are weak or the noise level is very high. An effective approach for detecting and tracking a single target under low SNR conditions is the track-before-detect filter (TBDF), which uses unthresholded measurements. However, the TBDF has only been used to track a small fixed number of targets at low SNR.

This work proposes a new multiple-target TBDF approach to track a dynamically varying number of targets under the recursive Bayesian framework. For a given maximum number of targets, the state estimates are obtained by estimating the joint multiple-target posterior probability density function under all possible target existence combinations. Estimates of the corresponding target existence combination probabilities and the target existence probabilities are also derived. A feasible sequential Monte Carlo (SMC) based implementation algorithm is proposed. The approximation accuracy of the SMC method with a reduced number of particles is improved by an efficient proposal density function that partitions the multiple-target space into a single-target space.

The proposed multiple-target TBDF method is extended to track targets in sea clutter using highly time-varying radar measurements. A generalized likelihood function for closely spaced multiple targets in compound-Gaussian sea clutter is derived, together with the maximum likelihood estimate of the model parameters using an iterative fixed-point algorithm. The TBDF performance is improved by proposing a computationally feasible method to estimate the space-time covariance matrix of rapidly varying sea clutter. The method applies the Kronecker product approximation to the covariance matrix and uses particle filtering to solve the resulting dynamic state-space model formulation.
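As a rough illustration of the sequential Monte Carlo machinery such a filter builds on, the following is a minimal single-target bootstrap particle-filter step that weights particles by an unthresholded measurement likelihood. The constant-velocity model, Gaussian likelihood, and all parameter values are assumptions for illustration only; the thesis's multi-target TBDF and proposal density are considerably more involved.

```python
# Minimal bootstrap particle-filter step of the kind a track-before-detect
# filter builds on: particles are weighted directly by a raw (unthresholded)
# measurement likelihood.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                      # number of particles
dt, q, r = 1.0, 0.1, 1.0      # time step, process noise, measurement noise

# State: [position, velocity]; start with a diffuse prior.
particles = rng.normal(0.0, 5.0, size=(N, 2))
weights = np.full(N, 1.0 / N)

def step(particles, weights, z):
    # Propagate with a constant-velocity motion model.
    particles[:, 0] += dt * particles[:, 1] + rng.normal(0, q, N)
    particles[:, 1] += rng.normal(0, q, N)
    # Weight by the raw measurement likelihood (no detection threshold).
    lik = np.exp(-0.5 * ((z - particles[:, 0]) / r) ** 2)
    weights = weights * lik
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

particles, weights = step(particles, weights, z=1.3)
print(np.average(particles[:, 0], weights=weights))   # posterior mean position
```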
Contributors: Ebenezer, Samuel P (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Bliss, Daniel (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Genomic and proteomic sequences, which are in the form of deoxyribonucleic acid (DNA) and amino acids respectively, play a vital role in the structure, function and diversity of every living cell. As a result, various genomic and proteomic sequence processing methods have been proposed from diverse disciplines, including biology, chemistry, physics, computer science and electrical engineering. In particular, signal processing techniques have been applied to the problems of sequence querying and alignment, which compare and classify regions of similarity in the sequences based on their composition. Although current approaches obtain results that can be attributed to key biological properties, they require pre-processing and lack robustness to sequence repetitions. In addition, these approaches do not provide much support for efficiently querying sub-sequences, a process that is essential for tracking localized database matches. In this work, a query-based alignment method for biological sequences is first proposed that maps sequences to time-domain waveforms before processing the waveforms for alignment in the time-frequency plane. The mapping uses waveforms, such as time-domain Gaussian functions, with unique sequence representations in the time-frequency plane. The proposed alignment method employs a robust querying algorithm that utilizes a time-frequency signal expansion whose basis function is matched to the basic waveform in the mapped sequences. The resulting WAVEQuery approach is demonstrated for both DNA and protein sequences using the matching pursuit decomposition as the signal basis expansion. The alignment localization of WAVEQuery is specifically evaluated over repetitive database segments, and it is operable in real time without pre-processing. It is demonstrated that WAVEQuery significantly outperforms the biological sequence alignment method BLAST for DNA queries with repetitive segments. A generalized version of the WAVEQuery approach with the metaplectic transform is also described for protein sequence structure prediction.

For protein alignment, it is often necessary to compare not only the one-dimensional (1-D) primary sequence structure but also the secondary and tertiary three-dimensional (3-D) space structures. This is done after considering the conformations in the 3-D space due to the degrees of freedom of these structures. As a result, a novel directionality-based 3-D waveform mapping for 3-D protein structures is also proposed and used to compare protein structures with a matched filter approach. By incorporating a 3-D time axis, a highly localized Gaussian-windowed chirp waveform is defined, and the amino acid information is mapped to the chirp parameters, which are then directly used to obtain directionality in the 3-D space. This mapping is unique in that additional characteristic protein information, such as hydrophobicity, which relates the sequence to the structure, can be added as another representation parameter. The additional parameter helps track similarities over local segments of the structure, thus enabling classification of distantly related proteins that have partial structural similarities. This approach is successfully tested for pairwise alignments over full-length structures, alignments over multiple structures to form phylogenetic trees, and alignments over local segments. Basic classification over protein structural classes using directional descriptors of the protein structure is also performed.
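To illustrate the kind of sequence-to-waveform mapping described above, the following sketch assigns each DNA base a Gaussian-windowed tone with its own frequency, so a sequence becomes a superposition of time-shifted atoms. The per-base frequencies, atom width, and spacing are arbitrary assumptions, not the parameters used in WAVEQuery.

```python
# Illustrative sketch of a sequence-to-waveform mapping: each DNA base is
# assigned a Gaussian atom with its own frequency, and the sequence becomes a
# sum of time-shifted atoms. All numeric parameters are assumed values.
import numpy as np

FS = 1000.0            # samples per second
BASE_FREQ = {"A": 50.0, "C": 100.0, "G": 150.0, "T": 200.0}  # Hz, assumed
SIGMA = 0.01           # Gaussian atom width (s)
SPACING = 0.05         # time shift between consecutive bases (s)

def sequence_to_waveform(seq):
    duration = SPACING * (len(seq) + 1)
    t = np.arange(0.0, duration, 1.0 / FS)
    x = np.zeros_like(t)
    for k, base in enumerate(seq):
        center = SPACING * (k + 1)
        env = np.exp(-0.5 * ((t - center) / SIGMA) ** 2)
        x += env * np.cos(2 * np.pi * BASE_FREQ[base] * (t - center))
    return t, x

t, x = sequence_to_waveform("GATTACA")
print(len(t), float(x.max()))
```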
Contributors: Ravichandran, Lakshminarayan (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas S (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Lacroix, Zoé (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Parkinson’s disease (PD) is a neurological disorder with complicated and disabling motor and non-motor symptoms. Its clinical assessment is difficult and expensive, and it depends on patient diaries and the neurologist’s subjective rating of clinical scales. Objective, accurate, and continuous patient monitoring has become possible with advances in mobile and portable equipment. Consequently, a significant amount of work has been done to explore new cost-effective and objective assessment methods for PD symptoms. For example, smart technologies, such as wearable sensors and optical motion-capture systems, have been used to analyze a PD patient’s symptoms to assess disease progression and even to detect signs in their nascent stage for early diagnosis of PD.

This review focuses on the use of modern equipment for PD applications developed in the last decade. Four significant fields of research were identified: Assistance to Diagnosis, Prognosis or Monitoring of Symptoms and their Severity, Predicting Response to Treatment, and Assistance to Therapy or Rehabilitation. This study reviews papers published between January 2008 and December 2018 in the following four databases: PubMed Central, Science Direct, IEEE Xplore and MDPI. After removing unrelated articles, articles published in languages other than English, duplicate entries, and other articles that did not fulfill the selection criteria, 778 papers were manually investigated and included in this review. A general overview of PD applications, the devices used, and the aspects monitored for PD management is provided in this systematic review.
Contributors: Deb, Ranadeep (Author) / Ogras, Umit Y. (Thesis advisor) / Shill, Holly (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Hardware implementation of deep neural networks has gained significant importance in recent years. Deep neural networks are mathematical models that use learning algorithms inspired by the brain. Numerous deep learning algorithms, such as multi-layer perceptrons (MLP), have demonstrated human-level recognition accuracy in image and speech classification tasks. These networks are built from multiple layers of processing elements, called neurons, with many connections, called synapses, between them. Hence, they involve operations that exhibit a high level of parallelism, making them computationally and memory intensive. Constrained by computing resources and memory, most applications require a neural network that utilizes less energy. Energy-efficient implementation of these computationally intense algorithms on neuromorphic hardware demands many architectural optimizations. One such optimization is reducing the network size using compression, and several studies have investigated compression by introducing element-wise or row-/column-/block-wise sparsity via pruning and regularization. Additionally, numerous recent works have concentrated on reducing the precision of activations and weights, with some reducing them to a single bit. However, combining various sparsity structures with binarized or very-low-precision (2-3 bit) neural networks has not been comprehensively explored. Output activations in these deep neural network algorithms are typically non-binary, making it difficult to exploit sparsity. On the other hand, biologically realistic models like spiking neural networks (SNN) closely mimic the operations in biological nervous systems and open new avenues for brain-like cognitive computing. These networks deal with binary spikes, and they can exploit input-dependent sparsity or redundancy to dynamically scale the amount of computation, in turn leading to energy-efficient hardware implementation.

This work discusses a configurable spiking neuromorphic architecture that supports multiple hidden layers while exploiting hardware reuse. It also presents design techniques for minimum-area/-energy DNN hardware with minimal degradation in accuracy. Area, performance, and energy results for the DNN and SNN hardware are reported for the MNIST dataset. The neuromorphic hardware designed for the SNN algorithm in 28nm CMOS demonstrates high classification accuracy (>98% on MNIST) and low energy (51.4-773 nJ per classification). The optimized DNN hardware designed in 40nm CMOS, which combines 8X structured compression and 3-bit weight precision, achieves 98.4% accuracy at 33 nJ per classification.
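As a small illustration of combining sparsity with very low precision, the following sketch applies magnitude-based pruning followed by a symmetric 3-bit uniform weight quantizer. The matrix size, sparsity target, and quantizer form are assumptions for illustration and do not reproduce the hardware-oriented compression scheme evaluated in the thesis.

```python
# Minimal sketch of two techniques mentioned above: magnitude-based pruning
# (element-wise sparsity) followed by uniform 3-bit weight quantization.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.5, size=(128, 64))   # a dense weight matrix (assumed size)

# 1) Prune: zero out the smallest-magnitude weights (here, 75% sparsity).
thresh = np.quantile(np.abs(W), 0.75)
W_pruned = np.where(np.abs(W) >= thresh, W, 0.0)

# 2) Quantize the surviving weights to 3 bits.
levels = 2 ** 3                            # 3-bit precision
q_max = levels // 2 - 1                    # symmetric integer range -3..+3
scale = np.abs(W_pruned).max() / q_max
W_q = np.clip(np.round(W_pruned / scale), -q_max, q_max) * scale

print("sparsity:", float((W_q == 0).mean()))
print("max abs error:", float(np.abs(W_q - W_pruned).max()))
```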
Contributors: Kolala Venkataramanaiah, Shreyas (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Characterization of standard cells is one of the crucial steps in IC design. Scaling of CMOS technology has led to timing uncertainties such as cross-coupling noise due to interconnect parasitics, skew variation due to voltage jitter, and the proximity effect of multiple input switching (MIS). Due to increased operating frequency and process variation, the probability of MIS occurrence and setup/hold failure within a clock cycle is high. The delay variation due to the temporal proximity of MIS is significant for multiple-input gates in the standard cell library. The shortest paths are affected by MIS due to the lack of an averaging effect. Thus, sensitive designs such as SRAM row and column decoder circuits have a high probability of MIS impact. Traditional static timing analysis (STA) assumes a single input switching (SIS) scenario, which is not adequate to capture gate delay accurately, as the delay variation due to the temporal proximity of MIS is ~15%-45%. Considering all possible MIS scenarios for characterization, however, is computationally intensive and produces a huge data volume. Various modeling techniques have been developed for the characterization of the MIS effect. Some techniques require coefficient extraction through multiple SPICE simulations and do not discuss speed-up approaches, or they apply models with complicated algorithms to account for the MIS effect. The STA flow accounts for process variation through an uncertainty parameter to improve product yield. Some MIS delay variability models account for MIS variation through a table look-up approach, resulting in huge data volume, or do not consider propagation of RAT in the design flow. Thus, there is a need for a methodology to model the MIS effect with fewer computational resources and to integrate the effect into the design flow without trading off accuracy. A finite-point based analytical model for the MIS effect is proposed for multiple-input logic gates, and a similar approach is extended to setup/hold characterization of sequential elements. Integration of MIS variation into the design flow is explored. The proposed methodology is validated using benchmark circuits at the 45nm technology node under process variation. Experimental results show a significant reduction in runtime and data volume with ~10% error compared to SPICE simulation.
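To illustrate the finite-point idea in the abstract, the following sketch characterizes a gate's MIS-affected delay at only a few input-arrival skews and interpolates for any other skew, instead of tabulating every MIS scenario. The skew points, delay values, and linear interpolation are assumptions for illustration, not the proposed analytical model.

```python
# Illustrative sketch: characterize MIS delay at a few skew points (via SPICE)
# and interpolate for arbitrary skews. All numbers below are made up.
import numpy as np

# Assumed SPICE-characterized (skew, delay) points for a 2-input gate, in ps.
skew_pts  = np.array([-40.0, -10.0, 0.0, 10.0, 40.0])   # t_arrival(B) - t_arrival(A)
delay_pts = np.array([ 22.0,  26.5, 30.0, 27.0, 23.0])  # output delay at each skew

def mis_delay(skew_ps):
    """Estimate gate delay for an arbitrary input skew by interpolation."""
    return float(np.interp(skew_ps, skew_pts, delay_pts))

# Near-simultaneous switching (skew ~ 0) shows the largest MIS push-out.
for s in (-25.0, 0.0, 5.0):
    print(s, mis_delay(s))
```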
Contributors: Subramaniam, Anupama R (Author) / Cao, Yu (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Roveda, Janet (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Object tracking refers to the problem of estimating a moving object's time-varying parameters that are indirectly observed in measurements at each time step. Increased noise and clutter in the measurements reduce estimation accuracy, as they increase the uncertainty of tracking in the field of view. Whereas tracking is performed using a Bayesian filter, a Bayesian smoother can be used to refine estimates of states that occurred before the current time. In practice, smoothing is widely used to improve state estimation or correct data-association errors, and it can lead to significantly better estimation performance because it reduces the impact of noise and clutter. In this work, a single-object tracking method is proposed that integrates Kalman filtering and smoothing with thresholding to remove unreliable measurements. The new method targets conditions in which the noise and clutter in the measurements are high, and its main goal is to identify such measurements using a moving-average filter and a thresholding rule in order to improve estimation. Thus, the proposed method is designed to reduce estimation errors that result from measurements corrupted by high noise and clutter. Simulations demonstrate the improved performance of the new method compared with smoothing without thresholding; the root-mean-square error in estimating the object state parameters is shown to be especially reduced under high-noise conditions.
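The following is a minimal one-dimensional sketch of the idea described above: Kalman filtering followed by Rauch-Tung-Striebel smoothing, with a moving-average threshold that flags outlying measurements as unreliable so the update step is skipped for them. The scalar motion model, threshold rule, and parameter values are illustrative assumptions rather than the thesis's exact design.

```python
# 1-D sketch: Kalman filter + RTS smoother with a moving-average gate that
# skips the update for measurements judged unreliable. Parameters are assumed.
import numpy as np

F, H, Q, R = 1.0, 1.0, 0.01, 1.0      # scalar constant-position model

def filter_smooth(z, window=5, k_sigma=3.0):
    n = len(z)
    x_f, p_f, x_p, p_p = np.zeros(n), np.zeros(n), np.zeros(n), np.zeros(n)
    x, p = z[0], 1.0
    for t in range(n):
        x_p[t], p_p[t] = F * x, F * p * F + Q          # predict
        lo = max(0, t - window)
        ma = np.mean(z[lo:t + 1])                      # moving average of recent data
        if abs(z[t] - ma) <= k_sigma * np.sqrt(R):     # measurement deemed reliable
            k = p_p[t] * H / (H * p_p[t] * H + R)      # Kalman gain
            x = x_p[t] + k * (z[t] - H * x_p[t])
            p = (1 - k * H) * p_p[t]
        else:                                          # unreliable: keep prediction
            x, p = x_p[t], p_p[t]
        x_f[t], p_f[t] = x, p
    # Rauch-Tung-Striebel backward smoothing pass.
    x_s, p_s = x_f.copy(), p_f.copy()
    for t in range(n - 2, -1, -1):
        g = p_f[t] * F / p_p[t + 1]
        x_s[t] = x_f[t] + g * (x_s[t + 1] - x_p[t + 1])
        p_s[t] = p_f[t] + g * (p_s[t + 1] - p_p[t + 1]) * g
    return x_s

z = np.array([0.1, 0.0, 5.0, 0.2, -0.1, 0.3, 0.1])     # one clutter-like outlier
print(filter_smooth(z))
```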
Contributors: Seo, Yongho (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel W (Committee member) / Chakrabarti, Chaitali (Committee member) / Moraffah, Bahman (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Graphs are one of the key data structures for many real-world computing applications such as machine learning, social networks, and genomics. The main challenges of graph processing include the difficulty of parallelizing the workload, which results in workload imbalance, poor memory locality, and a very large number of memory accesses. This makes large-scale graph processing very expensive.

This thesis presents implementations of a select set of graph kernels on a multi-core architecture, Transmuter. The kernels are Breadth-First Search (BFS), Page Rank (PR), and Single Source Shortest Path (SSSP). Transmuter is a multi-tiled architecture with 4 tiles and 16 general processing elements (GPE) per tile that supports a two-level cache hierarchy. All graph processing kernels were implemented on Transmuter using the gem5 architectural simulator.

The key pre-processing steps for improving performance are static partitioning by destination and balancing the workload among the processing cores. Results obtained by processing partitioned graphs show almost a 3x performance improvement over un-partitioned graphs. The choice of data structure also plays an important role in the amount of storage space consumed and the amount of synchronization required in a parallel implementation. Here, the compressed sparse column data format was used. BFS and SSSP are frontier-based algorithms, where a frontier represents the subset of vertices that are active during the current iteration; they were implemented using a Boolean frontier array data structure. PR is an iterative algorithm where all vertices are active at all times.

The performance of the different Transmuter implementations for the 14nm node was evaluated using metrics such as power consumption (Watt), Giga Operations Per Second (GOPS), GOPS/Watt, and L1/L2 cache misses. GOPS/W numbers for graphs with 10k nodes and 10k edges are 33 for BFS, 477 for PR, and 10 for SSSP. Frontier-based algorithms have much lower GOPS/W than iterative algorithms such as PR, because all nodes in Page Rank are active at all points in time. For all three kernel implementations, the L1 cache miss rates are quite low, while the L2 cache hit rates are high.
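As a reference for the frontier-based formulation described above, the following sketch runs BFS over a graph stored in compressed sparse column (CSC) form using a Boolean frontier array. The tiny example graph is made up, and this single-threaded version only illustrates the data structures; it is not the Transmuter implementation.

```python
# Minimal frontier-based BFS over a CSC graph with a Boolean frontier array.
# The example graph is an assumption for illustration only.
import numpy as np

# CSC for a 5-vertex graph: col_ptr[v]..col_ptr[v+1] indexes the in-edges of v,
# i.e. row_idx holds the source vertices of edges pointing into column v.
col_ptr = np.array([0, 1, 3, 4, 6, 7])
row_idx = np.array([1, 0, 2, 0, 1, 4, 3])

def bfs_csc(col_ptr, row_idx, src):
    n = len(col_ptr) - 1
    level = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)     # Boolean frontier array
    frontier[src], level[src] = True, 0
    depth = 0
    while frontier.any():
        next_frontier = np.zeros(n, dtype=bool)
        for v in range(n):                 # pull step over unvisited vertices
            if level[v] != -1:
                continue
            parents = row_idx[col_ptr[v]:col_ptr[v + 1]]
            if frontier[parents].any():    # any in-neighbor in the frontier?
                level[v] = depth + 1
                next_frontier[v] = True
        frontier, depth = next_frontier, depth + 1
    return level

print(bfs_csc(col_ptr, row_idx, src=0))    # BFS levels from vertex 0
```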
Contributors: Renganathan, Srinidhi (Author) / Chakrabarti, Chaitali (Thesis advisor) / Shrivastava, Aviral (Committee member) / Mudge, Trevor (Committee member) / Arizona State University (Publisher)
Created: 2019