Matching Items (49)
Description
Process variations have become increasingly important for scaled technologies starting at 45nm. The increased variations are primarily due to random dopant fluctuations, line-edge roughness and oxide thickness fluctuation. These variations greatly impact all aspects of circuit performance and pose a grand challenge to future robust IC design. To improve robustness, an efficient methodology is required that accounts for the effect of variations in the design flow. Analyzing the timing variability of complex circuits with HSPICE simulations is very time consuming. This thesis proposes an analytical model to predict variability in CMOS circuits that is both fast and accurate. Several analytical models exist to estimate nominal delay performance, but very little work has been done to accurately model delay variability. The proposed model is comprehensive and estimates both nominal delay and variability as a function of transistor width, load capacitance and transition time. First, models are developed for library gates, and their accuracy is verified against HSPICE simulations for the 45nm and 32nm technology nodes; the difference between predicted and simulated σ/μ for the library gates is less than 1%. Next, the accuracy of the nominal delay model is verified for larger circuits, including the ISCAS'85 benchmark circuits. For 45nm technology, the model's predictions are within 4% of HSPICE simulated results while taking a small fraction of the simulation time. Delay variability is analyzed for various paths, and it is observed that non-critical paths can become critical because of Vth variation. Analysis of the shortest paths shows that the rate of hold violations increases sharply with increasing Vth variation.
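As a quick illustration of why gate-level variability models compose into path-level predictions, the sketch below propagates per-gate delay statistics along a path under the common assumption of independent Gaussian Vth-induced delay variations, and checks the analytical σ/μ against a Monte Carlo run standing in for HSPICE. The gate delays and σ/μ values are made up for illustration; the thesis derives them from transistor width, load capacitance and transition time.

```python
import numpy as np

# Illustrative per-gate nominal delays (ps) and sigma/mu ratios along one path.
# These numbers are made up; the thesis derives them from width, load
# capacitance and transition time for each library gate.
mu = np.array([12.0, 18.5, 9.3, 22.1])            # nominal gate delays
sigma = mu * np.array([0.04, 0.05, 0.06, 0.045])  # per-gate delay sigma

# Analytical path statistics under independent Gaussian gate delays:
# means add, variances add.
path_mu = mu.sum()
path_sigma = np.sqrt((sigma ** 2).sum())
print(f"analytical: mu={path_mu:.2f} ps, sigma/mu={path_sigma / path_mu:.4f}")

# Monte Carlo check (stand-in for the HSPICE reference runs).
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(100_000, mu.size)).sum(axis=1)
print(f"monte carlo: mu={samples.mean():.2f} ps, "
      f"sigma/mu={samples.std() / samples.mean():.4f}")
```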
ContributorsGummalla, Samatha (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created2011
Description
In this thesis, an adaptive waveform selection technique for dynamic target tracking under low signal-to-noise ratio (SNR) conditions is investigated. The approach is integrated with a track-before-detect (TBD) algorithm and uses delay-Doppler matched filter (MF) outputs as raw measurements, without setting any threshold for extracting delay-Doppler estimates. The particle filter (PF) Bayesian sequential estimation approach is used with the TBD algorithm (PF-TBD) to estimate the dynamic target state. A waveform-agile TBD technique is proposed that integrates the PF-TBD with a waveform selection technique. The new approach predicts the waveform to transmit at the next time step by minimizing the predicted mean-squared error (MSE). As a result, the radar parameters are adaptively and optimally selected for superior performance. Building on previous work, this thesis highlights the applicability of the predicted covariance matrix to the lower-SNR waveform-agile tracking problem. The adaptive waveform selection algorithm's MSE performance was compared against fixed waveforms using Monte Carlo simulations. The adaptive approach performed at least as well as the best fixed waveform when estimating only position or only velocity; when the position and velocity estimates were weighted by different amounts, the adaptive performance exceeded that of all fixed waveforms. This improvement demonstrates the utility of the predicted covariance in waveform design at low SNR, conditions that are poorly handled by more traditional tracking algorithms.
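To make the selection step concrete, here is a minimal sketch of picking the next waveform by minimizing a predicted MSE computed from the particle set. The waveform library, its mapping to delay-Doppler measurement covariances, and the Kalman-style covariance update are illustrative stand-ins, not the thesis's radar model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Particle approximation of the predicted target state [position, velocity].
particles = rng.normal([0.0, 1.0], [5.0, 0.5], size=(2000, 2))
weights = np.full(len(particles), 1.0 / len(particles))

def predicted_mse(meas_cov, particles, weights):
    """Crude predicted posterior MSE: the particle-cloud covariance reduced
    by a Kalman-style update with an identity measurement matrix."""
    prior = np.cov(particles.T, aweights=weights)
    gain = prior @ np.linalg.inv(prior + meas_cov)
    post = (np.eye(2) - gain) @ prior
    w = np.diag([1.0, 1.0])   # weighting: position vs velocity error
    return np.trace(w @ post)

# Candidate waveforms: an assumed trade-off where short pulses measure delay
# (position) well and long pulses measure Doppler (velocity) well.
library = {
    "short_pulse": np.diag([0.2, 2.0]),
    "long_pulse": np.diag([2.0, 0.05]),
    "chirp": np.diag([0.5, 0.5]),
}
best = min(library, key=lambda k: predicted_mse(library[k], particles, weights))
print("transmit next:", best)
```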
ContributorsPiwowarski, Ryan (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Research on developing new algorithms to improve information on brain functionality and structure is ongoing. Studying neural activity through dipole source localization with electroencephalography (EEG) and magnetoencephalography (MEG) sensor measurements can lead to diagnosis and treatment of a brain disorder and can also identify the area of the brain where the disorder originated. Designing advanced localization algorithms that can adapt to environmental changes represents a significant shift from manual diagnosis, which is based on the knowledge and observation of the doctor, to an adaptive and improved brain disorder diagnosis, as these algorithms can track activities that might not be noticed by the human eye. An important consideration for these localization algorithms, however, is to minimize the overall power consumption in order to improve the study and treatment of brain disorders. This thesis considers the problem of estimating the dynamic parameters of neural dipole sources while minimizing the system's overall power consumption; this is achieved by minimizing the number of EEG/MEG measurement sensors without a loss in estimation accuracy. As the EEG/MEG measurement models are related non-linearly to the dipole source locations and moments, these dynamic parameters can be estimated using sequential Monte Carlo methods such as particle filtering. Because of the large number of sensors required to record EEG/MEG measurements for the particle filter over long recordings, a large amount of power is required for storage and transmission. In order to reduce the overall power consumption, two methods are proposed. The first method uses the predicted mean-squared estimation error as the performance metric under a maximum power consumption constraint. The second method's performance metric is the distance between the sensors and the dipole source location estimate at the previous time step; this sensor scheduling scheme maximizes the overall signal-to-noise ratio. The performance of both methods is demonstrated using simulated data, and both provide good estimation results with a significant reduction in the number of activated sensors at each time step.
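A minimal sketch of the second, distance-based scheduling rule: at each time step, only the k sensors nearest the previous dipole location estimate are activated. The sensor layout and the value of k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sensor_xyz = rng.uniform(-0.1, 0.1, size=(128, 3))  # 128 scalp sensors (m)

def select_sensors(prev_dipole_xyz, sensor_xyz, k=16):
    """Return indices of the k sensors nearest the previous estimate."""
    d = np.linalg.norm(sensor_xyz - prev_dipole_xyz, axis=1)
    return np.argsort(d)[:k]

prev_estimate = np.array([0.02, -0.01, 0.05])  # from the particle filter
active = select_sensors(prev_estimate, sensor_xyz)
print(f"activating {active.size} of {len(sensor_xyz)} sensors:", active)
```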
ContributorsMichael, Stefanos (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Kovvali, Narayan (Committee member) / Arizona State University (Publisher)
Created2012
Description
Object tracking is an important topic in multimedia, particularly in applications such as teleconferencing, surveillance and human-computer interfaces. Its goal is to determine the position of objects in images continuously and reliably. The key steps involved in object tracking are foreground detection to detect moving objects, clustering to enable representation of an object by its centroid, and tracking the centroids to determine the motion parameters.

In this thesis, a low-cost object tracking system is implemented on a hardware accelerator that is a warp-based processor for SIMD/vector style computations. First, different foreground detection techniques are explored to identify the technique that requires the fewest computations without compromising performance. It is found that the Gaussian mixture model proposed by Zivkovic gives the best performance with respect to both accuracy and number of computations. Pixel-level parallelization is applied to this algorithm, and it is mapped onto the hardware accelerator.
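Zivkovic's adaptive Gaussian mixture model is available off the shelf as OpenCV's MOG2 background subtractor; a minimal foreground detection loop might look as follows (the video path is a placeholder).

```python
import cv2

# Zivkovic's adaptive GMM is implemented in OpenCV as the MOG2 background
# subtractor. "input.mp4" is a placeholder path.
cap = cv2.VideoCapture("input.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # 255 = foreground, 0 = background
    # ... feed fg_mask pixel coordinates into the clustering stage ...
cap.release()
```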

Next, different clustering algorithms are studied, and it is found that while DBSCAN is highly accurate and robust to outliers, it is very computationally intensive. In contrast, K-means is computationally simple, but it requires that the number of means be specified beforehand. So, a new clustering algorithm is proposed that uses a combination of DBSCAN and K-means, along with a diagnostic algorithm on K-means to estimate the right number of centroids. The proposed hybrid algorithm is shown to be faster than DBSCAN by ~2.5x with minimal loss in accuracy. Also, a 1D Kalman filter is implemented assuming a constant acceleration model. Since the computations involved in the Kalman filter are just a set of recursive equations, the sequential model in itself exhibits good performance, thereby alleviating the need for parallelization. The tracking performance of the low-cost implementation is evaluated against the sequential version. It is found that the proposed hybrid algorithm performs very close to the reference algorithm based on DBSCAN.
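A minimal sketch of the hybrid idea, using scikit-learn's implementations: DBSCAN on a subsample estimates the number of clusters, which then seeds the cheaper K-means on the full data. The eps, min_samples and subsample values are illustrative, not the thesis's tuned parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def hybrid_centroids(points, eps=5.0, min_samples=10, subsample=2000, seed=0):
    """Estimate cluster count with DBSCAN on a subsample, then run K-means."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(subsample, len(points)),
                     replace=False)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[idx])
    k = len(set(labels)) - (1 if -1 in labels else 0)  # ignore noise label
    if k == 0:
        return np.empty((0, points.shape[1]))
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(points)
    return km.cluster_centers_

# Foreground pixel coordinates from two synthetic blobs as a stand-in.
rng = np.random.default_rng(3)
blob_a = rng.normal([50, 60], 3, size=(500, 2))
blob_b = rng.normal([200, 120], 3, size=(700, 2))
print(hybrid_centroids(np.vstack([blob_a, blob_b])))
```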
ContributorsSasikumar, Asha (Author) / Chakrabarti, Chaitali (Thesis advisor) / Ogras, Umit Y. (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2015
Description
The recent flurry of security breaches has raised serious concerns about the security of data communication and storage. A promising way to enhance system security is through a physical root of trust, such as the use of physical unclonable functions (PUFs). A PUF leverages the inherent randomness in physical systems to provide device-specific authentication and encryption.

In this thesis, first the design of a highly reliable resistive random access memory (RRAM) PUF is presented. Compared to existing 1 cell/bit RRAM PUFs, here the sum of the read-out currents of multiple RRAM cells is used to generate one response bit. This method statistically minimizes early-lifetime failures due to RRAM retention degradation at high temperature or under voltage stress. Using a device model calibrated with IMEC HfOx RRAM experimental data, it is shown that an 8 cells/bit architecture achieves 99.9999% reliability for a lifetime >10 years at 125°C. The hardware area overhead of the proposed 8 cells/bit RRAM PUF architecture is also smaller than that of a 1 cell/bit RRAM PUF, which requires error correction coding to achieve the same reliability.
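A minimal sketch of the multi-cell principle: each response bit comes from the summed read current of a cell group compared against a reference, so a single drifting cell rarely flips the bit. The current distributions and drift magnitudes below are illustrative, not calibrated to the IMEC HfOx data.

```python
import numpy as np

rng = np.random.default_rng(4)
CELLS_PER_BIT = 8
N_BITS = 64

# Random per-cell read currents (uA), e.g. from forming-induced variation.
cell_current = rng.normal(10.0, 2.0, size=(N_BITS, CELLS_PER_BIT))

group_sum = cell_current.sum(axis=1)
reference = np.median(group_sum)          # split groups into 0s and 1s
response = (group_sum > reference).astype(int)

# Retention drift perturbs each cell a little; the summed bit only flips
# if the whole group total crosses the reference.
drifted = cell_current + rng.normal(0.0, 0.3, size=cell_current.shape)
response_later = (drifted.sum(axis=1) > reference).astype(int)
print("bit errors after drift:", int(np.sum(response != response_later)))
```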

Next, a basic security primitive is presented, where the RRAM PUF is embedded in the cryptographic module SHA-256. This architecture is referred to as the Embedded PUF, or EPUF. The EPUF has a security advantage over SHA-256 alone, as it never exposes the PUF response to the outside world. Instead, in each round, the PUF response is used to change a few bits of the message word to produce a unique message digest for each IC. The use of the EPUF as a key generation module for AES is also shown. The hardware area requirements for SHA-256 and AES-128 are then analyzed using synthesis results based on the TSMC 65nm library. It is shown that the area overhead of the 8 cells/bit RRAM PUF is only 1.08% of the SHA-256 module and 0.04% of the AES-128 module. A security analysis of the PUF-based systems is also presented, showing that EPUF-based systems are resistant to standard attacks on PUFs and that the security of the cryptographic modules is not compromised.
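A toy illustration of the EPUF concept using Python's hashlib: the PUF response never leaves the device and instead flips a few message bits before hashing, so the same input yields a different digest on each IC. The bit-selection rule here is invented for illustration and is not the thesis's per-round substitution into SHA-256's message schedule.

```python
import hashlib

def epuf_digest(message: bytes, puf_response: bytes) -> bytes:
    """Mix the device-secret PUF response into the message, then hash.
    Here: flip one low-order bit per message byte (illustrative rule)."""
    repeated = puf_response * (len(message) // len(puf_response) + 1)
    mixed = bytes(m ^ (p & 0x01) for m, p in zip(message, repeated))
    return hashlib.sha256(mixed).digest()

msg = b"challenge-0001"
device_a = bytes.fromhex("a3f1c2d4e5b60718")   # illustrative PUF responses
device_b = bytes.fromhex("5e0d9b7c21f4a683")
print(epuf_digest(msg, device_a).hex())
print(epuf_digest(msg, device_b).hex())        # differs per device
```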
ContributorsShrivastava, Ayush (Author) / Chakrabarti, Chaitali (Thesis advisor) / Yu, Shimeng (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created2015
Description
This thesis presents the simulation of Bluetooth and Wi-Fi radios in real-life interference environments. When information is transmitted over communication channels, data may get corrupted due to noise and other channel discrepancies, so error correction coding schemes are generally employed in the design of communication systems. Simulations of wireless communication systems usually focus on one aspect of the system and neglect the others: currently available simulators perform either network-layer or physical-layer simulations. Many situations, however, require simulations that capture inter-layer aspects of communication systems. For such scenarios, a simulation environment, WiscaComm, which is based on time-domain samples, is built. WiscaComm allows the study of network and physical layer interactions in detail. The advantage of time-domain sampling is that it allows the simulation of different radios together, which is better than the complex baseband representation of symbols. The environment also supports the study of multiple protocols operating simultaneously, which is of increasing importance in today's environment.
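A minimal sketch of the time-domain-sampling idea: when every radio is represented as samples on a common clock, transmissions at different carriers can simply be summed to model mutual interference. The carriers, symbol rates and powers below are illustrative, not WiscaComm's actual models.

```python
import numpy as np

fs = 200e6                         # common simulation sample rate (Hz)
t = np.arange(0, 20e-6, 1 / fs)    # 20 us of shared time-domain samples

rng = np.random.default_rng(5)
bits_a = rng.integers(0, 2, size=20)
bits_b = rng.integers(0, 2, size=40)

def bpsk(bits, symbol_rate, carrier, t):
    """BPSK waveform as real time-domain samples on the common clock."""
    symbols = 2 * bits[(t * symbol_rate).astype(int) % len(bits)] - 1
    return symbols * np.cos(2 * np.pi * carrier * t)

# Carriers shifted down from 2.4 GHz for illustration so fs can carry them.
radio_a = bpsk(bits_a, 1e6, 20e6, t)   # Bluetooth-like narrow signal
radio_b = bpsk(bits_b, 2e6, 35e6, t)   # Wi-Fi-like wider signal
noise = 0.1 * rng.standard_normal(t.size)

channel = radio_a + 0.5 * radio_b + noise   # shared medium: just add samples
print("peak amplitude on the shared channel:", channel.max())
```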
ContributorsNolastname, Ujjwala (Author) / Bliss, Daniel W. (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / McGiffen, Thomas (Committee member) / Arizona State University (Publisher)
Created2017
Description
With the new age Internet of Things (IoT) revolution, there is a need to connect a wide range of devices with varying throughput and performance requirements. In this thesis, a wireless system is proposed that targets very low power, delay-insensitive IoT applications with low throughput requirements. The low-cost receivers for such devices will have very low complexity and consume very little power, and hence will run for several years.

Long Term Evolution (LTE) is a standard developed and administered by the 3rd Generation Partnership Project (3GPP) for high-speed wireless communications for mobile devices. As part of Release 13, another standard called narrowband IoT (NB-IoT) was introduced by 3GPP to serve the needs of IoT applications with low throughput requirements. Working along similar lines, this thesis proposes another LTE-based solution called very narrowband IoT (VNB-IoT), which further reduces the complexity and power consumption of the user equipment (UE) while maintaining the base station (BS) architecture as defined in NB-IoT.

In the downlink, the transmitter of the proposed system uses the NB-IoT resource block with each subcarrier modulated by data symbols intended for a different user. On the receiver side, each UE locks to a particular subcarrier frequency instead of the entire resource block and operates as a single-carrier receiver. On the uplink, the system uses a single-tone transmission as specified in the NB-IoT standard.
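A minimal sketch of the downlink idea, using NB-IoT-like numerology (12 subcarriers at 15 kHz spacing): the base station places one user's symbol on each subcarrier, and a VNB-IoT UE recovers only its own subcarrier by correlating against a single tone rather than computing an FFT over the whole resource block. The receiver model is illustrative.

```python
import numpy as np

n_sub, df = 12, 15e3                  # subcarriers, spacing (Hz)
fs = n_sub * df                       # critically sampled baseband
t = np.arange(n_sub) / fs             # one OFDM symbol (no cyclic prefix)

rng = np.random.default_rng(6)
# One QPSK symbol per subcarrier, one subcarrier per user.
user_syms = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_sub)

# Base station: sum of modulated subcarriers (the resource block).
tx = sum(s * np.exp(2j * np.pi * k * df * t) for k, s in enumerate(user_syms))

# UE assigned subcarrier k0: single-tone correlation instead of a full FFT.
k0 = 5
rx_sym = np.mean(tx * np.exp(-2j * np.pi * k0 * df * t))
print("sent:", user_syms[k0], "recovered:", np.round(rx_sym, 3))
```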

The performance of the proposed system is analyzed in an additive white Gaussian noise (AWGN) channel, followed by an analysis of the inter-carrier interference (ICI). Finally, a relationship between the overall filter bandwidth and ICI is established.
ContributorsSharma, Prashant (Author) / Bliss, Daniel (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / McGiffen, Thomas (Committee member) / Arizona State University (Publisher)
Created2017
Description
The goal of this work is to provide accurate measurement of the channel between a ground source and a receiving satellite. The effect of the ionosphere on ground-to-space propagation of radio waves in the 3-30 MHz HF band is a largely unstudied subject. The effect of the ionosphere on radio propagation in general is a long-studied subject, but the primary focus has been ground-to-ground communication by means of ionospheric reflection, and space-to-ground correction of ionospheric distortions of GPS. Because of the plasma properties of the ionosphere, there is a strong dependence on the frequency of use: the GPS L1 (1575.42 MHz) and L2 (1227.60 MHz) signals are much less affected than the 3-30 MHz HF band used for skywave propagation. The channel between the ground transmitter and the satellite receiver is characterized by two unique polarization modes, each with its own delay and Doppler. Accurate estimates of delay and Doppler are obtained using polynomial fit functions, and polarimetric separation of the two propagating polarizations improves the quality of the delay and Doppler estimates for each mode. These methods yield good channel models and an effective channel estimation method well suited to ground-to-space propagation.
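A minimal sketch of the polynomial-fit estimator: the measured group delay of one polarization mode is fit as a polynomial in time, and the Doppler shift follows from the time derivative of the fitted delay scaled by the carrier frequency. The delay track below is synthetic.

```python
import numpy as np

fc = 15e6                                  # HF carrier (Hz), illustrative
t = np.linspace(0, 10, 200)                # 10 s of channel measurements
rng = np.random.default_rng(7)
true_delay = 2.0e-3 - 4e-6 * t + 1e-7 * t**2           # seconds
meas_delay = true_delay + rng.normal(0, 2e-7, t.size)  # noisy estimates

coeffs = np.polyfit(t, meas_delay, deg=2)   # tau(t) as a polynomial fit
tau = np.poly1d(coeffs)
doppler = -fc * np.polyder(tau)(t)          # f_D(t) = -fc * d tau / dt

print("fitted delay at t=5 s:", tau(5.0))
print("Doppler at t=5 s (Hz):", doppler[t.searchsorted(5.0)])
```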
ContributorsStandage-Beier, Wylie S (Author) / Bliss, Daniel W (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / McGiffen, Thomas (Committee member) / Arizona State University (Publisher)
Created2017
Description
Parkinson’s disease (PD) is a neurological disorder with complicated and disabling motor and non-motor symptoms. Diagnosing PD is difficult and expensive; furthermore, it depends on patient diaries and the neurologist’s subjective assessment of clinical scales. Objective, accurate, and continuous patient monitoring has become possible with the advancement of mobile and portable equipment. Consequently, a significant amount of work has been done to explore new cost-effective and objective methods for assessing PD symptoms. For example, smart technologies, such as wearable sensors and optical motion capture systems, have been used to analyze a PD patient’s symptoms to assess disease progression, and even to detect signs in their nascent stage for early diagnosis of PD.

This review focuses on the use of modern equipment for PD applications developed in the last decade. Four significant fields of research were identified: Assistance in Diagnosis, Prognosis or Monitoring of Symptoms and their Severity, Predicting Response to Treatment, and Assistance in Therapy or Rehabilitation. This study reviews papers published between January 2008 and December 2018 in four databases: PubMed Central, Science Direct, IEEE Xplore and MDPI. After removing unrelated articles, articles published in languages other than English, duplicate entries, and other articles that did not fulfill the selection criteria, 778 papers were manually investigated and included in this review. A general overview of PD applications, the devices used, and the aspects monitored for PD management is provided in this systematic review.
ContributorsDeb, Ranadeep (Author) / Ogras, Umit Y. (Thesis advisor) / Shill, Holly (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created2019
Description
Deep neural networks (DNN) have shown tremendous success in various cognitive tasks, such as image classification, speech recognition, etc. However, their usage on resource-constrained edge devices has been limited due to high computation and large memory requirements.

To overcome these challenges, recent works have extensively investigated model compression techniques such as element-wise sparsity, structured sparsity and quantization. While most of these works apply the compression techniques in isolation, there have been very few studies on applying quantization and structured sparsity together to a DNN model.

This thesis co-optimizes structured sparsity and quantization constraints on DNN models during training. Specifically, it obtains an optimal setting of 2-bit weights and 2-bit activations coupled with 4X structured compression by performing a combined exploration of quantization and structured compression settings. The optimal DNN model achieves a 50X weight memory reduction compared to a floating-point uncompressed DNN. This memory saving is significant, since applying only structured sparsity constraints achieves 2X memory savings and applying only quantization constraints achieves 16X memory savings. The algorithm has been validated on both high- and low-capacity DNNs and on wide-sparse and deep-sparse DNN models. Experiments demonstrated that deep-sparse DNNs outperform shallow-dense DNNs, with varying levels of memory savings depending on DNN precision and sparsity levels. This work further proposes a Pareto-optimal approach to systematically extract optimal DNN models from a huge set of sparse and dense DNN models. The resulting 11 optimal designs were further evaluated by considering overall DNN memory, which includes both activation memory and weight memory. It was found that there is only a small change in the memory footprint of the optimal designs corresponding to the low-sparsity DNNs; however, activation memory cannot be ignored for high-sparsity DNNs.
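A minimal sketch of the combined constraints and the memory arithmetic: going from 32-bit floats to 2-bit weights gives 16X, pruning three of every four output channels gives a further 4X structured compression, or about 64X before bookkeeping overheads (the thesis reports 50X overall). The layer shape, pruning rule and quantizer below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
w = rng.normal(0, 0.05, size=(64, 128))        # out_channels x in_features

# Structured sparsity: keep the 25% of output channels with largest L2 norm.
norms = np.linalg.norm(w, axis=1)
keep = norms >= np.quantile(norms, 0.75)
w_pruned = w[keep]                             # 4X fewer rows to store

# 2-bit uniform quantization: map each remaining weight to one of 4 levels.
levels = np.array([-1.5, -0.5, 0.5, 1.5]) * w_pruned.std()
codes = np.abs(w_pruned[..., None] - levels).argmin(axis=-1)  # 2-bit indices
w_q = levels[codes]                            # dequantized view of weights

dense_bits = w.size * 32
compressed_bits = codes.size * 2               # ignores scale/index overhead
print(f"weight memory reduction: {dense_bits / compressed_bits:.0f}X")
```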
ContributorsSrivastava, Gaurav (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created2018