Matching Items (7)

Description
As the number of devices with wireless capabilities grows and these devices operate in ever closer proximity, better ways to handle the interference they cause need to be explored. It is equally important that these devices keep up with the demand for data rates without compromising industry-established expectations of power consumption and mobility. Current methods of distributing the spectrum among all participants are not expected to cope with demand in the very near future. This thesis explores the effect of employing sophisticated multiple-input, multiple-output (MIMO) systems in this regard. Systems that can make intelligent decisions about transmission-mode usage and the power allocated to those modes become relevant in a scenario where the need for performance far exceeds the cost that can be expended on hardware. The effect of adding multiple antennas at either end will be examined, and the capacity of such systems, and of networks composed of many such participants, will be evaluated. Methods of simulating these networks, and ways to achieve better performance by making intelligent transmission decisions, will be proposed. Finally, a form of access control closer to the physical layer (a 'statistical MAC') and a possible metric to be used for such a MAC are suggested.
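
The capacity evaluations described above rest on the standard MIMO channel-capacity expression. The sketch below is a generic illustration (not the simulation framework from the thesis), assuming a flat Rayleigh-fading channel H and equal power allocation across the transmit antennas.

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Capacity (bits/s/Hz) of a MIMO link with channel matrix H (Nr x Nt),
    assuming equal power allocation across the Nt transmit antennas."""
    nr, nt = H.shape
    return np.real(np.log2(np.linalg.det(
        np.eye(nr) + (snr_linear / nt) * H @ H.conj().T)))

# Ergodic capacity of a 4x4 Rayleigh-fading link at 10 dB SNR (illustrative)
rng = np.random.default_rng(0)
snr = 10 ** (10 / 10)
caps = [mimo_capacity((rng.standard_normal((4, 4))
                       + 1j * rng.standard_normal((4, 4))) / np.sqrt(2), snr)
        for _ in range(1000)]
print(f"Ergodic capacity ~ {np.mean(caps):.2f} bits/s/Hz")
```
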
Contributors: Thontadarya, Niranjan (Author) / Bliss, Daniel W (Thesis advisor) / Berisha, Visar (Committee member) / Ying, Lei (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Deep neural networks (DNNs) have shown tremendous success in various cognitive tasks, such as image classification and speech recognition. However, their use on resource-constrained edge devices has been limited by high computation and large memory requirements.

To overcome these challenges, recent works have extensively investigated model compression techniques such as element-wise sparsity, structured sparsity, and quantization. While most of these works apply the compression techniques in isolation, there have been very few studies on applying quantization and structured sparsity together to a DNN model.

This thesis co-optimizes structured sparsity and quantization constraints on DNN models during training. Specifically, it obtains an optimal setting of 2-bit weights and 2-bit activations coupled with 4X structured compression by jointly exploring quantization and structured-compression settings. The optimal DNN model achieves a 50X reduction in weight memory compared to an uncompressed floating-point DNN. This saving is significant, since applying only structured-sparsity constraints yields 2X memory savings and applying only quantization constraints yields 16X. The algorithm was validated on both high- and low-capacity DNNs and on wide-sparse and deep-sparse DNN models. Experiments showed that the deep-sparse DNN outperforms the shallow-dense DNN, with the level of memory savings varying with DNN precision and sparsity. This work further proposes a Pareto-optimal approach to systematically extract optimal DNN models from a large set of sparse and dense DNN models. The resulting 11 optimal designs were further evaluated in terms of overall DNN memory, which includes activation memory as well as weight memory. For low-sparsity DNNs there is only a small change in the memory footprint of the optimal designs; for high-sparsity DNNs, however, activation memory cannot be ignored.
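
As a rough illustration of how the two compression mechanisms compose, the sketch below applies uniform 2-bit weight quantization and 4X channel-wise structured pruning to a random convolutional weight tensor. The layer shape, pruning criterion, and quantizer are illustrative assumptions; the thesis imposes these constraints during training rather than post hoc.

```python
import numpy as np

def quantize_2bit(w, n_bits=2):
    """Uniform symmetric quantization to 2**n_bits levels (returns dequantized values)."""
    qmax = 2 ** (n_bits - 1) - 1                 # 2-bit codes lie in {-2, -1, 0, 1}
    scale = np.max(np.abs(w)) / (qmax + 1) + 1e-12
    codes = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return codes * scale

def structured_prune(w, keep_ratio=0.25):
    """Keep the top fraction of output channels by L2 norm (4X structured sparsity)."""
    norms = np.linalg.norm(w.reshape(w.shape[0], -1), axis=1)
    n_keep = max(1, int(keep_ratio * w.shape[0]))
    keep = np.argsort(norms)[-n_keep:]
    mask = np.zeros_like(w)
    mask[keep] = 1.0
    return w * mask

w = np.random.randn(64, 32, 3, 3).astype(np.float32)   # conv layer: out x in x k x k
w_compressed = quantize_2bit(structured_prune(w))

# naive weight-memory arithmetic: only surviving channels stored at 2 bits/weight
dense_bits = w.size * 32
compressed_bits = int(0.25 * w.size) * 2
print(f"weight-memory reduction ~ {dense_bits / compressed_bits:.0f}X (ignoring index overhead)")
```
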
Contributors: Srivastava, Gaurav (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Many neurological disorders, especially those that result in dementia, impact speech and language production. A number of studies have shown that subtle changes in linguistic complexity exist in these individuals and precede disease onset. However, those studies are conducted on controlled speech samples from specific tasks. This thesis explores the possibility of using natural language processing to detect declining linguistic complexity from more natural discourse. We use existing data from public figures suspected (or at risk) of suffering from cognitive-linguistic decline, downloaded from the Internet, to detect changes in linguistic complexity. In particular, we focus on two case studies. The first case study analyzes President Ronald Reagan's transcribed spontaneous speech samples during his presidency. President Reagan was diagnosed with Alzheimer's disease in 1994; however, our results show declining linguistic complexity over the eight years he was in office. President George Herbert Walker Bush, who has no known diagnosis of Alzheimer's disease, shows no decline in the same measures. In the second case study, we analyze transcribed spontaneous speech samples from the news conferences of 10 current NFL players and 18 non-player personnel since 2007. The non-player personnel have never played professional football. Longitudinal analysis of linguistic complexity showed contrasting patterns in the two groups. The majority (6 of 10) of current players showed a decline in at least one measure of linguistic complexity over time. In contrast, the majority (11 of 18) of non-player personnel showed an increase in at least one linguistic complexity measure.
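
The specific complexity measures are not named in the abstract; the sketch below computes two generic proxies (type-token ratio and mean sentence length) over transcripts ordered in time, purely to illustrate this kind of longitudinal analysis.

```python
import re

def complexity_measures(transcript: str):
    """Two simple linguistic-complexity proxies for a speech transcript."""
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    type_token_ratio = len(set(words)) / max(1, len(words))
    mean_sentence_len = len(words) / max(1, len(sentences))
    return {"ttr": type_token_ratio, "mean_sentence_length": mean_sentence_len}

# track the measures over time, e.g. one transcript per press conference (toy data)
timeline = {"1982-03": "We will meet this challenge together. We will not fail.",
            "1987-11": "Well, the thing is, it was a good meeting."}
for date, text in sorted(timeline.items()):
    print(date, complexity_measures(text))
```
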
Contributors: Wang, Shuai (Author) / Berisha, Visar (Thesis advisor) / LaCross, Amy (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Deep neural networks (DNNs) have had tremendous success in a variety of statistical learning applications due to their vast expressive power. Most applications run DNNs on the cloud on parallelized architectures. There is a need for efficient DNN inference on edge devices with low-precision hardware and analog accelerators. To make trained models more robust in this setting, quantization and analog compute noise are modeled as weight-space perturbations to DNNs, and an information-theoretic regularization scheme is used to penalize the KL-divergence between the perturbed and unperturbed models. This regularizer has similarities to both natural gradient descent and knowledge distillation, but has the advantage of explicitly promoting the network toward a broader minimum that is robust to weight-space perturbations. In addition to the proposed regularization, KL-divergence is directly minimized using knowledge distillation. Initial validation on FashionMNIST and CIFAR10 shows that the information-theoretic regularizer and knowledge distillation outperform existing quantization schemes based on the straight-through estimator or L2-constrained quantization.
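
A minimal sketch of the regularization idea, assuming PyTorch: Gaussian weight-space noise stands in for quantization and analog-compute noise, and the KL divergence between the perturbed and unperturbed output distributions is added to the task loss. The noise model, its scale, and the loss weighting are illustrative assumptions rather than the exact scheme from the thesis.

```python
import copy
import torch
import torch.nn.functional as F

def kl_perturbation_penalty(model, x, noise_std=0.01):
    """KL(p_perturbed || p_clean) between softmax outputs of the model
    and a copy whose weights receive Gaussian weight-space noise."""
    perturbed = copy.deepcopy(model)
    with torch.no_grad():
        for p in perturbed.parameters():
            p.add_(noise_std * torch.randn_like(p))
        p_noisy = F.softmax(perturbed(x), dim=-1)
    log_p_clean = F.log_softmax(model(x), dim=-1)
    # F.kl_div(input=log q, target=p) computes KL(p || q), so this is
    # KL(p_noisy || p_clean); gradients flow only through the clean model.
    return F.kl_div(log_p_clean, p_noisy, reduction="batchmean")

# usage inside a training step (lambda_kl is an assumed hyperparameter):
# loss = F.cross_entropy(model(x), y) + lambda_kl * kl_perturbation_penalty(model, x)
```
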
Contributors: Kadambi, Pradyumna (Author) / Berisha, Visar (Thesis advisor) / Dasarathy, Gautam (Committee member) / Seo, Jae-Sun (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Infectious diseases spread at a rapid rate due to the increasing mobility of the human population. It is important to have a variety of containment and assessment strategies to prevent and limit their spread. In the ongoing COVID-19 pandemic, telehealth services, including daily health surveys, are used to study the prevalence and severity of the disease. Daily health surveys can also help to study the progression and fluctuation of symptoms, since recalling, tracking, and explaining symptoms to doctors can often be challenging for patients. Data aggregates collected from the daily health surveys can be used to identify the surge of a disease in a community. This thesis enhances a well-known boosting algorithm, XGBoost, to predict COVID-19 from the anonymized self-reported survey responses provided by the Carnegie Mellon University (CMU) Delphi research group in collaboration with Facebook. Despite the tremendous COVID-19 surge in the United States, this survey dataset is highly imbalanced, with 84% negative COVID-19 cases and 16% positive cases. Learning from an imbalanced dataset is challenging, especially when the dataset may also be noisy, as is common in self-reported surveys. This thesis addresses these challenges by enhancing XGBoost with a tunable loss function, α-loss, that interpolates between the exponential loss (α = 1/2), the log-loss (α = 1), and the 0-1 loss (α = ∞). Results show that tuning XGBoost with α-loss can enhance performance over the standard XGBoost with log-loss (α = 1).
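
The sketch below shows how a tunable loss of this form can be supplied to XGBoost as a custom objective, assuming binary labels, a sigmoid link, and the α-loss written as l_α(y, p) = (α/(α−1))·(1 − p_y^((α−1)/α)), where p_y is the probability assigned to the true label (log-loss is recovered as α → 1). The gradient/Hessian expressions and hyperparameters are illustrative and may differ from the thesis's exact formulation.

```python
import numpy as np
import xgboost as xgb

def alpha_loss_objective(alpha=0.8):
    """Custom XGBoost objective for the alpha-loss on binary labels,
    with p = sigmoid(raw margin) and p_y the probability of the true label."""
    def objective(preds, dtrain):
        y = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))          # sigmoid of the raw margin
        s = 2.0 * y - 1.0                         # label in {-1, +1}
        p_y = np.clip(y * p + (1 - y) * (1 - p), 1e-12, 1.0)
        q = p * (1 - p)
        # d/dz of (alpha/(alpha-1)) * (1 - p_y**((alpha-1)/alpha)); reduces to p - y at alpha = 1
        grad = -s * q * p_y ** (-1.0 / alpha)
        hess = (-s * q * (1 - 2 * p) * p_y ** (-1.0 / alpha)
                + (q ** 2 / alpha) * p_y ** (-1.0 / alpha - 1.0))
        return grad, np.maximum(hess, 1e-12)      # keep the Hessian positive for stability
    return objective

# usage sketch (X, y are a feature matrix and binary labels):
# dtrain = xgb.DMatrix(X, label=y)
# booster = xgb.train({"max_depth": 6, "eta": 0.1}, dtrain,
#                     num_boost_round=200, obj=alpha_loss_objective(alpha=0.8))
```
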
Contributors: Vikash Babu, Gokulan (Author) / Sankar, Lalitha (Thesis advisor) / Berisha, Visar (Committee member) / Zhao, Ming (Committee member) / Trieu, Ni (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Linear-regression estimators have become widely accepted as a reliable statistical tool for predicting outcomes. Because linear regression is a long-established procedure, the properties of linear-regression estimators are well understood, and such models can be trained very quickly. Many estimators exist for modeling linear relationships, each having ideal conditions for optimal performance. The differences stem from the bias introduced into the parameter estimation by various regularization strategies. One of the most popular is ridge regression, which uses ℓ2-penalization of the parameter vector. In this work, the proposed graph-regularized linear estimator is pitted against ridge regression when the parameter vector is known to be dense. When additional knowledge that the parameters are smooth with respect to a graph is available, it can be used to improve the parameter estimates. To achieve this, an additional smoothing penalty is introduced into the traditional ridge-regression loss function. Mean squared error (MSE) is used as the performance metric, and the analysis is presented for fixed design matrices with a unit covariance matrix. This specific problem setup makes it possible to study the theoretical conditions under which the graph-regularized estimator outperforms the ridge estimator. The eigenvectors of the Laplacian matrix, which encodes the graph of connections between the dimensions of the parameter vector, form an integral part of the analysis. Experiments were conducted on simulated data to compare the performance of the two estimators for Laplacian matrices of several types of graphs: complete, star, line, and 4-regular. The experimental results indicate that the theory can possibly be extended to more general settings that take smoothness, a concept defined in this work, into consideration.
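
A minimal sketch of the estimator under comparison: ridge regression augmented with a Laplacian smoothness penalty, solved in closed form as β̂ = (XᵀX + λ₁I + λ₂L)⁻¹Xᵀy. The line-graph Laplacian, penalty weights, and synthetic data below are illustrative assumptions.

```python
import numpy as np

def line_graph_laplacian(d):
    """Laplacian of a line (path) graph over the d coordinates of the parameter vector."""
    A = np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def graph_regularized_ridge(X, y, lam_ridge=1.0, lam_graph=1.0, L=None):
    """Solve min_b ||y - Xb||^2 + lam_ridge * ||b||^2 + lam_graph * b^T L b."""
    d = X.shape[1]
    L = line_graph_laplacian(d) if L is None else L
    return np.linalg.solve(X.T @ X + lam_ridge * np.eye(d) + lam_graph * L, X.T @ y)

# comparison on a synthetic dense parameter vector that is smooth along the line graph
rng = np.random.default_rng(0)
n, d = 200, 50
beta_true = np.cumsum(rng.normal(scale=0.1, size=d))
X = rng.normal(size=(n, d))
y = X @ beta_true + rng.normal(scale=1.0, size=n)
beta_ridge = graph_regularized_ridge(X, y, lam_ridge=1.0, lam_graph=0.0)
beta_graph = graph_regularized_ridge(X, y, lam_ridge=1.0, lam_graph=5.0)
print("ridge MSE:            ", np.mean((beta_ridge - beta_true) ** 2))
print("graph-regularized MSE:", np.mean((beta_graph - beta_true) ** 2))
```
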
Contributors: Sajja, Akarshan (Author) / Dasarathy, Gautam (Thesis advisor) / Berisha, Visar (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Contact tracing has been shown to be effective in limiting the rate of spread of infectious diseases such as COVID-19. Several solutions based on the exchange of random, anonymous tokens between users' mobile devices via Bluetooth, or on users' location traces, have been proposed and deployed. These solutions require the user's device to download the tokens (or traces) of infected users from the server; the user's tokens are then matched against infected users' tokens to determine an exposure event. These solutions are vulnerable to a range of security and privacy issues and require large downloads, warranting the need for an efficient protocol with strong privacy guarantees. Moreover, they are based solely on proximity between user devices, while COVID-19 can also spread via common surfaces. Knowledge of areas with a large number of visits by infected users (hotspots) can help users avoid those areas and thereby reduce surface transmission. This thesis proposes a strongly secure system for contact tracing and hotspot-histogram computation. The contact-tracing protocol uses a combination of Bluetooth Low Energy and Global Positioning System (GPS) location data. A novel and deployment-friendly Delegated Private Set Intersection Cardinality protocol is proposed for efficient and secure server-aided matching of tokens. Secure aggregation techniques allow the server to learn areas of high risk from the location traces of diagnosed users without revealing any individual user's location history.
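
For intuition, the sketch below implements the classic two-party Diffie-Hellman-style private-set-intersection cardinality exchange on contact tokens; it is not the delegated, server-aided construction proposed in the thesis, and the toy modulus is far too small to be secure.

```python
import hashlib
import secrets

# Toy DDH-style PSI cardinality: each side blinds its hashed tokens with a secret
# exponent, the other side re-exponentiates, and only matches (the intersection size)
# are revealed. Real deployments would use an elliptic-curve group plus the
# delegation and secure-aggregation machinery described above.
P = 2 ** 127 - 1  # toy prime modulus (demonstration only, not secure)

def h(token: str) -> int:
    """Hash a contact token into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(token.encode()).digest(), "big") % P or 1

def blind(tokens, secret):
    return {pow(h(t), secret, P) for t in tokens}

def exponentiate(blinded, secret):
    return {pow(v, secret, P) for v in blinded}

# user device and health authority each hold rolling contact tokens
user_tokens = {"tok-a1", "tok-b2", "tok-c3"}
infected_tokens = {"tok-b2", "tok-z9"}

a, b = secrets.randbelow(P - 2) + 1, secrets.randbelow(P - 2) + 1
double_user = exponentiate(blind(user_tokens, a), b)          # H(x)^(a*b)
double_infected = exponentiate(blind(infected_tokens, b), a)  # H(y)^(b*a)

print("exposure events:", len(double_user & double_infected))  # -> 1
```
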
Contributors: Surana, Chetan (Author) / Trieu, Ni (Thesis advisor) / Sankar, Lalitha (Committee member) / Berisha, Visar (Committee member) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2021