Matching Items (107)

Description
Vision processing on traditional architectures is inefficient due to energy-expensive off-chip data movements. Many researchers advocate pushing processing close to the sensor to substantially reduce data movements. However, continuous near-sensor processing raises the sensor temperature, impairing the fidelity of imaging/vision tasks.

The work characterizes the thermal implications of using 3D stacked image sensors with near-sensor vision processing units. The characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature needs to stay below a threshold, situationally determined by application needs. Fortunately, the characterization also identifies opportunities -- unique to the needs of near-sensor processing -- to regulate temperature based on dynamic visual task requirements and rapidly increase capture quality on demand.

Based on the characterization, the work proposes and investigates two imaging-aware thermal management strategies -- stop-capture-go and seasonal migration. It presents the parameters that govern the policy decisions and explores the trade-offs between system power and policy overhead. The evaluation shows that these dynamic thermal management strategies can unlock the energy-efficiency potential of near-sensor processing with minimal performance impact and without compromising image fidelity.
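As a rough illustration of how a stop-capture-go style policy could be structured -- a minimal sketch only, where the sensor API, threshold, and cool-down interval are hypothetical placeholders rather than the thesis's implementation:

```python
import random
import time

TEMP_THRESHOLD_C = 55.0   # hypothetical fidelity threshold; application-dependent
COOL_DOWN_S = 0.5         # hypothetical pause that lets the sensor shed heat

def read_sensor_temp():
    # Stand-in for a platform-specific thermal sensor read.
    return 50.0 + random.uniform(-5.0, 10.0)

def capture_and_process():
    # Stand-in for image capture plus near-sensor vision processing.
    pass

def stop_capture_go(num_frames=100):
    done = 0
    while done < num_frames:
        if read_sensor_temp() > TEMP_THRESHOLD_C:
            time.sleep(COOL_DOWN_S)    # "stop": suspend work until the sensor cools
            continue
        capture_and_process()          # "capture, go": resume near-sensor work
        done += 1
```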
Contributors: Kodukula, Venkatesh (Author) / LiKamWa, Robert (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Brunhaver, John (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Hardware implementation of deep neural networks has gained significant importance in recent years. Deep neural networks are mathematical models that use learning algorithms inspired by the brain. Numerous deep learning algorithms, such as multi-layer perceptrons (MLP), have demonstrated human-level recognition accuracy in image and speech classification tasks. These networks are built from multiple layers of processing elements, called neurons, with numerous connections between them, called synapses. They therefore involve operations that exhibit a high level of parallelism, making them computationally and memory intensive. Constrained by computing resources and memory, most applications require neural networks that use less energy. Energy-efficient implementation of these computationally intense algorithms on neuromorphic hardware demands extensive architectural optimization. One such optimization is reducing the network size using compression, and several studies have investigated compression by introducing element-wise or row-/column-/block-wise sparsity via pruning and regularization. Additionally, numerous recent works have concentrated on reducing the precision of activations and weights, with some reducing them to a single bit. However, combining various sparsity structures with binarized or very-low-precision (2-3 bit) neural networks has not been comprehensively explored. Output activations in these deep neural network algorithms are typically non-binary, making it difficult to exploit sparsity. On the other hand, biologically realistic models like spiking neural networks (SNN) closely mimic the operations in biological nervous systems and explore new avenues for brain-like cognitive computing. These networks deal with binary spikes, and they can exploit input-dependent sparsity or redundancy to dynamically scale the amount of computation, in turn leading to energy-efficient hardware implementations.

This work discusses a configurable spiking neuromorphic architecture that supports multiple hidden layers exploiting hardware reuse. It also presents design techniques for minimum-area/-energy DNN hardware with minimal degradation in accuracy. Area, performance, and energy results of the DNN and SNN hardware are reported for the MNIST dataset. The neuromorphic hardware designed for the SNN algorithm in 28nm CMOS demonstrates high classification accuracy (>98% on MNIST) and low energy (51.4-773 nJ per classification). The optimized DNN hardware designed in 40nm CMOS, which combines 8X structured compression and 3-bit weight precision, achieves 98.4% accuracy at 33 nJ per classification.
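As a minimal sketch of the very-low-precision weight quantization discussed above (uniform symmetric 3-bit quantization of a trained weight matrix; the scaling scheme is illustrative, not necessarily the exact method used in the thesis):

```python
import numpy as np

def quantize_weights(w, bits=3):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1            # 3 bits -> integer levels in [-3, 3]
    scale = np.max(np.abs(w)) / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                         # dequantized weights for inference

# Example: quantize a random fully connected layer and check the error.
w = np.random.randn(256, 784).astype(np.float32)
w_q = quantize_weights(w, bits=3)
print("mean squared quantization error:", np.mean((w - w_q) ** 2))
```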
Contributors: Kolala Venkataramanaiah, Shreyas (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The Internet of Things ecosystem has spawned a wide variety of embedded real-time systems that complicate the identification and resolution of bugs in software. Concurrent checkpointing provides a means to monitor the application state, with the ability to replay the execution on like hardware and software, without holding off and delaying the execution of application threads. In this thesis, this is accomplished by monitoring the application's physical memory using a soft-dirty page tracker and measuring the various types of overhead incurred by concurrent checkpointing. The solution presented is an advancement of Checkpoint/Restore In Userspace (CRIU) that eliminates the large stalls and parasitic operation for each successive checkpoint. Impact and performance are measured using the PARSEC 3.0 benchmark suite and a 4.11.12-rt16+ Linux kernel on a MinnowBoard Turbot quad-core board.
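For context, a small sketch of how the underlying Linux soft-dirty page interface can be driven from user space; this illustrates the kernel mechanism only, not the thesis's CRIU-based checkpointer, and assumes a kernel built with CONFIG_MEM_SOFT_DIRTY plus sufficient privileges:

```python
import struct

PAGE_SIZE = 4096
SOFT_DIRTY_BIT = 1 << 55   # bit 55 of a /proc/<pid>/pagemap entry marks a soft-dirty page

def clear_soft_dirty(pid):
    # Writing "4" resets the soft-dirty bits for all pages of the process.
    with open(f"/proc/{pid}/clear_refs", "w") as f:
        f.write("4")

def soft_dirty_pages(pid, start_vaddr, num_pages):
    """Return page-aligned addresses written since the last clear_soft_dirty()."""
    dirty = []
    with open(f"/proc/{pid}/pagemap", "rb") as f:
        f.seek((start_vaddr // PAGE_SIZE) * 8)      # one 64-bit entry per page
        for i in range(num_pages):
            entry, = struct.unpack("<Q", f.read(8))
            if entry & SOFT_DIRTY_BIT:
                dirty.append(start_vaddr + i * PAGE_SIZE)
    return dirty
```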
Contributors: Prinke, Michael L (Author) / Lee, Yann-Hang (Thesis advisor) / Shrivastava, Aviral (Committee member) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Due to the large data resources generated by online educational applications, Educational Data Mining (EDM) has improved learning outcomes in various ways: student visualization, recommendations for students, student modeling, grouping of students, etc. Many programming assignment systems provide features such as automated submission and test-case checking to verify correctness, but few studies have compared different statistical techniques against the latest frameworks and interpreted the resulting models in a unified approach.

In this thesis, several data mining algorithms have been applied to analyze students' code assignment submission data from a real classroom study. The goal of this work is to explore and predict students' performance. Multiple machine learning models were evaluated for accuracy, and the models were interpreted using Shapley Additive Explanations (SHAP).

Cross-validation shows that the gradient boosting decision tree achieves the best precision, 85.93%, with an average of 82.90%. Features such as component grade, due date, and submission times have a higher impact than the others. The baseline model achieved lower precision due to its lack of non-linear fitting.
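A hedged sketch of the kind of pipeline described above -- a gradient boosting classifier scored with cross-validation and explained with SHAP; the file name, feature columns, and label are hypothetical placeholders, not the thesis's actual schema:

```python
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical export of the submission logs; column names are illustrative.
df = pd.read_csv("submissions.csv")
X = df[["component_grade", "days_before_due", "submission_times"]]
y = df["passed"]   # hypothetical binary performance label

model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, cv=5, scoring="precision")
print("cross-validated precision:", scores.mean())

# Explain feature impact with Shapley values.
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap.summary_plot(explainer.shap_values(X), X)
```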
Contributors: Tian, Wenbo (Author) / Hsiao, Ihan (Thesis advisor) / Bazzi, Rida (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The reduced availability of ³He is a motivation for developing alternative neutron detectors. ⁶Li-enriched CLYC (Cs₂LiYCl₆), a scintillator, is a promising candidate to replace ³He. The neutron and gamma ray signals from CLYC have different shapes due to the slower decay of neutron pulses. Well-known pulse shape discrimination techniques include the charge comparison, pulse gradient, and frequency gradient methods. In the work presented here, we have applied a normalized cross correlation (NCC) approach to real neutron and gamma ray pulses produced by exposing CLYC scintillators to a mixed radiation environment generated by ¹³⁷Cs, ²²Na, ⁵⁷Co, and ²⁵²Cf/AmBe at different event rates. The cross correlation analysis produces distinctive results for measured neutron and gamma ray pulses when they are cross correlated with reference neutron and/or gamma templates. NCC produces good separation between neutron and gamma rays at low (< 100 kHz) to mid (< 200 kHz) event rates. However, the separation disappears at high event rates (> 200 kHz) because of pileup, noise, and baseline shift. This is also confirmed by the pulse shape discrimination (PSD) plots and the figure of merit (FOM) of NCC. The FOM is close to 3, which is good, at low event rates, but it rolls off significantly as the event rate increases and reaches 1 at high event rates. Future efforts are required to reduce the noise with a better hardware system, remove pileup, and detect the NCC shapes of neutron and gamma rays using advanced techniques.
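A minimal sketch of the NCC-based discrimination described above: each digitized pulse is scored against a neutron template and a gamma template and classified by the larger score, and the FOM helper uses the usual Gaussian FWHM approximation. Waveforms and templates are placeholders, not the measured CLYC data.

```python
import numpy as np

def ncc(pulse, template):
    # Normalized cross correlation of a baseline-removed pulse with a template.
    p = pulse - pulse.mean()
    t = template - template.mean()
    return np.dot(p, t) / (np.linalg.norm(p) * np.linalg.norm(t))

def classify(pulse, neutron_tmpl, gamma_tmpl):
    return "neutron" if ncc(pulse, neutron_tmpl) > ncc(pulse, gamma_tmpl) else "gamma"

def figure_of_merit(neutron_scores, gamma_scores):
    # FOM = peak separation divided by the sum of the FWHMs (FWHM ~ 2.355 * sigma).
    fwhm = lambda s: 2.355 * np.std(s)
    return abs(np.mean(neutron_scores) - np.mean(gamma_scores)) / (
        fwhm(neutron_scores) + fwhm(gamma_scores))
```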
Contributors: Chandhran, Premkumar (Author) / Holbert, Keith E. (Thesis advisor) / Spanias, Andreas (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Many neurological disorders, especially those that result in dementia, impact speech and language production. A number of studies have shown that there exist subtle changes in linguistic complexity in these individuals that precede disease onset. However, these studies are conducted on controlled speech samples from a specific task. This thesis explores the possibility of using natural language processing to detect declining linguistic complexity from more natural discourse. We use existing data from public figures suspected of (or at risk for) cognitive-linguistic decline, downloaded from the Internet, to detect changes in linguistic complexity. In particular, we focus on two case studies. The first case study analyzes President Ronald Reagan's transcribed spontaneous speech samples during his presidency. President Reagan was diagnosed with Alzheimer's disease in 1994; however, our results show declining linguistic complexity over the span of the eight years he was in office. President George Herbert Walker Bush, who has no known diagnosis of Alzheimer's disease, shows no decline in the same measures. In the second case study, we analyze transcribed spontaneous speech samples from the news conferences of 10 current NFL players and 18 non-player personnel since 2007. The non-player personnel have never played professional football. Longitudinal analysis of linguistic complexity showed contrasting patterns in the two groups. The majority (6 of 10) of current players showed a decline in at least one measure of linguistic complexity over time. In contrast, the majority (11 of 18) of non-player personnel showed an increase in at least one linguistic complexity measure.
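As an illustrative sketch of tracking simple linguistic complexity proxies over time from transcripts (type-token ratio and mean sentence length here are generic examples, not necessarily the measures used in the thesis; the transcript data is made up):

```python
import re

def complexity_measures(transcript):
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    return {
        "type_token_ratio": len(set(words)) / len(words),
        "mean_sentence_length": len(words) / len(sentences),
    }

# Hypothetical longitudinal data: one transcript per year.
transcripts_by_year = {
    1981: "We will rebuild the economy. The plan is clear and it is bold.",
    1987: "Well, the economy, it is doing fine. Fine, I think.",
}
trend = {year: complexity_measures(text)
         for year, text in sorted(transcripts_by_year.items())}
print(trend)
```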
Contributors: Wang, Shuai (Author) / Berisha, Visar (Thesis advisor) / LaCross, Amy (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Digital systems are essential to the technological advancements in space exploration. Microprocessors and flash memory are core components of such digital systems. Space exploration requires a special class of radiation-hardened microprocessors and flash memories that are not functionally disrupted in the presence of radiation. The reference design 'HERMES' is a radiation-hardened microprocessor with performance comparable to commercially available designs. The reference design 'eFlash' is a prototype of soft-error-hardened flash memory for configuring Xilinx FPGAs. These designs are manufactured using a foundry bulk CMOS 90-nm low-standby-power (LP) process. This thesis presents the post-silicon validation results of these designs.
Contributors: Gogulamudi, Anudeep Reddy (Author) / Clark, Lawrence T (Thesis advisor) / Holbert, Keith E. (Committee member) / Brunhaver, John (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
With the software-defined networking trend growing, several network virtualization controllers have been developed in recent years. These controllers, also called network hypervisors, attempt to manage physical SDN-based networks so that multiple tenants can safely share the same forwarding-plane hardware without risk of being affected by or affecting other tenants. However, many areas remain unexplored by current network hypervisor implementations. This thesis presents and evaluates some of the features offered by network hypervisors, such as full header-space availability, isolation, and transparent traffic forwarding capabilities for tenants. Flow setup time and throughput are also measured and compared among different network hypervisors. Three network hypervisors are evaluated: FlowVisor, VeRTIGO and OpenVirteX. These virtualization tools are assessed with experiments conducted on three different testbeds: an emulated Mininet scenario, a physical single-switch testbed, and a remote GENI testbed. The results indicate that network hypervisors bring SDN flexibility to network virtualization, making it easier for network administrators to define with precision how the network is sliced and divided among tenants. This increased flexibility, however, may come at the cost of decreased performance, and it also brings interoperability risks due to the lack of standardization of virtualization methods.
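For illustration, a small Mininet sketch of the kind of emulated experiment described above: hosts behind one OpenFlow switch whose control channel points at a network hypervisor. The hypervisor address 127.0.0.1:6633 is a placeholder assumption, and this is not the thesis's exact test harness.

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

# Two hosts behind one switch; the controller address is where a hypervisor
# such as FlowVisor or OpenVirteX would be expected to listen.
net = Mininet(topo=SingleSwitchTopo(k=2),
              controller=lambda name: RemoteController(name, ip="127.0.0.1", port=6633))
net.start()
h1, h2 = net.get("h1", "h2")
net.pingAll()                 # rough proxy for flow setup behaviour
print(net.iperf((h1, h2)))    # TCP throughput between the two hosts
net.stop()
```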
Contributors: Stall Rechia, Felipe (Author) / Syrotiuk, Violet R. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A computational framework based on convex optimization is presented for stability analysis of systems described by Partial Differential Equations (PDEs). Specifically, two forms of linear PDEs with spatially distributed polynomial coefficients are considered.

The first class includes linear coupled PDEs with one spatial variable. Parabolic, elliptic, or hyperbolic PDEs with Dirichlet, Neumann, Robin, or mixed boundary conditions can be reformulated for use by the framework. As an example, the reformulation is presented for systems governed by the Schrödinger equation (parabolic type) and by the relativistic heat conduction PDE and the acoustic wave equation (hyperbolic types). The second class of interest consists of scalar-valued PDEs with two spatial variables. The extra spatial variable allows consideration of problems such as the local stability of fluid flows in channels and the dynamics of populations over two-dimensional domains.

The approach does not involve discretization and is based on using Sum-of-Squares (SOS) polynomials and positive semi-definite matrices to parameterize operators that are positive on function spaces. Applying this parameterization to construct Lyapunov functionals with negative derivatives allows the stability conditions to be expressed as a set of Linear Matrix Inequalities (LMIs). The MATLAB package SOSTOOLS was used to construct the LMIs, which can then be solved using existing Semi-Definite Programming (SDP) solvers such as SeDuMi or MOSEK. Moreover, the proposed approach allows bounds on the rate of decay of the solution norm to be calculated.

The methodology is tested using several numerical examples and compared with the results obtained from simulation using standard methods of numerical discretization and analytic solutions.
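As a simplified sketch of the type of condition such a framework certifies, writing the PDE abstractly as w_t = A w and assuming the operator P is parameterized through SOS polynomials; this is a schematic of the Lyapunov argument, not the thesis's exact formulation:

```latex
% Schematic Lyapunov-based stability condition (simplified; the actual
% operator inequalities are enforced via the SOS parameterization).
\begin{align}
  V(w) &= \langle w, \mathcal{P} w \rangle,
    & \mathcal{P} &\succeq \epsilon I, \quad \epsilon > 0, \\
  \dot V(w) &= \langle \mathcal{A} w, \mathcal{P} w \rangle
      + \langle w, \mathcal{P} \mathcal{A} w \rangle
      \le -\delta \langle w, w \rangle,
    & \delta &> 0.
\end{align}
% Together these give exponential decay of V(w(t)) and hence a bound on the
% rate of decay of the solution norm.
```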
Contributors: Meyer, Evgeny (Author) / Peet, Matthew (Thesis advisor) / Berman, Spring (Committee member) / Rivera, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Historically, wireless communication devices have been developed to process one specific waveform. In contrast, a modern cellular phone supports multiple waveforms corresponding to the LTE, WCDMA (3G), and 2G standards. The selection of the network is controlled by software running on a general-purpose processor, not by the user. Now, instead of selecting from a set of complete radios as in software-controlled radio, what if the software could select the building blocks based on the user's needs? This is the new software-defined flexible radio, which would enable users to construct wireless systems that fit their needs rather than forcing them to choose from a small set of pre-existing protocols.

To develop and implement flexible protocols, flexible hardware very similar to a Software Defined Radio (SDR) is required. In this thesis, the Intel T2200 board is chosen as the SDR platform. It is a heterogeneous platform with ARM and CEVA DSP cores and several accelerators. A wide range of protocols is mapped onto this platform and their performance evaluated. These include two OFDM-based protocols (WiFi-Lite-A, WiFi-Lite-B), one DFT-spread-OFDM-based protocol (SCFDM-Lite), and one single-carrier-based protocol (SC-Lite). The transmitter and receiver blocks of the different protocols are first mapped onto the ARM in the T2200 board. The timing results show that the IFFT, FFT, and Viterbi decoder blocks take most of the transmitter and receiver execution time, so in the next step these are mapped onto the CEVA DSP. Mapping onto the CEVA DSP resulted in significant execution time savings: 60% for WiFi-Lite-A, 64% for WiFi-Lite-B, and 71.5% for SCFDM-Lite. No savings are reported for SC-Lite since it was not mapped onto the CEVA DSP.

A significant reduction in execution time is achieved for the WiFi-Lite-A and WiFi-Lite-B protocols by implementing the entire transmitter and receiver chains on the CEVA DSP. For instance, for WiFi-Lite-A, the savings were as large as 90%. Such large savings arise because the entire transmitter or receiver chain is implemented on the CEVA DSP, so the timing overhead due to ARM-CEVA communication is completely eliminated. Finally, over-the-air testing was done for the WiFi-Lite-A and WiFi-Lite-B protocols. Data was sent over the air using one Intel T2200 WBS board and received using another Intel T2200 WBS board. The received frames were decoded with no errors, thereby validating the over-the-air communication.
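For illustration, a small numpy sketch of an OFDM transmit path of the kind mapped onto the platform above (QPSK mapping, IFFT, cyclic prefix insertion); the FFT size, cyclic-prefix length, and modulation are generic placeholders, not the WiFi-Lite-A/B configurations:

```python
import numpy as np

N_FFT, CP_LEN = 64, 16   # placeholder OFDM parameters

def qpsk_map(bits):
    # Map bit pairs to unit-energy QPSK symbols.
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_modulate(symbols):
    # One OFDM symbol per N_FFT subcarriers: IFFT, then prepend the cyclic prefix.
    blocks = symbols.reshape(-1, N_FFT)
    time_domain = np.fft.ifft(blocks, axis=1)
    with_cp = np.hstack([time_domain[:, -CP_LEN:], time_domain])
    return with_cp.reshape(-1)

bits = np.random.randint(0, 2, 2 * N_FFT * 10)   # ten OFDM symbols of payload bits
tx_samples = ofdm_modulate(qpsk_map(bits))
```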
Contributors: Chagari, Vamsi Reddy (Author) / Chakrabarti, Chaitali (Thesis advisor) / Lee, Hyunseok (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2016