Matching Items (6)

Description
Network-on-Chip (NoC) architectures have emerged as the solution to the on-chip communication challenges of multi-core embedded processor architectures. Design space exploration and performance evaluation of a NoC design require a fast simulation infrastructure. Simulation of a register transfer level model of a NoC is too slow for any meaningful design space exploration. One solution is to increase the speed of simulation by raising the level of abstraction. SystemC TLM2.0 provides the capability to model a hardware design at higher levels of abstraction, with a trade-off between simulation speed and accuracy. In this thesis, SystemC TLM2.0 models of NoC routers are developed at three levels of abstraction, namely loosely-timed, approximately-timed, and cycle-accurate. The simulation speed and accuracy of these three models are evaluated through a case study of a 4x4 mesh NoC.
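To illustrate the abstraction trade-off described above, here is a minimal Python sketch (not the thesis's SystemC TLM2.0 code): a loosely-timed view advances time once per packet with an annotated delay, while a cycle-accurate view simulates every cycle, producing far more simulation events for the same traffic. The delay and packet-size constants are assumptions.

```python
ROUTER_DELAY_CYCLES = 4   # assumed router pipeline depth in cycles
FLITS_PER_PACKET = 8      # assumed packet size in flits

def route_loosely_timed(packet_count):
    """One timing annotation per packet: few simulation events, coarse timing."""
    time, events = 0, 0
    for _ in range(packet_count):
        time += ROUTER_DELAY_CYCLES + FLITS_PER_PACKET - 1  # annotated hop latency
        events += 1
    return time, events

def route_cycle_accurate(packet_count):
    """Advance cycle by cycle: many simulation events, detailed timing."""
    time, events = 0, 0
    for _ in range(packet_count):
        for _stage in range(ROUTER_DELAY_CYCLES):   # head flit moves through each stage
            time += 1
            events += 1
        for _flit in range(FLITS_PER_PACKET - 1):   # remaining flits stream out per cycle
            time += 1
            events += 1
    return time, events

# Same modeled latency in this toy example, but far fewer events (and hence a
# much faster simulation) for the loosely-timed model.
print(route_loosely_timed(1000), route_cycle_accurate(1000))
```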
Contributors: Arlagadda Narasimharaju, Jyothi Swaroop (Author) / Chatha, Karamvir S (Thesis advisor) / Sen, Arunabha (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Topological methods for data analysis present opportunities for enforcing certain invariances of broad interest in computer vision, including viewpoint invariance in activity analysis, articulation invariance in shape analysis, and measurement invariance in non-linear dynamical modeling. The increasing success of these methods is attributed to the complementary information that topology provides, as well as the availability of tools for computing topological summaries such as persistence diagrams. However, persistence diagrams are multi-sets of points, and hence it is not straightforward to fuse them with the features used by contemporary machine learning tools such as deep nets. In this work, theoretically well-grounded approaches to developing novel perturbation-robust topological representations are presented, with the long-term view of making them amenable to fusion with contemporary learning architectures. The proposed representation lives on a Grassmann manifold and hence can be used efficiently in machine learning pipelines.

The efficacy of the proposed descriptor was explored in three applications: view-invariant activity analysis, 3D shape analysis, and non-linear dynamical modeling. Favorable results were obtained both in high-level recognition performance and in reduced time complexity compared to other baseline methods.
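A minimal sketch of the general idea, not the thesis's exact construction: rasterize a persistence diagram into a smooth persistence surface, then keep a low-rank orthonormal column basis of that surface, so the representation becomes a point on a Grassmann manifold. The grid size, kernel width, and rank below are illustrative assumptions.

```python
import numpy as np

def diagram_to_grassmann_point(diagram, grid=32, sigma=0.05, rank=4):
    """Hypothetical sketch: persistence diagram -> smooth surface -> subspace."""
    xs = np.linspace(0.0, 1.0, grid)
    surface = np.zeros((grid, grid))
    for birth, death in diagram:                      # (birth, death) pairs in [0, 1]
        persistence = death - birth                   # weight long-lived features more
        gx = np.exp(-(xs - birth) ** 2 / (2 * sigma ** 2))
        gy = np.exp(-(xs - death) ** 2 / (2 * sigma ** 2))
        surface += persistence * np.outer(gy, gx)     # add one Gaussian bump per point
    u, _, _ = np.linalg.svd(surface)                  # orthonormal basis of the surface
    return u[:, :rank]                                # the rank-r subspace (Grassmann point)

dgm = [(0.1, 0.6), (0.2, 0.9), (0.4, 0.5)]            # toy persistence diagram
basis = diagram_to_grassmann_point(dgm)
print(basis.shape)                                    # (32, 4)
```

Small perturbations of the diagram only mildly perturb the surface, so the resulting subspace changes gracefully, which is the kind of robustness the description refers to.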
Contributors: Thopalli, Kowshik (Author) / Turaga, Pavan Kumar (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Light field imaging is limited by the computational processing demands of high sampling in both the spatial and angular dimensions. Single-shot light field cameras sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing incoming rays onto a 2D sensor array. While this resolution can be recovered using compressive sensing, these iterative solutions are slow in processing a light field. We present a deep learning approach using a new two-branch network architecture, consisting jointly of an autoencoder and a 4D CNN, to recover a high-resolution 4D light field from a single coded 2D image. This network decreases reconstruction time significantly while achieving average PSNR values of 26-32 dB on a variety of light fields. In particular, reconstruction time is decreased from 35 minutes to 6.7 minutes compared to the dictionary method for equivalent visual quality. These reconstructions are performed at small sampling/compression ratios as low as 8%, allowing for cheaper coded light field cameras. We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured from a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera. The combination of compressive light field capture with deep learning allows the potential for real-time light field video acquisition systems in the future.
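A hedged PyTorch sketch in the spirit of the two-branch idea above: an autoencoder branch maps the coded 2D image to a rough 4D light field, and a volumetric-CNN branch refines it. The layer sizes, the 5x5 angular sampling, and the use of Conv3d as a stand-in for true 4D convolutions are assumptions, not the thesis's architecture.

```python
import torch
import torch.nn as nn

VIEWS, H, W = 25, 64, 64   # assumed 5x5 angular views at 64x64 spatial patches

class TwoBranchLFNet(nn.Module):
    """Illustrative two-branch reconstruction network (autoencoder + volumetric CNN)."""
    def __init__(self):
        super().__init__()
        # Branch 1: autoencoder from the coded 2D image to a rough 4D light field
        self.autoencoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(H * W, 2048), nn.ReLU(),
            nn.Linear(2048, VIEWS * H * W), nn.ReLU(),
        )
        # Branch 2: 3D convolutions over (views, height, width) refine the rough estimate
        self.refine = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, coded):                      # coded: (batch, 1, H, W)
        rough = self.autoencoder(coded)            # (batch, VIEWS*H*W)
        rough = rough.view(-1, 1, VIEWS, H, W)     # treat the view index as a depth axis
        return self.refine(rough)                  # refined 4D light field

net = TwoBranchLFNet()
out = net(torch.randn(2, 1, H, W))
print(out.shape)                                   # torch.Size([2, 1, 25, 64, 64])
```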
Contributors: Gupta, Mayank (Author) / Turaga, Pavan (Thesis advisor) / Yang, Yezhou (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Visual navigation is a useful and important task for a variety of applications. As the prevalence of robots increases, there is an increasing need for energy-efficient navigation methods as well. Many aspects of efficient visual navigation algorithms have been implemented in the literature, but there is a lack of work on evaluating the efficiency of the image sensors themselves. In this thesis, two approaches are evaluated: adaptive image sensor quantization for traditional camera pipelines, and new event-based sensors for low-power computer vision. The first contribution of this thesis is an evaluation of varying levels of linear and logarithmic sensor quantization on the task of visual simultaneous localization and mapping (SLAM). This unconventional method can provide efficiency benefits, with a trade-off between task accuracy and energy efficiency. The second contribution is a new sensor quantization method, gradient-based quantization, introduced to improve the accuracy of the task. This method lowers the bit level only for parts of the image that are less likely to be important to the SLAM algorithm, since lower bit levels yield better energy efficiency but worse task accuracy. The third contribution is an evaluation of the efficiency and accuracy of event-based camera intensity representations for the task of optical flow. Results of a learning-based optical flow method are provided for each of five different reconstruction methods, along with ablation studies. Lastly, the challenges of an event feature-based SLAM system are presented, with results demonstrating the necessity of high-quality, high-resolution event data. The work in this thesis provides studies useful for examining the trade-offs of an efficient visual navigation system with traditional and event vision sensors. The results of this thesis also provide multiple directions for future work.
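The quantization schemes named above could look like the following sketch; the bit depths, logarithmic mapping, and gradient threshold are illustrative assumptions rather than the thesis's exact parameters.

```python
import numpy as np

def quantize_linear(img, bits):
    """Uniformly quantize a [0, 1] image to the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

def quantize_log(img, bits, eps=1e-3):
    """Quantize in the log domain, preserving relative detail in dark regions."""
    lo, hi = np.log(eps), np.log(1.0 + eps)
    q = quantize_linear((np.log(img + eps) - lo) / (hi - lo), bits)
    return np.exp(q * (hi - lo) + lo) - eps

def quantize_gradient_based(img, hi_bits=8, lo_bits=4, thresh=0.05):
    """Keep more bits only where image gradients are strong (likely feature regions)."""
    gy, gx = np.gradient(img)
    strong = np.hypot(gx, gy) > thresh
    return np.where(strong, quantize_linear(img, hi_bits), quantize_linear(img, lo_bits))

img = np.random.rand(120, 160)                  # stand-in for a normalized sensor frame
print(quantize_gradient_based(img).shape)
```

The gradient-based variant mirrors the trade-off described above: flat regions are stored at a lower bit level for energy savings, while edge-rich regions keep the precision that SLAM feature extraction relies on.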
Contributors: Christie, Olivia Catherine (Author) / Jayasuriya, Suren (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The accurate monitoring of the bulk transmission system of the electric power grid by sensors, such as Remote Terminal Units (RTUs) and Phasor Measurement Units (PMUs), is essential for maintaining the reliability of the modern power system. One of the primary objectives of power system monitoring is the identification of the snapshots of the system at regular intervals by performing state estimation using the available measurements from the sensors. The process of state estimation corresponds to the estimation of the complex voltages at all buses of the system. PMU measurements play an important role in this regard, because of the time-synchronized nature of these measurements as well as the faster rates at which they are produced. However, a model-based linear state estimator created using PMU-only data requires complete observability of the system by PMUs for its continuous functioning. The conventional model-based techniques also make certain assumptions in the modeling of the physical system, such as the constant values of the line parameters. The measurement error models in the conventional state estimators are also assumed to follow a Gaussian distribution. In this research, a data mining technique using Deep Neural Networks (DNNs) is proposed for performing a high-speed, time-synchronized state estimation of the transmission system of the power system. The proposed technique uses historical data to identify the correlation between the measurements and the system states as opposed to directly using the physical model of the system. Therefore, the highlight of the proposed technique is its ability to provide an accurate, fast, time-synchronized estimate of the system states even in the absence of complete system observability by PMUs.
The state estimator is formulated for the IEEE 118-bus system, and its reliable performance is demonstrated in the presence of redundant observability, complete observability, and incomplete observability. The robustness of the state estimator is also demonstrated by performing the estimation in the presence of non-Gaussian measurement errors and varying line parameters. The consistency of the DNN state estimator is demonstrated by performing state estimation for an entire day.
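A minimal sketch of the data-driven idea, assuming a feedforward regressor trained on historical measurement/state pairs; the measurement dimension, layer widths, and training loop are illustrative, not the thesis's design.

```python
import torch
import torch.nn as nn

N_MEAS, N_BUSES = 200, 118   # assumed PMU measurement channels; IEEE 118-bus states

class DnnStateEstimator(nn.Module):
    """Maps time-synchronized PMU measurements to per-bus voltage magnitude and angle."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MEAS, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 2 * N_BUSES),   # [|V|, theta] for every bus
        )

    def forward(self, z):
        return self.net(z)

model = DnnStateEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a (hypothetical) historical batch: measurements z paired
# with the corresponding "true" system states x.
z = torch.randn(64, N_MEAS)
x = torch.randn(64, 2 * N_BUSES)
optimizer.zero_grad()
loss = loss_fn(model(z), x)
loss.backward()
optimizer.step()
print(float(loss))
```

Because the mapping is learned from historical correlations rather than from the network model, inference remains a single forward pass even when PMU observability is incomplete, which is the property the description emphasizes.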
Contributors: Chandrasekaran, Harish (Author) / Pal, Anamitra (Thesis advisor) / Sen, Arunabha (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Due to the rapid penetration of solar power systems in residential areas, there has been a dramatic increase in bidirectional power flow. Such bidirectional power flow creates a need to know where Photovoltaic (PV) systems are located, what their quantity is, and how much they generate. However, significant challenges exist for accurate solar panel detection, capacity quantification, and generation estimation with existing methods, because of the limited labeled ground truth and the relatively poor performance of direct supervised learning. To mitigate these issues, this thesis revolutionizes key learning concepts to (1) largely increase the volume of the training data set and expand the labeled data set by creating highly realistic solar panel images, (2) boost detection and quantification learning through physical knowledge, and (3) greatly enhance the generation estimation capability by utilizing effective features and neighboring generation patterns. These techniques not only reshape the machine learning methods in the GIS domain but also provide a highly accurate solution for gaining a better understanding of distribution networks with high PV penetration. The numerical validation and performance evaluation establish the high accuracy and scalability of the proposed methodologies on existing solar power systems in the Southwest region of the United States of America.

The distribution and transmission networks both have primitive control methodologies, but it is now high time to work out intelligent control schemes based on reinforcement learning and to show that they not only perform well but also have the ability to adapt to changing environments. This thesis proposes a sequence task-based learning method to create an agent that can learn the best action set to overcome the issues of transient over-voltage.
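As a hedged illustration of reinforcement-learning-based voltage control (a plain tabular Q-learning stand-in, not the thesis's sequence task-based method), the following sketch learns actions that push a toy discretized bus voltage back toward its nominal band; the state/action discretization and the transition model are illustrative assumptions.

```python
import numpy as np

N_STATES, N_ACTIONS = 20, 5          # discretized voltage levels x reactive-power setpoints
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
q_table = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy transition: actions nudge the voltage; reward penalizes deviation from nominal."""
    nominal = N_STATES // 2
    next_state = int(np.clip(state + (action - N_ACTIONS // 2), 0, N_STATES - 1))
    reward = -abs(next_state - nominal)            # over/under-voltage is penalized
    return next_state, reward

rng = np.random.default_rng(0)
state = int(rng.integers(N_STATES))
for _ in range(5000):
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))      # explore
    else:
        action = int(np.argmax(q_table[state]))    # exploit current estimate
    next_state, reward = step(state, action)
    best_next = np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (reward + GAMMA * best_next - q_table[state, action])
    state = next_state

# Learned action at the highest (over-voltage) state: should pull the voltage down.
print(int(np.argmax(q_table[N_STATES - 1])))
```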
Contributors: Hashmy, Syed Muhammad Yousaf (Author) / Weng, Yang (Thesis advisor) / Sen, Arunabha (Committee member) / Qin, Jiangchao (Committee member) / Arizona State University (Publisher)
Created: 2019