Matching Items (92)

Description

Robotic lower limb prostheses provide new opportunities to help transfemoral amputees regain mobility. However, their application is impeded by the need for prosthetists to manually tune and optimize the impedance control parameters for each individual user in different task environments. Reinforcement learning (RL) is capable of automatically learning from interaction with the environment, making it a natural candidate to replace human prosthetists in customizing the control parameters. However, neither traditional RL approaches nor the popular deep RL approaches are readily suitable for learning with a limited number of samples or with samples that exhibit large variations. This dissertation aims to explore new RL-based adaptive solutions that are data-efficient for controlling robotic prostheses.

This dissertation begins by proposing a new flexible policy iteration (FPI) framework. To improve sample efficiency, FPI can utilize either an on-policy or an off-policy learning strategy, can learn from either online or offline data, and can even adopt existing knowledge from an external critic. Approximate convergence to Bellman optimal solutions is guaranteed under mild conditions. Simulation studies validated that FPI was data-efficient compared to several established RL methods. Furthermore, a simplified version of FPI was implemented to learn from offline data, and the learned policy was then successfully tested for tuning the control parameters online on a human subject.
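As a minimal illustration of the policy-iteration backbone that FPI generalizes, the sketch below runs tabular policy iteration on a made-up two-state MDP; the states, dynamics, rewards, and discount factor are illustrative assumptions, not the prosthesis tuning problem.

```python
# Tabular policy iteration: alternate policy evaluation and policy
# improvement until the policy is stable. The 2-state MDP below is a
# made-up toy, not the prosthesis impedance-tuning problem.

GAMMA = 0.9
STATES, ACTIONS = [0, 1], [0, 1]
# P[s][a] = (next_state, reward)  -- deterministic toy dynamics
P = {0: {0: (0, 0.0), 1: (1, 1.0)},
     1: {0: (0, 0.0), 1: (1, 2.0)}}

def evaluate(policy, tol=1e-9):
    """Iterative policy evaluation: V(s) = r + gamma * V(s')."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            ns, r = P[s][policy[s]]
            v = r + GAMMA * V[ns]
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def policy_iteration():
    policy = {s: 0 for s in STATES}
    while True:
        V = evaluate(policy)
        stable = True
        for s in STATES:
            best = max(ACTIONS, key=lambda a: P[s][a][1] + GAMMA * V[P[s][a][0]])
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:
            return policy, V

pi, V = policy_iteration()
# The optimal policy always takes action 1 (move to state 1 for reward)
```

FPI's contribution lies in relaxing how the evaluation step gets its data (on-policy, off-policy, online, offline, or an external critic); the alternating evaluate-improve structure above is the common skeleton.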

Next, the dissertation discusses RL control with information transfer (RL-IT), or knowledge-guided RL (KG-RL), which is motivated by the benefit of transferring knowledge acquired from one subject to another. To explore its feasibility, knowledge was extracted from data measurements of able-bodied (AB) subjects and transferred to guide Q-learning control for an amputee in OpenSim simulations. These results again demonstrated that data and time efficiency were improved by using prior knowledge.

While the present study is new and promising, there are still many open questions to be addressed in future research. To account for human adaptation, the learning control objective function may be designed to incorporate human-prosthesis performance feedback such as symmetry, user comfort level and satisfaction, and user energy consumption. To make RL-based control parameter tuning practical in real life, it should be further developed and tested in different use environments, such as from level-ground walking to stair ascending or descending, and from walking to running.
Contributors: Gao, Xiang (Author) / Si, Jennie (Thesis advisor) / Huang, He Helen (Committee member) / Santello, Marco (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Ultrasound B-mode imaging is an increasingly significant medical imaging modality for clinical applications. Compared to other imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI), ultrasound imaging has the advantage of being safe, inexpensive, and portable. While two-dimensional (2-D) ultrasound imaging is very popular, three-dimensional (3-D) ultrasound imaging provides distinct advantages over its 2-D counterpart through volumetric imaging, which leads to more accurate analysis of tumors and cysts. However, the amount of data received at the front end of a 3-D system is extremely large, making it impractical for power-constrained portable systems.

In this thesis, algorithm and hardware design techniques to support a hand-held 3-D ultrasound imaging system are proposed. Synthetic aperture sequential beamforming (SASB) is chosen since its computations can be split into two stages, where the output generated by Stage 1 is significantly smaller in size than the input. This characteristic allows Stage 1 to be performed in the front end while Stage 2 can be sent out to be processed elsewhere.

The contributions of this thesis are as follows. First, 2-D SASB is extended to 3-D. Techniques to increase the volume rate of 3-D SASB through a new multi-line firing scheme and the use of a linear chirp as the excitation waveform are presented. A new sparse array design is proposed that not only reduces the number of active transducers but also avoids the imaging degradation caused by grating lobes. A combination of these techniques increases the volume rate of 3-D SASB by 4× without introducing extra computations at the front end.

Next, algorithmic techniques to further reduce the Stage 1 computations in the front end are presented. These include reducing the number of distinct apodization coefficients and operating on narrow-bit-width fixed-point data. A 3-D die-stacked architecture is designed for the front end. This highly parallel architecture enables the signals received by 961 active transducers to be digitized, routed by a network-on-chip, and processed in parallel. The processed data are accumulated through a bus-based structure. This architecture is synthesized using the TSMC 28 nm technology node, and the estimated power consumption of the front end is less than 2 W.

Finally, the Stage 2 computations are mapped onto a reconfigurable multi-core architecture, TRANSFORMER, which supports different types of on-chip memory banks and run-time reconfigurable connections between general processing elements and memory banks. The matched filtering step and the beamforming step in Stage 2 are mapped onto TRANSFORMER with different memory configurations. Gem5 simulations show that the private cache mode yields shorter execution time and higher computation efficiency than the other cache modes. The overall execution time for Stage 2 is 14.73 ms. The average power consumption and the average giga-operations-per-second-per-watt in the 14 nm technology node are 0.14 W and 103.84, respectively.
Contributors: Zhou, Jian (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Wenisch, Thomas F. (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

The problem of multiple object tracking seeks to jointly estimate the time-varying cardinality and the trajectory of each object. Numerous challenges are encountered in tracking multiple objects, including a time-varying number of measurements, varying constraints, and changing environmental conditions. In this thesis, the proposed statistical methods integrate physics-based models with Bayesian nonparametric methods to address the main challenges in a tracking problem. In particular, Bayesian nonparametric methods are exploited to efficiently and robustly infer object identity and learn time-dependent cardinality; together with Bayesian inference methods, they are also used to associate measurements with objects and estimate object trajectories. These methods differ fundamentally from current approaches, which are mainly based on random finite set theory.

The first contribution proposes dependent nonparametric models such as the dependent Dirichlet process and the dependent Pitman-Yor process to capture the inherent time-dependency in the problem at hand. These processes are used as priors for object state distributions to learn dependent information between previous and current time steps. Markov chain Monte Carlo sampling methods exploit the learned information to sample from posterior distributions and update the estimated object parameters.
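A minimal sketch of the clustering behavior that Dirichlet-process priors induce, via Chinese-restaurant-process sampling; the concentration parameter `alpha` and the number of points are illustrative assumptions, and the dissertation's dependent processes add time dependency on top of this basic mechanism.

```python
import random

def crp_partition(n_points, alpha, seed=0):
    """Chinese-restaurant-process draw: each point joins an existing
    cluster ("object") with probability proportional to its size, or
    starts a new cluster with probability proportional to alpha."""
    rng = random.Random(seed)
    counts, labels = [], []
    for _ in range(n_points):
        total = sum(counts) + alpha
        r = rng.uniform(0, total)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1          # join existing cluster k
                labels.append(k)
                break
        else:
            counts.append(1)            # open a new cluster
            labels.append(len(counts) - 1)
    return labels, counts

labels, counts = crp_partition(20, alpha=1.0)
```

Because the number of clusters is not fixed in advance, a prior of this family lets the tracker learn the number of objects from the data rather than assume it.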

The second contribution proposes a novel, robust, and fast nonparametric approach based on a diffusion process over infinite random trees to infer information on object cardinality and trajectory. This method follows the hierarchy induced by objects entering and leaving a scene and the time-dependency between unknown object parameters. Markov chain Monte Carlo sampling methods integrate the prior distributions over the infinite random trees with time-dependent diffusion processes to update object states.

The third contribution develops the use of hierarchical models to form a prior for statistically dependent measurements in a single object tracking setup. Dependency among the sensor measurements provides extra information that is incorporated to achieve optimal tracking performance. The hierarchical Dirichlet process as a prior provides the flexibility required for inference. A Bayesian tracker is integrated with the hierarchical Dirichlet process prior to accurately estimate the object trajectory.

The fourth contribution proposes an approach to model both multiple dependent objects and multiple dependent measurements. This approach integrates the dependent Dirichlet process modeling of the dependent objects with the hierarchical Dirichlet process modeling of the measurements to fully capture the dependency among both objects and measurements. The Bayesian nonparametric models can successfully associate each measurement with the corresponding object and exploit dependency among them to more accurately infer object trajectories. Markov chain Monte Carlo methods amalgamate the dependent Dirichlet process with the hierarchical Dirichlet process to infer object identity and cardinality.

Simulations demonstrate the improvement in multiple object tracking performance compared to approaches developed using random finite set theory.
Contributors: Moraffah, Bahman (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel W. (Committee member) / Richmond, Christ D. (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

The human brain controls a person's actions and reactions. In this study, the main objective is to quantify reaction time to a change in a visual event and to determine the inherent relationship between response time and the corresponding brain activities. Which parts of the human brain are responsible for the reaction time is also of interest. Since electroencephalogram (EEG) signals reflect changes in brain function over time, EEG signals from different locations of the brain are used as indicators of brain activities. Because the different channels correspond to different parts of the brain, identifying the most relevant channels indicates which brain locations are responsible. In this study, response time is estimated using EEG signal features from the time, frequency, and time-frequency domains. Regression-based estimation using the full data set results in a root mean square error (RMSE) of 99.5 milliseconds and a correlation value of 0.57. However, adding non-EEG features to the existing features gives an RMSE of 101.7 ms and a correlation value of 0.58. The same analysis with a custom data set provides an RMSE of 135.7 milliseconds and a correlation value of 0.69. Classification-based estimation provides 79% and 72% accuracy for binary and 3-class classification, respectively. Classification of extremes (high vs. low) results in 95% accuracy. Combining recursive feature elimination, tree-based feature importance, and the mutual information method, important channels and features are isolated based on the best result. Since human response time is not solely dependent on brain activities, additional information about the subject is required to improve the reaction time estimation.
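The two metrics reported above, RMSE and Pearson correlation, can be computed as below; the reaction-time values are made up for illustration and are not from the study's data.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between true and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

true_rt = [420.0, 510.0, 380.0, 600.0]   # reaction times in ms (made up)
pred_rt = [450.0, 490.0, 400.0, 570.0]
print(rmse(true_rt, pred_rt), pearson(true_rt, pred_rt))
```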
Contributors: Chowdhury, Mohammad Samin Nur (Author) / Bliss, Daniel W. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Transportation plays a significant role in every human's life. Numerous factors, such as cost of living, available amenities, and work style, to name a few, play a vital role in determining the amount of travel time. Such factors, among others, have led in part to an increased need for private transportation and, consequently, to an increase in the purchase of private cars. Road safety has also been impacted by factors such as Driving Under the Influence (DUI) and driver distraction due to the increased use of mobile devices while driving. These factors led to a growing need for an Advanced Driver Assistance System (ADAS) to help the driver stay aware of the environment and to improve road safety.

EcoCAR3 is one of the Advanced Vehicle Technology Competitions, sponsored by the United States Department of Energy (DoE) and managed by Argonne National Laboratory in partnership with the North American automotive industry. Students are challenged beyond the traditional classroom environment in these competitions, where they redesign a donated production vehicle to improve energy efficiency and to meet emission standards while maintaining the features that are attractive to the customer, including but not limited to performance, consumer acceptability, safety, and cost.

This thesis presents a driver assistance system interface that was implemented as part of EcoCAR3, including the adopted sensors, hardware and software components, system implementation, validation, and testing. The implemented driver assistance system uses a combination of range measurement sensors to determine the distance, relative location, and relative velocity of obstacles and surrounding objects, together with a computer vision algorithm for obstacle detection and classification. The sensor system and vision system were tested individually and then combined within the overall system. In addition, a visual and audio feedback system was designed and implemented to provide timely feedback to the driver in an attempt to enhance situational awareness and improve safety.

Since the driver assistance system was designed and developed as part of a DoE sponsored competition, the system needed to satisfy competition requirements and rules. This work attempted to optimize the system in terms of performance, robustness, and cost while satisfying these constraints.
Contributors: Balaji, Venkatesh (Author) / Karam, Lina J. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

A signal compressed using classical compression methods can be recovered by brute force, i.e., by searching for non-zero entries component-wise. However, such sparse solutions require combinatorial searches with high computational cost. In this thesis, instead, two Bayesian approaches are considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution, which is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is, therefore, a different (i.e., misspecified) model. To estimate the posterior distribution for the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal in a Bayesian framework is one class of algorithms for solving the sparse problem; all such classes of algorithms aim to avoid the high computational cost of combinatorial searches. Compressive sensing (CS) is the widely used term for this sparse optimization problem and its applications, which include magnetic resonance imaging (MRI), image acquisition in radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function using point estimation, or by recovering a distribution of the sparse vector using Bayesian estimation. Although the Bayesian framework provides an extra degree of freedom to assume a distribution that is directly applicable to the problem of interest, it is hard to find a theoretical guarantee of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution.

In the simulation study, a general Bayesian Cramér-Rao bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated using the mean square error (MSE) performance of the aforementioned algorithms. A quantification of the performance in terms of gains versus losses is also introduced as one main finding of this report.
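For contrast with the Bayesian algorithms, the combinatorial baseline the thesis seeks to avoid can be sketched as an exhaustive search over candidate supports; the tiny dictionary and the restriction to a single active entry (k = 1, so each candidate reduces to a scalar least-squares fit) are illustrative assumptions.

```python
from itertools import combinations

def brute_force_sparse(A, y, k):
    """Exhaustive search over all size-k supports of x: the combinatorial
    baseline that becomes intractable as the dictionary grows. A is a
    list of rows; here k = 1, so each candidate support is one column
    and its least-squares coefficient has a closed form."""
    m, n = len(A), len(A[0])
    best = (float("inf"), None, None)
    for support in combinations(range(n), k):
        j = support[0]                       # k = 1: single active column
        col = [A[i][j] for i in range(m)]
        coef = sum(c * v for c, v in zip(col, y)) / sum(c * c for c in col)
        resid = sum((v - coef * c) ** 2 for c, v in zip(col, y))
        if resid < best[0]:
            best = (resid, j, coef)
    return best  # (residual, active index, coefficient)

A = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 2.0]]
y = [2.0, 2.0]        # equals 1.0 * column 2, so an exact fit exists
resid, idx, coef = brute_force_sparse(A, y, k=1)
```

With n columns and sparsity k there are C(n, k) supports to test, which is exactly the exponential cost that GAMP and SBL sidestep.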
Contributors: Alhowaish, Abdulhakim (Author) / Richmond, Christ D. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Sankar, Lalitha (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector and displays good performance in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, like vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage consists of Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of this detector. Cha et al. used a heuristic approach to determine the thresholds for the two-stage detector. In this study, the probability density function of the modified two-stage detector is derived and used to propose an approach for determining the thresholds of this two-dimensional detection problem. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it gives only slightly poorer performance than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
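As a rough sketch of the two statistics involved, using the classical sample coherence estimator (not Berger's alternative estimator) on synthetic complex pixel stacks; the data and any thresholds one would apply are illustrative assumptions.

```python
# f and g are co-registered stacks of complex pixel values from the
# reference and test SAR images over a local window (synthetic here).

def variance_ratio(f, g):
    """Non-coherent statistic: ratio of sample mean powers."""
    pf = sum(abs(x) ** 2 for x in f) / len(f)
    pg = sum(abs(x) ** 2 for x in g) / len(g)
    return pg / pf

def coherence(f, g):
    """Classical sample coherence magnitude estimate."""
    num = abs(sum(a * b.conjugate() for a, b in zip(f, g)))
    den = (sum(abs(a) ** 2 for a in f) * sum(abs(b) ** 2 for b in g)) ** 0.5
    return num / den

f = [1 + 1j, 2 - 1j, -1 + 2j, 0.5 + 0j]
g = [x * (0.8 - 0.2j) for x in f]   # g is a scaled copy: fully coherent
```

An unchanged scene gives a coherence near 1 and a variance ratio near the power scaling; significant change drives the variance ratio away from 1 (first stage), while subtle change mainly depresses the coherence (second stage).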
Contributors: Bondre, Akshay Sunil (Author) / Richmond, Christ D. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

With the emergence of next-generation wireless communication, a growing number of new applications, such as the Internet of Things, autonomous cars, and drones, are crowding the unlicensed spectrum. Licensed networks such as LTE are also moving into the unlicensed spectrum to deliver high-capacity content at low cost. However, LTE was not designed to share spectrum with others. A cooperation center for these networks would be costly because they possess heterogeneous properties and any network can enter and leave the spectrum without restriction, making the design challenging. Since it is infeasible to incorporate potentially infinite scenarios within one unified design, an alternative solution is to let each network learn its own coexistence policy. Previous solutions only work in fixed scenarios. In this work, we present a reinforcement learning algorithm to handle the coexistence between Wi-Fi and LTE licensed-assisted access (LTE-LAA) agents in the 5 GHz unlicensed spectrum. The coexistence problem was modeled as a decentralized partially observable Markov decision process (Dec-POMDP), and a Bayesian approach was adopted for policy learning with a nonparametric prior to accommodate the uncertainty of policies for different agents. A fairness measure was introduced into the reward function to encourage fair sharing between agents. We turned the reinforcement learning problem into an optimization problem by treating the value function as a likelihood and using variational inference for posterior approximation. Simulation results demonstrate that this algorithm can reach high value with compact policy representations and remains computationally efficient when applied to a growing set of agents.
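The abstract does not specify which fairness measure enters the reward; one common choice that could fill that role is Jain's fairness index, sketched below with a made-up throughput/fairness weighting that is purely illustrative.

```python
def jain_fairness(throughputs):
    """Jain's fairness index: equals 1 when all agents receive equal
    throughput and approaches 1/n under maximal unfairness."""
    n = len(throughputs)
    s, s2 = sum(throughputs), sum(x * x for x in throughputs)
    return s * s / (n * s2)

def reward(own_throughput, all_throughputs, weight=0.5):
    # Blend individual throughput with a shared fairness bonus
    # (the blend weight is a made-up illustration).
    return (1 - weight) * own_throughput + weight * jain_fairness(all_throughputs)

r = reward(0.9, [0.9, 0.9, 0.9])   # equal sharing -> fairness index 1.0
```

Adding such a shared term to every agent's reward discourages policies that maximize one network's throughput at the others' expense.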
Contributors: Shih, Po-Kan (Author) / Moraffah, Bahman (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Dasarathy, Gautam (Committee member) / Shih, YiChang (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Lattice-based Cryptography is an up-and-coming field of cryptography that utilizes the difficulty of lattice problems to design lattice-based cryptosystems that are resistant to quantum attacks and applicable to Fully Homomorphic Encryption (FHE) schemes. In this thesis, the parallelization of the Residue Number System (RNS) and the algorithmic efficiency of the Number Theoretic Transform (NTT) are combined to tackle the most significant bottleneck, polynomial ring multiplication, with the hardware design of an optimized RNS-based NTT polynomial multiplier. The design utilizes Negative Wrapped Convolution, the NTT, RNS Montgomery reduction with the Bajard and Shenoy extensions, and optimized modular 32-bit channel arithmetic for nine RNS channels to accomplish an RNS polynomial multiplication. In addition to a full software implementation of the whole system, a pipelined and optimized RNS-based NTT unit with 4 RNS butterflies is implemented on the Xilinx Artix-7 FPGA (xc7a200tlffg1156-2L) for size and delay estimates. The hardware implementation achieves an operating frequency of 47.043 MHz and utilizes 13239 LUTs, 4010 FFs, and 330 DSP blocks, allowing for multiple simultaneously operating NTT units depending on FPGA size constraints.
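A toy sketch of the core operation the hardware accelerates: NTT-based negacyclic polynomial multiplication (mod x^n + 1) via negative wrapped convolution. The parameters here (n = 4, p = 17, psi = 9 a primitive 8th root of unity mod 17) are illustrative and far smaller than the thesis's nine 32-bit RNS channels.

```python
# Negative wrapped convolution: pre-twist by powers of psi turns a
# cyclic (NTT-domain) product into a negacyclic one, i.e. reduction
# mod x^n + 1, the polynomial ring used in lattice cryptography.

P, N = 17, 4
PSI, PSI_INV = 9, 2        # psi^(2n) = 1 and psi^n = -1 mod p
OMEGA, OMEGA_INV = 13, 4   # omega = psi^2, a primitive n-th root of unity
N_INV = 13                 # n^-1 mod p

def ntt(a, omega):
    """Recursive Cooley-Tukey number theoretic transform mod P."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % P)
    odd = ntt(a[1::2], omega * omega % P)
    out, w = [0] * n, 1
    for k in range(n // 2):
        t = w * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        w = w * omega % P
    return out

def negacyclic_mul(a, b):
    """c = a * b mod (x^N + 1, P) via negative wrapped convolution."""
    aw = [x * pow(PSI, i, P) % P for i, x in enumerate(a)]
    bw = [x * pow(PSI, i, P) % P for i, x in enumerate(b)]
    cw = [x * y % P for x, y in zip(ntt(aw, OMEGA), ntt(bw, OMEGA))]
    c = [x * N_INV % P for x in ntt(cw, OMEGA_INV)]   # inverse NTT
    return [x * pow(PSI_INV, i, P) % P for i, x in enumerate(c)]

c = negacyclic_mul([1, 2, 3, 4], [5, 6, 7, 8])   # -> [12, 15, 2, 9]
```

The RNS layer in the thesis runs one such NTT multiplication independently per channel modulus, which is what makes the design parallelizable in hardware.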
Contributors: Brist, Logan Alan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

This research presents advances in time-synchronized phasor (i.e., synchrophasor) estimation and imaging with very-low-frequency electric fields. Phasor measurement units measure and track dynamic systems, often power systems, using synchrophasor estimation algorithms. Two improvements to subspace-based synchrophasor estimation algorithms are shown. The first improvement is a dynamic thresholding method for accurately determining the signal subspace when using the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. This improvement facilitates accurate ESPRIT-based frequency estimates of both the nominal system frequency and the frequencies of interfering signals such as harmonics or out-of-band interference. Proper frequency estimation of all signals present in the measurement data allows for accurate least squares estimates of synchrophasors at the nominal system frequency. By including the effects of clutter signals in the synchrophasor estimate, interference from clutter signals can be excluded. The result is near-flat estimation error during nominal system frequency changes, in the presence of harmonic distortion, and under out-of-band interference. The second improvement reduces the computational burden of the ESPRIT frequency estimation step by showing that an optimized eigenvalue decomposition of the measurement data can be used instead of a singular value decomposition. This research also explores a deep-learning-based inversion method for imaging objects with a uniform electric field and a 2D planar D-dot array. Using electric fields as an illumination source has seen multiple applications ranging from medical imaging to mineral deposit detection. It is shown that a planar D-dot array and a deep neural network can reconstruct the electrical properties of randomized objects.
A 16000-sample dataset of objects composed of a three-by-three grid of randomized dielectric constants was generated to train a deep neural network to predict these dielectric constants from measured field distortions. Increasingly complex imaging environments are simulated, ranging from objects in free space to objects placed in a physical cage designed to produce uniform electric fields. Finally, this research relaxes the uniform electric field constraint, showing that the volume of an opaque container can be imaged with a copper tube antenna and a 1x4 array of D-dot sensors. Real-world experimental results show that it is possible to image buckets of water (targets) within a plastic shed. These experiments explore the detectability of targets as a function of target placement within the shed.
Contributors: Drummond, Zachary (Author) / Allee, David R. (Thesis advisor) / Claytor, Kevin E. (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Aberle, James (Committee member) / Arizona State University (Publisher)
Created: 2021