Description
Transportation plays a significant role in every person's life. Factors such as cost of living, available amenities, and work style help determine the amount of time spent traveling. Such factors, among others, have led in part to an increased need for private transportation and, consequently, to an increase in private car purchases. At the same time, road safety has been affected by factors such as driving under the influence (DUI) and driver distraction caused by the growing use of mobile devices while driving. These trends have led to an increasing need for Advanced Driver Assistance Systems (ADAS) that help the driver stay aware of the environment and improve road safety.

EcoCAR3 is one of the Advanced Vehicle Technology Competitions, sponsored by the United States Department of Energy (DoE) and managed by Argonne National Laboratory in partnership with the North American automotive industry. Students are challenged beyond the traditional classroom environment in these competitions, where they redesign a donated production vehicle to improve energy efficiency and to meet emission standards while maintaining the features that are attractive to the customer, including but not limited to performance, consumer acceptability, safety, and cost.

This thesis presents a driver assistance system interface that was implemented as part of EcoCAR3, including the adopted sensors, hardware and software components, system implementation, validation, and testing. The implemented driver assistance system uses a combination of range measurement sensors to determine the distance, relative location, and relative velocity of obstacles and surrounding objects, together with a computer vision algorithm for obstacle detection and classification. The sensor system and vision system were tested individually and then combined within the overall system. Also, a visual and audio feedback system was designed and implemented to provide timely feedback to the driver in an attempt to enhance situational awareness and improve safety.

Since the driver assistance system was designed and developed as part of a DoE sponsored competition, the system needed to satisfy competition requirements and rules. This work attempted to optimize the system in terms of performance, robustness, and cost while satisfying these constraints.
Contributors: Balaji, Venkatesh (Author) / Karam, Lina J. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Ultrasound B-mode imaging is an increasingly significant medical imaging modality for clinical applications. Compared to other imaging modalities like computed tomography (CT) or magnetic resonance imaging (MRI), ultrasound imaging has the advantage of being safe, inexpensive, and portable. While two-dimensional (2-D) ultrasound imaging is very popular, three-dimensional (3-D) ultrasound imaging provides distinct advantages over its 2-D counterpart by providing volumetric imaging, which leads to more accurate analysis of tumors and cysts. However, the amount of received data at the front end of a 3-D system is extremely large, making it impractical for power-constrained portable systems.

In this thesis, algorithm and hardware design techniques to support a hand-held 3-D ultrasound imaging system are proposed. Synthetic aperture sequential beamforming (SASB) is chosen since its computations can be split into two stages, where the output generated by Stage 1 is significantly smaller in size than the input. This characteristic enables Stage 1 to be performed in the front end while Stage 2 can be processed elsewhere.

The contributions of this thesis are as follows. First, 2-D SASB is extended to 3-D. Techniques to increase the volume rate of 3-D SASB through a new multi-line firing scheme and the use of a linear chirp as the excitation waveform are presented. A new sparse array design is proposed that not only reduces the number of active transducers but also avoids the imaging degradation caused by grating lobes. A combination of these techniques increases the volume rate of 3-D SASB by 4× without introducing extra computations at the front end.

Next, algorithmic techniques to further reduce the Stage 1 computations in the front end are presented. These include reducing the number of distinct apodization coefficients and operating with narrow-bit-width fixed-point data. A 3-D die-stacked architecture is designed for the front end. This highly parallel architecture enables the signals received by 961 active transducers to be digitized, routed by a network-on-chip, and processed in parallel. The processed data are accumulated through a bus-based structure. This architecture is synthesized using the TSMC 28 nm technology node, and the estimated power consumption of the front end is less than 2 W.
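The narrow-bit-width fixed-point operation mentioned above can be illustrated with a short sketch (the 12-bit width and 8 fractional bits below are hypothetical choices for illustration, not the widths used in this design):

```python
def to_fixed_point(x, total_bits=12, frac_bits=8):
    """Quantize x to signed fixed-point with `total_bits` bits total,
    `frac_bits` of them fractional, saturating at the representable range."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

y = to_fixed_point(0.7071)  # 181/256 ≈ 0.70703125
```

Reducing `total_bits` shrinks the datapath width, and hence front-end area and power, at the cost of quantization noise in the beamformed output.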

Finally, the Stage 2 computations are mapped onto a reconfigurable multi-core architecture, TRANSFORMER, which supports different types of on-chip memory banks and run-time reconfigurable connections between general processing elements and memory banks. The matched filtering step and the beamforming step in Stage 2 are mapped onto TRANSFORMER with different memory configurations. Gem5 simulations show that the private cache mode generates shorter execution time and higher computation efficiency compared to other cache modes. The overall execution time for Stage 2 is 14.73 ms. The average power consumption and the average Giga-operations-per-second/Watt in 14 nm technology node are 0.14 W and 103.84, respectively.
Contributors: Zhou, Jian (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Wenisch, Thomas F. (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Infants born before 37 weeks of pregnancy are considered preterm. Preterm infants typically have to be strictly monitored, since they are highly susceptible to health problems such as hypoxemia (low blood oxygen level), apnea, respiratory issues, cardiac problems, and neurological problems, as well as an increased chance of long-term health issues such as cerebral palsy, asthma, and sudden infant death syndrome. One of the leading health complications in preterm infants is bradycardia, defined as a slower than expected heart rate, generally below 60 beats per minute. Bradycardia is often accompanied by low oxygen levels and can cause additional long-term health problems in the premature infant. The implementation of a non-parametric method to predict the onset of bradycardia is presented. This method assumes no prior knowledge of the data and uses kernel density estimation to predict the future onset of bradycardia events. The data is preprocessed and then analyzed to detect the peaks in the ECG signals, after which different kernels are implemented to estimate the shared underlying distribution of the data. The performance of the algorithm is evaluated using various metrics, and the computational challenges and methods to overcome them are also discussed.
It is observed that the performance of the algorithm with regard to the kernels used is consistent with the theoretical performance of the kernels presented in previous work. The theoretical approach has also been automated in this work, and the various implementation challenges have been addressed.
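The kernel density estimation step described above can be sketched as follows (a minimal Gaussian-kernel illustration with synthetic inter-beat intervals, not the thesis implementation; the 1.0 s threshold corresponds to a heart rate of 60 beats per minute):

```python
import numpy as np

def kde_pdf(samples, x, h):
    """Gaussian kernel density estimate of the sample distribution at points x."""
    z = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

# Hypothetical inter-beat intervals (seconds) derived from detected ECG R-peaks.
rng = np.random.default_rng(0)
intervals = rng.normal(0.85, 0.08, size=200)

# Silverman's rule-of-thumb bandwidth.
h = 1.06 * intervals.std() * len(intervals) ** (-1 / 5)

# Probability that the next interval exceeds 1.0 s (heart rate below 60 bpm),
# approximated by a Riemann sum over the upper tail of the KDE.
grid = np.linspace(1.0, 2.0, 2000)
p_brady = (kde_pdf(intervals, grid, h) * (grid[1] - grid[0])).sum()
```

Different kernels (Epanechnikov, triangular, and so on) slot in by replacing the Gaussian weight inside `kde_pdf`.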
Contributors: Mitra, Sinjini (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Moraffah, Bahman (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Lattice-based cryptography is an emerging field of cryptography that utilizes the difficulty of lattice problems to design cryptosystems that are resistant to quantum attacks and applicable to Fully Homomorphic Encryption (FHE) schemes. In this thesis, the parallelization of the Residue Number System (RNS) and the algorithmic efficiency of the Number Theoretic Transform (NTT) are combined to tackle the most significant bottleneck, polynomial ring multiplication, with the hardware design of an optimized RNS-based NTT polynomial multiplier. The design utilizes negative wrapped convolution, the NTT, RNS Montgomery reduction with the Bajard and Shenoy extensions, and optimized modular 32-bit channel arithmetic for nine RNS channels to accomplish an RNS polynomial multiplication. In addition to a full software implementation of the whole system, a pipelined and optimized RNS-based NTT unit with 4 RNS butterflies is implemented on a Xilinx Artix-7 FPGA (xc7a200tlffg1156-2L) for size and delay estimates. The hardware implementation achieves an operating frequency of 47.043 MHz and utilizes 13239 LUTs, 4010 FFs, and 330 DSP blocks, allowing for multiple simultaneously operating NTT units depending on FPGA size constraints.
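The negative wrapped convolution at the heart of such a multiplier can be sketched as follows (a software illustration with O(n^2) transforms for clarity; the actual design uses O(n log n) NTT butterflies, RNS channel decomposition, and Montgomery reduction, which are omitted here):

```python
def ntt_negacyclic_mul(a, b, q, psi):
    """Multiply two degree-(n-1) polynomials in Z_q[x]/(x^n + 1) using a
    psi-weighted NTT (negative wrapped convolution). psi must be a primitive
    2n-th root of unity mod q, i.e. psi**n = -1 (mod q). The transforms are
    written as O(n^2) sums for clarity, not as butterfly networks."""
    n = len(a)
    omega = psi * psi % q  # primitive n-th root of unity

    def fwd(p):
        # Pre-weight by psi^j, then take a plain length-n NTT.
        return [sum(p[j] * pow(psi, j, q) * pow(omega, i * j, q)
                    for j in range(n)) % q for i in range(n)]

    # Pointwise product in the transform domain.
    c_hat = [x * y % q for x, y in zip(fwd(a), fwd(b))]

    # Inverse NTT, then undo the psi^j weighting (Python 3.8+ pow handles
    # negative exponents as modular inverses).
    n_inv = pow(n, -1, q)
    return [sum(c_hat[i] * pow(omega, -i * j, q) for i in range(n))
            * n_inv % q * pow(psi, -j, q) % q for j in range(n)]

# Toy parameters: n = 4, q = 17, psi = 9 (9**4 = -1 mod 17).
prod = ntt_negacyclic_mul([1, 2, 3, 4], [5, 6, 7, 8], q=17, psi=9)  # [12, 15, 2, 9]
```

The result equals the schoolbook product of the two polynomials reduced modulo x^4 + 1; the thesis design applies the same identity per RNS channel with hardware-friendly moduli.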
Contributors: Brist, Logan Alan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The inverse problem in electroencephalography (EEG) is the determination of the form and location of neural activity associated with EEG recordings. This determination is of interest in evoked potential experiments, where the activity is elicited by an external stimulus. This work investigates three aspects of this problem: the use of forward methods in its solution, the elimination of artifacts that complicate the accurate determination of sources, and the construction of physical models that capture the electrical properties of the human head.

Results from this work aim to increase the accuracy and performance of the inverse solution process.

The inverse problem can be approached by constructing forward solutions where, for a known source, the scalp potentials are determined. This work demonstrates that the use of two variables, the dissipated power and the accumulated charge at interfaces, leads to a new solution method for the forward problem. The accumulated charge satisfies a boundary integral equation. Consideration of dissipated power determines bounds on the range of eigenvalues of the integral operators that appear in this formulation. The new method uses the eigenvalue structure to regularize singular integral operators, thus allowing unambiguous solutions to the forward problem.

A major problem in the estimation of properties of neural sources is the presence of artifacts that corrupt EEG recordings. A method is proposed for the determination of inverse solutions that integrates sequential Bayesian estimation with probabilistic data association in order to suppress artifacts before estimating neural activity. This method improves the tracking of neural activity in a dynamic setting in the presence of artifacts.

Solution of the inverse problem requires the use of models of the human head. The electrical properties of biological tissues are best described by frequency dependent complex conductivities. Head models in EEG analysis, however, usually consider head regions as having only constant real conductivities. This work presents a model for tissues as composed of confined electrolytes that predicts complex conductivities for macroscopic measurements. These results indicate ways in which EEG models can be improved.
Contributors: Solis, Francisco Jr. (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Berisha, Visar (Committee member) / Bliss, Daniel (Committee member) / Moraffah, Bahman (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector, which performs well in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, like vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of this detector. Cha et al. used a heuristic approach to determine the thresholds for the two-stage detector. In this study, the probability density function for the modified two-stage detector is derived, and using this probability density function, an approach for determining the thresholds for this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it gives only slightly poorer performance than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
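The first-stage sample variance ratio and the classical coherence estimator can be sketched as follows (a single-window illustration on synthetic complex pixels; Berger's alternative estimator and the two-stage threshold selection analyzed here are not shown):

```python
import numpy as np

def variance_ratio(f, g):
    """Non-coherent change statistic: ratio of sample power between passes."""
    return np.mean(np.abs(f) ** 2) / np.mean(np.abs(g) ** 2)

def coherence(f, g):
    """Classical sample coherence over a local window: near 1 where the scene
    is unchanged, dropping toward 0 where subtle changes (e.g. vehicle
    tracks) decorrelate the two complex images."""
    num = np.abs(np.vdot(g, f))
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / den

# Synthetic 64-pixel windows: one pass repeated with light noise (no change)
# versus an independent draw (changed scene).
rng = np.random.default_rng(1)
unchanged = rng.normal(size=64) + 1j * rng.normal(size=64)
noise = 0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64))
changed = rng.normal(size=64) + 1j * rng.normal(size=64)

rho_same = coherence(unchanged + noise, unchanged)  # close to 1
rho_diff = coherence(changed, unchanged)            # much lower
```

In a full detector both statistics are computed on a sliding window over the image pair and compared against the chosen thresholds.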
Contributors: Bondre, Akshay Sunil (Author) / Richmond, Christ D. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The problem of multiple object tracking seeks to jointly estimate the time-varying cardinality and trajectory of each object. Numerous challenges are encountered in tracking multiple objects, including a time-varying number of measurements, varying constraints, and changing environmental conditions. In this thesis, the proposed statistical methods integrate physical-based models with Bayesian nonparametric methods to address the main challenges of the tracking problem. In particular, Bayesian nonparametric methods are exploited to efficiently and robustly infer object identity and learn time-dependent cardinality; together with Bayesian inference methods, they are also used to associate measurements with objects and estimate object trajectories. These methods differ fundamentally from current approaches, which are mainly based on random finite set theory.

The first contribution proposes dependent nonparametric models such as the dependent Dirichlet process and the dependent Pitman-Yor process to capture the inherent time-dependency in the problem at hand. These processes are used as priors for object state distributions to learn dependent information between previous and current time steps. Markov chain Monte Carlo sampling methods exploit the learned information to sample from posterior distributions and update the estimated object parameters.

The second contribution proposes a novel, robust, and fast nonparametric approach based on a diffusion process over infinite random trees to infer information on object cardinality and trajectory. This method follows the hierarchy induced by objects entering and leaving a scene and the time-dependency between unknown object parameters. Markov chain Monte Carlo sampling methods integrate the prior distributions over the infinite random trees with time-dependent diffusion processes to update object states.

The third contribution develops the use of hierarchical models to form a prior for statistically dependent measurements in a single object tracking setup. Dependency among the sensor measurements provides extra information, which is incorporated to achieve optimal tracking performance. The hierarchical Dirichlet process prior provides the required flexibility for inference. A Bayesian tracker is integrated with the hierarchical Dirichlet process prior to accurately estimate the object trajectory.

The fourth contribution proposes an approach to model both the multiple dependent objects and multiple dependent measurements. This approach integrates the dependent Dirichlet process modeling over the dependent object with the hierarchical Dirichlet process modeling of the measurements to fully capture the dependency among both object and measurements. Bayesian nonparametric models can successfully associate each measurement to the corresponding object and exploit dependency among them to more accurately infer the trajectory of objects. Markov chain Monte Carlo methods amalgamate the dependent Dirichlet process with the hierarchical Dirichlet process to infer the object identity and object cardinality.

Simulations are exploited to demonstrate the improvement in multiple object tracking performance when compared to approaches that are developed based on random finite set theory.
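The basic mechanism behind these Bayesian nonparametric priors can be illustrated with the Chinese restaurant process, the sequential view of a plain Dirichlet process (a toy sketch only; the dependent and hierarchical constructions in this thesis extend this mechanism with time dependency):

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Sample a partition of n measurements from the Chinese restaurant
    process: item i joins an existing cluster with probability proportional
    to its size, or opens a new cluster with probability proportional to
    alpha. The number of clusters (here: objects) is not fixed in advance."""
    assignments, counts = [0], [1]
    for _ in range(1, n):
        weights = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(1)   # open a new cluster
        else:
            counts[k] += 1     # join an existing cluster
        assignments.append(k)
    return assignments

rng = np.random.default_rng(0)
assignments = crp_partition(50, 1.0, rng)
```

Larger `alpha` favors more clusters; in a tracking context each cluster plays the role of a candidate object to which measurements are associated.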
Contributors: Moraffah, Bahman (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel W. (Committee member) / Richmond, Christ D. (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This research presents advances in time-synchronized phasor (i.e., synchrophasor) estimation and imaging with very-low-frequency electric fields. Phasor measurement units measure and track dynamic systems, often power systems, using synchrophasor estimation algorithms. Two improvements to subspace-based synchrophasor estimation algorithms are shown. The first improvement is a dynamic thresholding method for accurately determining the signal subspace when using the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. This improvement facilitates accurate ESPRIT-based frequency estimates of both the nominal system frequency and the frequencies of interfering signals such as harmonics or out-of-band interference signals. Proper frequency estimation of all signals present in measurement data allows for accurate least squares estimates of synchrophasors for the nominal system frequency. By including the effects of clutter signals in the synchrophasor estimate, interference from clutter signals can be excluded. The result is near-flat estimation error during nominal system frequency changes, the presence of harmonic distortion, and out-of-band interference. The second improvement reduces the computational burden of the ESPRIT frequency estimation step by showing that an optimized eigenvalue decomposition of the measurement data can be used instead of a singular value decomposition. This research also explores a deep-learning-based inversion method for imaging objects with a uniform electric field and a 2D planar D-dot array. Using electric fields as an illumination source has seen multiple applications ranging from medical imaging to mineral deposit detection. It is shown that a planar D-dot array and deep neural network can reconstruct the electrical properties of randomized objects.
A 16000-sample dataset of objects, each comprising a three-by-three grid of randomized dielectric constants, was generated to train a deep neural network to predict these dielectric constants from measured field distortions. Increasingly complex imaging environments are simulated, ranging from objects in free space to objects placed in a physical cage designed to produce uniform electric fields. Finally, this research relaxes the uniform electric field constraint, showing that the volume of an opaque container can be imaged with a copper tube antenna and a 1x4 array of D-dot sensors. Real-world experimental results show that it is possible to image buckets of water (targets) within a plastic shed. These experiments explore the detectability of targets as a function of target placement within the shed.
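The ESPRIT frequency estimation step described above can be sketched as follows (a minimal SVD-based illustration that assumes the number of signals is known; the dynamic subspace thresholding and optimized eigenvalue decomposition proposed in this work are not shown):

```python
import numpy as np

def esprit_freqs(x, n_sig, m):
    """Estimate n_sig complex-exponential frequencies (cycles/sample) from a
    1-D record x via ESPRIT: form an m-row Hankel data matrix, take the
    n_sig dominant left singular vectors as the signal subspace, and solve
    the rotational-invariance relation between its shifted sub-blocks."""
    X = np.lib.stride_tricks.sliding_window_view(x, m).T       # m x (N-m+1)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Us = U[:, :n_sig]                                          # signal subspace
    phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)     # invariance eq.
    return np.sort(np.angle(np.linalg.eigvals(phi)) / (2 * np.pi))

# Example: two noiseless complex exponentials at 0.10 and 0.23 cycles/sample.
n = np.arange(256)
x = np.exp(2j * np.pi * 0.10 * n) + 0.5 * np.exp(2j * np.pi * 0.23 * n)
freqs = esprit_freqs(x, n_sig=2, m=40)
```

A synchrophasor application would treat the nominal system frequency and any harmonic or out-of-band interferers as the exponentials to be resolved, then fit phasor amplitudes by least squares at the recovered frequencies.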
Contributors: Drummond, Zachary (Author) / Allee, David R. (Thesis advisor) / Claytor, Kevin E. (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Aberle, James (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
With the emergence of next-generation wireless communication, a growing number of new applications such as the internet of things, autonomous cars, and drones are crowding the unlicensed spectrum. Licensed networks such as LTE are also moving into the unlicensed spectrum to deliver high-capacity content at low cost. However, LTE was not designed to share spectrum with others. A coordination center for these networks would be costly, because the networks have heterogeneous properties and can enter and leave the spectrum without restriction, making such a design challenging. Since it is infeasible to cover a potentially infinite set of scenarios with one unified design, an alternative is to let each network learn its own coexistence policy. Previous solutions only work in fixed scenarios. In this work, a reinforcement learning algorithm is presented to manage the coexistence of Wi-Fi and LTE-LAA agents in the 5 GHz unlicensed spectrum. The coexistence problem is modeled as a Dec-POMDP, and a Bayesian approach with a nonparametric prior is adopted for policy learning to accommodate the uncertainty in policies across agents. A fairness measure is introduced into the reward function to encourage fair sharing between agents. The reinforcement learning problem is turned into an optimization problem by treating the value function as a likelihood and using variational inference for posterior approximation. Simulation results demonstrate that the algorithm reaches high value with compact policy representations and remains computationally efficient when applied to a set of agents.
Contributors: Shih, Po-Kan (Author) / Moraffah, Bahman (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Dasarathy, Gautam (Committee member) / Shih, YiChang (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The work presented in this manuscript has the overarching theme of radiation. The two forms of radiation of interest are neutrons, i.e., nuclear radiation, and electric fields. The ability to detect such forms of radiation has significant security implications that could also be extended to very practical industrial applications. The goal is therefore to detect, and even image, such radiation sources.

The method to do so revolved around the concept of building large-area sensor arrays. By covering a large area, we can increase the probability of detection and gather more data to build a more complete and clearer view of the environment. Large-area circuitry can be achieved cost-effectively by leveraging the thin-film transistor process of the display industry. With display production increasing alongside the explosion of mobile devices and continued growth in sales of flat-panel monitors and televisions, the cost to build a unit continues to decrease.

Using a thin-film process also allows for flexible electronics, which could be taken advantage of in-house at the Flexible Electronics and Display Center. Flexible electronics imply new form factors and applications that would not otherwise be possible with their single-crystal counterparts. To use thin-film technology effectively, novel ways of overcoming the drawbacks of the thin-film process, namely its lower performance, had to be developed.

The two deliverable devices that underwent development are a preamplifier used in an active pixel sensor for neutron detection and a passive electric field imaging array. This thesis will cover the theory and process behind realizing these devices.
Contributors: Chung, Hugh E. (Author) / Allee, David R. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2015