Matching Items (185)

Description

The inverse problem in electroencephalography (EEG) is the determination of the form and location of the neural activity associated with EEG recordings. This determination is of interest in evoked potential experiments, where the activity is elicited by an external stimulus. This work investigates three aspects of this problem: the use of forward methods in its solution, the elimination of artifacts that complicate the accurate determination of sources, and the construction of physical models that capture the electrical properties of the human head.

Results from this work aim to increase the accuracy and performance of the inverse solution process.

The inverse problem can be approached by constructing forward solutions in which, for a known source, the scalp potentials are determined. This work demonstrates that the use of two variables, the dissipated power and the accumulated charge at interfaces, leads to a new solution method for the forward problem. The accumulated charge satisfies a boundary integral equation. Consideration of dissipated power determines bounds on the range of eigenvalues of the integral operators that appear in this formulation. The new method uses this eigenvalue structure to regularize the singular integral operators, thus allowing unambiguous solutions to the forward problem.
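
For intuition only, the sketch below shows the classical deflation trick for a discretized EEG-type boundary integral system, where a known eigenvector (assumed here to be the constant mode) makes the operator singular and its eigenvalue is shifted away before solving. It is a generic illustration of eigenvalue-based regularization, not the charge-and-power formulation developed in this dissertation:

```python
import numpy as np

def deflate_and_solve(B, g):
    """Solve (I - B) v = g where B is a discretized boundary integral operator
    whose constant eigenvector (an assumption typical of EEG boundary-element
    systems) makes I - B singular. Deflation shifts that eigenvalue to zero,
    giving a unique solution up to the usual reference-potential offset."""
    n = B.shape[0]
    B_defl = B - np.ones((n, n)) / n     # move the constant-mode eigenvalue to 0
    v = np.linalg.solve(np.eye(n) - B_defl, g)
    return v - v.mean()                  # zero-mean reference for the potential
```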

A major problem in the estimation of properties of neural sources is the presence of artifacts that corrupt EEG recordings. A method is proposed for the determination of inverse solutions that integrates sequential Bayesian estimation with probabilistic data association in order to suppress artifacts before estimating neural activity. This method improves the tracking of neural activity in a dynamic setting in the presence of artifacts.
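
As a rough sketch of how probabilistic data association can soft-gate artifacts inside a sequential Bayesian (here, Kalman-style) update, consider the generic PDA step below; the linear-Gaussian model, clutter density, and detection probability are illustrative assumptions, not the dissertation's EEG-specific formulation:

```python
import numpy as np

def pda_update(x, P, zs, H, R, p_detect=0.9, clutter_density=1e-3):
    """One probabilistic-data-association (PDA) Kalman update: each candidate
    measurement is weighted by how well it matches the predicted track, and an
    extra hypothesis covers 'everything is clutter/artifact', so artifacts are
    soft-gated out of the state update. Assumes a linear-Gaussian model and at
    least one candidate measurement in zs."""
    S = H @ P @ H.T + R                        # innovation covariance
    Sinv = np.linalg.inv(S)
    K = P @ H.T @ Sinv                         # Kalman gain
    nus = [z - H @ x for z in zs]              # innovations
    d = len(H @ x)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
    lik = np.array([np.exp(-0.5 * nu @ Sinv @ nu) / norm for nu in nus])
    w0 = (1.0 - p_detect) * clutter_density    # all-clutter hypothesis
    beta = np.append(p_detect * lik, w0)
    beta /= beta.sum()                         # association probabilities
    nu_c = sum(b * nu for b, nu in zip(beta, nus))   # combined innovation
    x_new = x + K @ nu_c
    Pc = P - K @ S @ K.T                       # covariance if association were certain
    spread = sum(b * np.outer(nu, nu) for b, nu in zip(beta, nus)) - np.outer(nu_c, nu_c)
    P_new = beta[-1] * P + (1.0 - beta[-1]) * Pc + K @ spread @ K.T
    return x_new, P_new
```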

Solution of the inverse problem requires the use of models of the human head. The electrical properties of biological tissues are best described by frequency dependent complex conductivities. Head models in EEG analysis, however, usually consider head regions as having only constant real conductivities. This work presents a model for tissues as composed of confined electrolytes that predicts complex conductivities for macroscopic measurements. These results indicate ways in which EEG models can be improved.
Contributors: Solis, Francisco Jr. (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Berisha, Visar (Committee member) / Bliss, Daniel (Committee member) / Moraffah, Bahman (Committee member) / Arizona State University (Publisher)
Created: 2020

Description

Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector, which displays good performance in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, such as vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of this detector. Cha et al. used a heuristic approach to determine the thresholds for this two-stage detector. In this study, the probability density function for the modified two-stage detector is derived, and using this probability density function, an approach for determining the thresholds for this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior shown by the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it gives only slightly poorer performance than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
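
For reference, the two classical per-pixel statistics named above can be sketched as follows over co-registered complex image patches f and g; window selection, Berger's alternative estimator, and the two-stage thresholding are omitted, and the function names are illustrative:

```python
import numpy as np

def variance_ratio(f, g):
    """Non-coherent statistic: ratio of sample variances (mean power) of two
    co-registered complex SAR patches; large deviations from 1 flag
    significant change."""
    return np.mean(np.abs(g) ** 2) / np.mean(np.abs(f) ** 2)

def sample_coherence(f, g):
    """Classical coherence estimator over a local window; values near 1
    indicate no change, low values flag subtle changes such as vehicle
    tracks."""
    num = np.abs(np.vdot(f, g))   # vdot conjugates its first argument
    den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2))
    return num / den
```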
Contributors: Bondre, Akshay Sunil (Author) / Richmond, Christ D. (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W. (Committee member) / Arizona State University (Publisher)
Created: 2020

Description

The focus of this dissertation is first on understanding the difficulties involved in constructing reduced order models of structures that exhibit strong nonlinearity or strongly nonlinear events such as snap-through, buckling (local or global), mode switching, and symmetry breaking. Next, based on this understanding, it is desired to modify/extend the current Nonlinear Reduced Order Modeling (NLROM) methodology, basis selection, and/or identification methodology to obtain reliable reduced order models of these structures. Focusing on these goals, the work carried out addressed more specifically the following issues:

i) optimization of the basis to capture at best the response in the smallest number of modes,

ii) improved identification of the reduced order model stiffness coefficients,

iii) detection of strongly nonlinear events using NLROM.

For the first issue, an approach was proposed to rotate a limited number of linear modes to become more dominant in the response of the structure. This step was achieved through a proper orthogonal decomposition of the projection on these linear modes of a series of representative nonlinear displacements. This rotation does not expand the modal space but renders that part of the basis more efficient, the identification of stiffness coefficients more reliable, and the selection of dual modes more compact. In fact, a separate approach was also proposed for an independent optimization of the duals. Regarding the second issue, two tuning approaches for the stiffness coefficients were proposed to improve the identification of a limited set of critical coefficients based on independent response data of the structure. Both approaches led to a significant improvement of the static prediction for the clamped-clamped curved beam model. Extensive validations of the NLROMs based on the above novel approaches were carried out by comparison with full finite element response data. The third issue, the detection of nonlinear events, was finally addressed by building connections between the eigenvalues of the finite element (Nastran here) and NLROM tangent stiffness matrices and the occurrence of the 'events'; this connection was further extended to assessing the accuracy with which the NLROM captures the full finite element behavior after the event has occurred.
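
A minimal sketch of the POD-of-projections rotation described for the first issue is given below, assuming the snapshots and modes are available as plain arrays; the variable names and the least-squares projection are illustrative choices, not the dissertation's implementation:

```python
import numpy as np

def rotate_basis(Phi, U):
    """Rotate linear modes so the leading rotated modes dominate the projected
    nonlinear response. Phi: (ndof, m) linear modes; U: (ndof, ns) nonlinear
    displacement snapshots. POD of the projection coefficients gives the
    rotation; the spanned modal subspace is unchanged."""
    q = np.linalg.lstsq(Phi, U, rcond=None)[0]       # modal coordinates of snapshots
    W, s, _ = np.linalg.svd(q, full_matrices=False)  # POD of projected coordinates
    return Phi @ W, s**2 / np.sum(s**2)              # rotated basis, energy fractions
```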
Contributors: Lin, Jinshan (Author) / Mignolet, Marc (Thesis advisor) / Jiang, Hanqing (Committee member) / Oswald, Jay (Committee member) / Spottswood, Stephen (Committee member) / Rajan, Subramaniam D. (Committee member) / Arizona State University (Publisher)
Created: 2020

Description

Complex dynamical systems are systems with many interacting components that usually exhibit nonlinear dynamics. Such systems exist in a wide range of disciplines, including the physical, biological, and social sciences. Due to their large number of interacting components, they tend to possess very high dimensionality, and due to their intrinsic nonlinear dynamics they display tremendously rich behavior, such as bifurcations, synchronization, chaos, and solitons. Developing methods to predict and control these systems has always been a challenge and an active research area.

My research mainly concentrates on predicting and controlling tipping points (saddle-node bifurcations) in complex ecological systems and on comparing linear and nonlinear control methods in complex dynamical systems. Moreover, I use advanced artificial neural networks to predict chaotic spatiotemporal dynamical systems. Complex networked systems can exhibit a tipping point (a "point of no return") at which a total collapse occurs. Using complex mutualistic networks in ecology as a prototype class of systems, I carry out a dimension reduction process to arrive at an effective two-dimensional (2D) system with the two dynamical variables corresponding to the average pollinator and plant abundances, respectively. I demonstrate that, using 59 empirical mutualistic networks extracted from real data, the 2D model can accurately predict the occurrence of a tipping point even in the presence of stochastic disturbances. I also develop an ecologically feasible strategy to manage/control the tipping point by maintaining the abundance of a particular pollinator species at a constant level, which essentially removes the hysteresis associated with tipping points.
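
To make the dimension-reduction idea concrete, the sketch below integrates a generic effective 2D mutualistic model and sweeps a stress parameter until the pollinator abundance collapses; the functional form and every parameter value are hypothetical stand-ins, not the reduced model derived in this work:

```python
import numpy as np
from scipy.integrate import solve_ivp

def reduced_2d(t, y, alpha, beta, h, kappa):
    """Generic effective 2D mutualistic model: P = average plant abundance,
    A = average pollinator abundance, kappa = pollinator mortality/stress."""
    P, A = y
    dP = P * (alpha - P) + beta * P * A / (1 + h * A)
    dA = A * (alpha - A) - kappa * A + beta * A * P / (1 + h * P)
    return [dP, dA]

# sweep the stress parameter to locate the collapse (saddle-node) point:
for kappa in np.linspace(0.0, 3.0, 31):
    sol = solve_ivp(reduced_2d, (0, 500), [2.0, 2.0],
                    args=(0.3, 1.0, 0.5, kappa), rtol=1e-8)
    if sol.y[1, -1] < 1e-3:            # pollinator abundance has collapsed
        print(f"tipping point near kappa = {kappa:.2f}")
        break
```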

In addition, I find that the nodal importance ranking for nonlinear and linear control exhibits opposite trends: for the former, large-degree nodes are more important, but for the latter the importance scale is tilted toward the small-degree nodes, strongly suggesting the irrelevance of linear controllability to these systems. Focusing on a class of recurrent neural networks, reservoir computing systems, which have recently been exploited for model-free prediction of nonlinear dynamical systems, I uncover a surprising phenomenon: the emergence of an interval in the spectral radius of the neural network within which the prediction error is minimized.
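
The spectral-radius observation can be reproduced in spirit with a bare-bones echo state network like the one sketched below, scanning rho over a grid and comparing prediction errors; the reservoir size, input scaling, and ridge readout are generic assumptions rather than the dissertation's setup:

```python
import numpy as np

def train_esn(u, y, n_res=300, rho=0.9, ridge=1e-6, seed=0):
    """Bare-bones echo state network: a random recurrent reservoir rescaled to
    spectral radius rho, with a ridge-regression linear readout. u: (T, d_in)
    drive signal, y: (T, d_out) targets. Scanning rho and comparing errors
    exposes an optimal spectral-radius interval of the kind described above."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))    # enforce spectral radius
    W_in = rng.uniform(-1, 1, (n_res, u.shape[1]))
    r, states = np.zeros(n_res), []
    for u_t in u:                                      # drive the reservoir
        r = np.tanh(W @ r + W_in @ u_t)
        states.append(r.copy())
    R = np.asarray(states)
    W_out = y.T @ R @ np.linalg.inv(R.T @ R + ridge * np.eye(n_res))
    return W_out, np.mean((R @ W_out.T - y) ** 2)      # readout, training MSE
```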
Contributors: Jiang, Junjie (Author) / Lai, Ying-Cheng (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Wang, Xiao (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2020

Description

The problem of multiple object tracking seeks to jointly estimate the time-varying cardinality and trajectory of each object. Numerous challenges are encountered in tracking multiple objects, including a time-varying number of measurements under varying constraints and environmental conditions. In this thesis, the proposed statistical methods integrate the use of physics-based models with Bayesian nonparametric methods to address the main challenges in a tracking problem. In particular, Bayesian nonparametric methods are exploited to efficiently and robustly infer object identity and learn time-dependent cardinality; together with Bayesian inference methods, they are also used to associate measurements to objects and estimate the trajectory of objects. These methods differ fundamentally from current methods, which are mainly based on random finite set theory.

The first contribution proposes dependent nonparametric models such as the dependent Dirichlet process and the dependent Pitman-Yor process to capture the inherent time-dependency in the problem at hand. These processes are used as priors for object state distributions to learn dependent information between previous and current time steps. Markov chain Monte Carlo sampling methods exploit the learned information to sample from posterior distributions and update the estimated object parameters.
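
For a feel of how Dirichlet-process priors let the number of objects grow with the data, the sketch below shows a single vanilla Chinese-restaurant-process assignment step; the predictive likelihood `lik` is a user-supplied placeholder, and the dependent Dirichlet/Pitman-Yor constructions described above add time dependence that this sketch omits:

```python
import numpy as np

def crp_assign(z_new, clusters, alpha, lik, rng=None):
    """One Chinese-restaurant-process step: assign measurement z_new to an
    existing object cluster with probability proportional to (cluster size) x
    (predictive likelihood lik), or open a new cluster with probability
    proportional to the concentration alpha. A return value equal to
    len(clusters) means 'new object'."""
    rng = rng or np.random.default_rng()
    weights = [len(c) * lik(z_new, c) for c in clusters]
    weights.append(alpha * lik(z_new, None))   # prior-predictive (new object) term
    p = np.asarray(weights, dtype=float)
    return int(rng.choice(len(p), p=p / p.sum()))
```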

The second contribution proposes a novel, robust, and fast nonparametric approach based on a diffusion process over infinite random trees to infer information on object cardinality and trajectory. This method follows the hierarchy induced by objects entering and leaving a scene and the time-dependency between unknown object parameters. Markov chain Monte Carlo sampling methods integrate the prior distributions over the infinite random trees with time-dependent diffusion processes to update object states.

The third contribution develops the use of hierarchical models to form a prior for statistically dependent measurements in a single object tracking setup. Dependency among the sensor measurements provides extra information, which is incorporated to achieve optimal tracking performance. The hierarchical Dirichlet process as a prior provides the required flexibility for inference. A Bayesian tracker is integrated with the hierarchical Dirichlet process prior to accurately estimate the object trajectory.

The fourth contribution proposes an approach to model both multiple dependent objects and multiple dependent measurements. This approach integrates the dependent Dirichlet process modeling of the dependent objects with the hierarchical Dirichlet process modeling of the measurements to fully capture the dependency among both objects and measurements. Bayesian nonparametric models can successfully associate each measurement with the corresponding object and exploit dependency among them to infer the trajectory of objects more accurately. Markov chain Monte Carlo methods amalgamate the dependent Dirichlet process with the hierarchical Dirichlet process to infer the object identity and object cardinality.

Simulations demonstrate the improvement in multiple object tracking performance when compared to approaches developed based on random finite set theory.
Contributors: Moraffah, Bahman (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel W. (Committee member) / Richmond, Christ D. (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2019

Description

This research presents advances in time-synchronized phasor (i.e., synchrophasor) estimation and imaging with very-low-frequency electric fields. Phasor measurement units measure and track dynamic systems, often power systems, using synchrophasor estimation algorithms. Two improvements to subspace-based synchrophasor estimation algorithms are shown. The first improvement is a dynamic thresholding method for accurately determining the signal subspace when using the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. This improvement facilitates accurate ESPRIT-based frequency estimates of both the nominal system frequency and the frequencies of interfering signals such as harmonics or out-of-band interference signals. Proper frequency estimation of all signals present in measurement data allows for accurate least squares estimates of synchrophasors for the nominal system frequency. By including the effects of clutter signals in the synchrophasor estimate, interference from clutter signals can be excluded. The result is near-flat estimation error during nominal system frequency changes, the presence of harmonic distortion, and out-of-band interference. The second improvement reduces the computational burden of the ESPRIT frequency estimation step by showing that an optimized eigenvalue decomposition of the measurement data can be used instead of a singular value decomposition.

This research also explores a deep-learning-based inversion method for imaging objects with a uniform electric field and a 2D planar D-dot array. Using electric fields as an illumination source has seen multiple applications, ranging from medical imaging to mineral deposit detection. It is shown that a planar D-dot array and deep neural network can reconstruct the electrical properties of randomized objects. A 16000-sample dataset of objects comprised of a three-by-three grid of randomized dielectric constants was generated to train a deep neural network for predicting these dielectric constants from measured field distortions. Increasingly complex imaging environments are simulated, ranging from objects in free space to objects placed in a physical cage designed to produce uniform electric fields.

Finally, this research relaxes the uniform electric field constraint, showing that the volume of an opaque container can be imaged with a copper tube antenna and a 1x4 array of D-dot sensors. Real-world experimental results show that it is possible to image buckets of water (targets) within a plastic shed. These experiments explore the detectability of targets as a function of target placement within the shed.
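
A compact least-squares ESPRIT sketch is given below for orientation; it uses an eigendecomposition of the sample covariance (echoing the second improvement above) but fixes the model order p by hand, whereas the dynamic thresholding described above would select the signal subspace adaptively. The window length and example numbers are illustrative:

```python
import numpy as np

def esprit_freqs(x, p, L=None):
    """Least-squares ESPRIT: estimate p complex-exponential frequencies
    (cycles/sample) from the snapshot vector x, using an eigendecomposition of
    the sample covariance rather than an SVD of the data matrix."""
    N = len(x)
    L = L or N // 2                                # window (subarray) length
    X = np.array([x[i:i + L] for i in range(N - L + 1)]).T
    Rxx = X @ X.conj().T / X.shape[1]              # sample covariance
    w, V = np.linalg.eigh(Rxx)                     # eigendecomposition (not SVD)
    Es = V[:, np.argsort(w)[::-1][:p]]             # p dominant eigenvectors: signal subspace
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]   # rotational invariance
    return np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi)

# e.g. a 60.2 Hz tone sampled at 1 kHz (hypothetical numbers):
# t = np.arange(500) / 1000.0
# x = np.exp(2j * np.pi * 60.2 * t)
# print(esprit_freqs(x, p=1) * 1000.0)   # ~[60.2]
```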
Contributors: Drummond, Zachary (Author) / Allee, David R. (Thesis advisor) / Claytor, Kevin E. (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Aberle, James (Committee member) / Arizona State University (Publisher)
Created: 2021

Description

A new class of electronic materials made from food and foodstuff was developed to form a "toolkit" for edible electronics, along with inorganic materials. Electrical components such as resistors, capacitors, and inductors were fabricated with these materials and tested. Applicable devices such as filters, microphones, and pH sensors were built with edible materials. Among the applications, a wireless edible pH sensor was optimized in terms of form factor, fabrication process, and cost. This dissertation spans the materials science of the food industry, the design and fabrication of electronics, and biomedical engineering by demonstrating edible electronic materials, components, and devices such as filters, microphones, and pH sensors. The pH sensors were optimized over two generations of design and fabrication.
Contributors: Yang, Haokai (Author) / Jiang, Hanqing (Thesis advisor) / Yu, Hongbin (Thesis advisor) / Yao, Yu (Committee member) / Nian, Qiong (Committee member) / Zhuang, Houlong (Committee member) / Arizona State University (Publisher)
Created: 2021

Description

With the formation of next-generation wireless communication, a growing number of new applications such as the internet of things, autonomous cars, and drones are crowding the unlicensed spectrum. Licensed networks such as LTE have also come to the unlicensed spectrum to provide high-capacity content at low cost. However, LTE was not designed to share spectrum with others. A coordination center for these networks is costly because the networks possess heterogeneous properties and can enter and leave the spectrum unrestrictedly, so its design is challenging. Since it is infeasible to incorporate potentially infinite scenarios in one unified design, an alternative solution is to let each network learn its own coexistence policy. Previous solutions only work on fixed scenarios. In this work, we present a reinforcement learning algorithm to cope with the coexistence between Wi-Fi and LTE-LAA agents in the 5 GHz unlicensed spectrum. The coexistence problem was modeled as a Dec-POMDP, and a Bayesian approach was adopted for policy learning with a nonparametric prior to accommodate the uncertainty of policies for different agents. A fairness measure was introduced in the reward function to encourage fair sharing between agents. We turned the reinforcement learning problem into an optimization problem by treating the value function as a likelihood and using variational inference for posterior approximation. Simulation results demonstrate that this algorithm can reach high value with compact policy representations and stay computationally efficient when applied to the agent set.
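
One common way to fold a fairness measure into the reward is sketched below with Jain's fairness index, rewarding agents for total throughput and even sharing at once; the index choice and the weighting `lam` are illustrative assumptions, since the abstract does not specify the exact measure used:

```python
import numpy as np

def fair_reward(throughputs, lam=0.5):
    """Reward shaping with Jain's fairness index: combines total throughput
    with a fairness term in (0, 1] that peaks when all agents get equal
    shares. lam trades capacity against fairness."""
    x = np.asarray(throughputs, dtype=float)
    jain = x.sum() ** 2 / (len(x) * np.sum(x ** 2) + 1e-12)
    return (1.0 - lam) * x.sum() + lam * jain
```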
Contributors: Shih, Po-Kan (Author) / Moraffah, Bahman (Thesis advisor) / Papandreou-Suppappola, Antonia (Thesis advisor) / Dasarathy, Gautam (Committee member) / Shih, YiChang (Committee member) / Arizona State University (Publisher)
Created: 2021

Description

Extensive efforts have been devoted to understanding material failure in the last several decades. A suitable numerical method and specific failure criteria are required for failure simulation. The finite element method (FEM) is the most widely used approach for modelling material mechanics. Since FEM is based on partial differential equations, it is difficult for it to handle problems involving spatial discontinuities, such as fracture and material interfaces. Due to the intrinsic characteristics of their integro-differential governing equations, discontinuous approaches, such as the lattice spring method, the discrete element method, and peridynamics, are more suitable for problems involving spatial discontinuities. A recently proposed lattice particle method is shown to have no restriction on Poisson's ratio, a restriction that is very common in discontinuous methods. In this study, the lattice particle method is adopted to study failure problems. In addition to the numerical method, a failure criterion is essential for failure simulations. In this study, multiaxial fatigue failure is investigated and then applied to the adopted method. Another critical issue of failure simulation is that the simulation process is time-consuming. To reduce the computational cost, the lattice particle method can be partly replaced by a neural network model.

First, the development of a nonlocal maximum distortion energy criterion in the framework of a Lattice Particle Model (LPM) is presented for the modeling of elastoplastic materials. The basic idea is to decompose the energy of a discrete material point into dilatational and distortional components, and plastic yielding of bonds associated with this material point is assumed to occur only when the distortional component reaches a critical value. Then, two multiaxial fatigue models are proposed for random loading and biaxial tension-tension loading, respectively. Following this, fatigue cracking in homogeneous and composite materials is studied using the lattice particle method and the proposed multiaxial fatigue model. Bi-phase material fatigue crack simulation is performed. Next, an integration of an efficient deep learning model and the lattice particle method is presented to predict fracture patterns for arbitrary microstructures and loading conditions. With this integration, both computational accuracy and efficiency are considered. Finally, conclusions and a discussion based on this study are presented.
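
As a continuum-level illustration of the energy split behind the criterion above, the sketch below decomposes the strain-energy density at a material point into dilatational and distortional parts and flags yielding past a critical value; it is a von Mises-style stand-in with hypothetical inputs, not the nonlocal bond-based LPM formulation:

```python
import numpy as np

def distortional_yield(stress, K_bulk, G_shear, e_crit):
    """Split the strain-energy density at a material point into dilatational
    and distortional parts and flag yielding when the distortional part
    reaches a critical value e_crit. stress: 3x3 Cauchy stress; K_bulk,
    G_shear: bulk and shear moduli."""
    sigma = np.asarray(stress, dtype=float)
    p = np.trace(sigma) / 3.0                     # hydrostatic part
    s = sigma - p * np.eye(3)                     # deviatoric part
    e_dil = p ** 2 / (2.0 * K_bulk)               # dilatational energy density
    e_dev = np.tensordot(s, s) / (4.0 * G_shear)  # distortional energy density
    return e_dil, e_dev, e_dev >= e_crit
```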
Contributors: Wei, Haoyang (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Jiang, Hanqing (Committee member) / Jiao, Yang (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created: 2021

Description

Special thermal interface materials are required for connecting devices that operate at high temperatures, up to 300°C. Because devices used in power electronics, such as GaN, SiC, and other wide bandgap semiconductors, can reach very high temperatures (beyond 250°C), a high melting point and high thermal and electrical conductivity are required of the thermal interface material. Traditional solder materials for packaging cannot be used for these applications as they do not meet these requirements. Sintered nano-silver is a good candidate on account of its high thermal and electrical conductivity and very high melting point. The high-temperature operating conditions of these devices lead to very high thermomechanical stresses that can adversely affect performance and also lead to failure. A number of these devices are mission critical, and therefore there is a need for very high reliability. Thus, computational and nondestructive techniques and a design methodology are needed to determine, characterize, and design the packages. Actual thermal cycling tests can be very expensive and time-consuming. It is difficult to build test vehicles in the lab that are very close to production-level quality, and therefore making comparisons or predictions becomes a very difficult exercise. Virtual testing using a finite element analysis (FEA) technique can serve as a good alternative. In this project, finite element analysis is carried out to help achieve this objective. A baseline linear FEA is performed to determine the nature and magnitude of stresses and strains that occur during the sintering step. A nonlinear coupled thermal and mechanical analysis is conducted for the sintering step to study the behavior more accurately and in greater detail. Damage and fatigue analyses are carried out for multiple thermal cycling conditions. The results are compared with the actual results from a prior study. A process flow chart outlining the FEA modeling process is developed as a template for future work. A Coffin-Manson type relationship is developed to help determine accelerated aging conditions and predict life for different service conditions.
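
The Coffin-Manson form mentioned above maps the plastic strain range per cycle to cycles-to-failure; a minimal sketch, with placeholder constants that would in practice be fit to the thermal-cycling FEA/test data, is:

```python
def coffin_manson_cycles(delta_eps_p, C=0.3, c=-0.5):
    """Coffin-Manson low-cycle fatigue: delta_eps_p / 2 = C * (2 * N_f) ** c,
    solved for the cycles-to-failure N_f. C and c are placeholder constants
    that would be fit to thermal-cycling test or FEA data."""
    return 0.5 * (delta_eps_p / (2.0 * C)) ** (1.0 / c)

# e.g. a 1% plastic strain range per cycle with the placeholder constants:
# coffin_manson_cycles(0.01)  -> ~1800 cycles to failure
```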
Contributors: Amla, Tarun (Author) / Chawla, Nikhilesh (Thesis advisor) / Jiao, Yang (Committee member) / Liu, Yongming (Committee member) / Zhuang, Houlong (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2020