Matching Items (3)
Description
This dissertation contains two research projects: Multiple Change Point Detection in Linear Models and Statistical Inference for Implicit Network Structures. In the first project, a new method is proposed to detect the number and locations of change points in piecewise linear models under stationary Gaussian noise. The method transforms the problem of detecting change points into the detection of local extrema by kernel smoothing and differentiating the data sequence. The change points are detected by computing p-values for all local extrema using the derived peak height distributions of smooth Gaussian processes, and then applying the Benjamini-Hochberg procedure to identify significant local extrema. Theoretical results show that the method guarantees asymptotic control of the False Discovery Rate (FDR) and power consistency as the length of the sequence and the sizes of the slope changes and jumps grow. In addition, unlike traditional change point detection methods based on recursive segmentation, the proposed method tests each candidate local extremum only once, achieving the smallest computational complexity. Numerical studies show that FDR control and power consistency are maintained in non-asymptotic settings. In the second project, identifiability and estimation consistency of the hub model are proved under mild conditions. The hub model, introduced by Zhao and Weko (2019), is a model-based approach to inferring implicit network structures from grouping behavior. It assumes that each group is brought together by one of its members, called the hub. This dissertation generalizes the hub model by introducing a model component that allows hubless groups, in which individual nodes appear spontaneously, independent of any other individual. The new model bridges the gap between the hub model and the degenerate case of the mixture model, the Bernoulli product.
Furthermore, a penalized likelihood approach is proposed to estimate the set of hubs when it is unknown.
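The smoothing-differentiation-then-BH pipeline of the first project can be sketched as follows. This is a simplified illustration, not the dissertation's implementation: it handles only jumps (via the first derivative), and the p-values use a crude normal approximation to the peak heights rather than the derived distributions of smooth Gaussian processes; all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import norm

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest rank passing its threshold
        reject[order[:k + 1]] = True
    return reject

def detect_jumps(y, sigma=8.0, q=0.05):
    """Smooth and differentiate y, test the local extrema of the derivative, apply BH."""
    dy = gaussian_filter1d(y, sigma=sigma, order=1)      # derivative of smoothed data
    mid, left, right = dy[1:-1], dy[:-2], dy[2:]
    is_ext = ((mid > left) & (mid > right)) | ((mid < left) & (mid < right))
    idx = np.nonzero(is_ext)[0] + 1                      # locations of local extrema
    s = np.median(np.abs(dy - np.median(dy))) / 0.6745   # robust null-noise scale
    pvals = 2 * norm.sf(np.abs(dy[idx]) / s)             # crude normal approximation
    return idx[benjamini_hochberg(pvals, q)]             # significant extrema only

# Demo: a signal with jumps at t = 200 and t = 400 under Gaussian noise.
rng = np.random.default_rng(0)
t = np.arange(600)
signal = 3.0 * (t >= 200) + 3.0 * (t >= 400)
detected = detect_jumps(signal + 0.3 * rng.standard_normal(600))
```

Note that every candidate extremum is tested exactly once; there is no recursive segmentation of the sequence.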
Contributors: He, Zhibing (Author) / Zhao, Yunpeng (Thesis advisor) / Cheng, Dan (Thesis advisor) / Lopes, Hedibert (Committee member) / Fricks, John (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Statistical Shape Modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are two main viewpoints for understanding shapes. On one hand, the outer surface of a shape can be treated as a two-dimensional embedding in space. On the other hand, the outer surface together with its enclosed internal volume can be treated as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging intrinsic features on the tangent plane, but a two-dimensional model may fail to fully represent shapes that have both intrinsic and extrinsic properties. In this thesis, several Stochastic Partial Differential Equations (SPDEs) are thoroughly investigated, and several methods are derived from these SPDEs to address both two-dimensional and three-dimensional shape analysis. The distinct physical meanings of these SPDEs inspired the features, shape descriptors, metrics, and kernels in this series of works. First, the generation of high-dimensional shape data, here tetrahedral meshes, is introduced. The cerebral cortex is taken as the study target, and an automatic pipeline for generating gray matter tetrahedral meshes is presented. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) on the tetrahedral domain are derived with the Finite Element Method (FEM). Two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Because high-dimensional shape models usually contain massive redundancy, and many applications demand effective landmarks, a Gaussian process landmarking method on tetrahedral meshes is further studied, with a SIWKS-based metric space used to define a geometry-aware Gaussian process.
The study of the periodic potential diffusion process further inspired a new kernel, called the geometry-aware convolutional kernel. A series of Bayesian learning methods are then introduced to tackle shape retrieval and classification, and experiments for each method are demonstrated. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, this work shows that classical SPDEs play an important role in discovering new features, metrics, shape descriptors, and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
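As a toy illustration of a heat-equation-based shape descriptor, the sketch below computes a heat kernel signature, HKS(v, t) = Σ_i exp(-λ_i t) φ_i(v)², from the eigendecomposition of a Laplacian. A combinatorial graph Laplacian on a path graph stands in for the FEM-discretized Laplace-Beltrami operator on a tetrahedral mesh described above; the mesh pipeline and the SIWKS construction are beyond a short sketch.

```python
import numpy as np

def heat_kernel_signature(L, times):
    """HKS(v, t) = sum_i exp(-lambda_i * t) * phi_i(v)^2 for a symmetric Laplacian L."""
    lam, phi = np.linalg.eigh(L)                 # eigenpairs of the Laplacian
    return np.stack([(np.exp(-lam * t) * phi**2).sum(axis=1) for t in times], axis=1)

# Toy domain: a path graph on n vertices as a stand-in for a tetrahedral mesh.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                   # combinatorial graph Laplacian

# One descriptor row per vertex, one column per diffusion time.
H = heat_kernel_signature(L, times=[0.1, 1.0, 100.0])
```

Small diffusion times capture local geometry (here, the endpoints of the path differ from interior vertices), while large times recover the global, uniform heat distribution.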
Contributors: Fan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In this work, I present a Bayesian inference computational framework for the analysis of widefield microscopy data that addresses three challenges: (1) counting and localizing stationary fluorescent molecules; (2) inferring a spatially-dependent effective fluorescence profile that describes the spatially-varying rate at which fluorescent molecules emit subsequently-detected photons (due to different illumination intensities or different local environments); and (3) inferring the camera gain. My general theoretical framework utilizes the Bayesian nonparametric Gaussian and beta-Bernoulli processes with a Markov chain Monte Carlo sampling scheme, which I further specify and implement for Total Internal Reflection Fluorescence (TIRF) microscopy data, benchmarking the method on synthetic data. These three frameworks are self-contained and can be used concurrently, so that the fluorescence profile and emitter locations are both considered unknown and, under some conditions, learned simultaneously. The framework I present is flexible and may be adapted to accommodate the inference of other parameters, such as emission photophysical kinetics and the trajectories of moving molecules. My TIRF-specific implementation may find use in the study of structures on cell membranes, or in studying local sample properties that affect fluorescent molecule photon emission rates.
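The beta-Bernoulli component used for molecule counting can be illustrated with its standard finite (weak-limit) truncation, in which each of M candidate emitters carries a Bernoulli "load" indicating whether it is real. This is an illustrative sketch of the prior only, not the dissertation's inference scheme; the truncation level, concentration parameter, and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, gamma = 50, 3.0     # truncation level and expected number of active emitters

def sample_loads(rng, M, gamma):
    """One draw from a finite (weak-limit) approximation of the beta-Bernoulli process."""
    q = rng.beta(gamma / M, 1.0, size=M)   # per-candidate inclusion probabilities
    return rng.binomial(1, q)              # loads: 1 means the candidate emitter is real

# Under this prior the expected number of active loads is
# M * (gamma/M) / (gamma/M + 1), which approaches gamma as M grows.
counts = np.array([sample_loads(rng, M, gamma).sum() for _ in range(4000)])
```

In a full sampler, these loads would be updated jointly with emitter positions, the Gaussian-process fluorescence profile, and the camera gain, so the number of molecules is itself inferred rather than fixed in advance.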
Contributors: Wallgren, Ross (Author) / Presse, Steve (Thesis advisor) / Armbruster, Hans (Thesis advisor) / McCulloch, Robert (Committee member) / Arizona State University (Publisher)
Created: 2019