Matching Items (98)

Description
Statistical Shape Modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are two main viewpoints for understanding shapes. On one hand, the outer surface of a shape can be treated as a two-dimensional embedding in space. On the other hand, the outer surface together with its enclosed internal volume can be treated as a three-dimensional embedding of interest. Most studies adopt the surface-based perspective, leveraging intrinsic features on the tangent plane, but a two-dimensional model may fail to fully represent shapes that have both intrinsic and extrinsic properties. In this thesis, several Stochastic Partial Differential Equations (SPDEs) are thoroughly investigated, and several methods derived from them are proposed to address both two-dimensional and three-dimensional shape analysis. The distinct physical meanings of these SPDEs inspired the features, shape descriptors, metrics, and kernels developed in this series of works. First, the generation of high-dimensional shape data, here tetrahedral meshes, is introduced: the cerebral cortex is taken as the study target, and an automatic pipeline for generating gray matter tetrahedral meshes is presented. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the Finite Element Method (FEM), and two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Because high-dimensional shape models usually contain massive redundancy, and many applications demand effective landmarks, Gaussian process landmarking on tetrahedral meshes is further studied, with an SIWKS-based metric space used to define a geometry-aware Gaussian process. The study of the periodic potential diffusion process further inspired a new kernel called the geometry-aware convolutional kernel. A series of Bayesian learning methods are then introduced to tackle shape retrieval and classification, and experiments are presented for each component. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, this work shows that classical SPDEs play an important role in discovering new features, metrics, shape descriptors, and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
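As context for the heat-equation and Schrödinger-equation descriptors mentioned above, such descriptors are typically built from the FEM eigenpairs of the discretized LBO. A hedged illustration follows (the thesis's tetrahedral-domain descriptors may differ in detail); it shows the spectral solution of the heat equation and the classical heat kernel signature, the prototype of this family:

```latex
u(x,t) \;=\; \sum_{i} e^{-\lambda_i t}\,\langle u_0,\phi_i\rangle\,\phi_i(x),
\qquad
\mathrm{HKS}(x,t) \;=\; \sum_{i} e^{-\lambda_i t}\,\phi_i(x)^2
```

Here $(\lambda_i, \phi_i)$ are the eigenpairs of the discretized LBO and $u_0$ is the initial heat distribution; analogous constructions based on Schrödinger's equation give wave-kernel-style descriptors, such as the SIWKS mentioned in the abstract.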
Contributors: Fan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Graph matching is a fundamental but notoriously difficult problem due to its NP-hard nature, and it serves as a cornerstone for a range of applications in machine learning and computer vision, such as image matching, dynamic routing, and drug design, to name a few. Although there has been extensive prior work on high-performance graph matching solvers, it remains challenging to tackle the matching problem in real-world scenarios with severe graph uncertainty (e.g., noise, outliers, and misleading or ambiguous links). The main focus of this dissertation is to investigate the essence of, and propose solutions for, more reliable graph matching under such uncertainty. To this end, the research was conducted from three perspectives related to reliable graph matching: modeling, optimization, and learning. For modeling, graph matching is extended from the typical quadratic assignment problem to a more generic mathematical model by introducing a specific family of separable functions, achieving higher capacity and reliability. For optimization, a novel, highly gradient-efficient determinant-based regularization technique is proposed, showing high robustness against outliers. The learning paradigm for graph matching under intrinsic combinatorial characteristics is then explored: first, a study is conducted on how to fill the gap between the discrete problem and its continuous approximation in a deep learning framework; the dissertation then investigates the need for a more reliable latent graph topology for matching and proposes an effective and flexible framework to obtain it. The findings of this dissertation include theoretical studies and several novel algorithms, with rich experiments demonstrating their effectiveness.
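For reference, the "typical quadratic assignment problem" that this dissertation generalizes is commonly written in Lawler's form; the sketch below uses assumed notation rather than the dissertation's own:

```latex
\max_{X}\; \operatorname{vec}(X)^{\top} K \,\operatorname{vec}(X)
\quad \text{s.t.}\quad X \in \{0,1\}^{n_1 \times n_2},\; X\mathbf{1} \le \mathbf{1},\; X^{\top}\mathbf{1} \le \mathbf{1}
```

Here $X$ is a (partial) permutation matrix encoding node correspondences between the two graphs, and $K$ is the affinity matrix collecting node-to-node and edge-to-edge similarities; replacing the quadratic objective with a more general separable function is the kind of modeling extension described above.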
Contributors: Yu, Tianshu (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Yang, Yezhou (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Transorbital surgery has gained recent notoriety due to its incorporation into endoscopic skull base surgery. The published literature on the field is cadaveric and observational, and the pre-clinical studies focus on the use of the endoscope only. Furthermore, the methodology utilised in the published literature is inconsistent and does not embody the optimal principles of scientific experimentation. This body of work evaluates a novel minimally invasive surgical corridor - the transorbital approach - and its validity in neurosurgical practice, while both qualitatively and quantitatively assessing available technological advances in a robust experimental fashion. While the endoscope is an established means of visualisation used in clinical transorbital surgery, the microscope has never been assessed with respect to the transorbital approach. That question is investigated here, and the anatomical and surgical benefits and limitations of microscopic visualisation are demonstrated. The comparative studies provide increased knowledge of the specifics pertinent to neurosurgeons and other skull base specialists during pre-operative planning, such as pathology location, involved anatomical structures, instrument manoeuvrability, and the advantages and disadvantages of the distinct visualisation technologies. This is all with the intention of selecting the most suitable surgical approach and technology, specific to the patient, pathology and anatomy, so as to perform the best surgical procedure. The research findings illustrated in this body of work are diverse, reproducible and applicable. The transorbital surgical corridor has substantive potential for access to the anterior cranial fossa and specific surgical target structures. The neuroquantitative metrics investigated confirm the utility and benefits specific to the respective visualisation technologies, i.e. the endoscope and the microscope, and the most appropriate settings in which the approach should be used are also discussed. The transorbital corridor has impressive potential, can utilise all available technological advances, promotes multi-disciplinary co-operation and learning amongst clinicians and, ultimately, is a means of improving operative patient care.
Contributors: Houlihan, Lena Mary (Author) / Preul, Mark C. (Thesis advisor) / Vernon, Brent (Thesis advisor) / O' Sullivan, Michael G.J. (Committee member) / Lawton, Michael T. (Committee member) / Santarelli, Griffin (Committee member) / Smith, Brian (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Alzheimer's disease (AD) is a neurodegenerative disease that damages the cognitive abilities of a patient. It is critical to diagnose AD early so that treatment can begin as soon as possible, and this can be done through biomarkers. One such biomarker is the beta-amyloid (Aβ) peptide, which can be quantified using the centiloid (CL) scale. To identify the Aβ biomarker, a deep learning model is proposed that models AD progression by predicting the CL value from brain magnetic resonance images (MRIs). Brain MRI images can be obtained through the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS) datasets; however, a single model cannot perform well on both datasets at once. Thus, a regularization-based continuous learning framework is also proposed to perform domain adaptation on the previous model, capturing the latent information about the relationship between Aβ and AD progression within both datasets.
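The abstract does not spell out the penalty used, but regularization-based continuous (continual) learning frameworks commonly augment the loss on the new dataset with a quadratic term that anchors parameters to those learned on the previous dataset, as in elastic weight consolidation; a hedged sketch:

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\text{new}}(\theta) \;+\; \frac{\lambda}{2}\sum_{i}\Omega_i\,\bigl(\theta_i - \theta_i^{*}\bigr)^2
```

Here $\theta^{*}$ are the parameters trained on the first dataset (e.g., ADNI), $\Omega_i$ weights each parameter's importance (the diagonal of the Fisher information in elastic weight consolidation), and $\lambda$ trades off plasticity on the new dataset (e.g., OASIS) against retention of the old one.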
Contributors: Trinh, Matthew Brian (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Su, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Background: Studies have examined student fruit/vegetable (FV) consumption, selection, and waste in relation to lunch duration and found that longer lunch duration was associated with greater consumption and selection and reduced waste. However, few studies have investigated the relationship between time to eat and FVs. The aim of this research is to analyze the relationship between the objective time students took to eat ("time to eat") and their fruit and vegetable consumption, selection, and plate waste in elementary, middle, and high schools. Methods: A secondary analysis of a cross-sectional study of 37 Arizona schools was conducted to examine differences in the selection, consumption, and waste of FVs among students (Full N = 2226, Elementary N = 630, Middle School N = 699, High School N = 897) using objective time-to-eat measures. Zero-inflated negative binomial regressions examined differences in FV grams selected, consumed, and wasted, adjusted for sociodemographics (race, ethnicity, eligibility for free or reduced-price lunch, academic year, and sex) and accounting for the clustering of students within schools. Results are presented by school level (elementary, middle, and high school). Results: The average time taken to eat ranged from 10 to 12 minutes across school levels. Time to eat and lunch duration were not closely related (r = 0.03, p = 0.172). In the count model, each additional minute spent eating was associated with a 0.5% greater likelihood of selecting FVs among elementary students who took any FVs. In the zero-inflated model, there was a statistically significant relationship between time spent eating and the selection of fruits and vegetables: for the total sample and for high schoolers, one more minute of eating time was associated with 4.3% and 8.8% greater odds of selecting FVs, respectively, meaning that longer eating time was associated with a greater likelihood of choosing fruits and vegetables. The results also indicated that the longer students took to eat, the more FVs they were likely to consume: each additional 10 minutes spent eating (i.e., time to eat) was associated with a 5% increase in grams of FVs selected relative to the mean (among those who chose FVs), which over one week equates to a 32 g increase in FVs selected. For middle schoolers, however, time to eat was not significantly related to the grams of fruits and vegetables consumed. Some sociodemographic factors were also significant, such as gender (all school levels) and other (middle school). There was a relationship between time taken to eat and waste as a proportion of fruits and vegetables selected. For example, among students who wasted something (as a proportion of selection), each additional 10 minutes of eating time was associated with a 0.6% decrease in waste relative to the mean (among those who chose fruits and vegetables), which over a week corresponds to a 16.5% decrease in waste. Among high schoolers, males had slightly higher odds of wasting a proportion of fruits and vegetables. Conclusions: This study examined the association between the time students take to eat during lunch and their fruit and vegetable (FV) consumption, selection, and plate waste. The findings revealed that time to eat was related to FV consumption, depending on the school level, but was not significantly associated with FV selection or waste. The study emphasizes the need for further research on time to eat as distinct from the duration of lunch.
Longer lunch periods and adequate time to eat could support better food choices, increased FV consumption, and reduced waste. The study highlights the importance of interventions and school policies that promote healthier food choices and provide sufficient time for students to eat. Future research should validate these findings and explore the impact of socialization opportunities on promoting healthier eating habits. Understanding the relationship between lunch duration, time to eat, and students' dietary behaviors can contribute to improved health outcomes and inform effective strategies in school settings.
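For readers unfamiliar with the model, a zero-inflated negative binomial regression mixes a point mass at zero with a negative binomial count process; a generic formulation is shown below (the covariate vectors are illustrative, not the exact study specification):

```latex
P(Y_{ij}=0) \;=\; \pi_{ij} + (1-\pi_{ij})\,f_{\mathrm{NB}}(0;\mu_{ij},\alpha),
\qquad
P(Y_{ij}=y) \;=\; (1-\pi_{ij})\,f_{\mathrm{NB}}(y;\mu_{ij},\alpha),\;\; y \ge 1
```

with $\operatorname{logit}(\pi_{ij}) = \mathbf{z}_{ij}^{\top}\boldsymbol{\gamma}$ for the zero-inflation component (e.g., not selecting any FVs) and $\log(\mu_{ij}) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}$ for the count component (grams), which is why time to eat can have separate estimates in the "zero-inflated" and "count" parts of the results reported above.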
Contributors: Dandridge, Christina Marie (Author) / Adams, Marc (Thesis advisor) / Whisner, Corrie (Committee member) / Bruening, Meg (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The purpose of the overall project is to create a simulated environment similar to Google Maps with live traffic, but simplified for educational purposes. Students can choose different traffic patterns and program a car to navigate through the traffic dynamically as it changes. The environment used in the project is ASU VIPLE (Visual IoT/Robotics Programming Language Environment), a visual programming environment for Computer Science education. VIPLE supports a number of devices and platforms, including a traffic simulator developed using the Unity game engine. This thesis focuses on creating realistic traffic data for the traffic simulator and implementing a dynamic routing algorithm in VIPLE. The traffic data is generated from recorded real traffic data published on the Maricopa County, Arizona website. Based on the generated traffic data, VIPLE programs are developed to implement the traffic simulation driven by dynamically changing traffic data.
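VIPLE programs are built from visual blocks, so the Python sketch below is only an illustration of the dynamic-routing idea described above; the graph representation, function names, and traffic-multiplier scheme are assumptions rather than the VIPLE or simulator implementation. The idea is simply to recompute a shortest path by travel time whenever the traffic data updates the edge weights.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path by travel time on a directed graph.

    graph maps node -> list of (neighbor, travel_time) pairs, where
    travel_time already reflects the current traffic on that road segment.
    Returns (total_time, path), or (inf, []) if target is unreachable.
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if target not in dist:
        return float("inf"), []
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return dist[target], path[::-1]

def route_with_live_traffic(static_edges, traffic_factor, source, target):
    """Rebuild edge weights from the latest traffic data, then re-route.

    static_edges: list of (u, v, free_flow_time) tuples.
    traffic_factor: dict mapping (u, v) -> congestion multiplier (>= 1).
    """
    graph = {}
    for u, v, base_time in static_edges:
        graph.setdefault(u, []).append((v, base_time * traffic_factor.get((u, v), 1.0)))
    return dijkstra(graph, source, target)

# Example: congestion on (A, B) makes the route through C faster.
edges = [("A", "B", 4), ("B", "D", 4), ("A", "C", 5), ("C", "D", 5)]
print(route_with_live_traffic(edges, {("A", "B"): 3.0}, "A", "D"))  # (10.0, ['A', 'C', 'D'])
```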
Contributors: Zhang, Zhemin (Author) / Chen, Yinong (Thesis advisor) / Wang, Yalin (Thesis advisor) / De Luca, Gennaro (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Little is known about how cognitive and brain aging patterns differ in older adults with autism spectrum disorder (ASD). However, recent evidence suggests that individuals with ASD may be at greater risk of pathological aging conditions than their neurotypical (NT) counterparts. A growing body of research indicates that older adults with ASD may experience accelerated cognitive decline and neurodegeneration as they age, although studies are limited by their cross-sectional designs in a population with strong age-cohort effects. Studying aging in ASD and identifying biomarkers that predict atypical aging is important because the population of older individuals with ASD is growing, and understanding the unique challenges autistic adults face as they age is necessary to develop treatments that improve quality of life and preserve independence. In this study, a longitudinal design was used to characterize cognitive and brain aging trajectories in ASD as a function of autistic trait severity. Principal components analysis (PCA) was used to derive a cognitive metric that best explains performance variability on tasks measuring memory ability and executive function. The slope of the integrated persistent feature (SIP), a novel, threshold-free graph theory metric that summarizes the speed of information diffusion in the brain, was used to quantify functional connectivity. Longitudinal mixed models were used to predict cognitive and brain aging trajectories (measured via the SIP) as a function of autistic trait severity, sex, and their interaction, and the sensitivity of the SIP was compared with that of traditional graph theory metrics. It was hypothesized that older adults with ASD would experience accelerated cognitive and brain aging and, furthermore, that age-related changes in brain network topology would predict age-related changes in cognitive performance. For both cognitive and brain aging, autistic traits and sex interacted to predict trajectories, such that older men with high autistic traits were most at risk for poorer outcomes. In men with autism, variability in SIP scores across time points trended toward predicting cognitive aging trajectories. Findings also suggested that autistic traits are more sensitive to differences in brain aging than diagnostic group and that the SIP is more sensitive to brain aging trajectories than other graph theory metrics. However, further research is required to determine how physiological biomarkers such as the SIP are associated with cognitive outcomes.
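As a minimal sketch of the PCA step described above (the actual task battery, sample size, and component-retention rule are not stated in the abstract and are assumed here), a composite cognitive metric can be taken as each participant's score on the first principal component of the standardized task scores:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows = participants, columns = memory / executive-function task scores.
rng = np.random.default_rng(0)
task_scores = rng.normal(size=(120, 6))

z = StandardScaler().fit_transform(task_scores)  # put tasks on a common scale
pca = PCA(n_components=1).fit(z)
composite = pca.transform(z).ravel()             # one composite cognitive score per participant
print(pca.explained_variance_ratio_[0])          # share of task-score variance the component explains
```

The composite scores would then serve as the cognitive outcome in the longitudinal mixed models.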
Contributors: Sullivan, Georgia (Author) / Braden, Blair (Thesis advisor) / Kodibagkar, Vikram (Thesis advisor) / Schaefer, Sydney (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Tools designed to help match people with behaviors they identify as likely to lead to a successful behavioral outcome remain under-researched. This study assessed the effect of a participant-driven behavior-matching intervention on 1) the adoption of a new behavior related to fruit and vegetable (F&V) consumption, 2) study attrition, and 3) changes in F&V consumption. In this two-arm randomized controlled trial, 64 adults who did not meet standard F&V recommendations were allocated to an intervention (n=33) or control group (n=31). Participants in the intervention group ranked 20 F&V-related behaviors according to their perceived likelihood of engaging in the behavior and their perception of the behavior's efficacy in increasing F&V consumption. Participants in the intervention group were subsequently shown the list of 20 behaviors in order of their provided rankings, with the highest-ranked behaviors at the top, and were asked to choose a behavior they would like to perform daily for 4 weeks. The control group chose from a random-order list of the same 20 behaviors to adopt daily for 4 weeks. During the study period, text messages were sent to all participants 90 minutes before their reported bedtime to collect Yes/No data reflecting successful behavior engagement each day. The binary repeated-measures data collected from the text messages were analyzed using mixed-effects logistic regression, differences in attrition were assessed using log-rank analysis, and change scores in F&V consumption were compared between the two groups using the Mann-Whitney U test. P<0.05 indicated significance. The rate of successful behavior adoption did not differ significantly between the two groups (b=0.09, 95% CI = -0.81, 0.98, p=0.85). The log-rank test indicated no significant difference in attrition between the two groups (χ2=2.68, df=1, p=0.10). F&V consumption increased significantly over the 4 weeks in the total sample (Z=-5.86, p<0.001), but no differences in F&V change scores were identified between the control and intervention groups (Z=-0.21, p=0.84). The behavior-matching tool assessed in this study did not significantly improve behavior adoption, study attrition, or F&V intake over 4 weeks.
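For concreteness, the mixed-effects logistic regression on the daily Yes/No adherence data can be written with a random intercept per participant; the form below is generic (the exact covariates used in the study are not listed in the abstract):

```latex
\operatorname{logit}\!\bigl(P(Y_{ij}=1)\bigr) \;=\; \beta_0 + \beta_1\,\mathrm{Group}_i + \beta_2\,t_{ij} + u_i,
\qquad u_i \sim \mathcal{N}(0,\sigma_u^{2})
```

Here $Y_{ij}$ indicates whether participant $i$ reported performing the chosen behavior on day $j$, $\mathrm{Group}_i$ is the intervention indicator whose coefficient corresponds to the kind of group effect reported above (b = 0.09), and the random intercept $u_i$ absorbs between-participant differences in baseline adherence.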
Contributors: Cosgrove, Kelly Sarah (Author) / Wharton, Christopher (Thesis advisor) / Adams, Marc (Committee member) / DesRoches, Tyler (Committee member) / Grebitus, Carola (Committee member) / Johnston, Carol (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation describes numerical and analytical work pertaining to models of the growth and progression of glioblastoma multiforme (GBM), an aggressive form of primary brain cancer. Two reaction-diffusion models are used: the Fisher-Kolmogorov-Petrovsky-Piskunov equation and a 2-population model that divides the tumor into actively proliferating and quiescent (or necrotic) cells. The numerical portion of this work (chapter 2) focuses on simulating GBM expansion in patients undergoing treatment for recurrence of tumor following initial surgery. The models are simulated on 3-dimensional brain geometries derived from magnetic resonance imaging (MRI) scans provided by the Barrow Neurological Institute. The study consists of 17 clinical time intervals across 10 patients who have been followed in detail, each of whom shows significant progression of tumor over a period of 1 to 3 months on sequential follow-up scans. A Taguchi sampling design is implemented to estimate the variability of the predicted tumors across 144 different choices of model parameters. In 9 cases, model parameters can be identified such that the simulated tumor contains at least 40 percent of the volume of the observed tumor. In the analytical portion of the work (chapters 3 and 4), a positively invariant region for the 2-population model is identified, followed by a rigorous derivation of the critical patch size associated with the model. The critical patch (KISS) size is the minimum habitat size needed for a population to survive in a region: habitats larger than the critical patch size allow a population to persist, while smaller habitats lead to extinction. The critical patch size of the 2-population model is consistent with that of the Fisher-Kolmogorov-Petrovsky-Piskunov equation, one of the first reaction-diffusion models proposed for GBM. The critical patch size may indicate that GBM tumors have a minimum size depending on their location in the brain. A theoretical relationship between the size of a GBM tumor at steady state and its maximum cell density is also derived, which has potential applications for patient-specific parameter estimation based on magnetic resonance imaging data.
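For reference, the Fisher-Kolmogorov-Petrovsky-Piskunov (Fisher-KPP) equation and its classical critical patch (KISS) size on a one-dimensional domain with hostile (zero-density) boundaries are

```latex
\frac{\partial u}{\partial t} \;=\; D\,\nabla^{2}u \;+\; \rho\,u\!\left(1-\frac{u}{K}\right),
\qquad
L_{c} \;=\; \pi\sqrt{\frac{D}{\rho}}
```

where $u$ is tumor cell density, $D$ the diffusion (migration) coefficient, $\rho$ the proliferation rate, and $K$ the carrying capacity. Regions shorter than $L_c$ cannot sustain growth, which is the sense in which a critical patch size suggests a minimum tumor size that depends on location; the 2-population model's analogue and the exact boundary conditions used are given in the dissertation itself rather than reproduced here.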
Contributors: Harris, Duane C. (Author) / Kuang, Yang (Thesis advisor) / Kostelich, Eric J. (Thesis advisor) / Preul, Mark C. (Committee member) / Crook, Sharon (Committee member) / Gardner, Carl (Committee member) / Arizona State University (Publisher)
Created: 2023