Matching Items (190)
Description
Video object segmentation (VOS) is an important task in computer vision with many applications, such as video editing, object tracking, and object-based encoding. Unlike image object segmentation, video object segmentation must maintain both spatial and temporal coherence of the object. Despite extensive previous work, the problem remains challenging. Usually, the foreground object in a video draws more human attention, i.e., it is salient. In this thesis we tackle the problem from the perspective of saliency, where saliency denotes the subset of visual information selected by a visual system (human or machine). We present a novel unsupervised method for video object segmentation that considers both low-level vision cues and high-level motion cues. In our model, video object segmentation is formulated as a unified energy minimization problem and solved in polynomial time with the min-cut algorithm. Specifically, our energy function comprises a unary term and a pairwise interaction term: the unary term measures region saliency, while the interaction term smooths the mutual effects between object saliency and motion saliency. Object saliency is computed in the spatial domain from each discrete frame using multi-scale context features, e.g., color histograms, gradients, and graph-based manifold ranking. Meanwhile, motion saliency is calculated in the temporal domain by extracting phase information from the video. In the experimental section of this thesis, the proposed method is evaluated on several benchmark datasets. On the MSRA 1000 dataset, the results demonstrate that our spatial object saliency detection is superior to state-of-the-art methods. Moreover, our temporal motion saliency detector achieves better performance than existing motion detection approaches on the UCF Sports action analysis dataset and the Weizmann dataset.
Finally, we show attractive empirical results and a quantitative evaluation of our approach on two benchmark video object segmentation datasets.
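As a toy illustration of the unary-plus-pairwise energy described in the abstract, the objective can be written down directly; the saliency values, neighborhood structure, and smoothness weight below are invented for illustration, and at realistic scale the same binary energy is minimized in polynomial time via min-cut/max-flow rather than by enumeration:

```python
from itertools import product

# Toy segmentation energy (hypothetical values): the unary cost favors
# labeling high-saliency pixels as foreground (1), and the pairwise cost
# penalizes label disagreements between neighboring pixels.
saliency = [0.9, 0.8, 0.2, 0.1]   # per-pixel saliency in [0, 1]
edges = [(0, 1), (1, 2), (2, 3)]  # 4-pixel chain of neighbors
lam = 0.3                         # smoothness weight

def energy(labels):
    unary = sum(saliency[i] if l == 0 else 1.0 - saliency[i]
                for i, l in enumerate(labels))
    pairwise = sum(lam for i, j in edges if labels[i] != labels[j])
    return unary + pairwise

# Exhaustive minimization is fine at toy size; the thesis solves the same
# kind of binary energy in polynomial time with the min-cut algorithm.
best = min(product([0, 1], repeat=len(saliency)), key=energy)
print(best)  # salient pixels -> foreground: (1, 1, 0, 0)
```

The cheapest labeling places the boundary between the salient and non-salient pixels, which is exactly the smoothing behavior the interaction term is meant to encourage.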
ContributorsWang, Yilin (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Cleveau, David (Committee member) / Arizona State University (Publisher)
Created2013
Description
Learning from high-dimensional biomedical data has attracted much attention recently. High-dimensional biomedical data often suffer from the curse of dimensionality and have imbalanced class distributions. Both of these features, high dimensionality and class imbalance, are challenging for traditional machine learning methods and may degrade model performance. In this thesis, I focus on developing learning methods for high-dimensional, imbalanced biomedical data. In the first part, a sparse canonical correlation analysis (CCA) method is presented. Penalty terms are used to control the sparsity of the CCA projection matrices. The sparse CCA method is then applied to find patterns between biomedical data sets and labels, or among different data sources. In the second part, I discuss several learning problems for imbalanced biomedical data. Traditional learning systems are often biased when the biomedical data are imbalanced, so conventional evaluations such as accuracy may be inappropriate for such cases; I therefore discuss several alternative criteria for evaluating learning performance. For imbalanced binary classification problems, I use an undersampling-based classifier ensemble (UEM) strategy to obtain accurate models for both classes of samples. A small sphere and large margin (SSLM) approach is also presented to detect rare abnormal samples among a large number of subjects. In addition, I apply multiple feature selection and clustering methods to deal with high-dimensional data and data with highly correlated features. Experiments on high-dimensional, imbalanced biomedical data illustrate the effectiveness and efficiency of these methods.
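The penalty terms that induce sparsity in the CCA projection matrices are typically applied through a soft-thresholding (L1 proximal) step at each iteration. The sketch below shows only that operator on a made-up coefficient vector; the thesis' exact penalty form and update scheme are not reproduced here:

```python
import numpy as np

# Soft-thresholding: the proximal operator of the L1 penalty, which
# sparse-CCA-style methods apply to projection vectors so that weakly
# contributing features are zeroed out entirely.
def soft_threshold(w, t):
    """Shrink each coefficient toward zero by t; small ones become exactly 0."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([0.9, -0.05, 0.4, 0.02])    # hypothetical projection vector
w_sparse = soft_threshold(w, 0.1)
print(w_sparse)  # -> [ 0.8 -0.   0.3  0. ]
```

Zeroed coefficients drop the corresponding features from the canonical variate, which is what makes the resulting patterns interpretable on high-dimensional biomedical data.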
ContributorsYang, Tao (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Objective: Increasing fruit/vegetable (FV) consumption and decreasing waste during school lunch is a public health priority. Understanding how the serving style of FVs impacts FV consumption and waste may be an effective means of changing nutrition behaviors in schools. This study examined whether students were more likely to select, consume, and waste FVs when FVs were cut vs. whole. Methods: Baseline data from the ASU School Lunch Study were used to explore associations between cut vs. whole FV serving style and objectively measured FV selection, consumption, and waste, along with grade-level interactions, among a random selection of students (n=6804; 47.8% female; 78.8% BIPOC) attending Arizona elementary, middle, and high schools (N=37). Negative binomial regression models evaluated the effect of serving style on FV weight (grams) selected, consumed, and wasted, adjusted for sociodemographics and school. Results: Students were more likely to select cut FVs (IRR=1.11; 95% CI: 1.04, 1.18) and waste cut FVs (IRR=1.20; 95% CI: 1.04, 1.39); however, no differences were observed in the overall consumption of cut vs. whole FVs. Grade-level interactions affected students' selection of FVs: the effect of cut serving style on selection was significantly stronger among middle school students (IRR=1.18; p=0.006) than among high school and elementary students, and significantly weaker among high school students (IRR=0.83; p=0.010) than among middle and elementary students. No other grade-level interactions were observed. Discussion: The serving style of FVs may impact how much FV is selected and wasted, but further research is needed to determine causality between these variables.
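The incidence rate ratios (IRRs) reported above are exponentiated negative binomial regression coefficients. The snippet below shows only that relationship, using a coefficient and a standard error chosen so the numbers line up with the reported selection result; the standard error is a back-calculated illustration, not a value from the study:

```python
import math

# IRR and 95% CI from a count-model coefficient: IRR = exp(beta),
# CI = exp(beta +/- 1.96 * SE). beta is set to reproduce the reported
# selection IRR of 1.11; se = 0.032 is hypothetical.
beta, se = math.log(1.11), 0.032
irr = math.exp(beta)
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
print(round(irr, 2), tuple(round(c, 2) for c in ci))  # 1.11 (1.04, 1.18)
```

An IRR of 1.11 reads as "cut FVs were selected at roughly an 11% higher rate (by weight) than whole FVs," with the CI excluding 1 indicating statistical significance.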
ContributorsJames, Amber Chandarana (Author) / Bruening, Meredith (Thesis advisor) / Adams, Marc (Thesis advisor) / Koskan, Alexis (Committee member) / Arizona State University (Publisher)
Created2021
Description
Microfluidic platforms have been exploited extensively as a tool for the separation of particles by electric field manipulation. Microfluidic devices can facilitate the manipulation of particles by dielectrophoresis, and separation of particles by size and type has been demonstrated by insulator-based dielectrophoresis in a microfluidic device. Thus, manipulating particles by size has been widely studied throughout the years; size heterogeneity in organelles, for instance, has been linked to multiple diseases characterized by abnormal organelle size. Here, a mixture of two sizes of polystyrene beads (0.28 and 0.87 μm) was separated by a ratchet migration mechanism under continuous flow (20 nL/min). Furthermore, different ratchet devices were designed to achieve high-throughput, high-volume separation. Recently, enormous efforts have been made to manipulate small DNA molecules and proteins. Here, a microfluidic device is presented comprising multiple valves that act as insulating constrictions when a potential is applied. The tunability of the electric field gradient was evaluated with a COMSOL model, indicating that high electric field gradients can be reached by deflecting the valve to a certain distance. Experimentally, the tunability of the dynamic constriction was demonstrated by conducting a pressure study to estimate the gap distance between the valve and the substrate at different applied pressures. Finally, as a proof of principle, 0.87 μm polystyrene beads were manipulated by dielectrophoresis. These microfluidic platforms will aid in the understanding of size heterogeneity of organelles for biomolecular assessment and in achieving separation of nanometer-size DNA and proteins by dielectrophoresis.
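The tunability claim rests on a simple scaling: if the deflectable valve's gap dominates the potential drop, the field in the constriction grows roughly as the applied voltage divided by the gap. The numbers below are purely illustrative (the real device geometry requires the COMSOL model described above):

```python
# Rough scaling of the electric field inside a valve-based constriction:
# treating the gap as the dominant resistance, E ~ V / gap, so deflecting
# the valve (shrinking the gap) concentrates the field and steepens the
# field gradient that drives dielectrophoresis. Hypothetical values only.
voltage = 100.0                                      # applied potential, V
gaps_um = (10.0, 5.0, 1.0)                           # valve-substrate gap, um
fields = {g: voltage / (g * 1e-6) for g in gaps_um}  # field estimate, V/m
for g in gaps_um:
    print(f"gap {g:4.1f} um -> E ~ {fields[g]:.1e} V/m")
```

Shrinking the gap tenfold raises the local field estimate tenfold, which is why a pressure-actuated valve can serve as a dynamically tunable constriction.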
ContributorsOrtiz, Ricardo (Author) / Ros, Alexandra (Thesis advisor) / Hayes, Mark (Committee member) / Borges, Chad (Committee member) / Arizona State University (Publisher)
Created2021
Description
This dissertation constructs a new computational processing framework to robustly and precisely quantify retinotopic maps based on their angle distortion properties. More generally, this framework solves the problem of how to robustly and precisely quantify (angle) distortions of noisy or incomplete (boundary-enclosed) 2-dimensional surface-to-surface mappings. The framework builds upon the Beltrami coefficient (BC) description of quasiconformal mappings, which directly quantifies local (circles-to-ellipses) mapping distortions between diffeomorphisms of boundary-enclosed plane domains homeomorphic to the unit disk. A new map, the Beltrami coefficient map (BCM), was constructed to describe distortions in retinotopic maps; the BCM can be used to fully reconstruct the original target surface (retinal visual field) of a retinotopic map. This dissertation also compared retinotopic maps along the visual processing cascade, the series of connected retinotopic maps responsible for processing the physical images captured by the eyes. By comparing BCM results from a large Human Connectome Project (HCP) retinotopic dataset (N=181), a new computational quasiconformal description of how the retinal image is transformed as it passes through the cascade is proposed, which is not present in the current literature. Applied to the HCP data, the description provided directly visible and quantifiable geometric properties of the cascade that had not been observed before. Because retinotopic maps are generated from noisy in vivo functional magnetic resonance imaging (fMRI), quantifying them carries a degree of uncertainty; to quantify the uncertainty in the results, it is necessary to generate statistical models of retinotopic maps from their BCMs and raw fMRI signals.
Finally, because estimating retinotopic maps from real, noisy fMRI time series with the population receptive field (pRF) model is a time-consuming process, a convolutional neural network (CNN) was constructed and trained to predict pRF model parameters from real, noisy fMRI data.
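The Beltrami coefficient at the heart of the framework can be estimated numerically from the partial derivatives of a planar map f = (u, v): mu = f_zbar / f_z, with f_z = ((u_x + v_y) + i(v_x - u_y))/2 and f_zbar = ((u_x - v_y) + i(v_x + u_y))/2. The finite-difference sketch below checks this on the affine test map f(z) = z + 0.25·conj(z), for which |mu| = 0.25 everywhere; it is a minimal illustration, not the dissertation's mesh-based pipeline:

```python
import numpy as np

# Estimate the Beltrami coefficient mu = f_zbar / f_z of a planar map
# f = (u, v) by finite differences. Test map: f(z) = z + 0.25*conj(z),
# i.e. u = 1.25*x, v = 0.75*y, whose mu is 0.25 at every point.
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
u, v = 1.25 * x, 0.75 * y

uy, ux = np.gradient(u, y[:, 0], x[0])   # du/dy, du/dx
vy, vx = np.gradient(v, y[:, 0], x[0])   # dv/dy, dv/dx
f_z    = 0.5 * ((ux + vy) + 1j * (vx - uy))
f_zbar = 0.5 * ((ux - vy) + 1j * (vx + uy))
mu = f_zbar / f_z
print(np.round(np.abs(mu).mean(), 3))  # -> 0.25
```

Since |mu| < 1 exactly when the map is an orientation-preserving diffeomorphism, the BCM simultaneously quantifies angle distortion and flags where a noisy retinotopic map breaks down.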
ContributorsTa, Duyan Nguyen (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Hansford, Dianne (Committee member) / Liu, Huan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2022
Description
The retinotopic map, the mapping between visual inputs on the retina and neuronal activation in the visual areas of the brain, is one of the central topics in visual neuroscience. For human observers, the map is typically obtained by analyzing functional magnetic resonance imaging (fMRI) signals of cortical responses to slowly moving visual stimuli on the retina. Biological evidence shows that retinotopic mapping is topology-preserving (topological) within each visual region, i.e., neighboring relationships on the retina are preserved by cortical processing. Unfortunately, because of the limited spatial resolution and signal-to-noise ratio of fMRI, state-of-the-art retinotopic maps are not topological. The goal of this work was to model the topology-preserving condition mathematically, repair non-topological retinotopic maps with numerical methods, and thereby improve the quality of retinotopic maps. Imposing the topological condition benefits several applications: with topological retinotopic maps, one may gain better insight into human retinotopic maps, including better quantification of the cortical magnification factor, more precise descriptions of retinotopic maps, and potentially better examination methods in the ophthalmology clinic.
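On a triangle mesh, the topology-preserving condition can be checked locally: an orientation-preserving map must keep the signed area of every mapped triangle positive, and a fold-over (the failure mode in noisy retinotopic maps) shows up as a negative signed area. The sketch below illustrates that check on two made-up triangles; it is not the dissertation's repair algorithm, only the detection criterion:

```python
import numpy as np

# A discrete planar map is topology-preserving (no fold-overs) iff every
# mapped triangle retains positive signed area, i.e. keeps its orientation.
def signed_area(p, q, r):
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

def is_topological(vertices, triangles):
    return all(signed_area(*(vertices[i] for i in t)) > 0 for t in triangles)

# Two triangles sharing an edge; pushing one vertex inward creates a fold.
good = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
tris = [(0, 1, 2), (1, 3, 2)]
bad = good.copy()
bad[3] = [0.2, 0.2]      # vertex dragged inside the mesh -> flipped triangle
print(is_topological(good, tris), is_topological(bad, tris))  # True False
```

Repair methods can then target exactly the flipped triangles while leaving the rest of the map untouched.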
ContributorsTu, Yanshuai (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Crook, Sharon (Committee member) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2022
Description

In cold chain tracking systems, accuracy and flexibility across different temperature ranges play an integral role in monitoring biospecimen integrity. However, while two common types of cold chain tracking systems are currently available (electronic and physical/chemical), there is no affordable cold chain tracking mechanism that can be applied across a variety of temperatures while maintaining accuracy for individual vials. Hence, our lab applied its understanding of biochemical reaction kinetics to develop a new cold chain tracking mechanism based on the permanganate/oxalic acid reaction. This reaction is characterized by the reduction of permanganate (Mn(VII)) to Mn(II), with Mn(II)-autocatalyzed oxidation of oxalate to CO2, producing a pink-to-colorless visual indicator change whenever the reaction system is not in the solid state (i.e., frozen or vitrified). Throughout our research, we demonstrate: (i) improved reaction consistency and accuracy, along with extended run times, after implementation of a nitric acid-based labware washing protocol; (ii) simulated reaction kinetics for the maximum-length reaction and the 60-minute reaction based on previously developed MATLAB scripts; (iii) experimental reaction kinetics verifying the simulated MATLAB maximum-length and 60-minute reaction times; (iv) long-term stability of the permanganate/oxalic acid reaction with water or eutectic solutions of sodium perchlorate and magnesium perchlorate at -80°C; (v) reaction kinetics with the eutectic solvents sodium perchlorate and magnesium perchlorate at 25°C, 4°C, and -8°C; (vi) accelerated reaction kinetics after the addition of varying concentrations of manganese perchlorate; (vii) reaction kinetics of higher-concentration reaction systems (5x and 10x, for darker colors) at 25°C; and (viii) long-term stability of the 10x higher-concentration reaction at -80°C.
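The tracking principle (color fades faster at warmer temperatures, driven by Mn(II) autocatalysis) can be sketched with a minimal kinetic model. All constants below (pre-exponential factor, activation energy, initial concentrations) are invented for illustration and are not fitted to the thesis data or MATLAB simulations:

```python
import math

# Minimal autocatalytic model of permanganate loss: MnO4- is consumed
# slowly at first, then faster as the Mn(II) product catalyzes the
# reaction; an Arrhenius factor makes the rate temperature-dependent.
def simulate(temp_c, minutes, k0=6.6e9, ea=60e3, dt=0.01):
    R, T = 8.314, temp_c + 273.15
    k = k0 * math.exp(-ea / (R * T))   # Arrhenius rate constant
    a, p = 1.0, 1e-3                   # relative [MnO4-] and [Mn(II)]
    for _ in range(int(minutes / dt)): # simple Euler integration
        rate = k * a * p               # autocatalytic rate law
        a, p = a - rate * dt, p + rate * dt
    return a                           # remaining color fraction

# Warm sample loses its pink color over an hour; chilled sample keeps it.
print(round(simulate(25, 60), 3), round(simulate(4, 60), 3))
```

The sigmoidal, temperature-sensitive decay is what lets the visual indicator integrate thawed-time exposure for an individual vial.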

ContributorsLjungberg, Emil (Author) / Borges, Chad (Thesis director) / Levitus, Marcia (Committee member) / Williams, Peter (Committee member) / Barrett, The Honors College (Contributor) / School of Molecular Sciences (Contributor) / Department of Psychology (Contributor)
Created2022-12
Description
Plasma and serum are the most commonly used liquid biospecimens in biomarker research. These samples may be subjected to several pre-analytical variables (PAVs) during collection, processing, and storage. Exposure to thawed conditions (temperatures above -30 °C) is a PAV that is hard to control and track and, when unaccounted for, can provide misleading information that fails to accurately reveal the in vivo biological reality. Hence, assays that can empirically check the integrity of plasma and serum samples are crucial. As a solution to this issue, an assay called ΔS-Cys-Albumin was developed and validated. The reference range of ΔS-Cys-Albumin in cardiovascular patients was determined, and the change in ΔS-Cys-Albumin values in different samples was evaluated over time-course incubations at room temperature, 4 °C, and -20 °C. In blind challenges, the assay successfully identified improperly stored samples both individually and as groups. Next, the correlation between the instability of several clinically important proteins in plasma from healthy and cancer patients at room temperature, 4 °C, and -20 °C was assessed. Results showed a linear inverse relationship between the percentage of proteins destabilized and ΔS-Cys-Albumin, regardless of the specific time or temperature of exposure, establishing ΔS-Cys-Albumin as an effective surrogate marker for tracking the stability of clinically relevant analytes in plasma. The stability of oxidized LDL in serum was also assessed, and it remained stable at all temperatures evaluated. Because ΔS-Cys-Albumin requires an LC-ESI-MS instrument, its availability to most clinical research laboratories is limited. To overcome this hurdle, an absorbance-based assay that can be measured with a plate reader was developed as an alternative to the ΔS-Cys-Albumin assay; the assay development and analytical validation procedures are reported herein.
Finally, the range of absorbance in plasma and serum from control and cancer patients was determined, and the change in absorbance over a time-course incubation at room temperature, 4 °C, and -20 °C was assessed. The results showed that the absorbance assay is a good alternative to the ΔS-Cys-Albumin assay.
ContributorsJehanathan, Nilojan (Author) / Borges, Chad (Thesis advisor) / Guo, Jia (Committee member) / Van Horn, Wade (Committee member) / Arizona State University (Publisher)
Created2022
Description
Neural tissue is a delicate system composed of neurons and their synapses, glial cells for support, and vasculature for oxygen and nutrient delivery. This complexity ultimately gives rise to the human brain, a system researchers have become increasingly interested in replicating for artificial intelligence purposes. Some have even gone so far as to use neuronal cultures as computing hardware, but utilizing an environment closer to a living brain means grappling with the same issues faced by clinicians and researchers trying to treat brain disorders. Most prominent among these are the problems that arise with invasive interfaces. Optical techniques that use fluorescent dyes and proteins have emerged as a solution for noninvasive imaging with single-cell resolution in vitro and in vivo, but feeding in information in the form of neuromodulation still requires implanted electrodes. The implantation process damages nearby neurons and their connections, causes hemorrhaging, and leads to scarring and gliosis that diminish efficacy. Here, a new approach for noninvasive neuromodulation with high spatial precision is described. It combines ultrasound, high-frequency acoustic energy that can be focused onto submillimeter regions at significant depths, with electric fields, an effective neuromodulation tool that lacks spatial precision when applied noninvasively. The hypothesis is that, when combined in a specific manner, these produce nonlinear effects at neuronal membranes that cause only the cells in the region of overlap to be stimulated. Computational modeling confirmed this combination to be uniquely stimulating, contingent on certain physical effects of ultrasound on cell membranes. Subsequent in vitro experiments yielded inconclusive results, however, leaving the door open for future experimentation with modified configurations and approaches.
The specific combination explored here is also not the only untested technique that may achieve a similar goal.
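The overlap hypothesis can be caricatured in one line of algebra: if the membrane responds nonlinearly (here, a bare quadratic, chosen purely for illustration and not as a biophysical membrane model), the combined response contains a cross term that is nonzero only where both fields are present:

```python
import numpy as np

# Schematic of the overlap idea: a quadratic nonlinearity applied to the
# sum of two fields produces a cross term 2*U*E that exists only where
# the ultrasound focus and the electric field coincide. Toy fields only.
x = np.linspace(0, 10, 11)
U = (x < 6).astype(float)    # ultrasound focus covers x < 6
E = (x > 4).astype(float)    # electric field covers x > 4
response = (U + E) ** 2 - U**2 - E**2   # isolate the nonlinear cross term
print(response)  # nonzero only at x = 5, the overlap region
```

Either field alone drives no cross term anywhere, which is the property that would confine stimulation to the intersection of the two noninvasive beams.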
ContributorsNester, Elliot (Author) / Wang, Yalin (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created2022
Description
Beta-amyloid (Aβ) plaques and tau protein tangles in the brain are now widely recognized as the defining hallmarks of Alzheimer's disease (AD), followed by structural atrophy detectable on brain magnetic resonance imaging (MRI) scans. However, current methods to detect Aβ/tau pathology are either invasive (lumbar puncture) or quite costly and not widely available (positron emission tomography, PET). The hippocampus is one of the most affected neurodegenerative regions, and the influence of Aβ/tau on it has been a central focus of research on AD pathophysiological progression. In this dissertation, I propose three novel machine learning and statistical models to examine subtle aspects of hippocampal morphometry from MRI that are associated with Aβ/tau burden in the brain, as measured with PET images. The first is a novel unsupervised feature reduction model that generates a low-dimensional representation of hippocampal morphometry for each individual subject and shows superior performance in predicting the brain's Aβ/tau burden. The second is an efficient federated group lasso model that identifies the hippocampal subregions where atrophy is strongly associated with abnormal Aβ/tau. The third is a federated model for imaging genetics that can identify genetic and transcriptomic influences on hippocampal morphometry. Finally, I present the results of these three models, which have been published in or submitted to peer-reviewed conferences and journals.
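The group lasso penalty that underlies the second model selects or discards whole feature groups (e.g., all morphometry features of one hippocampal subregion) at once, via a block soft-thresholding proximal step. The sketch below shows only that operator on made-up numbers; it is not the dissertation's federated algorithm:

```python
import numpy as np

# Proximal operator of the group lasso penalty: each group of
# coefficients is shrunk as a block, so whole groups are either kept
# (scaled down) or zeroed out together.
def group_soft_threshold(w, groups, t):
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > t:                       # group survives, shrunk toward 0
            out[g] = (1 - t / norm) * w[g]
    return out                             # weak groups are zeroed entirely

w = np.array([3.0, 4.0, 0.1, -0.1])        # two hypothetical subregions
groups = [[0, 1], [2, 3]]                  # each with two features
shrunk = group_soft_threshold(w, groups, 1.0)
print(shrunk)  # -> [2.4 3.2 0.  0. ]
```

Zeroing groups rather than individual coefficients is what makes the selected pattern read directly as "these subregions are associated with abnormal Aβ/tau."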
ContributorsWu, Jianfeng (Author) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Liang, Jianming (Committee member) / Wang, Junwen (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2022