Matching Items (109)
Description
As robots become increasingly integrated into everyday environments, they need to learn how to interact with the objects around them. Many of these objects are articulated with multiple degrees of freedom (DoF). Multi-DoF objects have complex joints that require specific manipulation orders, but existing methods only consider objects with a single joint. To capture the joint structure and manipulation sequence of any object, I introduce "Object Kinematic State Machines" (OKSMs), a novel representation that models the kinematic constraints and manipulation sequences of multi-DoF objects. I also present Pokenet, a deep neural network architecture that estimates OKSMs from sequences of point cloud data of human demonstrations. I conduct experiments on both simulated and real-world datasets to validate my approach. First, I evaluate the modeling of multi-DoF objects on a simulated dataset, comparing against the current state-of-the-art method. I then assess Pokenet's real-world usability on a dataset collected in my lab, comprising 5,500 data points across 4 objects. Results show that my method can successfully estimate joint parameters of novel multi-DoF objects with over 25% higher accuracy on average than prior methods.
ContributorsGUPTA, ANMOL (Author) / Gopalan, Nakul (Thesis advisor) / Zhang, Yu (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2024
Description
In today's data-driven world, privacy is a significant concern, and it is crucial to preserve the privacy of sensitive information while visualizing data. This thesis develops new techniques and software tools that support Vega-Lite visualizations while maintaining privacy. Vega-Lite is a visualization grammar based on Wilkinson's grammar of graphics. The project extends Vega-Lite to incorporate privacy algorithms such as k-anonymity, l-diversity, t-closeness, and differential privacy. This is done using a multi-input loop module that generates combinations of attributes as a new anonymization method. Differential privacy is implemented by adding controlled noise (Laplace or Exponential) to the sensitive columns in the dataset. The user defines custom rules in the JSON schema, specifying the privacy methods and the sensitive columns. The schema is validated using the "Another JSON Schema Validator" (Ajv) library, and these rules determine the anonymization techniques to be applied to the dataset before it is sent back to the Vega-Lite visualization server. Multiple datasets satisfying the privacy requirements are generated, along with their utility scores, so that the user can trade off privacy against utility based on their requirements. The interface is user-friendly and intuitive, guiding users through the workflow and providing feedback on the generated privacy-preserving visualizations through various utility metrics. This application is useful for technical and domain experts across fields where privacy is a major concern, such as medical institutions, traffic and urban planning, financial institutions, educational records, and employer-employee relations. The project is novel in providing a one-stop solution for privacy-preserving visualization, and it builds on Vega-Lite, open-source software that many organizations and users rely on for business and educational purposes.
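As a concrete illustration of the differential-privacy step described above, the Laplace mechanism draws noise with scale sensitivity/ε and adds it to each value of a sensitive column. The function name, column values, and parameters below are illustrative sketches, not the thesis's actual implementation:

```python
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, rng=None):
    """Add Laplace noise scaled to sensitivity/epsilon to each value.

    Standard Laplace mechanism for differential privacy; smaller epsilon
    means stronger privacy (larger noise). Names here are hypothetical,
    not taken from the thesis code.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=len(values))
    return np.asarray(values, dtype=float) + noise

# Example: privatize a hypothetical "salary" column.
salaries = [52000, 61000, 58000, 70000]
private = laplace_mechanism(salaries, sensitivity=1000, epsilon=0.5)
```

In practice the utility scores mentioned above would compare statistics of `private` against the raw column to quantify the privacy/utility trade-off.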
ContributorsSekar, Manimozhi (Author) / Bryan, Chris (Thesis advisor) / Wang, Yalin (Committee member) / Cao, Zhichao (Committee member) / Arizona State University (Publisher)
Created2024
Description
Image denoising, a fundamental task in computer vision, poses significant challenges due to its inherently inverse and ill-posed nature. Despite advancements in traditional methods and supervised learning approaches, particularly in medical imaging such as Magnetic Resonance Imaging (MRI) scans, the reliance on paired datasets and known noise distributions remains a practical hurdle. Recent progress in noise statistical independence theory and diffusion models has revitalized research interest, offering promising avenues for unsupervised denoising. However, existing methods often yield overly smoothed results or introduce hallucinated structures, limiting their clinical applicability. This thesis tackles the core challenge of progressing towards unsupervised denoising of MRI scans. It aims to retain intricate details without smoothing or introducing artificial structures, thus ensuring the production of high-quality MRI images. The thesis makes a three-fold contribution: Firstly, it presents a detailed analysis of traditional techniques, early machine learning algorithms for denoising, and new statistical-based models, with an extensive evaluation study on self-supervised denoising methods highlighting their limitations. Secondly, it conducts an evaluation study on an emerging class of diffusion-based denoising methods, accompanied by additional empirical findings and discussions on their effectiveness and limitations, proposing solutions to enhance their utility. Lastly, it introduces a novel approach, unsupervised Multi-stage Ensemble Deep Learning with diffusion models for denoising MRI scans (MEDL). Leveraging diffusion models, this approach operates independently of signal or noise priors and incorporates weighted rescaling of multi-stage reconstructions to balance over-smoothing and hallucination tendencies. Evaluation using benchmark datasets demonstrates an average gain of 1 dB and 2% in PSNR and SSIM metrics, respectively, over existing approaches.
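PSNR, one of the two metrics reported above, is computed directly from the mean squared error between the denoised estimate and the reference image. A minimal sketch (the helper and toy images below are illustrative, not part of the thesis):

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images.

    data_range is the maximum possible pixel value (1.0 for images
    normalized to [0, 1]). Higher PSNR means a closer reconstruction.
    """
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy check: a uniform error of 0.1 gives MSE = 0.01, i.e. PSNR = 20 dB.
clean = np.zeros((8, 8))
noisy = clean + 0.1
value = psnr(clean, noisy)
```

A 1 dB gain, as reported above, thus corresponds to roughly a 21% reduction in mean squared error.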
ContributorsVora, Sahil (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Zhou, Yuxiang (Committee member) / Arizona State University (Publisher)
Created2024
Description
Unsupervised learning of time series data, also known as temporal clustering, is a challenging problem in machine learning. This thesis presents a novel algorithm, Deep Temporal Clustering (DTC), to naturally integrate dimensionality reduction and temporal clustering into a single end-to-end learning framework, fully unsupervised. The algorithm utilizes an autoencoder for temporal dimensionality reduction and a novel temporal clustering layer for cluster assignment. It then jointly optimizes the clustering objective and the dimensionality reduction objective. Depending on the requirements of the application, the temporal clustering layer can be customized with any temporal similarity metric. Several similarity metrics and state-of-the-art algorithms are considered and compared. To gain insight into the temporal features that the network has learned for its clustering, a visualization method is applied that generates a region-of-interest heatmap for the time series. The viability of the algorithm is demonstrated using time series data from diverse domains, ranging from earthquakes to spacecraft sensor data. In each case, the proposed algorithm outperforms traditional methods. The superior performance is attributed to the fully integrated temporal dimensionality reduction and clustering criterion.
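The soft cluster-assignment idea behind such a clustering layer is commonly implemented with a Student's t kernel over distances from latent codes to learnable centroids (as in DEC-style clustering layers). The sketch below illustrates that computation for fixed toy centroids; it is an assumption-laden illustration of the general technique, not the DTC code:

```python
import numpy as np

def student_t_assignments(latents, centroids, alpha=1.0):
    """Soft cluster assignments q[i, j] for latent vectors vs. centroids.

    Uses a Student's t kernel: closer centroids get higher probability.
    In a full pipeline, these assignments feed a clustering loss that is
    optimized jointly with the autoencoder's reconstruction loss.
    """
    d2 = ((latents[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)  # rows sum to 1

# Two latent points sitting exactly on two centroids.
z = np.array([[0.0, 0.0], [4.0, 4.0]])
mu = np.array([[0.0, 0.0], [4.0, 4.0]])
q = student_t_assignments(z, mu)
```

Each point is assigned overwhelmingly to its nearest centroid, which is the behavior the joint optimization then sharpens.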
ContributorsMadiraju, NaveenSai (Author) / Liang, Jianming (Thesis advisor) / Wang, Yalin (Thesis advisor) / He, Jingrui (Committee member) / Arizona State University (Publisher)
Created2018
Description
Synthetic manipulation of chromatin dynamics has applications for medicine, agriculture, and biotechnology. However, progress in this area requires the identification of design rules for engineering chromatin systems. In this thesis, I discuss research that has elucidated the intrinsic properties of histone binding proteins (HBPs), and apply this knowledge to engineer novel chromatin binding effectors. Results from the experiments described herein demonstrate that the histone binding domain from chromobox protein homolog 8 (CBX8) is portable and can be customized to alter its endogenous function. First, I developed an assay to identify engineered fusion proteins that bind histone post-translational modifications (PTMs) in vitro and regulate genes near the same histone PTMs in living cells. This assay will be useful for assaying the function of synthetic histone PTM-binding actuators and probes. Next, I investigated the activity of a novel, dual histone PTM-binding domain regulator called Pc2TF. I characterized Pc2TF in vitro and in cells and show it has enhanced binding and transcriptional activation compared to a single binding domain fusion called Polycomb Transcription Factor (PcTF). These results indicate that valency can be used to tune the activity of synthetic histone-binding transcriptional regulators. Then, I report the delivery of PcTF fused to a cell penetrating peptide (CPP), TAT, called CP-PcTF. I treated 2D U-2 OS bone cancer cells with CP-PcTF, followed by RNA sequencing to identify genes regulated by CP-PcTF. I also showed that 3D spheroids treated with CP-PcTF show delayed growth. This preliminary work demonstrated that an epigenetic effector fused to a CPP can enable entry and regulation of genes in U-2 OS cells through DNA-independent interactions. Finally, I described and validated a new screening method that combines the versatility of in vitro transcription and translation (IVTT)-expressed protein with histone tail microarrays.
Using Pc2TF as an example, I demonstrated that this assay is capable of determining the binding and specificity of a synthetic HBP. I conclude by outlining future work toward engineering HBPs using techniques such as directed evolution and rational design. Altogether, this work establishes a foundation for engineering and delivering synthetic chromatin effectors.
ContributorsTekel, Stefan (Author) / Haynes, Karmella (Thesis advisor) / Mills, Jeremy (Committee member) / Caplan, Michael (Committee member) / Brafman, David (Committee member) / Arizona State University (Publisher)
Created2019
Description
Neurodegenerative diseases such as Alzheimer’s disease, Parkinson’s disease, or amyotrophic lateral sclerosis are defined by the loss of several types of neurons and glial cells within the central nervous system (CNS). Combatting these diseases requires a robust population of relevant cell types that can be employed in cell therapies, drug screening, or patient-specific disease modeling. Human induced pluripotent stem cell (hiPSC)-derived neural progenitor cells (hNPCs) have the ability to self-renew indefinitely and differentiate into the various neuronal and glial cell types of the CNS. In order to realize the potential of hNPCs, it is necessary to develop a xeno-free, scalable platform for effective expansion and differentiation. Previous work in the Brafman lab led to the engineering of a chemically defined substrate, a vitronectin-derived peptide (VDP), which allows for the long-term expansion and differentiation of hNPCs. In this work, we use this substrate as the basis for a microcarrier (MC)-based suspension culture system. Several independently derived hNPC lines were cultured on MCs for multiple passages as well as efficiently differentiated to neurons. This MC-based system was then used in conjunction with a low-shear rotating wall vessel (RWV) bioreactor for the integrated, large-scale expansion and neuronal differentiation of hNPCs. Finally, VDP was shown to support the differentiation of hNPCs into functional astrocytes. Overall, this fully defined and scalable biomanufacturing system will facilitate the generation of hNPCs and their derivatives in quantities necessary for basic and translational applications.
ContributorsMorgan, Daylin (Author) / Brafman, David (Thesis advisor) / Stabenfeldt, Sarah (Committee member) / Wang, Xiao (Committee member) / Arizona State University (Publisher)
Created2018
Description
Calcium imaging is a well-established, non-invasive or minimally invasive technique for studying the electrical signaling of neurons. Calcium also regulates the release of gliotransmitters in astrocytes. Analyzing astrocytic calcium transients can provide significant insights into mechanisms such as neuroplasticity and neural signal modulation.

In the past decade, numerous methods have been developed to analyze in-vivo calcium imaging data using complex techniques such as overlapping-signal segregation and motion artifact correction. These methods detect calcium signals under the assumption of spatiotemporal sparsity, and are therefore unable to identify passive cells that are not actively firing during the time frame of the video. Statistics on the percentage of active cells in each field of view can be critical for analyzing calcium imaging data from human induced pluripotent stem cell-derived neurons and astrocytes.

The objective of this research is to develop a simple and efficient semi-automated pipeline for the analysis of in-vitro calcium imaging data. Region of interest (ROI)-based image segmentation is used to extract the intensity fluctuations caused by calcium concentration changes in each cell. This is achieved using two approaches: a basic image segmentation approach and a machine learning approach. The intensity data are evaluated using a custom-made MATLAB script that generates statistical information and graphical representations of the number of spiking cells in each field of view, the number of spikes per cell, and the spike height.
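The per-cell analysis described above boils down to thresholding each ROI's fluorescence trace and collecting peak statistics. The thesis pipeline uses MATLAB; the sketch below shows the same idea in Python, with an illustrative threshold (mean + 2·std) that is an assumption, not the pipeline's exact criterion:

```python
import numpy as np

def spike_stats(trace, k=2.0):
    """Count spikes and mean spike height in one cell's intensity trace.

    A spike is a local maximum exceeding mean + k*std of the trace.
    Both the threshold rule and k are illustrative choices.
    """
    trace = np.asarray(trace, dtype=float)
    thresh = trace.mean() + k * trace.std()
    peaks = [i for i in range(1, len(trace) - 1)
             if trace[i] > thresh
             and trace[i] >= trace[i - 1] and trace[i] > trace[i + 1]]
    heights = [trace[i] - trace.mean() for i in peaks]
    return len(peaks), (float(np.mean(heights)) if heights else 0.0)

# Toy trace: flat baseline with two transient peaks.
trace = np.ones(100)
trace[20], trace[60] = 10.0, 12.0
n_spikes, mean_height = spike_stats(trace)
```

Aggregating `n_spikes` over all ROIs in a field of view yields the percentage-of-spiking-cells statistic mentioned above.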
ContributorsBhandarkar, Siddhi Umesh (Author) / Brafman, David (Thesis advisor) / Stabenfeldt, Sarah (Committee member) / Tian, Xiaojun (Committee member) / Arizona State University (Publisher)
Created2019
Description
Several debilitating neurological disorders, such as Alzheimer's disease, stroke, and spinal cord injury, are characterized by the damage or loss of neuronal cell types in the central nervous system (CNS). Human neural progenitor cells (hNPCs) derived from human pluripotent stem cells (hPSCs) can proliferate extensively and differentiate into the various neuronal subtypes and supporting cells that comprise the CNS. As such, hNPCs have tremendous potential for disease modeling, drug screening, and regenerative medicine applications. However, the use of hNPCs for the study and treatment of neurological diseases requires the development of defined, robust, and scalable methods for their expansion and neuronal differentiation. To that end, a rational design process was used to develop a vitronectin-derived peptide (VDP)-based substrate to support the growth and neuronal differentiation of hNPCs in conventional two-dimensional (2-D) culture and large-scale microcarrier (MC)-based suspension culture. Compared to hNPCs cultured on ECMP-based substrates, hNPCs grown on VDP-coated surfaces displayed similar morphologies, growth rates, and high expression levels of hNPC multipotency markers. Furthermore, VDP surfaces supported the directed differentiation of hNPCs to neurons at levels similar to cells differentiated on ECMP substrates. Here it has been demonstrated that VDP is a robust growth and differentiation matrix, as shown by its ability to support the expansion and neuronal differentiation of hNPCs derived from three hESC (H9, HUES9, and HSF4) and one hiPSC (RiPSC) cell lines. Finally, it has been shown that VDP allows for the expansion or neuronal differentiation of hNPCs to quantities (>10^10) necessary for drug screening or regenerative medicine purposes.
In the future, the use of VDP as a defined culture substrate will significantly advance the clinical application of hNPCs and their derivatives as it will enable the large-scale expansion and neuronal differentiation of hNPCs in quantities necessary for disease modeling, drug screening, and regenerative medicine applications.
ContributorsVarun, Divya (Author) / Brafman, David (Thesis advisor) / Nikkhah, Mehdi (Committee member) / Stabenfeldt, Sarah (Committee member) / Arizona State University (Publisher)
Created2016
Description
Understanding the complexity of the temporal and spatial characteristics of gene expression over brain development is one of the crucial research topics in neuroscience. An accurate description of the locations and expression status of the relevant genes requires extensive experimental resources. The Allen Developing Mouse Brain Atlas provides a large number of in situ hybridization (ISH) images of gene expression over seven different mouse brain developmental stages. Studying mouse brain models helps us understand gene expression in human brains. The atlas contains thousands of genes, which are currently annotated manually by biologists. Due to the high labor cost of manual annotation, an efficient approach to automated gene expression annotation on mouse brain images is needed. In this thesis, a novel, efficient approach based on a machine learning framework is proposed. Features are extracted from raw brain images, and both binary classification and multi-class classification models are built with supervised learning methods. To generate features, one of the most widely adopted methods is the bag-of-words (BoW) algorithm. However, neither the efficiency nor the accuracy of BoW is outstanding when dealing with large-scale data. Thus, an augmented sparse coding method, called Stochastic Coordinate Coding, is adopted to generate high-level features in this thesis. In addition, a new multi-label classification model is proposed, with a label hierarchy built from the given brain ontology structure. Experiments conducted on the atlas show that this approach is efficient and classifies the images with relatively higher accuracy.
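The bag-of-words step discussed above quantizes local image descriptors against a learned codebook and pools them into a histogram; sparse coding methods such as Stochastic Coordinate Coding replace the hard assignment with sparse codes over the dictionary. Below is a minimal sketch of the hard-assignment BoW variant only, with toy descriptors and a stand-in codebook (not the thesis's learned dictionary):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest codeword and return a
    normalized bag-of-words histogram over the codebook.

    The codebook would normally be learned (e.g. by k-means or sparse
    coding); here it is a toy stand-in.
    """
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                     # nearest codeword index
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                      # normalize to sum to 1

# Three 2-D descriptors quantized against a two-word codebook.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
desc = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 1.0]])
h = bow_histogram(desc, codebook)
```

The resulting fixed-length histogram (one per image) is what the downstream binary or multi-label classifiers consume.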
ContributorsZhao, Xinlin (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2016
Description
The rapid growth of social media in recent years provides a large amount of user-generated visual objects, e.g., images and videos. Advanced semantic understanding approaches on such visual objects are desired to better serve applications such as human-machine interaction, image retrieval, etc. Semantic visual attributes have been proposed and utilized in multiple visual computing tasks to bridge the so-called "semantic gap" between extractable low-level feature representations and high-level semantic understanding of the visual objects.

Despite years of research, some problems in semantic attribute learning remain unsolved. First, real-world applications usually involve hundreds of attributes, which requires great effort to acquire a sufficient amount of labeled data for model learning. Second, existing attribute learning work for visual objects focuses primarily on images, leaving semantic analysis of videos largely unexplored.

In this dissertation I conduct innovative research and propose novel approaches to tackling the aforementioned problems. In particular, I propose robust and accurate learning frameworks on both attribute ranking and prediction by exploring the correlation among multiple attributes and utilizing various types of label information. Furthermore, I propose a video-based skill coaching framework by extending attribute learning to the video domain for robust motion skill analysis. Experiments on various types of applications and datasets and comparisons with multiple state-of-the-art baseline approaches confirm that my proposed approaches can achieve significant performance improvements for the general attribute learning problem.
ContributorsChen, Lin (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Wang, Yalin (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2016