Description
In brain imaging studies, 3D surface-based algorithms may offer advantages over volume-based methods because of their sub-voxel accuracy in representing subtle subregional changes and the solid mathematical foundations on which global shape analyses can be carried out on complicated topological structures, such as convoluted cortical surfaces. On the other hand, given the enormous amount of data being generated daily, it is still challenging to develop effective and efficient surface-based methods to analyze brain shape morphometry. There are two major problems in surface-based shape analysis research: correspondence and similarity. This dissertation covers both topics by proposing novel surface registration and indexing algorithms based on conformal geometry for brain morphometry analysis.

First, I propose a surface fluid registration system, which extends traditional image fluid registration to surfaces. With surface conformal parameterization, the complexity of the proposed registration formula is greatly reduced compared to prior methods. Inverse consistency is also incorporated to drive a symmetric correspondence between surfaces. After registration, multivariate tensor-based morphometry (mTBM) statistics are computed to measure local shape deformations. The algorithm was applied to study hippocampal atrophy associated with Alzheimer's disease (AD).
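To make the mTBM statistic concrete, here is a minimal sketch in the Log-Euclidean framework for a single surface point: given the 2x2 Jacobian J of the registration map, it forms the deformation tensor S = (J^T J)^(1/2) and returns the independent entries of log S. The function name and the closed-form 2x2 eigenvalue route are illustrative assumptions, not the dissertation's implementation.

```python
import math

def mtbm_features(J):
    """mTBM features at a surface point (a sketch): the independent entries
    of log S, where S = (J^T J)^(1/2) is the deformation tensor of the
    2x2 registration Jacobian J. Assumes J is non-degenerate."""
    # C = J^T J (symmetric positive definite for non-degenerate J)
    a = J[0][0] ** 2 + J[1][0] ** 2
    b = J[0][0] * J[0][1] + J[1][0] * J[1][1]
    c = J[0][1] ** 2 + J[1][1] ** 2
    # closed-form eigenvalues of the 2x2 symmetric matrix C
    mean, r = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    l1, l2 = mean + r, mean - r
    g1, g2 = math.log(l1) / 2.0, math.log(l2) / 2.0  # eigenvalues of log S
    if abs(l1 - l2) < 1e-12 * max(l1, 1.0):
        return (g1, 0.0, g1)  # C is (nearly) a multiple of the identity
    # Sylvester's formula: log S = k*C + m*I on C's eigenbasis
    k = (g1 - g2) / (l1 - l2)
    m = g1 - k * l1
    return (k * a + m, k * b, k * c + m)
```

For a uniform 2x expansion (J = 2I) this yields (log 2, 0, log 2), i.e. log S = (log 2) I, as expected.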

Next, I propose a ventricular surface registration algorithm based on hyperbolic Ricci flow, which computes a global conformal parameterization for each ventricular surface without introducing any singularity. Furthermore, in the parameter space, unique hyperbolic geodesic curves are introduced to guide consistent correspondences across subjects, a technique called geodesic curve lifting. A tensor-based morphometry (TBM) statistic is computed from the registration to measure shape changes. This algorithm was applied to study ventricular enlargement in mild cognitive impairment (MCI) converters.
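A common formulation of the TBM statistic is the log of the Jacobian determinant of the registration map at each vertex; the sketch below shows that formulation for a 2x2 surface Jacobian (an illustration, not the dissertation's exact code).

```python
import math

def tbm_statistic(J):
    """TBM statistic at a surface point (a common formulation): the log of
    the Jacobian determinant of the registration map. Positive values mean
    local area expansion, negative values local shrinkage; assumes an
    orientation-preserving map (det J > 0)."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return math.log(det)
```

A uniform 10% linear expansion (J = 1.1 I) gives det J = 1.21 and a statistic of 2 log 1.1.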

Finally, a new shape index, the hyperbolic Wasserstein distance, is introduced. This algorithm computes the Wasserstein distance between general topological surfaces as a shape similarity measure. It is based on hyperbolic Ricci flow, the hyperbolic harmonic map, and the optimal mass transportation map, which is extended to hyperbolic space. This method fills a gap in Wasserstein distance research, where prior work dealt only with images or genus-0 closed surfaces. The algorithm was applied in an AD vs. control cortical shape classification study and achieved a promising accuracy rate.
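The hyperbolic-surface construction above is far more involved, but the underlying notion of Wasserstein distance can be illustrated in one dimension, where the optimal transport plan simply matches sorted samples. This toy sketch (not the dissertation's algorithm) computes the p-Wasserstein distance between two equal-size empirical distributions:

```python
def wasserstein_1d(xs, ys, p=2):
    """1-D p-Wasserstein distance between two equal-size empirical
    distributions (a toy illustration of the distance itself, not the
    hyperbolic-surface version). In 1-D the optimal coupling pairs the
    i-th smallest sample of xs with the i-th smallest sample of ys."""
    assert len(xs) == len(ys), "equal sample sizes assumed in this sketch"
    xs, ys = sorted(xs), sorted(ys)
    cost = sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)
    return cost ** (1.0 / p)
```

Shifting one distribution by a constant d changes the distance by exactly d, which is what makes it a natural similarity measure between shapes or distributions.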
Contributors: Shi, Jie, Ph.D. (Author) / Wang, Yalin (Thesis advisor) / Caselli, Richard (Committee member) / Li, Baoxin (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A community in a social network can be viewed as a structure formed by individuals who share similar interests. Not all communities are explicit; some may be hidden in a large network. Therefore, discovering these hidden communities becomes an interesting problem. Researchers from a number of fields have developed algorithms to tackle this problem.

Besides this common feature, communities within a social network have two distinctive characteristics: they are mostly small and they overlap. Unfortunately, many traditional algorithms have difficulty recognizing these small communities (a limitation often called the resolution limit problem) as well as overlapping communities.

In this work, two enhanced community detection techniques are proposed for re-working existing community detection algorithms to find small communities in social networks. One method is to modify the modularity measure within the framework of the traditional Newman-Girvan algorithm so that more small communities can be detected. The second method is to incorporate a preprocessing step into existing algorithms by changing edge weights inside communities. Both methods help improve community detection performance while maintaining or improving computational efficiency.
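The modularity measure at the heart of the Newman-Girvan framework can be sketched as follows; this is the standard (unmodified) measure that the first proposed method starts from, with illustrative function names:

```python
from collections import defaultdict

def modularity(edges, community_of):
    """Newman-Girvan modularity Q of a partition (the standard measure;
    the thesis modifies it to favor small communities).

    edges: iterable of undirected (u, v) pairs.
    community_of: dict mapping node -> community label.
    Q = sum over communities c of  e_c/m - (d_c / 2m)^2, where e_c is the
    number of intra-community edges and d_c the total degree of c."""
    m = 0
    intra = defaultdict(int)   # intra-community edge counts
    degree = defaultdict(int)  # total degree per community
    for u, v in edges:
        m += 1
        degree[community_of[u]] += 1
        degree[community_of[v]] += 1
        if community_of[u] == community_of[v]:
            intra[community_of[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2
               for c in set(community_of.values()))
```

For two triangles joined by a single bridge edge, partitioning at the bridge gives Q = 5/14, reflecting a good community split.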
Contributors: Wang, Ran (Author) / Liu, Huan (Thesis advisor) / Sen, Arunabha (Committee member) / Colbourn, Charles (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Composite materials are finally being used for structural applications hitherto reserved for metals – airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. They have high strength-to-weight ratios, are durable and resistant to environmental effects, have high impact strength, and can be manufactured in a variety of shapes. Generalized constitutive models are being developed to accurately model composite systems so that they can be used in implicit and explicit finite element analysis. These models require extensive characterization of the composite material as input. The particular constitutive model of interest for this research is a three-dimensional orthotropic elasto-plastic composite material model that requires a total of 12 experimental stress-strain curves, yield stresses, and the Young’s moduli and Poisson’s ratios in the material directions as input. Sometimes it is not possible to carry out the reliable experimental tests needed to characterize the composite material. One solution is using virtual testing to fill the gaps in available experimental data. A Virtual Testing Software System (VTSS) has been developed to address the need for a less restrictive method to characterize a three-dimensional orthotropic composite material. The system takes in the material properties of the constituents and completes all 12 of the necessary characterization tests using finite element (FE) models. Verification and validation test cases demonstrate the capabilities of the VTSS.
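The elastic part of such an orthotropic model is fixed by the nine constants named above. As a minimal sketch (illustrative function names; the thesis model adds plasticity on top of this), the 6x6 compliance matrix in Voigt notation can be assembled and used to recover strains from stresses:

```python
def orthotropic_compliance(E1, E2, E3, nu12, nu13, nu23, G12, G13, G23):
    """6x6 orthotropic compliance matrix in Voigt notation (elastic part
    only -- a sketch of the linear-elastic portion of the model).
    Strain/stress ordering assumed: [11, 22, 33, 23, 13, 12]."""
    S = [[0.0] * 6 for _ in range(6)]
    S[0][0], S[1][1], S[2][2] = 1 / E1, 1 / E2, 1 / E3
    # off-diagonal terms; symmetry implies nu21/E2 = nu12/E1, etc.
    S[0][1] = S[1][0] = -nu12 / E1
    S[0][2] = S[2][0] = -nu13 / E1
    S[1][2] = S[2][1] = -nu23 / E2
    S[3][3], S[4][4], S[5][5] = 1 / G23, 1 / G13, 1 / G12
    return S

def strains(S, stress):
    """Strain vector from a stress vector via the compliance matrix."""
    return [sum(S[i][j] * stress[j] for j in range(6)) for i in range(6)]
```

For a uniaxial stress of 100 in direction 1 with E1 = 10000 and nu12 = 0.3, this gives an axial strain of 0.01 and a transverse strain of -0.003, matching the familiar isotropic limit.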
Contributors: Harrington, Joseph (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
A simplified bilinear moment-curvature model is derived based on the moment-curvature response generated from a parameterized stress-strain response of strain-softening and/or strain-hardening materials by Dr. Barzin Mobasher and Dr. Chote Soranakom. Closed-form solutions are developed for deflection calculations of determinate beams subjected to usual loading patterns at any load stage. The solutions are based on a bilinear moment-curvature response characterized by the flexural crack initiation and the ultimate capacity, based on a deflection-hardening behavior. Closed-form equations for deflection calculation are presented for simply supported beams under three-point bending, four-point bending, uniform load, a concentrated moment at the middle, and pure bending, and for cantilever beams under a point load at the end, a point load at an arbitrary distance from the fixed end, and uniform load. These expressions are derived for the pre-cracked and post-cracked regions. A parametric study is conducted to examine the effects of the ratios of moment and curvature at the ultimate stage to moment and curvature at first crack on the deflection. The effectiveness of the simplified closed-form solution is demonstrated by comparing the analytical load-deflection response with experimental results for three-point and four-point bending. The simplified bilinear moment-curvature model is then modified by imposing a deflection-softening behavior so that it can be widely implemented in the analysis of 2-D panels. The derivations of elastic solutions and the yield line approach for 2-D panels are presented. The effectiveness of the proposed moment-curvature model for various types of panels is verified by comparing simulated data with experimental data from panel tests.
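The simplest of these closed-form cases is three-point bending in the pre-cracked range, where the bilinear model reduces to the classical elastic result delta = P L^3 / (48 EI) with the initial rigidity EI = M_cr / phi_cr. The sketch below shows only that branch (function name illustrative; the post-crack branches require the full bilinear integration from the thesis):

```python
def midspan_deflection_elastic(P, L, M_cr, phi_cr):
    """Midspan deflection of a simply supported beam under three-point
    bending in the pre-cracked (linear elastic) range -- a sketch of the
    simplest closed-form case. The bilinear moment-curvature model's
    initial flexural rigidity is EI = M_cr / phi_cr."""
    EI = M_cr / phi_cr
    M_max = P * L / 4.0  # peak moment at midspan
    assert M_max <= M_cr, "beyond first crack; elastic branch not valid"
    return P * L ** 3 / (48.0 * EI)
```

For example, P = 10, L = 2 and EI = 10000 give a midspan deflection of 80/480000, about 1.67e-4 in consistent units.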
Contributors: Wang, Xinmeng (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Browsing Twitter users, or browsers, often find it increasingly cumbersome to attach meaning to tweets displayed on their timeline as they follow more and more users or pages. The tweets being browsed are created by Twitter users called originators, and are of some significance to the browser, who has chosen to subscribe to tweets from the originator by following the originator. Although hashtags are used to tag tweets in an effort to attach context to them, many tweets do not have a hashtag. Such tweets are called orphan tweets, and they adversely affect the experience of a browser.

A hashtag is a type of label or meta-data tag used in social networks and micro-blogging services which makes it easier for users to find messages with a specific theme or content. The context of a tweet can be defined as a set of one or more hashtags. Users often do not use hashtags to tag their tweets. This leads to the problem of missing context for tweets. To address the problem of missing hashtags, a statistical method was proposed which predicts most likely hashtags based on the social circle of an originator.

In this thesis, we propose to improve on the existing context recovery system by selectively limiting the candidate set of hashtags to be derived from the intimate circle of the originator rather than from every user in the social network of the originator. This helps in reducing the computation, increasing speed of prediction, scaling the system to originators with large social networks while still preserving most of the accuracy of the predictions. We also propose to not only derive the candidate hashtags from the social network of the originator but also derive the candidate hashtags based on the content of the tweet. We further propose to learn personalized statistical models according to the adoption patterns of different originators. This helps in not only identifying the personalized candidate set of hashtags based on the social circle and content of the tweets but also in customizing the hashtag adoption pattern to the originator of the tweet.
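The combination of circle-based and content-based candidates can be sketched with a simple scoring scheme; this is an illustrative simplification (function names and the additive score are assumptions, not the thesis's statistical model):

```python
from collections import Counter

def rank_hashtags(tweet_words, circle_tweets, top_k=3):
    """Rank candidate hashtags for an orphan tweet (a simplified sketch):
    candidates come only from the originator's intimate circle, scored by
    adoption frequency in that circle plus word overlap between the orphan
    tweet and the circle tweets that used each hashtag.

    circle_tweets: list of (words, hashtags) pairs from the intimate circle."""
    freq = Counter()     # how often each hashtag is adopted in the circle
    overlap = Counter()  # accumulated content overlap per hashtag
    words = set(tweet_words)
    for tw_words, tags in circle_tweets:
        shared = len(words & set(tw_words))
        for tag in tags:
            freq[tag] += 1
            overlap[tag] += shared
    scores = {t: freq[t] + overlap[t] for t in freq}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:top_k]
```

Restricting candidates to the intimate circle keeps the candidate set small, which is precisely what makes the prediction cheaper for originators with large social networks.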
Contributors: Mallapura Umamaheshwar, Tejas (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Identifying chemical compounds that inhibit bacterial infection has recently gained a considerable amount of attention, given the increased number of highly resistant bacteria and the serious health threat they pose around the world. With the development of automated microscopy and image analysis systems, the process of identifying novel therapeutic drugs can generate an immense amount of data - easily reaching terabytes worth of information. Despite the vast amount of data currently generated, traditional analytical methods have not increased the overall success rate of identifying active chemical compounds that eventually become novel therapeutic drugs. Moreover, multispectral imaging has become ubiquitous in drug discovery due to its ability to provide valuable information on cellular and sub-cellular processes using fluorescent reagents. These reagents are often costly and toxic to cells over an extended period of time, causing limitations in experimental design. Thus, there is a significant need to develop a more efficient process of identifying active chemical compounds.

This dissertation introduces novel machine learning methods based on parallelized cellomics to analyze interactions between cells, bacteria, and chemical compounds while reducing the use of fluorescent reagents. Machine learning analysis using image-based high-content screening (HCS) data is compartmentalized into three primary components: (1) Image Analytics, (2) Phenotypic Analytics, and (3) Compound Analytics. A novel software analytics tool called the Insights project is also introduced. The Insights project fully incorporates distributed processing, high performance computing, and database management that can rapidly and effectively utilize and store massive amounts of data generated using HCS biological assessments (bioassays). It is ideally suited for parallelized cellomics in high dimensional space.

Results demonstrate that a parallelized cellomics approach increases the quality of a bioassay while vastly decreasing the need for control data. The reduction in control data leads to less fluorescent reagent consumption. Furthermore, a novel proposed method that uses single-cell data points is proven to identify known active chemical compounds with a high degree of accuracy, despite traditional quality control measurements indicating the bioassay to be of poor quality. This, ultimately, decreases the time and resources needed in optimizing bioassays while still accurately identifying active compounds.
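The abstract does not name its quality control measurement, but the conventional plate-quality metric in high-content screening is the Z'-factor, sketched below as an illustration of what such a traditional measure computes from positive and negative control wells:

```python
import statistics

def z_prime(pos_controls, neg_controls):
    """Z'-factor, a standard bioassay quality metric in HCS (shown here as
    an example of a traditional QC measure; the dissertation's metric is
    not named in the abstract).
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|;
    values above roughly 0.5 are conventionally read as an excellent assay."""
    sp = statistics.stdev(pos_controls)
    sn = statistics.stdev(neg_controls)
    return 1.0 - 3.0 * (sp + sn) / abs(
        statistics.mean(pos_controls) - statistics.mean(neg_controls))
```

Because the metric depends entirely on control wells, reducing the amount of control data (as the parallelized cellomics approach does) directly reduces reagent consumption.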
Contributors: Trevino, Robert (Author) / Liu, Huan (Thesis advisor) / Lamkin, Thomas J. (Committee member) / He, Jingrui (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Unidirectional glass fiber reinforced polymer (GFRP) is tested at four initial strain rates (25, 50, 100 and 200 s⁻¹) and six temperatures (−25, 0, 25, 50, 75 and 100 °C) on a servo-hydraulic high-rate testing system to investigate any possible effects on its mechanical properties and failure patterns. Meanwhile, to illuminate the mechanisms of the strain rate and temperature effects, glass yarn samples were additionally tested at four different strain rates (40, 80, 120 and 160 s⁻¹) and varying temperatures (25, 50, 75 and 100 °C) using an Instron drop-weight impact system. In addition, quasi-static properties of GFRP and glass yarn are provided as references. The stress–strain responses at varying strain rates and elevated temperatures are discussed. A Weibull statistics model is used to quantify the degree of variability in tensile strength and to obtain Weibull parameters for engineering applications.
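A common route to the Weibull parameters is linear regression on the linearized two-parameter CDF, ln(-ln(1-F)) = m ln(x) - m ln(x0); the paper does not specify its fitting method, so the sketch below (with an assumed mean-rank probability estimator F_i = i/(n+1)) is illustrative:

```python
import math

def weibull_fit(strengths):
    """Fit a two-parameter Weibull distribution to tensile strengths by
    linear regression of ln(-ln(1-F)) on ln(x) -- a common estimation
    route, shown here as a sketch. Uses the mean rank estimator
    F_i = i/(n+1). Returns (shape m, scale x0)."""
    xs = sorted(strengths)
    n = len(xs)
    X = [math.log(x) for x in xs]
    Y = [math.log(-math.log(1.0 - (i + 1) / (n + 1.0))) for i in range(n)]
    mx, my = sum(X) / n, sum(Y) / n
    # least-squares slope is the Weibull modulus m
    m = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / \
        sum((x - mx) ** 2 for x in X)
    # intercept b = -m * ln(x0)  =>  x0 = exp(mx - my/m)
    x0 = math.exp(mx - my / m)
    return m, x0
```

A larger Weibull modulus m indicates less scatter in tensile strength, which is the "degree of variability" the model quantifies.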

Contributors: Ou, Yunfu (Author) / Zhu, Deju (Author) / Zhang, Huaian (Author) / Huang, Liang (Author) / Yao, Yiming (Author) / Li, Gaosheng (Author) / Mobasher, Barzin (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-19
Description
Visual Question Answering (VQA) is an increasingly important multi-modal task where models must answer textual questions based on visual image inputs. Numerous VQA datasets have been proposed to train and evaluate models. However, existing benchmarks exhibit a unilateral focus on textual distribution shifts rather than joint shifts across modalities, which is suboptimal for properly assessing model robustness and generalization. To address this gap, a novel multi-modal VQA benchmark dataset is introduced that combines both visual and textual distribution shifts across training and test sets. Using this challenging benchmark exposes vulnerabilities in existing models that rely on spurious correlations and overfit to dataset biases. The dataset advances the field by enabling more robust model training and rigorous evaluation of generalization under multi-modal distribution shift. In addition, a new few-shot multi-modal prompt fusion model is proposed to better adapt models for downstream VQA tasks. The model incorporates a prompt encoder module and a dual-path design to align and fuse image and text prompts, a novel prompt learning approach tailored for multi-modal learning across vision and language. Together, the introduced benchmark dataset and prompt fusion model address key limitations in evaluating and improving VQA model robustness, and the work expands the methodology for training models resilient to multi-modal distribution shifts.
Contributors: Jyothi Unni, Suraj (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In today’s world, artificial intelligence (AI) is increasingly becoming a part of our daily lives. For this integration to be successful, it’s essential that AI systems can effectively interact with humans. This means making the AI system’s behavior more understandable to users and allowing users to customize the system’s behavior to match their preferences. However, there are significant challenges associated with achieving this goal. One major challenge is that modern AI systems, which have shown great success, often make decisions based on learned representations. These representations, often acquired through deep learning techniques, are typically inscrutable to users, inhibiting the explainability and customizability of the system. Additionally, since each user may have unique preferences and expertise, the interaction process must be tailored to each individual. This thesis addresses these challenges that arise in human-AI interaction scenarios, especially in cases where the AI system is tasked with solving sequential decision-making problems. This is achieved by introducing a framework that uses a symbolic interface to facilitate communication between humans and AI agents. This shared vocabulary acts as a bridge, enabling the AI agent to provide explanations in terms that are easy for humans to understand and allowing users to express their preferences using this common language. To address the need for personalization, the framework provides mechanisms that allow users to expand this shared vocabulary, enabling them to express their unique preferences effectively. Moreover, the AI systems are designed to take into account the user’s background knowledge when generating explanations tailored to their specific needs.
Contributors: Soni, Utkarsh (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Bryan, Chris (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2024