Matching Items (977)
Description
A medical control system is a real-time controller that uses a predictive model of human physiology to estimate and control drug concentration in the human body. The Artificial Pancreas (AP), which regulates blood glucose in Type 1 Diabetes (T1D) patients, is an example of such a control system. The predictive model in the control system, such as the Bergman Minimal Model (BMM), is based on a physiological modeling technique that divides the body into a number of anatomical compartments, with each compartment's effect on the body determined by its physiological parameters. These models are less accurate because of unaccounted physiological factors affecting target values. Estimating a large number of physiological parameters through an optimization algorithm is computationally expensive and prone to getting stuck in local minima. This work evaluates a machine learning (ML) framework in which an ML model is guided by physiological models. A support vector regression (SVR) model guided by a modified BMM is implemented for estimation of blood glucose levels. Physical activity and endogenous glucose production are key factors that contribute to increased hypoglycemia events; this work therefore modifies the Bergman Minimal Model (Bergman et al. 1981) for more accurate estimation of blood glucose levels. Results show that the SVR outperformed the BMM by 0.164 average RMSE across 7 different patients in a free-living scenario. This computationally inexpensive data-driven model can potentially learn parameters more accurately over time. In conclusion, the proposed prediction model is promising for modeling physiological elements in living systems.
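The physiological-model-guided ML idea described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the minimal-model parameter values, the insulin and activity signals, and the feature set are assumed placeholders.

```python
# A minimal sketch (not the author's implementation) of physiological-model-guided
# regression: simulate a Bergman-style minimal model with assumed parameter values,
# then use its predicted glucose as an extra feature for a support vector regressor.
import numpy as np
from scipy.integrate import odeint
from sklearn.svm import SVR

def bergman_minimal_model(state, t, p1, p2, p3, Gb, Ib, insulin):
    """Two-compartment glucose/insulin-action dynamics (illustrative parameterization)."""
    G, X = state                            # G: plasma glucose, X: remote insulin action
    dG = -(p1 + X) * G + p1 * Gb            # glucose kinetics
    dX = -p2 * X + p3 * (insulin(t) - Ib)   # insulin action kinetics
    return [dG, dX]

# Hypothetical inputs: CGM readings, an activity signal, and an insulin profile.
t = np.linspace(0, 180, 61)                          # minutes
insulin = lambda tt: 15.0 + 5.0 * np.sin(tt / 60.0)  # placeholder insulin profile
measured_glucose = 110 + 10 * np.sin(t / 40.0)       # placeholder CGM readings (mg/dL)
activity = np.clip(np.sin(t / 30.0), 0, None)        # placeholder activity signal

# Model-based prior: BMM-predicted glucose trajectory (assumed parameter values).
sol = odeint(bergman_minimal_model, [120.0, 0.0], t,
             args=(0.028, 0.025, 1.3e-5, 100.0, 15.0, insulin))
bmm_glucose = sol[:, 0]

# SVR guided by the physiological model: the BMM output enters as a feature.
features = np.column_stack([bmm_glucose, activity, insulin(t)])
model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(features, measured_glucose)
print("Predicted glucose (first 5):", model.predict(features)[:5])
```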
ContributorsAgrawal, Anurag (Author) / Gupta, Sandeep K. S. (Thesis advisor) / Banerjee, Ayan (Committee member) / Kudva, Yogish (Committee member) / Arizona State University (Publisher)
Created2017
Description
This article proposes a new information-based optimal subdata selection (IBOSS) algorithm, the Squared Scaled Distance Algorithm (SSDA). It is based on the invariance of the determinant of the information matrix under orthogonal transformations, especially rotations. Extensive simulation results show that the new IBOSS algorithm retains the nice asymptotic properties of IBOSS and gives a larger determinant of the subdata information matrix. It has the same order of time complexity as the D-optimal IBOSS algorithm. However, it exploits the advantages of vectorized calculation, avoiding explicit loops, and is approximately 6 times as fast as the D-optimal IBOSS algorithm in R. The robustness of SSDA is studied from three aspects: nonorthogonality, inclusion of interaction terms, and variable misspecification. A new, accurate variable selection algorithm is proposed to help the implementation of IBOSS algorithms when a large number of variables is present with only a few important variables among them. By aggregating results from random subsamples, this variable selection algorithm is much more accurate than the LASSO method applied to the full data. Since its time complexity depends only on the number of variables, it is also computationally efficient when the number of variables is fixed (and not massively large) as n increases. More importantly, by using subsamples it avoids the problem that the full data cannot be stored in memory when a data set is too large.
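A rough sketch of the subdata-selection idea appears below. The exact SSDA selection rule from the thesis is not reproduced; this only illustrates ranking points by a scaled squared distance from the data center and comparing the resulting information matrix against a random subsample.

```python
# A minimal sketch of squared-scaled-distance-style subdata selection; the selection
# criterion and data here are illustrative assumptions, not the thesis's algorithm.
import numpy as np

def select_subdata(X, k):
    """Return row indices of the k points with the largest scaled squared distance
    from the column-wise center (an assumed, illustrative criterion)."""
    center = X.mean(axis=0)
    scale = X.std(axis=0, ddof=1)
    z = (X - center) / scale          # column-wise standardization
    d2 = np.sum(z**2, axis=1)         # squared scaled distance per row
    return np.argsort(d2)[-k:]        # keep the k most extreme points

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 5))     # hypothetical full data, n = 100,000, p = 5
sub = X[select_subdata(X, k=1_000)]

# The selected subdata should yield a larger log-determinant of X'X than a random subsample.
print(np.linalg.slogdet(sub.T @ sub)[1],
      np.linalg.slogdet(X[:1_000].T @ X[:1_000])[1])
```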
ContributorsZheng, Yi (Author) / Stufken, John (Thesis advisor) / Reiser, Mark R. (Committee member) / McCulloch, Robert (Committee member) / Arizona State University (Publisher)
Created2017
Description
In software testing, components are tested individually to make sure each performs as expected. The next step is to confirm that two or more components are able to work together. This stage of testing is often difficult because there can be numerous configurations between just two components.

Covering arrays are one way to ensure a set of tests will cover every possible configuration at least once. However, on systems with many settings, it is computationally intensive to run every possible test. Test prioritization methods can identify tests of greater importance. This concept of test prioritization can help determine which tests can be removed with minimal impact on the overall testing of the system.

This thesis presents three algorithms that generate covering arrays that test the interaction of every two components at least twice. These algorithms extend the functionality of an established greedy test prioritization method to ensure important components are selected in earlier tests. The algorithms are tested on various inputs, and the results reveal that, on average, the resulting covering arrays are two-fifths to one-half smaller than a covering array generated through brute force.
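For concreteness, the coverage requirement (every pair of component settings appears in at least two tests) can be checked as in the sketch below. This is a verification helper under assumptions, not one of the thesis's generation algorithms; the example test suite is hypothetical.

```python
# A minimal sketch of checking lambda = 2 pairwise coverage: every pair of factor
# settings must appear in at least two tests.
from itertools import combinations
from collections import Counter

def covers_pairs_twice(tests):
    """tests: list of tuples, one value per factor; True iff every observed
    value pair for every factor pair appears in at least two tests."""
    k = len(tests[0])
    counts = Counter()
    for row in tests:
        for i, j in combinations(range(k), 2):
            counts[(i, j, row[i], row[j])] += 1
    values = [sorted({row[f] for row in tests}) for f in range(k)]
    for i, j in combinations(range(k), 2):
        for vi in values[i]:
            for vj in values[j]:
                if counts[(i, j, vi, vj)] < 2:
                    return False
    return True

# Hypothetical example: 3 binary factors, full factorial design.
suite = [(0,0,0), (0,1,1), (1,0,1), (1,1,0), (1,1,1), (0,0,1), (1,0,0), (0,1,0)]
print(covers_pairs_twice(suite))  # True: each value pair appears exactly twice
```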
ContributorsAng, Nicole (Author) / Syrotiuk, Violet (Thesis advisor) / Colbourn, Charles (Committee member) / Richa, Andrea (Committee member) / Arizona State University (Publisher)
Created2015
Description
A major challenge in automated text analysis is that different words are used for related concepts. Analyzing text at the surface level would treat related concepts (i.e., actors, actions, targets, and victims) as different objects, potentially missing common narrative patterns. Generalized concepts are used to overcome this problem. However, generalization may result in word sense disambiguation failing to find similarity. This is addressed by taking contextual synonyms into account. Concept discovery based on contextual synonyms reveals information about the semantic roles of the words leading to concepts. A merger engine generalizes the concepts so that they can be used as features in learning algorithms.
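The end result of concept generalization can be pictured with the toy sketch below. The concept inventory and synonym map are hypothetical stand-ins; the thesis derives concepts from contextual synonyms and semantic roles via a merger engine rather than a hand-built dictionary.

```python
# A minimal, illustrative sketch of mapping surface tokens to generalized concepts
# so that related words become the same feature for downstream learning algorithms.
CONCEPT_MAP = {
    "militants": "ARMED_GROUP", "insurgents": "ARMED_GROUP", "fighters": "ARMED_GROUP",
    "attacked": "ATTACK", "stormed": "ATTACK", "raided": "ATTACK",
    "village": "CIVILIAN_SITE", "market": "CIVILIAN_SITE",
}

def generalize(tokens, concept_map=CONCEPT_MAP):
    """Replace surface tokens with generalized concepts where a mapping exists."""
    return [concept_map.get(tok.lower(), tok.lower()) for tok in tokens]

print(generalize("Insurgents stormed the market".split()))
# ['ARMED_GROUP', 'ATTACK', 'the', 'CIVILIAN_SITE']
```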
ContributorsKedia, Nitesh (Author) / Davulcu, Hasan (Thesis advisor) / Corman, Steve R (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2015
Description
In this dissertation, two problems are addressed in the verification and control of Cyber-Physical Systems (CPS):

1) Falsification: given a CPS, and a property of interest that the CPS must satisfy under all allowed operating conditions, does the CPS violate, i.e. falsify, the property?

2) Conformance testing: given a model of a CPS, and an implementation of that CPS on an embedded platform, how can we characterize the properties satisfied by the implementation, given the properties satisfied by the model?

Both problems arise in the context of Model-Based Design (MBD) of CPS: in MBD, the designers start from a set of formal requirements that the system-to-be-designed must satisfy, and a first model of the system is created. Because it may not be possible to formally verify the CPS model against the requirements, falsification tries to verify whether the model satisfies the requirements by searching for behavior that violates them.

In the first part of this dissertation, I present improved methods for finding falsifying behaviors of CPS when properties are expressed in Metric Temporal Logic (MTL). These methods leverage the notion of robust semantics of MTL formulae: if a falsifier exists, it is in the neighborhood of local minimizers of the robustness function. The proposed algorithms compute descent directions of the robustness function in the space of initial conditions and input signals, and provably converge to local minima of the robustness function.

The initial model of the CPS is then iteratively refined by modeling previously ignored phenomena, adding more functionality, etc., with each refinement resulting in a new model. Many of the refinements in the MBD process described above do not provide an a priori guaranteed relation between the successive models. Thus, the second problem above arises: how to quantify the distance between two successive models M_n and M_{n+1}? If M_n has been verified to satisfy the specification, can it be guaranteed that M_{n+1} also satisfies the same, or some closely related, specification? This dissertation answers both questions for a general class of CPS, and properties expressed in MTL.
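The robustness-guided falsification idea above can be sketched as follows. This is a toy illustration under assumptions: a first-order system, a single safety-style requirement, and scipy's general-purpose minimizer standing in for the descent-based algorithms developed in the dissertation.

```python
# A minimal sketch of robustness-guided falsification: minimize a robustness function
# over initial conditions and inputs; a negative minimum means the property is falsified.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def simulate(x0, u):
    """Toy CPS: first-order system dx/dt = -x + u with constant input u."""
    t = np.linspace(0, 10, 200)
    return t, odeint(lambda x, tt: -x + u, x0, t).ravel()

def robustness(theta, threshold=2.0):
    """Robustness of 'always (x <= threshold)': the worst-case margin over time."""
    x0, u = theta
    _, traj = simulate(x0, u)
    return np.min(threshold - traj)

# Search the space of initial conditions and inputs for a falsifying behavior.
res = minimize(robustness, x0=np.array([0.5, 1.0]),
               bounds=[(0.0, 3.0), (0.0, 3.0)], method="L-BFGS-B")
print("min robustness:", res.fun,
      "-> falsified" if res.fun < 0 else "-> not falsified")
```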
ContributorsAbbas, Houssam Y (Author) / Fainekos, Georgios (Thesis advisor) / Duman, Tolga (Thesis advisor) / Mittelmann, Hans (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2015
Description
The apolipoprotein E (APOE) e4 genotype is the most prevalent known genetic risk factor for Alzheimer's disease (AD). In this paper, we examined the longitudinal effect of APOE e4 on hippocampal morphometry in the Alzheimer's Disease Neuroimaging Initiative (ADNI). Generally, hippocampal atrophy is more likely to occur in AD patients who carry the APOE e4 allele than in those who are APOE e4 noncarriers. Brain structure and function also depend on APOE genotype, not just in Alzheimer's disease patients but also in healthy elderly individuals, so APOE genotyping is considered critical in clinical trials of Alzheimer's disease. We used a large sample of elderly participants, with the help of a new automated surface registration system based on surface conformal parameterization with holomorphic 1-forms and surface fluid registration. In this system, we automatically segmented and constructed hippocampal surfaces from MR images at multiple time points, such as 6-month, 1-year, and 2-year follow-up. Between two hippocampal surfaces, we established high-order correspondences using a novel inverse-consistent surface fluid registration method. At each time point, using Hotelling's T^2 test, we found significant morphological deformation in APOE e4 carriers relative to noncarriers in the entire cohort as well as in the non-demented (pooled MCI and control) subjects, affecting the left hippocampus more than the right, and this effect was more pronounced in e4 homozygotes than heterozygotes.
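The group comparison relies on the two-sample Hotelling's T^2 statistic, sketched below. The data are random placeholders, not ADNI measurements, and the feature dimension is an assumption.

```python
# A minimal sketch of a two-sample Hotelling's T^2 test comparing multivariate
# surface-deformation features between carriers and noncarriers.
import numpy as np
from scipy.stats import f

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 test for equal mean vectors; returns (T2, F, p)."""
    n1, p = X.shape
    n2, _ = Y.shape
    diff = X.mean(axis=0) - Y.mean(axis=0)
    S_pooled = ((n1 - 1) * np.cov(X, rowvar=False) +
                (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    T2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S_pooled, diff)
    F_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * T2
    p_val = f.sf(F_stat, p, n1 + n2 - p - 1)
    return T2, F_stat, p_val

rng = np.random.default_rng(1)
carriers = rng.normal(0.2, 1.0, size=(40, 3))     # hypothetical deformation features
noncarriers = rng.normal(0.0, 1.0, size=(55, 3))
print(hotelling_t2(carriers, noncarriers))
```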
ContributorsLi, Bolun (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Liang, Jianming (Committee member) / Arizona State University (Publisher)
Created2015
Description
Visual Question Answering (VQA) is an increasingly important multi-modal task where models must answer textual questions based on visual image inputs. Numerous VQA datasets have been proposed to train and evaluate models. However, existing benchmarks exhibit a unilateral focus on textual distribution shifts rather than joint shifts across modalities. This is suboptimal for properly assessing model robustness and generalization. To address this gap, a novel multi-modal VQA benchmark dataset is introduced. This dataset combines both visual and textual distribution shifts across training and test sets. Using this challenging benchmark exposes vulnerabilities in existing models that rely on spurious correlations and overfit to dataset biases. The novel dataset advances the field by enabling more robust model training and rigorous evaluation of multi-modal distribution shift generalization. In addition, a new few-shot multi-modal prompt fusion model is proposed to better adapt models for downstream VQA tasks. The model incorporates a prompt encoder module and a dual-path design to align and fuse image and text prompts. This represents a novel prompt learning approach tailored for multi-modal learning across vision and language. Together, the introduced benchmark dataset and prompt fusion model address key limitations in evaluating and improving VQA model robustness. The work expands the methodology for training models resilient to multi-modal distribution shifts.
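A dual-path prompt fusion module of the kind described above might look like the sketch below. The layer sizes, fusion rule, and module names are illustrative assumptions, not the thesis architecture.

```python
# A minimal sketch of a dual-path prompt fusion module: learnable image and text
# prompts are projected into a shared space and fused into a joint prompt.
import torch
import torch.nn as nn

class PromptFusion(nn.Module):
    """Encode learnable image and text prompts and fuse them into a joint prompt."""
    def __init__(self, img_dim=768, txt_dim=512, prompt_len=8, fused_dim=512):
        super().__init__()
        self.img_prompt = nn.Parameter(torch.randn(prompt_len, img_dim) * 0.02)
        self.txt_prompt = nn.Parameter(torch.randn(prompt_len, txt_dim) * 0.02)
        self.img_proj = nn.Linear(img_dim, fused_dim)   # align image prompts
        self.txt_proj = nn.Linear(txt_dim, fused_dim)   # align text prompts
        self.fuse = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.GELU())

    def forward(self):
        img = self.img_proj(self.img_prompt)            # (prompt_len, fused_dim)
        txt = self.txt_proj(self.txt_prompt)            # (prompt_len, fused_dim)
        return self.fuse(torch.cat([img, txt], dim=-1)) # fused prompts to prepend

fused_prompts = PromptFusion()()
print(fused_prompts.shape)  # torch.Size([8, 512])
```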
ContributorsJyothi Unni, Suraj (Author) / Liu, Huan (Thesis advisor) / Davalcu, Hasan (Committee member) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created2023