Matching Items (48)

155984-Thumbnail Image.png
Description
Predicting resistant prostate cancer is critical for lowering medical costs and improving the quality of life of advanced prostate cancer patients. I formulate, compare, and analyze two mathematical models that aim to forecast future levels of prostate-specific antigen (PSA). I accomplish these tasks by employing clinical data of locally advanced prostate cancer patients undergoing androgen deprivation therapy (ADT). I demonstrate that the inverse problem of parameter estimation can be ill-posed: relying on data fitting alone can lead to incorrect conclusions, since the estimated parameter values carry large errors and some parameters may be unidentifiable. I provide confidence intervals for the forecasts using data assimilation via an ensemble Kalman filter. Using the ensemble Kalman filter, I perform dual estimation of parameters and state variables to test the prediction accuracy of the models. Finally, I present a novel model with time delay and a delay-dependent parameter. I provide a geometric stability result to study the behavior of this model and show that the inclusion of time delay may improve the accuracy of predictions. I also demonstrate with clinical data that the inclusion of the delay-dependent parameter facilitates the identification and estimation of parameters.
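As a rough illustration of the assimilation step described in this abstract, a minimal perturbed-observation ensemble Kalman filter analysis for a scalar state (a stand-in for a PSA level) can be sketched as follows; the ensemble size, observation value, and noise level are illustrative assumptions, not values from the thesis:

```python
import random
import statistics

def enkf_update(ensemble, observation, obs_noise_std):
    """One analysis step of a perturbed-observation ensemble Kalman filter
    for a scalar state: each member is nudged toward a noisy copy of the
    observation, weighted by a gain computed from ensemble statistics."""
    prior_var = statistics.variance(ensemble)
    gain = prior_var / (prior_var + obs_noise_std ** 2)
    return [
        x + gain * (observation + random.gauss(0.0, obs_noise_std) - x)
        for x in ensemble
    ]

random.seed(0)
# Forecast ensemble for a PSA-like scalar state, then assimilate a reading.
ensemble = [random.gauss(10.0, 2.0) for _ in range(100)]
analysis = enkf_update(ensemble, observation=7.0, obs_noise_std=0.5)
print(statistics.mean(ensemble), statistics.mean(analysis))
```

The analysis ensemble mean is pulled toward the observation and its spread shrinks, which is the mechanism behind the confidence intervals mentioned above.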
ContributorsBaez, Javier (Author) / Kuang, Yang (Thesis advisor) / Kostelich, Eric (Committee member) / Crook, Sharon (Committee member) / Gardner, Carl (Committee member) / Nagy, John (Committee member) / Arizona State University (Publisher)
Created2017
135355-Thumbnail Image.png
Description
Glioblastoma multiforme (GBM) is a malignant, aggressive and infiltrative cancer of the central nervous system with a median survival of 14.6 months with standard care. Diagnosis of GBM is made using medical imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). Treatment is informed by medical images and includes chemotherapy, radiation therapy, and surgical removal if the tumor is surgically accessible. Treatment seldom results in a significant increase in longevity, partly due to the lack of precise information regarding tumor size and location. This lack of information arises from the physical limitations of MR and CT imaging coupled with the diffusive nature of glioblastoma tumors. GBM tumor cells can migrate far beyond the visible boundaries of the tumor and will result in a recurring tumor if not killed or removed. Since medical images are the only readily available information about the tumor, we aim to improve mathematical models of tumor growth to better estimate the missing information. Particularly, we investigate the effect of random variation in tumor cell behavior (anisotropy) using stochastic parameterizations of an established proliferation-diffusion model of tumor growth. To evaluate the performance of our mathematical model, we use MR images from an animal model consisting of Murine GL261 tumors implanted in immunocompetent mice, which provides consistency in tumor initiation and location, immune response, genetic variation, and treatment. Compared to non-stochastic simulations, stochastic simulations showed improved volume accuracy when proliferation variability was high, but diffusion variability was found to only marginally affect tumor volume estimates. Neither proliferation nor diffusion variability significantly affected the spatial distribution accuracy of the simulations. While certain cases of stochastic parameterizations improved volume accuracy, they failed to significantly improve simulation accuracy overall. 
Both the non-stochastic and stochastic simulations failed to achieve over 75% spatial distribution accuracy, suggesting that the underlying structure of the model fails to capture one or more biological processes that affect tumor growth. Two biological features that are candidates for further investigation are angiogenesis and anisotropy resulting from differences between white and gray matter. Time-dependent proliferation and diffusion terms could be introduced to model angiogenesis, and diffusion tensor imaging (DTI) could be used to differentiate between white and gray matter, which might allow for improved estimates of brain anisotropy.
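A minimal sketch of the kind of stochastically parameterized proliferation-diffusion (Fisher-KPP) model discussed above, in one dimension with an explicit Euler step; the grid size, rates, and 10% noise on the proliferation rate are illustrative assumptions, not the study's actual parameterization:

```python
import random

def step(u, D, rho, dx, dt):
    """One explicit Euler step of the 1-D proliferation-diffusion
    (Fisher-KPP) equation du/dt = D u_xx + rho u (1 - u), with
    zero-flux boundaries and a stochastically perturbed rho."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        lap = (left - 2 * u[i] + right) / dx ** 2
        r = rho * (1 + random.gauss(0.0, 0.1))  # stochastic proliferation
        new[i] = u[i] + dt * (D * lap + r * u[i] * (1 - u[i]))
    return new

random.seed(1)
u = [0.0] * 50
u[25] = 0.5  # seed a small tumor density at the domain center
for _ in range(200):
    u = step(u, D=0.1, rho=0.5, dx=1.0, dt=0.1)
```

The density at the seed point grows toward carrying capacity while the front diffuses outward; running an ensemble of such simulations with different noise draws is one way to probe how proliferation variability affects volume estimates.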
ContributorsAnderies, Barrett James (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Stepien, Tracy (Committee member) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
136516-Thumbnail Image.png
Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, detail different motivations for bots, describe previous work in bot detection and observation, and then perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a vastly untapped focus in bot detection: the diffusion of extremist ideals through bots.
ContributorsKarlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
136271-Thumbnail Image.png
Description
The OMFIT (One Modeling Framework for Integrated Tasks) modeling environment and the BRAINFUSE module have been deployed on the PPPL (Princeton Plasma Physics Laboratory) computing cluster with modifications that make it possible to apply artificial neural networks (NNs) to the TRANSP databases for the JET (Joint European Torus), TFTR (Tokamak Fusion Test Reactor), and NSTX (National Spherical Torus Experiment) devices. This development has facilitated the investigation of NNs for predicting heat transport profiles in JET, TFTR, and NSTX, and has prompted additional investigations into how else NNs may be of use to scientists at PPPL. The primary goal of this endeavor is to reproduce the success reported in Meneghini et al. in using NNs for heat transport prediction in DIII-D. Reproducing these results is important because it would provide scientists at PPPL with a quick and efficient toolset for reliably predicting heat transport profiles much faster than existing computational methods allow. The progress toward this goal is outlined in this report, and potential additional applications of the NN framework are presented.
ContributorsLuna, Christopher Joseph (Author) / Tang, Wenbo (Thesis director) / Treacy, Michael (Committee member) / Orso, Meneghini (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created2015-05
136409-Thumbnail Image.png
Description
Twitter, the microblogging platform, has grown in prominence to the point that the topics that trend on the network are often the subject of the news and other traditional media. By predicting trends on Twitter, it could be possible to predict the next major topic of interest to the public. With this motivation, this paper develops a model for trends that leverages previous work with k-nearest neighbors and dynamic time warping. The model provides insight into the length and features of trends, and it generalizes successfully to identify 74.3% of trends in the time period of interest. It also helps explain why particular words trend on Twitter.
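The dynamic time warping distance at the core of the approach above can be sketched with the standard dynamic program; the toy "trend" series here are invented for illustration and are not from the paper's data:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences,
    via the standard O(len(a) * len(b)) dynamic program."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A time-shifted copy of a trend-like burst is far closer under DTW
# than a flat, non-trending series of the same length.
trend = [0, 1, 3, 7, 4, 1, 0]
shifted = [0, 0, 1, 3, 7, 4, 1]
flat = [2] * 7
print(dtw_distance(trend, shifted), dtw_distance(trend, flat))
```

This shift-invariance is what makes DTW a natural distance for k-nearest-neighbor matching of trend time series, where similar bursts occur at different offsets.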
ContributorsMarshall, Grant A (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2015-05
136442-Thumbnail Image.png
Description
A model has been developed to modify Euler-Bernoulli beam theory for wooden beams, using visible properties of wood knot-defects. Treating knots in a beam as a system of two ellipses that change the local bending stiffness has been shown to improve the fit of a theoretical beam displacement function to edge-line deflection data extracted from digital imagery of experimentally loaded beams. In addition, an Ellipse Logistic Model (ELM) has been proposed, using L1-regularized logistic regression, to predict the impact of a knot on the displacement of a beam. By classifying a knot as severely positive or negative versus mildly positive or negative, the ELM can flag knots that lead to large changes in beam deflection without over-emphasizing knots that may not be a problem. Using the ELM with a regression-fit Young's modulus on three-point bending of Douglas Fir, it is possible to estimate the effects a knot will have on the shape of the resulting displacement curve.
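L1-regularized logistic regression of the kind the ELM uses can be sketched with proximal gradient descent (soft-thresholding); the toy two-feature data, learning rate, and penalty weight are illustrative assumptions, not the thesis's knot features or fitted coefficients:

```python
import math

def train_l1_logistic(X, y, lam=0.1, lr=0.1, epochs=500):
    """Logistic regression with an L1 penalty, fit by proximal gradient
    descent: a plain gradient step on the logistic loss followed by
    soft-thresholding of the weights (the L1 proximal operator)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for j in range(d):
                gw[j] += (p - yi) * xi[j] / n
            gb += (p - yi) / n
        b -= lr * gb
        for j in range(d):
            wj = w[j] - lr * gw[j]
            # soft-threshold: shrink toward zero, zeroing weak features
            w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)
    return w, b

# Toy data: the label depends only on the first feature; the L1 penalty
# should shrink the uninformative second weight toward zero.
X = [[1.0, 0.3], [2.0, -0.2], [-1.0, 0.1], [-2.0, 0.4]]
y = [1, 1, 0, 0]
w, b = train_l1_logistic(X, y)
```

The sparsity induced by the L1 penalty is the property exploited above: it keeps the classifier from over-emphasizing knot features that carry little signal.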
Created2015-05
136133-Thumbnail Image.png
Description
Currently in synthetic biology, only the Las, Lux, and Rhl quorum sensing pathways have been adapted for broad engineering use. Quorum sensing provides a means of cell-to-cell communication in which a designated sender cell produces quorum sensing molecules that modify gene expression of a designated receiver cell. While useful, these three quorum sensing pathways exhibit a nontrivial level of crosstalk, hindering robust engineering and leading to unexpected effects in a given design. To address the lack of orthogonality among these three pathways, previous work has attempted directed evolution on components of the quorum sensing pathway. While powerful, directed evolution is limited to the sequence subspace defined by the starting protein. For this reason, we take an evolutionary biology approach to identify new orthogonal quorum sensing networks and test these networks for crosstalk with currently used networks. By charting characteristics of the acyl homoserine lactone (AHL) molecules used across quorum sensing pathways in nature, we have identified candidate pathways likely to display orthogonality. These include Aub, Bja, Bra, Cer, Esa, Las, Lux, Rhl, Rpa, and Sin, which we have begun constructing and testing. Our synthetic circuits express GFP in response to a quorum sensing molecule, allowing quantitative measurement of orthogonality between pairs. By determining orthogonal quorum sensing pairs, we hope to identify and adapt novel quorum sensing pathways for robust use in higher-order genetic circuits.
ContributorsMuller, Ryan (Author) / Haynes, Karmella (Thesis director) / Wang, Xiao (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor) / School of Life Sciences (Contributor)
Created2015-05
136857-Thumbnail Image.png
Description
Glioblastoma multiforme (GBM) is an aggressive and deadly form of brain cancer with a median survival time of about a year with treatment. Due to the aggressive nature of these tumors and the tendency of gliomas to follow white matter tracts in the brain, each tumor mass has a unique growth pattern. Consequently, it is difficult for neurosurgeons to anticipate where the tumor will spread, making treatment planning difficult. Archival patient data, including MRI scans depicting the progress of tumors, have been helpful in developing a model to predict glioblastoma proliferation, but the limited number of scans per patient makes the tumor growth rate difficult to determine. Furthermore, patient treatment between scan points can significantly compound the challenge of accurately predicting tumor growth. A partnership with Barrow Neurological Institute has allowed murine studies to be conducted in order to closely observe tumor growth and potentially improve the current model to more closely resemble intermittent stages of GBM growth without treatment effects.
ContributorsSnyder, Lena Haley (Author) / Kostelich, Eric (Thesis director) / Frakes, David (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Harrington Bioengineering Program (Contributor)
Created2014-05
136083-Thumbnail Image.png
Description
Mortality from the 1918 influenza virus was high, partly due to bacterial coinfections. We characterize pandemic mortality in Arizona, which had a high prevalence of tuberculosis. We applied regressions to over 35,000 data points to estimate the basic reproduction number and excess mortality. Age-specific mortality curves show elevated mortality for all age groups, especially the young, and a senior-sparing effect. The low value of the reproduction number indicates that transmissibility was moderately low.
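One common regression-based route to a basic reproduction number, of the general kind described above, is to fit the early exponential growth rate of the epidemic curve and convert it via a linearized SIR relation; the weekly counts and the 3-day serial interval below are hypothetical illustrations, not the thesis's Arizona data or method details:

```python
import math

def growth_rate(cases):
    """Least-squares slope of log(cases) against time: the early
    epidemic exponential growth rate r."""
    logs = [math.log(c) for c in cases]
    t = list(range(len(cases)))
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(logs) / n
    return (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, logs))
            / sum((ti - tbar) ** 2 for ti in t))

# Hypothetical early daily death counts growing roughly 20% per day.
cases = [10, 12, 15, 18, 21, 26, 31]
r = growth_rate(cases)
serial_interval = 3.0  # assumed mean serial interval in days
R0 = 1 + r * serial_interval  # linearized SIR approximation
```

With these assumed numbers the fitted R0 lands in the moderately low range (between 1 and 2), consistent with the kind of conclusion the abstract reports.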
ContributorsJenner, Melinda Eva (Author) / Chowell-Puente, Gerardo (Thesis director) / Kostelich, Eric (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Life Sciences (Contributor)
Created2015-05
133482-Thumbnail Image.png
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data. A new column is created to measure the price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross-validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost per transaction, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable, and then applying the random forests technique to the data using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, against a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data was 0.06957639 for Bitcoin and -0.171125 for Ethereum. In conclusion, cryptocurrencies can be modeled accurately in-sample by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases it outperforms typical financial benchmarks.
It should also be noted that cryptocurrency data has properties similar to other financial datasets, suggesting future potential for modeling cryptocurrency systems within the financial world.
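The single-variable bagging step described above can be sketched with bootstrap-aggregated regression stumps; the toy feature/target data stand in for a single blockchain variable and the price-difference column, and are not the project's dataset:

```python
import random
import statistics

def fit_stump(X, y):
    """Best single-split regression stump on a 1-D feature (minimum
    squared error over all candidate thresholds)."""
    best = None
    for t in sorted(set(X)):
        left = [yi for xi, yi in zip(X, y) if xi <= t]
        right = [yi for xi, yi in zip(X, y) if xi > t]
        if not left or not right:
            continue
        lm, rm = statistics.mean(left), statistics.mean(right)
        err = sum((yi - (lm if xi <= t else rm)) ** 2 for xi, yi in zip(X, y))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    if best is None:  # degenerate resample: fall back to the mean
        m = statistics.mean(y)
        return lambda x: m
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def bagged_predict(X, y, x_new, n_trees=400):
    """Bootstrap aggregation: fit one stump per bootstrap resample
    and average the predictions."""
    preds = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx])
        preds.append(stump(x_new))
    return statistics.mean(preds)

random.seed(2)
# Toy stand-in: one feature vs. a price-difference-like target.
X = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.0, 1.1, 0.9, 1.0, 3.0, 3.2, 2.9, 3.1]
pred = bagged_predict(X, y, x_new=6)
print(pred)
```

Random forests extend this idea by also sampling a random subset of the eight features at each split, which is what the multi-variable step in the project adds on top of plain bagging.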
ContributorsBrowning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05