Matching Items (19)
Description

The Experimental Data Processing (EDP) software is a C++ GUI-based application that streamlines the process of creating a model for structural systems based on experimental data. EDP is designed to process raw data, filter the data for noise and outliers, create a fitted model to describe that data, complete a probabilistic analysis to describe the variation between replicates of the experimental process, and analyze the reliability of a structural system based on that model. To help design the EDP software to perform the full analysis, the probabilistic and regression modeling aspects of this analysis have been explored. The focus has been on creating and analyzing probabilistic models for the data, adding multivariate and nonparametric fits to raw data, and developing computational techniques that allow these methods to be properly implemented within EDP. For creating a probabilistic model of replicate data, the normal, lognormal, gamma, Weibull, and generalized exponential distributions have been explored. Goodness-of-fit tests, including the chi-squared, Anderson-Darling, and Kolmogorov-Smirnov tests, have been used to assess how well each of these probabilistic models describes the variation of parameters between replicates of an experimental test. An example using Young's modulus data from a Kevlar-49 swath stress-strain test demonstrates how this analysis is performed within EDP. To implement the distributions, numerical solutions for the gamma, beta, and hypergeometric functions were implemented, along with an arbitrary-precision library to store numbers that exceed the range of double-precision floating-point arithmetic. To create a multivariate fit, the multilinear solution was developed as the simplest solution to the multivariate regression problem. This solution was then extended to nonlinear problems that can be linearized into multiple separable terms. These problems were solved analytically with the closed-form solution for multilinear regression, and then numerically using a QR decomposition, which avoids the numerical instabilities associated with matrix inversion. For nonparametric regression, or smoothing, the loess method was developed as a robust technique for filtering noise while maintaining the general structure of the data points. The loess solution was created by examining the shortcomings of simpler smoothing methods, including the running mean, running line, and kernel smoothing techniques, and combining the strengths of each. The loess method weights each point in a partition of the data set and then fits either a line or a polynomial within that partition. Both linear and quadratic variants were applied to a carbon fiber compression test, showing that the quadratic model was more accurate but the linear model had a shape that was more useful for analyzing the experimental data. Finally, the EDP program itself was examined to assess its current data-processing capabilities, as illustrated by shear tests on carbon fiber data, and the functionality still to be developed. The probabilistic and raw data processing capabilities were demonstrated within EDP, and the multivariate and loess analyses were demonstrated using R. As the functionality and relevant considerations for these methods have been developed, the immediate goal is to finish implementing and integrating these additional features into a version of EDP that performs a full streamlined structural analysis on experimental data.
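As a rough illustration of the loess procedure sketched above, the following minimal local-linear smoother weights each point's neighbourhood with a tricube kernel and fits a line within the window. It is a sketch only, not EDP's C++ implementation; the window fraction, kernel choice, and synthetic data are assumptions.

```python
import numpy as np

def loess_linear(x, y, frac=0.3):
    """Local linear smoother with tricube weights (illustrative parameters)."""
    n = len(x)
    k = max(2, int(frac * n))                 # points in each local window
    smooth = np.empty(n)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]               # k nearest neighbours of x0
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        sw = np.sqrt(w)
        A = np.column_stack([np.ones(k), x[idx]]) * sw[:, None]
        beta, *_ = np.linalg.lstsq(A, y[idx] * sw, rcond=None)
        smooth[i] = beta[0] + beta[1] * x0    # evaluate the local line at x0
    return smooth

# Noisy synthetic data standing in for an experimental stress-strain record.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)
y_smooth = loess_linear(x, y)
```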
Contributors: Markov, Elan Richard (Author) / Rajan, Subramaniam (Thesis director) / Khaled, Bilal (Committee member) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Ira A. Fulton School of Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Glioblastoma multiforme (GBM) is a malignant, aggressive, and infiltrative cancer of the central nervous system with a median survival of 14.6 months with standard care. Diagnosis of GBM is made using medical imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). Treatment is informed by medical images and includes chemotherapy, radiation therapy, and surgical removal if the tumor is surgically accessible. Treatment seldom results in a significant increase in longevity, partly due to the lack of precise information regarding tumor size and location. This lack of information arises from the physical limitations of MR and CT imaging coupled with the diffusive nature of glioblastoma tumors. GBM tumor cells can migrate far beyond the visible boundaries of the tumor and will result in a recurring tumor if not killed or removed. Since medical images are the only readily available information about the tumor, we aim to improve mathematical models of tumor growth to better estimate the missing information. In particular, we investigate the effect of random variation in tumor cell behavior (anisotropy) using stochastic parameterizations of an established proliferation-diffusion model of tumor growth. To evaluate the performance of our mathematical model, we use MR images from an animal model consisting of murine GL261 tumors implanted in immunocompetent mice, which provides consistency in tumor initiation and location, immune response, genetic variation, and treatment. Compared to non-stochastic simulations, stochastic simulations showed improved volume accuracy when proliferation variability was high, but diffusion variability was found to only marginally affect tumor volume estimates. Neither proliferation nor diffusion variability significantly affected the spatial distribution accuracy of the simulations. While certain cases of stochastic parameterizations improved volume accuracy, they failed to significantly improve simulation accuracy overall. Both the non-stochastic and stochastic simulations failed to achieve over 75% spatial distribution accuracy, suggesting that the underlying structure of the model fails to capture one or more biological processes that affect tumor growth. Two biological features that are candidates for further investigation are angiogenesis and anisotropy resulting from differences between white and gray matter. Time-dependent proliferation and diffusion terms could be introduced to model angiogenesis, and diffusion tensor imaging (DTI) could be used to differentiate between white and gray matter, which might allow for improved estimates of brain anisotropy.
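For readers unfamiliar with the model class, the following is a minimal 1-D sketch of a proliferation-diffusion (Fisher-KPP-type) simulation with a stochastically perturbed proliferation rate, in the spirit of the parameterizations described above. All parameter values, the grid, and the noise model are illustrative assumptions, not the study's calibrated settings.

```python
import numpy as np

# Parameters are illustrative, not the study's calibrated values.
nx, nt = 200, 500
dx, dt = 0.1, 0.001
D, rho, sigma = 0.05, 2.0, 0.5         # diffusion, proliferation, noise scale
u = np.zeros(nx)
u[nx // 2] = 1.0                       # initial tumor seed (normalized density)
rng = np.random.default_rng(1)

for _ in range(nt):
    rho_t = rho * (1.0 + sigma * rng.standard_normal())     # stochastic rate
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic Laplacian
    u = u + dt * (D * lap + rho_t * u * (1.0 - u))          # logistic growth
    u = np.clip(u, 0.0, 1.0)           # keep density within [0, 1]
```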
Contributors: Anderies, Barrett James (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Stepien, Tracy (Committee member) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The NFL is one of the largest and most influential industries in the world. In America there are few companies that have a stronger hold on American culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by determining which positions are most important to a team's success. Data from fifteen years of NFL games were collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was compared to that average. Based on the properties of linear regression using ordinary least squares, this project defines a model that shows each position's importance. Finally, once the model was established, the focus turned to the NFL draft, with the goal of finding a strategy for where each position should be drafted so that it is most likely to give the best payoff based on the results of the regression in part one.
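A hedged sketch of the kind of ordinary-least-squares setup described above, with synthetic data and hypothetical position features standing in for the real player metrics:

```python
import numpy as np

# Hedged sketch of the OLS setup described above. The response (season wins),
# the position features, and all numbers are hypothetical placeholders, not
# the project's actual data or specification.
rng = np.random.default_rng(2)
n_obs = 32 * 15                          # 32 teams over fifteen seasons
X = rng.normal(size=(n_obs, 5))          # e.g., QB, OL, WR, DL, DB vs. average
beta_true = np.array([3.0, 1.5, 1.0, 2.0, 1.2])
wins = 8.0 + X @ beta_true + rng.normal(scale=1.5, size=n_obs)

X1 = np.column_stack([np.ones(n_obs), X])              # add intercept column
beta_hat, *_ = np.linalg.lstsq(X1, wins, rcond=None)   # OLS coefficients
# Larger position coefficients suggest more leverage on team success.
```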
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

Exchange traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF's listed price and the net asset value of the ETF's underlying assets. However, while this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, this process does not eliminate these deviations completely. This article builds on an earlier paper by Engle and Sarkar (2006) that investigates these properties of premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look into the standard deviations of these premia specifically.

Our findings show that over 70% of an ETF's standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and investment freedom index are insignificant predictors of an ETF's standard deviation of premia. These findings differ somewhat from the existing literature, which indicates that these factors should have a significant impact on the predictive ability of an ETF's standard deviation of premia.
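A hedged sketch of the two-variable model described above, with synthetic placeholder data; the dummy encoding of the categorical variable is an assumption, not the paper's exact specification:

```python
import numpy as np
import pandas as pd

# Illustrative sketch: regress an ETF's standard deviation of premia on a
# market category and the time-zone difference from the US. All data are
# synthetic placeholders.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "category": rng.choice(["Domestic", "Developed", "Emerging"], size=300),
    "tz_diff": rng.integers(0, 13, size=300),    # hours offset from US close
})
base = df["category"].map({"Domestic": 0.1, "Developed": 0.4, "Emerging": 0.8})
df["sd_premia"] = base + 0.05 * df["tz_diff"] + rng.normal(0.0, 0.1, 300)

X = pd.get_dummies(df[["category", "tz_diff"]], columns=["category"],
                   drop_first=True).astype(float)
X.insert(0, "const", 1.0)                        # intercept column
beta, *_ = np.linalg.lstsq(X.to_numpy(), df["sd_premia"].to_numpy(), rcond=None)
```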
Contributors: Henning, Thomas Louis (Co-author) / Zhang, Jingbo (Co-author) / Simonson, Mark (Thesis director) / Wendell, Licon (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

In the last decade, the population of honey bees across the globe has declined sharply, leaving scientists and beekeepers to wonder why. Among all nations, the United States has seen some of the greatest declines in the last 10-plus years. Without a definite explanation, the term Colony Collapse Disorder (CCD) was coined to describe the sudden and sharp decline of honey bee colonies that beekeepers were experiencing. Colony losses have been rising above expected averages over the years, and during the winter season losses are even more severe than what is normally acceptable. Possible explanations point toward meteorological variables, diseases, and even pesticide usage. Although the cause of CCD remains unknown, thousands of beekeepers have reported their losses, and in recent years, even the numbers of infected colonies and colonies under certain stressors. Using the data reported to the United States Department of Agriculture (USDA), as well as weather data collected by the National Oceanic and Atmospheric Administration (NOAA) and the National Centers for Environmental Information (NCEI), regression analysis was used to investigate relationships between stressors in honey bee colonies, meteorological variables, and colony collapses during the winter months. The regression analysis focused on the winter season, or quarter 4 of the year, which includes the months of October, November, and December. In the model, the response variable was the percentage of colonies lost in quarter 4. Through the model, it was concluded that certain weather thresholds and the percentage increase of colonies under certain stressors were related to colony loss.
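A hedged sketch of such a quarter-4 loss regression, with a weather-threshold indicator as one predictor; the threshold value, variable names, and data are all hypothetical, not the USDA/NOAA/NCEI records:

```python
import numpy as np
import pandas as pd

# Hypothetical sketch: Q4 colony loss regressed on a cold-snap threshold
# indicator and the share of colonies under a stressor. Synthetic data only.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "min_temp_f": rng.normal(30.0, 10.0, 200),   # Q4 minimum temperature
    "pct_stressed": rng.uniform(0.0, 50.0, 200), # % colonies under a stressor
})
df["cold_snap"] = (df["min_temp_f"] < 20.0).astype(float)  # threshold dummy
df["pct_lost_q4"] = (10.0 + 8.0 * df["cold_snap"]
                     + 0.3 * df["pct_stressed"] + rng.normal(0.0, 3.0, 200))

X = np.column_stack([np.ones(len(df)), df["cold_snap"], df["pct_stressed"]])
beta, *_ = np.linalg.lstsq(X, df["pct_lost_q4"].to_numpy(), rcond=None)
```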
Contributors: Vasquez, Henry Antony (Author) / Zheng, Yi (Thesis director) / Saffell, Erinanne (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

The reconstruction of piecewise smooth functions from non-uniform Fourier data arises in sensing applications such as magnetic resonance imaging (MRI). This thesis presents a new polynomial-based resampling method (PRM) for 1-dimensional problems which uses edge information to recover the Fourier transform at its integer coefficients, thereby enabling the use of the inverse fast Fourier transform algorithm. By minimizing the error of the PRM approximation at the sampled Fourier modes, the PRM can also be used to improve on initial edge location estimates. Numerical examples show that using the PRM to improve on initial edge location estimates and then taking the PRM approximation of the integer frequency Fourier coefficients is a viable way to reconstruct the underlying function in one dimension. In particular, the PRM is shown to converge more quickly and to be more robust than current resampling techniques used in MRI, and is particularly amenable to highly irregular sampling patterns.
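The PRM itself is not reproduced here; the following toy sketch only illustrates the resample-to-integer-modes-then-invert idea, with plain linear interpolation standing in for the polynomial resampling and a synthetic piecewise-constant test function:

```python
import numpy as np

# Toy sketch: approximate integer-frequency Fourier coefficients from
# non-uniform samples, then invert with the FFT. Linear interpolation stands
# in for the PRM; the test function and sampling pattern are synthetic.
M, N = 512, 64
x = -np.pi + 2 * np.pi * np.arange(M) / M
f = np.where(x < 0, -1.0, 1.0)                     # piecewise smooth test fn

# Non-uniform samples of fhat(k) = (1/2pi) * integral of f(x) e^{-ikx} dx
k_nu = np.sort(np.random.default_rng(5).uniform(-N / 2, N / 2 - 1, 4 * N))
fhat_nu = np.array([(f * np.exp(-1j * k * x)).mean() for k in k_nu])

# Resample onto integer modes, then use the inverse FFT.
k_int = np.arange(-N // 2, N // 2)
fhat_int = (np.interp(k_int, k_nu, fhat_nu.real)
            + 1j * np.interp(k_int, k_nu, fhat_nu.imag))
a = np.fft.ifftshift(fhat_int * (-1.0) ** k_int)   # phase shift for x0 = -pi
recon = (N * np.fft.ifft(a)).real                  # partial sum on N grid pts
```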
Contributors: Gutierrez, Alexander Jay (Author) / Platte, Rodrigo (Thesis director) / Gelb, Anne (Committee member) / Viswanathan, Adityavikram (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-05
Description

The objective of the research presented here was to validate the use of kinetic models for the analysis of the dynamic behavior of a contrast agent in tumor tissue and to evaluate the utility of such models in determining kinetic properties - in particular, perfusion and molecular binding uptake associated with tissue hypoxia - of the imaged tissue, from concentration data acquired with the dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) procedure. Data from two separate DCE-MRI experiments, performed in the past using a standard contrast agent and a hypoxia-binding agent respectively, were analyzed. The results of the analysis demonstrated that the models used may provide novel characterization of tumor tissue properties. Future research will work to further characterize the physical significance of the estimated parameters, particularly to provide quantitative oxygenation data for the imaged tissue.
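The abstract does not name the kinetic model used; as a hedged illustration, the following sketch fits the standard Tofts model, a common choice for DCE-MRI analysis, to a synthetic concentration curve:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import curve_fit

# The Tofts model is assumed here purely for illustration:
#   Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau
# The arterial input function Cp and all parameter values are synthetic.
t = np.linspace(0.0, 5.0, 100)              # time in minutes
Cp = 5.0 * t * np.exp(-t)                   # toy arterial input function

def tofts(t, Ktrans, kep):
    # Convolution as exp(-kep*t) times cumulative integral of Cp*exp(kep*t)
    integral = cumulative_trapezoid(Cp * np.exp(kep * t), t, initial=0.0)
    return Ktrans * np.exp(-kep * t) * integral

Ct = tofts(t, 0.3, 0.8) + np.random.default_rng(7).normal(0.0, 0.005, t.size)
(Ktrans_hat, kep_hat), _ = curve_fit(tofts, t, Ct, p0=[0.1, 0.5])
```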
Contributors: Martin, Jonathan Michael (Author) / Kodibagkar, Vikram (Thesis director) / Rege, Kaushal (Committee member) / Barrett, The Honors College (Contributor) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-12
Description

In applications such as Magnetic Resonance Imaging (MRI), data are acquired as Fourier samples. Since the underlying images are only piecewise smooth, standard reconstruction techniques will yield the Gibbs phenomenon, which can lead to misdiagnosis. Although filtering will reduce the oscillations at jump locations, it can often have the adverse effect of blurring at these critical junctures, which can also lead to misdiagnosis. Incorporating prior information into reconstruction methods can help reconstruct a sharper solution. For example, compressed sensing (CS) algorithms exploit the expected sparsity of some features of the image. In this thesis, we develop a method to exploit the sparsity in the edges of the underlying image. We design a convex optimization problem that exploits this sparsity to provide an approximation of the underlying image. Our method successfully reduces the Gibbs phenomenon with only minimal "blurring" at the discontinuities. In addition, we see a high rate of convergence in smooth regions.
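A hedged sketch of an edge-sparsity reconstruction in this spirit (not the thesis's exact formulation): recover a piecewise-constant signal from partial Fourier data by minimizing the l1 norm of its finite differences. It assumes the cvxpy package is available.

```python
import numpy as np
import cvxpy as cp

# Recover a piecewise-constant signal from partial Fourier data by promoting
# sparsity of its differences (the "edges"). Sizes and data are illustrative.
n = 128
x_true = np.zeros(n)
x_true[40:80] = 1.0                                  # one edge up, one down
F = np.fft.fft(np.eye(n)) / np.sqrt(n)               # unitary DFT matrix
rows = np.random.default_rng(6).choice(n, 48, replace=False)
A, b = F[rows], F[rows] @ x_true                     # partial Fourier samples

D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)        # circular differences
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(D @ x)),      # sparse-edge objective
                  [A.real @ x == b.real, A.imag @ x == b.imag])
prob.solve()
x_rec = x.value                                      # near-piecewise-constant
```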
Contributors: Wasserman, Gabriel Kanter (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Archibald, Rick (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description

Smart contrast agents allow for noninvasive study of specific events or tissue conditions inside a patient's body using Magnetic Resonance Imaging (MRI). This research aims to develop and characterize novel smart contrast agents for MRI that respond to temperature changes in tissue microenvironments. Transmission electron microscopy, nuclear magnetic resonance, and cell culture growth assays were used to characterize the physical, magnetic, and cytotoxic properties of candidate nanoprobes. The nanoprobes displayed thermosensitive MR properties, with relaxivity decreasing as temperature increased. Future work will focus on generating and characterizing photo-active analogues of the nanoprobes that could be used both for treatment of tissues and for assessment of therapy.
Contributors: Hussain, Khateeb Hyder (Author) / Kodibagkar, Vikram (Thesis director) / Stabenfeldt, Sarah (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description

Glioblastoma Multiforme (GBM) is an aggressive and deadly form of brain cancer with a median survival time of about a year with treatment. Due to the aggressive nature of these tumors and the tendency of gliomas to follow white matter tracts in the brain, each tumor mass has a unique growth pattern. Consequently, it is difficult for neurosurgeons to anticipate where the tumor will spread in the brain, making treatment planning difficult. Archival patient data, including MRI scans depicting the progress of tumors, have been helpful in developing a model to predict glioblastoma proliferation, but the limited number of scans per patient makes the tumor growth rate difficult to determine. Furthermore, patient treatment between scan points can significantly compound the challenge of accurately predicting tumor growth. A partnership with Barrow Neurological Institute has allowed murine studies to be conducted to closely observe tumor growth and potentially improve the current model to more closely resemble intermittent stages of GBM growth without treatment effects.
Contributors: Snyder, Lena Haley (Author) / Kostelich, Eric (Thesis director) / Frakes, David (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05