Description

Predicting the onset of castrate-resistant prostate cancer is critical to lowering medical costs and improving the quality of life of advanced prostate cancer patients. We formulate, compare, and analyze two mathematical models that aim to forecast future levels of prostate-specific antigen (PSA). We accomplish these tasks by employing clinical data from locally advanced prostate cancer patients undergoing androgen deprivation therapy (ADT). While these models are simplifications of a previously published model, they fit the data with similar accuracy and improve forecasting results. Both models describe the progression of androgen resistance. Although Model 1 is simpler than the more realistic Model 2, it can fit clinical data with greater precision. However, we found that Model 2 can forecast future PSA levels more accurately. These findings suggest that including more realistic mechanisms of androgen dynamics in a two-population model may improve predictions of the timing of androgen resistance.
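For readers unfamiliar with two-population formulations, the sketch below shows the general structure such models take: an androgen-sensitive compartment and a castrate-resistant compartment, with serum PSA treated as proportional to total tumor burden. The equations, parameter names, and values here are illustrative assumptions only, not the authors' published models.

```python
# Minimal sketch of a two-population model of androgen-sensitive (x1) and
# castrate-resistant (x2) prostate cancer cells under ADT. The functional forms
# and parameter values are illustrative assumptions, not the authors' equations;
# PSA is taken to be proportional to the total cell burden.
import numpy as np
from scipy.integrate import solve_ivp

def two_population_rhs(t, y, r1, r2, m, d1):
    """ODE right-hand side: growth, ADT-induced death, and conversion to resistance."""
    x1, x2 = y
    dx1 = r1 * x1 - d1 * x1 - m * x1   # sensitive cells: growth minus death and conversion
    dx2 = r2 * x2 + m * x1             # resistant cells: growth plus influx from sensitive pool
    return [dx1, dx2]

def psa(y, c=1.0):
    """Serum PSA modeled as proportional to total tumor cell burden."""
    return c * (y[0] + y[1])

# Simulate 18 months of ADT from an initial, mostly androgen-sensitive tumor.
sol = solve_ivp(two_population_rhs, (0, 540), [1.0, 0.01],
                args=(0.02, 0.025, 1e-4, 0.03), dense_output=True)
print(psa(sol.y[:, -1]))   # PSA proxy forecast at day 540
```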

Contributors: Baez, Javier (Author) / Kuang, Yang (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2016-11-16
Description
The Experimental Data Processing (EDP) software is a C++ GUI-based application that streamlines the process of creating a model for structural systems based on experimental data. EDP is designed to process raw data, filter the data for noise and outliers, create a fitted model to describe that data, complete a probabilistic analysis to describe the variation between replicates of the experimental process, and analyze the reliability of a structural system based on that model. To help design the EDP software to perform the full analysis, the probabilistic and regression modeling aspects of this analysis have been explored. The focus has been on creating and analyzing probabilistic models for the data, adding multivariate and nonparametric fits to raw data, and developing computational techniques that allow these methods to be properly implemented within EDP.

For creating a probabilistic model of replicate data, the normal, lognormal, gamma, Weibull, and generalized exponential distributions have been explored. Goodness-of-fit tests, including the chi-squared, Anderson-Darling, and Kolmogorov-Smirnov tests, have been used to assess how well each of these probabilistic models describes the variation of parameters between replicates of an experimental test. An example using Young's modulus data from a Kevlar-49 swath stress-strain test demonstrates how this analysis is performed within EDP. To implement the distributions, numerical routines for the gamma, beta, and hypergeometric functions were implemented, along with an arbitrary-precision library to store numbers that exceed the range of double-precision floating-point values.

To create a multivariate fit, the multilinear solution was developed as the simplest solution to the multivariate regression problem. This solution was then extended to nonlinear problems that can be linearized into multiple separable terms. These problems were solved analytically with the closed-form solution for multilinear regression, and then numerically with a QR decomposition, which avoids the numerical instabilities associated with matrix inversion. For nonparametric regression, or smoothing, the loess method was developed as a robust technique for filtering noise while maintaining the general structure of the data points. The loess solution addresses the shortcomings of simpler smoothing methods, including the running mean, running line, and kernel smoothing techniques, by combining the strengths of each. The loess method weights each point in a partition of the data set and then fits either a line or a polynomial within that partition. Both linear and quadratic variants were applied to a carbon fiber compression test, showing that the quadratic model was more accurate but the linear model had a shape that was more effective for analyzing the experimental data.

Finally, the EDP program itself was examined to assess its current data-processing functionality, illustrated with shear tests on carbon fiber data, and the functionality still to be developed. The probabilistic and raw data processing capabilities were demonstrated within EDP, and the multivariate and loess analyses were demonstrated using R.
As the functionality and relevant considerations for these methods have been developed, the immediate goal is to finish implementing and integrating these additional features into a version of EDP that performs a full streamlined structural analysis on experimental data.
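As one concrete illustration of the regression machinery described above, the following Python sketch (EDP itself is written in C++) solves a multilinear least-squares problem with a QR decomposition rather than an explicit matrix inverse. The data and variable names are synthetic placeholders, not EDP's implementation.

```python
# Illustration (in Python, not EDP's C++) of solving the multilinear regression
# y ≈ X @ beta with a QR decomposition instead of forming (X^T X)^{-1},
# which avoids the numerical instability of an explicit matrix inverse.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)])
beta_true = np.array([2.0, -1.5, 0.7])
y = X @ beta_true + rng.normal(0, 0.05, 50)

# QR factorization: X = Q R, so R beta = Q^T y reduces to a triangular solve.
Q, R = np.linalg.qr(X)
beta_hat = np.linalg.solve(R, Q.T @ y)
print(beta_hat)   # close to beta_true
```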
Contributors: Markov, Elan Richard (Author) / Rajan, Subramaniam (Thesis director) / Khaled, Bilal (Committee member) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Ira A. Fulton School of Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Joseph Rotblat (1908-2005) was the only physicist to leave the Manhattan Project for moral reasons before its completion. He would spend the rest of his life advocating for nuclear disarmament. His activities for disarmament resulted in the formation, in 1957, of the Pugwash conferences, which emerged as the leading global forum for advancing limits on nuclear weapons during the Cold War. Rotblat's efforts, and the activities of Pugwash, resulted in both being awarded the Nobel Peace Prize in 1995. Rotblat is a central figure in the global history of resistance to the spread of nuclear weapons. He was also an important figure in the emergence, after World War II, of a counter-movement to introduce new social justifications for scientific research and new models for ethics and professionalism among scientists. Rotblat embodies the power of the individual scientist to say "no" and thus, at least individually, to place limits of conscience on his or her scientific activity. This paper explores the political and ethical choices scientists make as part of their effort to behave responsibly and to influence the outcomes of their work. By analyzing three phases of Rotblat's life, I demonstrate how he pursued his ideal of beneficial science, or science that appears to benefit humanity. The three phases are: (1) his decision to leave the Manhattan Project in 1944, (2) his role in the creation of Pugwash in 1957 and in the organization's rise to international prominence, and (3) his winning the Nobel Peace Prize in 1995. These three phases of Rotblat's life provide a singular window into the history of nuclear weapons and the international movement for scientific responsibility in the 50 years since the bombing of Hiroshima in 1945. While this paper does not provide a complete picture of Rotblat's life and times, I argue that his experiences shed important light on the difficult question of the individual responsibility of scientists.
Contributors: Evans, Alison Dawn (Author) / Zachary, Gregg (Thesis director) / Hurlbut, Ben (Committee member) / Francis, Sybil (Committee member) / Barrett, The Honors College (Contributor) / Department of Chemistry and Biochemistry (Contributor) / School of Historical, Philosophical and Religious Studies (Contributor)
Created: 2015-05
Description
The purpose of this study was to determine the ratio of vegetable to fruit incorporated during a fresh vegetable and/or fruit juice diet. Juicing is the process of extracting the liquid part of a plant, fruit, or vegetable; the food can be ground, pressed, and spun to separate the liquid from the pulp. A juice diet involves juicing and consuming a variety of vegetables and fruits. The primary objective of this study was to gather information about the ratio of vegetable to fruit incorporated in freshly made juices during a juice diet, so the study survey inquired about various topics related to ingredient ratio. The survey data allowed for examination of the relationships between ingredient ratio and certain variables (e.g., gender, age, length of time juicing, juice fast participation, and health effects). Study participants were recruited using online social media; Facebook was the primary method for reaching the online juicing community. A written invitation was distributed in several health-related Facebook groups encouraging anyone with juicing experience to complete an anonymous survey, and the post was also shared via Twitter and various health-related websites. The survey responses showed that participants had varying levels of experience with juicing; many were familiar with juice fasting, and many had completed more than one juice fast. Based on the survey response data, the most common ratio of vegetable to fruit incorporated by participants during a juice diet was 80% vegetable to 20% fruit, and the majority of participants indicated daily consumption of freshly made juice containing 70%-100% vegetables. The data also suggest that beginner juicers may be less inclined to incorporate organic produce into their juice diet than advanced juicers. The majority of participants reported positive health benefits during a juice diet, including weight loss, increased energy, and a positive impact on disease symptoms; negative side effects experienced by some participants include frequent urination, headache, and cravings. Cross tabulation calculations between the ratio of ingredients and several variables covered by the survey (length of time juicing, frequency of drinking juice, juice fast participation, number of juice fasts completed, servings of vegetables/fruit in a juice, percent of organic vegetables/fruit used in a juice, perceived positive side effects, and perceived negative side effects) demonstrated statistical significance. This study provided insight into the average ratio of vegetable to fruit incorporated by participants during a juice diet. When analyzing the data it is important to consider that the survey data were self-reported; therefore, every result and conclusion is based on the individual perceptions of the study participants. In future experimentation, the use of medical tests and blood work would be useful to determine the biological and biochemical effects of drinking raw vegetable and/or fruit juice on the human body.
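Cross-tabulation significance tests of the kind mentioned above are commonly carried out with a chi-squared test of independence. The sketch below shows the general workflow on an invented table of counts; the groupings and numbers are placeholders, not the study's actual survey data.

```python
# Sketch of a cross-tabulation significance test using a chi-squared test of
# independence. The counts are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: beginner vs. advanced juicers; columns: vegetable share of the juice.
#                  <50% veg   50-70% veg   >70% veg
table = np.array([[12,        18,          10],
                  [ 8,        25,          40]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```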
Contributors: Mata, Sara Ann (Author) / Mayol-Kreiser, Sandra (Thesis director) / Shepard, Christina (Committee member) / Barrett, The Honors College (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2015-05
Description

Background:
Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor.

Results:
We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise.

Conclusions:
The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling.
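For orientation, the sketch below shows the analysis (update) step of a basic stochastic ensemble Kalman filter, the family of methods to which the Local Ensemble Transform Kalman Filter belongs. It is a simplified stand-in, not the LETKF implementation used here, and the toy state and observation values are made up.

```python
# Minimal sketch of a stochastic ensemble Kalman filter analysis step.
# (The paper uses the more elaborate LETKF; this is only illustrative.)
import numpy as np

def enkf_update(ensemble, obs, H, obs_cov, rng):
    """Update an (n_state, n_members) forecast ensemble with a noisy observation."""
    n_state, n_mem = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                              # ensemble anomalies
    P = A @ A.T / (n_mem - 1)                        # sample forecast covariance
    S = H @ P @ H.T + obs_cov
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    # Perturb the observation for each member (stochastic EnKF).
    perturbed = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), obs_cov, size=n_mem).T
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(1)
ens = rng.normal(10.0, 2.0, size=(2, 20))            # toy 2-variable state, 20 members
H = np.array([[1.0, 0.0]])                           # observe the first state variable only
analysis = enkf_update(ens, np.array([12.0]), H, np.array([[0.5]]), rng)
print(analysis.mean(axis=1))                         # analysis mean pulled toward the observation
```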

Contributors: Kostelich, Eric (Author) / Kuang, Yang (Author) / McDaniel, Joshua (Author) / Moore, Nina Z. (Author) / Martirosyan, Nikolay L. (Author) / Preul, Mark C. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011-12-21
Description
Glioblastoma multiforme (GBM) is the most prevalent brain tumor type, accounting for approximately 40% of all non-metastatic primary tumors in adult patients [1]. GBMs are malignant, grade-4 brain tumors, the most aggressive classification established by the World Health Organization, and are marked by their low survival rate; the median survival time is only twelve months from initial diagnosis, and patients who live more than three years are considered long-term survivors [2]. GBMs are highly invasive, and their diffusive growth pattern makes it impossible to remove the tumors by surgery alone [3]. The purpose of this paper is to use individual patient data to parameterize a model of GBMs that allows data on tumor growth and development to be captured on a clinically relevant time scale. Such an endeavor is the first step toward clinically applicable predictions of GBM growth. Previous research has yielded models that adequately represent the development of GBMs, but they have not attempted to follow specific patient cases through the entire tumor process. Using the model utilized by Kostelich et al. [4], I will attempt to redress this deficiency. In doing so, I will improve upon a family of models that can be used to approximate the time of development and/or the evolution of structure in GBMs. The eventual goal is to incorporate Magnetic Resonance Imaging (MRI) data into a parameterized model of GBMs in such a way that it can be used clinically to predict tumor growth and behavior. Furthermore, I hope to come to a definitive conclusion as to the accuracy of the Kostelich et al. model throughout the development of GBM tumors.
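To illustrate the kind of parameterization workflow described here, the sketch below fits a simple logistic growth law to synthetic serial tumor-volume measurements. The logistic form, parameter names, and data are assumptions for illustration; they are not the Kostelich et al. model or patient data.

```python
# Sketch of parameterizing a simple tumor growth law from serial imaging data.
# A logistic curve stands in for the actual model; the "MRI" volumes below are
# synthetic and only illustrate the fitting workflow.
import numpy as np
from scipy.optimize import curve_fit

def logistic_volume(t, V0, r, K):
    """Logistic growth: volume saturating at carrying capacity K."""
    return K / (1 + (K / V0 - 1) * np.exp(-r * t))

t_days = np.array([0, 30, 60, 90, 120, 150])              # imaging dates
volumes = np.array([2.1, 3.4, 5.5, 8.0, 11.2, 14.0])       # tumor volume, cm^3 (synthetic)

params, _ = curve_fit(logistic_volume, t_days, volumes, p0=[2.0, 0.02, 30.0])
V0, r, K = params
print(f"fitted growth rate r = {r:.4f}/day, carrying capacity K = {K:.1f} cm^3")
```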
Contributors: Manning, Miles (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Preul, Mark (Committee member) / Barrett, The Honors College (Contributor) / College of Liberal Arts and Sciences (Contributor)
Created: 2012-12
Description
Skin elasticity, a key indicator of skin health, is influenced by various factors including diet and body composition. This study, led by Myka Williams as part of her Barrett, The Honors College Thesis Project at Arizona State University under the guidance of Dr. Carol Johnston and Dr. Sandy Mayol-Kreiser, investigates the relationship between diet, specifically vegetarian and omnivorous patterns, and skin elasticity. Utilizing the ElastiMeter from Delfin Technologies, we assessed the skin elasticity of 38 individuals from the ASU community. Our findings revealed no significant difference in skin elasticity between the dietary groups; however, correlations emerged between participants' Body Mass Index (BMI) and skin elasticity. These initial findings suggest a potential influence of body composition on skin health, warranting further research with additional parameters to strengthen and expand upon these observations.
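As a rough illustration of the analysis described above, the sketch below compares skin elasticity between two diet groups and correlates elasticity with BMI using standard tests. All numbers are invented placeholders, not the study's measurements.

```python
# Sketch of the group comparison and correlation analysis described above:
# an independent-samples t-test between diet groups and a Pearson correlation
# between BMI and skin elasticity. All values are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
elasticity_veg = rng.normal(48.0, 6.0, 19)    # N/m, vegetarian group (synthetic)
elasticity_omni = rng.normal(47.0, 6.0, 19)   # N/m, omnivorous group (synthetic)
bmi = rng.normal(24.0, 3.5, 38)
elasticity_all = np.concatenate([elasticity_veg, elasticity_omni])

t_stat, p_group = stats.ttest_ind(elasticity_veg, elasticity_omni)
r, p_corr = stats.pearsonr(bmi, elasticity_all)
print(f"group difference p={p_group:.3f}; BMI vs. elasticity r={r:.2f}, p={p_corr:.3f}")
```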
Contributors: Williams, Myka (Author) / Johnston, Carol (Thesis director) / Mayol-Kreiser, Sandy (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor) / School of Human Evolution & Social Change (Contributor)
Created: 2024-05