Item 134308
Description

Cancer is one of the leading causes of death globally according to the World Health Organization. Although improved treatments and early diagnoses have reduced cancer-related mortalities, metastatic disease remains a major clinical challenge. The local tumor microenvironment plays a significant role in cancer metastasis, where tumor cells respond and adapt to a plethora of biochemical and biophysical signals from stromal cells and extracellular matrix (ECM) proteins. Due to these complexities, there is a critical need to understand the molecular mechanisms underlying cancer metastasis to facilitate the discovery of more effective therapies. In the past few years, the integration of advanced biomaterials and microengineering approaches has initiated the development of innovative platform technologies for cancer research. These technologies enable the creation of biomimetic in vitro models with physiologically relevant (i.e., in vivo-like) characteristics to conduct studies ranging from fundamental cancer biology to high-throughput drug screening. In this review article, we discuss the biological significance of each step of the metastatic cascade and provide a broad overview of recent progress in recapitulating these stages using advanced biomaterials and microengineered technologies. In each section, we highlight the advantages and shortcomings of each approach and provide our perspectives on future directions.
Contributors: Peela, Nitish (Author) / Nikkhah, Mehdi (Thesis director) / LaBaer, Joshua (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Item 135355
Description

Glioblastoma multiforme (GBM) is a malignant, aggressive, and infiltrative cancer of the central nervous system with a median survival of 14.6 months under standard care. Diagnosis of GBM is made using medical imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). Treatment is informed by medical images and includes chemotherapy, radiation therapy, and surgical removal if the tumor is surgically accessible. Treatment seldom results in a significant increase in longevity, partly due to the lack of precise information regarding tumor size and location. This lack of information arises from the physical limitations of MR and CT imaging coupled with the diffusive nature of glioblastoma tumors. GBM tumor cells can migrate far beyond the visible boundaries of the tumor and will result in a recurring tumor if not killed or removed. Since medical images are the only readily available information about the tumor, we aim to improve mathematical models of tumor growth to better estimate the missing information. Particularly, we investigate the effect of random variation in tumor cell behavior (anisotropy) using stochastic parameterizations of an established proliferation-diffusion model of tumor growth. To evaluate the performance of our mathematical model, we use MR images from an animal model consisting of murine GL261 tumors implanted in immunocompetent mice, which provides consistency in tumor initiation and location, immune response, genetic variation, and treatment. Compared to non-stochastic simulations, stochastic simulations showed improved volume accuracy when proliferation variability was high, but diffusion variability was found to only marginally affect tumor volume estimates. Neither proliferation nor diffusion variability significantly affected the spatial distribution accuracy of the simulations. While certain cases of stochastic parameterizations improved volume accuracy, they failed to significantly improve simulation accuracy overall. Both the non-stochastic and stochastic simulations failed to achieve over 75% spatial distribution accuracy, suggesting that the underlying structure of the model fails to capture one or more biological processes that affect tumor growth. Two biological features that are candidates for further investigation are angiogenesis and anisotropy resulting from differences between white and gray matter. Time-dependent proliferation and diffusion terms could be introduced to model angiogenesis, and diffusion tensor imaging (DTI) could be used to differentiate between white and gray matter, which might allow for improved estimates of brain anisotropy.
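For reference, the proliferation-diffusion model the abstract refers to is commonly written in the Fisher-KPP form sketched below; in the stochastic parameterizations described above, the diffusion coefficient and proliferation rate would be drawn from probability distributions rather than held constant. The exact parameterization used in the thesis is not reproduced here.

```latex
% Standard proliferation-diffusion model for tumor cell density u(x, t);
% D is the diffusion (migration) coefficient, rho the proliferation rate,
% and K the tissue carrying capacity. In stochastic parameterizations,
% D and rho become random variables or fields.
\begin{equation}
  \frac{\partial u}{\partial t}
  = \nabla \cdot \left( D \, \nabla u \right)
  + \rho \, u \left( 1 - \frac{u}{K} \right)
\end{equation}
```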
Contributors: Anderies, Barrett James (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Stepien, Tracy (Committee member) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Item 136633
Description

Breast and other solid tumors exhibit high and varying degrees of intra-tumor heterogeneity, resulting in targeted therapy resistance and other challenges that make the management and treatment of these diseases rather difficult. Due to the presence of admixtures of non-neoplastic cells with polyclonal cell populations, it is difficult to define cancer genomes in patient samples. By isolating tumor cells from normal cells, and enriching distinct clonal populations, clinically relevant genomic aberrations that drive disease can be identified in patients in vivo. An in-depth analysis of clonal architecture and tumor heterogeneity was performed in a stage II chemoradiation-naïve breast cancer from a sixty-five-year-old patient. DAPI-based DNA content measurements and DNA content-based flow sorting were used to isolate nuclei from distinct clonal populations of diploid and aneuploid tumor cells in surgical tumor samples. We combined DNA content-based flow cytometry and ploidy analysis with high-definition array comparative genomic hybridization (aCGH) and next-generation sequencing technologies to interrogate the genomes of multiple biopsies from the breast cancer. The detailed profiles of ploidy, copy number aberrations and mutations were used to recreate and map the lineages present within the tumor. The clonal analysis revealed driver events for tumor progression (a heterozygous germline BRCA2 mutation converted to homozygosity within the tumor by a copy number event, and the constitutive activation of Notch and Akt signaling pathways). The highlighted approach has broad implications in the study of tumor heterogeneity by providing a uniquely high-resolution view of polyclonal tumors that can advance effective therapies and clinical management of patients with this disease.
Contributors: Laughlin, Brady Scott (Author) / Ankeny, Casey (Thesis director) / Barrett, Michael (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / School for the Science of Health Care Delivery (Contributor)
Created: 2015-05
Item 136798
Description

The purpose of this project was to examine the viability of protein biomarkers in pre-symptomatic detection of lung cancer. Regular screening has been shown to vastly improve patient survival outcome. Lung cancer currently has the highest occurrence and mortality of all cancers, and so a means of screening would be highly beneficial. In this research, the biomarker neuron-specific enolase (Enolase-2, eno2), a marker of small-cell lung cancer, was detected at varying concentrations using electrochemical impedance spectroscopy in order to develop a mathematical model for predicting protein expression from a measured impedance value at a determined optimum frequency. The extent of protein expression would indicate the possibility of the patient having small-cell lung cancer. The optimum frequency was found to be 459 Hz, and the mathematical model to determine eno2 concentration based on impedance was found to be y = 40.246x + 719.5 with an R² value of 0.82237. These results suggest that this approach could provide an option for the development of small-cell lung cancer screening utilizing electrochemical technology.
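To make the reported calibration concrete, the sketch below inverts the linear model to recover an eno2 concentration estimate from an impedance measurement at the reported 459 Hz optimum. It assumes y is the measured impedance response and x the eno2 concentration; the variable names and example reading are illustrative, not taken from the thesis.

```python
# Minimal sketch: invert the reported calibration y = 40.246*x + 719.5 to
# estimate eno2 concentration from an impedance measurement at 459 Hz.
# Assumes y is the impedance response and x the eno2 concentration
# (units as defined in the thesis, not restated here).

SLOPE = 40.246
INTERCEPT = 719.5

def estimate_eno2_concentration(impedance: float) -> float:
    """Estimate eno2 concentration from a measured impedance value."""
    return (impedance - INTERCEPT) / SLOPE

if __name__ == "__main__":
    measured_impedance = 900.0  # hypothetical reading at the 459 Hz optimum
    concentration = estimate_eno2_concentration(measured_impedance)
    print(f"Estimated eno2 concentration: {concentration:.2f}")
```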
Contributors: Evans, William Ian (Author) / LaBelle, Jeffrey (Thesis director) / Spano, Mark (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05
Item 136587
Description

In the words of W. Edwards Deming, "the central problem in management and in leadership is failure to understand the information in variation." While many quality management programs propose the institution of technical training in advanced statistical methods, this paper proposes that by understanding the fundamental information behind statistical theory, and by minimizing bias and variance while fully utilizing the available information about the system at hand, one can make valuable, accurate predictions about the future. Combining this knowledge with the work of quality gurus W. E. Deming, Eliyahu Goldratt, and Dean Kashiwagi, a framework for making valuable predictions for continuous improvement is developed. After this information is synthesized, it is concluded that the best way to make accurate, informative predictions about the future is to "balance the present and future," seeing the future through the lens of the present and thus minimizing bias, variance, and risk.
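The appeal to minimizing bias and variance can be read through the textbook decomposition of expected squared prediction error, sketched below; this is a standard identity, not a formula taken from the thesis.

```latex
% Bias-variance decomposition of expected squared prediction error for an
% estimator \hat{f}(x) of a target y = f(x) + \varepsilon with noise
% variance \sigma^2:
\begin{equation}
  \mathbb{E}\left[ \left( y - \hat{f}(x) \right)^{2} \right]
  = \underbrace{\left( \mathbb{E}[\hat{f}(x)] - f(x) \right)^{2}}_{\text{bias}^{2}}
  + \underbrace{\mathbb{E}\left[ \left( \hat{f}(x) - \mathbb{E}[\hat{f}(x)] \right)^{2} \right]}_{\text{variance}}
  + \sigma^{2}
\end{equation}
```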
Contributors: Synodis, Nicholas Dahn (Author) / Kashiwagi, Dean (Thesis director, Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Item 136550
Description

The NFL is one of the largest and most influential industries in the world. In America, few organizations have a stronger hold on the culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games were collected and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was compared against that average. Using ordinary least squares linear regression, the project then defines a model that quantifies each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, with the goal of finding where each position should be drafted to give the best expected payoff based on the results of the regression in part one.
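A minimal sketch of the ordinary-least-squares setup described above follows; the feature construction (per-position performance aggregated to the team level), the success metric (season wins), and the column names are illustrative assumptions, not the thesis's exact specification.

```python
# Minimal OLS sketch: regress a team success metric (e.g., season wins) on
# aggregated per-position performance measures to estimate positional weights.
# Column names, positions, and the data file are hypothetical placeholders.
import numpy as np
import pandas as pd

POSITIONS = ["QB", "RB", "WR", "OL", "DL", "LB", "CB", "S"]  # illustrative grouping

def fit_position_weights(team_seasons: pd.DataFrame) -> pd.Series:
    """Fit wins ~ per-position performance via ordinary least squares."""
    X = np.column_stack([np.ones(len(team_seasons))] +
                        [team_seasons[p].to_numpy() for p in POSITIONS])
    y = team_seasons["wins"].to_numpy()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return pd.Series(coef[1:], index=POSITIONS)  # drop the intercept

# Usage with a hypothetical one-row-per-team-season table:
# team_seasons = pd.read_csv("team_seasons.csv")
# print(fit_position_weights(team_seasons).sort_values(ascending=False))
```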
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Item 135858
Description

The concentration factor edge detection method was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The method approximates the singular support of a piecewise smooth function using an altered Fourier conjugate partial sum. The accuracy and characteristic features of the resulting jump function approximation depend on these filters, known as concentration factors. Recent research showed that these concentration factors could be designed using a flexible iterative framework, improving upon the overall accuracy and robustness of the method, especially in the case where some Fourier data are untrustworthy or altogether missing. Hypothesis testing (HT) methods were used to determine how well the original concentration factor method could locate edges using noisy Fourier data. This thesis combines the iterative aspect of concentration factor design with hypothesis testing by presenting a new algorithm that incorporates multiple concentration factors into one statistical test, which proves more effective at determining jump discontinuities than the previous HT methods. This thesis also examines how the quantity and location of Fourier data affect the accuracy of HT methods. Numerical examples are provided.
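For readers unfamiliar with the method, the jump-function approximation referred to above is typically written as below, following the standard concentration-factor literature; sign and normalization conventions vary, and the specific factors and statistical tests developed in the thesis are not reproduced here.

```latex
% Concentration-factor approximation of the jump function
% [f](x) = f(x^+) - f(x^-) of a piecewise smooth f, computed from its
% first N Fourier coefficients \hat{f}_k. The filter \sigma (the
% concentration factor) governs accuracy and robustness; sign and
% normalization conventions vary across the literature.
\begin{equation}
  S_N^{\sigma}[f](x)
  = i \sum_{0 < |k| \le N} \operatorname{sgn}(k)\,
      \sigma\!\left(\frac{|k|}{N}\right) \hat{f}_k \, e^{ikx}
  \;\approx\; [f](x)
\end{equation}
```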
Contributors: Lubold, Shane Michael (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Viswanathan, Aditya (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Item 136199
Description

Despite the 40-year war on cancer, very limited progress has been made in developing a cure for the disease. This failure has prompted the reevaluation of the causes and development of cancer. One resulting model, coined the atavistic model of cancer, posits that cancer is a default phenotype of the cells of multicellular organisms which arises when the cell is subjected to an unusual amount of stress. Since this default phenotype is similar across cell types and even organisms, it seems it must be an evolutionarily ancestral phenotype. We take a phylostratigraphical approach, but systematically add species divergence time data to estimate gene ages numerically and use these ages to investigate the ages of genes involved in cancer. We find that ancient disease-recessive cancer genes are significantly enriched for DNA repair and SOS activity, which seems to imply that a core component of cancer development is not the regulation of growth, but the regulation of mutation. Verification of this finding could drastically improve cancer treatment and prevention.
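The enrichment claim can be illustrated with the kind of contingency-table test commonly used for such analyses; the gene sets and annotation labels below are hypothetical placeholders, and the thesis's actual statistical procedure may differ.

```python
# Minimal sketch: test whether ancient cancer genes are enriched for a
# functional category (e.g., DNA repair) with Fisher's exact test.
# Gene sets and annotations are hypothetical placeholders.
from scipy.stats import fisher_exact

def enrichment_test(genes: set, ancient: set, dna_repair: set):
    """2x2 contingency test: ancient vs. not ancient, DNA repair vs. not."""
    a = len(genes & ancient & dna_repair)
    b = len((genes & ancient) - dna_repair)
    c = len((genes - ancient) & dna_repair)
    d = len((genes - ancient) - dna_repair)
    odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="greater")
    return odds_ratio, p_value

# Usage with hypothetical sets of gene identifiers:
# genes = {...}; ancient = {...}; dna_repair = {...}
# print(enrichment_test(genes, ancient, dna_repair))
```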
Contributors: Orr, Adam James (Author) / Davies, Paul (Thesis director) / Bussey, Kimberly (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor) / School of Life Sciences (Contributor)
Created: 2015-05
Item 136255
Description

Over the course of six months, we have worked in partnership with Arizona State University and a leading producer of semiconductor chips in the United States market (referred to as the "Company"), lending our skills in finance, statistics, model building, and external insight. We attempt to design models that help predict how much time it takes to implement a cost-saving project. These projects had previously been considered only on the merit of cost savings, but with an added dimension of time, we hope to forecast implementation time from a number of variables. Such a forecast can then be applied to an expense project prioritization model which relates time and cost savings together, compares many different projects simultaneously, and returns a series of present value calculations over different ranges of time. The goal is twofold: assist with an accurate prediction of a project's time to implementation, and provide a basis to compare different projects based on their present values, ultimately helping to reduce the Company's manufacturing costs and improve gross margins. We believe this approach, and the research conducted toward this goal, is most valuable for the Company. Two coaches from the Company have provided assistance and clarified our questions when necessary throughout our research. In this paper, we begin by defining the problem, setting an objective, and establishing a checklist to monitor our progress. Next, our attention shifts to the data: making observations, trimming the dataset, and framing and scoping the variables to be used for the analysis portion of the paper. Before creating a hypothesis, we perform a preliminary statistical analysis of certain individual variables to enrich our variable selection process. After the hypothesis, we run multiple linear regressions with project duration as the dependent variable. After regression analysis and a test for robustness, we shift our focus to an intuitive model based on rules of thumb. We relate these models to an expense project prioritization tool developed using Microsoft Excel software. Our deliverables to the Company come in the form of (1) a rules-of-thumb intuitive model and (2) an expense project prioritization tool.
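A minimal sketch of the prioritization idea, combining a predicted implementation time with discounted cost savings, follows; the discount rate, horizon, and example inputs are illustrative assumptions rather than the tool delivered to the Company.

```python
# Minimal sketch: rank cost-saving projects by the present value of their
# savings, where savings begin only after the predicted implementation time.
# Discount rate, horizon, and project inputs are illustrative assumptions.

def project_present_value(annual_savings: float,
                          predicted_duration_years: float,
                          discount_rate: float = 0.10,
                          horizon_years: int = 5) -> float:
    """Discount annual savings that start once implementation is complete."""
    pv = 0.0
    for year in range(1, horizon_years + 1):
        if year > predicted_duration_years:
            pv += annual_savings / (1.0 + discount_rate) ** year
    return pv

if __name__ == "__main__":
    # hypothetical projects: (annual savings in dollars, predicted duration in years)
    projects = {"Project A": (120_000, 0.5), "Project B": (200_000, 2.0)}
    ranked = sorted(projects, key=lambda p: project_present_value(*projects[p]),
                    reverse=True)
    for name in ranked:
        print(name, round(project_present_value(*projects[name]), 2))
```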
Contributors: Al-Assi, Hashim (Co-author) / Chiang, Robert (Co-author) / Liu, Andrew (Co-author) / Ludwick, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2015-05
Item 133171
Description

Magnetic resonance imaging (MRI) data of metastatic brain cancer patients at the Barrow Neurological Institute (BNI) sparked interest in the radiology department due to the possibility that tumor size distributions might follow a power law or an exponential distribution. To investigate the growth trends of metastatic brain tumors, this thesis analyzes tumor volume measurements from the BNI data and attempts to explain the observed size distributions through mathematical models. More specifically, a basic stochastic cellular automaton model is used, and its three-dimensional results show size distributions similar to those of the BNI data. Results of the models are evaluated using a likelihood ratio test, which suggests that, when tumor volumes are computed under an assumption of sphericity, the size distributions are significantly better described by a power law than by an exponential distribution.
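The comparison described above is commonly carried out with maximum-likelihood fits and a log-likelihood ratio in the style of Clauset, Shalizi, and Newman; the sketch below follows that standard recipe with hypothetical data and is not necessarily the thesis's exact procedure.

```python
# Minimal sketch: compare a power-law fit against an exponential fit for
# tumor volumes above a threshold x_min using a log-likelihood ratio
# (standard Clauset-Shalizi-Newman style recipe; data are hypothetical).
import numpy as np

def log_likelihood_ratio(volumes: np.ndarray, x_min: float) -> float:
    """Positive values favor the power law; negative favor the exponential."""
    x = volumes[volumes >= x_min]
    n = len(x)
    # MLE for the continuous power-law exponent alpha above x_min
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    ll_power = n * np.log((alpha - 1.0) / x_min) - alpha * np.sum(np.log(x / x_min))
    # MLE for the rate of an exponential shifted to start at x_min
    lam = 1.0 / np.mean(x - x_min)
    ll_exp = n * np.log(lam) - lam * np.sum(x - x_min)
    return ll_power - ll_exp

# Usage with hypothetical volumes (e.g., mm^3 computed assuming sphericity):
# volumes = np.loadtxt("tumor_volumes.txt")
# print(log_likelihood_ratio(volumes, x_min=1.0))
```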
Contributors: Freed, Rebecca (Co-author) / Snopko, Morgan (Co-author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / WPC Graduate Programs (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12