Description

Predicting the binding sites of proteins has historically relied on the determination of protein structural data. However, the ability to use binding data obtained from a simple assay and to make the same predictions computationally using only sequence information would be more efficient in both time and resources. The purpose of this study was to evaluate the effectiveness of an algorithm developed to predict regions of high binding on proteins as it applies to determining the regions of interaction between binding partners. This approach was applied to tumor necrosis factor alpha (TNFα), its receptor TNFR2, programmed cell death protein-1 (PD-1), and one of its ligands, PD-L1. The algorithms accurately predicted the binding region between TNFα and TNFR2, in which the interacting residues are sequential on TNFα, but were less accurate at predicting discontinuous regions of binding. The interface of PD-1 and PD-L1 contained continuous residues interacting with each other; however, this region was predicted to bind more weakly than regions on the external portions of the molecules. Limitations of this approach include the use of a linear search window (resulting in an inability to predict discontinuous binding residues) and the use of proteins with unnaturally exposed regions, in the case of PD-1 and PD-L1 (resulting in observed interactions that would not occur normally). Overall, however, this method was very effective at using the available information to make accurate predictions. The combination of a microarray to obtain binding information and a computer algorithm to analyze it is a versatile tool that can be adapted to refine accuracy.
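As a hedged illustration of the linear search window mentioned above, the sketch below averages per-residue binding scores over a sliding window along a sequence; the window length, score array, and function name are illustrative assumptions, not details taken from the thesis. A window of this kind can only flag continuous stretches, consistent with the stated limitation regarding discontinuous binding residues.

```python
import numpy as np

def windowed_binding_scores(residue_scores, window=10):
    """Average per-residue binding scores over a sliding linear window.

    residue_scores: 1D array of binding intensities (e.g., derived from a
    peptide microarray assay), one value per residue in the sequence.
    Returns window-averaged scores; peaks suggest candidate high-binding
    (continuous) regions.
    """
    scores = np.asarray(residue_scores, dtype=float)
    kernel = np.ones(window) / window
    # 'valid' keeps only windows fully inside the sequence
    return np.convolve(scores, kernel, mode="valid")

if __name__ == "__main__":
    # Hypothetical usage: locate the strongest-binding window on a sequence
    rng = np.random.default_rng(0)
    scores = rng.random(120)              # placeholder per-residue scores
    avg = windowed_binding_scores(scores, window=10)
    start = int(np.argmax(avg))
    print(f"Top-scoring window: residues {start}-{start + 9}")
```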
Contributors: Brooks, Meilia Catherine (Author) / Woodbury, Neal (Thesis director) / Diehnelt, Chris (Committee member) / Ghirlanda, Giovanna (Committee member) / Department of Psychology (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Prostate cancer is the second most common kind of cancer in men. Fortunately, it has a 99% survival rate. To achieve such a survival rate, a variety of aggressive therapies are used to treat prostate cancers that are caught early. Androgen deprivation therapy (ADT) is a therapy that is given to patients in cycles. This study analyzed which factors in a group of 79 patients were associated with continuing or discontinuing the treatment. This was done using naïve Bayes classification, a machine-learning algorithm. The algorithm identified high testosterone as an indicator that a patient would persevere with the treatment, but it failed to produce statistically significant prediction rates.
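A minimal sketch of naïve Bayes classification on patient data of this kind is shown below, assuming hypothetical clinical features and placeholder values; the actual features, labels, and data from the 79-patient cohort are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per patient, columns such as
# baseline PSA, baseline testosterone, and age (placeholder values).
rng = np.random.default_rng(1)
X = rng.normal(size=(79, 3))
y = rng.integers(0, 2, size=79)   # 1 = continued ADT, 0 = discontinued

# Gaussian naïve Bayes assumes conditionally independent, normally
# distributed features within each class.
model = GaussianNB()
scores = cross_val_score(model, X, y, cv=5)
print("Cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```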
Contributors: Millea, Timothy Michael (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description

Glioblastoma multiforme (GBM) is a malignant, aggressive, and infiltrative cancer of the central nervous system with a median survival of 14.6 months under standard care. Diagnosis of GBM is made using medical imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). Treatment is informed by medical images and includes chemotherapy, radiation therapy, and surgical removal if the tumor is surgically accessible. Treatment seldom results in a significant increase in longevity, partly due to the lack of precise information regarding tumor size and location. This lack of information arises from the physical limitations of MR and CT imaging coupled with the diffusive nature of glioblastoma tumors. GBM tumor cells can migrate far beyond the visible boundaries of the tumor and will result in a recurring tumor if not killed or removed. Since medical images are the only readily available information about the tumor, we aim to improve mathematical models of tumor growth to better estimate the missing information. In particular, we investigate the effect of random variation in tumor cell behavior (anisotropy) using stochastic parameterizations of an established proliferation-diffusion model of tumor growth. To evaluate the performance of our mathematical model, we use MR images from an animal model consisting of murine GL261 tumors implanted in immunocompetent mice, which provides consistency in tumor initiation and location, immune response, genetic variation, and treatment. Compared to non-stochastic simulations, stochastic simulations showed improved volume accuracy when proliferation variability was high, but diffusion variability was found to only marginally affect tumor volume estimates. Neither proliferation nor diffusion variability significantly affected the spatial distribution accuracy of the simulations. While certain stochastic parameterizations improved volume accuracy, they failed to significantly improve simulation accuracy overall. Both the non-stochastic and stochastic simulations failed to achieve over 75% spatial distribution accuracy, suggesting that the underlying structure of the model fails to capture one or more biological processes that affect tumor growth. Two biological features that are candidates for further investigation are angiogenesis and anisotropy resulting from differences between white and gray matter. Time-dependent proliferation and diffusion terms could be introduced to model angiogenesis, and diffusion tensor imaging (DTI) could be used to differentiate between white and gray matter, which might allow for improved estimates of brain anisotropy.
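The established proliferation-diffusion model referenced above is commonly written as a reaction-diffusion equation of Fisher-KPP type; the exact parameterization and the form of the stochastic perturbations used in this work are not specified here, so the following is a representative sketch.

```latex
\[
\frac{\partial c}{\partial t}
  = \nabla \cdot \bigl( D(\mathbf{x})\,\nabla c \bigr)
  + \rho(\mathbf{x})\, c \left( 1 - \frac{c}{K} \right)
\]
% c(x,t): tumor cell density; D: diffusion (migration) rate;
% rho: proliferation rate; K: carrying capacity.
% Stochastic parameterizations replace D and/or rho with random variables
% or random fields to represent cell-to-cell variability.
```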
Contributors: Anderies, Barrett James (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Stepien, Tracy (Committee member) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The free-base tetra-tolyl-porphyrin and the corresponding cobalt and iron porphyrin complexes were synthesized and characterized to show that this class of compounds can provide promising, tunable catalysts for carbon dioxide reduction. In cyclic voltammetry experiments, the iron porphyrin showed an onset of ‘catalytic current’ at an earlier potential than the cobalt porphyrin in organic solutions sparged with carbon dioxide. The cobalt porphyrin yielded larger catalytic currents, but at the same potential as the electrode. This difference, along with the significant changes in the porphyrin’s electronic, optical, and redox properties, showed that its capability for carbon dioxide reduction can be controlled by the choice of metal ion, affording unique opportunities for applications in solar fuels catalysis and photochemical reactions.
Contributors: Skibo, Edward Kim (Author) / Moore, Gary (Thesis director) / Woodbury, Neal (Committee member) / School of Molecular Sciences (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Peptide microarrays may prove to be a powerful tool for proteomics research and clinical diagnostic applications. Fodor et al. and Maurer et al. have shown proof-of-concept methods of light- and electrochemically-directed peptide microarray fabrication on glass and semiconductor microchips, respectively. In this work, peptide microarray fabrication based on the above techniques was optimized. In addition, MALDI mass spectrometry-based characterization of peptide synthesis on semiconductor microchips was developed, and novel applications of the CombiMatrix (CBMX) platform for electrochemically controlled synthesis were explored. We investigated the performance of 2-(2-nitrophenyl)propoxycarbonyl (NPPOC) derivatives as photolabile protecting groups. Specifically, the influence of substituents at the 4 and 5 positions of the phenyl ring of the NPPOC group on the rate of photolysis and the yield of the amine was investigated. The results indicated that substituents capable of forming a π-network with the nitro group enhanced both the rate of photolysis and the yield. Once such properly substituted NPPOC groups were used, the rate of photolysis and yield depended on the nature of the protected amino group, indicating that a different chemical step during the photo-cleavage process had become rate-limiting. We also focused on electrochemically directed parallel synthesis of high-density peptide microarrays using the CBMX technology referred to above, which uses electrochemically generated acids to perform patterned chemistry. Several issues related to peptide synthesis on the CBMX platform were studied and optimized, with emphasis placed on the reactions of electrochemically generated acids during the deprotection step of peptide synthesis. We developed a MALDI mass spectrometry-based method to determine the chemical composition of microarray syntheses directly on the feature. This method uses non-diffusional chemical cleavage from the surface, making the chemical characterization of high-density microarray features simple, accurate, and amenable to high throughput. CBMX Corp. has developed a microarray reader based on electrochemical detection of redox species. Several parameters of the instrument were studied and optimized, and novel redox applications of peptide microarrays on the CBMX platform were investigated using the instrument, including (i) a search for metal-binding catalytic peptides that reduce the overpotential associated with the water oxidation reaction and (ii) immobilization of peptide microarrays using electropolymerized polypyrrole.
Contributors: Kumar, Pallav (Author) / Woodbury, Neal (Thesis advisor) / Allen, James (Committee member) / Johnston, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Traditional reinforcement learning (RL) learns policies with respect to the reward available from the environment, but learning in a complex domain sometimes requires wisdom that comes from a wide range of experience. In behavior-based robotics, it has been observed that a complex behavior can be described by a combination of simpler behaviors. It is tempting to apply a similar idea, combining simpler behaviors in a meaningful way to tailor the complex combination. Such an approach would enable faster learning and modular design of behaviors. Complex behaviors can themselves be combined with other behaviors to create even more advanced behaviors, resulting in a rich set of possibilities. As in RL, the combined behavior can keep evolving by interacting with the environment. The requirement of this method is to specify a reasonable set of simple behaviors. In this research, I present an algorithm that aims to combine behaviors such that the resulting behavior has characteristics of each individual behavior. This approach is inspired by behavior-based robotics, such as the subsumption architecture and motor schema-based design. The combination algorithm outputs n weights used to combine the behaviors linearly. The weights are state-dependent and change dynamically at every step in an episode. The idea is tested on discrete and continuous environments such as OpenAI's "Lunar Lander" and "Bipedal Walker". Results are compared with related approaches, including multi-objective RL, hierarchical RL, transfer learning, and basic RL. The combination of behaviors is observed to be a novel way of learning that helps the agent achieve the required characteristics. Because a combination is learned for each state, the agent learns faster and more efficiently than with other, similar approaches. The agent demonstrates characteristics of multiple behaviors, which helps it learn and adapt to the environment. Future directions are also suggested as possible extensions of this research.
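A minimal sketch of the linear, state-dependent combination of behaviors described above is given below; the behavior policies, weight function, and toy state are illustrative assumptions rather than the thesis's actual implementation.

```python
import numpy as np

def combine_actions(state, behaviors, weight_fn):
    """Linearly combine the actions proposed by simple behaviors.

    behaviors: list of callables, each mapping a state to a continuous
               action vector (e.g., pre-trained simple policies).
    weight_fn: callable mapping a state to n non-negative weights summing
               to 1 (the state-dependent weights learned by the
               combination algorithm).
    """
    actions = np.stack([b(state) for b in behaviors])       # (n, action_dim)
    weights = np.asarray(weight_fn(state)).reshape(-1, 1)   # (n, 1)
    return (weights * actions).sum(axis=0)                  # weighted sum

if __name__ == "__main__":
    # Hypothetical usage with two toy behaviors on a 2-D action space
    hover = lambda s: np.array([0.0, 1.0])    # e.g., a "hover" behavior
    drift = lambda s: np.array([1.0, 0.0])    # e.g., a "move right" behavior
    softmax_weights = lambda s: np.exp(s[:2]) / np.exp(s[:2]).sum()
    state = np.array([0.2, -0.1, 0.0, 0.0])
    print(combine_actions(state, [hover, drift], softmax_weights))
```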
Contributors: Vora, Kevin Jatin (Author) / Zhang, Yu (Thesis advisor) / Yang, Yezhou (Committee member) / Praharaj, Sarbeswar (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Computational models have long been used to describe and predict the outcomes of complex immunological processes. The dissertation work described here centers on the construction of multiscale computational immunology models that derive biological insights at the population, systems, and atomistic levels. First, SARS-CoV-2 mortality is investigated through the lens of the predicted robustness of CD8+ T cell responses in 23 different populations. The robustness of CD8+ T cell responses in a given population was modeled by predicting the efficiency with which endemic MHC-I protein variants present peptides derived from SARS-CoV-2 proteins to circulating T cells. To accomplish this task, an algorithm, called EnsembleMHC, was developed to predict viral peptides with a high probability of being recognized by CD8+ T cells. It was discovered that there was significant variation in the efficiency of different MHC-I protein variants in presenting SARS-CoV-2-derived peptides, and countries enriched in variants with high presentation efficiency had significantly lower mortality rates. Second, a biophysics-based MHC-I peptide prediction algorithm was developed. MHC-I is the most polymorphic protein in the human genome, with polymorphisms in the peptide-binding region causing striking changes in the amino acid compositions, or binding motifs, of the peptide species capable of stable binding. A deep learning model, coined HLA-Inception, was trained to predict peptide binding using only biophysical properties, namely electrostatic potential. HLA-Inception proved extremely accurate and efficient at predicting peptide binding motifs and was used to determine the binding motifs of 5,821 MHC-I protein variants. Finally, the impact of stalk glycosylation on NL63 spike protein dynamics was investigated. Previous data have shown that coronavirus crown glycans play an important role in immune evasion and receptor binding; however, little is known about the role of the stalk glycans. Through the integration of computational biology, experimental data, and physics-based simulations, the stalk glycans were shown to heavily influence the bending angle of the spike protein, with a particular emphasis on the glycan at position 1242. Further investigation revealed that removal of the N1242 glycan significantly reduced infectivity, highlighting a new potential therapeutic target. Overall, these investigations and the associated innovations in integrative modeling demonstrate the value of multiscale computational approaches for deriving biological insight at the population, systems, and atomistic levels.
Contributors: Wilson, Eric Andrew (Author) / Anderson, Karen (Thesis advisor) / Singharoy, Abhishek (Thesis advisor) / Woodbury, Neal (Committee member) / Sulc, Petr (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Climate change is one of the most pressing issues affecting the world today. One of the impacts of climate change is on the transmission of mosquito-borne diseases (MBDs), such as West Nile Virus (WNV). Climate is known to influence vector and host demography as well as MBD transmission. This dissertation addresses the questions of how vector and host demography impact WNV dynamics, and how expected and likely climate change scenarios will affect demographic and epidemiological processes of WNV transmission. First, a data fusion method is developed that connects non-autonomous logistic model parameters to mosquito time series data. This method captures the inter-annual and intra-seasonal variation of mosquito populations within a geographical location. Next, a three-population WNV model between mosquito vectors, bird hosts, and human hosts with infection-age structure for the vector and bird host populations is introduced. A sensitivity analysis uncovers which parameters have the most influence on WNV outbreaks. Finally, the WNV model is extended to include the non-autonomous population model and temperature-dependent processes. Model parameterization using historical temperature and human WNV case data from the Greater Toronto Area (GTA) is conducted. Parameter fitting results are then used to analyze possible future WNV dynamics under two climate change scenarios. These results suggest that WNV risk for the GTA will substantially increase as temperature increases from climate change, even under the most conservative assumptions. This demonstrates the importance of ensuring that the warming of the planet is limited as much as possible.
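The non-autonomous logistic model referenced above is conventionally written with a time-varying growth rate and carrying capacity; the specific functional forms fitted to the mosquito time series are not given here, so this is a generic sketch.

```latex
\[
\frac{dN}{dt} = r(t)\, N(t) \left( 1 - \frac{N(t)}{K(t)} \right)
\]
% N(t): mosquito abundance; r(t): time-varying intrinsic growth rate;
% K(t): time-varying carrying capacity. Fitting r(t) and K(t) to trap-count
% time series captures inter-annual and intra-seasonal variation.
```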
Contributors: Mancuso, Marina (Author) / Milner, Fabio A (Thesis advisor) / Kuang, Yang (Committee member) / Kostelich, Eric (Committee member) / Eikenberry, Steffen (Committee member) / Manore, Carrie (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Over the past 20 years, the fields of synthetic biology and synthetic biosystems engineering have grown into mature disciplines, leading to significant breakthroughs in cancer research, diagnostics, cell-based medicines, biochemical production, and more. The application of mathematical modelling to biological and biochemical systems has not only given great insight into how these systems function but has also lent enough predictive power to aid in the forward engineering of synthetic constructs. However, progress has been impeded by several modes of context dependence unique to biological and biochemical systems that are not seen in traditional engineering disciplines, resulting in the need for lengthy design-build-test cycles before functional prototypes are generated. In this work, two of these universal modes of context dependence, resource competition and growth feedback, are studied and characterized, along with their effects on synthetic gene circuits and potential control mechanisms. Results demonstrate that a novel competitive control architecture can be used to mitigate the effects of winner-take-all resource competition (a form of context dependence in which distinct gene modules influence each other by competing over a shared pool of transcriptional and translational resources) in synthetic gene circuits and restore circuits to their intended function. Application of the fluctuation-dissipation theorem and rigorous stochastic simulations demonstrates that realistic resource constraints present in cells at the transcriptional and translational levels influence noise in gene circuits in a nonmonotonic fashion, either increasing or decreasing noise depending on the transcriptional/translational capacity. Growth feedback, on the other hand, links circuit function to cellular growth rate via an increased protein dilution rate during the exponential growth phase. This in turn can result in the collapse of bistable gene circuits, as the accelerated dilution rate forces switches in a high stable state to fall to a low stable state. Mathematical modelling and experimental data demonstrate that repressive links can insulate sensitive parts of gene circuits against growth fluctuations and can in turn increase the robustness of multistable circuits in growth contexts. The results presented in this work add to the understanding of biological and biochemical context dependence and of the corresponding control strategies and design principles that engineers can use to mitigate these effects.
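Growth feedback as described above is commonly modeled by adding a growth-rate-dependent dilution term to the protein balance equation; the dissertation's specific circuit equations are not reproduced here, so the following is a generic sketch.

```latex
\[
\frac{dx}{dt} = \beta\, f(x) - (\delta + \mu)\, x
\]
% x: circuit protein concentration; beta f(x): regulated production;
% delta: degradation rate; mu: cellular growth rate (dilution).
% During exponential growth, mu increases, raising the effective removal
% rate (delta + mu) and potentially collapsing a bistable switch from its
% high stable state to its low one.
```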
Contributors: Stone, Austin (Author) / Tian, Xiao-jun (Thesis advisor) / Wang, Xiao (Committee member) / Smith, Barbara (Committee member) / Kuang, Yang (Committee member) / Cheng, Albert (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

There is a need in the ecology literature for a discussion about the fundamental theories from which population dynamics arises. Ad hoc model development is not uncommon in the field, often as a result of the need to publish rapidly and frequently. Ecologists and statisticians such as Robert J. Steidl and Kenneth P. Burnham have called for a more deliberative approach they call "hard thinking". For example, the phenomenon of population growth can be captured by almost any sigmoid function. The question of which sigmoid function best explains a data set cannot be answered meaningfully by statistical regression, since regression can only speak to the validity of the shape. There is a need to revisit enzyme kinetics and ecological stoichiometry to properly justify basal model selection in ecology. This dissertation derives several common population growth models from a generalized equation. The mechanistic validity of these models in different contexts is explored through a kinetic lens. The behavioral kinetic framework is then put to the test by examining a set of biologically plausible growth models against the 1968-1995 elk population count data for northern Yellowstone. Using only these count data, the novel Monod-Holling growth model was able to accurately predict minimum viable population and life expectancy, despite both being exogenous to the model and the data set. Lastly, the elk/wolf data from Yellowstone were used to compare the validity of the Rosenzweig-MacArthur and Arditi-Ginzburg models. Both were derived from a more general model that included both predator- and prey-mediated steps. The Arditi-Ginzburg model fit the training data better, but only the Rosenzweig-MacArthur model matched the validation data. Accounting for animal sexual behavior allowed for the creation of the Monod-Holling model, which is just as simple as the logistic differential equation but provides greater insight for conservation purposes. Explicitly acknowledging the ethology of wolf predation helps explain the differences in predictive performance between the best-fit Rosenzweig-MacArthur and Arditi-Ginzburg models. The behavioral kinetic framework has proven to be a useful tool, and it has the ability to provide even further insights going forward.
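For reference, the two predator-prey models compared above differ in their functional response; the standard textbook forms are sketched below, with conventional parameter symbols that may differ from those used in the dissertation.

```latex
% Rosenzweig-MacArthur (prey-dependent Holling type II response):
\[
\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right) - \frac{aNP}{1+ahN}, \qquad
\frac{dP}{dt} = \frac{eaNP}{1+ahN} - mP
\]
% Arditi-Ginzburg (ratio-dependent response; N/P replaces N):
\[
\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right) - \frac{aNP}{P+ahN}, \qquad
\frac{dP}{dt} = \frac{eaNP}{P+ahN} - mP
\]
% N: prey (elk) density; P: predator (wolf) density; a: attack rate;
% h: handling time; e: conversion efficiency; m: predator mortality.
```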
Contributors: Pringle, Jack Andrew McCracken (Author) / Anderies, John M (Thesis advisor) / Kuang, Yang (Committee member) / Milner, Fabio (Committee member) / Arizona State University (Publisher)
Created: 2022