Matching Items (74)

149928-Thumbnail Image.png
Description
The technology expansion seen in the last decade for genomics research has permitted the generation of large-scale data sources pertaining to molecular biological assays, genomics, proteomics, transcriptomics and other modern omics catalogs. New methods to analyze, integrate and visualize these data types are essential to unveil relevant disease mechanisms. Towards these objectives, this research focuses on data integration within two scenarios: (1) transcriptomic, proteomic and functional information and (2) real-time sensor-based measurements motivated by single-cell technology. To assess relationships between protein abundance, transcriptomic and functional data, a nonlinear model was explored at static and temporal levels. The successful integration of these heterogeneous data sources through the stochastic gradient boosted tree approach and its improved predictability are some highlights of this work. Through the development of an innovative validation subroutine based on a permutation approach and the use of external information (i.e., operons), the lack of a priori knowledge for undetected proteins was overcome. The integrative methodologies allowed for the identification of undetected proteins in Desulfovibrio vulgaris and Shewanella oneidensis for further biological exploration in laboratories towards finding functional relationships. In an effort to better understand diseases such as cancer at different developmental stages, the Microscale Life Science Center headquartered at Arizona State University is pursuing single-cell studies by developing novel technologies. This research assembled and applied a statistical framework that tackled the following challenges: random noise, heterogeneous dynamic systems with multiple states, and understanding cell behavior within and across different Barrett's esophageal epithelial cell lines using oxygen consumption curves.
These curves were characterized with good empirical fit using nonlinear models with simple structures, which allowed extraction of a large number of features. Application of a supervised classification model to these features, together with the integration of experimental factors, allowed for the identification of subtle patterns among different cell types visualized through multidimensional scaling. Motivated by the challenges of analyzing real-time measurements, we further explored a unique two-dimensional representation of multiple time series using a wavelet approach, which showed promising results towards less complex approximations. The benefits of external information were also explored to improve the image representation.
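The abstract does not detail the wavelet construction used for the two-dimensional representation of multiple time series, but the core idea can be sketched with a single-level Haar transform, stacking one row of coefficients per series. The function names and the row-per-series layout below are illustrative assumptions, not the author's implementation:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; the signal
    length is assumed to be even.
    """
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_image(series_list):
    """Stack one row of approximation coefficients per time series,
    giving a crude 2-D 'image' of multiple series."""
    return [haar_dwt(ts)[0] for ts in series_list]
```

Applying further decomposition levels to each row would compress the curves into fewer, coarser coefficients, which is the sense in which a wavelet representation yields "less complex approximations."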
Contributors: Torres Garcia, Wandaliz (Author) / Meldrum, Deirdre R. (Thesis advisor) / Runger, George C. (Thesis advisor) / Gel, Esma S. (Committee member) / Li, Jing (Committee member) / Zhang, Weiwen (Committee member) / Arizona State University (Publisher)
Created: 2011
151810-Thumbnail Image.png
Description
Hepatocellular carcinoma (HCC) is a malignant tumor and the seventh most common cancer in humans. Every year there is a significant rise in the number of patients suffering from HCC. Most clinical research has focused on early detection of HCC, since early diagnosis greatly improves a patient's chances of survival. Emerging advancements in functional and structural imaging techniques have provided the ability to detect microscopic changes in the tumor microenvironment and microstructure. The prime focus of this thesis is to validate the applicability of an advanced imaging modality, Magnetic Resonance Elastography (MRE), for HCC diagnosis. The research was carried out on data from three HCC patients, and three sets of experiments were conducted. The main focus was on the quantitative aspect of MRE in conjunction with texture analysis, an advanced image-processing pipeline, and a multivariate machine learning method for accurate HCC diagnosis. We analyzed techniques to handle unbalanced data and evaluated the efficacy of sampling techniques. Along with this, we studied different machine learning algorithms and developed models using them. Performance metrics such as prediction accuracy, sensitivity and specificity were used to evaluate the final model. We were able to identify the significant features in the dataset, and the selected classifier was robust in predicting the response class variable with high accuracy.
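The abstract mentions sampling techniques for unbalanced data and evaluation by sensitivity and specificity. A minimal sketch of random oversampling and these two metrics (the helper names are hypothetical, not the thesis code) might look like:

```python
import random

def oversample(X, y, minority_label):
    """Randomly duplicate minority-class rows until the classes balance."""
    minority = [(x, l) for x, l in zip(X, y) if l == minority_label]
    majority = [(x, l) for x, l in zip(X, y) if l != minority_label]
    rng = random.Random(0)  # fixed seed for reproducibility
    while len(minority) < len(majority):
        minority.append(rng.choice(minority))
    data = minority + majority
    return [x for x, _ in data], [l for _, l in data]

def sensitivity_specificity(truth, pred, positive):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(truth, pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(truth, pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(truth, pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(truth, pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)
```

Oversampling keeps the classifier from ignoring the rare (here, diseased) class, while sensitivity and specificity report performance on each class separately rather than masking the imbalance behind overall accuracy.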
Contributors: Bansal, Gaurav (Author) / Wu, Teresa (Thesis advisor) / Mitchell, Ross (Thesis advisor) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2013
152382-Thumbnail Image.png
Description
A P-value based method is proposed for statistical monitoring of various types of profiles in phase II. The performance of the proposed method is evaluated by the average run length criterion under various shifts in the intercept, slope and error standard deviation of the model. In our proposed approach, P-values are computed at each level within a sample. If at least one of the P-values is less than a pre-specified significance level, the chart signals out-of-control. The primary advantage of our approach is that only one control chart is required to monitor several parameters simultaneously: the intercept, slope(s), and the error standard deviation. A comprehensive comparison of the proposed method and the existing KMW-Shewhart method for monitoring linear profiles is conducted. In addition, the effect that the number of observations within a sample has on the performance of the proposed method is investigated. The proposed method was also compared to the T^2 method discussed in Kang and Albin (2000) for multivariate, polynomial, and nonlinear profiles. A simulation study shows that overall the proposed P-value method performs satisfactorily for different profile types.
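The signaling rule described above can be illustrated with a simplified sketch: compute a P-value at each level within the sample and signal if any falls below the significance level. This assumes known in-control parameters and normal errors, and the function names are invented; the actual phase II method is more elaborate:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal statistic z."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def chart_signals(sample_y, x_levels, beta0, beta1, sigma, alpha=0.005):
    """Signal out-of-control if any within-sample p-value falls
    below the significance level alpha."""
    p_values = []
    for x, y in zip(x_levels, sample_y):
        z = (y - (beta0 + beta1 * x)) / sigma  # standardized residual
        p_values.append(two_sided_p(z))
    return min(p_values) < alpha, p_values
```

Because a single minimum-P-value rule covers departures at any level, one chart suffices to catch shifts in the intercept, slope, or error standard deviation; alpha is tuned to achieve a target in-control average run length.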
Contributors: Adibi, Azadeh (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Li, Jing (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2013
151176-Thumbnail Image.png
Description
Rapid advances in sensor and information technology have resulted in a spatially and temporally data-rich environment, creating a pressing need to develop novel statistical methods and associated computational tools to extract intelligent knowledge and informative patterns from these massive datasets. The statistical challenges in addressing these massive datasets lie in their complex structures, such as high dimensionality, hierarchy, multi-modality, heterogeneity and data uncertainty. Beyond the statistical challenges, the associated computational approaches are also essential for achieving efficiency, effectiveness and numerical stability in practice. On the other hand, some recent developments in statistics and machine learning, such as sparse learning and transfer learning, as well as some traditional methodologies that still hold potential, such as multi-level models, all shed light on addressing these complex datasets in a statistically powerful and computationally efficient way. In this dissertation, we identify four kinds of general complex datasets, including "high-dimensional datasets", "hierarchically-structured datasets", "multimodality datasets" and "data uncertainties", which are ubiquitous in many domains, such as biology, medicine, neuroscience, health care delivery and manufacturing. We describe the development of novel statistical models to analyze complex datasets falling under these four categories, and we show how these models can be applied to real-world applications, such as Alzheimer's disease research, nursing care processes and manufacturing.
Contributors: Huang, Shuai (Author) / Li, Jing (Thesis advisor) / Askin, Ronald (Committee member) / Ye, Jieping (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2012
135873-Thumbnail Image.png
Description
Cancer remains one of the leading killers throughout the world. Death and disability due to lung cancer in particular account for one of the largest global economic burdens a disease presents. The burden on developing countries is especially heavy due to the financial stress that comes from late tumor detection and expensive treatment options. Early detection using inexpensive techniques may relieve much of this burden throughout the world, not just in more developed countries. I examined the immune responses of lung cancer patients using immunosignatures: patterns of reactivity between host serum antibodies and random peptides. Immunosignatures reveal disease-specific patterns that are very reproducible. Immunosignaturing is a chip-based medical diagnostic method that can display the antibody diversity of an individual serum sample at low cost, and it has many applications in current medical research and diagnosis. In a previous clinical study, patients diagnosed with lung cancer were tested for their immunosignatures against healthy non-cancer volunteers. The pattern of reactivity against the random peptides (the 'immunosignature') revealed common signals in cancer patients that were absent from healthy controls. My study involved the search for common amino acid motifs in the cancer-specific peptides. My search through the hundreds of 'hits' revealed certain motifs repeated more often than expected by random chance. The amino acids most conserved in each set included tryptophan, aspartic acid, glutamic acid, proline, alanine, serine and lysine. The most conserved amino acid overall across the sets was aspartic acid (D). The motifs were short (no more than 5-6 amino acids in a row), but the total number of motifs I identified was large enough to ensure significance.
I utilized Excel to organize the large peptide sequence libraries, then CLUSTALW to cluster similar-sequence peptides, and finally GLAM2 to find common themes in groups of peptides. In so doing, I found sequences in translated cancer expression libraries (RNA) that matched my motifs, suggesting that immunosignatures can find cancer-specific antigens that may be both diagnostic and potentially therapeutic.
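The motif search itself relied on CLUSTALW and GLAM2, but the underlying idea of flagging short motifs that appear more often in the cancer-specific peptides than in a background set can be sketched with simple k-mer counting. The fold-enrichment rule and function names below are illustrative, not the algorithms those tools use:

```python
from collections import Counter

def kmer_counts(peptides, k):
    """Count every length-k substring across a list of peptide sequences."""
    counts = Counter()
    for pep in peptides:
        for i in range(len(pep) - k + 1):
            counts[pep[i:i + k]] += 1
    return counts

def enriched_motifs(hits, background, k, fold=2.0):
    """Motifs whose per-peptide rate in the hit set exceeds `fold`
    times their rate in the background set (with a pseudo-count so
    unseen background motifs are not divided by zero)."""
    hc, bc = kmer_counts(hits, k), kmer_counts(background, k)
    out = []
    for motif, n in hc.items():
        h_rate = n / len(hits)
        b_rate = bc.get(motif, 0) / len(background)
        if h_rate > fold * max(b_rate, 1.0 / len(background)):
            out.append(motif)
    return sorted(out)
```

A real analysis would add a permutation or binomial test for significance; this sketch only conveys the counting step behind "repeated more often than expected by random chance."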
Contributors: Shiehzadegan, Shima (Author) / Johnston, Stephen (Thesis director) / Stafford, Phillip (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
135788-Thumbnail Image.png
Description
The Department of Defense (DoD) acquisition system is a complex system riddled with cost and schedule overruns. These overruns are serious issues because the acquisition system is responsible for aiding U.S. warfighters; if the acquisition process is failing, that is a potential threat to our nation's security. Furthermore, the DoD acquisition system is responsible for the proper allocation of billions of taxpayers' dollars and employs many civilian and military personnel. Much research has been done on the acquisition system in the past with little impact or success. One reason for this lack of success is the lack of accurate models with which to test theories. This research is a continuation of the effort on the Enterprise Requirements and Acquisition Model (ERAM), a discrete-event simulation model of the DoD acquisition system. We propose to extend ERAM using agent-based simulation principles due to the many interactions among the subsystems of the acquisition system. We initially identify ten sub-models needed to simulate the acquisition system. This research focuses on three sub-models related to the budget of acquisition programs. In this thesis, we present the data collection, data analysis, initial implementation, and initial validation needed to build these sub-models and lay the groundwork for a full agent-based simulation of the DoD acquisition system.
Contributors: Bucknell, Sophia Robin (Author) / Wu, Teresa (Thesis director) / Li, Jing (Committee member) / Colombi, John (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
137139-Thumbnail Image.png
Description
The influenza virus, also known as "the flu", is an infectious disease that has constantly affected the health of humanity. There is currently no known cure for influenza. The Center for Innovations in Medicine (CIM) at the Biodesign Institute, located on campus at Arizona State University, has been developing synbodies as a possible influenza therapeutic. Specifically, at CIM we have attempted to design these initial synbodies to target the entire influenza virus, and preliminary data lead us to believe that these synbodies target nucleoprotein (NP). Given that the synbody targets NP, it must also penetrate cells to reach its target. The focus of my honors thesis is to explore how synthetic antibodies can potentially inhibit replication of the influenza (H1N1) A/Puerto Rico/8/34 strain so that a therapeutic can be developed. A high-affinity synbody for influenza can be used to test for inhibition of influenza, as shown by preliminary data. The internalization of the 5-5-3819 synthetic antibody in live Madin-Darby Canine Kidney (MDCK) cells was visualized under a confocal microscope. Then, by Western blot analysis, we evaluated the diminution of NP levels in treated versus untreated cells. Expression of NP over 8 hours was analyzed via Western blot, which showed that NP accumulation was retarded in synbody-treated cells. The data obtained from my honors thesis and the preliminary data provided suggest that the synthetic antibody penetrates live cells and targets NP. The results of my thesis present valuable information that can be used by other researchers to perform future experiments, eventually leading to the creation of a more effective therapeutic for influenza.
Contributors: Hayden, Joel James (Author) / Diehnelt, Chris (Thesis director) / Johnston, Stephen (Committee member) / Legutki, Bart (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2014-05
149315-Thumbnail Image.png
Description
In today's global market, companies are facing unprecedented levels of uncertainty in supply, demand and the economic environment. A critical issue for companies to survive increasing competition is to monitor the changing business environment and manage disturbances and changes in real time. In this dissertation, an integrated framework is proposed using simulation and online calibration methods to enable the adaptive management of large-scale complex supply chain systems. The design, implementation and verification of the integrated approach are studied. The research contributions are two-fold. First, this work enriches symbiotic simulation methodology by proposing a framework of simulation and advanced data fusion methods to improve simulation accuracy. Data fusion techniques optimally calibrate the simulation state and parameters by considering errors both in the simulation models and in measurements of the real-world system. Three data fusion methods (Kalman filtering, extended Kalman filtering and ensemble Kalman filtering) are examined and discussed under varied conditions of system chaos level, data quality and data availability. Second, the proposed framework is developed, validated and demonstrated in 'proof-of-concept' case studies on representative supply chain problems. In the case study of a simplified supply chain system, Kalman filtering is applied to fuse simulation data and emulation data to effectively improve the accuracy of abnormality detection. In the case study of the 'beer game' supply chain model, the system's chaos level is identified as a key factor influencing simulation performance and the choice of data fusion method. Ensemble Kalman filtering is found to be more robust than extended Kalman filtering in a highly chaotic system. With appropriate tuning, the improvement in simulation accuracy is up to 80% in a chaotic system and 60% in a stable system.
In the last study, the integrated framework is applied to adaptive inventory control of a multi-echelon supply chain with non-stationary demand. It is worth pointing out that the framework proposed in this dissertation is not only useful in supply chain management, but also suitable to model other complex dynamic systems, such as healthcare delivery systems and energy consumption networks.
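The core of the calibration step above is the Kalman update: the simulated state and a noisy real-world measurement are combined in proportion to their uncertainties. A scalar textbook sketch (not the dissertation's multivariate implementation) looks like:

```python
def kalman_update(x_sim, p_sim, z_meas, r_meas):
    """One scalar Kalman update: fuse the simulation's state estimate
    (x_sim, with variance p_sim) with a real-world measurement
    (z_meas, with noise variance r_meas)."""
    k = p_sim / (p_sim + r_meas)          # Kalman gain
    x_new = x_sim + k * (z_meas - x_sim)  # calibrated state
    p_new = (1.0 - k) * p_sim             # reduced uncertainty
    return x_new, p_new
```

When the simulation is trusted (small p_sim) the gain is near zero and the measurement barely moves the state; when the measurement is trusted (small r_meas) the gain is near one. Extended and ensemble variants generalize this update to nonlinear, multivariate dynamics.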
Contributors: Wang, Shanshan (Author) / Wu, Teresa (Thesis advisor) / Fowler, John (Thesis advisor) / Pfund, Michele (Committee member) / Li, Jing (Committee member) / Pavlicek, William (Committee member) / Arizona State University (Publisher)
Created: 2010
131810-Thumbnail Image.png
Description
Technological applications are continually being developed in the healthcare industry as technology becomes increasingly available. In recent years, companies have started creating mobile applications to address various conditions and diseases. This falls under mHealth, the "use of mobile phones and other wireless technology in medical care" (Rouse, 2018). The goal of this study was to determine whether data gathered through mHealth methods can be used to build predictive models. The first part of this thesis contains a literature review presenting relevant definitions and several studies involving the use of technology in healthcare applications. The second part focuses on data from one study, where regression analysis is used to develop predictive models.

Rouse, M. (2018). mHealth (mobile health). Retrieved from https://searchhealthit.techtarget.com/definition/mHealth
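The regression analysis referenced above is not specified in detail in the abstract; a minimal ordinary least-squares fit with one predictor (illustrative only, with hypothetical function names) could be:

```python
def linear_regression(x, y):
    """Ordinary least-squares fit of y ~ b0 + b1*x for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx          # slope
    b0 = my - b1 * mx       # intercept
    return b0, b1

def predict(b0, b1, x_new):
    """Predict the response for a new predictor value."""
    return b0 + b1 * x_new
```

In an mHealth setting, x might be a sensor-derived measurement and y the health outcome being predicted; a real analysis would use multiple predictors and model diagnostics.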
Contributors: Akers, Lindsay (Co-author) / Kiraly, Alyssa (Co-author) / Li, Jing (Thesis director) / Yoon, Hyunsoo (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
132592-Thumbnail Image.png
Description
In this study, we demonstrate the effectiveness of a cancer-type-specific FrAmeShifT (FAST) vaccine. A murine breast cancer (mBC) FAST vaccine and a murine pancreatic cancer (mPC) FAST vaccine were tested in the 4T1 breast cancer syngeneic mouse model. The mBC FAST vaccine, both with and without checkpoint inhibitors (CPI), significantly slowed tumor growth, reduced pulmonary metastasis and increased the cell-mediated immune response. In terms of tumor volumes, the mPC FAST vaccine was comparable to the untreated controls; however, a significant difference in tumor volume did emerge when the mPC vaccine was used with CPI. The collective data indicated that immune checkpoint blockade therapy was beneficial only with suboptimal neoantigens. More importantly, the FAST vaccine, though requiring notably fewer resources, performed similarly to the personalized version of the frameshift breast cancer vaccine in the same mouse model. Furthermore, because the frameshift peptide (FSP) array provided a strong rationale for a focused vaccine, the FAST vaccine can theoretically be expanded and translated to any human cancer type. Overall, the FAST vaccine is a promising treatment that would provide the greatest benefit to patients while eliminating most of the challenges associated with current personal cancer vaccines.
Contributors: Murphy, Sierra Nicole (Author) / Johnston, Stephen (Thesis director) / Peterson, Milene (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05