This collection includes both ASU Theses and Dissertations submitted by graduate students and theses from Barrett, The Honors College submitted by undergraduate students.

Description

Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease-significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns remain unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
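The abstract does not reproduce the prioritization algorithm itself, so the following Python sketch is only a generic illustration of association-based ranking over an integrated gene network, using random walk with restart from the known disease genes; the function, parameters, and toy network are hypothetical and not drawn from the thesis.

```python
import numpy as np

def prioritize_genes(adjacency, seed_indices, restart_prob=0.3, tol=1e-8, max_iter=1000):
    """Rank candidate genes by proximity to known disease genes via
    random walk with restart on a weighted gene association network.

    adjacency    : (n, n) symmetric matrix; edge weights can encode the
                   relevance/reliability of the evidence linking two genes.
    seed_indices : indices of genes already known to be disease-related.
    """
    n = adjacency.shape[0]
    # Column-normalize so each column is a transition probability distribution.
    col_sums = adjacency.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # avoid division by zero for isolated genes
    W = adjacency / col_sums

    # Restart vector: probability mass concentrated on the known disease genes.
    p0 = np.zeros(n)
    p0[seed_indices] = 1.0 / len(seed_indices)

    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart_prob) * W @ p + restart_prob * p0
        if np.linalg.norm(p_next - p, 1) < tol:
            break
        p = p_next
    return p  # higher score = stronger association with the seed set

# Toy usage: a 5-gene network with genes 0 and 1 known to be disease-related.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = prioritize_genes(A, seed_indices=[0, 1])
print(np.argsort(-scores))   # genes ranked by association with the seed set
```

In this scheme, the edge weights are where the differing relevance and reliability of evidence sources can be encoded, which is the kind of integration the thesis models in more detail.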
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The PPP Loan Program was created by the CARES Act and carried out by the Small Business Administration (SBA) to provide support to small businesses in maintaining their payroll during the Coronavirus pandemic. This program was approved for $350 billion, but this amount was expanded by an additional $320 billion to meet the demand from struggling businesses, since the initial funding was exhausted in under two weeks.

Significant controversy surrounds the program. In December 2020, the Department of Justice reported that 90 individuals had been charged with fraudulent use of funds, totaling $250 million. The loans, which were intended for small businesses, were also approved for 450 public companies. Furthermore, the methods of approval are shrouded in mystery. In an effort to be transparent, the SBA has released information about all loan recipients, with detailed information for the 661,218 recipients who received a PPP loan in excess of $150,000. These recipients are the central point of this research.

This research sought to answer two primary questions: how did the SBA determine which loans, and therefore which industries, were approved, and did the industries most affected by the pandemic receive the most in PPP loans, as intended by Congress? It was determined that, generally, PPP loans were approved on the basis of employment percentages relative to the individual state. Furthermore, the loans were generally approved fairly with respect to the size of the industry: when adjusted for GDP and employment factors, they yielded a clear ranking that prioritized vulnerable industries first.

However, significant questions remain. The effectiveness of the PPP has been hindered by unclear incentives and negative outcomes, characteristic of a government program that was essentially rushed into service. Furthermore, the analysis is limited because the available data on the SBA's approved loans are not representative of small businesses.
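As a rough illustration of the GDP- and employment-adjusted comparison described above, the sketch below normalizes hypothetical PPP dollars per industry by the average of that industry's GDP and employment shares to produce an adjusted ranking; all figures and industry names are illustrative assumptions, not the thesis data.

```python
# Hypothetical sketch: rank industries by PPP dollars received per unit of
# economic footprint (GDP share and employment share), as a rough analogue
# of the GDP- and employment-adjusted comparison in the thesis.
ppp_by_industry = {            # illustrative numbers, not the thesis data
    "Accommodation & Food":  42.0e9,
    "Construction":          45.0e9,
    "Health Care":           67.0e9,
    "Manufacturing":         54.0e9,
}
gdp_share = {"Accommodation & Food": 0.031, "Construction": 0.043,
             "Health Care": 0.076, "Manufacturing": 0.109}
emp_share = {"Accommodation & Food": 0.091, "Construction": 0.050,
             "Health Care": 0.108, "Manufacturing": 0.085}

adjusted = {
    ind: ppp_by_industry[ind] / ((gdp_share[ind] + emp_share[ind]) / 2)
    for ind in ppp_by_industry
}
# Industries that received more PPP dollars relative to their economic size rank first.
for ind, score in sorted(adjusted.items(), key=lambda kv: -kv[1]):
    print(f"{ind:25s} PPP dollars per unit of economic share: {score:,.0f}")
```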

Contributors: Maglanoc, Julian (Author) / Kenchington, David (Thesis director) / Cassidy, Nancy (Committee member) / Department of Finance (Contributor) / Dean, W.P. Carey School of Business (Contributor) / School of Accountancy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The Covid-19 pandemic has made a significant impact on both the stock market and the global economy. The resulting volatility in stock prices has provided an opportunity to examine the Efficient Market Hypothesis. This study aims to gain insights into the efficiency of markets based on stock price performance in the Covid era. Specifically, it investigates the market's ability to anticipate significant events during the Covid-19 timeline beginning November 1, 2019 and ending March 31, 2021. To examine the efficiency of markets, our team created a Stay-at-Home Portfolio, experiencing economic tailwinds from the Covid lockdowns, and a Pandemic Loser Portfolio, experiencing economic headwinds from the Covid lockdowns. Cumulative returns of each portfolio are benchmarked to the cumulative returns of the S&P 500. The results showed that the Efficient Market Hypothesis is likely to be valid, although a definitive conclusion cannot be made based on the scope of the analysis. Recommendations are made for further research on key events that may support a more direct conclusion.
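The portfolio benchmarking step is simple to express in code. The sketch below compounds daily returns into cumulative returns for two hypothetical portfolios and an S&P 500 proxy, then computes cumulative out- or under-performance versus the benchmark; the synthetic return series are placeholders for the real price data used in the study.

```python
import numpy as np
import pandas as pd

# Hypothetical daily simple returns for each portfolio and the S&P 500 benchmark
# over the study window (Nov 1, 2019 to Mar 31, 2021); real data would come from
# adjusted closing prices converted to daily returns.
dates = pd.bdate_range("2019-11-01", "2021-03-31")
rng = np.random.default_rng(0)
returns = pd.DataFrame({
    "stay_at_home":   rng.normal(0.0012, 0.020, len(dates)),
    "pandemic_loser": rng.normal(0.0002, 0.025, len(dates)),
    "sp500":          rng.normal(0.0007, 0.015, len(dates)),
}, index=dates)

# Cumulative return: compound daily returns, then benchmark against the index.
cumulative = (1 + returns).cumprod() - 1
excess_vs_benchmark = cumulative[["stay_at_home", "pandemic_loser"]].sub(
    cumulative["sp500"], axis=0)

print(cumulative.iloc[-1])           # total return of each series over the window
print(excess_vs_benchmark.iloc[-1])  # cumulative out/under-performance vs S&P 500
```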

Contributors: Brock, Matt Ian (Co-author) / Beneduce, Trevor (Co-author) / Craig, Nicko (Co-author) / Hertzel, Michael (Thesis director) / Mindlin, Jeff (Committee member) / Department of Finance (Contributor) / Economics Program in CLAS (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Music streaming services have affected the music industry from both a financial and legal standpoint. Their current business model affects stakeholders such as artists, users, and investors. These services have been scrutinized recently for their imperfect royalty distribution model. Covid-19 has made these discussions even more relevant as touring income has come to a halt for musicians and the live entertainment industry.
Under the current per-stream model, it is becoming exceedingly hard for artists to make a living off of streams. This forces artists to tour heavily as well as cut corners to create what is essentially “disposable art”. Rapidly releasing multiple projects a year has become the norm for many modern artists. This paper will examine the licensing framework and royalty payout issues, and propose a solution.

Contributors: Koudssi, Zakaria Corley (Author) / Sadusky, Brian (Thesis director) / Koretz, Lora (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The COVID-19 pandemic has radically shifted the workplace and will continue to do so. An increasing percentage of the workforce desires flexible working options and, as such, firms are likely to require less office space going forward. Additionally, the economic downturn caused by the pandemic provides an opportunity for companies to secure favorable rent rates on new lease agreements. This project aims to evaluate and measure Company X’s potential cost savings from terminating current leases and downsizing office space in five selected cities. Along with city-specific real estate market research and forecasts, we employ a four-stage model of Company X’s real estate negotiation process to analyze whether existing lease agreements in these cities should be renewed or terminated.
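The four-stage negotiation model is not detailed in the abstract, so the following is only a minimal, hypothetical present-value comparison of renewing an existing lease versus terminating it and signing a smaller lease at a lower market rate; every figure and the discount rate are invented for illustration, not Company X data.

```python
# Hypothetical sketch: compare the present value of staying on an existing lease
# against terminating, paying a penalty, and signing a smaller lease at a lower rate.
def pv_of_payments(annual_payment, years, discount_rate):
    """Present value of a fixed annual payment stream, discounted annually."""
    return sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))

current_lease   = pv_of_payments(annual_payment=2_400_000, years=5, discount_rate=0.06)
new_lease       = pv_of_payments(annual_payment=1_500_000, years=5, discount_rate=0.06)
termination_fee = 800_000

savings = current_lease - (new_lease + termination_fee)
print(f"Estimated PV savings from terminating and downsizing: ${savings:,.0f}")
```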

Contributors: Hegardt, Brandon Michael (Co-author) / Saker, Logan (Co-author) / Patterson, Jack (Co-author) / Ries, Sarah (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

In many classification problems data samples cannot be collected easily, for example in drug trials, biological experiments, and studies on cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer vs. normal patients, the consequences of misclassification are probably more important than for any other data type, because the data point could be a cancer patient, or the classification decision could help determine which gene might be over-expressed and perhaps a cause of cancer. These misclassifications are typically higher in the presence of outlier data points. The aim of this thesis is to develop a maximum margin classifier that is suited to address the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM in both the synthetic and experimental data from biomedical studies and is competitive with a classifier derived along similar lines when real-life data examples are considered.
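The RSVM loss function itself is not given in the abstract, so the sketch below uses a capped (truncated) hinge loss as an illustrative stand-in for a robust loss: once an outlier is far on the wrong side of the margin, its loss saturates and its gradient vanishes, so it no longer drags the decision boundary. The training loop, toy data, and parameter choices are all hypothetical.

```python
import numpy as np

def truncated_hinge(margin, cap=2.0):
    """Capped hinge loss: behaves like the hinge near the boundary but saturates
    at `cap`, so far-away outliers stop dominating the objective.
    (Illustrative stand-in; the thesis's RSVM loss is not reproduced here.)"""
    return np.minimum(np.maximum(0.0, 1.0 - margin), cap)

def truncated_hinge_grad(margin, cap=2.0):
    # Gradient wrt the margin: -1 inside the active hinge region, 0 once the loss
    # is zero or has saturated at the cap (this is what blunts outliers).
    active = (margin < 1.0) & (1.0 - margin < cap)
    return np.where(active, -1.0, 0.0)

def train_linear_classifier(X, y, loss_grad, lam=0.01, lr=0.01, epochs=200):
    """Plain (sub)gradient descent on a regularized linear margin classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        g = loss_grad(margins)                       # d(loss)/d(margin) per sample
        w -= lr * (X.T @ (g * y) / len(y) + lam * w)
        b -= lr * (g * y).mean()
    return w, b

# Toy usage with a few labels flipped on purpose to act as outliers.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) + np.array([[2.0, 2.0]]) * rng.integers(0, 2, (200, 1))
y = np.where(X[:, 0] + X[:, 1] > 2.0, 1.0, -1.0)
y[:5] *= -1                                          # inject label-noise outliers
w, b = train_linear_classifier(X, y, truncated_hinge_grad)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```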
Contributors: Gupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

HIV/AIDS is the sixth leading cause of death worldwide and the leading cause of death among women of reproductive age living in low-income countries. Clinicians in industrialized nations monitor the efficacy of antiretroviral drugs and HIV disease progression with the HIV-1 viral load assay, which measures the copy number of HIV-1 RNA in blood. However, viral load assays are not widely available in sub-Saharan Africa and, where available, cost on average between $50 and $139 USD per test. To address this problem, a mixed-methods approach was undertaken to design a novel and inexpensive viral load diagnostic for HIV-1 and to evaluate barriers to its adoption in a developing country. The assay was produced based on loop-mediated isothermal amplification (LAMP). Blood samples from twenty-one individuals were spiked with varying concentrations of HIV-1 RNA to evaluate the sensitivity and specificity of LAMP. Under isothermal conditions, LAMP was performed with an initial reverse-transcription step (RT-LAMP) and primers designed for HIV-1 subtype C. Each reaction generated up to a few billion copies of target DNA within an hour. Presence of target was detected through naked-eye observation of a fluorescent indicator and verified by DNA gel electrophoresis and real-time fluorescence. The assay successfully detected the presence of HIV in samples with a broad range of HIV RNA concentrations, from over 120,000 copies/reaction down to 120 copies/reaction. In order to better understand barriers to adoption of LAMP in developing countries, a feasibility study was undertaken in Tanzania, a low-income country facing significant problems in healthcare. Medical professionals in Northern Tanzania were surveyed for feedback regarding perspectives on current HIV assays, patient treatment strategies, availability of treatment, treatment priorities, HIV transmission, and barriers to adoption of the HIV-1 LAMP assay. The majority of medical providers surveyed indicated that the proposed LAMP assay is too expensive for their patient populations. Significant gender differences were observed in responses to some survey questions: female medical providers were more likely than males to cite stigma as a root problem of the HIV epidemic, while male providers were more likely to cite lack of education.
Contributors: Salamone, Damien Thomas (Author) / Jacobs, Bertram L (Thesis advisor) / Marsiglia, Flavio (Committee member) / Stout, Valerie (Committee member) / Johnson, Crista (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The technology expansion seen in the last decade for genomics research has permitted the generation of large-scale data sources pertaining to molecular biological assays, genomics, proteomics, transcriptomics and other modern omics catalogs. New methods to analyze, integrate and visualize these data types are essential to unveil relevant disease mechanisms. Towards these objectives, this research focuses on data integration within two scenarios: (1) transcriptomic, proteomic and functional information and (2) real-time sensor-based measurements motivated by single-cell technology. To assess relationships between protein abundance, transcriptomic and functional data, a nonlinear model was explored at static and temporal levels. The successful integration of these heterogeneous data sources through the stochastic gradient boosted tree approach and its improved predictability are some highlights of this work. Through the development of an innovative validation subroutine based on a permutation approach and the use of external information (i.e., operons), the lack of a priori knowledge for undetected proteins was overcome. The integrative methodologies allowed for the identification of undetected proteins for Desulfovibrio vulgaris and Shewanella oneidensis for further biological exploration in laboratories towards finding functional relationships. In an effort to better understand diseases such as cancer at different developmental stages, the Microscale Life Science Center headquartered at Arizona State University is pursuing single-cell studies by developing novel technologies. This research assembled and applied a statistical framework that tackled the following challenges: random noise, heterogeneous dynamic systems with multiple states, and understanding cell behavior within and across different Barrett's esophageal epithelial cell lines using oxygen consumption curves. These curves were characterized with good empirical fit using nonlinear models with simple structures, which allowed extraction of a large number of features. Application of a supervised classification model to these features and the integration of experimental factors allowed for the identification of subtle patterns among different cell types, visualized through multidimensional scaling. Motivated by the challenges of analyzing real-time measurements, we further explored a unique two-dimensional representation of multiple time series using a wavelet approach, which showcased promising results towards less complex approximations. Also, the benefits of external information were explored to improve the image representation.
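As a small, hypothetical illustration of the boosted-tree integration step, the sketch below fits a stochastic gradient boosted tree regressor to predict protein abundance from placeholder transcriptomic and functional features and reports cross-validated R^2; the feature names and data are invented, and scikit-learn's estimator stands in for the implementation used in the thesis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical data: predict protein abundance from transcript-level and
# functional features. These are placeholders, not the thesis datasets.
rng = np.random.default_rng(0)
n_genes = 500
X = np.column_stack([
    rng.normal(size=n_genes),              # mRNA expression (log scale)
    rng.normal(size=n_genes),              # expression at a second time point
    rng.integers(0, 2, n_genes),           # functional-category indicator (e.g., operon membership)
])
protein_abundance = 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.5, n_genes)

model = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, max_depth=3,
    subsample=0.7,                          # row subsampling makes the boosting "stochastic"
)
scores = cross_val_score(model, X, protein_abundance, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```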
Contributors: Torres Garcia, Wandaliz (Author) / Meldrum, Deirdre R. (Thesis advisor) / Runger, George C. (Thesis advisor) / Gel, Esma S. (Committee member) / Li, Jing (Committee member) / Zhang, Weiwen (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The objective of this project was to evaluate human factors-based cognitive aids for endoscope reprocessing. The project stems from recent failures in reprocessing (cleaning) endoscopes, which have contributed to the spread of harmful bacterial and viral agents between patients. Three themes were found to represent a majority of problems: 1) lack of visibility (parts and tools were difficult to identify), 2) high memory demands, and 3) insufficient user feedback. In an effort to improve completion rates and eliminate error, cognitive aids were designed using human factors principles to replace the existing manufacturer visual aids. A usability test was then conducted, comparing the endoscope reprocessing performance of novices using the standard manufacturer-provided visual aids with that of novices using the new cognitive aids. Participants successfully completed 87.1% of the reprocessing procedure in the experimental condition with the use of the cognitive aids, compared to 46.3% in the control condition using only existing support materials. Twenty-five of sixty subtasks showed significant improvement in completion rates. When given a cognitive aid designed with human factors principles, participants were able to complete the reprocessing task more successfully. This resulted in an endoscope that was more likely to be safe for patient use.
Contributors: Jolly, Jonathan D (Author) / Branaghan, Russell J (Thesis advisor) / Cooke, Nancy J. (Committee member) / Sanchez, Christopher (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Immunosignaturing is a new immunodiagnostic technology that uses random-sequence peptide microarrays to profile the humoral immune response. Though the peptides have little sequence homology to any known protein, binding of serum antibodies may be detected, and the pattern correlated to disease states. The aim of my dissertation is to analyze the factors affecting the binding patterns using monoclonal antibodies and determine how much information may be extracted from the sequences. Specifically, I examined the effects of antibody concentration, competition, peptide density, and antibody valence. Peptide binding could be detected at the low concentrations relevant to immunosignaturing, and a monoclonal's signature could be detected even in the presence of a 100-fold excess of naive IgG. I also found that peptide density was important, but this effect was not due to bivalent binding. Next, I examined in more detail how a polyreactive antibody binds to the random-sequence peptides compared to protein-sequence-derived peptides, and found that it bound to many peptides from both sets, but with low apparent affinity. An in-depth look at the peptide physicochemical properties and sequence complexity revealed some correlations with properties, but they were generally small and varied greatly between antibodies. However, on a limited-diversity but larger peptide library, I found that sequence complexity was important for antibody binding. The redundancy in that library enabled the identification of specific sub-sequences recognized by an antibody. The current immunosignaturing platform has little repetition of sub-sequences, so I evaluated several methods to infer antibody epitopes. I found two methods that had modest prediction accuracy, and I developed a software application called GuiTope to facilitate the epitope prediction analysis. None of the methods had sufficient accuracy to identify an unknown antigen from a database. In conclusion, the characteristics of the immunosignaturing platform observed through monoclonal antibody experiments demonstrate its promise as a new diagnostic technology. However, a major limitation is the difficulty in connecting the signature back to the original antigen, though larger peptide libraries could facilitate these predictions.
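GuiTope's actual method is not described in the abstract, so the following is a generic, hypothetical sketch of epitope inference from an immunosignature: it scores k-mers by how enriched they are among the highest-binding peptides relative to the whole random-sequence library. The peptides, intensities, and parameters are made up for illustration.

```python
from collections import defaultdict
import numpy as np

def kmer_enrichment(peptides, intensities, k=4, top_fraction=0.05):
    """Score each k-mer by its enrichment among the highest-binding peptides
    versus the whole library. Generic illustration, not the GuiTope algorithm."""
    intensities = np.asarray(intensities)
    cutoff = np.quantile(intensities, 1 - top_fraction)
    top = [p for p, v in zip(peptides, intensities) if v >= cutoff]

    def kmer_counts(seqs):
        counts = defaultdict(int)
        for s in seqs:
            for i in range(len(s) - k + 1):
                counts[s[i:i + k]] += 1
        return counts

    all_counts = kmer_counts(peptides)
    top_counts = kmer_counts(top)
    n_all = max(sum(all_counts.values()), 1)
    n_top = max(sum(top_counts.values()), 1)
    # Enrichment: k-mer frequency among top binders / frequency in the whole library.
    return {
        kmer: (top_counts[kmer] / n_top) / (all_counts[kmer] / n_all)
        for kmer in top_counts
    }

# Toy usage with made-up random-sequence peptides and binding intensities.
rng = np.random.default_rng(2)
alphabet = list("ACDEFGHIKLMNPQRSTVWY")
peptides = ["".join(rng.choice(alphabet, 17)) for _ in range(1000)]
intensities = rng.lognormal(mean=0, sigma=1, size=1000)
scores = kmer_enrichment(peptides, intensities)
print(sorted(scores, key=scores.get, reverse=True)[:5])  # top candidate motifs
```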
Contributors: Halperin, Rebecca (Author) / Johnston, Stephen A. (Thesis advisor) / Bordner, Andrew (Committee member) / Taylor, Thomas (Committee member) / Stafford, Phillip (Committee member) / Arizona State University (Publisher)
Created: 2011