Matching Items (195)
Description

Current information on successful leadership and management practices is contradictory and inconsistent, which makes it difficult to understand which business practices are successful and which are not. The purpose of this study is to identify a simple process that quickly and logically identifies consistent and inconsistent leadership and management criteria. The hypothesis proposed is that Information Measurement Theory (IMT), along with the Kashiwagi Solution Model (KSM), is a methodology that can differentiate between accurate and inaccurate principles. The initial part of the study, a review of authors in these areas, shows how conflicting the information is; it also served to establish an initial baseline of recommended practices aligned with IMT. The one author who excels in comparison to the rest suits the "Initial Baseline Matrix from Deming," which composes the first model. The second model, termed the "Full Extended KSM-Matrix," is composed of all the LS characteristics found among all authors and IMT. Both models were tested for accuracy. The second part of the study evaluated the perception of individuals on these principles. Two different groups were evaluated: one group of people with prior training and knowledge of IMT, and another group without any knowledge of IMT. The results of the survey showed more confusion in the group without knowledge of IMT, and improved consistency and less variation in the group with knowledge of IMT. The third part of the study, the analysis of case studies of success and failure, identified principles as contributors and categorized them into LS/type "A" characteristics and RS/type "C" characteristics by applying the KSM. The results validated the initial proposal and led to the conclusion that practices that fall on the LS side of the KSM will lead to success, while practices that fall on the RS side will lead to failure. The comparison and testing of both models indicated dominant support for the IMT concepts as contributors to success, while the KSM model has a higher accuracy of prediction.
Contributors: Reynolds, Harry (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In this work, the vapor transport and aerobic bio-attenuation of compounds from a multi-component petroleum vapor mixture were studied for six idealized lithologies in 1.8-m tall laboratory soil columns. Columns representing different geological settings were prepared using 20-40 mesh sand (medium-grained) and 16-minus mesh crushed granite (fine-grained). The contaminant vapor source was a liquid composed of twelve petroleum hydrocarbons common in weathered gasoline. It was placed in a chamber at the bottom of each column, and the vapors diffused upward through the soil to the top, where they were swept away with humidified gas. The experiment was conducted in three phases: i) nitrogen sweep gas; ii) air sweep gas; and iii) vapor source concentrations decreased ten-fold from the original concentrations, under air sweep gas. Oxygen, carbon dioxide, and hydrocarbon concentrations were monitored over time. The data allowed determination of times to reach steady conditions, effluent mass emissions, and concentration profiles. Times to reach near-steady conditions were consistent with theory and chemical-specific properties. First-order degradation rates were highest for straight-chain alkanes and aromatic hydrocarbons. Normalized effluent mass emissions were lower for the lower source concentration and aerobic conditions. At the end of the study, soil core samples were taken every 6 in. Soil moisture content analyses showed that water had redistributed in the soil during the experiment: the soil at the bottom of the columns generally had higher moisture contents than initial values, and soil at the top had lower moisture contents. Profiles of the number of colony-forming units of hydrocarbon-utilizing bacteria per gram of soil indicated that the highest concentrations of degraders were located at the vertical intervals where maximum degradation activity was suggested by CO2 profiles. Finally, the near-steady conditions of each phase of the study were simulated using a three-dimensional transient numerical model. The model was fit to the Phase I data by adjusting soil properties, and then fit to Phase III data to obtain compound-specific first-order biodegradation rate constants ranging from 0.0 to 5.7x10^3 d^-1.
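The column behavior described above can be approximated with a one-dimensional steady diffusion-reaction screening model. The sketch below is illustrative only: the porosities, the benzene-like diffusion coefficient, and the first-order rate constant are assumed values, and this is not the authors' three-dimensional transient model.

```python
# A minimal 1-D screening sketch of diffusive vapor transport with first-order
# aerobic biodegradation. All parameter values are assumptions for illustration,
# not data from this study.
import math

L = 1.8             # column height, m
D_air = 8.0e-6      # free-air diffusion coefficient, m^2/s (benzene-like, assumed)
theta_a, theta_T = 0.30, 0.40                        # air-filled / total porosity (assumed)
D_eff = D_air * theta_a**(10.0 / 3.0) / theta_T**2   # Millington-Quirk tortuosity model

k = 0.5 / 86400.0   # first-order degradation rate constant, 1/s (0.5 d^-1, assumed)

# Solving D_eff*C'' = k*C with a fixed source at x=0 and C=0 at the swept top
# gives a characteristic reaction length sqrt(D_eff/k); the effluent flux,
# normalized to the no-degradation flux D_eff*C0/L, is (L/lam)/sinh(L/lam).
lam = math.sqrt(D_eff / k)
attenuation = (L / lam) / math.sinh(L / lam)

print(f"D_eff ~ {D_eff:.2e} m^2/s; time to near-steady ~ {L**2 / D_eff / 86400:.0f} d")
print(f"reaction length ~ {lam:.2f} m; effluent flux attenuation ~ {attenuation:.3f}")
```

With these assumed values the model predicts near-steady conditions within weeks and strong attenuation of the effluent flux under aerobic conditions, consistent in character with the trends reported above.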
Contributors: Escobar Melendez, Elsy (Author) / Johnson, Paul C. (Thesis advisor) / Andino, Jean (Committee member) / Forzani, Erica (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Kavazanjian, Edward (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

We propose a novel solution to prevent cancer by developing a prophylactic cancer vaccine. Several sources of antigens for cancer vaccines have been published. Among these, antigens that contain a frame-shift (FS) peptide or viral peptide are quite attractive for a variety of reasons. FS sequences, arising from either mistakes in RNA processing or mutations in genomic DNA, may lead to the generation of neo-peptides that are foreign to the immune system. Viral peptides presumably would originate from exogenous but integrated viral nucleic acid sequences. Both are non-self, which lessens concerns about the development of autoimmunity. I have developed a bioinformatic approach to identify these aberrant transcripts in the cancer transcriptome. Their suitability for use in a vaccine is evaluated by establishing their frequencies and predicting possible epitopes, along with their population coverage according to the prevalence of major histocompatibility complex (MHC) types. Viral transcripts and transcripts with FS mutations from gene fusion, insertion/deletion at coding microsatellite DNA, and alternative splicing were identified in the NCBI Expressed Sequence Tag (EST) database. 48 FS chimeric transcripts were validated in 50 breast cell lines and 68 primary breast tumor samples, at frequencies from 4% to 98%, by RT-PCR and sequencing confirmation. These 48 FS peptides, if translated and presented, could be used to protect more than 90% of the population in North America, based on the prediction of epitopes derived from them. Furthermore, we synthesized 150 peptides corresponding to FS and viral peptides that we predicted would exist in tumor patients, and we tested over 200 different cancer patient sera. We found a number of serologically reactive peptide sequences in cancer patients that had little to no reactivity in healthy controls, providing strong support for our bioinformatic approach. This study describes a process used to identify aberrant transcripts that lead to a new source of antigens that can be tested and used in a prophylactic cancer vaccine. The vast amount of transcriptome data for various cancers from The Cancer Genome Atlas (TCGA) project will enhance our ability to further select better cancer antigen candidates.
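As a hedged illustration of the FS concept (a toy, not the authors' EST-mining pipeline), the sketch below shows how a one-base insertion shifts the reading frame and produces a neo-peptide that ends at the first new stop codon. It assumes Biopython is available, and the sequence and the helper name `fs_neopeptide` are invented for this example.

```python
# Toy sketch: a frame-shifting insertion yields a run of "neo" amino acids
# (a candidate FS neo-antigen) downstream of the mutation, ending at the
# first new stop codon. Sequences are invented for illustration.
from Bio.Seq import Seq

def fs_neopeptide(cdna: str, pos: int, insertion: str) -> str:
    """Return the peptide tail of the mutant transcript that diverges
    from the wild-type protein (the FS neo-peptide candidate)."""
    wt = str(Seq(cdna).translate(to_stop=True))
    mut_seq = cdna[:pos] + insertion + cdna[pos:]
    # trim to a multiple of 3 so translation sees only complete codons
    mut = str(Seq(mut_seq[: len(mut_seq) - len(mut_seq) % 3]).translate(to_stop=True))
    # first position where mutant and wild-type proteins disagree
    i = next((i for i, (a, b) in enumerate(zip(mut, wt)) if a != b),
             min(len(mut), len(wt)))
    return mut[i:]

# A 1-bp insertion after codon 2 shifts the frame of everything downstream.
cdna = "ATGGCTGAAGATCTGTTTGGCTAA"   # encodes M A E D L F G *
print(fs_neopeptide(cdna, 6, "G"))   # prints the neo-peptide, e.g. GRSVWL
```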
Contributors: Lee, HoJoon (Author) / Johnston, Stephen A. (Thesis advisor) / Kumar, Sudhir (Committee member) / Miller, Laurence (Committee member) / Stafford, Phillip (Committee member) / Sykes, Kathryn (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Contamination by chlorinated ethenes is widespread in groundwater aquifers, sediment, and soils worldwide. The overarching objectives of my research were to understand how bacteria of the genus Dehalococcoides function optimally to carry out reductive dechlorination of chlorinated ethenes in a mixed microbial community, and then to apply this knowledge to manage dechlorinating communities in the hydrogen-based membrane biofilm reactor (MBfR). The MBfR is used for the biological reduction of oxidized contaminants in water using hydrogen supplied as the electron donor by diffusion through gas-transfer fibers. First, I characterized a new anaerobic dechlorinating community developed in our laboratory, named DehaloR^2, in terms of chlorinated ethene turnover rates and assessed its microbial community composition. I then carried out an experiment to correlate performance and community structure for trichloroethene (TCE)-fed microbial consortia. Fill-and-draw reactors inoculated with DehaloR^2 demonstrated a direct correlation between microbial community function and structure as the TCE-pulsing rate was increased. An electron-balance analysis predicted the community structure based on measured concentrations of products and constant net yields for each microorganism. The predictions corresponded to trends in the community structure based on pyrosequencing and quantitative PCR up to the highest TCE-pulsing rate, where deviations from the trend resulted from stress by the chlorinated ethenes. Next, I optimized a method for the simultaneous detection of chlorinated ethenes and ethene at or below the Environmental Protection Agency maximum contaminant levels for groundwater, using solid-phase microextraction in a gas chromatograph with a flame ionization detector. This method is ideal for monitoring biological reductive dechlorination in groundwater, where ethene is the ultimate end product. Its major advantage is that it uses a small sample volume of 1 mL, making it ideally suited for bench-scale feasibility studies, such as the MBfR. Last, I developed a reliable start-up and operation strategy for TCE reduction in the MBfR. Successful operation relied on controlling the pH-increase effects of methanogenesis and homoacetogenesis, along with creating hydrogen limitation during start-up to allow dechlorinators to compete against other microorganisms. Methanogens were additionally minimized during continuous-flow operation by a limitation in bicarbonate resulting from strong homoacetogenic activity.
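The electron-balance idea can be illustrated with a small back-of-the-envelope calculation: apportion the hydrogen-derived electron equivalents among the measured reduced products, then weight each by an assumed net yield to predict relative guild abundance. The product amounts and yields below are invented placeholders, not DehaloR^2 measurements; only the stoichiometric electron equivalents are standard values.

```python
# Hedged sketch of an electron balance over competing hydrogen sinks.
# Stoichiometric e- equivalents per mol product are standard; the measured
# product amounts and net yields are invented for illustration.
ELECTRON_EQ = {
    "ethene_from_TCE": 6,   # TCE + 3 H2 -> ethene + 3 HCl  (6 e-)
    "methane": 8,           # CO2 + 8 H+ + 8 e- -> CH4 + 2 H2O
    "acetate": 8,           # 2 CO2 + 8 H+ + 8 e- -> CH3COOH + 2 H2O
}
produced_mmol = {"ethene_from_TCE": 1.2, "methane": 0.4, "acetate": 0.9}       # assumed
yield_per_eeq = {"ethene_from_TCE": 0.6, "methane": 1.0, "acetate": 1.1}       # assumed

eeq = {k: produced_mmol[k] * ELECTRON_EQ[k] for k in produced_mmol}
total = sum(eeq.values())
for guild, e in eeq.items():
    share = e / total                      # fraction of donor electrons to this guild
    biomass = e * yield_per_eeq[guild]     # relative biomass, given a constant net yield
    print(f"{guild:16s} e-eq share {share:6.1%}, relative biomass {biomass:.2f}")
```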
Contributors: Ziv-El, Michal (Author) / Rittmann, Bruce E. (Thesis advisor) / Krajmalnik-Brown, Rosa (Thesis advisor) / Halden, Rolf U. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The overall goal of this dissertation is to advance understanding of biofilm reduction of oxidized contaminants in water and wastewater. Chapter 1 introduces the fundamentals of biological reduction of three oxidized contaminants (nitrate, perchlorate, and trichloroethene (TCE)) using two biofilm processes (hydrogen-based membrane biofilm reactors (MBfR) and packed-bed heterotrophic reactors (PBHR)), and it identifies the research objectives. Chapters 2 through 6 focus on nitrate removal using the MBfR and PBHR, while Chapters 7 through 10 investigate simultaneous reduction of nitrate and another oxidized compound (perchlorate, sulfate, or TCE) in the MBfR. Chapter 11 summarizes the major findings of this research. Chapters 2 and 3 demonstrate nitrate removal from a groundwater and identify the maximum nitrate loadings using a pilot-scale MBfR and a pilot-scale PBHR, respectively. Chapter 4 compares the MBfR and the PBHR for denitrification of the same nitrate-contaminated groundwater. The comparison includes the maximum nitrate loading, the effluent water quality of the denitrification reactors, and the impact of post-treatment on water quality. Chapter 5 theoretically and experimentally demonstrates that the nitrate biomass-carrier surface loading, rather than the traditionally used empty-bed contact time or nitrate volumetric loading, is the primary design parameter for heterotrophic denitrification. Chapter 6 constructs a pH-control model to predict pH, alkalinity, and precipitation potential in heterotrophic or hydrogen-based autotrophic denitrification reactors. Chapter 7 develops and uses steady-state permeation tests and a mathematical model to determine the hydrogen-permeation coefficients of three fibers commonly used in the MBfR. The coefficients are then used as inputs for the three models in Chapters 8-10. Chapter 8 develops a multispecies biofilm model for simultaneous reduction of nitrate and perchlorate in the MBfR. The model quantitatively and systematically explains how operating conditions affect nitrate and perchlorate reduction and biomass distribution via four mechanisms. Chapter 9 modifies the nitrate-and-perchlorate model into a nitrate-and-sulfate model and uses it to identify operating conditions corresponding to the onset of sulfate reduction. Chapter 10 modifies the nitrate-and-perchlorate model into a nitrate-and-TCE model and uses it to investigate how operating conditions affect TCE reduction and the accumulation of TCE reduction intermediates.
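Chapter 5's distinction among design parameters is easy to see numerically. The sketch below computes empty-bed contact time, volumetric loading, and biomass-carrier surface loading for an assumed packed-bed reactor; every number is an illustrative assumption, not pilot-scale data from the dissertation.

```python
# Hedged sketch: the three candidate design parameters for a heterotrophic
# packed-bed denitrification reactor. All values below are assumed.
Q = 40.0      # flow, m^3/d (assumed)
C_in = 50.0   # influent nitrate, g N/m^3 (assumed)
V = 2.0       # packed-bed volume, m^3 (assumed)
a = 800.0     # specific biofilm-carrier surface area, m^2/m^3 (assumed)

ebct_min = V / Q * 24 * 60        # empty-bed contact time, min
volumetric = Q * C_in / V         # nitrate volumetric loading, g N/m^3-d
surface = Q * C_in / (V * a)      # biomass-carrier surface loading, g N/m^2-d

print(f"EBCT {ebct_min:.0f} min | volumetric {volumetric:.0f} g N/m3-d | "
      f"surface {surface:.2f} g N/m2-d")
```

The point of the surface-loading parameter is that two reactors with identical EBCT but different carrier surface areas `a` present very different nitrate fluxes to the biofilm, so only the surface loading tracks the biofilm's actual capacity.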
Contributors: Tang, Youneng (Author) / Rittmann, Bruce E. (Thesis advisor) / Westerhoff, Paul (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Halden, Rolf (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Immunosignaturing is a technology that allows the humoral immune response to be observed through the binding of antibodies to random-sequence peptides. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides in a multiplexed fashion. There are computational and statistical challenges to the analysis of immunosignaturing data. The overall aim of my dissertation is to develop novel computational and statistical methods for immunosignaturing data to assess its potential for diagnostics and drug discovery. First, I discovered that Naive Bayes, a classification algorithm that leverages the biological independence of the probes on our array to gather more information, outperforms other classification algorithms in both speed and accuracy. Second, using this classifier, I tested the specificity and sensitivity of the immunosignaturing platform for its ability to resolve four different diseases (pancreatic cancer, pancreatitis, type 2 diabetes, and panIN) that target the same organ (pancreas). These diseases were separated with >90% specificity from controls and from each other. Third, I observed that the immunosignatures of type 2 diabetes and of cardiovascular complications are unique, consistent, and reproducible, and can be separated with 100% accuracy from controls. But when these two complications arise in the same person, the resultant immunosignature is quite different from that of individuals with only one disease. I developed a method to trace back from informative random peptides in disease signatures to the potential antigen(s), and built a decipher system to trace random peptides in the type 1 diabetes immunosignature to known antigens. Immunosignaturing, unlike ELISA, has the ability to detect not only the presence but also the absence of a response during a disease: I observed that not only higher but also lower peptide intensities can be mapped to antigens in type 1 diabetes. To study immunosignaturing's potential for population diagnostics, I studied the effects of age, gender, and geographical location on immunosignaturing data. For its potential as a health-monitoring technology, I proposed a single metric, the coefficient of variation, which has shown potential to change significantly when a person enters a disease state.
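A minimal sketch of the Naive Bayes idea on synthetic immunosignature-like data follows. It assumes scikit-learn and simulated intensities rather than the dissertation's actual arrays, but it shows how treating each peptide as conditionally independent lets the classifier pool evidence across many probes.

```python
# Hedged sketch: Gaussian Naive Bayes on simulated immunosignature data.
# Synthetic intensities only; sample sizes and effect sizes are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p, informative = 60, 1000, 120        # samples, peptides, disease-shifted peptides
X = rng.normal(size=(n, p))              # baseline peptide intensities
y = np.repeat([0, 1], n // 2)            # 0 = control, 1 = disease
X[y == 1, :informative] += 1.0           # disease shifts a subset of peptides

# Each peptide contributes its own likelihood term, so weak evidence from
# many probes accumulates into a confident call.
acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
print(f"5-fold CV accuracy: {acc:.2f}")
```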
Contributors: Kukreja, Muskan (Author) / Johnston, Stephen Albert (Thesis advisor) / Stafford, Phillip (Committee member) / Dinu, Valentin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The introduction of novel information technology within contemporary healthcare settings presents a critical juncture for the industry and thus lends itself to the importance of better understanding the impact of this emerging "health 2.0" landscape. Simply put, how such technology may affect the healthcare system is still not fully understood, despite the ever-growing need to adopt it in order to serve a growing patient population. Thus, two pertinent questions are posed: is health information technology (HIT) useful and practical and, if so, what is the best way to implement it? This study examined the clinical implementation of specific instances of HIT so as to weigh its benefits and risks and ultimately construct a proposal for successful widespread adoption. Because information analysis is central to HIT, Information Measurement Theory (IMT) was used to measure the effectiveness of current HIT systems and to elucidate improvements for future implementation. The results indicate that increased transparency, attention to patient-focused approaches, and proper IT training will not only allow HIT to better serve the community, but will also decrease inefficient healthcare expenditure.
Contributors: Maietta, Myles Anthony (Author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / School of Life Sciences (Contributor)
Created: 2015-05
Description

The value of data in the construction industry is driven by the actual worth or usefulness the data can provide. The revolutionary Best Value Performance Information Procurement System, implemented in the industry by the Performance Based Studies Research Group (PBSRG) at ASU, optimizes the value of data. By simplifying the details and complexity of a construction project through dominant and logical thinking, the Best Value system delivers efficient, minimal-risk success. The Best Value model's implementation in industry projects is observed in the PBSRG Minnesota projects in order to improve data collection and metric analysis. The Minnesota projects specifically have an issue with delivering Best Value transparency, the notion that the details of project data should be used to support dominant ideas. By improving and simplifying PBSRG's data collection tools, Best Value transparency can be achieved more easily and effectively, in turn improving the Best Value system.
Contributors: Misiak, Erik Richard (Author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2015-05
Description

The current model of revenue generation for some free-to-play video games is preventing the companies controlling them from growing, but a few changes in approach could alleviate these issues. A new style of video game, called a MOBA (Multiplayer Online Battle Arena), has emerged in the past few years, bringing with it a new style of generating wealth. Contrary to past gaming models, where users must purchase the game outright, view advertisements, or purchase items to gain a competitive advantage, MOBAs require no payment of any kind. These are free-to-play computer games that provide users with all the tools necessary to compete with anyone free of charge; no advantages can be purchased. This leaves optional purchases of purely aesthetic items, bought only if the buyer wishes to see their character in a different set of attire, as the only way for users to provide money to the company. The genre's best in show, called League of Legends (LOL), has spearheaded this method of revenue generation. Fortunately for LOL, its popularity has reached levels never seen in video games: the world championships had more viewers than game 7 of the NBA Finals (Dorsey). The player base alone is enough to keep the company afloat currently, but the fact that only 3.75% of the players are converted into revenue is alarming. Each player brings the company an average of $1.32, or 30% of what some other free-to-play games earn per user (Comparing MMO). It is this low per-player income that has caused Riot Games, the developer of LOL, to state that their e-sports division is not currently profitable. To resolve this issue, LOL must take on a more aggressive marketing plan. Advertisements for the NBA Finals cost $460,000 for 30 seconds, and LOL should aim for ads in this range (Lombardo). With an average of 3 million people logged on at any time, 90% of the players being male, and 85% being between the ages of 16 and 30, advertising via this game would appeal to many companies, making a deal easy to strike (LOL infographic 2012). The idea also appeals to players: 81% of players surveyed said that an advertisement on the client that allows for the option to place an order would improve or not impact their experience. Moving forward with this, the gaming client would be updated to contain both an option to order pizza and an advertisement for Mountain Dew. This type of advertising was determined based on community responses through a sequence of survey questions. These small adjustments to the game would allow LOL to generate enough income for Riot Games to expand into other areas of the e-sports industry.
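A quick back-of-the-envelope check of the revenue figures quoted above follows; it is a hedged sketch whose derived numbers are arithmetic implications of the cited statistics, not figures reported in the sources themselves.

```python
# Hedged arithmetic check of the cited revenue statistics.
paying_share = 0.0375        # 3.75% of players converted into revenue (cited)
revenue_per_player = 1.32    # average $ per player, across all players (cited)
peer_multiple = 0.30         # LOL earns 30% of peers' per-user revenue (cited)

per_paying_player = revenue_per_player / paying_share     # ~ $35.20
peer_revenue_per_user = revenue_per_player / peer_multiple  # ~ $4.40

print(f"each paying player contributes about ${per_paying_player:.2f}")
print(f"peer free-to-play titles earn about ${peer_revenue_per_user:.2f} per user")
```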
Contributors: Seip, Patrick (Co-author) / Zhao, BoNing (Co-author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Sandra Day O'Connor College of Law (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor)
Created: 2015-05
Description

Lung cancer is the leading cause of cancer-related deaths in the US. Low-dose computed tomography (LDCT) scans are speculated to reduce lung cancer mortality. However, LDCT scans impose multiple risks, including false-negative results, false-positive results, overdiagnosis, and cancer due to repeated exposure to radiation. Immunosignaturing is a new method proposed to screen for and detect lung cancer, eliminating the risks associated with LDCT scans. Known and blinded primary blood sera from participants with lung cancer and without cancer were run on peptide microarrays and analyzed. The immunosignatures of the known samples collectively indicated 120 peptides that distinguished lung cancer participants from non-cancer participants. These 120 peptides were then used to determine the status of the blinded samples. Verification of the results from Vanderbilt is pending.
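As a hedged illustration of how a discriminating panel like the 120 peptides described above might be selected (the abstract does not specify the study's actual selection method), the sketch below ranks synthetic array features by a two-sample t-test and keeps the top 120.

```python
# Hedged sketch: pick the 120 most discriminating peptides by two-sample
# t-test on simulated array intensities. Synthetic data and effect sizes only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
cancer = rng.normal(size=(30, 5000))      # 30 cancer sera x 5000 peptides
control = rng.normal(size=(30, 5000))     # 30 control sera
cancer[:, :120] += 0.8                    # plant 120 disease-shifted peptides

t, p = ttest_ind(cancer, control, axis=0)
top120 = np.argsort(p)[:120]              # indices of most discriminating peptides
print(f"recovered {np.sum(top120 < 120)} of the 120 planted peptides")
```

The selected panel would then be fixed before classifying blinded samples, mirroring the known-then-blinded workflow described in the abstract.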
Contributors: Nguyen, Geneva Trieu (Author) / Woodbury, Neal (Thesis director) / Zhao, Zhan-Gong (Committee member) / Stafford, Phillip (Committee member) / Barrett, The Honors College (Contributor) / Department of Chemistry and Biochemistry (Contributor) / Department of Psychology (Contributor)
Created: 2015-05