Matching Items (97)
Description
Woody plant encroachment is a worldwide phenomenon linked to water availability in semiarid systems. Nevertheless, the implications of woody plant encroachment on the hydrologic cycle are poorly understood, especially at the catchment scale. This study takes place in a pair of small semiarid rangeland basins undergoing the encroachment of Prosopis velutina Woot., or velvet mesquite tree. The similarly sized basins are in close proximity, leading to equivalent meteorological and soil conditions. One basin was treated for mesquite in 1974, while the other represents the encroachment process. A sensor network was installed to measure ecohydrological states and fluxes, including precipitation, runoff, soil moisture and evapotranspiration. Observations from June 1, 2011 through September 30, 2012 are presented to describe the seasonality and spatial variability of ecohydrological conditions during the North American Monsoon (NAM). Runoff observations are linked to historical changes in runoff production in each watershed. Observations indicate that the mesquite-treated basin generates more runoff pulses and greater runoff volume for small rainfall events, while the mesquite-encroached basin generates more runoff volume for large rainfall events. A distributed hydrologic model is applied to both basins to investigate the runoff threshold processes experienced during the NAM. Vegetation in the two basins is classified into grass, mesquite, or bare soil using high-resolution imagery. Model predictions are used to investigate the vegetation controls on soil moisture, evapotranspiration, and runoff generation. The distributed model shows that grass and mesquite sites retain the highest levels of soil moisture. The model also captures the runoff generation differences between the two watersheds that have been observed over the past decade.
Generally, grass sites in the mesquite-treated basin have less plant interception and evapotranspiration, leading to higher soil moisture that supports greater runoff for small rainfall events. For large rainfall events, the mesquite-encroached basin produces greater runoff due to its higher fraction of bare soil. The results of this study show that a distributed hydrologic model can be used to explain runoff threshold processes linked to woody plant encroachment at the catchment-scale and provides useful interpretations for rangeland management in semiarid areas.
ContributorsPierini, Nicole A (Author) / Vivoni, Enrique R (Thesis advisor) / Wang, Zhi-Hua (Committee member) / Mays, Larry W. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Product reliability has become a top concern of manufacturers, and customers prefer products that perform well over long periods. Because most products can last for years or even decades, accelerated life testing (ALT) is used to estimate product lifetime. Much research has been done in the ALT area, and optimal design for ALT is a major topic. This dissertation consists of three main studies. First, a methodology for finding optimal ALT designs with right censoring and interval censoring is developed; it employs the proportional hazards (PH) model and the generalized linear model (GLM) to simplify the computational process. A sensitivity study is also given to show how the parameters affect the designs. Second, an extended version of the I-optimal design for ALT is discussed, and a dual-objective design criterion is defined and illustrated with several examples. Several graphical tools are also developed to evaluate candidate designs. Finally, model-checking designs are discussed for situations in which more than one model is available.
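The core idea of ALT, extrapolating life observed at elevated stress back to normal operating conditions, can be shown with a minimal sketch. This is not the dissertation's PH/GLM formulation; it assumes a simple log-linear life-stress relation, and all stress levels, coefficients, and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical accelerated test: three elevated stress levels, ten units
# each, with true model log(life) = 10 - 0.05 * stress plus normal scatter.
stress = np.repeat([120.0, 140.0, 160.0], 10)
log_life = 10.0 - 0.05 * stress + rng.normal(0.0, 0.2, stress.size)

# Fit log(life) = b0 + b1 * stress by ordinary least squares.
X = np.column_stack([np.ones_like(stress), stress])
(b0, b1), *_ = np.linalg.lstsq(X, log_life, rcond=None)

# Extrapolate to the (unaccelerated) use-level stress.
use_stress = 60.0
predicted_log_life = b0 + b1 * use_stress
print(round(predicted_log_life, 2))   # close to the true value 7.0
```

Censoring, which the dissertation handles explicitly, is ignored here; it is what makes the GLM machinery valuable in the real problem.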
ContributorsYang, Tao (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Borror, Connie (Committee member) / Rigdon, Steve (Committee member) / Arizona State University (Publisher)
Created2013
Description
During the initial stages of experimentation, there are usually a large number of factors to be investigated. Fractional factorial (2^(k-p)) designs are particularly useful during this initial phase of experimental work. These experiments, often referred to as screening experiments, help reduce the large number of factors to a smaller set. The 16-run regular fractional factorial designs for six, seven, and eight factors are in common usage. These designs allow clear estimation of all main effects when the three-factor and higher-order interactions are negligible, but all two-factor interactions are aliased with each other, making estimation of these effects problematic without additional runs. Alternatively, certain nonregular designs, called no-confounding (NC) designs by Jones and Montgomery (Alternatives to Resolution IV Screening Designs in 16 Runs, 2010), partially confound the main effects with the two-factor interactions but do not completely confound any two-factor interactions with each other. The NC designs are useful for independently estimating main effects and two-factor interactions without additional runs. While several methods have been suggested for the analysis of data from nonregular designs, stepwise regression is familiar to practitioners, available in commercial software, and widely used in practice. Given that an NC design has been run, the performance of stepwise regression for model selection is unknown. In this dissertation I present a comprehensive simulation study evaluating stepwise regression for analyzing both regular fractional factorial and NC designs. Next, the projection properties of the six-, seven-, and eight-factor NC designs are studied, enabling the development of methods to analyze these designs. Lastly, the designs and projection properties of 9- to 14-factor NC designs onto three and four factors are presented.
Certain recommendations are made on analysis methods for these designs as well.
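As a rough illustration of the kind of analysis evaluated in the simulation study, the sketch below runs greedy forward stepwise regression on a simulated 16-run 2^(6-2) regular fractional factorial. The generators, active effects, and selection rule are assumptions for the toy example, not the dissertation's actual procedure.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# 16-run 2^(6-2) design: full factorial in A-D, generators E = ABC, F = BCD.
base = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
A, B, C, D = base.T
factors = {"A": A, "B": B, "C": C, "D": D, "E": A * B * C, "F": B * C * D}

# Candidate terms: all six main effects plus all two-factor interactions.
terms = dict(factors)
for (na, xa), (nb, xb) in itertools.combinations(factors.items(), 2):
    terms[na + nb] = xa * xb

# Simulated response: active effects A, B, and the AB interaction.
y = 3.0 * A + 2.0 * B + 1.5 * A * B + rng.normal(0.0, 0.5, 16)

def rss(Xc, y):
    """Residual sum of squares from the least-squares fit of y on Xc."""
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return float(resid @ resid)

def forward_stepwise(terms, y, k=3):
    """Greedy forward selection: add the term that most reduces the RSS."""
    chosen, X = [], np.ones((len(y), 1))
    for _ in range(k):
        best = min((name for name in terms if name not in chosen),
                   key=lambda name: rss(np.column_stack([X, terms[name]]), y))
        chosen.append(best)
        X = np.column_stack([X, terms[best]])
    return chosen

print(forward_stepwise(terms, y))
```

With this seed the active terms A, B, and AB are recovered, but note that in the regular design AB is completely aliased with CE, so no analysis can separate them. That complete confounding among two-factor interactions is exactly what the NC designs avoid.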
ContributorsShinde, Shilpa (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Fowler, John (Committee member) / Jones, Bradley (Committee member) / Arizona State University (Publisher)
Created2012
Description
A P-value based method is proposed for statistical monitoring of various types of profiles in Phase II. The performance of the proposed method is evaluated by the average run length criterion under various shifts in the intercept, slope, and error standard deviation of the model. In our proposed approach, P-values are computed at each level within a sample. If at least one of the P-values is less than a pre-specified significance level, the chart signals an out-of-control condition. The primary advantage of our approach is that only one control chart is required to monitor several parameters simultaneously: the intercept, slope(s), and the error standard deviation. A comprehensive comparison of the proposed method and the existing KMW-Shewhart method for monitoring linear profiles is conducted. In addition, the effect of the number of observations within a sample on the performance of the proposed method is investigated. The proposed method is also compared to the T^2 method discussed in Kang and Albin (2000) for multivariate, polynomial, and nonlinear profiles. A simulation study shows that, overall, the proposed P-value method performs satisfactorily for different profile types.
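The signaling rule can be sketched as follows, assuming a simple linear profile with known in-control parameters and independent normal errors. The profile coefficients, sigma, and alpha below are illustrative values, not those used in the study.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def profile_signals(x, y, b0, b1, sigma, alpha=0.005):
    """Signal out of control if any within-sample P-value falls below alpha.

    Each P-value is a two-sided z-test of one observation against the
    in-control profile y = b0 + b1 * x with known error sigma.
    """
    for xj, yj in zip(x, y):
        z = (yj - (b0 + b1 * xj)) / sigma
        p = 2.0 * (1.0 - phi(abs(z)))
        if p < alpha:
            return True
    return False

# In-control profile y = 2 + 3x with sigma = 1 (illustrative values).
x = [1, 2, 3, 4]
in_control = [5.2, 7.8, 11.1, 14.0]   # near the profile values 5, 8, 11, 14
shifted    = [5.2, 7.8, 11.1, 18.5]   # large shift at the last level
print(profile_signals(x, in_control, 2.0, 3.0, 1.0))  # False
print(profile_signals(x, shifted, 2.0, 3.0, 1.0))     # True
```

In practice alpha would be chosen to hit a target in-control average run length, which is the criterion the study uses to compare methods.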
ContributorsAdibi, Azadeh (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Thesis advisor) / Li, Jing (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created2013
Description
The ever-changing economic landscape has forced many companies to re-examine their supply chains. Global resourcing and outsourcing of processes has been a strategy many organizations have adopted to reduce cost and increase their global footprint. This has, however, resulted in increased process complexity and reduced customer satisfaction. In order to meet and exceed customer expectations, many companies are forced to improve quality and on-time delivery, and have looked toward Lean Six Sigma as an approach to enable process improvement. The Lean Six Sigma literature is rich in deployment strategies; however, there is a general lack of a mathematical approach to deploying Lean Six Sigma in a global enterprise, including both project identification and prioritization. The research presented here is two-fold. First, a process characterization framework is presented to evaluate processes based on eight characteristics. An unsupervised learning technique, using clustering algorithms, is then utilized to group processes that are Lean Six Sigma conducive. The approach helps Lean Six Sigma deployment champions identify key areas within the business on which to focus a deployment. A case study is presented in which 33% of the processes were found to be Lean Six Sigma conducive. Second, having identified the parts of the business that are Lean Six Sigma conducive, the next steps are to formulate and prioritize a portfolio of projects. Very often the deployment champion is faced with selecting a portfolio of Lean Six Sigma projects that meets multiple objectives, which could include maximizing productivity, customer satisfaction, or return on investment, while meeting certain budgetary constraints. A multi-period 0-1 knapsack problem is presented that maximizes the expected net savings of the Lean Six Sigma portfolio over the life cycle of the deployment.
Finally, a case study is presented that demonstrates the application of the model in a large multinational company. Traditionally, Lean Six Sigma found its roots in manufacturing. The research presented in this dissertation also emphasizes the applicability of the methodology to the non-manufacturing space. Additionally, a comparison is conducted between manufacturing and non-manufacturing processes to highlight the challenges in deploying the methodology in both spaces.
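The portfolio-selection step can be illustrated with a single-period 0-1 knapsack sketch solved by dynamic programming. The multi-period model in the dissertation is richer; the project savings, costs, and budget below are hypothetical.

```python
def select_portfolio(savings, costs, budget):
    """0-1 knapsack: pick the subset of projects that maximizes total
    expected savings without exceeding the budget (integer costs)."""
    # best[b] = (value, chosen project indices) achievable with capacity b.
    best = [(0.0, frozenset())] * (budget + 1)
    for i, (s, c) in enumerate(zip(savings, costs)):
        new = best[:]
        for b in range(c, budget + 1):
            candidate = best[b - c][0] + s
            if candidate > new[b][0]:
                new[b] = (candidate, best[b - c][1] | {i})
        best = new
    return best[budget]

# Hypothetical Lean Six Sigma candidates: expected savings ($k) and cost ($k).
savings = [120.0, 90.0, 60.0, 45.0]
costs = [50, 40, 30, 20]
value, chosen = select_portfolio(savings, costs, budget=90)
print(value, sorted(chosen))   # → 210.0 [0, 1]
```

Extending this to multiple periods, as the dissertation does, amounts to adding a budget constraint per period and tracking each project's savings over the deployment life cycle.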
ContributorsDuarte, Brett Marc (Author) / Fowler, John W (Thesis advisor) / Montgomery, Douglas C. (Thesis advisor) / Shunk, Dan (Committee member) / Borror, Connie (Committee member) / Konopka, John (Committee member) / Arizona State University (Publisher)
Created2011
Description
This dissertation presents methods for the evaluation of ocular surface protection during natural blink function. The evaluation of ocular surface protection is especially important in the diagnosis of dry eye and the evaluation of dry eye severity in clinical trials. Dry eye is a highly prevalent disease affecting a large fraction (between 11% and 22%) of an aging population. There is only one approved therapy, with limited efficacy, which results in a huge unmet need. The reason so few drugs have reached approval is the lack of a recognized therapeutic pathway with reproducible endpoints. While the interplay between blink function and ocular surface protection has long been recognized, all currently used evaluation techniques have addressed blink function in isolation from tear film stability, the gold standard of which is Tear Film Break-Up Time (TFBUT). In the first part of this research, a manual technique for calculating ocular surface protection during natural blink function through video analysis is developed and evaluated for its ability to differentiate between dry eye and normal subjects, and the results are compared with those of TFBUT. In the second part, the technique is improved in precision and automated through the use of video analysis algorithms. This software, called the OPI 2.0 System, is evaluated for accuracy and precision, and compared with other currently recognized dry eye diagnostic techniques (e.g., TFBUT). In the third part, the OPI 2.0 System is deployed to evaluate subjects before, immediately after, and 30 minutes after exposure to a controlled adverse environment (CAE); once again, the results are compared and contrasted with commonly used dry eye endpoints. The results demonstrate that the evaluation of ocular surface protection using the OPI 2.0 System offers superior accuracy to the current standard, TFBUT.
ContributorsAbelson, Richard (Author) / Montgomery, Douglas C. (Thesis advisor) / Borror, Connie (Committee member) / Shunk, Dan (Committee member) / Pan, Rong (Committee member) / Arizona State University (Publisher)
Created2012
Description
Cancer remains one of the leading killers throughout the world. Death and disability due to lung cancer in particular accounts for one of the largest global economic burdens a disease presents. The burden on third-world countries is especially large due to the unusually large financial stress that comes from late tumor detection and expensive treatment options. Early detection using inexpensive techniques may relieve much of this burden throughout the world, not just in more developed countries. I examined the immune responses of lung cancer patients using immunosignatures: patterns of reactivity between host serum antibodies and random peptides. Immunosignatures reveal disease-specific patterns that are very reproducible. Immunosignaturing is a chip-based diagnostic method that can display the antibody diversity of an individual serum sample at low cost, and it has many applications in current medical research and in diagnosis. In a previous clinical study, the immunosignatures of patients diagnosed with lung cancer were compared with those of healthy non-cancer volunteers. The pattern of reactivity against the random peptides (the ‘immunosignature’) revealed common signals in cancer patients that were absent from healthy controls. My study involved the search for common amino acid motifs in the cancer-specific peptides. My search through the hundreds of ‘hits’ revealed certain motifs that were repeated more times than expected by random chance. The amino acids that were the most conserved in each set include tryptophan, aspartic acid, glutamic acid, proline, alanine, serine, and lysine. The most conserved amino acid overall across the sets was aspartic acid (D). The motifs were short (no more than 5-6 amino acids in a row), but the total number of motifs I identified was large enough to assure significance.
I utilized Excel to organize the large peptide sequence libraries, then CLUSTALW to cluster similar-sequence peptides, then GLAM2 to find common themes in groups of peptides. In so doing, I found sequences that were also present in translated cancer expression libraries (RNA) that matched my motifs, suggesting that immunosignatures can find cancer-specific antigens that can be both diagnostic and potentially therapeutic.
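The first pass of such a motif search can be sketched as a simple k-mer count over the hit peptides, where substrings occurring far more often than chance predicts become motif candidates. The peptides below are invented for illustration; the actual analysis used CLUSTALW and GLAM2 on the real hit library.

```python
from collections import Counter

def kmer_counts(peptides, k):
    """Count every length-k substring across a set of peptide sequences."""
    counts = Counter()
    for pep in peptides:
        for i in range(len(pep) - k + 1):
            counts[pep[i:i + k]] += 1
    return counts

# Invented 'hit' peptides; real immunosignature hits come from the array.
hits = ["ADKWSPD", "KWSPDEA", "PDKWSAL", "SSKWSPE"]
top = kmer_counts(hits, 3).most_common(2)
print(top)   # 'KWS' occurs in all four peptides
```

A real analysis would compare these counts against the expected frequency under the array's amino acid composition to judge which motifs are significantly enriched.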
ContributorsShiehzadegan, Shima (Author) / Johnston, Stephen (Thesis director) / Stafford, Phillip (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2015-12
Description
The influenza virus, also known as "the flu", is an infectious disease that has constantly affected the health of humanity, and there is currently no known cure. The Center for Innovations in Medicine (CIM) at the Biodesign Institute, located on campus at Arizona State University, has been developing synbodies as a possible influenza therapeutic. Specifically, at CIM we have attempted to design these initial synbodies to target the entire influenza virus, and preliminary data lead us to believe that these synbodies target nucleoprotein (NP). Given that the synbody targets NP, it should also penetrate cells. The focus of my honors thesis is to explore how synthetic antibodies can potentially inhibit replication of the influenza (H1N1) A/Puerto Rico/8/34 strain so that a therapeutic can be developed. A high-affinity synbody for influenza can be utilized to test for inhibition of influenza, as shown by preliminary data. The internalization of the 5-5-3819 synthetic antibody in live cells was visualized in Madin-Darby canine kidney cells under a confocal microscope. Then, by Western blot analysis, we evaluated the diminution of NP levels in treated cells versus untreated cells. Expression of NP over 8 hours was analyzed via Western blot, which showed that NP accumulation was retarded in synbody-treated cells. The data obtained from my honors thesis, together with the preliminary data, suggest that the synthetic antibody penetrates live cells and targets NP. The results of my thesis present valuable information that can be utilized by other researchers so that future experiments can be performed, eventually leading to the creation of a more effective therapeutic for influenza.
ContributorsHayden, Joel James (Author) / Diehnelt, Chris (Thesis director) / Johnston, Stephen (Committee member) / Legutki, Bart (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created2014-05
Description
The widespread use of statistical analysis in sports, particularly baseball, has made it increasingly necessary for small and mid-market teams to find ways to maintain their analytical advantages over large-market clubs. In baseball, an opportunity exists for teams with limited financial resources to sign players under team control to long-term contracts before other teams can bid for their services in free agency. If small and mid-market clubs can successfully identify talented players early, they can save money, achieve cost certainty, and remain competitive for longer periods of time. These deals are also advantageous to players, since they receive job security and greater financial dividends earlier in their careers. The objective of this paper is to develop a regression-based predictive model that teams can use to forecast the performance of young baseball players with limited Major League experience. Several tasks were conducted to achieve this goal: (1) Data were obtained from Major League Baseball and Lahman's Baseball Database and sorted using Excel macros for easier analysis. (2) Players were separated into three positional groups with similar fielding requirements and offensive profiles: Group I comprises first and third basemen; Group II comprises second basemen, shortstops, and center fielders; and Group III comprises left and right fielders. (3) Based on the context of baseball and the nature of offensive performance metrics, only players who achieved more than 200 plate appearances within the first two years of their major league debut are included in this analysis. (4) The statistical software package JMP was used to create regression models for each group and analyze the residuals for any irregularities or normality violations. Once the models were developed, slight adjustments were made to improve the accuracy of the forecasts and identify opportunities for future work.
It was discovered that Group I and Group III were the easiest player groupings to forecast while Group II required several attempts to improve the model.
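A minimal version of the group-level forecasting step is ordinary least squares on early-career statistics followed by a point forecast for a new player. The features, coefficients, and data below are simulated assumptions, not the thesis's actual model built in JMP.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical early-career features for one positional group:
# plate appearances, on-base percentage, slugging percentage.
X = rng.uniform([200.0, 0.28, 0.35], [650.0, 0.40, 0.55], size=(40, 3))
true_beta = np.array([0.0005, 8.0, 5.0])
# Target: a composite future-performance score (purely illustrative).
y = X @ true_beta + rng.normal(0.0, 0.2, 40)

# Fit the group's forecasting model by ordinary least squares.
Xd = np.column_stack([np.ones(40), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)

# Forecast a new player from his first two seasons of statistics.
rookie = np.array([1.0, 450.0, 0.34, 0.47])
forecast = float(rookie @ beta)
print(round(forecast, 2))
```

The thesis's residual analysis step corresponds to inspecting y - Xd @ beta for non-normality or patterns before trusting forecasts from the fitted model.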
ContributorsJack, Nathan Scott (Author) / Shunk, Dan (Thesis director) / Montgomery, Douglas (Committee member) / Borror, Connie (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2013-05
Description

Engineered pavements cover a large fraction of cities and offer significant potential for urban heat island mitigation. Though rapidly increasing research efforts have been devoted to the study of pavement materials, thermal interactions between buildings and the ambient environment are mostly neglected. In this study, numerical models featuring a realistic representation of building-environment thermal interactions were applied to quantify the effect of pavements on the urban thermal environment at multiple scales. It was found that the performance of pavements inside the canyon was largely determined by the canyon geometry. In a high-density residential area, modifying pavements had an insignificant effect on wall temperature and building energy consumption. At a regional scale, various pavement types were also found to have a limited cooling effect on land surface temperature and 2-m air temperature for metropolitan Phoenix. In the context of global climate change, the effect of pavement was evaluated in terms of the equivalent CO2 emission. The equivalent CO2 emission offset by reflective pavements in urban canyons was only about 13.9–46.6% of that without building canopies, depending on the canyon geometry. This study revealed the importance of building-environment thermal interactions in determining thermal conditions inside the urban canopy.

ContributorsYang, Jiachuan (Author) / Wang, Zhi-Hua (Author) / Kaloush, Kamil (Author) / Dylla, Heather (Author)
Created2016-08-22