Matching Items (141)
Description
The price-based marketplace has dominated the construction industry. The majority of owners use price-based management practices (expectation setting and decision making, control, direction, and inspection). The price-based, management-and-control paradigm has not worked. Clients have been moving toward a best value environment (hiring contractors who know what they are doing, who preplan, and who manage and minimize risk and deviation). Owners are trying to move from client direction and control to hiring an expert and allowing that expert to perform the quality control and risk management. This shift in environments changes the paradigm for contractors: from reactive to proactive, from a bureaucratic, non-accountable position to an accountable one, from a relationship-based, non-measuring entity to a measuring one, and to a contractor who manages and minimizes the risk that they do not control. Years of price-based practices have caused poor quality and low performance in the construction industry. This research identifies what a best value contractor or vendor is, what factors make up a best value vendor, and a methodology to transform a vendor into a best value vendor. It uses deductive logic and a case study to confirm the logic and the proposed methodology.
Contributors: Pauli, Michele (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Nonlinear dispersive equations model nonlinear waves in a wide range of physical and mathematical contexts. They reinforce or dissipate the effects of linear dispersion and nonlinear interactions, and thus may be of a focusing or defocusing nature. The nonlinear Schrödinger equation (NLS) is an example of such an equation. It appears as a model in hydrodynamics, nonlinear optics, quantum condensates, heat pulses in solids, and various other nonlinear instability phenomena. In mathematics, one of the interests is the study of wave interaction: waves propagating with different speeds and/or in different directions either produce small perturbations comparable with linear behavior, create solitary waves, or even lead to singular solutions. This dissertation studies the global behavior of finite-energy solutions to the $d$-dimensional focusing NLS equation, $i\partial_t u + \Delta u + |u|^{p-1}u = 0$, with initial data $u_0 \in H^1$, $x \in \mathbb{R}^d$; the nonlinearity power $p$ and the dimension $d$ are chosen so that the scaling index $s = \frac{d}{2} - \frac{2}{p-1}$ is between 0 and 1, thus the NLS is mass-supercritical ($s > 0$) and energy-subcritical ($s < 1$). For solutions with $ME[u_0] < 1$ ($ME[u_0]$ stands for an invariant and conserved quantity in terms of the mass and energy of $u_0$), a sharp threshold for scattering and blowup is given. Namely, if the renormalized gradient $g_u$ of a solution $u$ to NLS is initially less than 1, i.e., $g_u(0) < 1$, then the solution exists globally in time and scatters in $H^1$ (approaches some linear Schrödinger evolution as $t \to \pm\infty$); if the renormalized gradient $g_u(0) > 1$, then the solution exhibits blowup behavior, that is, either a finite-time blowup occurs, or there is a divergence of the $H^1$ norm in infinite time. This work generalizes the results for the 3d cubic NLS obtained in a series of papers by Holmer-Roudenko and Duyckaerts-Holmer-Roudenko, with the key ingredients, the concentration compactness and localized variance, developed in the context of the energy-critical NLS and nonlinear wave equations by Kenig and Merle. One of the difficulties is the fractional powers of the nonlinearities, which are overcome by considering Besov-Strichartz estimates and various fractional differentiation rules.
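For reference, the equation and threshold quantities described above can be written out explicitly; the display below is a minimal LaTeX transcription of the abstract's statements, with $ME[u_0]$ and the renormalized gradient $g_u$ as defined in the dissertation.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Focusing NLS studied in the dissertation (notation as in the abstract)
\begin{align*}
  & i\,\partial_t u + \Delta u + |u|^{p-1}u = 0, \qquad u(0) = u_0 \in H^1(\mathbb{R}^d),\\
  & s = \frac{d}{2} - \frac{2}{p-1}, \qquad 0 < s < 1 \quad \text{(mass-supercritical, energy-subcritical)}.
\end{align*}
% Sharp dichotomy under the mass-energy constraint ME[u_0] < 1:
\begin{align*}
  g_u(0) < 1 &\;\Longrightarrow\; \text{$u$ is global and scatters in $H^1$ as $t \to \pm\infty$},\\
  g_u(0) > 1 &\;\Longrightarrow\; \text{finite-time blowup, or the $H^1$ norm diverges in infinite time}.
\end{align*}
\end{document}
```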
Contributors: Guevara, Cristi Darley (Author) / Roudenko, Svetlana (Thesis advisor) / Castillo-Chavez, Carlos (Committee member) / Jones, Donald (Committee member) / Mahalov, Alex (Committee member) / Suslov, Sergei (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As global competition continues to grow more disruptive, organizational change is an ever-present reality that affects companies in all industries at both the operational and strategic level. Organizational change capabilities have become a necessary aspect of existence for organizations in all industries worldwide. Research suggests that more than half of all organizational change efforts fail to achieve their original intended results, with some studies citing failure rates as high as 70 percent. Exacerbating this problem is the fact that no single change methodology has been universally accepted. This thesis examines two aspects of organizational change, the implementation of tactical and strategic initiatives, with a primary focus on successful tactical implementation techniques. This research proposed that tactical issues typically dominate the focus of change agents and recipients alike, often to the detriment of strategic-level initiatives that are vital to the overall value and success of the organizational change effort. The Delphi method was employed to develop a tool to facilitate the initial implementation of organizational change such that tactical barriers were minimized and available resources for strategic initiatives were maximized. Feedback from two expert groups of change agents and change facilitators was solicited to develop the tool and evaluate its impact. Preliminary pilot testing of the tool confirmed the proposal and successfully served to minimize tactical barriers to organizational change.
Contributors: Lines, Brian (Author) / Sullivan, Kenneth T. (Thesis advisor) / Badger, William (Committee member) / Kashiwagi, Dean (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In an effort to address the lack of literature on on-campus active travel, this study aims to investigate the following primary questions:
• What modes do students use to travel on campus?
• What motivations underlie students' mode choice on campus?
My first stage of research involved a series of qualitative investigations. I held one-on-one virtual interviews with students in which I asked them about the mode they use and why they feel that their chosen mode works best for them. These interviews served two functions. First, they provided me with insight into the various motivations underlying student mode choice. Second, they indicated which explanatory variables should be included in a model of mode choice on campus.
The first half of the research project informed a quantitative survey that was released via the Honors Digest to attract student respondents. Data were gathered on travel behavior as well as relevant explanatory variables.
My analysis involved developing a logit model to predict student mode choice on campus and presenting the model estimation in conjunction with a discussion of student travel motivations based on the qualitative interviews. I use this information to make a recommendation on how campus infrastructure could be modified to better support the needs of the student population.
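As a rough illustration of the modeling step described above, the sketch below estimates a multinomial logit mode-choice model on synthetic data; the mode categories, explanatory variables, and the use of statsmodels' MNLogit are assumptions for illustration, not the thesis's actual survey variables or code.

```python
# Minimal sketch of a multinomial logit mode-choice model (illustrative only).
# Column names and data are hypothetical; the thesis's actual survey variables may differ.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "distance_km": rng.uniform(0.1, 2.5, n),    # trip distance across campus
    "has_bike": rng.integers(0, 2, n),          # 1 if the student keeps a bike on campus
    "time_pressure": rng.integers(0, 2, n),     # 1 if rushing between classes
})
# Hypothetical mode labels: 0 = walk, 1 = bike/scooter, 2 = campus shuttle
utility_bike = 1.5 * df["has_bike"] + 0.8 * df["distance_km"]
utility_shuttle = 1.2 * df["distance_km"] + 0.5 * df["time_pressure"]
logits = np.column_stack([np.zeros(n), utility_bike, utility_shuttle])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
df["mode"] = [rng.choice(3, p=p) for p in probs]

X = sm.add_constant(df[["distance_km", "has_bike", "time_pressure"]])
model = sm.MNLogit(df["mode"], X).fit(disp=False)
print(model.summary())   # coefficients are log-odds relative to the base mode (walk)
```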

Contributors: Mirtich, Laura Christine (Author) / Salon, Deborah (Thesis director) / Fang, Kevin (Committee member) / School of Public Affairs (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In an effort to begin validating the large number of discovered candidate biomarkers, proteomics is beginning to shift from shotgun proteomic experiments towards targeted proteomic approaches that provide solutions to automation and economic concerns. Such approaches to validating biomarkers necessitate the mass spectrometric analysis of hundreds to thousands of human samples. As this takes place, a serendipitous opportunity has become evident: as one narrows the focus towards "single" protein targets (instead of entire proteomes) using pan-antibody-based enrichment techniques, a discovery science emerges, so to speak. This is due to the largely unknown context in which "single" proteins exist in blood (i.e., polymorphisms, transcript variants, and posttranslational modifications), and hence targeted proteomics has applications even for established biomarkers. Furthermore, besides protein heterogeneity accounting for interferences with conventional immunometric platforms, it is becoming evident that this formerly hidden dimension of structural information also contains rich pathobiological information. Consequently, targeted proteomics studies that aim to ascertain a protein's genuine presentation within disease-stratified populations, and that serve as a stepping-stone within a biomarker translational pipeline, are of clinical interest. Roughly 128 million Americans are pre-diabetic, diabetic, and/or have kidney disease, and public and private spending for treating these diseases runs into the hundreds of billions of dollars. In an effort to create new solutions for the early detection and management of these conditions, described herein is the design, development, and translation of mass spectrometric immunoassays targeted towards diabetes and kidney disease. Population proteomics experiments were performed for the following clinically relevant proteins: insulin, C-peptide, RANTES, and parathyroid hormone. At least thirty-eight protein isoforms were detected. Besides the numerous disease correlations encountered within the disease-stratified cohorts, certain isoforms also appeared to be causally related to the underlying pathophysiology and/or to have therapeutic implications. Technical advancements include multiplexed isoform quantification as well as a "dual-extraction" methodology for eliminating non-specific proteins while simultaneously validating isoforms. Industrial efforts towards widespread clinical adoption are also described. Consequently, this work lays a foundation for the translation of mass spectrometric immunoassays into the clinical arena and simultaneously presents the most recent advancements concerning the mass spectrometric immunoassay approach.
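As a generic illustration of the kind of isoform quantification mentioned above, the sketch below computes fractional isoform abundance from integrated peak areas in a synthetic spectrum; the m/z windows, peak shapes, and intensities are hypothetical and not drawn from the dissertation's assays.

```python
# Illustrative sketch: relative abundance of protein isoforms from mass-spectral peak areas.
# The m/z windows and intensities below are hypothetical, not taken from the dissertation.
import numpy as np

def fractional_abundance(mz, intensity, windows):
    """Integrate intensity in each isoform's m/z window and normalize to fractions."""
    areas = {}
    for name, (lo, hi) in windows.items():
        mask = (mz >= lo) & (mz <= hi)
        areas[name] = np.trapz(intensity[mask], mz[mask])
    total = sum(areas.values())
    return {name: area / total for name, area in areas.items()}

# Synthetic spectrum: two Gaussian peaks standing in for two isoforms of one protein.
mz = np.linspace(9000, 9200, 2000)
intensity = (1000 * np.exp(-0.5 * ((mz - 9060) / 3) ** 2)     # unmodified isoform
             + 400 * np.exp(-0.5 * ((mz - 9120) / 3) ** 2))    # modified isoform
windows = {"isoform_A": (9050, 9070), "isoform_B": (9110, 9130)}
print(fractional_abundance(mz, intensity, windows))   # roughly {'isoform_A': 0.71, 'isoform_B': 0.29}
```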
Contributors: Oran, Paul (Author) / Nelson, Randall (Thesis advisor) / Hayes, Mark (Thesis advisor) / Ros, Alexandra (Committee member) / Williams, Peter (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Facility managers have an important job in today's competitive business world: caring for the backbone of the corporation's capital. Maintaining assets and their supporting efforts causes facility managers to fight an uphill battle to prove the worth of their organizations. This thesis discusses the important and flexible use of measurement and leadership reports and the benefits of justifying the work required to maintain or upgrade a facility. The task is streamlined by assigning accountability to subject matter experts. The facility manager must trust in the ability of his or her workforce to get the job done. However, with accountability comes increased risk. Even though accountability may not eliminate the need for control or stop reactionary actions entirely, facility managers can develop key leadership-based reports to reassign accountability to, and measure, subject matter experts while simultaneously reducing the reactionary actions that lead to increased cost. Identifying risks that the facility manager does not control and reassigning them to subject matter experts is imperative for effective facility management leadership; it allows facility managers to create an accurate and solid facility management plan, supports the organization's succession plan, and allows the organization to focus on key competencies.
Contributors: Tellefsen, Thor (Author) / Sullivan, Kenneth (Thesis advisor) / Kashiwagi, Dean (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The dynamics of urban water use are characterized by spatial and temporal variability that is influenced by associated factors at different scales. It is therefore important to capture the relationship between urban water use and its determinants in a spatio-temporal framework in order to enhance the understanding and management of urban water demand. This dissertation aims to contribute to understanding the spatio-temporal relationships between single-family residential (SFR) water use and its determinants in a desert city. The dissertation comprises three distinct papers in support of this goal. In the first paper, I demonstrate that aggregated-scale data can be reliably used to study the relationship between SFR water use and its determinants without leading to significant ecological fallacy; this facilitates scientific inquiry into SFR water use, since aggregated-scale data are more widely available. The second paper advances understanding of the relationship between SFR water use and its associated factors by accounting for spatial and temporal dependence in a panel data setting. The third paper studies the historical contingency, spatial heterogeneity, and spatial connectivity in the relationship between SFR water use and its determinants by comparing three different regression models. This dissertation demonstrates the importance and necessity of incorporating spatio-temporal components, such as scale, dependence, and heterogeneity, into SFR water use research. Spatial statistical models should be used to understand the effects of associated factors on water use and to test the effectiveness of management policies, since spatial effects are likely to significantly influence the estimates if only non-spatial models are used. Urban water demand management should account for spatial heterogeneity when predicting future water demand in order to achieve more accurate estimates, and spatial statistical models provide a promising way to do so.
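To make the idea of accounting for spatial dependence concrete, the sketch below adds a spatially lagged water-use term built from a row-standardized weight matrix to a toy regression; the grid, data, and the naive least-squares fit are illustrative simplifications (a proper spatial lag model would be estimated by maximum likelihood or instrumental variables, e.g. via PySAL's spreg), not the dissertation's models or data.

```python
# Illustrative sketch: adding a spatially lagged term to a water-use regression.
# Synthetic data and a toy weight matrix; not the dissertation's models or data.
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # hypothetical census tracts on a 10 x 10 grid
coords = np.array([(i, j) for i in range(10) for j in range(10)])

# Row-standardized weights: neighbors are tracts within one grid step (including diagonals).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = ((d > 0) & (d < 1.5)).astype(float)
W /= W.sum(axis=1, keepdims=True)

lot_size = rng.normal(8, 2, n)            # hypothetical determinants
income = rng.normal(60, 15, n)
water_use = 5 + 1.2 * lot_size + 0.3 * income + rng.normal(0, 2, n)
water_use += 0.4 * W @ water_use          # inject spatial dependence (neighbors' use matters)

# Naive illustration: include the spatial lag W @ y as an extra regressor.
X = np.column_stack([np.ones(n), W @ water_use, lot_size, income])
beta, *_ = np.linalg.lstsq(X, water_use, rcond=None)
print(dict(zip(["intercept", "rho (spatial lag)", "lot_size", "income"], beta.round(3))))
```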
Contributors: Ouyang, Yun (Author) / Wentz, Elizabeth (Thesis advisor) / Ruddell, Benjamin (Thesis advisor) / Harlan, Sharon (Committee member) / Janssen, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Over the past couple of decades, quality has been an area of increased focus, and multiple models and approaches have been proposed to measure quality in the construction industry. This paper focuses on determining the quality of one type of roofing system used in the construction industry: sprayed polyurethane foam (SPF) roofs. Thirty-seven urethane-coated SPF roofs installed in 2005/2006 were visually inspected to measure the percentage of blisters and repairs three times, at the 4-, 6-, and 7-year marks. A repair criterion was established at the 6-year mark based on the data, and vulnerable roofs were reported to the contractors. Furthermore, the relationship between roof quality and four possible contributing time-of-installation factors, i.e., contractor, demographics, season, and difficulty (number of penetrations and size of the roof in square feet), was examined. Demographics and difficulty did not affect the quality of the roofs, whereas the contractor and the season in which the roof was installed did.
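A minimal sketch of the kind of factor comparison described above follows; the inspection records, column names, and the 2 percent repair threshold are hypothetical stand-ins, not the study's actual data or criteria.

```python
# Illustrative sketch: comparing roof blister percentages across installation factors.
# The records below are hypothetical; the study's actual inspection data are not reproduced here.
import pandas as pd

roofs = pd.DataFrame({
    "contractor": ["A", "A", "B", "B", "C", "C", "C", "A"],
    "season":     ["summer", "winter", "summer", "winter", "summer", "winter", "summer", "winter"],
    "sqft":       [12000, 8000, 15000, 9000, 20000, 11000, 7000, 13000],
    "blister_pct_7yr": [0.5, 2.1, 0.8, 3.4, 0.3, 1.9, 0.6, 2.8],
})

# Mean blister percentage by each candidate factor (contractor, season).
print(roofs.groupby("contractor")["blister_pct_7yr"].mean())
print(roofs.groupby("season")["blister_pct_7yr"].mean())

# Flag roofs exceeding a hypothetical repair threshold of 2% blistered area.
REPAIR_THRESHOLD = 2.0
print(roofs[roofs["blister_pct_7yr"] > REPAIR_THRESHOLD])
```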
Contributors: Gajjar, Dhaval (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler lidar (light detection and ranging). A detailed analysis of the optimal interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique results in a significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and the analysis increment calculation. It is observed that the modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and the vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and the vector retrieval was carried out. The error of representativeness as a function of scales of motion, and the sensitivity of the vector retrieval to look angle, were quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®) and other in-situ instrumentation are used to corroborate and complement these observations. The modified OI technique is also used to identify uncharacteristic and extreme flow events at a wind energy development site. Estimates of turbulence and shear from this technique are compared to tower measurements. A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine the wind energy content in the presence of turbulence.
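For orientation, a generic optimal interpolation analysis step has the form x_a = x_b + K(y - H x_b) with gain K = B H^T (H B H^T + R)^{-1}; the sketch below applies that textbook update to a toy one-dimensional wind profile, and the grid, covariances, and observation operator are illustrative assumptions rather than the thesis's modified OI scheme.

```python
# Textbook optimal interpolation (OI) analysis step on a toy 1-D wind profile.
# Background/observation covariances and the observation operator are illustrative,
# not the modified OI scheme developed in the thesis.
import numpy as np

n = 20                                    # analysis grid points (e.g., range gates)
z = np.linspace(0.0, 1.0, n)

# Background (prior) wind estimate and its error covariance B (Gaussian correlation).
x_b = 5.0 + 2.0 * z                       # first guess: linear shear profile
L = 0.2                                   # correlation length scale
B = 1.0 * np.exp(-((z[:, None] - z[None, :]) ** 2) / (2 * L**2))

# Observations at a few grid points, with uncorrelated error covariance R.
obs_idx = np.array([3, 9, 15])
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0
R = 0.25 * np.eye(len(obs_idx))
y = np.array([5.9, 6.8, 7.9])             # hypothetical observed wind values

# OI update: x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1
innovation = y - H @ x_b
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ innovation
print(np.round(x_a, 2))
```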
Contributors: Choukulkar, Aditya (Author) / Calhoun, Ronald (Thesis advisor) / Mahalov, Alex (Committee member) / Kostelich, Eric (Committee member) / Huang, Huei-Ping (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require the extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has been one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is by deploying stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful test for decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments on the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and the ML estimation is evaluated with Monte Carlo simulation, which shows promising results. An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from the current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular disease and is shown to perform better than a linear discriminant analysis based classifier.
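To illustrate the detection framework mentioned above, the sketch below implements a Neyman-Pearson (matched-filter) detector for a known peak shape in white Gaussian noise; the Gaussian peak template, noise variance, and false-alarm rate are assumptions for the example, and the dissertation's ESI-TOF signal and noise models are considerably more detailed.

```python
# Illustrative Neyman-Pearson (matched-filter) peak detector for a known peak shape
# in white Gaussian noise. The peak template, noise variance, and false-alarm rate are
# assumptions for this sketch, not the dissertation's ESI-TOF models.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200
t = np.arange(n)
template = np.exp(-0.5 * ((t - 100) / 4.0) ** 2)   # known peak shape s[t]
sigma = 1.0                                        # noise standard deviation

def lrt_statistic(x, s):
    """Sufficient statistic of the LRT for H1: x = A*s + w vs H0: x = w, with w ~ N(0, sigma^2 I), A > 0."""
    return s @ x

# Threshold for a target false-alarm probability under H0, where the statistic ~ N(0, sigma^2 ||s||^2).
p_fa = 1e-3
threshold = sigma * np.linalg.norm(template) * norm.ppf(1 - p_fa)

# Simulate one noise-only record and one record containing a peak of amplitude 3.
x_h0 = sigma * rng.standard_normal(n)
x_h1 = 3.0 * template + sigma * rng.standard_normal(n)
print("H0 record flagged as peak:", lrt_statistic(x_h0, template) > threshold)   # expected: False
print("H1 record flagged as peak:", lrt_statistic(x_h1, template) > threshold)   # expected: True
```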
Contributors: Buddi, Sai (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Nelson, Randall (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012