Matching Items (70)
Description
Nonlinear dispersive equations model nonlinear waves in a wide range of physical and mathematical contexts. They reinforce or dissipate the effects of linear dispersion and nonlinear interactions, and thus may be of a focusing or defocusing nature. The nonlinear Schrödinger equation, or NLS, is an example of such equations. It appears as a model in hydrodynamics, nonlinear optics, quantum condensates, heat pulses in solids and various other nonlinear instability phenomena. In mathematics, one of the interests is to look at wave interaction: waves propagating with different speeds and/or in different directions produce either small perturbations comparable with linear behavior, or create solitary waves, or even lead to singular solutions. This dissertation studies the global behavior of finite-energy solutions to the $d$-dimensional focusing NLS equation, $i\partial_t u + \Delta u + |u|^{p-1}u = 0$, with initial data $u_0 \in H^1$, $x \in \mathbb{R}^d$; the nonlinearity power $p$ and the dimension $d$ are chosen so that the scaling index $s = \frac{d}{2} - \frac{2}{p-1}$ is between 0 and 1; thus, the NLS is mass-supercritical $(s>0)$ and energy-subcritical $(s<1)$. For solutions with $ME[u_0]<1$ ($ME[u_0]$ stands for an invariant, conserved quantity in terms of the mass and energy of $u_0$), a sharp threshold for scattering and blowup is given. Namely, if the renormalized gradient $g_u$ of a solution $u$ to NLS is initially less than 1, i.e., $g_u(0)<1$, then the solution exists globally in time and scatters in $H^1$ (approaches some linear Schrödinger evolution as $t \to \pm\infty$); if the renormalized gradient $g_u(0)>1$, then the solution exhibits blowup behavior, that is, either a finite-time blowup occurs, or the $H^1$ norm diverges in infinite time.
This work generalizes the results for the 3d cubic NLS obtained in a series of papers by Holmer-Roudenko and Duyckaerts-Holmer-Roudenko. The key ingredients, the concentration compactness and localized variance arguments, were developed in the context of the energy-critical NLS and nonlinear wave equations by Kenig and Merle. One of the difficulties is the fractional powers of the nonlinearities, which are overcome by considering Besov-Strichartz estimates and various fractional differentiation rules.
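As a hedged summary of the problem and dichotomy described above (notation simplified; the precise renormalizations of $g_u$ and $ME[u_0]$ in terms of the ground state are given in the dissertation itself):

```latex
% Focusing, mass-supercritical, energy-subcritical NLS:
i\,\partial_t u + \Delta u + |u|^{p-1}u = 0, \qquad u(0) = u_0 \in H^1(\mathbb{R}^d),
\qquad 0 < s = \tfrac{d}{2} - \tfrac{2}{p-1} < 1.
% Sharp threshold under the mass-energy constraint ME[u_0] < 1:
g_u(0) < 1 \;\Longrightarrow\; u \text{ exists globally and scatters in } H^1;
\qquad
g_u(0) > 1 \;\Longrightarrow\; \text{finite-time blowup, or } \|u(t)\|_{H^1} \to \infty.
```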
Contributors: Guevara, Cristi Darley (Author) / Roudenko, Svetlana (Thesis advisor) / Castillo-Chavez, Carlos (Committee member) / Jones, Donald (Committee member) / Mahalov, Alex (Committee member) / Suslov, Sergei (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Surveys have shown that several hundred billion weather forecasts are obtained by the United States public each year, and that weather news is one of the most consumed topics in the media. This indicates that the forecast provides information that is significant to the public, and that the public utilizes details associated with it to inform aspects of their lives. Phoenix, Arizona is a dry desert region that experiences a monsoon season and extreme heat. How, then, does the weather forecast influence the way Phoenix residents make decisions? This paper aims to draw connections between the weather forecast, decision making, and people who live in a desert environment. To do this, a ten-minute survey targeting 379 respondents was deployed through Amazon Mechanical Turk (MTurk). The survey asks 45 multiple-choice and ranking questions categorized into four sections: obtainment of the forecast, forecast variables of interest, informed decision making based on unique weather variables, and demographics. This research illuminates how residents of the Phoenix metropolitan area use the local weather forecast for decision making on daily activities, and the main meteorological factors that drive those decisions.

Contributors: Marturano, Julia (Author) / Middel, Ariane (Thesis director) / Schneider, Florian (Committee member) / School of Geographical Sciences and Urban Planning (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In an effort to begin validating the large number of discovered candidate biomarkers, proteomics is beginning to shift from shotgun proteomic experiments towards targeted proteomic approaches that provide solutions to automation and economic concerns. Such approaches to validate biomarkers necessitate the mass spectrometric analysis of hundreds to thousands of human samples. As this takes place, a serendipitous opportunity has become evident. Because one narrows the focus towards "single" protein targets (instead of entire proteomes) using pan-antibody-based enrichment techniques, a discovery science has emerged, so to speak. This is due to the largely unknown context in which "single" proteins exist in blood (i.e., polymorphisms, transcript variants, and posttranslational modifications); hence, targeted proteomics has applications even for established biomarkers. Furthermore, besides protein heterogeneity accounting for interferences with conventional immunometric platforms, it is becoming evident that this formerly hidden dimension of structural information also contains rich pathobiological information. Consequently, targeted proteomics studies that aim to ascertain a protein's genuine presentation within disease-stratified populations and serve as a stepping-stone within a biomarker translational pipeline are of clinical interest. Roughly 128 million Americans are pre-diabetic, diabetic, and/or have kidney disease, and public and private spending for treating these diseases runs to hundreds of billions of dollars. In an effort to create new solutions for the early detection and management of these conditions, described herein is the design, development, and translation of mass spectrometric immunoassays targeted towards diabetes and kidney disease. Population proteomics experiments were performed for the following clinically relevant proteins: insulin, C-peptide, RANTES, and parathyroid hormone. At least thirty-eight protein isoforms were detected.
Besides the numerous disease correlations confronted within the disease-stratified cohorts, certain isoforms also appeared to be causally related to the underlying pathophysiology and/or have therapeutic implications. Technical advancements include multiplexed isoform quantification as well as a "dual-extraction" methodology for eliminating non-specific proteins while simultaneously validating isoforms. Industrial efforts towards widespread clinical adoption are also described. Consequently, this work lays a foundation for the translation of mass spectrometric immunoassays into the clinical arena and simultaneously presents the most recent advancements concerning the mass spectrometric immunoassay approach.
Contributors: Oran, Paul (Author) / Nelson, Randall (Thesis advisor) / Hayes, Mark (Thesis advisor) / Ros, Alexandra (Committee member) / Williams, Peter (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler LIDAR (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique results in significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and analysis increment calculation. It is observed that the modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and vector retrieval was carried out. The error of representativeness as a function of scales of motion and the sensitivity of vector retrieval to look angle are quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®) and other in situ instruments are used to corroborate and complement these observations. The modified OI technique is also used to identify uncharacteristic and extreme flow events at a wind development site. Estimates of turbulence and shear from this technique are compared to tower measurements.
A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine wind energy content in the presence of turbulence.
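The data-assimilation core of such a retrieval can be sketched as the standard OI analysis update (this is the textbook form, not the thesis's modified covariance partitioning, and the values below are hypothetical toy numbers):

```python
import numpy as np

def oi_analysis(xb, y, H, B, R):
    """Standard optimal interpolation (OI) analysis update:
    x_a = x_b + K (y - H x_b),  with gain  K = B H^T (H B H^T + R)^{-1}."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy example: two radial-velocity observations of a two-component wind vector.
xb = np.array([2.0, 0.0])               # background (u, v) guess, m/s
H = np.array([[1.0, 0.0],
              [0.6, 0.8]])              # unit look-direction projections
y = H @ np.array([3.0, 1.0])            # "true" wind observed without noise
B = np.eye(2)                           # background error covariance
R = 1e-6 * np.eye(2)                    # tiny observation error covariance
xa = oi_analysis(xb, y, H, B, R)        # analysis is pulled to the true wind
```

With near-perfect observations (tiny R) and two independent look directions, the analysis recovers the true wind vector; inflating R instead pulls the analysis back toward the background, which is the knob the covariance modifications in the thesis tune.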
Contributors: Choukulkar, Aditya (Author) / Calhoun, Ronald (Thesis advisor) / Mahalov, Alex (Committee member) / Kostelich, Eric (Committee member) / Huang, Huei-Ping (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is the mass spectrometric immunoassay (MSIA), which has been one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is by deploying stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful test for decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments in the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and ML estimation is evaluated with Monte Carlo simulation, which shows promising results.
An application of these methods to fractional abundance calculation for biomarker analysis is proposed, which is mathematically robust and fundamentally different from the current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-class classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular disease, and is shown to perform better than a linear discriminant analysis based classifier.
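The LRT-based peak detection idea described above can be sketched in a few lines: with a known peak template in white Gaussian noise, the NP-optimal statistic reduces to a matched filter compared against a threshold, and the ML amplitude estimate is a projection onto the template. The Gaussian template and noise level below are hypothetical stand-ins for the physics-derived signal and noise models developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian peak template on a short spectral window.
t = np.arange(64)
s = np.exp(-0.5 * ((t - 32) / 3.0) ** 2)

def lrt_statistic(x, s, sigma2):
    """For H1: x = a*s + n vs H0: x = n, with n ~ N(0, sigma2*I), the
    log-likelihood ratio is monotone in the matched-filter output s^T x;
    this normalization makes the statistic N(0, 1) under H0."""
    return (s @ x) / np.sqrt(sigma2 * (s @ s))

sigma2 = 0.04
noise_only = rng.normal(0.0, np.sqrt(sigma2), size=64)
with_peak = s + rng.normal(0.0, np.sqrt(sigma2), size=64)

T0 = lrt_statistic(noise_only, s, sigma2)   # near 0: no detection
T1 = lrt_statistic(with_peak, s, sigma2)    # large: peak detected
threshold = 3.0  # chosen to fix the false-alarm rate (Neyman-Pearson)

# ML estimate of the peak amplitude (true value is 1 in this toy example):
a_hat = (s @ with_peak) / (s @ s)
```

Setting the threshold controls the false-alarm probability directly, which is the practical advantage of a stochastic-model detector over heuristic peak pickers.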
Contributors: Buddi, Sai (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Nelson, Randall (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Cancer claims hundreds of thousands of lives every year in the US alone. Finding ways for early detection of cancer onset is crucial for better management and treatment of cancer. Thus, biomarkers, especially protein biomarkers, being the functional units which reflect dynamic physiological changes, need to be discovered. Though important, there are only a few approved protein cancer biomarkers to date. To accelerate this process, fast, comprehensive, and affordable assays are required which can be applied to large population studies. For this, these assays should be able to comprehensively characterize and explore the molecular diversity of nominally "single" proteins across populations. This information is usually unavailable with commonly used immunoassays such as ELISA (enzyme-linked immunosorbent assay), which either ignore protein microheterogeneity or are confounded by it. To this end, mass spectrometric immunoassays (MSIA) for three different human plasma proteins have been developed. These proteins, viz. IGF-1, hemopexin, and tetranectin, have been reported in the literature to show correlations with many diseases as well as several carcinomas. The developed assays were used to extract the intact proteins from plasma samples, which were subsequently analyzed on mass spectrometric platforms. Matrix-assisted laser desorption ionization (MALDI) and electrospray ionization (ESI) mass spectrometric techniques were used due to their availability and suitability for the analysis. This revealed different structural forms of these proteins, exposing a structural microheterogeneity that is invisible to commonly used immunoassays. These assays are fast and comprehensive, and can be applied in large sample studies to analyze proteins for biomarker discovery.
Contributors: Rai, Samita (Author) / Nelson, Randall (Thesis advisor) / Hayes, Mark (Thesis advisor) / Borges, Chad (Committee member) / Ros, Alexandra (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
It is possible in a properly controlled environment, such as industrial metrology, to make significant headway into the non-industrial constraints on image-based position measurement using the techniques of image registration, and to achieve repeatable feature measurements on the order of 0.3% of a pixel, about an order of magnitude improvement on conventional real-world performance. These measurements are then used as inputs for a model-optimal, model-agnostic smoothing for calibration of a laser scribe and online tracking of a velocimeter using video input. Using appropriate smooth interpolation to increase effective sample density can reduce uncertainty and improve estimates. Applying the proper negative offset to the template function creates a convolution with higher local curvature than either the template or the target function, which allows improved center-finding. Using the Akaike Information Criterion with a smoothing spline function, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model, and to determine the function describing the uncertainty in that optimal smooth. An example of empirical derivation of the parameters for a rudimentary Kalman filter from this is then provided and tested. Using the techniques of Exploratory Data Analysis and the "Formulize" genetic algorithm tool to convert the spline models into more accessible analytic forms resulted in a stable, properly generalized Kalman filter whose performance and simplicity exceed "textbook" implementations. Validation shows that, in the analytic case, the method leads to arbitrary precision in measurement of the feature; in a reasonable test case using the proposed methods, a consistent maximum error of around 0.3% of the length of a pixel was achieved; and in practice, using pixels that were 700 nm in size, feature position was located to within ±2 nm.
Robust applicability is demonstrated by the measurement of indicator position for a King model 2-32-G-042 rotameter.
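The subpixel center-finding idea underlying this kind of registration can be illustrated with a minimal sketch (this is a generic three-point parabolic refinement on a hypothetical Gaussian line profile, not the thesis's negative-offset template or AIC-spline method):

```python
import numpy as np

def subpixel_peak(y):
    """Refine the integer argmax of a smooth 1-D profile by fitting a
    parabola through the peak sample and its two neighbors."""
    i = int(np.argmax(y))
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex offset in (-0.5, 0.5)
    return i + delta

# Toy example: a Gaussian line profile with a known fractional center.
x = np.arange(100, dtype=float)
true_center = 50.3
profile = np.exp(-0.5 * ((x - true_center) / 4.0) ** 2)

# Fitting the parabola to log-intensity makes the fit exact for a Gaussian,
# since the log of a Gaussian is itself a quadratic.
est = subpixel_peak(np.log(profile))
```

For real, noisy images the achievable precision depends on noise and on how well the local model matches the peak shape, which is why the thesis resorts to model-agnostic spline smoothing rather than assuming a functional form.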
Contributors: Munroe, Michael R (Author) / Phelan, Patrick (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This study investigates the impact of urban form and landscaping type on the mid-afternoon microclimate in semi-arid Phoenix, Arizona. The goal is to find effective urban form and design strategies to ameliorate temperatures during the summer months. We simulated near-ground air temperatures for typical residential neighborhoods in Phoenix using the three-dimensional microclimate model ENVI-met. The model was validated using weather observations from the North Desert Village (NDV) landscape experiment, located on Arizona State University's Polytechnic campus. The NDV is an ideal site to determine the model's input parameters, since it is a controlled environment recreating three prevailing residential landscape types in the Phoenix metropolitan area (mesic, oasis, and xeric).

After validation, we designed five neighborhoods with different urban forms that represent a realistic cross-section of typical residential neighborhoods in Phoenix. The scenarios follow the Local Climate Zone (LCZ) classification scheme after Stewart and Oke. We then combined the neighborhoods with three landscape designs and, using ENVI-met, simulated microclimate conditions for these neighborhoods for a typical summer day. Results were analyzed in terms of mid-afternoon air temperature distribution and variation, ventilation, surface temperatures, and shading. Findings show that advection is important for the distribution of within-design temperatures and that spatial differences in cooling are strongly related to solar radiation and local shading patterns. In mid-afternoon, dense urban forms can create local cool islands. Our approach suggests that the LCZ concept is useful for planning and design purposes.

Contributors: Middel, Ariane (Author) / Hab, Kathrin (Author) / Brazel, Anthony J. (Author) / Martin, Chris A. (Author) / Guhathakurta, Subhrajit (Author)
Created: 2013-12-01
Description

The City of Phoenix (Arizona, USA) developed a Tree and Shade Master Plan and a Cool Roofs initiative to ameliorate extreme heat during the summer months in their arid city. This study investigates the impact of the City's heat mitigation strategies on daytime microclimate for a pre-monsoon summer day under current climate conditions and two climate change scenarios. We assessed the cooling effect of trees and cool roofs in a Phoenix residential neighborhood using the microclimate model ENVI-met. First, using xeric landscaping as a base, we created eight tree planting scenarios (from 0% canopy cover to 30% canopy cover) for the neighborhood to characterize the relationship between canopy cover and daytime cooling benefit of trees. In a second set of simulations, we ran ENVI-met for nine combined tree planting and landscaping scenarios (mesic, oasis, and xeric) with regular roofs and cool roofs under current climate conditions and two climate change projections. For each of the 54 scenarios, we compared average neighborhood mid-afternoon air temperatures and assessed the benefits of each heat mitigation measure under current and projected climate conditions. Findings suggest that the relationship between percent canopy cover and air temperature reduction is linear, with 0.14 °C cooling per percent increase in tree cover for the neighborhood under investigation. An increase in tree canopy cover from the current 10% to a targeted 25% resulted in an average daytime cooling benefit of up to 2.0 °C in residential neighborhoods at the local scale. Cool roofs reduced neighborhood air temperatures by 0.3 °C when implemented on residential homes. The results from this city-specific mitigation project will inform messaging campaigns aimed at engaging the city decision makers, industry, and the public in the green building and urban forestry initiatives.
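The reported linear canopy-cooling relationship can be written as a one-line model. This is a simplification of the ENVI-met simulation results, and the 0.14 °C-per-percent slope is specific to the neighborhood studied; note that linear extrapolation of the 10% to 25% scenario gives roughly 2.1 °C, consistent with the reported benefit of up to 2.0 °C.

```python
# Reported slope for the study neighborhood: each additional percent of tree
# canopy cover lowers mid-afternoon air temperature by about 0.14 degrees C.
COOLING_PER_PERCENT = 0.14

def cooling_benefit(current_cover_pct, target_cover_pct):
    """Estimated air-temperature reduction (degrees C) from increasing
    canopy cover, assuming the linear relationship holds over this range."""
    return COOLING_PER_PERCENT * (target_cover_pct - current_cover_pct)

benefit = cooling_benefit(10, 25)  # the paper's 10% -> 25% canopy scenario
```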

Contributors: Middel, Ariane (Author) / Chhetri, Nalini (Author) / Quay, Raymond (Author)
Created: 2015
Description

This study addresses a classic sustainability challenge—the tradeoff between water conservation and temperature amelioration in rapidly growing cities, using Phoenix, Arizona and Portland, Oregon as case studies. An urban energy balance model—LUMPS (Local-Scale Urban Meteorological Parameterization Scheme)—is used to represent the tradeoff between outdoor water use and nighttime cooling during hot, dry summer months. Tradeoffs were characterized under three scenarios of land use change and three climate-change assumptions. Decreasing vegetation density reduced outdoor water use but sacrificed nighttime cooling. Increasing vegetated surfaces accelerated nighttime cooling, but increased outdoor water use by ~20%. Replacing impervious surfaces with buildings achieved similar improvements in nighttime cooling with minimal increases in outdoor water use; it was the most water-efficient cooling strategy. The fact that nighttime cooling rates and outdoor water use were more sensitive to land use scenarios than climate-change simulations suggested that cities can adapt to a warmer climate by manipulating land use.

Contributors: Gober, Patricia (Author) / Middel, Ariane (Author) / Brazel, Anthony J. (Author) / Myint, Soe (Author) / Chang, Heejun (Author) / Duh, Jiunn-Der (Author) / House-Peters, Lily (Author)
Created: 2013-05-16