Description
Nonlinear dispersive equations model nonlinear waves in a wide range of physical and mathematical contexts. They reinforce or dissipate effects of linear dispersion and nonlinear interactions, and thus may be of a focusing or defocusing nature. The nonlinear Schrödinger equation, or NLS, is an example of such equations. It appears as a model in hydrodynamics, nonlinear optics, quantum condensates, heat pulses in solids and various other nonlinear instability phenomena. In mathematics, one of the interests is to look at wave interaction: propagation of waves with different speeds and/or different directions produces either small perturbations comparable with linear behavior, or creates solitary waves, or even leads to singular solutions. This dissertation studies the global behavior of finite energy solutions to the $d$-dimensional focusing NLS equation, $i\partial_t u + \Delta u + |u|^{p-1}u = 0,$ with initial data $u_0 \in H^1$, $x \in \mathbb{R}^d$; the nonlinearity power $p$ and the dimension $d$ are chosen so that the scaling index $s = \frac{d}{2} - \frac{2}{p-1}$ is between 0 and 1, thus the NLS is mass-supercritical ($s > 0$) and energy-subcritical ($s < 1$). For solutions with $ME[u_0] < 1$ ($ME[u_0]$ stands for an invariant and conserved quantity in terms of the mass and energy of $u_0$), a sharp threshold for scattering and blowup is given. Namely, if the renormalized gradient $g_u$ of a solution $u$ to NLS is initially less than 1, i.e., $g_u(0) < 1$, then the solution exists globally in time and scatters in $H^1$ (approaches some linear Schrödinger evolution as $t \to \pm\infty$); if the renormalized gradient $g_u(0) > 1$, then the solution exhibits blowup behavior, that is, either a finite time blowup occurs, or there is a divergence of the $H^1$ norm in infinite time. This work generalizes the results for the 3d cubic NLS obtained in a series of papers by Holmer-Roudenko and Duyckaerts-Holmer-Roudenko, with the key ingredients, the concentration compactness and localized variance, developed in the context of the energy-critical NLS and nonlinear wave equations by Kenig and Merle. One of the difficulties is the fractional power of the nonlinearity, which is overcome by considering Besov-Strichartz estimates and various fractional differentiation rules.
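
In the notation of the abstract, the main result can be restated compactly as follows (a summary of the dichotomy described above, not additional content from the dissertation):

\[
i\,\partial_t u + \Delta u + |u|^{p-1}u = 0, \qquad s = \frac{d}{2} - \frac{2}{p-1} \in (0,1),
\]
\[
ME[u_0] < 1: \quad
\begin{cases}
g_u(0) < 1 & \Longrightarrow\ u \text{ exists globally and scatters in } H^1,\\
g_u(0) > 1 & \Longrightarrow\ u \text{ blows up in finite time or } \|u(t)\|_{H^1} \text{ diverges in infinite time.}
\end{cases}
\]
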
Contributors: Guevara, Cristi Darley (Author) / Roudenko, Svetlana (Thesis advisor) / Castillo-Chavez, Carlos (Committee member) / Jones, Donald (Committee member) / Mahalov, Alex (Committee member) / Suslov, Sergei (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler LIDAR (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique results in significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance portioning, covariance binning, and analysis increment calculation. It is observed that the modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and vector retrieval is carried out. The error of representativeness as a function of scales of motion, and the sensitivity of vector retrieval to look angle, are quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA, was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is also used to identify uncharacteristic and extreme flow events at a wind development site. Estimates of turbulence and shear from this technique are compared to tower measurements. A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine wind energy content in the presence of turbulence.
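
For readers unfamiliar with Optimal Interpolation, a minimal sketch of the standard OI analysis step that the thesis modifies is given below; the function name and the matrices are illustrative placeholders, not the thesis's actual configuration.

import numpy as np

def oi_analysis(x_b, y, H, B, R):
    """One standard Optimal Interpolation update: x_a = x_b + K (y - H x_b).

    x_b : background state, e.g. a gridded wind field estimate (n,)
    y   : observations, e.g. lidar radial velocities (m,)
    H   : observation operator mapping state to observation space (m, n)
    B   : background error covariance (n, n)
    R   : observation error covariance (m, m)
    """
    innovation = y - H @ x_b                 # observation-minus-background residual
    S = H @ B @ H.T + R                      # innovation covariance
    # Analysis increment K @ innovation, with gain K = B H^T S^{-1}
    increment = B @ H.T @ np.linalg.solve(S, innovation)
    return x_b + increment

The modifications described above (innovation covariance portioning, covariance binning, and a changed analysis increment calculation) would alter how S and the increment are computed in such a routine.
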
Contributors: Choukulkar, Aditya (Author) / Calhoun, Ronald (Thesis advisor) / Mahalov, Alex (Committee member) / Kostelich, Eric (Committee member) / Huang, Huei-Ping (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
It is possible in a properly controlled environment, such as industrial metrology, to make significant headway against the constraints on image-based position measurement using the techniques of image registration, and to achieve repeatable feature measurements on the order of 0.3% of a pixel, about an order of magnitude improvement on conventional real-world performance. These measurements are then used as inputs for a model-optimal, model-agnostic smoothing applied to calibration of a laser scribe and to online tracking of a velocimeter using video input. Using appropriate smooth interpolation to increase effective sample density can reduce uncertainty and improve estimates. Using the proper negative offset of the template function creates a convolution with higher local curvature than either the template or the target function, which allows improved center-finding. Using the Akaike Information Criterion with a smoothing spline function, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model, and to determine the function describing the uncertainty in that optimal smooth. An example of empirical derivation of the parameters for a rudimentary Kalman filter from this is then provided and tested. Using the techniques of Exploratory Data Analysis and the "Formulize" genetic algorithm tool to convert the spline models into more accessible analytic forms resulted in a stable, properly generalized Kalman filter with performance and simplicity exceeding "textbook" implementations. Validation shows that, in the analytic case, the method leads to arbitrary precision in measurement of the feature; in a reasonable test case using the methods proposed, a consistent maximum error of around 0.3% of the length of a pixel was achieved; and in practice, using pixels that were 700 nm in size, feature position was located to within ±2 nm. Robust applicability is demonstrated by the measurement of indicator position for a King model 2-32-G-042 rotameter.
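
As an illustration of sub-pixel center-finding of the kind discussed above, the sketch below refines an integer correlation peak with a three-point parabolic fit; it is a common textbook estimator standing in for the thesis's method, and all names are hypothetical.

import numpy as np

def subpixel_peak(signal, template):
    """Locate a template in a 1-D signal to sub-pixel precision.

    Cross-correlate, find the integer-pixel peak, then refine it with the
    vertex of a parabola fit through the three samples around the peak.
    """
    corr = np.correlate(signal, template, mode="valid")
    i = int(np.argmax(corr))                             # integer-pixel peak location
    if 0 < i < len(corr) - 1:
        y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabola vertex offset
        return i + delta                                 # sub-pixel position
    return float(i)
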
Contributors: Munroe, Michael R (Author) / Phelan, Patrick (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis explores and explains a stochastic model in Evolutionary Game Theory introduced by Dr. Nicolas Lanchier. The model is a continuous-time Markov chain that maps the two-dimensional lattice into the strategy space {1,2}. At every vertex in the grid there is exactly one player whose payoff is determined by its strategy and the strategies of its neighbors. Update times are exponential random variables with parameters equal to the absolute value of the respective cells' payoffs. The model is connected to an ordinary differential equation known as the replicator equation. This differential equation is analyzed to find its fixed points and stability. Then, by simulating the model using Java code and observing the change in dynamics which result from varying the parameters of the payoff matrix, the stochastic model's phase diagram is compared to the replicator equation's phase diagram to see what effect local interactions and stochastic update times have on the evolutionary stability of strategies. It is revealed that in the stochastic model altruistic strategies can be evolutionarily stable, and selfish strategies are only evolutionarily stable if they are more selfish than their opposing strategy. This contrasts with the replicator equation where selfishness is always evolutionarily stable and altruism never is.
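
For a general two-strategy game with payoff matrix entries $a_{ij}$ (the payoff to a player using strategy $i$ against strategy $j$), the replicator equation mentioned above reduces, writing $x$ for the fraction of players using strategy 1, to the standard one-dimensional form (textbook notation, which may differ from the thesis's):

\[
\dot{x} = x(1 - x)\big[(a_{11} - a_{21})\,x + (a_{12} - a_{22})(1 - x)\big],
\]

with fixed points at $x = 0$, $x = 1$ and, when it lies in $(0,1)$, the interior point $x^* = (a_{22} - a_{12})/(a_{11} - a_{12} - a_{21} + a_{22})$, whose stability determines which strategies are evolutionarily stable.
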
Contributors: Wehn, Austin Brent (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Motsch, Sebastien (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2013-12
Description
There are multiple mathematical models for alignment of individuals moving within a group. In a first class of models, individuals tend to relax their velocity toward the average velocity of other nearby neighbors. These models are motivated by the flocking behavior exhibited by birds. Another class of models has been introduced to describe rapid changes of individual velocity, referred to as jumps, which better describe the behavior of smaller agents (e.g. locusts, ants). In this second class of models, individuals randomly choose to align with another nearby individual, matching velocities. There are several open questions concerning these two types of behavior: which behavior is the most efficient for creating a flock (i.e. for converging toward the same velocity)? Will flocking still emerge when the number of individuals approaches infinity? Analysis of these models shows that, in the homogeneous case where all individuals are capable of interacting with each other, the variance of the velocities in both the jump model and the relaxation model decays to 0 exponentially for any nonzero number of individuals. This implies the individuals in the system converge to an absorbing state where all individuals share the same velocity; therefore individuals converge to a flock even as the number of individuals approaches infinity. Further analysis focused on the case where interactions between individuals are determined by an adjacency matrix. The second eigenvalue of the Laplacian of this adjacency matrix (denoted λ2) provides a lower bound on the rate of decay of the variance. When λ2 is nonzero, the system is said to converge to a flock almost surely. Furthermore, when the adjacency matrix is generated by a random graph, such that connections between individuals are formed with probability p (where 0 < p < 1), the system converges to a flock almost surely when p is above a threshold of order 1/N. λ2 is a good estimator of the rate of convergence of the system, in comparison to the value of p used to generate the adjacency matrix.
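
Below is a minimal sketch of the quantity at the center of this analysis, the second-smallest Laplacian eigenvalue λ2 (the algebraic connectivity), computed here for an illustrative Erdős–Rényi random graph; N, p and the seed are arbitrary choices, not values from the thesis.

import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue (lambda_2) of the graph Laplacian L = D - A.

    lambda_2 > 0 exactly when the interaction graph is connected; per the
    discussion above it lower-bounds the decay rate of the velocity variance.
    """
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(laplacian))[1]

# Illustrative Erdos-Renyi graph G(N, p): each edge present with probability p.
rng = np.random.default_rng(0)
N, p = 50, 0.1
upper = np.triu(rng.random((N, N)) < p, k=1)
adj = (upper | upper.T).astype(float)           # symmetric adjacency, zero diagonal
print("lambda_2 =", algebraic_connectivity(adj))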

Contributors: Trent, Austin L. (Author) / Motsch, Sebastien (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Productivity in the construction industry is an essential measure of production efficiency and economic progress, quantified by craft laborers' time spent directly adding value to a project. In order to better understand craft labor productivity as an aspect of lean construction, an activity analysis was conducted at the Arizona State University Palo Verde Main engineering dormitory construction site in December of 2016. The objective of this analysis of craft labor productivity in construction projects was to gather data regarding the efficiency of craft labor workers, draw conclusions about the effects of time of day and other site-specific factors on labor productivity, and suggest improvements to implement in the construction process. The analysis suggests that supporting tasks, such as traveling or materials handling, constitute the majority of craft laborers' efforts on the job site, with the highest percentages occurring at the beginning and end of the work day. Direct work and delays were approximately equal at about 20% each hour, with the highest peak occurring at lunchtime between 10:00 am and 11:00 am. The top suggestion to improve construction productivity would be to perform an extensive site utilization analysis, given the confined nature of this job site. Despite the limitations of an activity analysis in providing a complete perspective of all the factors that can affect craft labor productivity, as well as the small number of days of data acquisition, this analysis provides a basic overview of the productivity at the Palo Verde Main construction site. Through this research, construction managers can more effectively generate site plans and schedules to increase labor productivity.
Contributors: Ford, Emily Lucile (Author) / Grau, David (Thesis director) / Chong, Oswald (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Cancer modeling has attracted a lot of attention in recent years. It has proven to be a difficult task to model the behavior of cancer cells, since little is known about the "rules" a cell follows. Existing models for cancer cells can be generalized into two categories: macroscopic models, which study the tumor structure as a whole, and microscopic models, which focus on the behavior of individual cells. Both modeling strategies strive toward the same goal of creating a model that can be validated with experimental data and is reliable for predicting tumor growth. In order to achieve this goal, models must be developed based on certain rules that tumor structures follow. This paper introduces how such rules can be implemented in a mathematical model, with the example of individual-based modeling.
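
As an illustration of the individual-based modeling mentioned above, the sketch below grows a tumor from a single seed cell on a 2-D lattice using one simple rule; the rule and all parameters are invented for illustration and are not the model of the referenced work.

import numpy as np

def grow_tumor(steps=50, size=101, p_divide=0.3, seed=0):
    """Minimal individual-based tumor growth model on a 2-D lattice.

    At each step, every occupied cell attempts, with probability p_divide,
    to place a daughter cell into a randomly chosen adjacent empty site.
    """
    rng = np.random.default_rng(seed)
    moves = ((-1, 0), (1, 0), (0, -1), (0, 1))
    grid = np.zeros((size, size), dtype=bool)
    grid[size // 2, size // 2] = True             # single seed cell at the center
    for _ in range(steps):
        for x, y in np.argwhere(grid):
            if rng.random() < p_divide:
                dx, dy = moves[rng.integers(4)]
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size and not grid[nx, ny]:
                    grid[nx, ny] = True           # daughter cell occupies the site
    return grid

print("tumor size after 50 steps:", int(grow_tumor().sum()))
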
Contributors: Han, Zimo (Author) / Motsch, Sebastien (Thesis director) / Moustaoui, Mohamed (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The overall energy consumption around the United States has not been reduced even with the advancement of technology over the past decades. Deficiencies exist between design and actual energy performance. Energy Infrastructure Systems (EIS) are impacted when the amount of energy production cannot be accurately and efficiently forecasted. Inaccurate engineering assumptions can result when there is a lack of understanding of how energy systems operate in real-world applications. Energy systems are complex, and their structural system models are often unknown, which results in unknown system behaviors. Currently, there is a lack of data mining techniques in reverse engineering that are needed to develop efficient structural system models. In this project, a new type of reverse engineering algorithm has been applied to a year's worth of energy data collected from an ASU research building called MacroTechnology Works, to identify the structural system model. Developing and understanding structural system models is the first step in creating accurate predictive analytics for energy production. The associative network of the building's data is highlighted to accurately depict the structural model. This structural model will enhance energy infrastructure systems' energy efficiency, reduce energy waste, and narrow the gaps between energy infrastructure design, planning, operation and management (DPOM).
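
Below is a loose sketch of one way an associative network could be extracted from building time-series data, linking subsystems whose readings are strongly correlated; this correlation-threshold approach is a hypothetical stand-in, not the reverse engineering algorithm applied in the thesis.

import numpy as np

def associative_network(series, threshold=0.7):
    """Link subsystems whose sensor readings are strongly correlated.

    series    : (T, n) array, one column of readings per subsystem
    threshold : absolute Pearson correlation above which an edge is drawn

    Returns an (n, n) boolean adjacency matrix suggesting structural
    relations among the subsystems.
    """
    corr = np.corrcoef(series, rowvar=False)      # pairwise correlations
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)                  # ignore self-correlation
    return adj
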
Contributors: Camarena, Raquel Jimenez (Author) / Chong, Oswald (Thesis director) / Ye, Nong (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
A numerical study of wave-induced momentum transport across the tropopause in the presence of a stably stratified thin inversion layer is presented and discussed. This layer consists of a sharp increase in static stability within the tropopause. The wave propagation is modeled by numerically solving the Taylor-Goldstein equation, which governs the dynamics of internal waves in stably stratified shear flows. The waves are forced by flow over a bell-shaped mountain placed at the lower boundary of the domain. A perfectly radiating condition, based on the group velocity of mountain waves, is imposed at the top to avoid artificial wave reflection. A validation of the numerical method through comparisons with the corresponding analytical solutions is provided. The method is then applied to more realistic stability profiles to study their impact on wave propagation through the tropopause.
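
For reference, the Taylor-Goldstein equation solved in this study takes, in standard textbook notation (which may differ from the thesis's), the form

\[
\frac{d^2 \hat{w}}{dz^2} + \left( \frac{N^2}{(U - c)^2} - \frac{U''}{U - c} - k^2 \right) \hat{w} = 0,
\]

where $\hat{w}(z)$ is the vertical-velocity amplitude of a wave with horizontal wavenumber $k$ and phase speed $c$ ($c = 0$ for stationary mountain waves), $U(z)$ is the background wind, and $N(z)$ is the buoyancy frequency encoding the static stability profile.
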
Created: 2017-05
Description
Building construction, design and maintenance is a sector of engineering where improved efficiency will have immense impacts on resource consumption and environmental health. This research closely examines the Leadership in Energy and Environmental Design (LEED) rating system and the International Green Construction Code (IgCC). The IgCC is a model code, written with the same structure as many building codes. It is a standard that can be enforced if a city's government decides to adopt it. When the IgCC is enforced, a building either meets all of the requirements set forth in the document or fails to meet the code standards. The LEED rating system, on the other hand, is not a building code. LEED certified buildings are built according to the standards of their local jurisdiction, and in addition, building owners can choose to pursue a LEED certification. This is a rating system that awards points based on the sustainable measures achieved by a building. A comparison of these green building systems highlights their accomplishments in terms of reduced electricity usage, use of low-impact materials, indoor environmental quality and other innovative features. It was determined that, in general, the IgCC is a more holistic, stringent approach to green building, while the LEED rating system offers a wider variety of green building options. In addition, building data from LEED certified buildings was compiled and analyzed to understand important trends. Both of these methods are progressing toward low-impact, efficient infrastructure, and a side-by-side comparison, as done in this research, sheds light on the strengths and weaknesses of each method, allowing for future improvements.
Contributors: Campbell, Kaleigh Ruth (Author) / Chong, Oswald (Thesis director) / Parrish, Kristen (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05