Matching Items (14)
Description

In many classification problems, data samples cannot be collected easily, for example in drug trials, biological experiments, and studies of cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer versus normal patients, the consequences of misclassification are arguably more severe than for other data types, because the data point could be a cancer patient, or the classification decision could help determine which gene might be overexpressed and perhaps a cause of cancer. These misclassifications are typically more frequent in the presence of outlier data points. The aim of this thesis is to develop a maximum-margin classifier that addresses the lack of robustness of discriminant-based classifiers (such as the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined the Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic data and experimental data from biomedical studies, and is competitive with a classifier derived along similar lines when real-life data examples are considered.
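The abstract does not spell out the specific loss function used by the RSVM, but the underlying idea can be illustrated with a minimal, hypothetical sketch: the standard SVM hinge loss grows without bound for badly misclassified points, so a bounded ("ramp") loss, one common choice for outlier-robust maximum-margin classifiers, caps the influence any single outlier can have.

```python
import numpy as np

def hinge_loss(margins):
    """Standard SVM hinge loss: grows without bound for badly misclassified points,
    so a single extreme outlier can dominate the objective."""
    return np.maximum(0.0, 1.0 - margins)

def ramp_loss(margins, cap=2.0):
    """Truncated ("ramp") hinge loss: the penalty saturates at `cap`, limiting the
    influence any one outlier can exert on the decision boundary."""
    return np.minimum(np.maximum(0.0, 1.0 - margins), cap)

# Margins y_i * f(x_i): negative values are misclassified points.
margins = np.array([2.0, 0.5, -0.5, -5.0])   # the last point is an extreme outlier
print(hinge_loss(margins))   # [0.  0.5 1.5 6. ]  -> the outlier contributes most of the loss
print(ramp_loss(margins))    # [0.  0.5 1.5 2. ]  -> the outlier's influence is bounded
```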
ContributorsGupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
Description

Urbanization and infrastructure development often bring dramatic changes in the surface and groundwater regimes. These changes in moisture content may be particularly problematic when subsurface soils are moisture-sensitive, such as expansive soils. Residential foundations such as slab-on-ground may be built on unsaturated expansive soils and therefore have to resist the deformations associated with changes in moisture content (matric suction) in the soil. The problem is more pronounced in arid and semi-arid regions, where drying periods followed by a wet season result in large changes in soil suction. Moisture content changes cause volume changes in expansive soils, which can seriously damage structures. To mitigate these ill effects, various mitigation methods are adopted; the most commonly adopted method in the US is the removal and replacement of upper soils in the profile. The remove-and-replace method, although heavily used, is not well understood with regard to its impact on the depth of soil wetting or near-surface differential soil movements. In this study, the effectiveness of the remove-and-replace method is examined. A parametric study over various removal and replacement materials is performed to obtain the optimal replacement depths and the best material. The depth of wetting and the heave caused in the expansive soil profile under climatic conditions and common irrigation scenarios are studied for arid regions. Soil suction changes and associated soil deformations are analyzed using finite element codes for unsaturated flow and stress/deformation, SVFlux and SVSolid, respectively. The effectiveness and fundamental mechanisms at play in mitigating expansive soils by remove-and-replace are studied, including (1) its role in reducing the depth and degree of wetting, (2) its effect in reducing the overall heave potential, and (3) its effectiveness in pushing the seat of movement deeper within the soil profile to reduce differential soil surface movements. Various non-expansive replacement layers and different surface flux boundary conditions are analyzed, and the concept of an optimal depth and soil is introduced. General observations are made concerning the efficacy of remove and replace as a mitigation method.
ContributorsBharadwaj, Anushree (Author) / Houston, Sandra L. (Thesis advisor) / Welfert, Bruno (Thesis advisor) / Zapata, Claudia E (Committee member) / Arizona State University (Publisher)
Created2013
Description

This thesis project focuses on algorithms that generate good sampling points for function approximation. In one dimension, polynomial interpolation using equispaced points is unstable, with large oscillations near the endpoints of the interpolated interval. On the other hand, Chebyshev nodes provide both stable and highly accurate points for polynomial interpolation. In higher dimensions, optimal sampling points are unknown. This project addresses this problem by finding algorithms that are robust in various domains for polynomial interpolation and least-squares approximation. To measure the quality of the nodes produced by these algorithms, the Lebesgue constant is used. The algorithms employ a number of numerical techniques, such as the Gram-Schmidt process and the pivoted QR process. In addition, concepts such as node density and greedy algorithms are explored.
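As a rough illustration of the quality measure used in this project, the sketch below numerically estimates the Lebesgue constant for equispaced versus Chebyshev nodes on [-1, 1]; the node count and grid resolution are illustrative choices, not taken from the thesis.

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=2000):
    """Estimate the Lebesgue constant of polynomial interpolation at `nodes`
    by maximizing the Lebesgue function over a fine grid on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n_eval)
    # Lagrange basis polynomials evaluated on the fine grid, one column per node.
    L = np.ones((n_eval, len(nodes)))
    for j, xj in enumerate(nodes):
        for k, xk in enumerate(nodes):
            if k != j:
                L[:, j] *= (x - xk) / (xj - xk)
    return np.abs(L).sum(axis=1).max()

n = 20
equispaced = np.linspace(-1.0, 1.0, n + 1)
chebyshev = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev points

print("equispaced:", lebesgue_constant(equispaced))   # grows exponentially with n
print("Chebyshev: ", lebesgue_constant(chebyshev))    # grows only logarithmically with n
```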

ContributorsGuo, Maosheng (Author) / Platte, Rodrigo (Thesis director) / Welfert, Bruno (Committee member) / School of Mathematical and Statistical Sciences (Contributor, Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description

We study an idealized model of a wind-driven ocean, namely a 2-D lid-driven cavity with a linear temperature gradient along the side walls and constant hot and cold temperatures on the top and bottom boundaries, respectively. In particular, we determine numerically the response of the flow field and temperature stratification to harmonic forcing of the lid velocity, using the Navier-Stokes equations with the Boussinesq approximation, in an attempt to understand how variations of external forces (such as the wind over the ocean) transfer energy to a system by exciting internal modes through resonances. The time variation of the forcing, which accounts for turbulence at the boundary, is critical for allowing energy waves to penetrate the stratified medium; the angles of the internal waves depend on these perturbation frequencies. The interaction of two 45-degree wave beams at the center of the cavity is of particular interest.
ContributorsTaylor, Stephanie Lynn (Author) / Welfert, Bruno (Thesis director) / Lopez, Juan (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / W. P. Carey School of Business (Contributor)
Created2015-05
Description

A comparison of the performance of CUDA versus OpenMP for the Jacobi, Gauss-Seidel, and S.O.R. iterative methods for Laplace's equation with Dirichlet boundary conditions is presented. Both the number of cores and the grid size were varied for the OpenMP program, while the grid size was varied for the CUDA program. CUDA outperforms the 8-core OpenMP program with the Jacobi and Gauss-Seidel schemes for all grid sizes, and is competitive with S.O.R. for all grid sizes examined.
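The thesis compares CUDA and OpenMP implementations; as a hedged point of reference only, the sketch below shows a plain serial Jacobi iteration for Laplace's equation with Dirichlet boundary conditions, the scheme the parallel versions accelerate (grid size and tolerance are illustrative).

```python
import numpy as np

def jacobi_laplace(u, tol=1e-6, max_iter=100_000):
    """Serial Jacobi iteration for Laplace's equation on a uniform grid.
    `u` holds the Dirichlet boundary values; interior entries are the initial guess."""
    u = u.copy()
    for it in range(max_iter):
        u_new = u.copy()
        # Each interior point becomes the average of its four neighbors.
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it
        u = u_new
    return u, max_iter

# Example: 0 on three sides, 1 on the top boundary of a 65x65 grid.
n = 65
u0 = np.zeros((n, n))
u0[0, :] = 1.0
solution, iterations = jacobi_laplace(u0)
```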
ContributorsProst, Spencer Arthur (Author) / Gardner, Carl (Thesis director) / Welfert, Bruno (Committee member) / Speyer, Gil (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2013-05
Description

With continuing advances in computational power, algorithmic trading has become one of the primary strategies for trading on the stock market. To understand why and how these strategies have been effective, this project examines the complete process of creating tools and applications to analyze and predict stock prices in order to perform low-frequency trading. The project is composed of three main components. The first component integrates several public resources to acquire, process, and store financial trading data needed by the other components. The Alpha Vantage API, a freely available service, provides an accurate and comprehensive dataset of features for each stock ticker requested. The second component researches, prototypes, and implements various trading algorithms in code. We began by focusing on the mean reversion algorithm as a proof-of-concept algorithm to develop meaningful trading strategies and identify patterns within our datasets. To augment our market prediction power ("alpha"), we implemented a Long Short-Term Memory (LSTM) recurrent neural network. Neural networks are an effective but often complex tool, used frequently in data science when traditional methods are found lacking. Following the implementation, the last component optimizes, analyzes, compares, and contrasts all of the algorithms and identifies key features to assess the overall effectiveness of each algorithm. We were able to identify conclusively which aspects of each algorithm provided better alpha, and we created an entire pipeline to automate this process for live trading. An additional reason for automation is to provide an educational framework so that anyone interested in quantitative finance can leverage this project to gain further insight.
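The exact trading rules are not detailed in the abstract; the sketch below shows one common formulation of a mean reversion strategy, trading against large z-score deviations from a rolling mean, using synthetic prices rather than Alpha Vantage data (window length and entry threshold are illustrative).

```python
import numpy as np
import pandas as pd

def mean_reversion_signals(prices: pd.Series, window: int = 20, z_entry: float = 1.0):
    """Toy mean-reversion strategy: trade against large deviations from a rolling mean.
    Returns +1 (long), -1 (short), or 0 (flat) for each day."""
    rolling_mean = prices.rolling(window).mean()
    rolling_std = prices.rolling(window).std()
    z = (prices - rolling_mean) / rolling_std
    signals = pd.Series(0, index=prices.index)
    signals[z < -z_entry] = 1    # price unusually low  -> expect reversion upward
    signals[z > z_entry] = -1    # price unusually high -> expect reversion downward
    return signals

# Backtest on synthetic data: next-day returns earned by yesterday's position.
prices = pd.Series(100 + np.cumsum(np.random.randn(500)))
signals = mean_reversion_signals(prices)
daily_returns = prices.pct_change()
strategy_returns = signals.shift(1) * daily_returns
```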
ContributorsYurowkin, Alexander (Co-author) / Kumar, Rohit (Co-author) / Welfert, Bruno (Thesis director) / Li, Baoxin (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description

The variable projection method has been developed as a powerful tool for solving separable nonlinear least squares problems. It has proven effective in cases where the underlying model consists of a linear combination of nonlinear functions, such as exponential functions. In this thesis, a modified version of the variable projection method is employed to address a challenging semi-blind deconvolution problem involving mixed Gaussian kernels. The aim is to recover the original signal accurately while estimating the mixed Gaussian kernel used during the convolution process. The numerical results obtained through the implementation of the proposed algorithm are presented. These results highlight the method's ability to approximate the true signal successfully. However, accurately estimating the mixed Gaussian kernel remains a challenging task. The implementation details, specifically the construction of a simplified Jacobian for the Gauss-Newton method, are explored. This contribution enhances the understanding and practicality of the approach.
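As a hedged illustration of the core idea, the sketch below applies variable projection to the exponential-sum case mentioned above rather than to the thesis's mixed-Gaussian deconvolution problem: for each trial value of the nonlinear parameters, the linear coefficients are eliminated by an inner linear least-squares solve, and only the nonlinear parameters are passed to the outer Gauss-Newton-type solver.

```python
import numpy as np
from scipy.optimize import least_squares

def varpro_residual(alpha, t, y):
    """Residual for the separable model y ~ Phi(alpha) @ c, with Phi_j(t) = exp(-alpha_j t).
    The linear coefficients c are 'projected out' by solving a linear least-squares
    problem for every candidate alpha, so the outer solver only sees alpha."""
    Phi = np.exp(-np.outer(t, alpha))            # nonlinear basis functions
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # optimal linear coefficients for this alpha
    return Phi @ c - y

# Synthetic data from a sum of two decaying exponentials.
t = np.linspace(0, 5, 200)
y_true = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-3.0 * t)
y = y_true + 0.01 * np.random.randn(t.size)

result = least_squares(varpro_residual, x0=[0.2, 2.0], args=(t, y))
alpha_hat = result.x   # estimated nonlinear (decay-rate) parameters
```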
ContributorsDworaczyk, Jordan Taylor (Author) / Espanol, Malena (Thesis advisor) / Welfert, Bruno (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2023
Description

The dynamics of a fluid flow inside 2D square and 3D cubic cavities under various configurations were simulated and analyzed using a spectral code I developed. This code was validated against known studies in the 3D lid-driven cavity. It was then used to explore the various dynamical behaviors close to the onset of instability of the steady-state flow, and to explain in the process the mechanism underlying an intermittent bursting previously observed. A fairly complete bifurcation picture emerged, using a combination of computational tools such as selective frequency damping, edge-state tracking, and subspace restriction.

The code was then used to investigate the flow in a 2D square cavity under stable temperature stratification, an idealized version of a lake with warmer water at the surface compared to the bottom. The governing equations are the Navier-Stokes equations under the Boussinesq approximation. Simulations were done over a wide range of parameters of the problem quantifying the driving velocity at the top (e.g. wind) and the strength of the stratification. Particular attention was paid to the mechanisms associated with the onset of instability of the base steady state, and to the complex nontrivial dynamics occurring beyond onset, where the presence of multiple states leads to a rich spectrum of states, including homoclinic and heteroclinic chaos.

A third configuration investigates the flow dynamics of a fluid in a rapidly rotating cube subjected to small-amplitude modulations. The responses were quantified by global helicity and energy measures, and various peak responses associated with resonances with intrinsic eigenmodes of the cavity and/or internal retracing beams were clearly identified for the first time. A novel approach to compute the eigenmodes is also described, making accessible a whole catalog of these with various properties and dynamics. When the small-amplitude modulation does not align with the rotation axis (precession), we show that a new set of eigenmodes is primarily excited as the angular velocity increases, while triadic resonances may occur once the nonlinear regime kicks in.
ContributorsWu, Ke (Author) / Lopez, Juan (Thesis advisor) / Welfert, Bruno (Thesis advisor) / Tang, Wenbo (Committee member) / Platte, Rodrigo (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created2019
Description

The properties of divergence-free vector field interpolants are explored on uniform and scattered nodes, along with their application to fluid flow problems. These interpolants may be applied to physical problems that require the approximant to have zero divergence, such as the velocity field in the incompressible Navier-Stokes equations and the magnetic and electric fields in Maxwell's equations. In addition, the methods studied here are meshfree, and are suitable for problems defined on complex domains, where mesh generation is computationally expensive or inaccurate, or for problems where the data is only available at scattered locations.

The contributions of this work include a detailed comparison between standard and divergence-free radial basis approximations, a study of the Lebesgue constants for divergence-free approximations and their dependence on node placement, and an investigation of the flat limit of divergence-free interpolants. Finally, numerical solvers for the incompressible Navier-Stokes equations in primitive variables are implemented using discretizations based on traditional and divergence-free kernels. The numerical results are compared to reference solutions obtained with a spectral method.
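As a hedged illustration (not the thesis's implementation), the sketch below builds a 2D divergence-free matrix-valued kernel from a Gaussian via Phi(x) = (-Laplacian(phi)) I + Hessian(phi), so that each column of the kernel, and hence the interpolant, is divergence-free; the shape parameter and node count are illustrative.

```python
import numpy as np

def div_free_gaussian_kernel(d, eps=3.0):
    """2x2 matrix-valued divergence-free kernel from phi(r) = exp(-(eps*r)^2),
    via Phi = (-Laplacian(phi)) I + Hessian(phi); each column is divergence-free."""
    dx, dy = d
    phi = np.exp(-eps**2 * (dx**2 + dy**2))
    return phi * np.array([[2*eps**2 - 4*eps**4 * dy**2, 4*eps**4 * dx * dy],
                           [4*eps**4 * dx * dy,          2*eps**2 - 4*eps**4 * dx**2]])

def div_free_interpolate(nodes, values, eval_pts, eps=3.0):
    """Interpolate a sampled 2D vector field with a divergence-free approximant."""
    n = len(nodes)
    A = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for j in range(n):
            A[2*i:2*i+2, 2*j:2*j+2] = div_free_gaussian_kernel(nodes[i] - nodes[j], eps)
    coeffs = np.linalg.solve(A, values.ravel())        # one 2-vector coefficient per node
    out = np.zeros((len(eval_pts), 2))
    for k, x in enumerate(eval_pts):
        for j in range(n):
            out[k] += div_free_gaussian_kernel(x - nodes[j], eps) @ coeffs[2*j:2*j+2]
    return out

# Example: sample the divergence-free field u = (-y, x) at scattered nodes.
rng = np.random.default_rng(0)
nodes = rng.uniform(-1, 1, size=(30, 2))
values = np.column_stack([-nodes[:, 1], nodes[:, 0]])
approx = div_free_interpolate(nodes, values, nodes)    # reproduces the data at the nodes
```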
ContributorsAraujo Mitrano, Arthur (Author) / Platte, Rodrigo (Thesis advisor) / Wright, Grady (Committee member) / Welfert, Bruno (Committee member) / Gelb, Anne (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created2016
Description

Obtaining high-quality experimental designs to optimize statistical efficiency and data quality is quite challenging for functional magnetic resonance imaging (fMRI). The primary fMRI design issue is the selection of the best sequence of stimuli based on a statistically meaningful optimality criterion. Previous studies have provided guidance and powerful computational tools for obtaining good fMRI designs. However, these results are mainly for basic experimental settings with simple statistical models. In this work, a type of modern fMRI experiment is considered, in which the design matrix of the statistical model depends not only on the selected design, but also on the experimental subject's probabilistic behavior during the experiment. The design matrix is thus uncertain at the design stage, making it difficult to select good designs. By taking this uncertainty into account, a very efficient approach for obtaining high-quality fMRI designs is developed in this study. The proposed approach is built upon an analytical result and an efficient computer algorithm. It is shown through case studies that the proposed approach can outperform an existing method in terms of both computing time and the quality of the obtained designs.
ContributorsZhou, Lin (Author) / Kao, Ming-Hung (Thesis advisor) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created2014