This collection includes ASU theses and dissertations submitted by graduate students, as well as theses submitted by undergraduate students through Barrett, The Honors College.


Description
Mixture of experts is a machine learning ensemble approach that consists of individual models that are trained to be "experts" on subsets of the data, and a gating network that provides weights to output a combination of the expert predictions. Mixture of experts models do not currently see wide use due to difficulty in training diverse experts and high computational requirements. This work presents modifications of the mixture of experts formulation that use domain knowledge to improve training, and incorporate parameter sharing among experts to reduce computational requirements.
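The gating mechanism described above can be sketched in a few lines; the toy experts and gate here are illustrative stand-ins, not the trained networks from this work:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_of_experts(x, experts, gate):
    """Weighted combination of expert predictions.

    experts: callables mapping an input to a prediction vector.
    gate: callable mapping the input to one raw score per expert.
    """
    weights = softmax(gate(x))                           # gating network -> expert weights
    preds = np.stack([expert(x) for expert in experts])  # (n_experts, n_outputs)
    return weights @ preds                               # weighted sum of expert outputs

# Toy example: two fixed "experts" and a gate that prefers the
# first expert for positive inputs.
experts = [lambda x: np.array([1.0, 0.0]), lambda x: np.array([0.0, 1.0])]
gate = lambda x: np.array([x.sum(), -x.sum()])
y = mixture_of_experts(np.array([0.5]), experts, gate)
```

Because the gate's softmax weights sum to one, the output stays a convex combination of the expert predictions.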

First, this work presents an application of mixture of experts models to quality-robust visual recognition. It is first shown that human subjects outperform deep neural networks on classification of distorted images, and a model, MixQualNet, that is more robust to distortions is then proposed. The proposed model consists of "experts" that are each trained on a particular type of image distortion. The final output of the model is a weighted sum of the expert models, where the weights are determined by a separate gating network. The proposed model also incorporates weight sharing to reduce the number of parameters, as well as to increase performance.



Second, an application of mixture of experts to predict visual saliency is presented. A computational saliency model attempts to predict where humans will look in an image. In the proposed model, each expert network is trained to predict saliency for a set of closely related images. The final saliency map is computed as a weighted mixture of the expert networks' outputs, with weights determined by a separate gating network. The proposed model achieves better performance than several other visual saliency models and a baseline non-mixture model.

Finally, this work introduces a saliency model that is a weighted mixture of models trained for different levels of saliency. Levels of saliency include high saliency, which corresponds to regions where almost all subjects look, and low saliency, which corresponds to regions where some, but not all subjects look. The weighted mixture shows improved performance compared with baseline models because of the diversity of the individual model predictions.
Contributors: Dodge, Samuel Fuller (Author) / Karam, Lina (Thesis advisor) / Jayasuriya, Suren (Committee member) / Li, Baoxin (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Motion estimation is a core task in computer vision and many applications utilize optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are utilized to solve problems in various areas such as object detection and tracking, interpolation, visual odometry, etc. In this dissertation, three problems from different areas of computer vision and the solutions that make use of modified optical flow methods are explained.

The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate proton magnetic resonance spectroscopy (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies.

In the first application, an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity. The approach uses this additional flow velocity information to guide the interpolation process toward reduced divergence in the interpolated data.
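As a rough illustration of the quantity such an approach penalizes, the divergence of a sampled 2D velocity field can be estimated with finite differences; the grid and field below are illustrative, not the velocimetry data used in this work:

```python
import numpy as np

def divergence_2d(u, v, dx=1.0, dy=1.0):
    """Finite-difference divergence of a 2D velocity field.

    u, v: x- and y-velocity components sampled on a regular grid,
    stored as 2D arrays indexed [row (y), column (x)].
    """
    du_dx = np.gradient(u, dx, axis=1)   # du/dx varies along columns
    dv_dy = np.gradient(v, dy, axis=0)   # dv/dy varies along rows
    return du_dx + dv_dy

# A pure rotation field (u = -y, v = x) is divergence-free, so an
# interpolation guided by this measure should keep it near zero.
y_grid, x_grid = np.mgrid[0:8, 0:8].astype(float)
div = divergence_2d(-y_grid, x_grid)
```

An interpolation scheme for incompressible flow would penalize this quantity so the interpolated field stays close to divergence-free.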

In the second application, a framework consisting mainly of optical flow methods together with other image processing and computer vision techniques is proposed to improve object extraction from synthetic aperture radar images. The framework distinguishes between actual motion and motion detected due to misregistration in SAR image sets, which leads to more accurate and meaningful change detection and improves object extraction from SAR datasets.

In the third application, a set of new methods that aim to improve upon the current state of the art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data is proposed. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application.
Contributors: Kanberoglu, Berkay (Author) / Frakes, David (Thesis advisor) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Human movement is a complex process influenced by physiological and psychological factors. The execution of movement is varied from person to person, and the number of possible strategies for completing a specific movement task is almost infinite. Different choices of strategies can be perceived by humans as having different degrees of quality, and the quality can be defined with regard to aesthetic, athletic, or health-related ratings. It is useful to measure and track the quality of a person's movements, for various applications, especially with the prevalence of low-cost and portable cameras and sensors today. Furthermore, based on such measurements, feedback systems can be designed for people to practice their movements towards certain goals. In this dissertation, I introduce symmetry as a family of measures for movement quality, and utilize recent advances in computer vision and differential geometry to model and analyze different types of symmetry in human movements. Movements are modeled as trajectories on different types of manifolds, according to the representations of movements from sensor data. The benefit of such a universal framework is that it can accommodate different existing and future features that describe human movements. The theory and tools developed in this dissertation will also be useful in other scientific areas to analyze symmetry from high-dimensional signals.
Contributors: Wang, Qiao (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Srivastava, Anuj (Committee member) / Sha, Xin Wei (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Non-line-of-sight (NLOS) imaging of objects not visible to either the camera or illumination source is a challenging task with vital applications including surveillance and robotics. Recent NLOS reconstruction advances have been achieved using time-resolved measurements, but acquiring these measurements requires expensive and specialized detectors and laser sources. This work proposes a data-driven approach for NLOS 3D localization that requires only a conventional camera and projector. The localization is performed using a voxelization of the hidden volume and, alternatively, a regression formulation. Accuracy of greater than 90% is achieved in localizing an NLOS object to a 5 cm × 5 cm × 5 cm volume in real data. By adopting the regression approach, an object of width 10 cm is localized to within approximately 1.5 cm. To generalize to line-of-sight (LOS) scenes with non-planar surfaces, an adaptive lighting algorithm is adopted. This algorithm, based on radiosity, identifies and illuminates the scene patches in the LOS that contribute most to the NLOS light paths, and can factor in system power constraints. Improvements ranging from 6% to 15% in accuracy with a non-planar LOS wall are reported using adaptive lighting, demonstrating the advantage of combining the physics of light transport with active illumination for data-driven NLOS imaging.
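The voxelized localization target can be illustrated with a simple index mapping; the grid origin and extent below are assumptions for illustration, and the learned network itself is omitted:

```python
import numpy as np

VOXEL = 0.05            # 5 cm voxels, matching the abstract
ORIGIN = np.zeros(3)    # hypothetical corner of the hidden volume (meters)
GRID = (10, 10, 10)     # hypothetical 50 cm cube of hidden space

def point_to_voxel(p):
    """Flat index of the 5 cm voxel containing a 3D point (meters).

    A voxel classifier predicts this index; the regression variant
    instead predicts the continuous coordinates directly.
    """
    idx = np.floor((np.asarray(p) - ORIGIN) / VOXEL).astype(int)
    idx = np.clip(idx, 0, np.array(GRID) - 1)
    return int(np.ravel_multi_index(tuple(idx), GRID))

def voxel_center(flat_index):
    """Center of a voxel, used to score the localization error."""
    idx = np.array(np.unravel_index(flat_index, GRID))
    return ORIGIN + (idx + 0.5) * VOXEL

i = point_to_voxel([0.12, 0.07, 0.33])
center = voxel_center(i)
```

The classification error is bounded by half the voxel size (2.5 cm per axis), which is why the regression variant can localize more finely.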
Contributors: Chandran, Sreenithy (Author) / Jayasuriya, Suren (Thesis advisor) / Turaga, Pavan (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Speech is generated by articulators acting on a phonatory source. Identification of this phonatory source and of the articulatory geometry are individually challenging and ill-posed problems, called speech separation and articulatory inversion, respectively. There exists a trade-off between the decomposition and the recovered articulatory geometry because multiple articulatory configurations can map to the same produced speech. Moreover, if measurements are obtained only from a microphone sensor, the lack of any invasive insight adds further difficulty to an already challenging problem. A joint non-invasive estimation strategy that couples articulatory and phonatory knowledge would lead to better articulatory speech synthesis. In this thesis, a joint estimation strategy for speech separation and articulatory geometry recovery is studied. Unlike previous periodic/aperiodic decomposition methods that use stationary speech models within a frame, the proposed model presents a non-stationary speech decomposition method. A parametric glottal source model and an articulatory vocal tract response are represented in a dynamic state-space formulation. The unknown parameters of the speech generation components are estimated using sequential Monte Carlo methods under some specific assumptions. The proposed approach is compared with other glottal inverse filtering methods, including iterative adaptive inverse filtering, state-space inverse filtering, and the quasi-closed phase method.
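Sequential Monte Carlo estimation over a state-space model can be sketched with a minimal bootstrap particle filter; the scalar linear-Gaussian model below is a stand-in assumption, not the glottal source and vocal tract model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(ys, n_particles=500, a=0.9, q=0.1, r=0.2):
    """Bootstrap particle filter for the toy model
    x_t = a * x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).

    Returns the filtered posterior mean of x_t at each step.
    """
    x = rng.normal(0.0, 1.0, n_particles)                # initial particle cloud
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, q, n_particles)      # propagate particles
        logw = -0.5 * ((y - x) / r) ** 2                 # Gaussian observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))               # weighted state estimate
        x = rng.choice(x, size=n_particles, p=w)         # resample by weight
    return np.array(means)

# Filter a sequence of observations clustered around zero.
filtered = bootstrap_particle_filter(np.zeros(30))
```

Propagate, weight, resample is the core loop; the thesis applies the same machinery to a far richer speech-generation model.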
Contributors: Venkataramani, Adarsh Akkshai (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Bliss, Daniel W (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
At present, the vast majority of human subjects with neurological disease are still diagnosed through in-person assessments and qualitative analysis of patient data. In this paper, we propose to use Topological Data Analysis (TDA) together with machine learning tools to automate the process of Parkinson’s disease classification and severity assessment. An automated, stable, and accurate method to evaluate Parkinson’s would be significant in streamlining diagnoses of patients and providing families more time for corrective measures. We propose a methodology which incorporates TDA into analyzing Parkinson’s disease postural shifts data through the representation of persistence images. Studying the topology of a system has proven to be invariant to small changes in data and has been shown to perform well in discrimination tasks. The contributions of the paper are twofold. We propose a method to 1) classify healthy patients from those afflicted by disease and 2) diagnose the severity of disease. We explore the use of the proposed method in an application involving a Parkinson’s disease dataset comprised of healthy-elderly, healthy-young and Parkinson’s disease patients.
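One common way TDA features reach a classifier is via persistence images, which rasterize birth-death pairs into a fixed-size grid; the resolution, bandwidth, and toy pairs below are illustrative assumptions, not the representation tuned in this work:

```python
import numpy as np

def persistence_image(pairs, res=16, sigma=0.1, extent=(0.0, 1.0)):
    """Rasterize (birth, death) pairs into a res x res persistence image.

    Each pair contributes a Gaussian bump at (birth, persistence),
    weighted by its persistence so long-lived features dominate.
    """
    lo, hi = extent
    axis = np.linspace(lo, hi, res)
    gx, gy = np.meshgrid(axis, axis)         # grid over (birth, persistence)
    img = np.zeros((res, res))
    for birth, death in pairs:
        pers = death - birth
        img += pers * np.exp(-((gx - birth) ** 2 + (gy - pers) ** 2)
                             / (2 * sigma ** 2))
    return img

# Two toy topological features; the flattened image is the fixed-length
# feature vector a standard classifier would consume.
img = persistence_image([(0.1, 0.5), (0.2, 0.3)])
features = img.ravel()
```

Because the rasterization is smooth in the pair coordinates, small perturbations of the data produce small changes in the feature vector, which is the stability property the paragraph above relies on.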
Contributors: Rahman, Farhan Nadir (Co-author) / Nawar, Afra (Co-author) / Turaga, Pavan (Thesis director) / Krishnamurthi, Narayanan (Committee member) / Electrical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
This paper will focus on the changes in China's OFDI while also explaining its growth. However, another primary focus will be comparing the relationships between China, Hong Kong, and Africa. This paper will show the correlated changes between the three regions and explain the distribution of China's investments. One argument is that Hong Kong may play a large role in facilitating Chinese investment into Africa, which, if not disaggregated, could lead to inaccurate figures for China's FDI into Africa. The purpose of this paper is to investigate the importance of China's relationship with Hong Kong and Africa. In 2012, Garth Shelton argued that Hong Kong was an important gateway in South Africa's trade with China. Since then, many others have made similar claims in support of Hong Kong's larger role. However, due to the difficulty of finding specific data for each region, these analyses are incomplete and fail to clearly substantiate their theory. I will try to find a correlation by gathering my own data and tables and by conducting interviews.
Contributors: Son, James (Author) / Simonson, Mark (Thesis director) / Iheduru, Okechukwu (Committee member) / Economics Program in CLAS (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This thesis examines the impact of price changes of select microprocessors on the market share and 5-year gross profit net present values of Company X in the networking market through a multi-step analysis. The networking market includes segments such as media processing, cloud services, security, routers & switches, and access points. For this thesis, our team focused on the routers & switches and security segments. Company X wants to capitalize on the expected growth of the networking market as it transitions to its fifth generation (henceforth referred to as 5G) by positioning itself favorably in its customers' eyes through high-quality products offered at competitive prices. Our team performed a quantitative analysis of benchmark data to measure the performance of Company X's products against those of its competitors. We collected this data from third-party computer reviewers, as well as from the published reports of Company X and its competitors. Through the use of a preference matrix, we then normalized this performance data to adjust for different scales. In order to provide a well-rounded analysis, we adjusted these normalized performances for power consumption (using thermal design power as a proxy) as well as price. We believe these adjusted performances are more valuable than raw benchmark data, as they appeal to the demands of price-sensitive customers. Based on these comparisons, our team was able to assess price changes for the market and their discounted financial impact on Company X. Our findings challenge the current pricing of one of the two products analyzed and suggest a 9% decrease in the price of said product. This recommendation most effectively positions Company X for the development of 5G by offering the best balance of market share and NPV.
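The normalization and adjustment steps described above can be sketched as follows; the scores, TDP values, prices, and equal weighting are invented for illustration and are not Company X's data:

```python
import numpy as np

def normalize(col):
    """Min-max scale a column of raw scores to [0, 1]."""
    col = np.asarray(col, dtype=float)
    span = col.max() - col.min()
    return (col - col.min()) / span if span else np.zeros_like(col)

def adjusted_performance(scores, tdp_watts, prices):
    """Preference-matrix style comparison: normalize raw benchmark
    scores, then adjust by power (TDP as a proxy) and price so that
    efficient, cheaper parts are favored. Equal weights are assumed.
    """
    perf = normalize(scores)
    perf_per_watt = normalize(perf / np.asarray(tdp_watts, dtype=float))
    perf_per_dollar = normalize(perf / np.asarray(prices, dtype=float))
    return (perf + perf_per_watt + perf_per_dollar) / 3

# Three hypothetical parts: raw score, TDP (W), price ($).
rank = adjusted_performance([100, 80, 60], [150, 120, 65], [500, 400, 250])
```

Normalizing each column before combining puts benchmark scores, efficiency, and value on the same 0-to-1 scale, which is the role the preference matrix plays in the analysis.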
Contributors: Arias, Stephen (Co-author) / Masson, Taylor (Co-author) / McCall, Kyle (Co-author) / Dimitroff, Alex (Co-author) / Hardy, Sebastian (Co-author) / Simonson, Mark (Thesis director) / Haller, Marcie (Committee member) / School of Accountancy (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The purpose of our research was to develop recommendations and/or strategies for Company A's data center group in the context of the server CPU chip industry. We used data collected from the International Data Corporation (IDC) that was provided by our team coaches, and data that is accessible on the internet. As the server CPU industry expands and transitions to cloud computing, Company A's Data Center Group will need to expand their server CPU chip product mix to meet new demands of the cloud industry and to maintain high market share. Company A boasts leading performance with their x86 server chips and 95% market segment share. The cloud industry is dominated by seven companies Company A calls "The Super 7." These seven companies include: Amazon, Google, Microsoft, Facebook, Alibaba, Tencent, and Baidu. In the long run, the growing market share of the Super 7 could give them substantial buying power over Company A, which could lead to discounts and margin compression for Company A's main growth engine. Additionally, in the long-run, the substantial growth of the Super 7 could fuel the development of their own design teams and work towards making their own server chips internally, which would be detrimental to Company A's data center revenue. We first researched the server industry and key terminology relevant to our project. We narrowed our scope by focusing most on the cloud computing aspect of the server industry. We then researched what Company A has already been doing in the context of cloud computing and what they are currently doing to address the problem. Next, using our market analysis, we identified key areas we think Company A's data center group should focus on. Using the information available to us, we developed our strategies and recommendations that we think will help Company A's Data Center Group position themselves well in an extremely fast growing cloud computing industry.
Contributors: Jurgenson, Alex (Co-author) / Nguyen, Duy (Co-author) / Kolder, Sean (Co-author) / Wang, Chenxi (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Department of Management (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Accountancy (Contributor) / WPC Graduate Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In January 2016, Chinese regulators announced the use of circuit breakers to stabilize the stock market but suspended this mechanism after two weeks. Researchers want to further understand the unique characteristics of the Chinese stock market and measure the feasibility of implementing a circuit breaker in China once again. The thesis provides an overview of China's attempted implementation and its related consequences, followed by possible problems and tentative recommendations. It outlines key characteristics among the different nations that implement circuit breakers and price limit systems. Circuit breaker policies in the United States and Japan are explained in detail, while policies in other nations are presented as an overall trend.
Contributors: Liu, Luyao (Co-author) / Zhang, Zihan (Co-author) / Simonson, Mark (Thesis director) / Aragon, George (Committee member) / School of Accountancy (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05