Description
The theme for this work is the development of fast numerical algorithms for sparse optimization as well as their applications in medical imaging and source localization using sensor array processing. Due to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties when solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method is proposed for solving the large-scale TV-$\ell_1$ regularized inverse problem. Specifically, by taking advantage of the separable structure, the original problem can be approximated by the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion on the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and the robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated into a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
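
For reference, a generic TV-$\ell_1$ reconstruction model of the kind described above can be written as
\[ \min_{u} \; \sum_i \|D_i u\|_2 \;+\; \lambda \|\Psi u\|_1 \;+\; \frac{\mu}{2} \|A u - b\|_2^2 , \]
where the operators $A$, $\Psi$, $D_i$ and the weights $\lambda$, $\mu$, $\beta$ below are illustrative placeholders rather than the dissertation's exact formulation. Introducing splitting variables $w_i \approx D_i u$ and $z \approx \Psi u$ in the augmented Lagrangian yields subproblems with closed-form shrinkage updates, $w_i \leftarrow \operatorname{shrink}_2(D_i u, 1/\beta)$ and $z \leftarrow \operatorname{shrink}_1(\Psi u, \lambda/\beta)$, leaving a quadratic subproblem in $u$ whose normal equations have the BTTB structure targeted by the proposed preconditioner.
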
Contributors: Shen, Wei (Author) / Mittelmann, Hans D. (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A Pairwise Comparison Matrix (PCM) is used to compute relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues that limit its application to large-scale decision problems, specifically: (1) the curse of dimensionality, that is, a large number of pairwise comparisons must be elicited from a decision maker (DM), and (2) inconsistent and (3) imprecise preferences may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions to address these limitations in three phases, as follows. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets; this is done to derive the global weights of the elements from the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and is therefore subject to biases and judgment errors. The second phase proposes a trade-off PCM decomposition methodology that decomposes a PCM into a number of optimally identified subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A non-linear programming model is then developed that calculates PCM element weights that maximize the preferences of the DM while simultaneously minimizing inconsistency. Comparison experiments are conducted using datasets collected from the literature to validate the proposed methodology.
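
As background for the weight-derivation step the framework builds on, a minimal sketch of the standard AHP eigenvector method (Saaty) is given below; the 3x3 matrix is a made-up example, not data from the dissertation.

    import numpy as np

    def ahp_weights(pcm):
        """Priorities from a PCM via its principal eigenvector (standard AHP)."""
        eigvals, eigvecs = np.linalg.eig(pcm)
        k = np.argmax(eigvals.real)                 # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                # normalize priorities
        n = pcm.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)        # consistency index
        return w, ci

    pcm = np.array([[1.0, 3.0, 5.0],                # reciprocal 3x3 PCM
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])
    weights, ci = ahp_weights(pcm)
    print(weights, ci)
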
Contributors: Jalao, Eugene Rex Lazaro (Author) / Shunk, Dan L. (Thesis advisor) / Wu, Teresa (Thesis advisor) / Askin, Ronald G. (Committee member) / Goul, Kenneth M (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study explores the impact of feedback, feedforward, and personality on computer-mediated behavior change. These effects were studied using subjects who entered information relevant to their diet and exercise into an online tool. Subjects were divided into four experimental groups: those receiving only feedback, those receiving only feedforward, those receiving both, and those receiving neither. Results were analyzed using regression analysis. The results indicate that both feedforward and feedback affect behavior change, and that individuals ranking low in conscientiousness experienced behavior change equivalent to that of individuals ranking high in conscientiousness in the presence of feedforward and/or feedback.
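
A minimal sketch of the kind of moderated regression the abstract implies is given below; the variable names, coding, and toy data are assumptions, not the study's actual dataset.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: 0/1 indicators for the feedback and feedforward
    # treatments, a conscientiousness score, and a behavior-change outcome.
    df = pd.DataFrame({
        "feedback":          [1, 1, 0, 0, 1, 0, 1, 0],
        "feedforward":       [1, 0, 1, 0, 0, 1, 1, 0],
        "conscientiousness": [2.1, 4.5, 3.0, 1.8, 2.7, 4.1, 3.3, 2.2],
        "behavior_change":   [0.8, 0.9, 0.7, 0.1, 0.6, 0.8, 0.9, 0.2],
    })

    # Interaction terms test whether feedback/feedforward compensate for
    # low conscientiousness, as the abstract reports.
    model = smf.ols("behavior_change ~ feedback * conscientiousness"
                    " + feedforward * conscientiousness", data=df).fit()
    print(model.summary())
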
Contributors: McCreless, Tamuchin (Author) / St. Louis, Robert (Thesis advisor) / St. Louis, Robert D. (Committee member) / Goul, Kenneth M (Committee member) / Shao, Benjamin B (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Social media offers a powerful platform for the independent digital content producer community to develop, disseminate, and maintain their brands. In terms of information systems research, the vast majority of the work has not examined hedonic consumption on Social Media Sites (SMS); the focus has mostly been on organizational perspectives and utilitarian gains from these services. Consumption-enhancing hedonic utility is experienced differently on a social media site than through traditional commerce channels, including e-commerce retailers; consequently, the dynamic of the decision-making process shifts when it is made in a social context. Previous research assumed a limited influence from a small, immediate group of peers, but the rules change when the network of peers expands exponentially. The assertion is that, while there are individual differences in the level of susceptibility to influence coming from others, these are not the most important pieces of the analysis, unlike in research centered completely on influence. Rather, the context of the consumption can play an important role in the way social influence factors affect consumer behavior on Social Media Sites. Over the course of three studies, this dissertation examines the factors that influence consumer decision-making and the brand personalities created and interpreted on these SMS. Study one examines the role of different types of peer influence on consumer decision-making on Facebook. Study two observes the impact of different types of producer message posts, together with the different types of influence, on decision-making on Twitter. Study three concludes this work with an exploratory empirical investigation of the actual Twitter postings of a set of musicians. These studies contribute to the body of IS literature by evaluating the specific behavioral changes related to consumption in the context of digital social media: (a) the power of social influencers in contrast to personal preferences on SMS, and (b) the effect on consumers of producer message types and content on SMS at both the profile level and the individual message level.
Contributors: Sopha, Matthew (Author) / Santanam, Raghu T (Thesis advisor) / Goul, Kenneth M (Committee member) / Gu, Bin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Ample evidence exists to support the conclusion that enterprise search is failing its users. This failure is costing corporate America billions of dollars every year. Most enterprise search engines are built using web search engines as their foundations. These search engines are optimized for web use and are inadequate when used inside the firewall. Without the ability to use popularity-based measures for ranking documents returned to the searcher, these search engines must rely on full-text search technologies. The Information Science literature explains why full-text search, by itself, fails to adequately discriminate relevant from irrelevant documents. This failure in discrimination results in far too many documents being returned to the searcher, which causes enterprise searchers to abandon their searches in favor of re-creating the documents or information they seek. This dissertation describes and evaluates a potential solution to the problem of failed enterprise search derived from the Information Science literature: subject-aided search. In subject-aided search, full-text search is augmented with a search of subject metadata coded into each document based upon a hierarchically structured subject index. Using the Design Science methodology, this dissertation develops and evaluates three IT artifacts in the search for a solution to the wicked problem of enterprise search failure.
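
A minimal sketch of the subject-aided search idea is given below; the document fields, subject codes, and scoring are illustrative assumptions, not the dissertation's IT artifacts.

    # Candidate documents must both match the full-text query and carry a
    # subject code under the searcher's chosen node of a hierarchical index.
    def subject_aided_search(query_terms, subject_prefix, documents):
        results = []
        for doc in documents:
            text_score = sum(doc["text"].lower().count(t) for t in query_terms)
            in_subject = any(code.startswith(subject_prefix)
                             for code in doc["subjects"])
            if text_score > 0 and in_subject:   # metadata narrows the result set
                results.append((text_score, doc["id"]))
        return [doc_id for _, doc_id in sorted(results, reverse=True)]

    # Toy corpus: codes such as "eng.proc.weld" stand in for a corporate taxonomy.
    docs = [{"id": 1, "text": "welding procedure for pipe joints",
             "subjects": ["eng.proc.weld"]},
            {"id": 2, "text": "welding mentioned in a travel blog",
             "subjects": ["hr.newsletter"]}]
    print(subject_aided_search(["welding"], "eng.proc", docs))   # -> [1]
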
Contributors: Schymik, Gregory (Author) / St. Louis, Robert (Thesis advisor) / Goul, Kenneth M (Committee member) / Santanam, Raghu (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Currently, autonomous vehicles are being evaluated by how well they interact with humans without evaluating how well humans interact with them. Since people are not going to switch over to autonomous vehicles unanimously, attention must be given to how well these new vehicles signal intent to human drivers from the driver's point of view. Ineffective communication will lead to unnecessary discomfort among drivers caused by underlying uncertainty about what an autonomous vehicle is or isn't about to do. Recent studies suggest that humans tend to fixate on areas of higher uncertainty, so scenarios with a higher number of fixations on the vehicle can be inferred to be more uncertain. We provide a framework for measuring human uncertainty and use it to measure the effect of empathetic versus non-empathetic agents. We used a simulated driving environment to create recorded scenarios and manipulated the autonomous vehicle to include either an empathetic or non-empathetic agent. The driving interaction consists of two vehicles approaching an uncontrolled intersection. These scenarios were played to twelve participants while their gaze was recorded to track what they were fixating on. The overall intent was to provide an analytical framework as a tool for evaluating autonomous driving features; in this case, we chose to evaluate how effective it was to include empathetic behaviors in the autonomous vehicle's decision making. A t-test analysis of the gaze data indicated that empathy did not in fact reduce uncertainty, although additional testing of this hypothesis will be needed due to the small sample size.
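
A minimal sketch of the gaze comparison is given below; the fixation counts are made-up numbers standing in for the recorded data from the twelve participants.

    from scipy import stats

    # Hypothetical per-participant fixation counts on the autonomous vehicle,
    # one list per condition (empathetic vs. non-empathetic agent).
    fixations_empathetic = [14, 9, 11, 13, 10, 12]
    fixations_non_empathetic = [12, 15, 10, 11, 13, 14]

    # More fixations are read as more uncertainty, so a non-significant
    # difference is consistent with the finding reported above.
    t_stat, p_value = stats.ttest_ind(fixations_empathetic,
                                      fixations_non_empathetic)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
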

Contributors: Greenhagen, Tanner Patrick (Author) / Yang, Yezhou (Thesis director) / Jammula, Varun C (Committee member) / Computer Science and Engineering Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In the last decade, a large variety of algorithms have been developed for use in object tracking, environment mapping, and object classification. It is often difficult for beginners to fully predict the constraints that multirotors place on machine vision algorithms. The purpose of this paper is to explain some of the types of algorithms that can be applied to these aerial systems, why the constraints for these algorithms exist, and what could be done to mitigate them. This paper provides a summary of the processes involved in a popular filter-based tracking algorithm called MOSSE (Minimum Output Sum of Squared Error) and a particular implementation of SLAM (Simultaneous Localization and Mapping) called LSD-SLAM.
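
As an illustration of the filter-based tracker summarized in the paper, a minimal sketch of the MOSSE update and localization steps (after Bolme et al.) is given below; preprocessing, windowing, and regularization are simplified, and the toy usage is not code from the paper.

    import numpy as np

    def mosse_update(A, B, patch, target_response, lr=0.125, eps=1e-5):
        """One online MOSSE update in the Fourier domain.
        patch: grayscale template; target_response: desired Gaussian peak."""
        F = np.fft.fft2(patch)
        G = np.fft.fft2(target_response)
        A = lr * (G * np.conj(F)) + (1 - lr) * A      # filter numerator
        B = lr * (F * np.conj(F)) + (1 - lr) * B      # filter denominator
        H = A / (B + eps)                              # current correlation filter
        return A, B, H

    def mosse_locate(H, patch):
        """Correlate the filter with a new patch and return the response peak."""
        response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
        return np.unravel_index(np.argmax(response), response.shape)

    # Toy usage on a random 64x64 patch with a centered Gaussian target.
    patch = np.random.rand(64, 64)
    yy, xx = np.mgrid[0:64, 0:64]
    target = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 2.0 ** 2))
    A, B, H = mosse_update(np.zeros((64, 64), complex),
                           np.zeros((64, 64), complex), patch, target)
    print(mosse_locate(H, patch))   # peak should land near (32, 32)
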
Contributors: Van Hazel, Colton (Author) / Zhang, Wenlong (Thesis director) / Yang, Yezhou (Committee member) / Engineering Programs (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
A defense-by-randomization framework is proposed as an effective defense mechanism against different types of adversarial attacks on neural networks. Experiments were conducted by selecting combinations of differently constructed image classification neural networks to observe which combinations, applied to this framework, were most effective in maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
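
A minimal sketch of the defense-by-randomization idea is given below; the models list stands in for the trained networks and is an assumption, not the thesis code.

    import random

    # Each query is classified by a model drawn at random from an ensemble of
    # differently constructed networks, so an adversary cannot tailor a
    # perturbation to a single fixed model.
    def randomized_predict(models, x):
        model = random.choice(models)      # fresh random draw for every query
        return model.predict(x)
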
Contributors: Mazboudi, Yassine Ahmad (Author) / Yang, Yezhou (Thesis director) / Ren, Yi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Propaganda bots are malicious bots on Twitter that spread divisive opinions and support political accounts. This project is based on detecting propaganda bots on Twitter using machine learning. Once I began to observe patterns among propaganda followers on Twitter, I determined that I could train algorithms to detect these bots. The paper focuses on my process of developing and training classifiers and using them to create a user-facing server that performs prediction functions automatically. The learning goals of this project are detailed; chief among them was to learn some form of machine learning architecture. I needed to learn some aspects of large-scale data handling, as well as how to maintain these datasets for training use. I also needed to develop a server that would execute these functionalities on command. I wanted to design a full-stack system that allowed me to create every aspect of a user-facing server that can execute predictions using the classifiers that I designed.
Throughout this project, I set a number of learning goals that would define its success. I needed to learn how to use the supporting libraries that would help me design this system. I also learned how to use the Twitter API, as well as how to build the infrastructure around it that would allow me to collect large amounts of data for machine learning. I needed to become familiar with common machine learning libraries in Python in order to create the necessary algorithms and pipelines to make predictions based on Twitter data.
This paper details the steps and decisions needed to determine how to collect this data and apply it to machine learning algorithms. I determined how to create labelled data using pre-existing Botometer ratings, and the levels of confidence I needed to label data for training. I used the scikit-learn library to create algorithms that best detect these bots, and applied a number of pre-processing routines, including natural language processing and data analysis techniques, to refine the classifiers' precision. I eventually moved to remotely hosted versions of the system on Amazon Web Services instances to collect larger amounts of data and train more advanced classifiers. This led to my final implementation of a user-facing server, hosted on AWS and interfacing with Gmail's IMAP server.
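
A minimal sketch of the kind of scikit-learn pipeline described here is given below; the tweets and labels are toy placeholders (in the project, labels came from thresholding Botometer ratings), and the actual features, thresholds, and classifier may differ.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled accounts: 1 = bot-like per an assumed Botometer cutoff.
    texts = ["RT this to expose the deep state",
             "photos from my weekend hike",
             "follow back all patriots, spread the word",
             "new blog post about sourdough starters"]
    labels = [1, 0, 1, 0]

    # Text features plus a linear classifier, trained and used for prediction.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    print(clf.predict(["retweet if you agree, patriots"]))
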
The current and future development of this system is laid out, including more advanced classifiers, better data analysis, conversion to third-party Twitter data collection systems, and additional user features. I detail what I have learned from this exercise and what I hope to continue working on.
Contributors: Peterson, Austin (Author) / Yang, Yezhou (Thesis director) / Sadasivam, Aadhavan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In the field of machine learning, reinforcement learning stands out for its ability to explore approaches to complex, high-dimensional problems that outperform even expert humans. Reinforcement learning provides an approach to solving robotic locomotion tasks without the need for unique, hand-designed controllers. In this thesis, two reinforcement learning algorithms, Deep Deterministic Policy Gradient and Group Factor Policy Search, are compared based upon their performance in the bipedal walking environment provided by OpenAI Gym. These algorithms are evaluated on their performance in the environment and their sample efficiency.
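
A minimal evaluation loop for the bipedal walking task, using the classic OpenAI Gym API, is sketched below; the agent object stands in for a trained DDPG or Group Factor Policy Search policy and is an assumption, and the environment id depends on the gym release.

    import gym

    def evaluate(agent, episodes=10):
        # Classic gym API: reset() returns obs, step() returns a 4-tuple.
        env = gym.make("BipedalWalker-v3")
        returns = []
        for _ in range(episodes):
            obs, done, total = env.reset(), False, 0.0
            while not done:
                action = agent.act(obs)                 # 4-dim action in [-1, 1]
                obs, reward, done, _ = env.step(action)
                total += reward
            returns.append(total)
        return sum(returns) / len(returns)              # mean episode return
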
Contributors: McDonald, Dax (Author) / Ben Amor, Heni (Thesis director) / Yang, Yezhou (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2018-12