Matching Items (174)
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization, together with their applications in medical imaging and in source localization using sensor array processing. Following the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity, but traditional interior-point methods become computationally impractical at the scale of CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method is proposed for solving the large-scale TV-$\ell_1$ regularized inverse problem. Specifically, by taking advantage of the separable structure, the original problem is approximated by a sum of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and of criteria for optimal parameter selection is given. Numerical experiments test the performance of the proposed algorithm and its robustness over a wide range of parameter values. Applications of the algorithm to magnetic resonance (MR) imaging and a comparison with existing methods are included. The second part of this work applies the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated as a sparse waveform over an over-complete basis, and the ability of the $\ell_p$-norm to detect sparsity is studied. An algorithm is proposed for minimizing the resulting non-convex problem. Numerical experiments show that the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely spaced sources with higher accuracy than existing methods.
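For orientation, a common generic form of a TV-$\ell_1$ regularized inverse problem is

$$ \min_{u}\; \alpha\,\mathrm{TV}(u) \;+\; \beta\,\lVert \Psi u \rVert_1 \;+\; \tfrac{1}{2}\,\lVert A u - b \rVert_2^2, $$

where $A$ is the measurement operator (e.g., a partial Fourier transform in MR imaging), $b$ the observed data, $\Psi$ a sparsifying transform, and $\alpha, \beta \ge 0$ regularization weights. This is a sketch under standard conventions, not necessarily the thesis's exact formulation; the separable structure mentioned above refers to splitting such a sum into terms that each admit a closed-form minimizer.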
Contributors: Shen, Wei (Author) / Mittlemann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A smart home system (SHS) is an information system aimed at home automation. An SHS can connect to almost any electronic or electric device used in a home so that all of them can be controlled and monitored centrally. Today's technology also allows home owners to control and monitor the SHS installed in their homes remotely, typically by giving the SHS network access. Although network access brings many conveniences to home owners, it also exposes the SHS to more security threats than ever before. As a result, when designing an SHS, the security threats it might face should be given careful consideration. These threats can be addressed properly by understanding them and by knowing which parts of the system should be protected against them first. This leads to the idea of addressing the security threats an SHS might face at the requirements engineering level. Following this idea, this paper proposes a systematic approach to generating security requirements specifications for an SHS. It can be viewed as the first step toward a complete SHS security requirements engineering process.
Contributors: Xu, Rongcao (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Ajay (Committee member) / Lindquist, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Social media offers a powerful platform for the independent digital content producer community to develop, disperse, and maintain their brands. In information systems research, the vast majority of work has not examined hedonic consumption on Social Media Sites (SMS); the focus has mostly been on organizational perspectives and the utilitarian gains from these services. Hedonic utility from consumption is experienced differently on a social media site than through traditional commerce channels, including e-commerce retailers; consequently, the dynamics of the decision-making process shift when decisions are made in a social context. Previous research assumed the limited influence of a small, immediate group of peers, but the rules change when the network of peers expands exponentially. The assertion is that, while there are individual differences in susceptibility to influence from others, these are not the most important pieces of the analysis, unlike in research centered entirely on influence. Rather, the context of consumption can play an important role in how social influence factors affect consumer behavior on SMS. Over the course of three studies, this dissertation examines factors that influence consumer decision-making and the brand personalities created and interpreted on these SMS. Study one examines the role of different types of peer influence on consumer decision-making on Facebook. Study two observes the impact of different types of producer message posts, combined with the different types of influence, on decision-making on Twitter. Study three concludes this work with an exploratory empirical investigation of the actual Twitter postings of a set of musicians. These studies contribute to the IS literature by evaluating specific behavioral changes related to consumption in the context of digital social media: (a) the power of social influencers in contrast to personal preferences on SMS, and (b) the effect on consumers of producer message types and content on SMS, at both the profile level and the individual message level.
Contributors: Sopha, Matthew (Author) / Santanam, Raghu T (Thesis advisor) / Goul, Kenneth M (Committee member) / Gu, Bin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Gathering and managing software requirements, known as Requirements Engineering (RE), is a significant and basic step in the Software Development Life Cycle (SDLC). Any error or defect introduced during the RE step propagates to later steps of the SDLC, where resolving it is more costly than resolving a defect introduced in those steps. In order to produce better-quality software, the requirements have to be free of defects. Verification and Validation (V&V) of requirements is performed to improve their quality by applying the V&V process to the Software Requirements Specification (SRS) document. Focusing V&V of software requirements on a specific domain helps improve quality. A large database of software requirements from software projects in different domains was created. Software requirements from commercial applications are the focus of this project; other domains (embedded, mobile, e-commerce, etc.) can be the focus of future efforts. V&V is done to inspect the requirements and improve their quality. Inspections are performed to detect defects in the requirements, and three approaches to inspecting software requirements are discussed: ad-hoc techniques, checklists, and scenario-based techniques. A more systematic, domain-specific technique for performing V&V of requirements is presented.
Contributors: Chughtai, Rehman (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Ajay (Committee member) / Millard, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This undergraduate thesis explores the efficacy of developing a translator generator in the Prolog programming language using Lexical Functional Grammars. A bidirectional machine translator between English and Hungarian, developed as a proof-of-concept case study, is discussed and assessed. The benefits and drawbacks of this approach as generalized to Machine Translation systems are also discussed, along with possible areas of future work.
Contributors: Lane, Ryan Andrew (Author) / Bansal, Ajay (Thesis director) / Bansal, Srividya (Committee member) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description
In this paper, I explore practical applications of neural networks for automated skin lesion identification. Visual characteristics are of primary importance in the recognition of skin diseases; hence, the development of deep neural network models proven capable of classifying skin lesions could change the face of modern medicine by extending the availability and lowering the cost of diagnostic care. Previous work has demonstrated the effectiveness of convolutional neural networks in image classification in general, with even higher accuracy achievable through data augmentation techniques, such as cropping, rotating, and flipping input images, along with more advanced, computationally intensive approaches. In this research, I provide an overview of Convolutional Neural Networks (CNNs) and of CNN implementation with TensorFlow and the Keras API in the context of image recognition and classification. I also experiment with a custom convolutional neural network architecture trained on the HAM10000 dataset. The dataset used for the case study was obtained from Harvard Dataverse and is maintained by the Medical University of Vienna. The HAM10000 dataset is a large collection of multi-source dermatoscopic images of common pigmented skin lesions and is available for academic research under the Creative Commons Attribution-NonCommercial 4.0 International Public License. With over ten thousand dermatoscopic images covering seven classes of benign and malignant skin lesions, the dataset is substantial for academic machine learning work on multiclass image classification. I discuss the successes and shortcomings of the model with respect to its application to the dataset.
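As an illustration of the kind of Keras model discussed in this abstract, below is a minimal sketch of a small CNN for seven-class lesion classification. The layer sizes, input resolution, and hyperparameters are illustrative assumptions, not the author's actual architecture.

# Minimal CNN sketch for seven-class skin lesion classification
# (illustrative sizes only; not the thesis's actual architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7            # HAM10000 covers seven lesion classes
IMG_SHAPE = (128, 128, 3)  # assumed input size after resizing

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer class labels
              metrics=["accuracy"])

Data augmentation of the kind mentioned above (cropping, rotating, flipping) would typically be applied to the training images before or during fitting.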
Contributors: Karaliova, Natallia (Author) / Bansal, Ajay (Thesis director) / Gonzalez-Sanchez, Javier (Committee member) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Currently, students at Arizona State University are restricted to physical cards when using their college's local currency. This currency, Maroon and Gold dollars (M&G), is a primary source of meal plans for many students. Relying on card readers costs students both security and convenience. Security is at risk because each card carries the student's identification number, which never changes; if a student loses their card, their account information is permanently compromised. Convenience is an issue because, currently, students must make a purchase in order to see their current account balance. Another major issue is that businesses must purchase external hardware in order to use the M&G system. An online or mobile system would eliminate the need for a physical card and allow businesses to operate without external card readers. Such a system would have access to the financial information of businesses and students at ASU; thus, it would require careful scrutiny by a well-trusted team of professionals before being implemented. My objective was to help bring such a system to life. To do this, I decided to build a mobile application prototype to serve as a baseline and to demonstrate the features of such a system. As a baseline, it needed a realistic, professional appearance and the ability to accurately demonstrate feature functionality. Before developing the app, I set out to determine the user interaction and user experience (UI/UX) designs by conducting a series of informal interviews with local students and businesses. After the designs were finalized, I started implementing the actual application in Android Studio. This creative project consists of a mobile application, a contained database, a GUI (graphical user interface) prototype, and a technical document.
Contributors: Reigel, Justin Bryce (Author) / Bansal, Ajay (Thesis director) / Lindquist, Timothy (Committee member) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In 2016, 37,461 automobile accident fatalities occurred in the United States ("Quick Facts 2016", 2017). Improving the safety of roads has traditionally been approached by governmental agencies, including the National Highway Traffic Safety Administration and state Departments of Transportation. In past literature, automobile crash data is analyzed using time-series prediction techniques to identify road segments and/or intersections likely to experience future crashes (Lord & Mannering, 2010). After dangerous zones have been identified, road modifications can be implemented, improving public safety. This project introduces a historical safety metric for evaluating the relative danger of roads in a road network. The metric can be used to update the routing choices of individual drivers, improving public safety by steering them away from historically more dangerous routes. It is constructed from crash frequency, severity, location, and traffic information. An analysis of publicly available crash and traffic data in Allegheny County, Pennsylvania is used to generate the historical safety metric for a specific road network. Methods for evaluating routes based on the metric are included, using the Mann-Whitney U test to evaluate the significance of routing decisions; the evaluation method requires that routes have at least 20 crashes to be compared with significance testing. The safety of the road network is visualized using a heatmap presenting the distribution of the metric throughout Allegheny County.
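To make the route-comparison step concrete, below is a minimal sketch using SciPy's Mann-Whitney U test on per-crash metric values from two routes. The gamma-distributed scores are synthetic placeholders; the thesis's actual metric combines crash frequency, severity, location, and traffic data.

# Compare two routes' per-crash safety metric values with a one-sided
# Mann-Whitney U test (synthetic data; at least 20 crashes per route,
# matching the requirement stated above).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
route_a = rng.gamma(shape=2.0, scale=1.0, size=25)  # placeholder metric values
route_b = rng.gamma(shape=3.0, scale=1.0, size=30)

# alternative="less": do route A's crash metric values tend to be lower
# (i.e., is route A historically safer) than route B's?
stat, p_value = mannwhitneyu(route_a, route_b, alternative="less")
if p_value < 0.05:
    print(f"Route A appears significantly safer than route B (p = {p_value:.3f})")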
Contributors: Gupta, Ariel Meron (Author) / Bansal, Ajay (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
In this project, the use of deep neural networks to select actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies, which form a dependency tree. An agent operating within these environments has access to little data about the environment before interacting with it, so it is crucial that the agent effectively utilize the tree of dependencies and its environmental surroundings to judge which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network combined with Q-learning techniques was employed as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (standard deviation 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and little pre-known information.
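Below is a minimal sketch of the deep Q-learning component described above, with a small network scoring candidate sub-goals. The state encoding, network shape, and hyperparameters are illustrative assumptions rather than the project's actual configuration.

# One temporal-difference update for a Q-network over sub-goal choices:
# Q(s, a) is regressed toward r + gamma * max_a' Q(s', a').
import tensorflow as tf

STATE_DIM, NUM_SUBGOALS = 16, 8  # assumed sizes
GAMMA = 0.95                     # discount factor

q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(NUM_SUBGOALS),  # one Q-value per candidate sub-goal
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def td_update(state, action, reward, next_state, done):
    target = reward
    if not done:
        target += GAMMA * float(tf.reduce_max(q_net(next_state[None])))
    with tf.GradientTape() as tape:
        q = q_net(state[None])[0, action]
        loss = (q - target) ** 2
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))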
Contributors: Koleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Calculus is an important subject that students need to succeed in to pursue STEM majors. This thesis focuses on the early detection of at-risk students in a calculus course, which can enable the proper intervention to help them succeed. Calculus has high failure rates, which is corroborated by data collected from Arizona State University showing that 40% of the 3,266 students whose data were used failed their calculus course. This thesis proposes to utilize educational big data to detect students at high risk of failure, for whom early detection and subsequent intervention can be useful. Some existing studies similar to this thesis use open-scale datasets with lower data counts and perform predictions on low-impact courses based on Massive Open Online Courses (MOOCs). In this thesis, an automatic method for detecting academically at-risk students is developed using learning management system (LMS) activity data along with student information system (SIS) data from Arizona State University (ASU) for the course Calculus for Engineers I (MAT 265). The method detects students at risk by employing machine learning to identify key features that contribute to a student's success. This thesis also proposes a new technique to convert the raw button-click data into button-click sequences that can be used as inputs to classifiers. In addition, advancements in Natural Language Processing can be adopted, using methods such as part-of-speech (POS) tagging and tools such as Facebook fastText word embeddings, to convert these button-click sequences into numeric vectors before feeding them into the classifiers. The thesis proposes two preprocessing techniques and evaluates them on three different machine learning ensembles to determine their performance across the two modalities of the class.
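As an illustration of the proposed click-sequence preprocessing, below is a minimal sketch that treats each student's button clicks as a "sentence" of tokens, embeds it with fastText, and feeds the resulting vectors to a classifier. The file names, click tokens, classifier choice, and the load_pass_fail_labels helper are all hypothetical.

# Embed button-click sequences with fastText and classify pass/fail outcomes.
# clicks.txt (hypothetical): one student per line, e.g.
#   "open_module play_video pause seek submit_quiz"
import fasttext
from sklearn.ensemble import RandomForestClassifier

embed = fasttext.train_unsupervised("clicks.txt", model="skipgram", dim=50)

def sequence_vector(click_sequence):
    # fastText averages the token vectors of the whitespace-separated "sentence"
    return embed.get_sentence_vector(click_sequence)

with open("clicks.txt") as f:
    X = [sequence_vector(line.strip()) for line in f]
y = load_pass_fail_labels("outcomes.csv")  # hypothetical helper: SIS outcome labels

clf = RandomForestClassifier(n_estimators=200).fit(X, y)  # one of several possible ensembles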
Contributors: Dileep, Akshay Kumar (Author) / Bansal, Ajay (Thesis advisor) / Cunningham, James (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created: 2021