Matching Items (200)
Description
The fields of pattern recognition and machine learning are on a fundamental quest to design systems that can learn the way humans do. One important aspect of human intelligence that has so far not been given sufficient attention is the capability of humans to express when they are certain about a decision, or when they are not. Machine learning techniques today are not yet fully equipped to be trusted with this critical task. This work seeks to address this fundamental knowledge gap. Existing approaches that provide a measure of confidence on a prediction, such as learning algorithms based on Bayesian theory or Probably Approximately Correct (PAC) theory, require strong assumptions or often produce results that are not practical or reliable. The recently developed Conformal Predictions (CP) framework - which is based on the principles of hypothesis testing, transductive inference and algorithmic randomness - provides a game-theoretic approach to the estimation of confidence with several desirable properties, such as online calibration and generalizability to all classification and regression methods. This dissertation builds on the CP theory to compute reliable confidence measures that aid decision-making in real-world problems through: (i) development of a methodology for learning a kernel function (or distance metric) for optimal and accurate conformal predictors; (ii) validation of the calibration properties of the CP framework when applied to multi-classifier (or multi-regressor) fusion; and (iii) development of a methodology to extend the CP framework to continuous learning, by using the framework for online active learning. These contributions are validated on four real-world problems from the domains of healthcare and assistive technologies: two classification-based applications (risk prediction in cardiac decision support and multimodal person recognition), and two regression-based applications (head pose estimation and saliency prediction in images). The results show that: (i) multiple kernel learning can effectively increase efficiency in the CP framework; (ii) quantile p-value combination methods provide a viable solution for fusion in the CP framework; and (iii) eigendecomposition of p-value difference matrices can serve as an effective measure for online active learning. Together, these results demonstrate the promise and potential of these contributions for multimedia pattern recognition problems in real-world settings.
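The abstract does not spell out the CP machinery; as an illustration of the underlying idea only (a minimal Python sketch using an assumed nearest-neighbour nonconformity measure, not the dissertation's implementation), conformal p-values can be computed from nonconformity scores as follows:

```python
# Minimal sketch of conformal prediction p-values (illustrative only; not the
# dissertation's implementation). Uses a simple nearest-neighbour nonconformity
# measure over a small labelled calibration set.
import numpy as np

def nonconformity(x, y, X, Y):
    """Distance to the nearest example of the same class divided by the
    distance to the nearest example of a different class."""
    d = np.linalg.norm(X - x, axis=1)
    same, other = d[Y == y], d[Y != y]
    return same.min() / (other.min() + 1e-12)

def conformal_p_values(x_test, X_cal, y_cal, labels):
    """For each candidate label, the p-value is the fraction of calibration
    scores at least as large as the test point's nonconformity score."""
    p = {}
    for lab in labels:
        a_test = nonconformity(x_test, lab, X_cal, y_cal)
        a_cal = np.array([
            nonconformity(X_cal[i], y_cal[i],
                          np.delete(X_cal, i, axis=0),
                          np.delete(y_cal, i))
            for i in range(len(y_cal))
        ])
        p[lab] = (np.sum(a_cal >= a_test) + 1) / (len(a_cal) + 1)
    return p

# The label with the largest p-value is predicted; the second-largest p-value
# gives a notion of how confident that prediction is.
X_cal = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y_cal = np.array([0, 0, 1, 1])
print(conformal_p_values(np.array([0.05, 0.1]), X_cal, y_cal, labels=[0, 1]))
```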
Contributors: Nallure Balasubramanian, Vineeth (Author) / Panchanathan, Sethuraman (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Vovk, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
The exponential rise of unmanned aerial vehicles has created a need for accurate pose estimation under extreme conditions. Visual odometry (VO) is the estimation of the position and orientation of a vehicle from analysis of a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques like wheel odometry, inertial measurement systems and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera VO approach, stereo-based visual odometry (SVO), in the presence of various deterrent factors such as shadows, extremely bright outdoor scenes and wet conditions. To allow VO to be implemented on any generic vehicle, a discussion of porting the algorithm to Android handsets is also presented. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed using the sum-of-absolute-differences technique for stereo matching on rectified and pre-filtered stereo frames; epipolar geometry is used to simplify the matching problem. The second step involves feature detection and temporal matching: features are detected with the Harris corner detector and matched between two consecutive frames using the Lucas-Kanade feature tracker. In the third step, the 3D coordinates of these matched features are computed from the disparity map obtained in the first step and are related to each other by a translation and a rotation, which are computed using least-squares minimization with the aid of singular value decomposition (SVD); random sample consensus (RANSAC) is used for outlier rejection. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the final ground-truth position obtained from GPS. The SVO showed an error of around 1% under normal conditions for a path length of 60 m and around 3% in bright conditions for a path length of 130 m. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% over path lengths of 20 m and 100 m respectively.
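The third step described above (recovering the rotation and translation between matched 3D feature sets by least squares with SVD) is the standard Kabsch-style alignment; the sketch below is an illustrative Python version under that assumption, not the thesis code.

```python
# Illustrative SVD-based least-squares rigid alignment (Kabsch/Umeyama style)
# between two matched 3D point sets; not the thesis implementation.
import numpy as np

def rigid_transform_3d(A, B):
    """Find R, t minimising ||R @ A_i + t - B_i||^2 over matched points.
    A and B are (N, 3) arrays with row i of A matched to row i of B."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)      # centroids
    H = (A - cA).T @ (B - cB)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

# In a stereo-VO pipeline, RANSAC would repeatedly call this on random minimal
# subsets of the matched 3D features and keep the transform with most inliers.
```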
Contributors: Dhar, Anchit (Author) / Saripalli, Srikanth (Thesis advisor) / Li, Baoxin (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Debugging is a hard task. Debugging multi-threaded applications, with their inherent non-determinism, is all the more difficult. Non-determinism of any kind adds to the difficulty of cyclic debugging. In Android applications, which are written in Java, threads and concurrency constructs introduce non-determinism into program execution. Even with the same input, consecutive runs may not be the same, and reproducing the same bug is a challenging task. This makes it difficult to understand and analyze the execution behavior or to identify the source of a failing execution. This thesis introduces a replay mechanism for Android applications written in Java, based on Lamport clocks. The tool provides the user with a controlled debugging environment in which the program execution follows the same partially ordered happened-before dependencies among threads as the recorded execution. Significant events such as thread creation and synchronization are recorded at run time; they can later be replayed offline, as many times as needed, to pinpoint and fix an error in the application. It is a software-based approach and has been implemented by modifying the Dalvik virtual machine in the Android platform. The replay method described in this thesis is independent of the underlying operating system scheduler.
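As a conceptual illustration of the Lamport-clock ordering the tool relies on (a toy Python sketch, not the Dalvik VM instrumentation described in the thesis), logical timestamps can be maintained and logged as follows:

```python
# Toy illustration of Lamport logical clocks, which induce the partial
# happened-before order that a record/replay tool can log and later enforce.
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        """A significant local event (e.g. lock acquire, thread start)."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp attached to a message or synchronization action."""
        self.time += 1
        return self.time

    def receive(self, sent_time):
        """On receipt, jump ahead of both the local and the sender's clock."""
        self.time = max(self.time, sent_time) + 1
        return self.time

# During recording, (thread_id, event, timestamp) tuples are logged; during
# replay, events are released only in an order consistent with the timestamps.
t1, t2 = LamportClock(), LamportClock()
ts = t1.send()      # thread 1 releases a lock
t2.receive(ts)      # thread 2 acquires it afterwards
```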
Contributors: Girme, Rohit (Author) / Lee, Yann-Hang (Thesis advisor) / Chatha, Karamvir (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advances in ubiquitous, pervasive and wearable computing have resulted in the development of low-bandwidth, data-rich environmental and body sensor networks, providing a reliable and non-intrusive methodology for capturing activity data from humans and the environments they inhabit. Assistive technologies that promote independent living amongst the elderly and individuals with cognitive impairment are a major motivating factor for sensor-based activity recognition systems. However, discerning relevant activity information from sensor streams such as accelerometers is a non-trivial task and an ongoing research area. The difficulty stems from factors such as spatio-temporal variations in movement patterns induced by different individuals and contexts, the sparse occurrence of relevant activity gestures in a continuous stream of irrelevant movements, and the lack of real-world data for training learning algorithms. This work addresses these challenges in the context of wearable accelerometer-based simple activity and gesture recognition. The proposed computational framework utilizes discriminative classifiers for learning the spatio-temporal variations in movement patterns and demonstrates its effectiveness through a real-time simple activity recognition system and short-duration, non-repetitive activity gesture recognition. Furthermore, it proposes adaptive discriminative threshold models, trained only on relevant activity gestures, for filtering irrelevant movement patterns in a continuous stream. These models are integrated into a gesture spotting network for detecting activity gestures involved in complex activities of daily living. The framework addresses the lack of real-world data for training by using auxiliary yet related data samples in a transfer learning setting. Finally, the problem of predicting the activity tasks involved in the execution of a complex activity of daily living is described, and a solution based on hierarchical Markov models is discussed and evaluated.
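As a rough illustration of the threshold-model idea (a hedged Python sketch with placeholder features, classifier, threshold rule and toy data, not the dissertation's models), a discriminative classifier trained only on gesture windows can reject low-confidence windows as irrelevant movement:

```python
# Illustrative threshold-based gesture spotting on accelerometer windows;
# features, classifier, threshold rule and data are placeholders only.
import numpy as np
from sklearn.svm import SVC

def window_features(w):
    """Simple per-axis statistics for one accelerometer window of shape (T, 3)."""
    return np.concatenate([w.mean(axis=0), w.std(axis=0),
                           np.abs(np.diff(w, axis=0)).mean(axis=0)])

# Train a discriminative classifier on labelled gesture windows only (toy data).
rng = np.random.default_rng(0)
X_train = np.stack([window_features(rng.normal(size=(50, 3))) for _ in range(40)])
y_train = rng.integers(0, 2, size=40)            # two toy gesture classes
clf = SVC(probability=True).fit(X_train, y_train)

# Derive an acceptance threshold from the training scores themselves (here the
# 10th percentile of the winning-class probability), so windows of irrelevant
# movement scoring below it are rejected rather than mislabelled as gestures.
train_conf = clf.predict_proba(X_train).max(axis=1)
threshold = np.percentile(train_conf, 10)

def spot(window):
    p = clf.predict_proba(window_features(window).reshape(1, -1))[0]
    return int(p.argmax()) if p.max() >= threshold else None   # None = no gesture
```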
Contributors: Chatapuram Krishnan, Narayanan (Author) / Panchanathan, Sethuraman (Thesis advisor) / Sundaram, Hari (Committee member) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Cook, Diane (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
With recent advances in computational power, algorithmic trading has become one of the primary approaches to trading on the stock market. To understand why and how these strategies have been effective, this project examines the complete process of creating tools and applications to analyze and predict stock prices in order to perform low-frequency trading. The project is composed of three main components. The first component integrates several public resources to acquire, process and store financial trading data for use by the other components. Alpha Vantage, a freely available API, provides an accurate and comprehensive dataset of features for each requested stock ticker. The second component is researching, prototyping and implementing various trading algorithms in code. We began by focusing on the mean reversion algorithm as a proof of concept for developing meaningful trading strategies and identifying patterns within our datasets. To augment our market prediction power ("alpha"), we implemented a long short-term memory (LSTM) recurrent neural network. Neural networks are an effective but often complex tool used frequently in data science when traditional methods are found lacking. The last component is to optimize, analyze, compare and contrast all of the algorithms and identify key features in order to assess the overall effectiveness of each algorithm. We were able to identify conclusively which aspects of each algorithm provided better alpha, and we created an entire pipeline to automate this process for live trading. An additional reason for automation is to provide an educational framework so that anyone interested in quantitative finance can leverage this project to gain further insight.
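As a hedged illustration of a mean-reversion rule of the kind described (the project's actual strategy, parameters and data pipeline are not reproduced here), the sketch below derives long/short signals from the z-score of price against its rolling mean:

```python
# Minimal mean-reversion signal sketch (illustrative; not the project's code).
# Prices could come from any source, e.g. daily closes fetched via the
# Alpha Vantage API and loaded into a pandas Series.
import pandas as pd

def mean_reversion_signals(close: pd.Series, window: int = 20, z_entry: float = 1.0):
    """Go long when price is well below its rolling mean, short when well above."""
    mean = close.rolling(window).mean()
    std = close.rolling(window).std()
    z = (close - mean) / std
    signal = pd.Series(0, index=close.index)
    signal[z < -z_entry] = 1      # price unusually low -> expect reversion upward
    signal[z > z_entry] = -1      # price unusually high -> expect reversion downward
    return signal

# Example usage with placeholder data:
# closes = pd.Series([...], index=pd.date_range("2019-01-01", periods=250))
# positions = mean_reversion_signals(closes)
```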
Contributors: Yurowkin, Alexander (Co-author) / Kumar, Rohit (Co-author) / Welfert, Bruno (Thesis director) / Li, Baoxin (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In this paper, I explore practical applications of neural networks for automated skin lesion identification. Visual characteristics are of primary importance in the recognition of skin diseases; hence, the development of deep neural network models capable of classifying skin lesions can potentially change the face of modern medicine by extending the availability and lowering the cost of diagnostic care. Previous work has demonstrated the effectiveness of convolutional neural networks in image classification in general, with even higher accuracy achievable through data augmentation techniques such as cropping, rotating and flipping input images, along with more advanced, computationally intensive approaches. In this research, I provide an overview of convolutional neural networks (CNNs) and of CNN implementation with TensorFlow and the Keras API in the context of image recognition and classification. I also experiment with a custom convolutional neural network architecture trained on the HAM10000 dataset. The dataset used for the case study is obtained from Harvard Dataverse and is maintained by the Medical University of Vienna. HAM10000 is a large collection of multi-source dermatoscopic images of common pigmented skin lesions and is available for academic research under the Creative Commons Attribution-NonCommercial 4.0 International Public License. With over ten thousand dermatoscopic images covering seven classes of benign and malignant skin lesions, the dataset is well suited to academic machine learning work on multiclass image classification. I discuss the successes and shortcomings of the model with respect to its application to this dataset.
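A small Keras model of the kind discussed might look like the sketch below; the layer sizes, input resolution and augmentation choices are illustrative assumptions, not the architecture used in the thesis.

```python
# Illustrative Keras CNN for 7-class skin-lesion classification; layer sizes,
# input resolution and augmentation are assumptions, not the thesis's model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 3), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Light in-graph data augmentation (random flips and rotations).
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_model()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # HAM10000 image batches
```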
Contributors: Karaliova, Natallia (Author) / Bansal, Ajay (Thesis director) / Gonzalez-Sanchez, Javier (Committee member) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Uninformed people frequently kill snakes, fearing for their safety, without knowing whether they are venomous or harmless. To prevent unnecessary killings and to encourage people to be safe around venomous snakes, proper identification is important. This work seeks to preserve wild native Arizona snakes and promote a general interest in them by using a bag-of-features approach to classify images of native Arizona snakes as venomous or non-venomous. The image category classifier was implemented in MATLAB and trained on a set of 245 images of native Arizona snakes (171 non-venomous, 74 venomous). To test this approach, 10-fold cross-validation was performed, yielding an average accuracy of 0.7772. While this approach is functional, a higher average accuracy is needed for it to be reliable. False positives may occur when the features capture color or pattern, which can be similar between venomous and non-venomous snakes due to mimicry; polymorphic traits, color morphs, natural variation and juveniles that exhibit different colors can cause false negatives and misclassification. Future work includes image pre-processing before training, such as improving brightness and contrast or converting to grayscale; interactively specifying or generating regions of interest for feature detection; and reducing the false negative rate while improving the true positive rate. Further study with a larger and more balanced image set is needed to evaluate performance. This work may serve as a tool for herpetologists to assist in their field research and to classify large image sets.
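As a rough Python analogue of a bag-of-features pipeline (ORB descriptors, a learned visual vocabulary and an SVM with 10-fold cross-validation; the thesis itself used MATLAB's image category classifier, and all parameters here are assumptions), one might write:

```python
# Rough Python analogue of a bag-of-features image classifier; vocabulary size
# and parameters are illustrative assumptions, not the thesis's MATLAB setup.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def orb_descriptors(image_paths):
    orb = cv2.ORB_create()
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 32), np.uint8))
    return per_image

def bag_of_features(image_paths, labels, vocab_size=200):
    per_image = orb_descriptors(image_paths)
    all_desc = np.vstack([d for d in per_image if len(d)]).astype(np.float32)
    vocab = MiniBatchKMeans(n_clusters=vocab_size, random_state=0).fit(all_desc)
    # Encode each image as a normalised histogram of visual-word occurrences.
    X = np.zeros((len(per_image), vocab_size))
    for i, desc in enumerate(per_image):
        if len(desc):
            words = vocab.predict(desc.astype(np.float32))
            counts = np.bincount(words, minlength=vocab_size)
            X[i] = counts / counts.sum()
    # 10-fold cross-validated accuracy of a linear SVM on the histograms.
    scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=10)
    return X, scores.mean()
```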
Contributors: Ip, Melissa A (Author) / Li, Baoxin (Thesis director) / Chandakkar, Parag (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Currently, students at Arizona State University are restricted to physical cards when using their college's local currency, Maroon and Gold dollars (M&G), which is a primary source of meal plans for many students. Relying on card readers costs students both security and convenience. Security is at risk because the student identification number on each card never changes; if a student loses their card, their account information is permanently compromised. Convenience is an issue because students currently must make a purchase in order to see their account balance. Another major issue is that businesses must purchase external hardware in order to use the M&G system. An online or mobile system would eliminate the need for a physical card and allow businesses to operate without external card readers. Such a system would have access to the financial information of businesses and students at ASU, so it would require close scrutiny by a well-trusted team of professionals before being implemented. My objective was to help bring such a system to life. To do this, I built a mobile application prototype to serve as a baseline and to demonstrate the features of such a system. As a baseline, it needed a realistic, professional appearance and the ability to accurately demonstrate feature functionality. Before developing the app, I determined the user interaction and user experience (UI/UX) designs by conducting a series of informal interviews with local students and businesses. After the designs were finalized, I implemented the application in Android Studio. This creative project consists of a mobile application, a contained database, a GUI (graphical user interface) prototype, and a technical document.
Contributors: Reigel, Justin Bryce (Author) / Bansal, Ajay (Thesis director) / Lindquist, Timothy (Committee member) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In 2016, 37,461 automobile accident fatalities occurred in the United States ("Quick Facts 2016", 2017). Improving the safety of roads has traditionally been approached by governmental agencies, including the National Highway Traffic Safety Administration and State Departments of Transportation. In past literature, automobile crash data is analyzed using time-series prediction techniques to identify road segments and/or intersections likely to experience future crashes (Lord & Mannering, 2010). After dangerous zones have been identified, road modifications can be implemented to improve public safety. This project introduces a historical safety metric for evaluating the relative danger of roads in a road network. The metric can be used to update the routing choices of individual drivers, improving public safety by avoiding historically more dangerous routes. It is constructed using crash frequency, severity, location and traffic information. An analysis of publicly available crash and traffic data for Allegheny County, Pennsylvania is used to generate the historical safety metric for a specific road network. Methods for evaluating routes based on the metric are included, using the Mann-Whitney U test to evaluate the significance of routing decisions. The evaluation method presented requires that routes have at least 20 crashes before they can be compared with significance testing. The safety of the road network is visualized using a heatmap that presents the distribution of the metric throughout Allegheny County.
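As an illustration of the significance test described (function names and data are placeholders, and the metric construction itself is not reproduced here), two candidate routes' per-crash metric values could be compared as follows:

```python
# Illustrative comparison of two routes' historical safety metric values using
# the Mann-Whitney U test; data are placeholders, not the project's Allegheny
# County dataset.
from scipy.stats import mannwhitneyu

def compare_routes(metric_route_a, metric_route_b, alpha=0.05, min_crashes=20):
    """Each argument is a list of per-crash metric values along a route.
    Returns (significant, p_value), where significant is True if route A's
    metric values are significantly lower (safer) than route B's."""
    if len(metric_route_a) < min_crashes or len(metric_route_b) < min_crashes:
        raise ValueError("need at least %d crashes per route" % min_crashes)
    stat, p_value = mannwhitneyu(metric_route_a, metric_route_b,
                                 alternative="less")
    return p_value < alpha, p_value

# Example usage with per-crash metric values gathered along each route:
# safer, p = compare_routes(route_a_metrics, route_b_metrics)
```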
Contributors: Gupta, Ariel Meron (Author) / Bansal, Ajay (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
In this project, the use of deep neural networks for selecting actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies, which form a dependency tree. An agent operating within these environments has access to little data about the environment before interacting with it, so it is crucial that the agent can effectively utilize the tree of dependencies and its environmental surroundings to judge which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network in combination with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternative models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (standard deviation 14.69%) over the best alternative agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment that has recursive sub-goal dependencies and little pre-known information.
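As a deliberately simplified stand-in for the deep Q-network (tabular Q-learning over sub-goal choices, with a hypothetical environment interface of reset, is_done, available_subgoals and execute), the core update looks like this:

```python
# Simplified tabular Q-learning stand-in for the project's deep Q-network; the
# environment interface used here is hypothetical, not the project's code.
import random
from collections import defaultdict

def train_agent(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)                       # Q[(state, subgoal)] -> value
    for _ in range(episodes):
        state = env.reset()
        while not env.is_done(state):
            actions = env.available_subgoals(state)
            if random.random() < epsilon:        # explore a random sub-goal
                a = random.choice(actions)
            else:                                # exploit current estimates
                a = max(actions, key=lambda s: Q[(state, s)])
            next_state, cost = env.execute(state, a)
            # Reward is negative cost, so minimising cost maximises return.
            best_next = max((Q[(next_state, s)]
                             for s in env.available_subgoals(next_state)),
                            default=0.0)
            Q[(state, a)] += alpha * (-cost + gamma * best_next - Q[(state, a)])
            state = next_state
    return Q
```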
Contributors: Koleber, Derek (Author) / Acuna, Ruben (Thesis director) / Bansal, Ajay (Committee member) / W.P. Carey School of Business (Contributor) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05