Matching Items (15)
Description
Graph theory is a critical component of computer science and software engineering, with graph traversal and comprehension algorithms powering many of the largest problems in both industry and research. Engineers and researchers often have an accurate view of their target graph; however, they struggle to implement a correct and efficient search over that graph.

To facilitate rapid, correct, efficient, and intuitive development of graph-based solutions, we propose a new programming language construct: the search statement. Given a supra-root node, a procedure which determines the children of a given parent node, and optional definitions of the fail-fast acceptance or rejection of a solution, the search statement can conduct a search over any graph or network. Structurally, this statement is modelled after the common switch statement and is put into a largely imperative/procedural context to allow for immediate and intuitive development by most programmers. The Go programming language has been used as a foundation and proof of concept of the search statement, and a Go compiler which implements this construct is provided.
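
As a concrete illustration of the described semantics (not the thesis's actual Go syntax, which the abstract does not reproduce), the three inputs to a search statement map naturally onto an ordinary higher-order function. The sketch below is in Python for brevity; the names search, children, accept, and reject are assumptions, and breadth-first order is just one traversal the construct could plausibly use.

```python
from collections import deque

def search(root, children, accept, reject=lambda n: False):
    """Search driven by the three inputs the abstract names: a supra-root
    node, a children procedure, and fail-fast accept/reject predicates."""
    frontier = deque([root])
    visited = {root}
    while frontier:
        node = frontier.popleft()
        if reject(node):          # fail-fast rejection prunes this branch
            continue
        if accept(node):          # fail-fast acceptance ends the search
            return node
        for child in children(node):
            if child not in visited:
                visited.add(child)
                frontier.append(child)
    return None

# Example: first even number reachable from 1 by repeated +3 / +5 steps.
print(search(1, lambda n: [n + 3, n + 5] if n < 20 else [],
             lambda n: n % 2 == 0))   # -> 4
```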
ContributorsHenderson, Christopher (Author) / Bansal, Ajay (Thesis advisor) / Lindquist, Timothy (Committee member) / Acuna, Ruben (Committee member) / Arizona State University (Publisher)
Created2018
Description
This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better through recognizing different features present within the image. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors, and no facial expression datasets of that size are available. Given this limited availability of the data required to train a new CNN, the idea of naïve domain adaptation is explored. Instead of creating and using a new CNN trained specifically to extract features related to FER, a previously trained CNN, originally trained for another computer vision task, is used. Work for this research involved creating a system that can run a CNN, extract feature vectors from it, and classify those extracted features. Once this system was built, different aspects of it were tested and tuned: the pre-trained CNN that was used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
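
The abstract does not name the pre-trained network or the classifier configuration that performed best. As a hedged sketch of the naïve domain adaptation pipeline it describes, the following uses an ImageNet-trained ResNet-18 from torchvision as a stand-in feature extractor and a linear SVM as the classifier; train_faces and train_labels are hypothetical placeholders.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# Load a CNN pretrained on another vision task (ImageNet) and drop its
# classification head, keeping it purely as a frozen feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()       # features from the penultimate layer
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract(images):                    # images: list of PIL face crops
    batch = torch.stack([preprocess(im) for im in images])
    return backbone(batch).numpy()      # one 512-d vector per image

# Train a lightweight classifier on the frozen features (hypothetical data):
# clf = LinearSVC().fit(extract(train_faces), train_labels)
# preds = clf.predict(extract(test_faces))
```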
ContributorsEusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized “role” of the object being manipulated. The neural network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, suggesting a promising new framework for approaching generalized planning problems.
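
A minimal sketch of the two-headed architecture the abstract describes, assuming a fixed-length encoding of the abstract problem state; the state dimension, hidden sizes, and action and role counts below are illustrative placeholders, not values from the thesis.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Shared trunk feeding two heads: one scores the next action,
    the other scores the generalized role of the manipulated object."""
    def __init__(self, state_dim, n_actions, n_roles, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.action_head = nn.Linear(hidden, n_actions)
        self.role_head = nn.Linear(hidden, n_roles)

    def forward(self, state):
        h = self.trunk(state)
        return self.action_head(h), self.role_head(h)

# Joint training signal: cross-entropy on both predictions.
net = PolicyNet(state_dim=64, n_actions=4, n_roles=6)
action_logits, role_logits = net(torch.randn(32, 64))
loss = (nn.functional.cross_entropy(action_logits, torch.randint(4, (32,)))
        + nn.functional.cross_entropy(role_logits, torch.randint(6, (32,))))
```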
ContributorsNakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Medical records are increasingly being recorded in the form of electronic health records (EHRs), with a significant amount of patient data recorded as unstructured natural language text. Consequently, being able to extract and utilize clinical data present within these records is an important step in furthering clinical care. One important aspect of these records is the presence of prescription information. Existing techniques for extracting prescription information (medication names, dosages, frequencies, reasons for taking, and modes of administration) from unstructured text have focused on rule- and classifier-based methods. While state-of-the-art systems can be effective in extracting many types of information, they require significant effort to develop hand-crafted rules and conduct effective feature engineering. This paper presents a bidirectional LSTM with a CRF tagging layer, initialized with precomputed word embeddings, for extracting prescription information from sentences without significant feature engineering. The experimental results, obtained on the i2b2 2009 dataset, achieve a macro F1 measure of 0.8562 and scores above 0.9449 on four of the six categories, indicating significant potential for this model.
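
A minimal sketch of a BiLSTM-CRF tagger of the kind described, assuming BIO-style tags over the prescription categories. The third-party pytorch-crf package supplies the CRF layer here purely for illustration; the thesis's actual implementation, tag inventory, and dimensions are not specified in the abstract.

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # third-party: pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    """BiLSTM emission scores decoded with a CRF layer over BIO-style tags."""
    def __init__(self, embeddings, n_tags, hidden=128):
        super().__init__()
        # Initialize from precomputed word embeddings, as the paper describes.
        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=False)
        self.lstm = nn.LSTM(embeddings.size(1), hidden // 2,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(hidden, n_tags)
        self.crf = CRF(n_tags, batch_first=True)

    def loss(self, tokens, tags, mask):            # mask: bool padding mask
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)   # negative log-likelihood

    def predict(self, tokens, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)   # best tag path per sentence
```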
ContributorsRawal, Samarth Chetan (Author) / Baral, Chitta (Thesis director) / Anwar, Saadat (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description

Breast cancer is one of the most common types of cancer worldwide. Early detection and diagnosis are crucial for improving the chances of successful treatment and survival. In this thesis, several machine learning algorithms were evaluated and compared for predicting breast cancer malignancy from diagnostic features extracted from digitized images of breast tissue samples obtained by fine-needle aspiration. Breast cancer diagnosis typically involves a combination of mammography, ultrasound, and biopsy; machine learning algorithms can assist in detection and diagnosis by analyzing large amounts of data and identifying patterns that may not be discernible to the human eye. By using these algorithms, healthcare professionals can potentially detect breast cancer at an earlier stage, leading to more effective treatment and better patient outcomes. The results showed that the gradient boosting classifier performed best, achieving an accuracy of 96% on the test set, indicating that this algorithm can be a useful tool for healthcare professionals in the early detection and diagnosis of breast cancer.
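
The abstract does not name the dataset or hyperparameters, but scikit-learn's bundled Wisconsin diagnostic dataset matches the description (features computed from digitized fine-needle aspirate images, labeled benign or malignant), so a comparable experiment can be sketched as follows; the split and default settings are assumptions, not the thesis's exact setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 30 diagnostic features per sample, computed from digitized
# fine-needle aspirate images, labeled benign or malignant.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```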

ContributorsMallya, Aatmik (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description

This research paper explores the effects of data variance on the quality of Artificial Intelligence image generation models and the impact on a viewer's perception of the generated images. The study examines how the quality and accuracy of the images produced by these models are influenced by factors such as the size, labeling, and format of the training data. The findings suggest that reducing the training dataset size leads to a decrease in image coherence. The study also makes surprising discoveries regarding AI image generation models trained on highly varied datasets. In addition, the study involves a survey in which people were asked to rate the subjective realism of the generated images on a scale from 1 to 5 and to sort the images into their respective classes. The findings emphasize the importance of dataset variance and size as critical aspects of improving image generation models, as well as the implications of using AI technology in the future.
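
The abstract does not detail how the training sets were shrunk. One plausible way to probe the size axis while holding label balance fixed is a per-class subsample, sketched below; subsample, train_pairs, and train_generator are hypothetical names.

```python
import random
from collections import defaultdict

def subsample(samples, fraction, seed=0):
    """Per-class random subset, so shrinking the data doesn't skew labels."""
    by_class = defaultdict(list)
    for image, label in samples:
        by_class[label].append((image, label))
    rng = random.Random(seed)           # reproducible across fractions
    subset = []
    for group in by_class.values():
        rng.shuffle(group)
        subset.extend(group[: max(1, int(len(group) * fraction))])
    return subset

# Probe the size axis: train one generator per fraction, compare outputs.
# for frac in (1.0, 0.5, 0.25, 0.1):
#     train_generator(subsample(train_pairs, frac))
```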

ContributorsPunyamurthula, Rushil (Author) / Carter, Lynn (Thesis director) / Sarmento, Rick (Committee member) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description

User interface development on iOS is in a major transitional state as Apple introduces a declarative and interactive framework called SwiftUI. SwiftUI’s success depends on how well it integrates its new tooling for novice developers. This paper demonstrates and discusses where SwiftUI succeeds and fails at carving a new path for user interface development for new developers. This is done through comparisons against its existing imperative UI framework, UIKit, as well as by elaborating on the background of SwiftUI and giving examples of how SwiftUI works to help developers. The paper also discusses what led to SwiftUI and how it is currently faring on Apple's latest operating systems. SwiftUI is a framework growing and evolving to serve the needs of five very different platforms with code that claims to be simpler to write and easier to deploy. The world of UI programming in iOS has been dominated by a Storyboard canvas for years, but SwiftUI claims to link this graphic-first development process with the code programmers are used to by keeping them side by side in constant sync. This bold move requires interactive programming capable of recompilation on the fly. As this paper will discuss, SwiftUI has garnered a community of developers giving it the main property it needs to succeed: a component library.

ContributorsGilchrist, Ethan (Author) / Bansal, Ajay (Thesis director) / Balasooriya, Janaka (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2021-12
Description
Accurate pose initialization and pose estimation are crucial requirements in on-orbit space assembly and various other autonomous on-orbit tasks. However, pose initialization and pose estimation are much more difficult to do accurately and consistently in space. This is primarily due not only to the variable lighting conditions present in space, but also to the power requirements mandated by space-flyable hardware. This thesis investigates leveraging a deep learning approach for monocular one-shot pose initialization and pose estimation. A convolutional neural network was used to estimate the 6D pose of an assembly truss object. This network was trained by utilizing synthetic imagery generated from a simulation testbed. Furthermore, techniques to quantify the model uncertainty of the deep learning model were investigated and applied to the task of in-space pose estimation and pose initialization. The feasibility of this approach on low-power computational platforms was also tested. The results demonstrate that accurate pose initialization and pose estimation can be conducted using a convolutional neural network. In addition, the results show that the model uncertainty can be obtained from the network. Lastly, the use of deep learning for pose initialization and pose estimation, together with uncertainty quantification, was demonstrated to be feasible on low-power compute platforms.
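
The abstract does not say which uncertainty-quantification technique was applied. Monte Carlo dropout is one common choice for obtaining model uncertainty from an existing network, sketched here under that assumption; model and image are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def mc_dropout_pose(model, image, passes=30):
    """Monte Carlo dropout: keep dropout active at inference and treat the
    spread of repeated pose predictions as a model-uncertainty estimate."""
    model.eval()
    for m in model.modules():            # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        poses = torch.stack([model(image) for _ in range(passes)])
    # Mean as the 6D pose estimate, per-dimension std as its uncertainty.
    return poses.mean(dim=0), poses.std(dim=0)
```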
ContributorsKailas, Siva Maneparambil (Author) / Ben Amor, Heni (Thesis director) / Detry, Renaud (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
Emotion recognition in conversation has applications within numerous domains such as affective computing and medicine. Recent methods for emotion recognition jointly utilize conversational data over several modalities including audio, video, and text. However, state-of-the-art frameworks for this task do not focus on the feature extraction and feature fusion steps of this process. This thesis aims to improve the state-of-the-art method by incorporating two components to better accomplish these steps. By doing so, we are able to produce improved representations for the text modality and better model the relationships between all modalities. This paper proposes two methods which focus on these concepts and provide improved accuracy over the state-of-the-art framework for multimodal emotion recognition in dialogue.
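
The abstract does not specify the two proposed components, so the sketch below shows only a generic late-fusion baseline of the kind such frameworks build on: per-modality projections concatenated into a shared representation. The feature dimensions and emotion count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Project each modality's utterance features to a shared size, then
    concatenate and classify: one simple way to relate all modalities."""
    def __init__(self, dims, hidden=128, n_emotions=6):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.classify = nn.Sequential(
            nn.ReLU(), nn.Linear(hidden * len(dims), n_emotions))

    def forward(self, feats):            # feats: [audio, video, text] tensors
        fused = torch.cat([p(f) for p, f in zip(self.proj, feats)], dim=-1)
        return self.classify(fused)

# Hypothetical per-utterance feature sizes for audio, video, and text.
net = LateFusion(dims=[100, 512, 768])
logits = net([torch.randn(8, 100), torch.randn(8, 512), torch.randn(8, 768)])
```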
ContributorsRawal, Siddharth (Author) / Baral, Chitta (Thesis director) / Shah, Shrikant (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
Spotify, one of the most popular music streaming services, has many algorithms for recommending new music to users. However, at the core of their recommendations is the collaborative filtering algorithm, which recommends music based on what other people with similar tastes have listened to [1]. While this can produce highly relevant content recommendations, it tends to promote only popular content [2]. The popularity bias inherent in collaborative-filtering based systems can overlook music that fits a user’s taste, simply because nobody else is listening to it. One possible solution to this problem is to recommend music based on features of the music itself, and recommend songs which have similar features. Here, a method for extracting high-level features representing the mood of a song is presented, with the aim of tailoring music recommendations to an individual's mood, and providing music recommendations with diversity in popularity.
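
As a hedged illustration of feature-based recommendation (the paper's extracted mood features are not enumerated in the abstract), the sketch below ranks a catalog by cosine similarity in a hypothetical two-dimensional valence/energy mood space, with no popularity signal at all.

```python
import numpy as np

def recommend(seed, catalog, k=5):
    """Rank songs by cosine similarity of mood feature vectors,
    ignoring how many other listeners play them."""
    names = list(catalog)
    feats = np.array([catalog[n] for n in names])
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    query = np.asarray(seed) / np.linalg.norm(seed)
    order = np.argsort(feats @ query)[::-1]      # most similar first
    return [names[i] for i in order[:k]]

# Hypothetical 2-D mood space: (valence, energy), each in [0, 1].
catalog = {"song_a": [0.9, 0.8], "song_b": [0.2, 0.3], "song_c": [0.85, 0.7]}
print(recommend([0.9, 0.75], catalog, k=2))      # -> ['song_a', 'song_c']
```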
ContributorsGomez, Luis Angel (Author) / Burger, Kevin (Thesis director) / Hernández, Alberto (Committee member) / School of Arts, Media and Engineering (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05