Matching Items (18)

Description
This paper presents work done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors, and the extracted information can be used to better understand the image by recognizing the different features present within it. Deep CNNs, however, require training sets that can exceed a million pictures in order to fine-tune their feature detectors, and no datasets of that size are available for facial expressions. Due to this limited availability of training data, the idea of naïve domain adaptation is explored: instead of creating and training a new CNN specifically to extract features for FER, a CNN previously trained for another computer vision task is reused. Work for this research involved creating a system that can run a CNN, extract feature vectors from it, and classify those extracted features. Once this system was built, different aspects of it were tested and tuned: the pre-trained CNN that was used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
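The abstract does not name the pre-trained network, the extraction layer, or the classifier it settled on, so the following is a minimal sketch of the pipeline under stated assumptions: an ImageNet-pretrained VGG-16, features taken after the first fully connected layer, a linear SVM, and hypothetical `train_paths`/`train_labels` standing in for a labeled FER dataset.

```python
# Sketch of naive domain adaptation for FER. Assumptions (not stated
# in the abstract): VGG-16 pretrained on ImageNet as the source CNN,
# features from the first fully connected layer, linear SVM classifier.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.eval()
# Truncate the classifier head so the forward pass stops after fc6 (4096-d).
cnn.classifier = torch.nn.Sequential(*list(cnn.classifier.children())[:2])

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract(path):
    """Return a 4096-d feature vector for one face image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return cnn(x).squeeze(0).numpy()

# train_paths / train_labels: hypothetical labeled FER dataset.
features = [extract(p) for p in train_paths]
clf = LinearSVC().fit(features, train_labels)
```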
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
YouTube video bots constantly generate videos and post them on the platform. These bot-generated videos not only negatively influence the YouTube audience but also cost YouTube extra resources to host. The goal of this project is to build a classifier that identifies bot-generated channels using a deep learning framework. We designed the framework to take text, audio, and video features into account; this thesis focuses on the text classification work.
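As a rough illustration of the text branch, the sketch below builds a small Keras classifier over channel text; the bidirectional LSTM, the layer sizes, and the `train_texts`/`train_labels` variables are illustrative assumptions, not the thesis's actual design.

```python
# Sketch: binary text classifier over channel titles/descriptions.
import tensorflow as tf
from tensorflow.keras import layers

max_tokens, seq_len = 20000, 200
vectorize = layers.TextVectorization(max_tokens=max_tokens,
                                     output_sequence_length=seq_len)
vectorize.adapt(train_texts)  # train_texts: list of channel text strings

model = tf.keras.Sequential([
    vectorize,
    layers.Embedding(max_tokens, 64),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),  # P(channel is bot-generated)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(tf.constant(train_texts), tf.constant(train_labels), epochs=5)
```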
Contributors: Sai, Lun (Author) / Benjamin, Victor (Thesis director) / Lin, Elva S.Y. (Committee member) / Department of Information Systems (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized "role" of the object being manipulated. The network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, suggesting a promising new framework for approaching generalized planning problems.
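A minimal sketch of such a two-headed network is shown below, assuming the abstract state is already encoded as a fixed-size vector; the encoding size, layer widths, and action/role counts are placeholders, not the thesis's actual architecture.

```python
# Sketch: shared trunk with two classification heads, one for the
# best next action and one for the manipulated object's role.
import torch
import torch.nn as nn

class PlanNet(nn.Module):
    def __init__(self, state_dim, n_actions, n_roles):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.action_head = nn.Linear(128, n_actions)  # best next action
        self.role_head = nn.Linear(128, n_roles)      # generalized "role"

    def forward(self, state):
        h = self.trunk(state)
        return self.action_head(h), self.role_head(h)

net = PlanNet(state_dim=64, n_actions=10, n_roles=5)  # assumed sizes
action_logits, role_logits = net(torch.randn(1, 64))
```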
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Medical records are increasingly being kept as electronic health records (EHRs), with a significant amount of patient data recorded as unstructured natural language text. Consequently, being able to extract and utilize the clinical data present within these records is an important step in furthering clinical care. One important aspect of these records is prescription information. Existing techniques for extracting prescription information (medication names, dosages, frequencies, reasons for taking, and modes of administration) from unstructured text have focused on rule- and classifier-based methods. While state-of-the-art systems can be effective in extracting many types of information, they require significant effort to develop hand-crafted rules and to conduct effective feature engineering. This paper presents a bidirectional LSTM with a CRF tagging layer, initialized with precomputed word embeddings, for extracting prescription information from sentences without significant feature engineering. The experimental results on the i2b2 2009 dataset achieve a macro F1 score of 0.8562, with scores above 0.9449 on four of the six categories, indicating significant potential for this model.
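A compact sketch of a BiLSTM-CRF tagger of this kind follows, using the third-party pytorch-crf package; the embedding and hidden sizes are illustrative, and in practice the embedding layer would be initialized from the precomputed word vectors the paper mentions.

```python
# Sketch: BiLSTM emissions feeding a CRF layer for sequence tagging
# of prescription fields (drug, dosage, frequency, ...).
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # load pretrained vectors here
        self.lstm = nn.LSTM(emb_dim, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.emit = nn.Linear(hidden, n_tags)  # per-token tag scores
        self.crf = CRF(n_tags, batch_first=True)

    def loss(self, tokens, tags):
        out, _ = self.lstm(self.emb(tokens))
        return -self.crf(self.emit(out), tags)  # negative log-likelihood

    def decode(self, tokens):
        out, _ = self.lstm(self.emb(tokens))
        return self.crf.decode(self.emit(out))  # best tag sequence per sentence
```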
Contributors: Rawal, Samarth Chetan (Author) / Baral, Chitta (Thesis director) / Anwar, Saadat (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Because of its difficulty, organic chemistry receives much research attention across the nation aimed at developing more efficient and effective ways to teach it. As part of that effort, Dr. Ian Gould at ASU is developing an online organic chemistry educational website that provides help to students, adapts to their responses, and collects data about their performance. This thesis creative project addresses the design and implementation of an input parser for the organic chemistry reagent questions that appear on his website. After students used the input form throughout the Spring 2013 semester in Dr. Gould's organic chemistry class, the data gathered from their usage was analyzed and feedback was collected. The feedback obtained from students was positive and suggested that the input parser accomplished the educational goals it sought to meet.
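The abstract does not describe the parser's grammar, but a sketch of the kind of normalization such a parser might perform is given below; the alias table and the numbered-step answer format are hypothetical examples, not the project's actual rules.

```python
# Sketch: normalize a free-form reagent answer like "1. O3  2. NaBH4"
# into canonical reagent names, ignoring step numbers, case, and
# punctuation. The alias table is a hypothetical example.
import re

ALIASES = {
    "o3": "ozone",
    "h2so4": "sulfuric acid",
    "nabh4": "sodium borohydride",
}

def parse_reagents(answer):
    """Return a list of canonical reagent names, one per step."""
    steps = re.split(r"\d+\s*[.)]", answer)           # split on '1.' '2)' ...
    tokens = [s.strip(" ,;").lower() for s in steps if s.strip()]
    return [ALIASES.get(t, t) for t in tokens]

print(parse_reagents("1. O3 2. NaBH4"))  # ['ozone', 'sodium borohydride']
```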
Contributors: Beerman, Eric Christopher (Author) / Gould, Ian (Thesis director) / Wilkerson, Kelly (Committee member) / Mosca, Vince (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description

Breast cancer is one of the most common types of cancer worldwide, and early detection and diagnosis are crucial for improving the chances of successful treatment and survival. In this thesis, several machine learning algorithms were evaluated and compared for predicting breast cancer malignancy from diagnostic features extracted from digitized images of breast tissue samples obtained by fine-needle aspiration. Breast cancer diagnosis typically involves a combination of mammography, ultrasound, and biopsy; machine learning algorithms can assist by analyzing large amounts of data and identifying patterns that may not be discernible to the human eye, potentially allowing healthcare professionals to detect breast cancer at an earlier stage. The results showed that the gradient boosting classifier performed best, achieving an accuracy of 96% on the test set, indicating that this algorithm can be a useful tool in the early detection and diagnosis of breast cancer and may lead to improved patient outcomes.
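The thesis does not name its dataset, but the features described (computed from digitized fine-needle aspirate images) match the Wisconsin Diagnostic Breast Cancer dataset bundled with scikit-learn, so a minimal sketch of the winning setup might look like the following; the default hyperparameters and the 80/20 split are assumptions.

```python
# Sketch: gradient boosting on fine-needle aspirate features.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```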

ContributorsMallya, Aatmik (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description

This research paper explores the effects of data variance on the quality of Artificial Intelligence image generation models and on viewers' perception of the generated images. The study examines how the quality and accuracy of the images produced by these models are influenced by factors such as the size, labeling, and format of the training data. The findings suggest that reducing the training dataset size leads to a decrease in image coherence, and the study reports surprising results for models trained on highly varied datasets. In addition, the study includes a survey in which people were asked to rate the subjective realism of the generated images on a scale of 1 to 5 and to sort the images into their respective classes. The findings emphasize the importance of dataset variance and size as critical aspects of improving image generation models, as well as the implications of using AI technology in the future.
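One way to run the size experiment the study describes is to train the same model on progressively smaller random subsets of the data, as in this sketch; the fractions and the `train_generator`/`evaluate_coherence` hooks are hypothetical placeholders for the study's actual pipeline.

```python
# Sketch: dataset-size ablation via random subsampling.
import random

def subsets(dataset, fractions=(1.0, 0.5, 0.25, 0.1), seed=0):
    """Yield (fraction, subset) pairs for training-size experiments."""
    rng = random.Random(seed)
    for frac in fractions:
        k = max(1, int(len(dataset) * frac))
        yield frac, rng.sample(dataset, k)

for frac, subset in subsets(list(range(10_000))):
    print(f"train run with {frac:.0%} of the data: {len(subset)} samples")
    # train_generator(subset); evaluate_coherence(...)  # hypothetical hooks
```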

Contributors: Punyamurthula, Rushil (Author) / Carter, Lynn (Thesis director) / Sarmento, Rick (Committee member) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description
Accurate pose initialization and pose estimation are crucial requirements for on-orbit space assembly and various other autonomous on-orbit tasks. However, both are much more difficult to do accurately and consistently in space, due not only to the variable lighting conditions there but also to the power constraints imposed by space-flyable hardware. This thesis investigates a deep learning approach to monocular one-shot pose initialization and pose estimation. A convolutional neural network was used to estimate the 6D pose of an assembly truss object, trained on synthetic imagery generated from a simulation testbed. Furthermore, techniques to quantify the model uncertainty of the deep learning model were investigated and applied to in-space pose estimation and pose initialization, and the feasibility of this approach on low-power computational platforms was tested. The results demonstrate that accurate pose initialization and pose estimation can be conducted with a convolutional neural network, that model uncertainty can be obtained from the network, and that deep learning for pose initialization and estimation, together with uncertainty quantification, is feasible on low-power compute platforms.
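The abstract does not say which uncertainty quantification technique was applied; Monte Carlo dropout is one common choice for obtaining model uncertainty from a trained network, so the sketch below uses it, assuming a hypothetical `pose_net` that regresses a 6D pose from a single image.

```python
# Sketch: Monte Carlo dropout for pose uncertainty. `pose_net` is a
# hypothetical CNN mapping one image tensor to a 6D pose vector.
import torch

def mc_dropout_pose(pose_net, image, n_samples=30):
    """Return mean pose and per-dimension std over stochastic passes."""
    pose_net.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([pose_net(image) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # estimate, uncertainty
```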
Contributors: Kailas, Siva Maneparambil (Author) / Ben Amor, Heni (Thesis director) / Detry, Renaud (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Emotion recognition in conversation has applications within numerous domains, such as affective computing and medicine. Recent methods for emotion recognition jointly utilize conversational data over several modalities, including audio, video, and text. However, state-of-the-art frameworks for this task do not focus on the feature extraction and feature fusion steps of this process. This thesis aims to improve on the state-of-the-art method by incorporating two components that better accomplish these steps: one produces improved representations for the text modality, and the other better models the relationships between all modalities. The two proposed methods provide improved accuracy over the state-of-the-art framework for multimodal emotion recognition in dialogue.
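As an illustration of the fusion step in general (not the exact mechanism this thesis proposes), the sketch below projects each modality's utterance embedding to a shared size and combines them with learned attention weights; the per-modality dimensions are assumptions.

```python
# Sketch: attention-weighted fusion of audio, video, and text features.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dims, shared=128):  # dims: per-modality input sizes
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, shared) for d in dims])
        self.score = nn.Linear(shared, 1)

    def forward(self, feats):  # feats: list of (batch, dim) tensors
        h = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)
        w = torch.softmax(self.score(torch.tanh(h)), dim=1)  # (batch, M, 1)
        return (w * h).sum(dim=1)  # fused (batch, shared) representation

fusion = AttentionFusion(dims=[100, 512, 768])  # audio, video, text (assumed)
fused = fusion([torch.randn(4, 100), torch.randn(4, 512), torch.randn(4, 768)])
```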
Contributors: Rawal, Siddharth (Author) / Baral, Chitta (Thesis director) / Shah, Shrikant (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Spotify, one of the most popular music streaming services, has many algorithms for recommending new music to users. However, at the core of their recommendations is the collaborative filtering algorithm, which recommends music based on what other people with similar tastes have listened to [1]. While this can produce highly relevant content recommendations, it tends to promote only popular content [2]. The popularity bias inherent in collaborative-filtering-based systems can overlook music that fits a user's taste, simply because nobody else is listening to it. One possible solution to this problem is to recommend music based on features of the music itself, recommending songs which have similar features. Here, a method for extracting high-level features representing the mood of a song is presented, with the aim of tailoring music recommendations to an individual's mood and providing recommendations with diversity in popularity.
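The abstract does not list the features the method extracts, but a sketch of one plausible mood-feature pipeline using librosa summary statistics follows; `library_paths` is a hypothetical list of audio files, and the choice of MFCC and RMS statistics is an assumption.

```python
# Sketch: content-based recommendation from audio mood features.
import numpy as np
import librosa
from sklearn.metrics.pairwise import cosine_similarity

def mood_features(path):
    """Summarize a track with timbre (MFCC) and energy (RMS) statistics."""
    y, sr = librosa.load(path, duration=60.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [rms.mean(), rms.std()]])

# Recommend by feature similarity instead of listening history.
# library_paths: hypothetical list of audio file paths.
feats = np.stack([mood_features(p) for p in library_paths])
sims = cosine_similarity(feats[:1], feats)[0]   # similarity to track 0
print(np.argsort(-sims)[1:6])                   # five closest tracks
```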
Contributors: Gomez, Luis Angel (Author) / Burger, Kevin (Thesis director) / Hernández, Alberto (Committee member) / School of Arts, Media and Engineering (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05