Matching Items (31)
Description

This project explores how modern mobile technology can be used to provide support for domestic violence victims. The goal of the project is to create a proof-of-concept iOS mobile application that maintains a discreet safety front and provides domestic violence victims with resources and safety planning. The app is designed and implemented in the guise of a hair salon app to maintain a low profile on the user's phone. The HairHelp app features quick-exit navigation, a secure database that stores a user's private and personal documents in case of emergency, and a checklist of safety planning measures. The steps taken in this project serve as the foundation for a larger, long-term project.

Contributors: Shovkovy, Sophia (Author) / Balasooriya, Janaka (Thesis director) / Wilkey, Douglas (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better through recognizing different features present within the image. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors. For facial expression recognition, no datasets of that scale are available. Due to this limited availability of data required to train a new CNN, the idea of using naïve domain adaptation is explored. Instead of creating and using a new CNN trained specifically to extract features related to FER, a CNN originally trained for another computer vision task is reused. Work for this research involved creating a system that can run a CNN, extract feature vectors from it, and classify the extracted features. Once this system was built, different aspects of the system were tested and tuned. These aspects include the pre-trained CNN that was used, the layer from which features were extracted, the normalization used on input images, and the training data for the classifier. Once properly tuned, the created system produced results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage advantages of deep CNNs for facial expression recognition.
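
As an illustration of the naïve domain adaptation described above, a minimal Python sketch follows: a CNN pretrained on a different vision task serves as a frozen feature extractor, and a shallow classifier is trained on the extracted features. The thesis does not name its network or classifier, so ResNet-18 and a linear SVM here are assumptions.

```python
# Sketch: reuse a pretrained CNN as a fixed feature extractor for FER.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# Pretrained backbone with the classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the 512-d penultimate features
backbone.eval()

preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(images):
    """Map a list of PIL face images to frozen CNN feature vectors."""
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        return backbone(batch).numpy()

# Train a shallow classifier on the frozen features (X_train: PIL images,
# y_train: expression labels; both supplied by the caller):
# clf = LinearSVC().fit(extract_features(X_train), y_train)
```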
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

When planning a road trip today, there are solutions that let the user know what lies along their route, but the user is often presented with too much information, which can be overwhelming. Suggestions are provided all along the route, not just at the times when they would be needed. RoutePlanner takes all of that information and presents only the data the user needs at a particular time. Gas station suggestions are shown when the gas tank range is about to be reached, and restaurant suggestions are shown only around lunch time. The iOS app takes in the user's origin and destination, presents the route as given by Google Maps, and then offers various stop suggestions at the appropriate times. Each route that is obtained is broken down into a number of steps, which are essentially connected sequences of coordinate points. These coordinate point collections are used to locate a point at a certain distance or duration away from the origin. Given such a coordinate, we query the APIs for places of interest and move on to the next stop, until the end of the route.
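
A minimal sketch of the stop-lookup idea described above, assuming the route is a list of (latitude, longitude) points: accumulate great-circle distances along the route until a target travel distance (such as the gas-tank range) is reached, then query a places API around that coordinate. All names here are illustrative, not taken from the project.

```python
# Sketch: find the route coordinate at a target travel distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def point_at_distance(route, target_km):
    """Return the first route coordinate at or beyond target_km from the origin."""
    travelled = 0.0
    for prev, curr in zip(route, route[1:]):
        travelled += haversine_km(prev, curr)
        if travelled >= target_km:
            return curr    # query a places API around this coordinate
    return route[-1]       # target lies past the destination

route = [(33.42, -111.94), (34.05, -111.10), (35.20, -111.65)]
print(point_at_distance(route, 150.0))
```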
Contributors: Damania, Harsh Abhay (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-12
Description

As technology's influence pushes every industry to change, healthcare professionals must move to a more connected model. The nearly ubiquitous presence of smartphones presents a unique opportunity for physicians to collect and process data from their patients more frequently. The Mayo Clinic, in partnership with the Barrett Honors College, has designed and developed a prototype smartphone application targeting palliative care patients. The application collects symptom data from the patients and presents it to the doctors. This development project serves as a proof-of-concept for the application, and shows how such an application might look and function. Additionally, the project has revealed significant possibilities for the future of the application.
Contributors: Ganey, David Howard (Author) / Balasooriya, Janaka (Thesis director) / Lipinski, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description

The face of computing is constantly changing. Wearable computers in the form of glasses or watches are becoming more and more common. These devices have very small screens (measured in millimeters), and users often interact with them through voice input and audio feedback. Weather is one of the most regularly checked app categories on smart devices, but weather results on these devices are often limited to raw data, canned responses, or sentence templates with numbers plugged in. The goal for this project was to build a system that could generate weather forecast text, which could then be read to a user through text-to-speech. By using methods in language generation, the system can generate weather forecast text in millions of different ways. This is all computed locally, and it covers every possible weather case. In order to generate natural weather forecast texts, the system retrieved raw weather data from a weather API and created the text through six methods: content determination, document structuring, sentence aggregation, lexical choice, referring expression generation, and text realization. Content determination is the process of deciding what information to include in a computer-generated text. The document structuring phase deals with the order and structure of the information. Sentence aggregation is the merging of similar sentences to improve readability and reduce redundancy. Lexical choice is the process of putting words to concepts. Referring expression generation is the process of identifying objects, regions, time periods, and locations within a text. Finally, text realization involves creating sentences with proper syntax, morphology, and orthography. Through these six stages, a system was developed that could generate unique weather forecast text from raw data accurately and efficiently. It was built for iOS devices with Apple's new programming language, Swift, and it will be ported to the Apple Watch when the API is fully opened to developers.
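
The six stages can be pictured as a chain of small transformations over the raw data. The toy sketch below (in Python for illustration; the project itself was written in Swift) stands in for the thesis implementation and covers one deliberately simplified forecast.

```python
# Toy sketch of the six-stage language-generation pipeline.

def content_determination(data):
    # Decide which raw fields matter for this forecast.
    return {k: data[k] for k in ("condition", "high_f", "low_f")}

def document_structuring(facts):
    # Order the selected facts: condition first, then temperatures.
    return [("condition", facts["condition"]),
            ("high", facts["high_f"]), ("low", facts["low_f"])]

def sentence_aggregation(msgs):
    # Merge the two temperature messages into one to reduce redundancy.
    d = dict(msgs)
    return [("condition", d["condition"]), ("temps", (d["high"], d["low"]))]

def lexical_choice(msgs):
    # Put words to concepts.
    words = {"clear": "sunny skies", "rain": "scattered showers"}
    return [(k, words.get(v, v)) for k, v in msgs]

def referring_expressions(msgs):
    # Name the time period the forecast refers to.
    return [("when", "Today")] + msgs

def realize(msgs):
    # Produce a grammatical sentence from the final messages.
    d = dict(msgs)
    high, low = d["temps"]
    return f"{d['when']}, expect {d['condition']} with a high of {high} and a low of {low}."

raw = {"condition": "clear", "high_f": 75, "low_f": 58, "humidity": 0.2}
msgs = content_determination(raw)
for stage in (document_structuring, sentence_aggregation,
              lexical_choice, referring_expressions):
    msgs = stage(msgs)
print(realize(msgs))  # "Today, expect sunny skies with a high of 75 and a low of 58."
```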
Contributors: Jorgensen, Jacob Paul (Author) / Baral, Chitta (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description

Emotion recognition in conversation has applications within numerous domains such as affective computing and medicine. Recent methods for emotion recognition jointly utilize conversational data over several modalities including audio, video, and text. However, state-of-the-art frameworks for this task do not focus on the feature extraction and feature fusion steps of this process. This thesis aims to improve the state-of-the-art method by incorporating two components to better accomplish these steps. By doing so, we are able to produce improved representations for the text modality and better model the relationships between all modalities. This paper proposes two methods which focus on these concepts and provide improved accuracy over the state-of-the-art framework for multimodal emotion recognition in dialogue.
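
The thesis does not spell out its two methods in this abstract, but a common baseline for the fusion step is late fusion: project each modality's per-utterance features to a shared size and concatenate them before classification. The sketch below is such a baseline, with all feature dimensions assumed, not the thesis's proposed approach.

```python
# Sketch: late fusion of audio, video, and text features per utterance.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, dims=None, hidden=256, n_emotions=7):
        super().__init__()
        # Assumed per-modality feature sizes, one projection per modality.
        dims = dims or {"audio": 128, "video": 512, "text": 768}
        self.proj = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
        self.head = nn.Linear(hidden * len(dims), n_emotions)

    def forward(self, feats):
        # feats: dict of modality name -> (batch, dim) tensor; sort the
        # modalities so the concatenation order is deterministic.
        z = [torch.relu(self.proj[m](x)) for m, x in sorted(feats.items())]
        return self.head(torch.cat(z, dim=-1))

model = LateFusionClassifier()
batch = {"audio": torch.randn(4, 128), "video": torch.randn(4, 512),
         "text": torch.randn(4, 768)}
print(model(batch).shape)  # torch.Size([4, 7]) emotion logits
```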
Contributors: Rawal, Siddharth (Author) / Baral, Chitta (Thesis director) / Shah, Shrikant (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Accurate pose initialization and pose estimation are crucial requirements in on-orbit space assembly and various other autonomous on-orbit tasks. However, pose initialization and pose estimation are much more difficult to do accurately and consistently in space. This is primarily due not only to the variable lighting conditions present in space, but also to the power requirements mandated by space-flyable hardware. This thesis investigates leveraging a deep learning approach for monocular one-shot pose initialization and pose estimation. A convolutional neural network was used to estimate the 6D pose of an assembly truss object. This network was trained utilizing synthetic imagery generated from a simulation testbed. Furthermore, techniques to quantify the uncertainty of the deep learning model were investigated and applied to the task of in-space pose estimation and pose initialization. The feasibility of this approach on low-power computational platforms was also tested. The results demonstrate that accurate pose initialization and pose estimation can be conducted using a convolutional neural network. In addition, the results show that model uncertainty can be obtained from the network. Lastly, the use of deep learning for pose initialization and pose estimation, together with uncertainty quantification, was demonstrated to be feasible on low-power compute platforms.
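
One widely used way to obtain model uncertainty from a network, and a plausible stand-in for the techniques the thesis investigated, is Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated predictions as uncertainty. The sketch below applies it to a placeholder pose-regression network; the thesis does not confirm this exact technique or architecture.

```python
# Sketch: Monte Carlo dropout uncertainty for a pose-regression network.
import torch
import torch.nn as nn

pose_net = nn.Sequential(
    nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Dropout(p=0.2),   # stays active during MC sampling
    nn.Linear(256, 7),   # 6D pose encoded as translation + quaternion
)

def mc_dropout_pose(net, image, n_samples=30):
    net.train()          # enable dropout at inference time
    with torch.no_grad():
        samples = torch.stack([net(image) for _ in range(n_samples)])
    # Mean is the pose estimate; per-dimension std is the uncertainty.
    return samples.mean(dim=0), samples.std(dim=0)

image = torch.randn(1, 64 * 64)  # placeholder for a preprocessed frame
pose, sigma = mc_dropout_pose(pose_net, image)
```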
Contributors: Kailas, Siva Maneparambil (Author) / Ben Amor, Heni (Thesis director) / Detry, Renaud (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Spotify, one of the most popular music streaming services, has many algorithms for recommending new music to users. However, at the core of their recommendations is the collaborative filtering algorithm, which recommends music based on what other people with similar tastes have listened to [1]. While this can produce highly relevant content recommendations, it tends to promote only popular content [2]. The popularity bias inherent in collaborative-filtering-based systems can overlook music that fits a user's taste, simply because nobody else is listening to it. One possible solution to this problem is to recommend music based on features of the music itself, and recommend songs which have similar features. Here, a method for extracting high-level features representing the mood of a song is presented, with the aim of tailoring music recommendations to an individual's mood, and providing music recommendations with diversity in popularity.
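
A minimal sketch of the feature-based recommendation idea: represent each song by a handful of mood-related features and rank candidates by cosine similarity to a seed song, so popularity never enters the score. The feature names and values below are illustrative, not the features extracted in this work.

```python
# Sketch: content-based recommendation by mood-feature similarity.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# song -> assumed mood features [valence, energy, normalized tempo]
library = {
    "song_a": np.array([0.80, 0.70, 0.60]),
    "song_b": np.array([0.75, 0.65, 0.55]),
    "song_c": np.array([0.10, 0.20, 0.30]),
}

def recommend(seed, library, k=2):
    # Rank every other song purely by feature similarity to the seed.
    scores = {name: cosine(library[seed], vec)
              for name, vec in library.items() if name != seed}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("song_a", library))  # song_b ranks above song_c
```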
Contributors: Gomez, Luis Angel (Author) / Burger, Kevin (Thesis director) / Hernández, Alberto (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

In recent years, the development of new Machine Learning models has allowed new technological advancements to be introduced for practical use across the world. Multiple studies and experiments have been conducted to create new variations of Machine Learning models with different algorithms to determine whether potential systems would prove to be successful. Even today, many research initiatives are continuing to develop new models in the hope of discovering potential solutions for problems such as autonomous driving or determining the emotional value of a single sentence. One currently popular research topic in Machine Learning is the development of Facial Expression Recognition systems. These Machine Learning models focus on classifying images of human faces that are expressing different emotions through facial expressions. To develop effective models for Facial Expression Recognition, researchers have utilized Deep Learning models, a more advanced class of Machine Learning models known as Neural Networks. More specifically, Convolutional Neural Networks have proven to be the most effective models for achieving highly accurate results at classifying images of various facial expressions. Convolutional Neural Networks are Deep Learning models that are capable of processing visual data, such as images and videos, and can be used to identify various facial expressions. For this project, I focused on learning the important concepts of Machine Learning, Deep Learning, and Convolutional Neural Networks in order to implement a Convolutional Neural Network previously described in a recommended research paper.
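
As a concrete illustration of the kind of model discussed above, here is a small convolutional network for expression classification, sketched in PyTorch. The layer sizes, 48x48 grayscale input, and seven expression classes are assumptions; the recommended paper's exact architecture is not reproduced here.

```python
# Sketch: a small CNN classifying facial-expression images.
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = ExpressionCNN()(torch.randn(8, 1, 48, 48))
print(logits.shape)  # torch.Size([8, 7])
```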
Contributors: Frace, Douglas R (Author) / Demakethepalli Venkateswara, Hemanth Kumar (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Finding applications on Apple's iOS Home screen is a difficult task, since applications are arranged in a disorganized grid of icons and small labels. By “jailbreaking” an iOS device, it is possible to install third-party “tweaks” that modify the operating system to customize and fix annoying aspects of iOS. Current jailbreak tweaks exist that can launch applications differently than Apple's stock Home screen, but they leave much to be desired in terms of functionality, usability, and aesthetics. HomeList is a watchOS-inspired tweak I created to add an easy-to-read, quick-to-navigate, and visually appealing list of applications integrated directly into the Home screen. Research into Apple's private iOS frameworks was used to figure out how to perform the tasks required by an app launcher as well as match iOS design aesthetics.
Contributors: Boxberger, Blake Palmer (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Philippe Christophe (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05