Matching Items (29)
Description
RecyclePlus is an iOS mobile application that helps users become knowledgeable about sustainability. It encourages users to be environmentally responsible by giving them access to recycling information. In particular, it allows users to search for specific materials and learn about their recyclability and how to dispose of them properly. Some searches also show nearby facilities that collect certain materials and dispose of them properly. This is a full-stack software project that explores open-source software and APIs, UI/UX design, and iOS development.
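To make the lookup concrete, here is a minimal Python sketch of the kind of material search the abstract describes. The actual app is an iOS client backed by external APIs; every name, field, and entry below is a hypothetical illustration, not RecyclePlus code.

```python
# Hypothetical sketch of a RecyclePlus-style material lookup; all data invented.
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    recyclable: bool
    disposal: str                              # how to dispose of it properly
    facilities: list = field(default_factory=list)  # nearby drop-off options

# Tiny in-memory stand-in for the app's recycling database/API.
MATERIALS = {
    "glass bottle": Material("glass bottle", True,
                             "Rinse and place in curbside recycling."),
    "plastic bag": Material("plastic bag", False,
                            "Not accepted curbside; use a store drop-off bin.",
                            ["Grocery store collection bins"]),
}

def look_up(query: str):
    """Return recyclability info for a searched material, if known."""
    return MATERIALS.get(query.strip().lower())

result = look_up("Plastic Bag")
if result:
    print(f"Recyclable: {result.recyclable}. {result.disposal}")
```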
Contributors: Tran, Nikki (Author) / Ganesh, Tirupalavanam (Thesis director) / Meuth, Ryan (Committee member) / Watts College of Public Service & Community Solutions (Contributor) / Department of Information Systems (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better by recognizing the different features present within it. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors, and no facial expression datasets of that size are available. Due to this limited availability of training data, the idea of naïve domain adaptation is explored: instead of creating a new CNN trained specifically to extract features related to FER, a CNN previously trained for another computer vision task is used. Work for this research involved creating a system that can run a CNN, extract feature vectors from it, and classify those extracted features. Once this system was built, different aspects of it were tested and tuned: the pre-trained CNN that was used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
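The pipeline described above, reusing a pre-trained CNN as a fixed feature extractor and training a light classifier on top, can be sketched as follows. This assumes PyTorch and torchvision, which the thesis does not name; the choice of VGG16, the extraction layer, and the linear SVM classifier are all illustrative.

```python
# A minimal sketch of naive domain adaptation for FER; tooling and layer
# choices are assumptions, not the thesis's actual setup.
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

# 1. Load a CNN pre-trained on a different task (ImageNet classification).
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.eval()

# 2. Extract features from an intermediate layer rather than the final output.
feature_extractor = torch.nn.Sequential(
    cnn.features, cnn.avgpool, torch.nn.Flatten(),
    *list(cnn.classifier.children())[:-1],   # drop the ImageNet output layer
)

def extract(batch: torch.Tensor) -> torch.Tensor:
    """batch: normalized face images of shape (N, 3, 224, 224)."""
    with torch.no_grad():
        return feature_extractor(batch)

# 3. Train a lightweight classifier on the extracted features for FER labels:
# features = extract(train_images).numpy()
# clf = LinearSVC().fit(features, train_labels)
```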
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering the facial expressions of an interaction partner to an individual who is blind, using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Coding System. A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
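As a rough illustration of the spatiotemporal patterns described, the sketch below maps two facial action units to timed pulses on a hypothetical 3x3 motor grid. The grid size, motor assignments, and intensities are invented, not the thesis design.

```python
# Illustrative sketch (not the thesis code) of mapping facial action units
# to spatiotemporal vibration patterns on a back-mounted motor grid.
import time

# Hypothetical 3x3 grid of pancake motors, indexed (row, col). Each pattern
# is a timed sequence of (motor, intensity 0..1, duration_s) steps.
AU_PATTERNS = {
    "AU12_lip_corner_puller": [((2, 0), 0.8, 0.15), ((2, 1), 0.8, 0.15),
                               ((2, 2), 0.8, 0.15)],  # sweep suggesting a smile
    "AU4_brow_lowerer":       [((0, 1), 1.0, 0.30)],  # single strong pulse
}

def play(pattern):
    """Drive the motors in sequence; hardware calls are stubbed with prints."""
    for motor, intensity, duration in pattern:
        print(f"motor {motor} -> intensity {intensity:.1f} for {duration}s")
        time.sleep(duration)  # stand-in for the actual motor driver timing

play(AU_PATTERNS["AU12_lip_corner_puller"])
```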
Contributors: Bala, Shantanu (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description
As technology's influence pushes every industry to change, healthcare professionals must move to a more connected model. The nearly ubiquitous presence of smartphones presents a unique opportunity for physicians to collect and process data from their patients more frequently. The Mayo Clinic, in partnership with Barrett, The Honors College, has designed and developed a prototype smartphone application targeting palliative care patients. The application collects symptom data from the patients and presents it to their doctors. This development project serves as a proof of concept for the application and shows how such an application might look and function. Additionally, the project has revealed significant possibilities for the future of the application.
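A minimal sketch of the symptom-reporting flow might look like the following; the field names, severity scale, and summary view are assumptions rather than the prototype's actual schema.

```python
# Hypothetical sketch of the symptom-reporting data flow described above;
# fields and the 0-10 severity scale are assumptions, not Mayo's schema.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class SymptomReport:
    patient_id: str
    symptom: str
    severity: int          # self-reported, assumed 0-10 scale
    reported_at: datetime

def summarize(reports, symptom):
    """Average severity of one symptom, as a physician-facing view might show."""
    scores = [r.severity for r in reports if r.symptom == symptom]
    return mean(scores) if scores else 0.0

reports = [
    SymptomReport("p1", "pain", 6, datetime(2015, 3, 1)),
    SymptomReport("p1", "pain", 4, datetime(2015, 3, 2)),
]
print(summarize(reports, "pain"))  # 5.0
```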
Contributors: Ganey, David Howard (Author) / Balasooriya, Janaka (Thesis director) / Lipinski, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description
When planning a road trip today, existing solutions let the user know what lies along their route, but they often present too much information, which can overwhelm the user. Suggestions are provided all along the route, not just at the times when they would be needed. RoutePlanner takes all of that information and presents only the data the user needs at a particular time: gas station suggestions appear when the gas tank's range is about to be reached, and restaurant suggestions are shown only around lunchtime. The iOS app takes in the user's origin and destination, retrieves the route from Google Maps, and then offers stop suggestions at the appropriate times. Each route is broken down into a number of steps, which are essentially sequences of coordinate points. These coordinate collections are used to locate a point at a certain distance or duration from the origin. Given such a coordinate, the app queries the APIs for places of interest and moves on to the next stop, continuing until the end of the route.
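The decomposition described, walking a route's coordinate points to find where a stop will be needed, can be sketched as follows. The haversine distance stands in for whatever the app actually uses, and the places query is stubbed out; the route data and thresholds are invented.

```python
# Simplified sketch of the route decomposition described above; the real app
# uses the Google Maps and places APIs, which are stubbed out here.
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def point_at_distance(route, target_km):
    """Walk the route's coordinate points until target_km has been covered."""
    covered = 0.0
    for prev, cur in zip(route, route[1:]):
        covered += haversine_km(prev, cur)
        if covered >= target_km:
            return cur
    return route[-1]

route = [(33.42, -111.94), (33.60, -112.00), (34.05, -111.98)]  # toy route
fuel_stop = point_at_distance(route, 45.0)  # where the tank range is nearly hit
# places = query_places_api(fuel_stop, "gas_station")  # hypothetical API call
print(fuel_stop)
```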
Contributors: Damania, Harsh Abhay (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-12
Description
The face of computing is constantly changing. Wearable computers in the form of glasses or watches are becoming more and more common. These devices have very small screens (measured in millimeters), and users often interact with them through voice input and audio feedback. Weather is one of the most regularly checked app categories on smart devices, yet weather results on these devices are often limited to raw data, canned responses, or sentence templates with numbers plugged in. The goal of this project was to build a system that could generate weather forecast text, which could then be read to a user through text-to-speech. By using methods from natural language generation, the system can produce weather forecast text in millions of different ways, all computed locally and covering every possible weather case. To generate natural forecast text, the system retrieves raw weather data from a weather API and creates the text through six stages: content determination, document structuring, sentence aggregation, lexical choice, referring expression generation, and text realization. Content determination is the process of deciding what information to include in a computer-generated text. The document structuring phase deals with the order and structure of that information. Sentence aggregation is the merging of similar sentences to improve readability and reduce redundancy. Lexical choice is the process of putting words to concepts. Referring expression generation identifies objects, regions, time periods, and locations within a text. Finally, text realization involves creating sentences with proper syntax, morphology, and orthography. Through these six stages, a system was developed that could generate unique weather forecast text from raw data accurately and efficiently. It was built for iOS devices with Apple's new programming language, Swift, and it will be ported to the Apple Watch when the API is fully opened to developers.
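A heavily compressed sketch of those six stages is shown below. The thesis implemented the pipeline in Swift; this Python version folds document structuring into content determination and omits referring expression generation, and all rules and wordings are invented.

```python
# Compressed sketch of the NLG pipeline described above; rules are invented.
import random

raw = {"high_f": 74, "low_f": 55, "precip_chance": 0.1, "condition": "clear"}

# Content determination + document structuring: pick and order the facts.
facts = [("condition", raw["condition"]),
         ("high", raw["high_f"]),
         ("low", raw["low_f"])]

# Lexical choice: map concepts to words, with variation for unique output.
CONDITION_WORDS = {"clear": ["clear", "sunny", "cloudless"]}

def realize(facts):
    """Sentence aggregation + text realization: one fluent sentence."""
    parts = dict(facts)
    cond = random.choice(CONDITION_WORDS[parts["condition"]])
    return (f"Expect {cond} skies today, with a high near "
            f"{parts['high']} and a low around {parts['low']}.")

print(realize(facts))
```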
Contributors: Jorgensen, Jacob Paul (Author) / Baral, Chitta (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description
Charleston, South Carolina currently faces serious annual flooding issues due to tides and rainfall. These issues are expected to get significantly worse within the next few decades, reaching a projected 180 days of flooding per year by 2045 (Carter et al., 2018). Several permanent solutions are in progress by the City of Charleston; however, these solutions are years away at a minimum and face development issues. This thesis attempts to treat some of the symptoms of flooding, such as impeded navigation, by creating an iPhone application that predicts flooding and helps people navigate around it safely. Specifically, the application takes rainfall and tide levels into account to display to users the actively flooded areas of downtown Charleston and to provide routing from a user's location to a destination around these flooded areas whenever possible.
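One plausible shape for the flood-aware routing, not the thesis code, is sketched below: road segments are flagged as flooded from tide and rainfall thresholds, and the search simply refuses to traverse them. The graph, thresholds, and sensor readings are all invented.

```python
# Illustrative flood-aware routing; thresholds and graph data are invented.
from collections import deque

def is_flooded(seg, tide_ft, rain_in):
    """A segment floods when tide or rainfall exceeds its assumed threshold."""
    return tide_ft >= seg["tide_ft"] or rain_in >= seg["rain_in"]

# Toy road graph: node -> list of (neighbor, segment thresholds).
GRAPH = {
    "A": [("B", {"tide_ft": 7.0, "rain_in": 2.0}),
          ("C", {"tide_ft": 8.0, "rain_in": 3.0})],
    "B": [("D", {"tide_ft": 5.5, "rain_in": 1.0})],  # low-lying, floods first
    "C": [("D", {"tide_ft": 7.5, "rain_in": 2.5})],
    "D": [],
}

def route_avoiding_floods(start, goal, tide_ft, rain_in):
    """Breadth-first search that skips any currently flooded segment."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt, seg in GRAPH[path[-1]]:
            if nxt not in seen and not is_flooded(seg, tide_ft, rain_in):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no dry route available

print(route_avoiding_floods("A", "D", tide_ft=6.2, rain_in=0.4))  # ['A','C','D']
```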
Contributors: Salisbury, Mason (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized "role" of the object being manipulated. The neural network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, indicating a promising new framework for approaching generalized planning problems.
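A minimal sketch of such a two-headed network, assuming PyTorch, is shown below; the state encoding, layer sizes, and action/role counts are placeholders rather than the thesis architecture.

```python
# Minimal sketch of a two-headed network: one head for the next action,
# one for the manipulated object's generalized role. Sizes are placeholders.
import torch
import torch.nn as nn

class PlanNet(nn.Module):
    def __init__(self, state_dim=64, n_actions=4, n_roles=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 128), nn.ReLU())
        self.action_head = nn.Linear(128, n_actions)  # (i) best next action
        self.role_head = nn.Linear(128, n_roles)      # (ii) role of target object

    def forward(self, state):
        h = self.trunk(state)
        return self.action_head(h), self.role_head(h)

net = PlanNet()
state = torch.randn(1, 64)            # stand-in encoded abstract problem state
action_logits, role_logits = net(state)
print(action_logits.argmax(dim=1), role_logits.argmax(dim=1))
```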
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Medical records are increasingly being recorded in the form of electronic health records (EHRs), with a significant amount of patient data recorded as unstructured natural language text. Consequently, being able to extract and utilize the clinical data present within these records is an important step in furthering clinical care. One important aspect of these records is the presence of prescription information. Existing techniques for extracting prescription information, which includes medication names, dosages, frequencies, reasons for taking, and modes of administration, from unstructured text have focused on the application of rule- and classifier-based methods. While state-of-the-art systems can be effective in extracting many types of information, they require significant effort to develop hand-crafted rules and conduct effective feature engineering. This paper presents the use of a bidirectional LSTM with a CRF tagging layer, initialized with precomputed word embeddings, for extracting prescription information from sentences without requiring significant feature engineering. The experimental results, obtained on the i2b2 2009 dataset, achieve a macro F1 score of 0.8562, with scores above 0.9449 on four of the six categories, indicating significant potential for this model.
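A compact sketch of a BiLSTM-CRF tagger in this style, assuming PyTorch plus the pytorch-crf package, follows; the vocabulary, embedding setup, and tag count (BIO tags over the five prescription fields, plus O) are placeholders.

```python
# Minimal BiLSTM-CRF sketch, assuming PyTorch and pytorch-crf; sizes invented.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden=128, n_tags=11):
        super().__init__()
        # In the paper, embeddings are initialized from precomputed vectors.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, bidirectional=True,
                            batch_first=True)
        self.emit = nn.Linear(2 * hidden, n_tags)  # per-token tag scores
        self.crf = CRF(n_tags, batch_first=True)   # models tag transitions

    def loss(self, tokens, tags):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags)          # negative log-likelihood

    def predict(self, tokens):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions)          # best tag path per sentence

model = BiLSTMCRF()
tokens = torch.randint(0, 5000, (1, 12))           # one 12-token sentence
print(model.predict(tokens))
```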
Contributors: Rawal, Samarth Chetan (Author) / Baral, Chitta (Thesis director) / Anwar, Saadat (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Fresh15 is an iOS application geared toward helping college students eat healthier, based on a user's preferences for price range, food restrictions, and favorite ingredients. Our application also accounts for the fact that students may have to order their ingredients online because they lack access to transportation.
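A toy sketch of the preference filtering described, with invented fields and ranking, might look like this in Python (the real app is an iOS client):

```python
# Hypothetical sketch of Fresh15-style preference filtering; data invented.
from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    price: float
    tags: set          # e.g., {"vegetarian", "gluten-free"}
    ingredients: set

def recommend(recipes, max_price, restrictions, favorites):
    """Keep affordable recipes meeting restrictions; rank by favorites used."""
    ok = [r for r in recipes
          if r.price <= max_price and restrictions <= r.tags]
    return sorted(ok, key=lambda r: len(r.ingredients & favorites), reverse=True)

recipes = [
    Recipe("Veggie stir-fry", 8.50, {"vegetarian"}, {"rice", "broccoli", "tofu"}),
    Recipe("Chicken bowl", 11.00, set(), {"rice", "chicken"}),
]
print(recommend(recipes, max_price=10, restrictions={"vegetarian"},
                favorites={"rice", "tofu"})[0].name)  # Veggie stir-fry
```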
Contributors: Bailey, Reece (Co-author) / Fallah-Adl, Sarah (Co-author) / Meuth, Ryan (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05