Matching Items (22)

Description
RecyclePlus is an iOS mobile application that helps users become knowledgeable about sustainability. It encourages users to be environmentally responsible by providing them access to recycling information. In particular, it allows users to search for specific materials and learn about their recyclability and how to dispose of them properly. Some searches will also show the locations of nearby facilities that collect certain materials and dispose of them properly. This is a full stack software project that explores open source software and APIs, UI/UX design, and iOS development.
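As a rough illustration of the kind of material lookup described above, the following Swift sketch models a searchable directory of materials with recyclability and disposal information. The types and sample entries are assumptions for illustration only, not code or data from RecyclePlus itself.

```swift
import Foundation

// Hypothetical material directory: each entry records whether the material is
// recyclable, how to dispose of it, and which nearby facilities accept it.
struct Material {
    let name: String
    let isRecyclable: Bool
    let disposalInstructions: String
    let dropOffFacilities: [String]
}

struct MaterialDirectory {
    private let materials: [String: Material]

    init(_ entries: [Material]) {
        materials = Dictionary(uniqueKeysWithValues: entries.map { ($0.name.lowercased(), $0) })
    }

    // Case-insensitive search by material name.
    func lookup(_ query: String) -> Material? {
        materials[query.trimmingCharacters(in: .whitespaces).lowercased()]
    }
}

let directory = MaterialDirectory([
    Material(name: "Glass bottle",
             isRecyclable: true,
             disposalInstructions: "Rinse and place in curbside recycling.",
             dropOffFacilities: ["City Recycling Center"]),
    Material(name: "Plastic bag",
             isRecyclable: false,
             disposalInstructions: "Return to a store drop-off bin; do not place in curbside recycling.",
             dropOffFacilities: ["Grocery store collection bins"])
])

if let result = directory.lookup("glass bottle") {
    print("\(result.name): recyclable = \(result.isRecyclable). \(result.disposalInstructions)")
}
```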
Contributors
Tran, Nikki (Author) / Ganesh, Tirupalavanam (Thesis director) / Meuth, Ryan (Committee member) / Watts College of Public Service & Community Solutions (Contributor) / Department of Information Systems (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created
2020-05
Description
As technology's influence pushes every industry to change, healthcare professionals must move to a more connected model. The nearly ubiquitous presence of smartphones presents a unique opportunity for physicians to collect and process data from their patients more frequently. The Mayo Clinic, in partnership with the Barrett Honors College, has designed and developed a prototype smartphone application targeting palliative care patients. The application collects symptom data from the patients and presents it to the doctors. This development project serves as a proof-of-concept for the application, and shows how such an application might look and function. Additionally, the project has revealed significant possibilities for the future of the application.
Contributors
Ganey, David Howard (Author) / Balasooriya, Janaka (Thesis director) / Lipinski, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor)
Created
2015-05
Description
When planning a road trip today, existing solutions let users know what lies along their route, but they often present too much information, which can overwhelm the user. Suggestions are provided all along the route, not just at the times when they would be needed. RoutePlanner takes all that information and presents only the data the user needs at a particular time: gas station suggestions appear when the fuel range is about to run out, and restaurant suggestions are shown only around lunchtime. The iOS app takes the user's origin and destination, displays the route returned by Google Maps, and then offers the various stop suggestions at the appropriate times. Each route obtained is broken down into a number of steps, which are essentially collections of coordinate points. These coordinate collections are used to locate a point at a certain distance or duration from the origin. Given such a coordinate, we query the APIs for places of interest and move on to the next stop, until the end of the route.
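The stop-placement idea described above can be sketched in a few lines of Swift: walk the route's coordinate points until a target distance from the origin is reached, and use that point as the location to query for places of interest. The types, the haversine helper, and the sample route below are illustrative assumptions, not code from RoutePlanner.

```swift
import Foundation

struct Coordinate { let latitude: Double; let longitude: Double }

// Great-circle (haversine) distance in kilometers between two coordinates.
func distanceKm(_ a: Coordinate, _ b: Coordinate) -> Double {
    let r = 6371.0
    let dLat = (b.latitude - a.latitude) * .pi / 180
    let dLon = (b.longitude - a.longitude) * .pi / 180
    let lat1 = a.latitude * .pi / 180
    let lat2 = b.latitude * .pi / 180
    let h = sin(dLat / 2) * sin(dLat / 2) + cos(lat1) * cos(lat2) * sin(dLon / 2) * sin(dLon / 2)
    return 2 * r * asin(sqrt(h))
}

// Walk the route's coordinate points and return the first point at or beyond the
// target distance from the origin -- the place to query for a stop suggestion.
func stopPoint(along route: [Coordinate], afterKm target: Double) -> Coordinate? {
    var traveled = 0.0
    for (prev, next) in zip(route, route.dropFirst()) {
        traveled += distanceKm(prev, next)
        if traveled >= target { return next }
    }
    return nil
}

// Example: suggest a stop roughly 150 km into a short made-up route.
let route = [
    Coordinate(latitude: 33.45, longitude: -112.07),
    Coordinate(latitude: 33.95, longitude: -112.73),
    Coordinate(latitude: 34.54, longitude: -112.47),
    Coordinate(latitude: 35.20, longitude: -111.65)
]
if let stop = stopPoint(along: route, afterKm: 150) {
    print("Query places of interest near \(stop.latitude), \(stop.longitude)")
}
```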
Contributors
Damania, Harsh Abhay (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created
2014-12
Description
The face of computing is constantly changing. Wearable computers in the form of glasses or watches are becoming more and more common. These devices have very small screens (measured in millimeters), and users often interact with them through voice input and audio feedback. Weather is one of the most regularly checked app categories on smart devices, but weather results on these devices are often limited to raw data, canned responses, or sentence templates with numbers plugged in. The goal of this project was to build a system that could generate weather forecast text, which could then be read to a user through text-to-speech. By using methods in language generation, the system can generate weather forecast text in millions of different ways. This is all computed locally, and it covers every possible weather case. In order to generate natural weather forecast texts, the system retrieved raw weather data from a weather API and created the text through six stages: content determination, document structuring, sentence aggregation, lexical choice, referring expression generation, and text realization. Content determination is the process of deciding what information to include in a computer-generated text. The document structuring phase deals with the order and structure of that information. Sentence aggregation is the merging of similar sentences to improve readability and reduce redundancy. Lexical choice is the process of putting words to concepts. Referring expression generation is the process of identifying objects, regions, time periods, and locations within a text. Finally, text realization involves creating sentences with proper syntax, morphology, and orthography. Through these six stages, a system was developed that could generate unique weather forecast text from raw data accurately and efficiently. It was built for iOS devices with Apple's new programming language, Swift, and it will be ported to the Apple Watch when the API is fully opened to developers.
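A heavily simplified Swift sketch of such a generation pipeline is shown below. It collapses several of the six stages into one step and uses made-up data structures, thresholds, and wording rules, so it is only an illustration of the general approach, not the project's implementation.

```swift
import Foundation

// Toy raw weather data, standing in for what a weather API might return.
struct RawWeather {
    let highF: Int
    let lowF: Int
    let precipitationChance: Double   // 0.0 ... 1.0
    let windMph: Int
}

// Content determination: keep only the facts worth mentioning.
func selectContent(_ w: RawWeather) -> [String: Any] {
    var facts: [String: Any] = ["high": w.highF, "low": w.lowF]
    if w.precipitationChance >= 0.3 { facts["rainChance"] = w.precipitationChance }
    if w.windMph >= 20 { facts["wind"] = w.windMph }
    return facts
}

// Lexical choice: map a numeric value to words.
func describeRain(_ chance: Double) -> String {
    switch chance {
    case ..<0.5: return "a slight chance of rain"
    case ..<0.8: return "a good chance of rain"
    default:     return "rain likely"
    }
}

// Structuring, aggregation, and realization collapsed into one step:
// order the facts, turn them into sentences, and join the surface text.
func realize(_ facts: [String: Any]) -> String {
    var sentences: [String] = []
    if let high = facts["high"] as? Int, let low = facts["low"] as? Int {
        sentences.append("Expect a high near \(high) and a low around \(low).")
    }
    if let chance = facts["rainChance"] as? Double {
        sentences.append("There is \(describeRain(chance)) later today.")
    }
    if let wind = facts["wind"] as? Int {
        sentences.append("Winds may gust to \(wind) mph.")
    }
    return sentences.joined(separator: " ")
}

let today = RawWeather(highF: 74, lowF: 55, precipitationChance: 0.6, windMph: 12)
print(realize(selectContent(today)))
// Expect a high near 74 and a low around 55. There is a good chance of rain later today.
```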
Contributors
Jorgensen, Jacob Paul (Author) / Baral, Chitta (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created
2015-05
Description
Charleston, South Carolina currently faces serious annual flooding due to tides and rainfall. These issues are expected to get significantly worse within the next few decades, reaching a projected 180 days of flooding a year by 2045 (Carter et al., 2018). Several permanent solutions are in progress by the City of Charleston; however, these solutions are years away at minimum and face development issues. This thesis attempts to treat some of the symptoms of flooding, such as navigation, by creating an iPhone application that predicts flooding and helps people navigate around it safely. Specifically, this thesis takes into account rainfall and tide levels to display to users actively flooded areas of downtown Charleston and to provide routing from a user's location to a destination around these flooded areas whenever possible.
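One plausible way to express the flood-flagging rule described above is sketched below in Swift: each known trouble spot carries tide and rainfall thresholds, and an area is marked flooded when either threshold is exceeded. The thresholds, area names, and data structure are hypothetical examples, not values from the thesis.

```swift
import Foundation

// A flood-prone area with the conditions under which it typically floods.
struct FloodProneArea {
    let name: String
    let tideThresholdFt: Double        // tide height at which the area starts to flood
    let rainfallThresholdIn: Double    // recent rainfall total that floods it
}

// Mark an area as flooded when either the tide or the rainfall threshold is met.
func floodedAreas(areas: [FloodProneArea],
                  currentTideFt: Double,
                  recentRainfallIn: Double) -> [FloodProneArea] {
    areas.filter { area in
        currentTideFt >= area.tideThresholdFt || recentRainfallIn >= area.rainfallThresholdIn
    }
}

let downtown = [
    FloodProneArea(name: "Market Street", tideThresholdFt: 7.0, rainfallThresholdIn: 1.5),
    FloodProneArea(name: "Lockwood Drive", tideThresholdFt: 7.5, rainfallThresholdIn: 2.0)
]

// With a 7.2 ft tide and 0.4 in of rain, only the first area is flagged,
// and a router could then treat that segment as impassable.
let flagged = floodedAreas(areas: downtown, currentTideFt: 7.2, recentRainfallIn: 0.4)
print(flagged.map(\.name))
```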
Contributors
Salisbury, Mason (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created
2019-05
Description
Natural Language Processing and Virtual Reality are both hot topics at present. How can we synthesize the two into a cohesive experience? The game focuses on users issuing vocal commands, building structures, and memorizing spatial objects. To handle the vocal commands, the IBM Watson API for Natural Language Processing was incorporated into our game system. User experience elements such as gestures, UI color changes, and images were used to help guide users in memorizing and building structures. The process of creating these elements was streamlined through the VRTK library in Unity. The game has two segments. The first is a tutorial level where the user learns to perform motions and in-game actions. The second is a game where the user must correctly create a structure using vocal commands and spatial recognition. A standardized usability test, the System Usability Scale, was used to evaluate the effectiveness of the game, and a survey was also created to gather more descriptive user opinions. Overall, users gave a positive score on the System Usability Scale and slightly positive reviews in the custom survey.
Contributors
Ortega, Excel (Co-author) / Ryan, Alexander (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computing and Informatics Program (Contributor) / School of Art (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created
2018-05
Description
Fresh15 is an iOS application geared towards helping college students eat healthier, based on a user's preferences for price range, food restrictions, and favorite ingredients. Our application also accounts for the fact that students may have to order their ingredients online because they do not have access to transportation.
Contributors
Bailey, Reece (Co-author) / Fallah-Adl, Sarah (Co-author) / Meuth, Ryan (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created
2018-05
Description

This project explores how modern mobile technology can be used to provide support for domestic violence victims. The goal of the project is to create a proof-of-concept iOS mobile application that maintains a discreet safety front and provides domestic violence victims with resources and safety planning. The design and implementation are disguised as a hair salon app to maintain a low profile on the user’s phone. The HairHelp app features quick exit navigation, a secure database to store a user’s private and personal documents in case of emergency, and a checklist of safety planning measures. The steps taken in this project serve as the foundation for a larger project in the long term.
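A minimal SwiftUI sketch of the quick-exit idea is shown below: a single tap swaps the sensitive screens for an innocuous salon-style cover view. The view names, text, and layout are assumptions for illustration, not HairHelp's actual code.

```swift
import SwiftUI

// Container that shows either the real safety content or a neutral decoy screen.
struct QuickExitContainer: View {
    @State private var showingDecoy = false

    var body: some View {
        Group {
            if showingDecoy {
                // Innocuous salon-style cover shown after a quick exit.
                Text("Book your next appointment")
                    .font(.title2)
            } else {
                VStack(spacing: 16) {
                    Text("Safety planning checklist")
                    Text("Local resources and hotlines")
                    Button("Quick exit") {
                        showingDecoy = true   // immediately hide the sensitive content
                    }
                }
                .padding()
            }
        }
    }
}
```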

Contributors
Shovkovy, Sophia (Author) / Balasooriya, Janaka (Thesis director) / Wilkey, Douglas (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created
2021-05
Description

SARS-CoV-2 has altered the way humans live since the beginning of 2020, and the virus's deadly nature has required clinical testing to meet demands for higher throughput, higher accuracy, and higher efficiency. Information technology has allowed institutions like Arizona State University (ASU) to make strategic and operational changes to combat the SARS-CoV-2 pandemic. At ASU, information technology was one of the six facets identified in the ongoing review of the ASU Biodesign Clinical Testing Laboratory (ABCTL), alongside business, communications, management/training, law, and clinical analysis. The first chapter of this manuscript covers the background of clinical laboratory automation and details the automated laboratory workflow used to perform ABCTL's COVID-19 diagnostic testing. The second chapter discusses the usability and efficiency of the ABCTL's key information technology systems. The third chapter explains the role of quality control and data management in ABCTL's use of information technology. The fourth chapter highlights the importance of data modeling and ten best practices for responding to future public health emergencies.

Contributors
Kandan, Mani (Co-author) / Leung, Michael (Co-author) / Woo, Sabrina (Co-author) / Knox, Garrett (Co-author) / Compton, Carolyn (Thesis director) / Dudley, Sean (Committee member) / Computer Science and Engineering Program (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created
2021-05
Description

This paper explores the intersection of user experience and museums through interactive and immersive exhibits. It discusses the background and history of the art museum and the field of UX, and describes how interactivity and immersion affect visitors and change the exhibit development process. The implications of interactive and immersive exhibits for the museum space are detailed, including social media, the authenticity of objects, and the commodification of experience. It is argued that despite the drawbacks of interactivity and immersion in the museum, the potential benefits of audience engagement and social connection make them worth pursuing.

Contributors
Hong, Harrison (Author) / Boyce-Jacino, Katherine (Thesis director) / Carrasquilla, Christina (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created
2023-05