Matching Items (38)
Description
RecyclePlus is an iOS mobile application that helps users become more knowledgeable about sustainability. It encourages users to be environmentally responsible by giving them access to recycling information. In particular, it allows users to search for specific materials and learn about their recyclability and proper disposal. Some searches also show nearby facilities that collect those materials and dispose of them properly. This is a full-stack software project that explores open-source software and APIs, UI/UX design, and iOS development.
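This listing does not include any of the project's code; as a rough illustration of the material-lookup flow the description implies, here is a minimal Swift sketch in which the Material type, sample data, and lookup function are all hypothetical.

```swift
import Foundation

// Hypothetical model of a recyclable material (illustrative only; not from the thesis).
struct Material {
    let name: String
    let isRecyclable: Bool
    let disposalInstructions: String
}

// Small in-memory catalog standing in for the app's recycling data source.
let catalog: [Material] = [
    Material(name: "Glass Bottle", isRecyclable: true,
             disposalInstructions: "Rinse and place in curbside recycling."),
    Material(name: "Styrofoam", isRecyclable: false,
             disposalInstructions: "Check for a local drop-off facility; otherwise landfill.")
]

// Case-insensitive search, roughly matching the "search for a material" flow described above.
func lookup(_ query: String) -> Material? {
    catalog.first { $0.name.lowercased().contains(query.lowercased()) }
}

if let match = lookup("glass") {
    print("\(match.name): recyclable = \(match.isRecyclable). \(match.disposalInstructions)")
}
```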
ContributorsTran, Nikki (Author) / Ganesh, Tirupalavanam (Thesis director) / Meuth, Ryan (Committee member) / Watts College of Public Service & Community Solutions (Contributor) / Department of Information Systems (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
As technology's influence pushes every industry to change, healthcare professionals must move to a more connected model. The nearly ubiquitous presence of smartphones presents a unique opportunity for physicians to collect and process data from their patients more frequently. The Mayo Clinic, in partnership with the Barrett Honors College, has designed and developed a prototype smartphone application targeting palliative care patients. The application collects symptom data from the patients and presents it to their doctors. This development project serves as a proof of concept, showing how such an application might look and function, and it has revealed significant possibilities for the application's future.
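As a rough illustration of the kind of symptom data collection described above, here is a minimal Swift sketch; the SymptomReport type and its fields are illustrative assumptions, not the Mayo Clinic application's actual data model.

```swift
import Foundation

// Hypothetical symptom report, illustrating the kind of data the prototype collects
// (field names are illustrative assumptions, not the actual app's schema).
struct SymptomReport: Codable {
    let patientID: UUID
    let symptom: String
    let severity: Int      // e.g. 0 (none) to 10 (worst imaginable)
    let recordedAt: Date
}

// Encode a report as JSON, as an app like this might do before presenting it to clinicians.
let report = SymptomReport(patientID: UUID(), symptom: "Nausea", severity: 6, recordedAt: Date())
let encoder = JSONEncoder()
encoder.dateEncodingStrategy = .iso8601
if let data = try? encoder.encode(report), let json = String(data: data, encoding: .utf8) {
    print(json)
}
```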
ContributorsGaney, David Howard (Author) / Balasooriya, Janaka (Thesis director) / Lipinski, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / Computer Science and Engineering Program (Contributor)
Created2015-05
Description
When planning a road trip today, existing solutions let the user know what lies along their route, but they often present too much information, which can overwhelm the user. Suggestions appear all along the route, not just at the times when they are actually needed. RoutePlanner filters that information and presents only the data the user would need at a particular time: gas station suggestions appear when the gas tank's range is about to run out, and restaurant suggestions appear only around lunchtime. The iOS app takes the user's origin and destination, provides the route as given by Google Maps, and then surfaces stop suggestions at the appropriate times. Each route is broken down into a number of steps, which are essentially sequences of coordinate points. These coordinate collections are used to locate a point at a certain distance or duration from the origin. Given a coordinate, the app queries the APIs for places of interest and moves on to the next stop, continuing until the end of the route.
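The description above outlines the core routing idea: break the route into coordinate points, find the point at a given distance from the origin, and query for places of interest there. The following Swift sketch illustrates that distance walk using CoreLocation; the function and sample route are hypothetical, not the thesis code.

```swift
import CoreLocation

// A minimal sketch (not the thesis code) of the idea described above: walk along a route's
// coordinate points accumulating distance, and return the point roughly at a target distance
// from the origin, where the app would then query a places API for nearby stops.
func coordinate(along route: [CLLocationCoordinate2D],
                atDistance target: CLLocationDistance) -> CLLocationCoordinate2D? {
    guard var previous = route.first else { return nil }
    var travelled: CLLocationDistance = 0
    for point in route.dropFirst() {
        let segment = CLLocation(latitude: previous.latitude, longitude: previous.longitude)
            .distance(from: CLLocation(latitude: point.latitude, longitude: point.longitude))
        if travelled + segment >= target { return point }
        travelled += segment
        previous = point
    }
    return route.last
}

// Example: find the point about 50 km along a toy route, e.g. to search for gas stations there.
let route = [CLLocationCoordinate2D(latitude: 33.42, longitude: -111.93),
             CLLocationCoordinate2D(latitude: 33.60, longitude: -112.30),
             CLLocationCoordinate2D(latitude: 34.00, longitude: -112.80)]
if let stop = coordinate(along: route, atDistance: 50_000) {
    print("Query places of interest near \(stop.latitude), \(stop.longitude)")
}
```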
ContributorsDamania, Harsh Abhay (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2014-12
Description
The face of computing is constantly changing. Wearable computers in the form of glasses or watches are becoming more and more common. These devices have very small screens (measured in millimeters), and users often interact with them through voice input and audio feedback. Weather is one of the most regularly checked app categories on smart devices, but weather results on these devices are often limited to raw data, canned responses, or sentence templates with numbers plugged in. The goal of this project was to build a system that could generate weather forecast text, which could then be read to a user through text-to-speech. By using methods in language generation, the system can generate weather forecast text in millions of different ways. This is all computed locally, and it covers every possible weather case. In order to generate natural weather forecast texts, the system retrieves raw weather data from a weather API and creates the text through six stages: content determination, document structuring, sentence aggregation, lexical choice, referring expression generation, and text realization. Content determination is the process of deciding what information to include in a computer-generated text. The document structuring phase deals with the order and structure of the information. Sentence aggregation is the merging of similar sentences to improve readability and reduce redundancy. Lexical choice is the process of putting words to concepts. Referring expression generation is the process of identifying objects, regions, time periods, and locations within a text. Finally, text realization involves creating sentences with proper syntax, morphology, and orthography. Through these six stages, a system was developed that can generate unique weather forecast text from raw data accurately and efficiently. It was built for iOS devices with Apple's new programming language, Swift, and it will be ported to the Apple Watch when the API is fully opened to developers.
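As a toy illustration of how a few of these stages (content determination, lexical choice, sentence aggregation, and text realization) might fit together, here is a minimal Swift sketch; the data model, thresholds, and wording rules are assumptions, not the thesis implementation.

```swift
// A minimal, hypothetical sketch of the staged pipeline described above (names and
// simplifications are illustrative assumptions, not the thesis implementation).
struct WeatherData {
    let highF: Int
    let lowF: Int
    let precipitationChance: Double   // 0.0 ... 1.0
}

// Content determination + lexical choice: decide what to mention and pick words for it.
func phrases(for data: WeatherData) -> [String] {
    var selected: [String] = []
    selected.append("a high of \(data.highF) and a low of \(data.lowF)")
    if data.precipitationChance >= 0.5 {
        selected.append("rain is likely")                   // wording for a high probability
    } else if data.precipitationChance >= 0.2 {
        selected.append("there is a slight chance of rain") // wording for a low probability
    }
    return selected
}

// Sentence aggregation + text realization: merge the phrases into one readable sentence.
func realize(_ phrases: [String]) -> String {
    "Expect " + phrases.joined(separator: ", and ") + "."
}

let forecast = realize(phrases(for: WeatherData(highF: 78, lowF: 55, precipitationChance: 0.6)))
print(forecast)   // "Expect a high of 78 and a low of 55, and rain is likely."
```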
ContributorsJorgensen, Jacob Paul (Author) / Baral, Chitta (Thesis director) / Faucon, Christophe (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2015-05
Description
CourseKarma is a web application that engages students in their own learning through peer-driven social networking. The influence of technology on students is advancing faster than the school system, and a major gap still lingers between traditional learning techniques and the fast-paced, online culture of today's generation. CourseKarma enriches the educational experience of today's student by creating a space for collaborative inquiry and illuminating the opportunities of self- and group learning through online collaboration. The features of CourseKarma foster this student-driven environment. The main focus is a news feed and a question-and-answer component that provide a space for students to share instant updates as well as ask and answer questions of the community. The community can be as broad as the entire ASU student body, as specific as students in BIO155, or even more targeted via specific subjects or skills. CourseKarma also provides reputation points, the sum of all votes a student has received, which identify the individual's level or ranking in each subject or class. This not only gamifies the usual day-to-day learning environment, but also provides an in-depth analysis of the individual's skills, accomplishments, and knowledge. The community is also able to contribute and consult course and professor descriptions and feedback. These take a review format, giving students an opportunity to share feedback on their experience and giving incoming students the chance to prepare for their future classes. All of a student's contributions and collaborative activity within CourseKarma are displayed on their personal profile, creating a timeline of their academic achievements. The application was created using modern web programming technologies: AngularJS, JavaScript, jQuery, Bootstrap, HTML5, and CSS3 for styling and front-end development, Mustache.js for client-side templating, and Firebase (via AngularFire) as the back end and NoSQL database. Pivotal Tracker was used for project management and user-story generation, and GitHub for version control and repository hosting. Object-oriented programming concepts figured heavily in the design of the various data structures, and a voting algorithm was used to manage voting on specific posts. Down the road, CourseKarma could even serve as an add-on within LinkedIn or Facebook, providing a quick yet extremely in-depth look at an individual's education, skills, and potential to learn, based entirely on their actual contributions to their academic community rather than just a bio they wrote up.
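As a small illustration of the reputation-point idea, a Swift sketch follows; CourseKarma itself was built with JavaScript and Firebase, and the vote model here is an assumption rather than the application's actual schema.

```swift
// Hypothetical vote model illustrating the reputation-point idea described above
// (CourseKarma itself was built with JavaScript and Firebase; this is not its code).
struct Vote {
    let subject: String
    let value: Int   // +1 for an upvote, -1 for a downvote
}

// Reputation is the sum of all votes received, tallied per subject or class.
func reputationBySubject(_ votes: [Vote]) -> [String: Int] {
    votes.reduce(into: [:]) { totals, vote in
        totals[vote.subject, default: 0] += vote.value
    }
}

let votes = [Vote(subject: "BIO155", value: 1),
             Vote(subject: "BIO155", value: 1),
             Vote(subject: "CSE110", value: -1)]
print(reputationBySubject(votes))   // ["BIO155": 2, "CSE110": -1]
```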
ContributorsCho, Sungjae (Author) / Mayron, Liam (Thesis director) / Lobock, Alan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Arts, Media and Engineering (Contributor)
Created2015-05
Description
This paper details the process of designing both a simulation of the board game Jaipur and an artificial intelligence (AI) agent that can play the game against a human player. When designing an AI for a card game, two major problems arise. The first is the difficulty of using a search space to analyze every possible set of future moves. Due to the randomized nature of the deck of cards, the search space rapidly leads to an exponentially growing set of potential game states when one tries to look more than one turn ahead. The second is the element of uncertainty introduced by opponent feedback: certain moves are weak to specific opponent reactions, and these are difficult to predict due to hidden information. To circumvent these problems, the AI uses a greedy approach to decision making, attempting to maximize the value of its plays immediately rather than playing for future turns. The agent uses conditional statements to evaluate the game state and choose the action it deems optimal: a heuristic places an expected value (EV) on each of the goods it can choose from, and the agent selects the best option based on this evaluation. The simulation was initially implemented in C++ as a terminal application and then translated to a graphical interface using Unity and C#.
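The greedy decision rule described above can be illustrated with a short sketch. The following Swift code is not the thesis implementation (which used C++ and later C#); the goods, values, and threshold are assumptions.

```swift
// A minimal sketch of the greedy expected-value selection described above, shown in Swift
// (the thesis implementation used C++ and later C#; the card values here are assumptions).
struct Good {
    let name: String
    let expectedValue: Double
}

enum Action {
    case take(Good)
    case pass
}

// Greedy policy: ignore future turns and take whichever available good has the highest EV,
// as long as it clears a minimum threshold; otherwise pass.
func chooseAction(from market: [Good], minimumEV: Double = 1.0) -> Action {
    guard let best = market.max(by: { $0.expectedValue < $1.expectedValue }),
          best.expectedValue >= minimumEV else { return .pass }
    return .take(best)
}

let market = [Good(name: "Diamonds", expectedValue: 6.5),
              Good(name: "Leather",  expectedValue: 1.5),
              Good(name: "Spice",    expectedValue: 3.0)]
print(chooseAction(from: market))   // prints the .take case for "Diamonds"
```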
ContributorsOrr, James Christopher (Author) / Kobayashi, Yoshihiro (Thesis director) / Selgrad, Justin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The objective of this creative project was to gain experience in digital modeling, animation, coding, shader development and implementation, model integration techniques, and the application of gaming principles and design through developing a professional educational game. The team collaborated with Glendale Community College (GCC) to produce an interactive product intended to supplement educational instruction regarding nutrition. The educational game developed, "Nutribots," features the player acting as a nutrition-based nanobot sent to the small intestine to help the body. Throughout the game the player is asked nutrition-based questions to test their knowledge of proteins, carbohydrates, and lipids. If the player is unable to answer a question, they must use game mechanics to progress and receive the information as a reward. The level is completed as soon as the question is answered correctly. If the player answers questions incorrectly twenty times over the course of the game, the team loses faith in the player, and the player must restart from the title screen. This limits guessing and ensures the player retains the information through repetition once it is demonstrated that they do not know the answers. The team was split into two groups for the development of this game. The first part of the team developed models, animations, and textures using Autodesk Maya 2016 and Marvelous Designer. The second part of the team developed code and shaders and implemented products from the first team using Unity and Visual Studio. Once a prototype of the game was developed, it was showcased among peers to gather feedback, and the team implemented the desired changes accordingly. Development for this project began in November 2015 and ended in April 2017. Special thanks to Laura Avila, Department Chair, and Jennifer Nolz from the Glendale Community College Technology and Consumer Sciences, Food and Nutrition Department.
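As a small illustration of the twenty-wrong-answer rule described above, here is a minimal Swift sketch; the game itself was built in Unity with C#, and this class is purely hypothetical.

```swift
// A minimal sketch of the wrong-answer limit described above, shown in Swift for illustration
// (the game itself was built in Unity with C#; the limit of 20 comes from the description).
final class QuizSession {
    private(set) var wrongAnswers = 0
    let maxWrongAnswers = 20

    // Returns true if the game should reset to the title screen after this answer.
    func record(answerCorrect: Bool) -> Bool {
        guard !answerCorrect else { return false }
        wrongAnswers += 1
        return wrongAnswers >= maxWrongAnswers
    }
}

let session = QuizSession()
let mustReset = session.record(answerCorrect: false)
print("Wrong answers so far: \(session.wrongAnswers), reset: \(mustReset)")
```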
ContributorsNolz, Daisy (Co-author) / Martin, Austin (Co-author) / Quinio, Santiago (Co-author) / Armstrong, Jessica (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Valderrama, Jamie (Committee member) / School of Arts, Media and Engineering (Contributor) / School of Film, Dance and Theatre (Contributor) / Department of English (Contributor) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Herberger Institute for Design and the Arts (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
Virtual reality gives users the opportunity to immerse themselves in an accurately simulated computer-generated environment. These environments are accurately simulated in that they provide the appearance of, and allow users to interact with, the simulated environment. Using head-mounted displays, controllers, and auditory feedback, virtual reality provides a convincing simulation of interactable virtual worlds (Wikipedia, "Virtual reality"). The many worlds of virtual reality are often expansive, colorful, and detailed. However, there is one great flaw among them, an emotion evoked in many users through the exploration of such worlds: loneliness.
The content in these worlds is impressive, immersive, and entertaining. Without other people to share in these experiences, however, users can find themselves lonely. They discover that no matter how many objects and colors surround them in countless virtual worlds, every world feels empty. As humans are social beings by nature, they feel lost without a sense of human connection and interaction. Multiplayer experiences offer this missing element in the immersion of virtual reality worlds. Multiplayer gives users the opportunity to interact with other live people in a virtual simulation, which creates lasting memories and deeper, more meaningful immersion.
ContributorsJorgensen, Nicholas Keith (Co-author) / Jorgensen, Caitlin Nicole (Co-author) / Selgrad, Justin (Thesis director) / Ehgner, Arnaud (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Charleston, South Carolina currently faces serious annual flooding issues due to tides and rainfall. These issues are expected to get significantly worse within the next few decades, reaching a projected 180 days a year of flooding by 2045 (Carter et al., 2018). Several permanent solutions are in progress by the City of Charleston; however, these solutions are years away at minimum and face development issues. This thesis attempts to treat some of the symptoms of flooding, such as difficulty navigating, by creating an iPhone application that predicts flooding and helps people navigate around it safely. Specifically, the application takes rainfall and tide levels into account to display actively flooded areas of downtown Charleston and, whenever possible, to route users from their location to a destination around these flooded areas.
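As a rough illustration of a threshold-based flood check like the one described above, here is a minimal Swift sketch; the street segments, thresholds, and prediction rule are made-up assumptions, not the thesis's actual model or data.

```swift
// A minimal sketch of the flood check described above, with made-up thresholds
// (the thesis's actual prediction model and data sources are not shown in this listing).
struct StreetSegment {
    let name: String
    let floodsAtTideFeet: Double        // tide height at which this segment floods (assumed)
    let floodsAtRainfallInches: Double  // rainfall at which this segment floods (assumed)
}

func isFlooded(_ segment: StreetSegment, tideFeet: Double, rainfallInches: Double) -> Bool {
    tideFeet >= segment.floodsAtTideFeet || rainfallInches >= segment.floodsAtRainfallInches
}

// Segments flagged as flooded would be excluded when routing the user to their destination.
let segments = [StreetSegment(name: "Market St", floodsAtTideFeet: 7.0, floodsAtRainfallInches: 1.5),
                StreetSegment(name: "King St",   floodsAtTideFeet: 7.5, floodsAtRainfallInches: 2.0)]
let flooded = segments.filter { isFlooded($0, tideFeet: 7.2, rainfallInches: 0.4) }
print(flooded.map(\.name))   // ["Market St"]
```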
ContributorsSalisbury, Mason (Author) / Balasooriya, Janaka (Thesis director) / Faucon, Christophe (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Natural Language Processing and Virtual Reality are both hot topics at present. How can we synthesize the two into a cohesive experience? The game focuses on users issuing vocal commands, building structures, and memorizing spatial objects. To obtain usable vocal commands, the IBM Watson API for Natural Language Processing was incorporated into our game system. User experience elements such as gestures, UI color changes, and images were used to help guide users in memorizing and building structures. The process of creating these elements was streamlined through the VRTK library in Unity. The game has two segments. The first is a tutorial level where the user learns to perform motions and in-game actions. The second is a game where the user must correctly create a structure by using vocal commands and spatial recognition. A standardized usability test, the System Usability Scale, was used to evaluate the effectiveness of the game, and a survey was created to gather more descriptive user opinions. Overall, users gave a positive score on the System Usability Scale and slightly positive reviews in the custom survey.
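As a small illustration of mapping a recognized transcript to an in-game build action, here is a hypothetical Swift sketch; it does not show the IBM Watson API or the project's Unity/C# code, and the command vocabulary is an assumption.

```swift
import Foundation

// A hypothetical sketch of mapping recognized speech to in-game build actions
// (the project used the IBM Watson API inside Unity/C#; none of that API is shown here).
enum BuildCommand {
    case place(shape: String)
    case remove(shape: String)
    case unknown
}

// Very simple keyword matching over the transcript returned by the speech service.
func parse(_ transcript: String) -> BuildCommand {
    let words = transcript.lowercased()
    let shapes = ["cube", "sphere", "pyramid"]
    guard let shape = shapes.first(where: { words.contains($0) }) else { return .unknown }
    if words.contains("remove") || words.contains("delete") {
        return .remove(shape: shape)
    }
    return .place(shape: shape)
}

print(parse("Place a cube on the platform"))   // place(shape: "cube")
print(parse("Delete the sphere"))              // remove(shape: "sphere")
```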
ContributorsOrtega, Excel (Co-author) / Ryan, Alexander (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computing and Informatics Program (Contributor) / School of Art (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05