Matching Items (42)
Description
Many researchers aspire to create robotics systems that assist humans in common office tasks, especially by taking over delivery and messaging tasks. For meaningful interactions to take place, a mobile robot must be able to identify the humans it interacts with and communicate successfully with them. It must also be able to successfully navigate the office environment. While mobile robots are well suited for navigating and interacting with elements inside a deterministic office environment, attempting to interact with human beings in an office environment remains a challenge due to the limited cost-efficient compute power available onboard the robot. In this work, I propose the use of remote cloud services to offload intensive interaction tasks. I detail the interactions required in an office environment and discuss the challenges faced when implementing a human-robot interaction platform in a stochastic office environment. I also experiment with cloud services for facial recognition, speech recognition, and environment navigation and discuss my results. As part of my thesis, I have integrated a human-robot interaction system built on cloud APIs into a mobile robot, enabling it to navigate the office environment, identify humans within the environment, and communicate with these humans.
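The offloading pattern this abstract describes can be shown with a minimal sketch: rather than running recognition models onboard, the robot posts a camera frame to a hosted service and acts on the response. The thesis does not name a specific provider, so the endpoint, credential, and response schema below are hypothetical placeholders.

```python
# Minimal offloading sketch; the endpoint, key, and response fields are
# hypothetical stand-ins for whichever cloud API the robot actually uses.
import requests

CLOUD_FACE_API = "https://cloud.example.com/v1/face/identify"  # hypothetical
API_KEY = "YOUR_API_KEY"

def identify_person(jpeg_bytes: bytes) -> str:
    """Send one camera frame to the remote service and return the recognized person."""
    resp = requests.post(
        CLOUD_FACE_API,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5,  # keep the robot responsive if the network is slow
    )
    resp.raise_for_status()
    return resp.json().get("person_id", "unknown")
```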
Created: 2017-05
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or cause disruption of processes and daily operations. One of the most important parts is being able to predict and anticipate failures in the system, in order to make sure that they are fixed before they turn into large issues. One specific area where preventive maintenance is a very big part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Now, instead of a human being able to look at a car and diagnose any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system designed to meet all these criteria and more. The EPMS is composed of a central computer which monitors all major electronic components in an autonomous vehicle through standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component in the vehicle is compared to pre-set operating conditions. These operating conditions are set in order to encompass all normal ranges of output. If the sensor data is outside the margins, the warning and deviation are recorded and a severity level is calculated. In addition to the individual component models, there is also a vehicle-wide model, which predicts how necessary maintenance is for the vehicle. All of these results are analyzed by a simple heuristic algorithm, and a decision is made about the vehicle's health status, which is sent out to the Fleet Management System. This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that allows the system to determine maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System is able to oversee inspections, and the system operator is able to set parameters to decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle. The models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used in a variety of different vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
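The margin check at the core of this design can be illustrated with a short sketch. The sensor names, margins, and severity formula below are illustrative stand-ins, not the empirically derived models the thesis describes.

```python
# Sketch of the margin-check idea: compare a reading to pre-set operating
# margins and score the deviation. Margins and the severity formula are
# placeholders, not the thesis's tuned models.
OPERATING_MARGINS = {
    "battery_temp_c": (10.0, 45.0),
    "motor_current_a": (0.0, 30.0),
}

def check_component(name: str, reading: float) -> dict:
    """Flag a sensor reading outside its operating margins and rate severity."""
    low, high = OPERATING_MARGINS[name]
    if low <= reading <= high:
        return {"component": name, "ok": True, "severity": 0.0}
    # Deviation: distance outside the margin, scaled by the margin width.
    deviation = (low - reading) if reading < low else (reading - high)
    severity = deviation / (high - low)
    return {"component": name, "ok": False,
            "deviation": deviation, "severity": severity}

print(check_component("battery_temp_c", 52.0))
# -> flags the reading and reports a severity proportional to the deviation
```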
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better through recognizing different features present within the image. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors. For facial expressions, no such large datasets are available. Due to this limited availability of data required to train a new CNN, the idea of using naïve domain adaptation is explored. Instead of creating and using a new CNN trained specifically to extract features related to FER, a previously trained CNN, originally trained for another computer vision task, is used. Work for this research involved creating a system that can run a CNN, extract feature vectors from the CNN, and classify these extracted features. Once this system was built, different aspects of the system were tested and tuned. These aspects include the pre-trained CNN that was used, the layer from which features were extracted, the normalization used on input images, and the training data for the classifier. Once properly tuned, the created system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
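A minimal sketch of this naïve domain adaptation pipeline, assuming features have already been extracted from an intermediate layer of a frozen, pre-trained CNN. The 4096-dimensional feature size, seven expression classes, and linear-SVM classifier are illustrative assumptions, not necessarily the configuration the thesis settled on.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in for CNN activations: in the real system each row would be the
# output of a chosen layer of a pre-trained network for one face image.
X_train = rng.normal(size=(200, 4096))
y_train = rng.integers(0, 7, size=200)   # e.g. 7 basic expression classes
X_test = rng.normal(size=(20, 4096))

clf = LinearSVC()          # lightweight classifier on top of frozen features
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```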
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Technical innovation has always played a part in live theatre, from mechanical pieces like lifts and trapdoors to the more recent integration of digital media. The advances of the art form encourage the development of technology, and at the same time, technological development enables the advancement of theatrical expression. As mechanics, lighting, sound, and visual media have made their way into the spotlight, advances in theatrical robotics continue to push for their inclusion in the director's toolbox. However, much of the technology available is gated by high prices and unintuitive interfaces, designed for large troupes and specialized engineers, making it difficult to access for small schools and students new to the medium. As a group of engineering students with a vested interest in the development of the arts, this thesis team designed a system that will enable troupes from any background to participate in the advent of affordable automation. The intended result of this thesis project was to create a robotic platform that interfaces with custom software, receiving commands and transmitting position data, and to design that software so that a user can define intuitive cues for their shows. In addition, a new pathfinding algorithm was developed to support free-roaming automation in a 2D space. The final product consisted of a relatively inexpensive (< $2000) free-roaming platform, made entirely with commercial off-the-shelf (COTS) parts and standard materials, and a corresponding control system with cue design, wireless path following, and position tracking. The platform was built to support 1000 lbs and includes integrated emergency stopping. The software allows for custom cue design, speed variation, and dynamic path following. Both the blueprints and the source code for the platform and control system have been released to open-source repositories, to encourage further development in the area of affordable automation. The platform itself was donated to the ASU School of Theater.
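The abstract does not spell out the new pathfinding algorithm, so the sketch below shows a generic grid-based A* search as a stand-in for the kind of free-roaming 2D planning involved; the thesis's own algorithm may differ.

```python
# Generic grid A* sketch of free-roaming 2D pathfinding (not the thesis's
# specific algorithm). grid cells: 0 = free, 1 = blocked.
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 0/1 occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = (nr, nc)
                heapq.heappush(frontier,
                               (cost + 1 + h(step), cost + 1, step, path + [step]))
    return None  # no path exists

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```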
Contributors: Hollenbeck, Matthew D. (Co-author) / Wiebel, Griffin (Co-author) / Winnemann, Christopher (Thesis director) / Christensen, Stephen (Committee member) / Computer Science and Engineering Program (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This paper details the process for designing both a simulation of the board game Jaipur and an artificial intelligence (AI) agent that can play the game against a human player. When designing an AI for a card game, there are two major problems that can arise. The first is the difficulty of using a search space to analyze every possible set of future moves. Due to the randomized nature of the deck of cards, the search space rapidly leads to an exponentially growing set of potential game states to analyze when one tries to look more than one turn ahead. The second aspect that poses difficulty is the element of uncertainty that exists from opponent feedback. Certain moves are weak to specific opponent reactions, and these are difficult to predict due to hidden information. To circumvent these problems, the AI uses a greedy approach to decision making, attempting to maximize the value of its plays immediately rather than playing for future turns. The agent uses conditional statements to evaluate the game state, uses a heuristic to place an expected value (EV) on the goods it can choose from, and selects the action it deems best based on this evaluation. Initial implementation of the simulation was done in C++ as a terminal application, and it was then translated to a graphical interface using Unity and C#.
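The greedy decision rule reduces to scoring every legal move with a heuristic EV and taking the maximum, as in this sketch; the goods values and move encoding are illustrative, not the tuned heuristic from the thesis.

```python
# Greedy EV move selection: maximize immediate value, ignore future turns.
# GOOD_VALUES approximates Jaipur goods ranks for illustration only.
GOOD_VALUES = {"diamond": 7, "gold": 6, "silver": 5,
               "cloth": 3, "spice": 3, "leather": 2}

def expected_value(move):
    """Heuristic EV of a move: total value of the goods it would acquire."""
    return sum(GOOD_VALUES[g] for g in move["goods"])

def choose_move(legal_moves):
    """Greedy agent: pick the legal move with the highest immediate EV."""
    return max(legal_moves, key=expected_value)

moves = [{"name": "take_leather", "goods": ["leather", "leather"]},
         {"name": "take_gold", "goods": ["gold"]}]
print(choose_move(moves)["name"])  # -> "take_gold" (EV 6 beats EV 4)
```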
Contributors: Orr, James Christopher (Author) / Kobayashi, Yoshihiro (Thesis director) / Selgrad, Justin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
In order to adequately introduce students to computer science and robotics in an exciting and engaging manner, certain teaching techniques should be used. In recent years, some of the most popular paradigms have been Visual Programming Languages. Visual Programming Languages are meant to introduce the problem-solving skills and basic programming constructs inherent to all modern-day languages by allowing users to write programs visually as opposed to textually. By bypassing the need to learn syntax, students can focus on the thinking behind developing an algorithm and see immediate results that help generate excitement for the field and reduce disinterest due to startup complexity and burnout. The Introduction to Engineering course at Arizona State University supports this approach by teaching students the basics of autonomous maze-traversing algorithms and using ASU VIPLE, a Visual Programming Language developed to connect with and direct real-world robots. However, some startup time is needed to learn how to interface with these robots using ASU VIPLE. That is why the HTML5 Autonomous Robot Web Simulator was created: by encouraging students to use the simulator, the problem solving behind autonomous maze-traversing algorithms can be introduced more quickly and with immediate affirmation. Our goal was to improve this simulator and add features so that it could be accessed and used for a wider variety of introductory Computer Science lessons. Features scattered across past implementations of robotic simulators were aggregated in a cross-platform solution. Upon initial development, a classroom test group revealed usability concerns and demonstrated students' mental models. Mean time for task completion was 8.1 minutes, compared to 2 minutes for the authors. The simulator was updated in response to test group feedback and new instructor requirements. The new implementation reduces programming overhead while maintaining a learning environment with support for even the most complex applications.
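As a concrete example of the kind of autonomous maze-traversing algorithm such a simulator is used to teach, here is a sketch of the classic right-hand-rule wall follower; the course's own assignments may use different strategies.

```python
# Right-hand-rule maze traversal sketch on a 0/1 grid (1 = wall).
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W

def right_hand_walk(maze, start, goal, max_steps=1000):
    """Keep a wall on the right; return the list of visited cells."""
    pos, facing, path = start, 1, [start]  # start facing East
    for _ in range(max_steps):
        if pos == goal:
            return path
        for turn in (1, 0, 3, 2):  # prefer right, then straight, left, back
            d = (facing + turn) % 4
            r, c = pos[0] + DIRS[d][0], pos[1] + DIRS[d][1]
            if 0 <= r < len(maze) and 0 <= c < len(maze[0]) and maze[r][c] == 0:
                facing, pos = d, (r, c)
                path.append(pos)
                break
    return None  # gave up after max_steps

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(right_hand_walk(maze, (0, 0), (2, 0)))
```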
Contributors: Rodewald, Spencer (Co-author) / Patel, Ankit (Co-author) / Chen, Yinong (Thesis director) / Chattin, Linda (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The objective of this creative project was to gain experience in digital modeling, animation, coding, shader development and implementation, model integration techniques, and the application of gaming principles and design through developing a professional educational game. The team collaborated with Glendale Community College (GCC) to produce an interactive product intended to supplement educational instruction regarding nutrition. The educational game developed, "Nutribots," features the player acting as a nutrition-based nanobot sent to the small intestine to help the body. Throughout the game, the player is asked nutrition-based questions to test their knowledge of proteins, carbohydrates, and lipids. If the player is unable to answer a question, they must use game mechanics to progress and receive the information as a reward. The level is completed as soon as the question is answered correctly. If the player answers questions incorrectly twenty times within the entirety of the game, the team loses faith in the player, and the player must restart from the title screen. This limits guessing and ensures the player retains the information through repetition once it is demonstrated that they do not know the answers. The team was split into two groups for the development of this game. The first part of the team developed models, animations, and textures using Autodesk Maya 2016 and Marvelous Designer. The second part of the team developed code and shaders and implemented products from the first team using Unity and Visual Studio. Once a prototype of the game was developed, it was showcased among peers to gain feedback. Upon receiving feedback, the team implemented the desired changes accordingly. Development for this project began in November 2015 and ended in April 2017. Special thanks to Laura Avila, Department Chair, and Jennifer Nolz from the Glendale Community College Technology and Consumer Sciences, Food and Nutrition Department.
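The game itself was built in Unity with C#; the Python sketch below only illustrates the answer-limit rule described above, with hypothetical state names.

```python
# Illustrative sketch of the twenty-wrong-answers rule; state names are
# hypothetical, and the real logic lives in the Unity/C# game.
MAX_WRONG_ANSWERS = 20

class QuizState:
    def __init__(self):
        self.wrong_answers = 0  # counted across the entire game

    def answer(self, correct: bool) -> str:
        if correct:
            return "level_complete"          # level ends once answered correctly
        self.wrong_answers += 1
        if self.wrong_answers >= MAX_WRONG_ANSWERS:
            return "reset_to_title"          # limits guessing
        return "retry_with_game_mechanics"   # play on to earn the information

state = QuizState()
print(state.answer(False))  # -> "retry_with_game_mechanics"
```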
Contributors: Nolz, Daisy (Co-author) / Martin, Austin (Co-author) / Quinio, Santiago (Co-author) / Armstrong, Jessica (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Valderrama, Jamie (Committee member) / School of Arts, Media and Engineering (Contributor) / School of Film, Dance and Theatre (Contributor) / Department of English (Contributor) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Herberger Institute for Design and the Arts (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Virtual reality gives users the opportunity to immerse themselves in an accurately simulated, computer-generated environment. These environments are accurately simulated in that they provide the appearance of, and allow users to interact with, the simulated environment. Using head-mounted displays, controllers, and auditory feedback, virtual reality provides a convincing simulation of interactable virtual worlds (Wikipedia, "Virtual reality"). The many worlds of virtual reality are often expansive, colorful, and detailed. However, there is one great flaw among them, an emotion evoked in many users through the exploration of such worlds: loneliness.
The content in these worlds is impressive, immersive, and entertaining. Without other people to share in these experiences, however, one can find themselves lonely. Users discover a feeling that no matter how many objects and colors surround them in countless virtual worlds, every world feels empty. As humans are social beings by nature, they feel lost without a sense of human connection and human interaction. Multiplayer experiences bring this missing element into the immersion of virtual reality worlds. Multiplayer offers users the opportunity to interact with other live people in a virtual simulation, which creates lasting memories and deeper, more meaningful immersion.
Contributors: Jorgensen, Nicholas Keith (Co-author) / Jorgensen, Caitlin Nicole (Co-author) / Selgrad, Justin (Thesis director) / Ehgner, Arnaud (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized "role" of the object being manipulated. The neural network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, suggesting a promising new framework for approaching generalized planning problems.
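A hedged sketch of such a two-headed network, written here in PyTorch: one head predicts the action, the other the manipulated object's generalized role. The state encoding, layer sizes, and class counts are placeholders, not the architecture the thesis reports.

```python
# Two-headed prediction network sketch: shared trunk, separate action and
# role heads. All dimensions below are illustrative placeholders.
import torch
import torch.nn as nn

class ActionRoleNet(nn.Module):
    def __init__(self, state_dim=64, n_actions=4, n_roles=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.action_head = nn.Linear(128, n_actions)  # (i) best action
        self.role_head = nn.Linear(128, n_roles)      # (ii) object "role"

    def forward(self, state):
        h = self.trunk(state)
        return self.action_head(h), self.role_head(h)

net = ActionRoleNet()
state = torch.randn(1, 64)   # stand-in encoding of an abstract problem state
action_logits, role_logits = net(state)
print(action_logits.argmax(dim=1), role_logits.argmax(dim=1))
```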
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Medical records are increasingly being recorded in the form of electronic health records (EHRs), with a significant amount of patient data recorded as unstructured natural language text. Consequently, being able to extract and utilize clinical data present within these records is an important step in furthering clinical care. One important aspect of these records is the presence of prescription information. Existing techniques for extracting prescription information, which includes medication names, dosages, frequencies, reasons for taking, and mode of administration, from unstructured text have focused on the application of rule- and classifier-based methods. While state-of-the-art systems can be effective in extracting many types of information, they require significant effort to develop hand-crafted rules and conduct effective feature engineering. This paper presents the use of a bidirectional LSTM with a CRF tagging layer, initialized with precomputed word embeddings, for extracting prescription information from sentences without requiring significant feature engineering. The experiments, run on the i2b2 2009 dataset, achieve a macro F1 measure of 0.8562, with scores above 0.9449 on four of the six categories, indicating significant potential for this model.
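A sketch of the tagging architecture, assuming PyTorch: a BiLSTM over precomputed word embeddings produces per-token emission scores. The CRF layer that decodes the best tag sequence is omitted for brevity, and the vocabulary size, dimensions, and tag set below are placeholders.

```python
# BiLSTM emission-scoring sketch; a CRF layer (omitted) would decode the
# best tag sequence from these per-token scores. Sizes are placeholders.
import torch
import torch.nn as nn

VOCAB, EMB_DIM, HIDDEN, N_TAGS = 1000, 100, 64, 5  # e.g. O, MED, DOSE, FREQ, MODE

pretrained = torch.randn(VOCAB, EMB_DIM)  # stand-in for precomputed embeddings

class BiLSTMTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
        self.lstm = nn.LSTM(EMB_DIM, HIDDEN, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * HIDDEN, N_TAGS)  # emission scores per token

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)  # a CRF would decode these into the best tag path

tagger = BiLSTMTagger()
sentence = torch.randint(0, VOCAB, (1, 12))   # one 12-token sentence
print(tagger(sentence).shape)                  # -> torch.Size([1, 12, 5])
```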
Contributors: Rawal, Samarth Chetan (Author) / Baral, Chitta (Thesis director) / Anwar, Saadat (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05