Matching Items (35)
Description
Bioscience High School, a small STEAM-focused (Science, Technology, Engineering, Arts, Math) magnet high school located in Downtown Phoenix, has been pushing to establish a computer science curriculum for all of its students from freshman to senior year. The school's Mision (Mission and Vision) is to "...provide a rigorous, collaborative, and relevant academic program emphasizing an innovative, problem-based curriculum that develops literacy in the sciences, mathematics, and the arts, thus cultivating critical thinkers, creative problem-solvers, and compassionate citizens, who are able to thrive in our increasingly complex and technological communities." Computational thinking is an important part of developing the future problem solvers Bioscience High School aims to produce. The school is also unique in that every student has a computer available to use, so adding computer science to the curriculum aligns with its goal of utilizing its resources to their full potential. However, the school's attempt at computer science integration has fallen short due to a lack of expertise among the math and science teachers, and the resulting lack of training and support has stalled the program's development; the school needs someone with expertise in the field to help reboot it. As a result, I have designed a course focused on teaching students the concepts of computational thinking and their application through Scratch and Arduino programming.
ContributorsLiu, Deming (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This project centered on designing a processor model (written in the C programming language) based on the ColdFire computer architecture that runs on third-party software known as Open Virtual Platforms (OVP). The end goal is a fully functional processor model that can run ColdFire instructions and utilize peripheral devices in the same way as the hardware used in the embedded systems lab at ASU. Such a model would cut down the substantial amount of time students spend commuting to the lab, and having the processor directly at their disposal would encourage them to spend more time outside of class learning the hardware and familiarizing themselves with development on an embedded microcontroller. Accuracy, speed, and reliability were pursued through rigorous unit testing and use of the OVP platform, which provides instruction-accurate simulation at hundreds of MIPS (million instructions per second) for the specified model. The end product was able to accurately simulate a subset of the ColdFire instructions at very high rates.
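For intuition about what an instruction-accurate model does, here is a minimal sketch of a fetch-decode-execute loop, written in Python rather than the project's C; the opcodes and register file are invented for illustration and are not real ColdFire encodings.

```python
# Minimal instruction-accurate CPU model sketch.
# NOTE: the opcodes and register file below are illustrative
# inventions, not real ColdFire encodings.

class TinyCPU:
    def __init__(self, program):
        self.memory = list(program)   # flat memory holding the program
        self.regs = [0] * 8           # 8 general-purpose registers
        self.pc = 0                   # program counter
        self.halted = False

    def step(self):
        """Execute exactly one instruction (instruction-accurate)."""
        opcode = self.memory[self.pc]
        if opcode == 0x01:            # LOADI reg, imm
            reg, imm = self.memory[self.pc + 1], self.memory[self.pc + 2]
            self.regs[reg] = imm
            self.pc += 3
        elif opcode == 0x02:          # ADD dst, src  (dst += src)
            dst, src = self.memory[self.pc + 1], self.memory[self.pc + 2]
            self.regs[dst] = (self.regs[dst] + self.regs[src]) & 0xFFFFFFFF
            self.pc += 3
        elif opcode == 0xFF:          # HALT
            self.halted = True
        else:
            raise ValueError(f"illegal opcode {opcode:#x} at {self.pc:#x}")

    def run(self):
        while not self.halted:
            self.step()

# LOADI r0, 2; LOADI r1, 40; ADD r0, r1; HALT  -> r0 == 42
cpu = TinyCPU([0x01, 0, 2, 0x01, 1, 40, 0x02, 0, 1, 0xFF])
cpu.run()
print(cpu.regs[0])  # 42
```

Unit tests for such a model typically assert on register and memory state after each single `step()`, which is what makes instruction-accurate simulation straightforward to verify.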
ContributorsDunning, David Connor (Author) / Burger, Kevin (Thesis director) / Meuth, Ryan (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2014-12
Description
The objective of this research is to determine an approach for automating the learning of the initial lexicon used in translating natural language sentences to their formal knowledge representations based on lambda-calculus expressions. Using a universal knowledge representation and its associated parser, this research attempts to use word alignment techniques to align natural language sentences to the linearized parses of their associated knowledge representations in order to learn the meanings of individual words. The work includes proposing and analyzing an approach that can be used to learn some of the initial lexicon.
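As a rough illustration of the alignment idea, the following toy sketch runs IBM Model 1 style EM over invented sentence/linearized-parse pairs to estimate which parse token each word most likely means; the thesis's actual representation and alignment technique may differ.

```python
from collections import defaultdict

# Toy parallel data: natural-language sentences paired with tokens of
# the linearized parses of their knowledge representations.
# These examples are invented for illustration.
corpus = [
    ("mia sleeps".split(),  "sleep mia".split()),
    ("john sleeps".split(), "sleep john".split()),
    ("mia runs".split(),    "run mia".split()),
]

# IBM Model 1 style EM: estimate t(kr_token | word).
t = defaultdict(lambda: 1.0)           # uniform initialization
for _ in range(10):                    # EM iterations
    counts = defaultdict(float)
    totals = defaultdict(float)
    for words, kr in corpus:
        for k in kr:                   # E-step: fractional alignment counts
            norm = sum(t[(k, w)] for w in words)
            for w in words:
                c = t[(k, w)] / norm
                counts[(k, w)] += c
                totals[w] += c
    for (k, w), c in counts.items():   # M-step: renormalize per word
        t[(k, w)] = c / totals[w]

# Each word's highest-probability KR token seeds the initial lexicon.
for w in ["mia", "john", "sleeps", "runs"]:
    candidates = {k for ws, kr in corpus for k in kr if w in ws}
    best = max(candidates, key=lambda k: t[(k, w)])
    print(w, "->", best, round(t[(best, w)], 2))
```

Even on this tiny corpus, EM separates content words from their co-occurring parse tokens ("sleeps" -> sleep, "mia" -> mia), which is the seed lexicon the approach aims to learn.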
ContributorsBaldwin, Amy Lynn (Author) / Baral, Chitta (Thesis director) / Vo, Nguyen (Committee member) / Industrial, Systems (Contributor) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2015-05
Description
Students learn in various ways — visualization, auditory learning, memorization, or analogy. Traditional lecturing in engineering courses and the learning styles of engineering students are inharmonious, putting students at a disadvantage depending on their learning style (Felder & Silverman, 1988). My study analyzes the traditional approach to learning coding skills, which is unnatural to engineering students with no previous exposure, and examines whether visual learning enhances introductory computer science education. Visual and text-based learning are evaluated to determine how students learn introductory coding skills and the associated problem-solving skills. The study was conducted to observe how the two types of learning aid students in learning how to problem-solve, as well as how much knowledge can be obtained in a short period of time. The application used for visual learning was Scratch, and Repl.it was used for text-based learning. Two exams were created to measure each student's progress, covering initialization, variable reassignment, output, if statements, if-else statements, nested if statements, logical operators, arrays/lists, while loops, type casting, functions, object orientation, and sorting. Analysis of the data collected in the study allows us to observe whether the traditional method of teaching programming or block-based programming is more beneficial, and for which introductory computer science topics.
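For a sense of what the text-based (Repl.it) condition looks like, the fragment below touches several of the exam topics (initialization, reassignment, while loops, if-else chains, logical operators, lists, type casting, functions); it is illustrative and not taken from the actual exams.

```python
# Illustrative fragment covering several exam topics (not the real exam).

def grade(scores):                       # functions
    total = 0                            # initialization
    i = 0
    while i < len(scores):               # while loop over a list
        total = total + scores[i]        # variable reassignment
        i += 1
    avg = total / len(scores)
    if len(scores) > 0 and avg >= 90:    # if statement + logical operator
        return "A"
    elif avg >= 80:                      # if-else chain
        return "B"
    else:
        return str(int(avg))             # type casting

print(grade([95, 92, 88]))               # output: "A"
```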
ContributorsVidaure, Destiny Vanessa (Author) / Meuth, Ryan (Thesis director) / Yang, Yezhou (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Medical records are increasingly being recorded in the form of electronic health records (EHRs), with a significant amount of patient data recorded as unstructured natural language text. Consequently, being able to extract and utilize the clinical data present within these records is an important step in furthering clinical care. One important aspect of these records is the presence of prescription information. Existing techniques for extracting prescription information — which includes medication names, dosages, frequencies, reasons for taking, and modes of administration — from unstructured text have focused on rule- and classifier-based methods. While state-of-the-art systems can be effective in extracting many types of information, they require significant effort to develop hand-crafted rules and to conduct effective feature engineering. This paper presents a bidirectional LSTM with a CRF tagging layer, initialized with precomputed word embeddings, for extracting prescription information from sentences without requiring significant feature engineering. The experimental results on the i2b2 2009 dataset achieve a macro-averaged F1 score of 0.8562, with scores above 0.9449 on four of the six categories, indicating significant potential for this model.
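A compact PyTorch sketch of the model's core: pretrained embeddings feed a bidirectional LSTM whose per-token emission scores would normally be decoded jointly by a CRF layer. In this simplified sketch an argmax stands in for the CRF, and the tag set and dimensions are illustrative, not those of the thesis.

```python
import torch
import torch.nn as nn

# Simplified BiLSTM tagger sketch; a real system would add a CRF layer
# on top of the emission scores (Viterbi decoding at inference time).
# The tag set and dimensions below are illustrative.

TAGS = ["O", "B-MED", "I-MED", "B-DOSE", "I-DOSE"]  # BIO-style tags

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        # In practice, initialize from precomputed word embeddings:
        # self.emb.weight.data.copy_(pretrained_matrix)
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                            batch_first=True)
        self.emissions = nn.Linear(2 * hidden, len(TAGS))

    def forward(self, token_ids):
        x = self.emb(token_ids)          # (batch, seq, emb_dim)
        h, _ = self.lstm(x)              # (batch, seq, 2*hidden)
        return self.emissions(h)         # per-token tag scores

model = BiLSTMTagger(vocab_size=5000)
sentence = torch.randint(0, 5000, (1, 8))     # one 8-token sentence
scores = model(sentence)
pred = scores.argmax(dim=-1)                  # a CRF would decode jointly
print([TAGS[i] for i in pred[0].tolist()])
```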
ContributorsRawal, Samarth Chetan (Author) / Baral, Chitta (Thesis director) / Anwar, Saadat (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Natural Language Processing and Virtual Reality are both hot topics at present. How can we combine the two into a cohesive experience? Our game focuses on users issuing vocal commands, building structures, and memorizing spatial objects. To handle vocal commands, the IBM Watson API for Natural Language Processing was incorporated into the game system. User experience elements such as gestures, UI color changes, and images were used to help guide users in memorizing and building structures, and the creation of these elements was streamlined through the VRTK library in Unity. The game has two segments: a tutorial level in which the user learns to perform motions and in-game actions, and a main game in which the user must correctly create a structure using vocal commands and spatial recognition. A standardized usability test, the System Usability Scale, was used to evaluate the effectiveness of the game, and a custom survey was created to gather more descriptive user opinions. Overall, users gave a positive score on the System Usability Scale and slightly positive reviews in the custom survey.
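Independent of the Watson specifics, the command-handling side of such a game can be sketched as a small matcher that maps an already-transcribed utterance to a game action; the command grammar below is hypothetical, not the game's actual vocabulary.

```python
import re

# Hypothetical command grammar for a voice-driven building game.
# Assumes the transcript has already been returned by the speech/NLP
# service (e.g., IBM Watson); these commands are invented examples.

COMMANDS = {
    r"(place|put)\s+(?:a\s+)?(?P<obj>cube|sphere|pyramid)": "PLACE",
    r"(delete|remove)\s+(?:the\s+)?(?P<obj>cube|sphere|pyramid)": "DELETE",
    r"rotate\s+(?:the\s+)?(?P<obj>cube|sphere|pyramid)": "ROTATE",
}

def parse_command(transcript):
    text = transcript.lower().strip()
    for pattern, action in COMMANDS.items():
        m = re.search(pattern, text)
        if m:
            return action, m.group("obj")
    return None, None              # unrecognized: prompt the user again

print(parse_command("Please place a cube over there"))  # ('PLACE', 'cube')
print(parse_command("sing a song"))                     # (None, None)
```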
ContributorsOrtega, Excel (Co-author) / Ryan, Alexander (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computing and Informatics Program (Contributor) / School of Art (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The software element of home and small-business networking solutions has failed to keep pace with the annual development of newer and faster hardware. The software running on these devices is an afterthought, oftentimes equipped with minimal features, an obtuse user interface, or both. At the same time, this past year has seen the rise of smart home assistants that represent the next step in human-computer interaction with their advanced use of natural language processing. This project seeks to quell the issues with the former by exploring a possible fusion of a powerful, feature-rich software-defined networking stack and the natural language processing tools of smart home assistants. To accomplish these ends, a piece of software was developed to leverage the natural language processing capabilities of one such smart home assistant, the Amazon Echo. On one end, this software interacts with Amazon Web Services to retrieve information about a user's speech patterns and the key information contained in their speech; on the other, it joins that information with its previous session state to intelligently translate speech into a series of commands for the separate components of a networking stack. The software developed for this project empowers a user to quickly make changes to several facets of their networking gear, or acquire information about it, with just their language — no terminals, Java applets, or web configuration interfaces needed — thus circumventing clunky UIs and jumping from shell to shell. It is the author's hope that showing how networking equipment can be configured in this innovative way will draw more attention to the current failings of networking equipment and inspire a new series of intuitive user interfaces.
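The shape of such a bridge can be sketched as a Lambda-style intent handler that combines extracted slots with session state and emits a command for the networking stack. The intent and slot names, and the netctl command line, are invented stand-ins rather than the project's actual schema.

```python
# Hypothetical Alexa-style intent handler bridging speech to a
# software-defined networking stack. Intent/slot names are invented;
# 'netctl' stands in for whatever controls the real networking stack.

def handle_intent(event, session_state):
    intent = event["request"]["intent"]
    name = intent["name"]
    slots = {k: v.get("value") for k, v in intent.get("slots", {}).items()}

    if name == "BlockDeviceIntent":
        # Fall back to session state so "block it again" style follow-ups work.
        device = slots.get("Device") or session_state.get("last_device")
        session_state["last_device"] = device
        command = f"netctl firewall block {device}"
    elif name == "ShowBandwidthIntent":
        command = "netctl stats bandwidth"
    else:
        return "Sorry, I didn't catch that.", session_state

    # In the real system this would be dispatched to the SDN stack;
    # here we just echo the translated command.
    return f"Running: {command}", session_state

state = {}
event = {"request": {"intent": {"name": "BlockDeviceIntent",
                                "slots": {"Device": {"value": "guest-laptop"}}}}}
print(handle_intent(event, state)[0])  # Running: netctl firewall block guest-laptop
```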
ContributorsHermens, Ryan Joseph (Author) / Meuth, Ryan (Thesis director) / Burger, Kevin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
This research lays down foundational work in the semantic reconstruction of linguistic politeness in English-to-Japanese machine translation, and thereby advances semantic-based automated translation of English into other natural languages. I developed a Java project called the PoliteParser, intended as a plug-in to existing semantic parsers, that determines whether verbs in dialogue in an English corpus should be conjugated into the plain or the polite honorific form when translated into Japanese. The PoliteParser bases this decision on semantic information about the social relationship between the speaker and the listener, the speaker's personality, and the circumstances of the utterance. Testing undergone during the course of this research demonstrates that the PoliteParser can achieve accuracy 31 percentage points higher than that of statistical translation systems when integrated with a semantic parser, and 54 percentage points higher when used with pre-parsed data.
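The kind of decision the PoliteParser makes can be illustrated with a small rule-based function over the semantic facts described above (social relationship, personality, circumstances); this Python sketch uses invented features and thresholds, not the thesis's actual Java rules.

```python
# Illustrative plain-vs-polite decision in the spirit of the PoliteParser
# (written in Python here; the actual project is in Java). The features
# and rules below are invented for illustration.

def choose_verb_form(speaker, listener, setting):
    """Return 'polite' (-masu form) or 'plain' (dictionary form)."""
    # Social distance dominates: superiors and strangers get polite forms.
    if listener["status"] > speaker["status"]:
        return "polite"
    if not speaker["knows_listener"]:
        return "polite"
    # Formal circumstances override familiarity.
    if setting == "formal":
        return "polite"
    # Personality can tip borderline cases.
    if speaker["personality"] == "reserved":
        return "polite"
    return "plain"

student = {"status": 1, "knows_listener": True, "personality": "casual"}
teacher = {"status": 3}
friend = {"status": 1}
print(choose_verb_form(student, teacher, "classroom"))  # polite
print(choose_verb_form(student, friend, "casual"))      # plain
```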
ContributorsGuiou, Jared Tyler (Author) / Baral, Chitta (Thesis director) / Tanno, Koji (Committee member) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
This work presents a communication paradigm, using a context-aware mixed reality approach, for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas for communicating task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm precisely determines the pose and state of physical objects in and around the workspace, and a projection mapping technique overlays visual cues on the tracked objects and the workspace. Simultaneous tracking and projection onto objects enables the system to provide just-in-time instructions for carrying out a procedural task. Additionally, the system can inform and warn humans about the intentions of the robot and the safety of the workspace. It was hypothesized that using this system for a human-robot collaborative task would improve the overall performance of the team and provide a positive experience to the human partner. To test this hypothesis, an experiment involving human subjects was conducted, and the performance (both objective and subjective) of the presented system was compared with a conventional method based on printed instructions. It was found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
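One building block of such a system, mapping a tracked object's position on the workspace plane to the projector pixel where a cue should be drawn, can be sketched with a planar homography; the calibration correspondences below are invented, and the full tracking-and-projection pipeline is richer than this.

```python
import numpy as np
import cv2

# Sketch of one projection-mapping building block: a homography that
# maps workspace-plane coordinates (e.g., from the object tracker) into
# projector pixel coordinates. The correspondences are invented, as if
# from a one-time calibration with a projected pattern.

workspace_pts = np.array([[0, 0], [600, 0], [600, 400], [0, 400]],
                         dtype=np.float32)          # mm on the table
projector_pts = np.array([[102, 85], [1180, 92], [1174, 712], [95, 705]],
                         dtype=np.float32)          # projector pixels

H, _ = cv2.findHomography(workspace_pts, projector_pts)

# Where should the projector draw a cue for an object tracked at
# (250 mm, 120 mm) on the workspace plane?
obj = np.array([[[250.0, 120.0]]], dtype=np.float32)
cue_px = cv2.perspectiveTransform(obj, H)
print(cue_px[0, 0])   # projector pixel at which to render the visual cue
```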
ContributorsKalpagam Ganesan, Ramsundar (Author) / Ben Amor, Hani (Thesis advisor) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2017
Description
Light field imaging is limited in its computational processing demands of high sampling for both spatial and angular dimensions. Single-shot light field cameras sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing incoming rays onto a 2D sensor array. While this resolution can be recovered using compressive sensing, these iterative solutions are slow in processing a light field. We present a deep learning approach using a new, two-branch network architecture, consisting jointly of an autoencoder and a 4D CNN, to recover a high-resolution 4D light field from a single coded 2D image. This network decreases reconstruction time significantly while achieving average PSNR values of 26-32 dB on a variety of light fields. In particular, reconstruction time is decreased from 35 minutes to 6.7 minutes as compared to the dictionary method for equivalent visual quality. These reconstructions are performed at small sampling/compression ratios as low as 8%, allowing for cheaper coded light field cameras. We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured from a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera. The combination of compressive light field capture with deep learning allows the potential for real-time light field video acquisition systems in the future.
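A schematic PyTorch sketch of the two-branch idea, with caveats: PyTorch has no native 4D convolution, so this sketch folds one angular axis into the channel dimension and uses Conv3d as a stand-in, and all layer sizes are illustrative rather than the thesis's actual architecture.

```python
import torch
import torch.nn as nn

# Schematic two-branch reconstruction network: an autoencoder branch and
# a "4D" convolutional branch whose outputs are fused residually. Shapes
# are illustrative; one angular axis is folded into channels so Conv3d
# can stand in for a true 4D convolution.

U, V, H, W = 5, 5, 64, 64          # angular (U,V) and spatial (H,W) sizes

class TwoBranchLF(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: autoencoder from the coded 2D image to a light field.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(128, U * V, 3, padding=1)
        # Branch 2: refine with 3D convs over (V, H, W), with U folded
        # into channels -- a stand-in for true 4D convolution.
        self.refine = nn.Sequential(
            nn.Conv3d(U, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, U, 3, padding=1))

    def forward(self, coded):                       # (B, 1, H, W)
        lf = self.decoder(self.encoder(coded))      # (B, U*V, H, W)
        lf4d = lf.view(-1, U, V, H, W)              # (B, U, V, H, W)
        return lf4d + self.refine(lf4d)             # residual refinement

net = TwoBranchLF()
coded = torch.randn(2, 1, H, W)                     # batch of coded images
out = net(coded)
print(out.shape)                                    # torch.Size([2, 5, 5, 64, 64])
```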
ContributorsGupta, Mayank (Author) / Turaga, Pavan (Thesis advisor) / Yang, Yezhou (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2017