Matching Items (20)
Description
This thesis describes a synthetic task environment, CyberCog, created for the purposes of (1) understanding and measuring individual and team situation awareness in the context of a cyber security defense task and (2) providing a context for evaluating algorithms, visualizations, and other interventions that are intended to improve cyber situation awareness. CyberCog provides an interactive environment for conducting human-in-the-loop experiments in which participants perform the tasks of a cyber security defense analyst in response to a cyber-attack scenario. CyberCog generates the performance measures and interaction logs needed for measuring individual and team cyber situation awareness. Moreover, the CyberCog environment provides good experimental control for conducting effective situation awareness studies while retaining realism in the scenario and in the tasks performed.
ContributorsRajivan, Prashanth (Author) / Femiani, John (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Lindquist, Timothy (Committee member) / Gary, Kevin (Committee member) / Arizona State University (Publisher)
Created2011
Description
Night vision goggles (NVGs) are widely used by helicopter pilots for flight missions at night, but the equipment can present visually confusing images, especially in urban areas. A simulation tool with realistic nighttime urban images would help pilots practice and train for flight with NVGs. However, there is a lack of tools for visualizing urban areas at night. This is mainly due to difficulties in gathering the light system data, placing the light systems at suitable locations, and rendering millions of lights with complex light intensity distributions (LID). Unlike daytime images, a city can have millions of light sources at night, including street lights, illuminated signs, and light shed from building interiors through windows. In this paper, a Procedural Lighting tool (PL), which predicts the positions and properties of street lights, is presented. The PL tool is used to accomplish three aims: (1) to generate vector data layers for geographic information systems (GIS) with statistically estimated information on lighting designs for streets, as well as the locations, orientations, and models for millions of streetlights; (2) to generate geo-referenced raster data suitable for use as light maps that cover a large-scale urban area, so that the effect of millions of street lights can be accurately rendered in real time; and (3) to extend existing 3D models by generating detailed light maps that can be used as UV-mapped textures to render the model. An interactive graphical user interface (GUI) for configuring and previewing lights from a Light System Database (LDB) is also presented. The GUI includes physically accurate information about LID as well as the lights' spectral power distributions (SPDs), so that a light map can be generated for use with any sensor if the sensor's luminosity function is known. Finally, for areas where more detail is required, a tool has been developed for editing and visualizing light effects over a 3D building from many light sources, including area lights and windows. The above components are integrated in the PL tool to produce a nighttime urban view not only of a large-scale area but also of the details of a city building.
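The sensor-weighting step mentioned in this abstract lends itself to a short worked example: a light's luminance is its spectral power distribution integrated against the sensor's luminosity function. The sketch below is a minimal illustration in Python, assuming the standard photopic peak efficacy of 683 lm/W; the Gaussian stand-in for V(lambda) and the flat example SPD are illustrative assumptions, not data from the thesis.

```python
# A minimal sketch of weighting an SPD by a luminosity function.
# L = K_m * sum( SPD(lambda) * V(lambda) ) * d_lambda  (Riemann sum)
import numpy as np

wavelengths = np.arange(380.0, 781.0, 5.0)  # visible band, nm

def luminance(spd, v_lambda, d_lambda=5.0, k_m=683.0):
    # k_m = 683 lm/W is the standard photopic peak luminous efficacy.
    return k_m * float(np.sum(spd * v_lambda)) * d_lambda

# Stand-in photopic response peaking at 555 nm; the real V(lambda) is a
# tabulated CIE curve, used here only for illustration.
V = np.exp(-0.5 * ((wavelengths - 555.0) / 45.0) ** 2)
spd_example = np.ones_like(wavelengths)  # flat SPD in arbitrary units

print(luminance(spd_example, V))
```

Swapping in a different sensor's luminosity function changes only `V`, which is the point the abstract makes about generating light maps for arbitrary sensors.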
ContributorsChuang, Chia-Yuan (Author) / Femiani, John (Thesis advisor) / Razdan, Anshuman (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created2011
Description
Despite the various driver assistance systems and electronics, the threat to the lives of drivers, passengers, and other people on the road still persists. With the growth in technology, the use of in-vehicle devices with a plethora of buttons and features is increasing, resulting in increased distraction. Recently, speech recognition has emerged as an alternative intended to reduce distraction, and it has the potential to be beneficial. However, considering that the automotive environment is dynamic and noisy in nature, distraction may arise not from manual interaction but from cognitive load. Hence, speech recognition alone cannot be a reliable mode of communication.

The thesis is focused on proposing a simultaneous multimodal approach to designing the interface between driver and vehicle, with the goal of enabling the driver to be more attentive to driving tasks and to spend less time fiddling with distracting tasks. By analyzing human-human multimodal interaction techniques, new modes especially suitable for the automotive context have been identified and tested. The identified modes are touch, speech, graphics, voice-tip, and text-tip. The multiple modes are intended to work collectively to make the interaction more intuitive and natural. In order to obtain a minimalist, user-centered design for the center stack, various design principles such as the 80/20 rule, contour bias, affordance, and the flexibility-usability trade-off have been applied to the prototypes. The prototype was developed using the Dragon software development kit on the Android platform for speech recognition.

In the present study, driver behavior was investigated in an experiment conducted on the DriveSafety driving simulator DS-600s. Twelve volunteers drove the simulator under two conditions: (1) accessing the center stack applications using touch only and (2) accessing the applications using speech with an offered text-tip. The duration for which the user looked away from the road (eyes-off-road) was measured manually for each scenario. Comparison of the results showed that eyes-off-road time is lower for the second scenario. The minimalist design with 8-10 icons per screen proved to be effective, as all the readings were within the driver distraction recommendations (eyes-off-road time < 2 sec per screen) defined by NHTSA.
ContributorsMittal, Richa (Author) / Gaffar, Ashraf (Thesis advisor) / Femiani, John (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2015
Description
Capturing the information in an image in a natural language sentence is considered a difficult problem for computers to solve. Image captioning involves not just detecting objects in images but understanding the interactions between the objects so that they can be translated into relevant captions. Thus, expertise in the fields of computer vision and natural language processing is crucial for this purpose. The sequence-to-sequence modeling strategy of deep neural networks is the traditional approach for generating a sequential list of words that are combined to represent the image. However, these models suffer from high variance, failing to generalize well beyond the training data.

The main focus of this thesis is to reduce the variance factor, which will help in generating better captions. To achieve this, ensemble learning techniques, which have a reputation for solving the high-variance problem that occurs in machine learning algorithms, have been explored. Three different ensemble techniques, namely k-fold ensemble, bootstrap aggregation ensemble, and boosting ensemble, have been evaluated in this thesis. For each of these techniques, three output combination approaches have been analyzed. Extensive experiments have been conducted on the Flickr8k dataset, which has a collection of 8000 images and 5 different captions for every image. The BLEU score, a standard performance metric for evaluating natural language processing (NLP) problems, is used to evaluate the predictions. Based on this metric, the analysis shows that ensemble learning performs significantly better and generates more meaningful captions compared to any of the individual models used.
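As a concrete illustration of one possible output-combination approach, the sketch below averages the next-word probability distributions of several independently trained caption models at each greedy decoding step and scores the result with BLEU. The `predict_next` interface is a hypothetical stand-in, and distribution averaging is only one plausible realization of the combination approaches the thesis evaluates.

```python
# A minimal sketch, assuming each ensemble member exposes a hypothetical
# predict_next(image, prefix) method returning a probability distribution
# over the vocabulary; averaging those distributions is one plausible
# output-combination strategy.
import numpy as np
from nltk.translate.bleu_score import sentence_bleu

def ensemble_caption(models, image, vocab, max_len=20, end="</s>"):
    caption = ["<s>"]
    for _ in range(max_len):
        # Average the members' next-word distributions, then pick greedily.
        probs = np.mean([m.predict_next(image, caption) for m in models], axis=0)
        word = vocab[int(np.argmax(probs))]
        if word == end:
            break
        caption.append(word)
    return caption[1:]  # drop the start token

def bleu(references, hypothesis):
    # Flickr8k provides 5 reference captions per image.
    return sentence_bleu([r.split() for r in references], hypothesis)
```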
ContributorsKatpally, Harshitha (Author) / Bansal, Ajay (Thesis advisor) / Acuna, Ruben (Committee member) / Gonzalez-Sanchez, Javier (Committee member) / Arizona State University (Publisher)
Created2019
Description
The functional programming paradigm is able to provide clean and concise solutions to many common programming problems, as well as promote safer, more testable code by encouraging the isolation of state-modifying behavior. Functional programming is finding its way into traditionally object-oriented and imperative languages, most notably with the features introduced in Java 8 and with LINQ for C#. However, no functional programming language has achieved widespread adoption, meaning that students without a formal computer science background who learn technology on demand for personal projects or for business may not come across functional programming in a significant way. Programmers need a reason to spend time learning these concepts so as not to miss out on the subtle but profound benefits they provide. I propose the use of a video game as an environment in which learning functional programming is the player's goal. In this carefully constructed video game, learning functional programming is the key to progression. Players will be motivated to learn and will be given an immediate chance to test and demonstrate their understanding. The game, named Lambda Starship (stylized as (lambda () starship)), is a 3D first-person video game. It takes place in a spaceship that, due to extreme magnetic interference, has lost all on-board software while leaving the hardware completely intact. The player is tasked with writing software using functional programming paradigms to replace the old software and bring the spaceship back to a working state. Throughout the process, the player is guided by an in-game manual and other descriptive resources. The game is implemented in Unity and scripted using C#. The game's educational and entertainment value was evaluated with a case study. 24 undergraduate students at Arizona State University (ASU) played the game and were surveyed in detail about their experience. During play, user statistics were recorded automatically, providing a data-driven way to analyze where players struggled with the concepts introduced in the game. Reception was neutral or positive on both the entertainment and educational sides of the game. A few players expressed concerns about the manual's form factor and engagement value.
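For readers unfamiliar with the paradigm the game teaches, the sketch below (in Python, for consistency with the other examples here, rather than the game's C#) shows the style in question: pure functions composed with map, filter, and reduce, with no mutation of shared state. The spaceship-repair framing merely echoes the game's setting and is not taken from its actual puzzle code.

```python
# A minimal sketch of the functional style the game teaches: each step is a
# pure function returning a new value, so no shared state is modified.
from functools import reduce

# Hypothetical ship systems as (name, health) pairs; the data is illustrative.
systems = [("engines", 0.2), ("shields", 0.8), ("comms", 0.5)]

damaged = list(filter(lambda s: s[1] < 0.6, systems))            # select
repaired = list(map(lambda s: (s[0], 1.0), damaged))             # transform
total_health = reduce(lambda acc, s: acc + s[1], repaired, 0.0)  # aggregate

print(damaged)       # [('engines', 0.2), ('comms', 0.5)]
print(repaired)      # [('engines', 1.0), ('comms', 1.0)]
print(total_health)  # 2.0
```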
ContributorsCompton, Tyler Alexander (Author) / Gonzalez-Sanchez, Javier (Thesis director) / Bansal, Srividya (Committee member) / Software Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Brains and computers have been interacting since the invention of the computer. These two entities have worked together to accomplish a monumental set of goals, from landing man on the moon to helping to understand how the universe works on the most microscopic levels, and everything in between. As the years have gone on, the extent and depth of interaction between brains and computers have consistently widened, to the point where computers help brains with their thinking in virtually infinite everyday situations around the world. The first purpose of this research project was to conduct a brief review to gain a sound understanding of how both brains and computers operate at fundamental levels, and of what it is about these two entities that allows them to work ever more seamlessly as the years go on. Next, a history of interaction between brains and computers was developed, which expanded upon the first task and helped to contribute to visions of future brain-computer interaction (BCI). The subsequent and primary task of this research project was to develop a theoretical framework for a potential brain-aiding device of the future. This was done by conducting an extensive literature review of the most advanced BCI technology in modern times and expanding upon the findings to argue the feasibility of the future device and its components. Next, social predictions regarding the acceptance and use of the new technology were made by designing and executing a survey based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Finally, general economic predictions were inferred by examining several relationships between money and computers over time.
ContributorsThum, Giuseppe Edwardo (Author) / Gaffar, Ashraf (Thesis director) / Gonzalez-Sanchez, Javier (Committee member) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
Lecture videos are a widely used resource for learning. A simple way to create videos is to record live lectures, but these videos end up being lengthy and include long pauses and repetitive words, making the viewing experience time consuming. While pauses are useful in live learning environments where students take notes, I question the value of pauses in video lectures. Techniques and algorithms that can shorten such videos can have a huge impact in saving students' time and reducing storage space. I study this problem of shortening videos by removing long pauses and adaptively modifying the playback rate to emphasize the most important sections of the video, and I examine its effect on the student community. The playback rate is designed so that uneventful sections play faster and significant sections play slower. Important and unimportant sections of a video are identified using textual analysis. I use an existing speech-to-text algorithm to extract the transcript and apply latent semantic analysis and standard information retrieval techniques to identify the relevant segments of the video. I compute relevance scores for the different segments and propose a variable playback rate for each of them. The aim is to reduce the amount of time students spend on passive learning while watching videos without harming their ability to follow the lecture. I validate the approach by conducting a user study among computer science students and measuring their engagement. The results indicate no significant difference in engagement when this method is compared to the original unedited video.
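The relevance-scoring step described above can be sketched concretely: transcript segments are vectorized with TF-IDF, projected with truncated SVD (a standard way to carry out latent semantic analysis), and the resulting relevance scores are mapped linearly to playback rates. The rate bounds and the mapping below are illustrative assumptions, not the thesis's actual parameters.

```python
# A minimal sketch, assuming the transcript is available as a list of
# segment strings. High-relevance segments play near min_rate (slower);
# low-relevance segments play near max_rate (faster).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def playback_rates(segments, min_rate=1.0, max_rate=2.0, topics=20):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(segments)
    k = min(topics, tfidf.shape[1] - 1)
    lsa = TruncatedSVD(n_components=k).fit_transform(tfidf)  # latent topics
    relevance = np.linalg.norm(lsa, axis=1)  # loading on the main topics
    span = relevance.max() - relevance.min()
    relevance = (relevance - relevance.min()) / (span if span else 1.0)
    # Linear map: relevance 1.0 -> min_rate, relevance 0.0 -> max_rate.
    return max_rate - relevance * (max_rate - min_rate)
```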
ContributorsPurushothama Shenoy, Sreenivas (Author) / Amresh, Ashish (Thesis advisor) / Femiani, John (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created2016
Description
Large datasets of sub-meter aerial imagery represented as orthophoto mosaics are widely available today, and these datasets may hold a great deal of untapped information. This imagery has the potential to locate several types of features, for example, forests, parking lots, airports, residential areas, or freeways. However, the appearance of these features varies based on many factors, including the time at which the image is captured, the sensor settings, the processing done to rectify the image, and the geographical and cultural context of the region captured by the image. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible-band multispectral imagery. Recent technological and commercial applications have driven the collection of a massive amount of VHR images in the visible red, green, blue (RGB) spectral bands; this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use/land cover (LULC) classification. The benefits of automatic visible-band VHR LULC classification may include applications such as automatic change detection or mapping. Recent work has shown the potential of deep learning approaches for land use classification; however, this thesis improves on the state of the art by applying additional dataset-augmenting approaches that are well suited for geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and the accuracy levels of the classifier are presented to show that the results actually generalize beyond the small benchmarks used in training. Deep networks have many parameters, and therefore they are often built with very large sets of labeled data. Suitably large datasets for LULC are not easy to come by, but techniques such as refinement learning allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition on one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers without requiring massive training datasets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a CNN (convolutional neural network) and 5-fold cross-validation. These results are further tested on unrelated VHR images at the same resolution as the training set.
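The transfer-learning recipe described above follows a well-known pattern: reuse an ImageNet-pretrained backbone, replace the classifier head, and cross-validate. The sketch below is a minimal version in Keras, which is an assumption; the thesis does not name its framework, and the backbone choice, epochs, and omitted preprocessing are illustrative, not the thesis's actual configuration.

```python
# A minimal transfer-learning sketch. Images are assumed to be loaded
# elsewhere as arrays X of shape (N, 224, 224, 3) and labels y of shape
# (N,); input preprocessing/augmentation is omitted for brevity.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

NUM_CLASSES = 21  # UC Merced contains 21 land-use classes

def build_model():
    # Start from ImageNet weights and replace the classifier head.
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # refinement learning: reuse the learned features
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(X, y, folds=5):
    accs = []
    for train_idx, test_idx in KFold(folds, shuffle=True, random_state=0).split(X):
        model = build_model()
        model.fit(X[train_idx], y[train_idx], epochs=5, batch_size=32, verbose=0)
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        accs.append(acc)
    return float(np.mean(accs))  # the thesis reports ~96% mean accuracy
```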
ContributorsUba, Nagesh Kumar (Author) / Femiani, John (Thesis advisor) / Razdan, Anshuman (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created2016
Description
A lot of research in the field of social robotics concentrates on various aspects of social robots, including the design of mechanical parts and their movement, and cognitive speech and face recognition capabilities. Several robots have been developed with the intention of being social, like humans, without much emphasis on how human-like they actually look in terms of expressions and behavior. Furthermore, a substantial disparity can be seen between the success of research involving "humanizing" robots' behavior, or making them behave more human-like, and research into biped movement, movement of individual body parts like arms, fingers, and eyeballs, or human-like appearance itself. The research in this paper involves understanding why research on the facial expressions of social humanoid robots fails, in that it is not accepted completely by current society, owing to the uncanny valley theory. This paper frames the problem with current facial expression research as an information retrieval problem. It identifies the current research method in the design of facial expressions of social robots, then uses deep learning as a similarity evaluation technique to measure the humanness of the facial expressions developed with the current technique, and further suggests a novel solution to the facial expression design of humanoids using deep learning.
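The similarity-evaluation idea can be made concrete with a small sketch: embeddings of robot and human facial-expression images, assumed here to come from some pretrained CNN (the feature-extraction step is omitted), are compared with cosine similarity as a rough proxy for humanness. The interface and scoring rule are assumptions for illustration, not the thesis's actual method.

```python
# A minimal sketch, assuming facial-expression images have already been
# encoded as fixed-length CNN feature vectors. Higher mean similarity to
# human expressions is read here as a rough proxy for "humanness".
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def humanness(robot_vec, human_vecs):
    # Mean similarity of one robot expression to a bank of human expressions.
    return float(np.mean([cosine(robot_vec, h) for h in human_vecs]))

# Illustrative random embeddings only; real inputs would be CNN features.
rng = np.random.default_rng(0)
robot = rng.normal(size=512)
humans = rng.normal(size=(100, 512))
print(humanness(robot, humans))
```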
ContributorsMurthy, Shweta (Author) / Gaffar, Ashraf (Thesis advisor) / Ghazarian, Arbi (Committee member) / Gonzalez-Sanchez, Javier (Committee member) / Arizona State University (Publisher)
Created2017
Description
In today's data-driven world, every datum is connected to a large amount of data. Relational databases have proven themselves pioneers in the field of data storage and manipulation since the 1970s. More recently, however, they have been challenged by NoSQL graph databases in handling data models that have an inherent graphical representation. Graph databases, with their ability to store physical relationships between nodes and their native graph processing techniques, have been doing exceptionally well in graph data storage and management for applications such as recommendation engines, biological modeling, network modeling, and social media applications.

Instructional Module Development System (IMODS) is a web-based software system that guides STEM instructors through the complex task of curriculum design, ensures tight alignment between the various components of a course (i.e., learning objectives, content, assessments), and provides relevant information about research-based pedagogical and assessment strategies. The data model of IMODS is highly connected and has an inherent graphical representation among all its entities, with numerous relationships between them. This thesis focuses on developing an algorithm to determine the completeness of a course design developed using IMODS. As part of this research objective, the study also analyzes the data model to find the best-fit database for running these algorithms. As part of this thesis, two separate applications abstracting the data model of IMODS have been developed: one with Neo4j (a graph database) and another with PostgreSQL (a relational database). The research objectives of the thesis are as follows: (i) evaluate the performance of Neo4j and PostgreSQL in handling the complex queries that will be fired throughout the life cycle of the course design process; (ii) devise an algorithm to determine the completeness of a course design developed using IMODS. This thesis presents the process of creating the data model for PostgreSQL and converting it into a graph data model to be abstracted by Neo4j, creating SQL and Cypher scripts for undertaking experiments on both platforms, and testing, elaborate analysis of the results, and evaluation of the databases in the context of IMODS.
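To make the two-platform comparison concrete, the sketch below times one completeness-style query, "find courses with an objective that has no assessment," expressed as SQL for PostgreSQL and as Cypher for Neo4j, using their standard Python drivers. The table, label, and relationship names are hypothetical stand-ins, not the actual IMODS schema.

```python
# A minimal sketch, assuming a hypothetical schema in which courses have
# objectives and objectives have assessments; all names are illustrative.
import time
import psycopg2
from neo4j import GraphDatabase

SQL = """
SELECT DISTINCT o.course_id
FROM objective o
LEFT JOIN assessment a ON a.objective_id = o.id
WHERE a.id IS NULL;
"""

CYPHER = """
MATCH (c:Course)-[:HAS_OBJECTIVE]->(o:Objective)
WHERE NOT (o)<-[:ASSESSES]-(:Assessment)
RETURN DISTINCT c.id
"""

def time_postgres(dsn):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        start = time.perf_counter()
        cur.execute(SQL)
        rows = cur.fetchall()
    return rows, time.perf_counter() - start

def time_neo4j(uri, user, password):
    driver = GraphDatabase.driver(uri, auth=(user, password))
    with driver.session() as session:
        start = time.perf_counter()
        rows = session.run(CYPHER).data()
    driver.close()
    return rows, time.perf_counter() - start
```

Note the structural difference the thesis exploits: the relational query expresses the missing relationship via a LEFT JOIN and NULL check, while Cypher states the absent pattern directly.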
ContributorsSaha, Abir Lal (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Gonzalez-Sanchez, Javier (Committee member) / Arizona State University (Publisher)
Created2017