Matching Items (75)
Description

A smart home system (SHS) is an information system aimed at realizing home automation. An SHS can connect to almost any kind of electronic or electrical device used in a home so that the devices can be controlled and monitored centrally. Today's technology also allows homeowners to control and monitor the SHS installed in their homes remotely, which is typically realized by giving the SHS network access. Although the SHS's network access brings many conveniences to homeowners, it also exposes the SHS to more security threats than ever before. As a result, when designing an SHS, the security threats it might face should be given careful consideration. Security threats can be addressed properly only by understanding them and identifying which parts of the system must be protected against them first. This leads to the idea of addressing the security threats an SHS might face at the requirements engineering level. Following this idea, this paper proposes a systematic approach to generating security requirements specifications for the SHS. It can be viewed as the first step toward a complete SHS security requirements engineering process.
Contributors: Xu, Rongcao (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Ajay (Committee member) / Lindquist, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

As technology enhances our communication capabilities, the number of distributed teams has risen in both the public and private sectors. There is no doubt that these technological advancements have addressed a need for communication and collaboration among distributed teams. However, is all technology useful for effective collaboration? Are some methods (modalities) of communication more conducive than others to effective performance and collaboration in distributed teams? Although previous literature identifies some differences between modalities, there is little research on geographically distributed mobile teams (DMTs) performing a collaborative task. To investigate communication and performance in this context, I developed the GeoCog system. This system is a mobile communications and collaboration platform enabling small, distributed teams of three to participate in a variant of the military-inspired game "Capture the Flag". Within the task, teams were given one hour to complete as many "captures" as possible while utilizing resources to the advantage of the team. In this experiment, I manipulated the modality of communication across three conditions: text-based messaging only, vocal communication only, and a combination of the two. It was hypothesized that bi-modal communication would yield superior performance compared to either single-modality condition. Results indicated that performance was not affected by modality. Further results, including a communication analysis, are discussed within this paper.
Contributors: Champion, Michael (Author) / Cooke, Nancy J. (Thesis advisor) / Shope, Steven (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst-Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control-flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify the path sections that contribute most to the cost excess. A hybrid approach for determining cost excesses is also presented that relies mostly on dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
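As an illustration only (not code from the thesis), a worst-case monetary cost over an acyclic control-flow graph can be computed by maximizing accumulated cost along paths and then listing paths that exceed a budget; the block names and per-block dollar figures below are hypothetical:

```python
# Hypothetical sketch: worst-case monetary cost over an acyclic control-flow graph.
# Each basic block is annotated with an assumed dollar cost for the cloud resources
# it consumes (compute time, storage I/O, outbound bandwidth, ...).
from functools import lru_cache

cfg = {                                   # made-up control-flow graph (edges)
    "entry": ["validate"],
    "validate": ["query_db", "reject"],
    "query_db": ["render"],
    "reject": ["render"],
    "render": [],
}
block_cost = {"entry": 0.0001, "validate": 0.0002, "query_db": 0.0040,
              "reject": 0.0001, "render": 0.0010}

@lru_cache(maxsize=None)
def worst_case_cost(block: str) -> float:
    """Maximum cost of any path from `block` to an exit of the acyclic CFG."""
    tail = max((worst_case_cost(s) for s in cfg[block]), default=0.0)
    return block_cost[block] + tail

def paths_over_threshold(block, threshold, prefix=(), spent=0.0):
    """Yield complete paths whose accumulated cost exceeds `threshold`."""
    spent += block_cost[block]
    prefix += (block,)
    if not cfg[block]:
        if spent > threshold:
            yield prefix, spent
        return
    for s in cfg[block]:
        yield from paths_over_threshold(s, threshold, prefix, spent)

if __name__ == "__main__":
    print(worst_case_cost("entry"))                    # worst-case cost from entry
    print(list(paths_over_threshold("entry", 0.004)))  # paths exceeding the budget
```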
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Research on priming has shown that exposure to the concept of fast food can affect human behavior by inducing haste and impatience (Zhong & DeVoe, 2010). This research suggests that thinking about fast food makes individuals impatient and strengthens their desire to complete tasks such as reading and decision making as quickly and efficiently as possible. Two experiments were conducted in which the effects of fast food priming were examined using a driving simulator. The experiments examined whether fast food primes can induce impatient driving. In Experiment 1, 30 adult drivers drove a course in a driving simulator after being exposed to images by rating the aesthetics of four different logos. Experiment 1 did not yield faster driving speeds or impatient, faster braking at the yellow light in the fast food logo prime condition. In Experiment 2, 30 adult drivers drove the same course from Experiment 1. Participants did not rate logos on their aesthetics prior to the drive; instead, billboards displaying either fast food or diner logos were included in the simulation. Experiment 2 did not yield faster driving speeds; however, there was a significant effect of faster braking and a higher number of participants running the yellow light.
Contributors: Taggart, Mistey L. (Author) / Branaghan, Russell (Thesis advisor) / Cooke, Nancy J. (Committee member) / Song, Hyunjin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The Internet is transforming its look; in a short span of time we have come very far from black-and-white web forms with plain buttons to responsive, colorful, and appealing user interface elements. With the sudden rise in demand for web applications, developers are making full use of the power of HTML5, JavaScript, and CSS3 to cater to their users on various platforms. There was never a need to classify the ways in which these languages can be interconnected, as the front-end code base was relatively small and did not involve critical business logic. This thesis focuses on listing and defining all dependencies between HTML5, JavaScript, and CSS3 that will help developers better understand the interconnections within these languages. We also explore the techniques presently available to developers to make their code free of dependency-related defects. We build a prototype tool, HJCDepend, based on our model, which aims at helping developers discover and remove such defects early in the development cycle.
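As a purely illustrative sketch (not part of HJCDepend), one common cross-language dependency is an HTML element id referenced from both CSS selectors and JavaScript; a checker might flag ids referenced in a stylesheet or script that no longer exist in the markup. The file contents and regular expressions below are hypothetical simplifications:

```python
# Hypothetical sketch: detect one kind of HTML/CSS/JS dependency defect -- element
# ids referenced from CSS selectors or document.getElementById calls that are
# missing from the markup. Real tools parse the languages; regexes keep this short.
import re

html = '<form><input id="email"><button id="submit-btn">Send</button></form>'
css  = '#email { border: 1px solid gray; } #missing-box { color: red; }'
js   = ('document.getElementById("submit-btn").disabled = true;'
        'document.getElementById("status").textContent = "sent";')

defined_ids = set(re.findall(r'id="([^"]+)"', html))
css_refs    = set(re.findall(r'#([\w-]+)', css))
js_refs     = set(re.findall(r'getElementById\("([^"]+)"\)', js))

for ref in sorted((css_refs | js_refs) - defined_ids):
    origin = "CSS" if ref in css_refs else "JS"
    print(f"dependency defect: {origin} references id '{ref}' not present in HTML")
```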
Contributors: Vasugupta (Author) / Gary, Kevin (Thesis advisor) / Lindquist, Timothy (Committee member) / Bansal, Ajay (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

When discussing human factors and performance, researchers recognize stress as a factor but overlook mood as a contributing factor. To explore the relationship between mood, stress, and cognitive performance, a field study was conducted involving firefighters engaged in a fire response simulation. Firefighter participants completed a stress questionnaire, an emotional state questionnaire, and a cognitive task. Stress and cognitive task performance scores were examined before and after the firefighting simulation for individual cognitive performance degradation caused by stress or mood. The study revealed that existing stress was a reliable predictor of the pre-simulation cognitive task score; that, as mood becomes more positive, perceived stress scores decrease; and that negative mood and pre-simulation stress are positively and significantly correlated.
Contributors: Gomez-Herbert, Maria Elena (Author) / Cooke, Nancy J. (Thesis advisor) / Becker, Vaughn (Committee member) / Branaghan, Russell (Committee member) / Song, Hyunjin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Preoperative team briefings have been suggested to be important for improving team performance in the operating room. Many high-risk environments have adopted team briefings; however, healthcare has been slower to follow. While applying briefings in the operating room has shown positive benefits, including improved communication and perceptions of teamwork, most research has focused only on the feasibility of implementation and not on understanding how the quality of briefings can impact subsequent surgical procedures. Thus, no formal protocols or methodologies have been developed.

The goal of this study was to relate specific characteristics of team briefings back to objective measures of team performance. The study employed cognitive interviews, prospective observations, and principal component regression to characterize and model the relationship between team briefing characteristics and non-routine events (NREs) in gynecological surgery. Interviews were conducted with 13 team members representing each role on the surgical team, and data were collected for 24 pre-operative team briefings and 45 subsequent surgical cases. The findings revealed that variations within the team briefing are associated with differences in team-related outcomes, namely NREs, during the subsequent surgical procedures. Synthesis of the data highlighted three important trends: the need to promote team communication during the briefing, the importance of attendance by all surgical team members, and the value of holding a briefing prior to each surgical procedure. These findings have implications for the development of formal briefing protocols.

Pre-operative team briefings are beneficial for team performance in the operating room. Future research will be needed to continue examining the relationship between how briefings are conducted and team performance, both to establish more consistent approaches and to support the continuing assessment of team briefings and other similar team-related events in the operating room.
Contributors: Hildebrand, Emily A. (Author) / Branaghan, Russell J. (Thesis advisor) / Cooke, Nancy J. (Committee member) / Hallbeck, M. Susan (Committee member) / Bekki, Jennifer M. (Committee member) / Blocker, Renaldo C. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

In this research work, a novel control system strategy for the robust control of an unmanned ground vehicle is proposed. This strategy is motivated by efforts to mitigate the problem for scenarios in which the human operator is unable to properly communicate with the vehicle. The strategy consists of three major components: (i) two independent intelligent controllers, (ii) an intelligent navigation system, and (iii) an intelligent controller tuning unit. The inner workings of the first two components are based on the Brain Emotional Learning (BEL) model, a mathematical model of the amygdala-orbitofrontal system, a region of the mammalian brain known to be responsible for emotional learning. Simulation results demonstrated the implementation of the BEL model to be very robust, efficient, and adaptable to dynamic changes in its application as a controller and as a sensor fusion filter for an unmanned ground vehicle. These results were obtained with significantly less computational cost than traditional methods for control and sensor fusion. For the intelligent controller tuning unit, the implementation of a human emotion recognition system was investigated and utilized for the classification of driving behavior. Results from experiments showed that the affective states of the driver are accurately captured; however, the driver's affective state is not a good indicator of driving behavior. As a result, an alternative method for classifying driving behavior from the driver's brain activity was explored. This method proved successful at classifying the driver's behavior, obtaining results comparable to the common approach based on vehicle parameters. This alternative approach has the advantage of classifying driving behavior directly from the driver, which is of particular use in the UGV domain because the operator's information is readily available. The classified driving mode was used to tune the controllers' performance to a desired mode of operation. Such qualities are required for a contingency control system that would allow the vehicle to operate with no operator inputs.
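For orientation only: the abstract does not give the controller equations, but one commonly cited formulation of the BEL model in the literature updates amygdala and orbitofrontal weights from a reward signal roughly as in the sketch below; the learning gains, signal names, and example values are assumptions, not the thesis implementation:

```python
# Hypothetical sketch of a Brain Emotional Learning (BEL) update step as commonly
# described in the literature (not the thesis implementation). S holds sensory
# inputs, rew is the reward/emotional cue, alpha/beta are assumed learning gains.
def bel_step(S, V, W, rew, alpha=0.1, beta=0.05):
    A = [s * v for s, v in zip(S, V)]   # amygdala outputs (excitatory)
    O = [s * w for s, w in zip(S, W)]   # orbitofrontal outputs (inhibitory)
    E = sum(A) - sum(O)                 # model output, e.g. used as control signal
    # Amygdala weights grow monotonically toward the reward; orbitofrontal weights
    # correct the difference between the model output and the reward.
    V = [v + alpha * s * max(0.0, rew - sum(A)) for s, v in zip(S, V)]
    W = [w + beta * s * (E - rew) for s, w in zip(S, W)]
    return E, V, W

# Example: two sensory inputs, weights initialized to zero.
E, V, W = bel_step(S=[0.8, 0.3], V=[0.0, 0.0], W=[0.0, 0.0], rew=1.0)
```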
Contributors: Vargas-Clara, Alvaro (Author) / Redkar, Sangram (Thesis advisor) / McKenna, Anna (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data were collected as teams of humans worked to carry out a simulated search and rescue task in an uncertain virtual environment. Conditions were tested emulating a remotely controlled robot versus an intelligent one. Differences in performance, situation awareness, trust, workload, and communications were measured. The intelligent robot condition resulted in higher levels of performance and operator situation awareness (SA).
Contributors: Bartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

The processing of large volumes of RDF data requires an efficient storage and query processing engine that can scale well with the volume of data. Initial attempts to address this issue focused on optimizing native RDF stores as well as conventional relational database management systems. But as the volume of RDF data grew exponentially, the limitations of these systems became apparent and researchers began to focus on using big data analysis tools, most notably Hadoop, to process RDF data. Various studies and benchmarks that evaluate these tools for RDF data processing have been published. In the past two and a half years, however, heavy users of big data systems, such as Facebook, noted limitations with the query performance of these systems and began to develop new distributed query engines for big data that do not rely on map-reduce. Facebook's Presto is one such example.

This thesis evaluates the performance of Presto in processing big RDF data against Apache Hive. A comparative analysis was also conducted against 4store, a native RDF store. To evaluate the performance of Presto for big RDF data processing, a map-reduce program and a compiler, based on Flex and Bison, were implemented. The map-reduce program loads RDF data into HDFS while the compiler translates SPARQL queries into a subset of SQL that Presto (and Hive) can understand. The evaluation was done on four-node and eight-node Linux clusters installed on the Microsoft Windows Azure platform with RDF datasets of 10, 20, and 30 million triples. The results of the experiment show that Presto has much higher performance than Hive and can be used to process big RDF data. The thesis also proposes an architecture based on Presto, Presto-RDF, that can be used to process big RDF data.
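As an illustrative example only (the triples-table schema, query, and URIs are assumptions, not the thesis's actual translation scheme): translating a basic SPARQL graph pattern into SQL over a single triples table typically turns each triple pattern into one scan, with shared variables becoming join conditions that an engine such as Presto or Hive can execute:

```python
# Hypothetical illustration of SPARQL-to-SQL translation over a flat triples table
# with columns (subject, predicate, object). Schema and query are made up.
sparql = """
SELECT ?name WHERE {
  ?person <http://xmlns.com/foaf/0.1/knows> <http://example.org/alice> .
  ?person <http://xmlns.com/foaf/0.1/name>  ?name .
}
"""

# Each triple pattern maps to one alias of the triples table; the shared variable
# ?person becomes the join condition between the two scans.
sql = """
SELECT t2.object AS name
FROM   triples t1
JOIN   triples t2 ON t1.subject = t2.subject
WHERE  t1.predicate = 'http://xmlns.com/foaf/0.1/knows'
  AND  t1.object    = 'http://example.org/alice'
  AND  t2.predicate = 'http://xmlns.com/foaf/0.1/name'
"""
```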
Contributors: Mammo, Mulugeta (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Lindquist, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2014