Matching Items (23)
Filtering by: All Subjects: Computer Science
Description
Mobile apps have improved many aspects of daily life, ranging from instant messaging to tele-health. In the current app development paradigm, apps are developed individually, agnostic of one another. The goal of this thesis is to enable a new world in which multiple apps communicate with each other to achieve synergistic benefits. Today, integrating apps requires manual communication between developers, which can be problematic on many levels. To promote app integration, a systematic approach to data sharing between multiple apps is essential. However, current approaches to app integration require large code modifications to reap the benefits of shared data, such as obliging developers to provide APIs or adopt large, invasive middleware. In this thesis, a data sharing framework was developed that provides a non-invasive interface between mobile apps for data sharing and integration. A separate app acts as a registry, allowing apps to register database tables to be shared and to query this information. Two health monitoring apps were developed to evaluate the sharing framework and different methods of data integration between apps that promote synergistic feedback. The health monitoring apps show that non-invasive solutions can provide data sharing functionality without large code modifications or manual communication between developers.
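The registry model described here lends itself to a brief illustration. The following Python sketch is a minimal, hypothetical stand-in for the registry app; every name in it (SharedTableRegistry, register_table, find_tables) is invented for illustration and is not the thesis's actual API:

```python
# Minimal sketch of a table-sharing registry in the spirit of the framework
# described above. All names here are hypothetical, not the thesis's API.

class SharedTableRegistry:
    """Central registry where apps advertise database tables they share."""

    def __init__(self):
        self._tables = {}  # (app_id, table_name) -> schema metadata

    def register_table(self, app_id, table_name, columns):
        """An app registers a table it is willing to share."""
        self._tables[(app_id, table_name)] = {"columns": columns}

    def find_tables(self, column):
        """Another app discovers tables exposing a column of interest."""
        return [
            (app, name)
            for (app, name), meta in self._tables.items()
            if column in meta["columns"]
        ]

# Example: a step-counter app shares data a diet app can discover and query,
# with no change to either app beyond talking to the registry.
registry = SharedTableRegistry()
registry.register_table("step_counter", "daily_steps", ["date", "steps"])
print(registry.find_tables("steps"))  # [('step_counter', 'daily_steps')]
```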
Contributors: Milazzo, Joseph (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Varsamopoulos, Georgios (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Many web search improvements have been developed since the advent of the modern search engine, but one underrepresented area is the customization of search results for educational web sites. To address this issue and improve the relevance of search results in automated learning environments, this work integrates context-aware search principles with preference-based re-ranking and query modification. Specifically, context-sensitive, preference-based re-ranking of results takes user input about preferred content, and is combined with query modification, which automatically searches for a variety of modified terms based on the given query and integrates those results into the overall re-ranking for the context. The result of this work is a novel web search algorithm that could be applied to any online learning environment attempting to collect relevant resources for learning about a given topic. The algorithm was evaluated through user studies comparing traditional search results to the context-aware results returned by the algorithm for a given topic. These studies explore how this integration of methods can improve the relevance of the returned search results compared with other modern search engines.
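As a rough illustration of how preference-based re-ranking and query modification can combine, consider the Python sketch below. The scoring scheme, the 0.5 blending weight, and the way context terms are appended are all assumptions made for illustration, not the thesis's actual algorithm:

```python
# Illustrative sketch: query modification plus preference-based re-ranking.
# Weights and scoring are assumptions, not the thesis's own formulas.

def modified_queries(query, context_terms):
    """Generate modified search queries by appending context terms."""
    return [query] + [f"{query} {term}" for term in context_terms]

def rerank(results, preferences, weight=0.5):
    """Blend the engine's original score with a user-preference score.

    results: list of (url, engine_score); preferences: {keyword: boost}.
    """
    def score(item):
        url, engine_score = item
        pref = sum(boost for kw, boost in preferences.items() if kw in url)
        return (1 - weight) * engine_score + weight * pref
    return sorted(results, key=score, reverse=True)

# Example: results for a learning topic, boosted toward tutorial content.
results = [("example.org/reference", 0.9), ("example.org/tutorial", 0.7)]
print(modified_queries("binary search", ["tutorial", "examples"]))
print(rerank(results, {"tutorial": 1.0}))  # tutorial page now ranks first
```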
Contributors: Van Egmond, Eric (Author) / Burleson, Winslow (Thesis advisor) / Syrotiuk, Violet (Thesis advisor) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Corporations invest considerable resources to create, preserve, and analyze their data; yet while organizations are interested in protecting against unauthorized data transfer, they lack a comprehensive metric to discriminate which data are at risk of leaking.

This thesis motivates the need for a quantitative leakage risk metric, and provides a risk assessment system, called Whispers, for computing it. Using unsupervised machine learning techniques, Whispers uncovers themes in an organization's document corpus, including previously unknown or unclassified data. Then, by correlating each document with its authors, Whispers can identify which data are easier to contain, and conversely which are at risk.

Using the Enron email database, Whispers constructs a social network segmented by topic themes. This graph uncovers communication channels within the organization. Using this social network, Whispers determines the risk of each topic by measuring the rate at which simulated leaks go undetected. For the Enron set, Whispers identified 18 separate topic themes between January 1999 and December 2000. The highest-risk data emanated from the legal department, with a leakage risk as high as 60%.
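The abstract defines a topic's risk as the fraction of simulated leaks that go undetected. The Python sketch below illustrates one way such a Monte Carlo estimate could look; the detection model used here (a leak blends in, and so goes undetected, when it travels an existing communication channel) is an assumption made for this sketch, not Whispers's actual criterion:

```python
import random

# Toy estimate of per-topic leakage risk, per the definition above. The
# "existing channel goes undetected" rule is an illustrative assumption.

def leak_risk(topic_members, contacts, population, trials=10_000, seed=0):
    """Fraction of simulated leaks from topic_members that go undetected.

    contacts: {author: set of people they email} from the social network.
    """
    rng = random.Random(seed)
    undetected = 0
    members = sorted(topic_members)
    people = sorted(population)
    for _ in range(trials):
        sender = rng.choice(members)      # an insider leaks topic data...
        receiver = rng.choice(people)     # ...to a random recipient
        if receiver in contacts.get(sender, set()):
            undetected += 1               # existing channel: blends in
    return undetected / trials

contacts = {"alice": {"bob", "eve"}}
print(leak_risk({"alice"}, contacts, {"bob", "carol", "eve"}))  # ~0.67 here
```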
Contributors: Wright, Jeremy (Author) / Syrotiuk, Violet (Thesis advisor) / Davulcu, Hasan (Committee member) / Yau, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis proposes a novel approach to establishing a trust model in a social network scenario based on users' emails. Email is one of the most important social connections today. By analyzing email exchange activity among users, a social network trust model can be established to judge the trust rate between any two users. The trust checking process is divided into two steps: local checking and remote checking. Local checking contacts the email server directly to calculate the trust rate based on the user's own email communication history. Remote checking is a distributed computing process that enlists the user's social network friends to build the trust rate together. The email-based trust model is built upon a cloud computing framework called MobiCloud. Inside MobiCloud, each user occupies a virtual machine that can communicate directly with the others. Based on this feature, the distributed trust model is implemented as a combination of local analysis and remote analysis in the cloud. Experimental results show that the trust evaluation model gives accurate trust rates even in a small-scale social network that does not have many social connections. With this trust model, security in both social network services and email communication could be improved.
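The two-step structure (local checking, then remote checking) can be sketched in a few lines of Python. The frequency heuristic and the 50/50 blending below are illustrative assumptions, not the thesis's actual trust formula, and the function names are invented:

```python
# Hypothetical sketch of local + remote trust checking as described above.

def local_trust(history, target):
    """Trust from one's own mailbox: fraction of exchanges involving the
    target (a stand-in for richer features from the email server)."""
    if not history:
        return 0.0
    return sum(1 for peer in history if peer == target) / len(history)

def remote_trust(friends, target):
    """Ask each friend's instance (a MobiCloud VM, in the thesis) for its
    local estimate and average the replies."""
    estimates = [local_trust(h, target) for h in friends.values()]
    return sum(estimates) / len(estimates) if estimates else 0.0

def trust_rate(my_history, friends, target, alpha=0.5):
    """Blend local and remote evidence; alpha weights one's own history."""
    return (alpha * local_trust(my_history, target)
            + (1 - alpha) * remote_trust(friends, target))

friends = {"bob": ["carol", "dave", "carol"], "eve": ["carol"]}
print(trust_rate(["carol", "frank"], friends, "carol"))  # ~0.67 here
```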
Contributors: Zhong, Yunji (Author) / Huang, Dijiang (Thesis advisor) / Dasgupta, Partha (Committee member) / Syrotiuk, Violet (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Exhaustive testing is generally infeasible except in the smallest of systems. Research has shown that testing the interactions among fewer (up to 6) components is generally sufficient, while retaining the capability to detect up to 99% of defects. This leads to a substantial decrease in the number of tests. Covering arrays are combinatorial objects that guarantee every interaction is tested at least once.

In the absence of direct constructions, forming small covering arrays is generally an expensive computational task. Algorithms to generate covering arrays have been extensively studied, yet no single algorithm provides the smallest solution. More recently, research has been directed towards a new technique called post-optimization, in which an algorithm takes an existing covering array and attempts to reduce its size.

This thesis presents a new idea for post-optimization: representing covering arrays as graphs. Some properties of these graphs are established, and the results are contrasted with existing post-optimization algorithms. The idea is then generalized to close variants of covering arrays with surprising results, in some cases reducing the size by 30%. Applications of the method to generation and test prioritization are studied, and some interesting results are reported.
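The core check behind post-optimization can be illustrated concretely: a row of a covering array is redundant when every t-way interaction it covers also appears in some other row. The Python sketch below implements only this basic redundancy test; the graph representation in the thesis goes well beyond it:

```python
from itertools import combinations

# Basic post-optimization redundancy test (not the thesis's graph method):
# repeatedly delete any row whose interactions are all covered elsewhere.

def interactions(row, t):
    """All t-way interactions (column positions plus values) in a row."""
    return {(cols, tuple(row[c] for c in cols))
            for cols in combinations(range(len(row)), t)}

def remove_redundant_rows(array, t=2):
    rows = [list(r) for r in array]
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(rows):
            others = set().union(*(interactions(r, t)
                                   for j, r in enumerate(rows) if j != i))
            if interactions(row, t) <= others:
                del rows[i]      # every interaction is covered elsewhere
                changed = True
                break
    return rows

# A pairwise covering array over 3 binary factors, padded with a duplicate.
ca = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 0]]
print(remove_redundant_rows(ca))  # the duplicate row is removed: 4 rows
```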
Contributors: Karia, Rushang Vinod (Author) / Colbourn, Charles J (Thesis advisor) / Syrotiuk, Violet (Committee member) / Richa, Andréa W. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Medium access control (MAC) is a fundamental problem in wireless networks. In ad-hoc wireless networks especially, many of the performance and scaling issues these networks face can be attributed to their use of the core IEEE 802.11 MAC protocol: the distributed coordination function (DCF). Smoothed Airtime Linear Tuning (SALT) is a new contention window tuning algorithm proposed to address some of the deficiencies of DCF in 802.11 ad-hoc networks. SALT works alongside a new, optimized user-level implementation of REACT, a distributed resource allocation protocol, to ensure that each node secures the amount of airtime allocated to it by REACT. The algorithm accomplishes this by tuning the contention window size parameter that is part of the 802.11 backoff process. SALT converges more tightly on airtime allocations than a contention window tuning algorithm from previous work, increasing fairness in transmission opportunities and reducing jitter more than either 802.11 DCF or the other tuning algorithm. REACT and SALT were also extended to the multi-hop flow scenario with the introduction of a new airtime reservation algorithm. With a reservation in place, multi-hop TCP throughput actually increased when running SALT and REACT compared to 802.11 DCF, and the combination of protocols still maintained its fairness and jitter advantages. All experiments were performed on a wireless testbed, not in simulation.
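The tuning loop the abstract describes, smoothing a noisy airtime measurement and nudging the contention window toward an allocated share, can be sketched as follows. The multiplicative update rule and the smoothing constant are illustrative assumptions, not SALT's actual rule:

```python
# Toy contention-window tuner in the spirit of the description above.
# The update rule and constants are assumptions, not SALT's own.

class ContentionWindowTuner:
    """Smooths airtime measurements and steers CW toward an allocation."""

    def __init__(self, cw=31, alpha=0.25, cw_min=15, cw_max=1023):
        self.cw = cw
        self.alpha = alpha                  # smoothing factor
        self.cw_min, self.cw_max = cw_min, cw_max
        self.smoothed = None

    def update(self, measured_share, allocated_share):
        # Exponentially smooth the noisy airtime measurement.
        if self.smoothed is None:
            self.smoothed = measured_share
        else:
            self.smoothed = (self.alpha * measured_share
                             + (1 - self.alpha) * self.smoothed)
        # Using too much airtime -> back off (grow CW); too little -> shrink.
        ratio = self.smoothed / max(allocated_share, 1e-9)
        self.cw = int(min(self.cw_max, max(self.cw_min, self.cw * ratio)))
        return self.cw

tuner = ContentionWindowTuner()
for share in (0.40, 0.38, 0.35):    # node keeps exceeding its 0.25 share
    print(tuner.update(share, allocated_share=0.25))  # CW grows each step
```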
Contributors: Mellott, Matthew (Author) / Syrotiuk, Violet (Thesis advisor) / Colbourn, Charles (Committee member) / Tinnirello, Ilenia (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
One of the core components of many video games is their artificial intelligence. Through AI, a game can tell stories, generate challenges, and create encounters for the player to overcome. Even though AI has continued to advance through the implementation of neural networks and machine learning, game AI instead tends to implement a series of states or decisions to give the illusion of intelligence. Despite this limitation, games can still generate a wide range of experiences for the player. The Hybrid Game AI Framework is an AI system that combines the benefits of two commonly used approaches to developing game AI: Behavior Trees and Finite State Machines. Developed in the Unity game engine and the C# programming language, this AI framework represents the research that went into studying modern approaches to game AI and my own attempt at implementing the techniques learned. Object-oriented programming concepts such as inheritance, abstraction, and low coupling are utilized with the intent to create game AI that is easy to implement and expand upon. The final goal was to create a flexible yet structured AI data structure while also minimizing drawbacks by combining Behavior Trees and Finite State Machines.
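The thesis's framework is written in C# for Unity; the Python sketch below only illustrates the hybrid idea it names: a finite state machine selects the high-level state, and each state runs its own small behavior tree. All class names here are hypothetical:

```python
# Hypothetical sketch of an FSM whose states each run a behavior tree.

class Sequence:
    """Behavior-tree node: succeeds only if every child succeeds, in order."""
    def __init__(self, *children): self.children = children
    def tick(self, agent):
        return all(child.tick(agent) for child in self.children)

class Action:
    """Leaf node wrapping a callable that returns True on success."""
    def __init__(self, fn): self.fn = fn
    def tick(self, agent): return self.fn(agent)

class HybridAI:
    """FSM whose states are behavior trees; transitions swap trees."""
    def __init__(self, states, initial):
        self.states, self.state = states, initial
    def tick(self, agent):
        tree, transition = self.states[self.state]
        tree.tick(agent)                 # run this state's behavior tree
        self.state = transition(agent)   # FSM chooses the next state

see_player = lambda a: a["player_near"]
patrol = Action(lambda a: print("patrolling") or True)
chase = Action(lambda a: print("chasing") or True)

ai = HybridAI({
    "patrol": (Sequence(patrol),
               lambda a: "chase" if see_player(a) else "patrol"),
    "chase":  (Sequence(chase),
               lambda a: "chase" if see_player(a) else "patrol"),
}, initial="patrol")

ai.tick({"player_near": False})  # patrolling
ai.tick({"player_near": True})   # patrolling, then switches to chase
ai.tick({"player_near": True})   # chasing
```

Keeping the tree nodes and the state machine decoupled in this way is what lets either half be extended independently, which is the low-coupling goal the abstract describes.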
Contributors: Ramirez Cordero, Erick Alberto (Author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Can a skill taught in a virtual environment be utilized in the physical world? This idea is explored by creating a Virtual Reality game for the HTC Vive to teach users how to play the drums. The game focuses on developing the user's muscle memory, improving the user's ability to play music as they hear it in their head, and refining the user's sense of rhythm. Several different features were included to achieve this such as a score, different levels, a demo feature, and a metronome. The game was tested for its ability to teach and for its overall enjoyability by using a small sample group. Most participants of the sample group noted that they felt as if their sense of rhythm and drumming skill level would improve by playing the game. Through the findings of this project, it can be concluded that while it should not be considered as a complete replacement for traditional instruction, a virtual environment can be successfully used as a learning aid and practicing tool.
Contributors: Dinapoli, Allison (Co-author) / Tuznik, Richard (Co-author) / Kobayashi, Yoshihiro (Thesis director) / Nelson, Brian (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
This thesis investigates students' learning behaviors through their interaction with an educational technology, Web Programming Grading Assistant. The technology was developed to facilitate the grading of paper-based examinations in large lecture-based classrooms and to provide richer and more meaningful feedback to students. A classroom study was designed, and data were gathered from an undergraduate computer programming course in the fall of 2016. Analysis of the data revealed a negative correlation between the time lag of a student's first review attempt and their performance. A survey was developed and disseminated, giving insight into how students felt about the technology and how they normally study for programming exams. In conclusion, the knowledge gained in this study aids in the quest to better educate students in computer programming in large in-person classrooms.
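The key quantitative finding is the negative correlation between review delay and performance. A sketch of that analysis on made-up numbers (the study's real data are not reproduced here):

```python
from statistics import correlation  # Pearson's r; Python 3.10+

lag_days = [1, 2, 3, 5, 8, 13]       # hypothetical days until first review
scores   = [92, 88, 85, 80, 72, 65]  # hypothetical exam performance
print(correlation(lag_days, scores)) # strongly negative, about -0.99
```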
Contributors: Murphy, Hannah (Author) / Hsiao, Ihan (Thesis director) / Nelson, Brian (Committee member) / School of Computing, Informatics, and Decision Systems Engineering (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
In software testing, components are tested individually to make sure each performs as expected. The next step is to confirm that two or more components are able to work together. This stage of testing is often difficult because there can be numerous configurations between just two components.
Covering arrays are one way to ensure a set of tests will cover every possible configuration at least once. However, on systems with many settings, it is computationally intensive to run every possible test. Test prioritization methods can identify tests of greater importance. This concept of test prioritization can help determine which tests can be removed with minimal impact to the overall testing of the system.
This thesis presents three algorithms that generate covering arrays testing the interaction of every two components at least twice. These algorithms extend the functionality of an established greedy test prioritization method to ensure important components are selected in earlier tests. The algorithms were tested on various inputs, and the results reveal that, on average, the resulting covering arrays are two-fifths to one-half the size of a covering array generated through brute force.
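The double-coverage requirement can be made concrete with a small greedy sketch: each 2-way interaction must appear in at least two rows, and each new row is chosen to supply the most remaining coverage. The exhaustive candidate enumeration and the scoring below are simplifications for illustration, not the thesis's algorithms:

```python
from itertools import combinations, product

# Greedy sketch for double coverage: every pair of component settings must
# appear in at least two rows. Exhaustive enumeration suits small inputs only.

def greedy_double_coverage(levels):
    k = len(levels)
    need = {(cols, vals): 2                      # each pair needed twice
            for cols in combinations(range(k), 2)
            for vals in product(levels[cols[0]], levels[cols[1]])}
    rows = []
    while any(need.values()):
        best, best_gain = None, -1
        for cand in product(*levels):            # try every candidate row
            gain = sum(1 for cols in combinations(range(k), 2)
                       if need[(cols, (cand[cols[0]], cand[cols[1]]))] > 0)
            if gain > best_gain:
                best, best_gain = cand, gain
        for cols in combinations(range(k), 2):   # record coverage supplied
            key = (cols, (best[cols[0]], best[cols[1]]))
            need[key] = max(0, need[key] - 1)
        rows.append(best)
    return rows

array = greedy_double_coverage([(0, 1)] * 3)     # 3 binary components
print(len(array), array)                         # every pair covered twice
```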
Contributors: Ang, Nicole (Author) / Syrotiuk, Violet (Thesis advisor) / Colbourn, Charles (Committee member) / Richa, Andrea (Committee member) / Arizona State University (Publisher)
Created: 2015