Matching Items (49)
Description
Biological systems are complex in many dimensions, as countless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. The progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models, i.e., recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics, and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation first discusses the landscape for modeling the biological system, then explores the identification of targets for intervention in Boolean network models of biological interactions, and examines context specificity, both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation explores a spectrum of biological modeling aimed at therapeutic intervention, with both formal and informal notions of biological context, in a way that will enable future work to have an even greater impact in terms of direct patient benefit at an individualized level.
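As a concrete illustration of the Boolean network formalism the dissertation builds on, the following minimal sketch simulates a hypothetical three-gene network and finds its attractor; the genes and update rules here are invented for illustration and are not taken from the dissertation.

```python
# A minimal Boolean network sketch: genes are on/off, and all genes update
# synchronously according to fixed Boolean rules. The three-gene network
# below is a hypothetical example.

def step(state):
    """Synchronously update all genes; each rule is a Boolean function of the state."""
    a, b, c = state
    return (
        not c,        # gene A is repressed by C
        a and not c,  # gene B requires A and the absence of C
        a or b,       # gene C is activated by A or B
    )

def attractor(state, max_steps=100):
    """Iterate until a previously seen state recurs; the resulting cycle is an
    attractor, often interpreted as a stable cellular phenotype."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]
        seen.append(state)
        state = step(state)
    return []

print(attractor((True, False, False)))
```

Intervention-target analysis in such models amounts to asking which perturbations (fixing a gene on or off) move the network from an undesirable attractor to a desirable one.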
Contributors: Verdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Laboratory automation systems have seen many technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive, and thread-safe. The thesis also proposes a framework, developed to implement the ideas of the architecture, which is used to build software that is scalable, distributed, responsive, and thread-safe. The framework currently has components to control commonly used laboratory automation devices, such as mechanical stages and cameras, and to perform common laboratory automation functions such as imaging.
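A minimal sketch of the component style described above, assuming a Python rendering: each component runs on its own thread and is driven solely by messages placed in its queue, so callers never block. The StageComponent class and its move_to message are hypothetical examples, not the framework's actual API.

```python
# Autonomous components communicating by asynchronous message passing:
# each component owns a thread and an inbox queue, which keeps the
# system responsive and thread-safe without shared-state locking.

import threading
import queue

class Component(threading.Thread):
    """Autonomous component: consumes messages from its inbox on its own thread."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name, self.inbox = name, queue.Queue()
        self.start()

    def send(self, message):
        self.inbox.put(message)   # asynchronous: the caller never blocks on handling

    def run(self):
        while True:
            message = self.inbox.get()
            if message is None:   # sentinel: shut the component down
                break
            self.handle(message)

    def handle(self, message):
        raise NotImplementedError

class StageComponent(Component):
    def handle(self, message):
        if message[0] == "move_to":
            print(f"{self.name}: moving stage to {message[1]}")

stage = StageComponent("stage-1")
stage.send(("move_to", (10.0, 4.5)))  # returns immediately; handled on stage's thread
stage.send(None)
stage.join()
```

Because components interact only through their queues, they can be relocated to other processes or machines by swapping the in-memory queue for a network transport, which is what makes this style naturally distributable.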
Contributors: Kuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify the path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented; it consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
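To make the worst-case monetary cost idea concrete, here is a minimal sketch under the assumption of an acyclic control-flow graph with a fixed dollar cost per basic block; the graph, costs, and threshold are invented for illustration, and the thesis's actual analysis is richer (loops, path sections, operational profiles).

```python
# Worst-case monetary cost over a control-flow graph: the classic WCET
# recurrence, with dollars (e.g., per-transaction API billing) in place
# of CPU cycles. All nodes and costs below are hypothetical.

import functools

# node -> (cost in dollars, successor nodes)
cfg = {
    "entry":  (0.000, ["read", "cached"]),
    "read":   (0.004, ["write"]),   # e.g., a billed storage read
    "cached": (0.000, ["write"]),   # cache hit: no billed call
    "write":  (0.010, ["exit"]),    # e.g., a billed storage write
    "exit":   (0.000, []),
}

@functools.lru_cache(maxsize=None)
def worst_cost(node):
    """Worst-case cost from node to exit: own cost plus the costliest successor path."""
    cost, succs = cfg[node]
    return cost + max((worst_cost(s) for s in succs), default=0.0)

threshold = 0.012
print(f"worst-case cost: ${worst_cost('entry'):.3f}")
if worst_cost("entry") > threshold:
    print("some control-flow path can exceed the cost threshold")
```

Comparing per-node worst-case costs along the maximal path is one simple way to locate the sections that contribute most to a cost excess.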
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and security engineers identify vulnerabilities early and avoid the unexpected, expensive costs of handling breaches. The process of security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control. In practice, however, its complexity, including role management, role hierarchies with hundreds of roles, and their associated privileges and users, makes systematically testing RBAC systems crucial to ensuring security in domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems considering the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework which takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
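A minimal sketch of role-hierarchy-aware test generation, using plain dictionaries in place of the thesis's MTBDD encoding; the roles and permissions are hypothetical. Positive tests assert that every effective (direct or inherited) permission is granted, and negative tests assert that every other permission is denied.

```python
# Role hierarchy with permission inheritance, and positive/negative
# test-case generation. All roles and permissions are hypothetical.

# role -> junior roles it inherits permissions from
hierarchy = {"physician": ["employee"], "employee": []}
direct_perms = {"employee": {"view_record"}, "physician": {"edit_record"}}
all_perms = {"view_record", "edit_record", "delete_record"}

def effective_perms(role):
    """A role's own permissions plus everything inherited from junior roles."""
    perms = set(direct_perms.get(role, set()))
    for junior in hierarchy.get(role, []):
        perms |= effective_perms(junior)
    return perms

def test_cases(role):
    """Positive tests: each effective permission must be granted.
    Negative tests: every other permission must be denied."""
    granted = effective_perms(role)
    positive = [(role, p, "PERMIT") for p in sorted(granted)]
    negative = [(role, p, "DENY") for p in sorted(all_perms - granted)]
    return positive + negative

for case in test_cases("physician"):
    print(case)   # e.g., ('physician', 'view_record', 'PERMIT') via inheritance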
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
There has been a lot of research in the field of artificial intelligence about thinking machines. Alan Turing proposed a test to observe a machine's intelligent behaviour with respect to natural language conversation. The Winograd schema challenge has been suggested as an alternative to the Turing test. It requires inferencing capabilities, reasoning abilities, and background knowledge to get the answer right. It involves a coreference resolution task in which a machine is given a sentence describing a situation involving two entities, one pronoun, and some further information about the situation, and the machine has to come up with the right resolution of the pronoun to one of the entities. The task is made more complex by the fact that Winograd sentences are not constrained to one domain or a specific sentence structure, and they contain many human proper names. This makes it difficult to associate entities with a particular word in the sentence in order to derive the answer. I have developed a pronoun resolver system for a confined domain of Winograd sentences. I have developed a classifier, or filter, which takes input sentences and decides to accept or reject them based on a particular criterion. Once a sentence is accepted, I run parsers on it to obtain a detailed analysis. Furthermore, I have developed four answering modules which use world knowledge and inferencing mechanisms to try to resolve the pronoun. The four techniques I use are: the ConceptNet knowledge base, search engine pattern counts, narrative event chains, and sentiment analysis. I have developed an aggregation mechanism for the answers from these modules to arrive at a final answer. I have used a caching technique for the association relations obtained by the different modules, so as to boost performance. I run my system on the standard 'nyu dataset' of Winograd sentences and questions. This dataset is restricted, by my classifier, to 90 sentences, and I evaluate my system on this 90-sentence dataset. When I compare my results against the state-of-the-art system on the same dataset, I get nearly a 4.5% improvement in the restricted domain.
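To illustrate the aggregation step, here is a minimal sketch in which each of the four modules contributes a confidence score for each candidate entity and the scores are summed. The module names match the four techniques above, but the module_vote placeholder, the scores, and the caching shown are simplifications invented for illustration, not the thesis's actual mechanism.

```python
# Aggregating answering-module votes for Winograd pronoun resolution,
# with cached association lookups. Scores below are fabricated placeholders.

import functools

@functools.lru_cache(maxsize=None)   # cache association relations per (module, sentence, candidate)
def module_vote(module, sentence, candidate):
    # Placeholder: a real module would query ConceptNet, a search engine,
    # narrative event chains, or a sentiment analyzer.
    fake_scores = {"ConceptNet": 0.7, "SearchCounts": 0.4,
                   "EventChains": 0.6, "Sentiment": 0.2}
    return fake_scores[module] if candidate == "the trophy" else 0.3

def resolve(sentence, candidates):
    """Sum each module's confidence per candidate; return the highest-scoring one."""
    modules = ["ConceptNet", "SearchCounts", "EventChains", "Sentiment"]
    totals = {c: sum(module_vote(m, sentence, c) for m in modules)
              for c in candidates}
    return max(totals, key=totals.get)

sentence = "The trophy doesn't fit into the suitcase because it is too large."
print(resolve(sentence, ["the trophy", "the suitcase"]))  # -> the trophy
```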
Contributors: Budukh, Tejas Ulhas (Author) / Baral, Chitta (Thesis advisor) / VanLehn, Kurt (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis is an initial test of the hypothesis that superficial measures suffice for measuring collaboration among pairs of students solving complex math problems, where the degree of collaboration is categorized at a high level. Data were collected in the form of logs from students' tablets and the vocal interaction between pairs of students. Thousands of different features were defined and then extracted computationally from the audio and log data. Human coders used richer data (several video streams) and a thorough understanding of the tasks to code episodes as collaborative, cooperative, or asymmetric contribution. Machine learning was used to induce a detector, based on random forests, that outputs one of these three codes for an episode given only a characterization of the episode in terms of superficial features. An overall accuracy of 92.00% (kappa = 0.82) was obtained when comparing the detector's codes to the humans' codes. However, due to irregularities in running the study (e.g., the tablet software kept crashing), these results should be viewed as preliminary.
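A minimal sketch of such a detector, using synthetic stand-in data: a random forest maps superficial episode features to one of the three codes, and agreement with human codes is reported as accuracy and Cohen's kappa. The feature and label values here are random, not the study's data.

```python
# Random-forest collaboration detector evaluated with accuracy and kappa.
# Synthetic data stands in for the thousands of audio/log features.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))        # 300 episodes, 20 superficial features
y = rng.integers(0, 3, size=300)      # 0=collaborative, 1=cooperative, 2=asymmetric
X[y == 0, 0] += 2.0                   # inject signal so the toy classes are learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
detector = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = detector.predict(X_te)
print(f"accuracy: {accuracy_score(y_te, pred):.2f}, "
      f"kappa: {cohen_kappa_score(y_te, pred):.2f}")
```

Reporting kappa alongside raw accuracy matters here because with three unevenly used codes, chance agreement alone can make accuracy look deceptively high.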
Contributors: Viswanathan, Sree Aurovindh (Author) / VanLehn, Kurt (Thesis advisor) / Chi, Michelene T. H. (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Research in the learning sciences suggests that students learn better by collaborating with their peers than by learning individually. Students working together as a group tend to generate new ideas more frequently and exhibit a higher level of reasoning. In this internet age, with the advent of massive open online courses (MOOCs), students across the world are able to access and learn material remotely. This creates a need for tools that support distant or remote collaboration. In order to build such tools, we need to understand the basic elements of remote collaboration and how it differs from traditional face-to-face collaboration.

The main goal of this thesis is to explore how spoken dialogue varies between face-to-face and remote collaborative learning settings. Speech data were collected from student participants solving mathematical problems collaboratively on a tablet. Spoken dialogue was analyzed based on conversational and acoustic features in both settings. To identify collaborative differences in transactivity and dialogue initiative, the two settings are compared in detail using machine learning classification techniques based on acoustic and prosodic features of speech. Transactivity is defined as the joint construction of knowledge by peers. The main contributions of this thesis are a speech corpus for analyzing spoken dialogue in face-to-face and remote settings, and an empirical analysis of conversation, collaboration, and speech prosody in both settings. The results from the experiments show that the amount of overlap is lower in remote dialogue than in the face-to-face setting, and that there is a significant difference in transactivity among strangers. My research benefits the computer-supported collaborative learning community by providing an analysis that can be used to build more efficient tools for supporting remote collaborative learning.
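As an illustration of one conversational feature from this kind of analysis, the sketch below computes the fraction of speaking time where both partners talk at once (overlap); the speech segments are hypothetical stand-ins for what corpus annotations would supply.

```python
# Overlap fraction between two speakers, from (start, end) speech segments
# in seconds. Segment times below are hypothetical examples.

def overlap_seconds(segments_a, segments_b):
    """Total time where a segment of speaker A intersects a segment of speaker B."""
    total = 0.0
    for a_start, a_end in segments_a:
        for b_start, b_end in segments_b:
            total += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return total

speaker_a = [(0.0, 3.2), (5.0, 8.1)]
speaker_b = [(2.8, 5.5), (8.0, 9.0)]
talk_time = sum(end - start for start, end in speaker_a + speaker_b)
print(f"overlap fraction: {overlap_seconds(speaker_a, speaker_b) / talk_time:.2%}")
```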
Contributors: Nelakurthi, Arun Reddy (Author) / Pon-Barry, Heather (Thesis advisor) / VanLehn, Kurt (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Strong communities are important for society. One of the most important community builders, making friends, is poorly supported online. Dating sites support it, but only in romantic contexts. Other major social networks seem not to encourage it, either because their purpose isn't compatible with introducing strangers or because the prevalent methods of introduction aren't effective enough to merit use over real-world alternatives. This paper presents a novel digital social network emphasizing the creation of friendships. Research has shown that video chat communication can reach in-person levels of trust. By coupling video chat with a game environment, to ease the discomfort people often feel when interacting with strangers, and with a recommendation engine, the presented system, Zazzer, allows people to meet and get to know each other in a manner much truer to real life than traditional methods. Its network also allows players to continue to communicate afterwards. The evaluation looks at real-world use, measuring the frequency with which players choose the video chat game over alternative, more traditional methods of online introduction. It also looks at interactions after the initial meeting to discover how effective video chat games are at creating sticky social connections. After initial use it became apparent that a critical mass of users would be necessary to draw strong conclusions; however, the collected data seemed to give preliminary support to the idea that video chat games are more effective than traditional ways of meeting online at creating new relationships.
Contributors: Sorensen, Asael (Author) / VanLehn, Kurt (Thesis advisor) / Liu, Huan (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the rapid growth of mobile computing and sensor technology, it is now possible to access data from a variety of sources. A big challenge lies in linking sensor-based data with social and cognitive variables in humans in real-world contexts. This dissertation explores the relationship between creativity in teamwork and team members' movement and face-to-face interaction strength in the wild. Using sociometric badges (wearable sensors), electronic Experience Sampling Methods (ESM), the KEYS team creativity assessment instrument, and qualitative methods, three research studies were conducted in academic and industry R&D labs. Sociometric badges captured the movement of team members and face-to-face interaction between team members. The KEYS scale was implemented using ESM for self-rated creativity and expert-coded creativity assessment. Activities (movement and face-to-face interaction) and creativity of one five-member and two seven-member teams were tracked for twenty-five days, eleven days, and fifteen days respectively. Day-wise values of movement and face-to-face interaction for participants were mean-split categorized as creative and non-creative using the self-rated creativity measure and the expert-coded creativity measure. Paired-samples t-tests [t(36) = 3.132, p < 0.005; t(23) = 6.49, p < 0.001] confirmed that average daily movement energy during creative days (M = 1.31, SD = 0.04; M = 1.37, SD = 0.07) was significantly greater than the average daily movement of non-creative days (M = 1.29, SD = 0.03; M = 1.24, SD = 0.09). The eta squared statistic (0.21; 0.36) indicated a large effect size. A paired-samples t-test also confirmed that the face-to-face interaction tie strength of team members during creative days (M = 2.69, SD = 4.01) was significantly greater [t(41) = 2.36, p < 0.01] than the average face-to-face interaction tie strength of team members on non-creative days (M = 0.9, SD = 2.1). The eta squared statistic (0.11) indicated a large effect size. The combined approach of principal component analysis (PCA) and linear discriminant analysis (LDA) conducted on movement and face-to-face interaction data predicted creativity with 87.5% and 91% accuracy respectively. This work advances creativity research and provides a foundation for sensor-based real-time creativity support tools for teams.
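A minimal sketch of the two analyses reported above, run on synthetic stand-in data: a paired-samples t-test comparing daily movement on creative versus non-creative days, and a PCA-then-LDA pipeline predicting the creativity label. The numbers this sketch prints are illustrative only, not the study's results.

```python
# Paired t-test and PCA+LDA classification, as in the dissertation's
# analyses, on fabricated sociometric-badge-style data.

import numpy as np
from scipy.stats import ttest_rel
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
creative = rng.normal(1.31, 0.04, size=37)               # daily movement energy, creative days
noncreative = creative - rng.normal(0.02, 0.01, size=37) # paired non-creative days
t, p = ttest_rel(creative, noncreative)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")

X = rng.normal(size=(50, 10))                  # day-level activity features
y = (X[:, 0] + 0.5 * rng.normal(size=50)) > 0  # creative / non-creative label
clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```

Running PCA before LDA is a common guard against overfitting when the feature count is large relative to the number of observed days.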
Contributors: Tripathi, Priyamvada (Author) / Burleson, Winslow (Thesis advisor) / Liu, Huan (Committee member) / VanLehn, Kurt (Committee member) / Pentland, Alex (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many previous studies have analyzed human tutoring in great depth and have shown that expert human tutors produce effect sizes roughly twice those produced by an intelligent tutoring system (ITS). However, there has been no consensus on which factor makes them so effective. It is important to know this so that the same phenomena can be replicated in an ITS in order to achieve the same level of proficiency as expert human tutors. Also, to the best of my knowledge, no one has looked at student reactions when they are working with a computer-based tutor. The answers to both these questions are needed in order to build a highly effective computer-based tutor. My research focuses on the second question. In the first phase of my thesis, I analyzed the behavior of students as they worked with the step-based tutor Andes, using verbal-protocol analysis. In doing so, I learned some of the ways in which students use a step-based tutor, which can pave the way for the creation of more effective computer-based tutors. I found from the first phase of the research that students often keep trying to fix errors by guessing repeatedly instead of asking for help by clicking the hint button. This phenomenon is known as hint refusal. Surprisingly, a large portion of the students' floundering was due to hint refusal. The hypothesis tested in the second phase of the research is that hint refusal can be significantly reduced, and learning significantly increased, if Andes uses more unsolicited hints and meta-hints. An unsolicited hint is a hint that is given without the student asking for one. A meta-hint is like an unsolicited hint in that it is given without the student asking for it, but it just prompts the student to click on the hint button. Two versions of Andes were compared: the original version and a new version that gave more unsolicited and meta-hints. During a two-hour experiment, there were large, statistically reliable differences in several performance measures, suggesting that the new policy was more effective.
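To make the hint-policy contrast concrete, here is a minimal sketch of a policy that escalates from no intervention to a meta-hint and then an unsolicited hint as consecutive errors mount; the function and its thresholds are hypothetical illustrations, not Andes's actual logic.

```python
# Hypothetical hint policy: intervene on repeated errors instead of waiting
# for a hint request, countering hint refusal. Thresholds are invented.

def hint_policy(consecutive_errors, hint_requested):
    if hint_requested:
        return "solicited hint"                  # the only hint the old policy gave
    if consecutive_errors >= 4:
        return "unsolicited hint"                # intervene directly on long floundering
    if consecutive_errors >= 2:
        return "meta-hint: try the hint button"  # nudge toward help instead of re-guessing
    return "no intervention"

for errors in range(5):
    print(errors, "->", hint_policy(errors, hint_requested=False))
```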
Contributors: Ranganathan, Rajagopalan (Author) / VanLehn, Kurt (Thesis advisor) / Atkinson, Robert (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011