Description
Biological systems are complex in many dimensions, as countless transportation and communication networks function simultaneously. Our ability to intervene in both healthy and diseased systems is tied directly to our ability to understand and model their core functionality. Progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models, i.e., recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics, and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation first surveys the landscape of biological system modeling, explores the identification of targets for intervention in Boolean network models of biological interactions, and then examines context specificity, both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation explores a spectrum of biological modeling aimed at therapeutic intervention, with both formal and informal notions of biological context, in a way that enables future work to deliver even greater direct patient benefit at an individualized level.
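To make the modeling target concrete, here is a minimal Python sketch of a Boolean network of the kind the dissertation studies; the three genes, their update rules, and the pinned-gene intervention are invented for illustration and are not taken from the dissertation itself.

from itertools import product

# Hypothetical three-gene network; each rule computes a gene's
# next value from the current state of all genes.
rules = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"] or s["C"],
    "C": lambda s: not s["A"],
}

def step(state, pin=None):
    """Synchronous update; `pin` holds intervened genes at fixed values."""
    nxt = {g: int(rule(state)) for g, rule in rules.items()}
    if pin:
        nxt.update(pin)  # the intervention overrides the dynamics
    return nxt

def attractor(state, pin=None):
    """Iterate the deterministic dynamics until a state repeats;
    return the cycle (steady state or limit cycle) that is reached."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state, pin)
    return seen[seen.index(state):]

# Compare the attractor landscape with and without pinning gene C off,
# the Boolean-network analogue of a candidate therapeutic intervention.
for bits in product([0, 1], repeat=3):
    s = dict(zip(rules, bits))
    print(bits, attractor(s), "| C pinned off:", attractor(s, pin={"C": 0}))

Identifying intervention targets then amounts to asking which pinned gene steers the most initial states into a desired attractor.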
Contributors: Verdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Laboratory automation systems have seen many technological advances in recent years, and as a result the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and must be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive, and thread-safe. The thesis also presents a framework developed to implement the ideas proposed by the architecture; software built with it is scalable, distributed, responsive, and thread-safe. The framework currently includes components for controlling commonly used laboratory automation devices, such as mechanical stages and cameras, and for performing common laboratory automation tasks such as imaging.
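A minimal sketch of the message-passing idea follows; it is an in-process Python illustration (threads and queues standing in for components distributed across computers), and the device names and message formats are invented, not the thesis framework's API.

import queue
import threading
import time

class Component(threading.Thread):
    """Fairly autonomous component: owns an inbox and reacts to messages."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name, self.inbox, self.peers = name, queue.Queue(), {}

    def send(self, peer, msg):
        self.peers[peer].inbox.put((self.name, msg))  # asynchronous, non-blocking

    def run(self):
        while True:
            sender, msg = self.inbox.get()
            self.handle(sender, msg)

class Stage(Component):
    def handle(self, sender, msg):
        if msg[0] == "move":
            print(f"{self.name}: moving to {msg[1]}")
            self.send("camera", ("capture", msg[1]))

class Camera(Component):
    def handle(self, sender, msg):
        if msg[0] == "capture":
            print(f"{self.name}: imaging at {msg[1]}")

stage, camera = Stage("stage"), Camera("camera")
stage.peers["camera"] = camera
camera.start(); stage.start()
stage.inbox.put(("controller", ("move", (10, 20))))
time.sleep(0.5)  # give the daemon threads time to drain their inboxes

In a distributed deployment the in-process queues would be replaced by a network transport, but the components' handle-and-send logic would stay the same, which is what keeps them responsive and thread-safe.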
Contributors: Kuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications, which can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining the control flow paths within an application that can exceed a given cost threshold, and the results are used to identify the path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented; it consists mostly of dynamic measurements but incorporates calculations based on the static analysis, using operational profiles to increase the precision and usefulness of the results.
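As a toy illustration of the static side of such an analysis (a sketch under invented block names and costs, not the dissertation's algorithm), the following Python enumerates the paths of a small acyclic control-flow graph whose accumulated monetary cost exceeds a threshold.

# Acyclic control-flow graph: basic block -> successor blocks.
cfg = {
    "entry": ["read_db", "cached"],
    "read_db": ["render"],
    "cached": ["render"],
    "render": [],
}
# Hypothetical per-block cost in micro-dollars (compute time plus
# storage and network transactions attributed to the block).
cost = {"entry": 1, "read_db": 40, "cached": 2, "render": 5}

def costly_paths(node, threshold, path=(), spent=0):
    """Yield every entry-to-exit path whose total cost exceeds threshold."""
    path, spent = path + (node,), spent + cost[node]
    if not cfg[node]:  # exit block reached
        if spent > threshold:
            yield path, spent
        return
    for nxt in cfg[node]:
        yield from costly_paths(nxt, threshold, path, spent)

for path, spent in costly_paths("entry", threshold=20):
    print(" -> ".join(path), "costs", spent)

A WCET-style analysis would additionally bound loop iterations and prune paths symbolically; exhaustive enumeration like this only scales to small, acyclic graphs.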
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help security architects and engineers identify vulnerabilities early and avoid the unexpected, expensive cost of handling breaches. Security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control; in practice, however, its complexity, including role management, role hierarchies with hundreds of roles, and their associated privileges and users, makes systematically testing RBAC systems crucial to ensuring security in domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems that considers the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including the RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework that takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
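As a simplified picture of the structures involved (hypothetical roles and permissions, and plain sets rather than the thesis's MTBDD representation), the sketch below computes each role's maximum privileges through the hierarchy and derives the corresponding positive and negative test obligations.

# Hypothetical role hierarchy: role -> roles it inherits from.
hierarchy = {"admin": ["clerk"], "clerk": ["guest"], "guest": []}
# Permissions assigned directly to each role.
direct = {"admin": {"delete"}, "clerk": {"write"}, "guest": {"read"}}

def effective(role):
    """Maximum privileges of a role: direct permissions plus all inherited."""
    perms = set(direct[role])
    for parent in hierarchy[role]:
        perms |= effective(parent)
    return perms

ALL_PERMS = {p for ps in direct.values() for p in ps}

for role in hierarchy:
    granted = effective(role)
    # Positive tests: each granted permission must be allowed;
    # negative tests: each remaining permission must be rejected.
    print(role, "allow:", sorted(granted), "deny:", sorted(ALL_PERMS - granted))

An MTBDD-based encoding serves the same purpose while keeping the policy compact and making each allow/deny evaluation a single diagram traversal.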
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to learning the most basic and essential computational concepts in computer science (CS). However, two of the most often heard complaints about such courses are that (1) they are divorced from the reality of application and (2) they make learning the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible; in general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale mixed-method study conducted during the fall 2009 semester of CS1 courses at Arizona State University. The study explored and assessed students' comprehension of three core computational concepts (abstraction, arrays of objects, and inheritance) in both algorithm design and problem solving. Through this investigation, student profiles were categorized based on their scores, and their mistakes were categorized into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through the learning and application of the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered.
Contributors: Billionniere, Elodie V (Author) / Collofello, James (Thesis advisor) / Ganesh, Tirupalavanam G. (Thesis advisor) / VanLehn, Kurt (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Machine learning tutorials often employ an application- and runtime-specific solution for a given problem, and users are expected to have a broad understanding of data analysis and software programming. This thesis focuses on designing and implementing a new, hands-on approach to teaching machine learning by streamlining the process of generating Inertial Measurement Unit (IMU) data from multirotor flight sessions, training a linear classifier, and applying that classifier to solve Multi-rotor Activity Recognition (MAR) problems in an online lab setting. MAR labs leverage cloud computing and data storage technologies to host a versatile environment capable of logging, orchestrating, and visualizing the solution to an MAR problem through a user interface. MAR labs extend Arizona State University's Visual IoT/Robotics Programming Language Environment (VIPLE) as a control platform for the multirotors used in data collection. VIPLE is a platform developed for teaching computational thinking, visual programming, Internet of Things (IoT), and robotics application development. As part of this education platform, this work also develops a 3D simulator capable of simulating the programmable behaviors of a robot within a maze environment and builds a physical quadrotor for use in MAR lab experiments.
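A hedged sketch of the core lab exercise follows, using scikit-learn; the feature choices, labels, and synthetic stand-in data are invented and do not reproduce the thesis's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def imu_features(imu, win=50):
    """Summarize raw 6-axis IMU samples (accelerometer + gyroscope)
    as per-axis means and standard deviations over fixed windows."""
    chunks = imu[: len(imu) // win * win].reshape(-1, win, 6)
    return np.hstack([chunks.mean(axis=1), chunks.std(axis=1)])

# Synthetic stand-in for two flight sessions: hovering produces low
# sensor variance, aggressive maneuvering produces high variance.
hover = rng.normal(0.0, 0.1, (1000, 6))
maneuver = rng.normal(0.0, 1.0, (1000, 6))
X = np.vstack([imu_features(hover), imu_features(maneuver)])
y = np.array([0] * 20 + [1] * 20)  # 0 = hover, 1 = maneuver

clf = LogisticRegression().fit(X, y)  # the linear classifier
print("training accuracy:", clf.score(X, y))

In the labs described above, the raw data would instead come from VIPLE-controlled multirotor flights, with the cloud environment handling logging and visualization.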
Contributors: De La Rosa, Matthew Lee (Author) / Chen, Yinong (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or disrupt processes and daily operations. One of the most important parts is being able to predict and foreshadow failures in the system, so that they are fixed before they turn into large issues. One specific area where preventive maintenance is a large part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time) or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is sufficient when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that: instead of a human looking at a car and diagnosing any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system designed to meet all these criteria and more. It comprises a central computer which monitors all major electronic components in an autonomous vehicle through standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component is compared to pre-set operating conditions, which are chosen to encompass all normal ranges of output. If the sensor data falls outside these margins, the warning and the deviation are recorded and a severity level is calculated. In addition to the individual component models, there is also a vehicle-wide model, which predicts how necessary maintenance is for the vehicle as a whole. All of these results are analyzed by a simple heuristic algorithm, and a decision about the vehicle's health status is sent to the Fleet Management System. This system allows accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that allows the system to determine maintenance needs. With it, human inspectors are no longer necessary for a fleet of autonomous vehicles; the Fleet Management System oversees inspections, and the system operator sets parameters that decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle; the models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used in a variety of vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
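A minimal sketch of the margin check and severity calculation described above follows; the sensor names, margins, and scoring rule are invented, whereas the project's actual models are empirical and vehicle-specific.

# Hypothetical operating margins per monitored component:
# (low, high) bounds chosen to encompass all normal output.
margins = {
    "battery_temp_c": (10, 45),
    "motor_current_a": (0, 30),
    "cpu_temp_c": (0, 80),
}

def severity(name, value):
    """0 inside the margin; otherwise the deviation scaled by the
    margin width, so wide-tolerance components alarm later."""
    lo, hi = margins[name]
    if lo <= value <= hi:
        return 0.0
    deviation = lo - value if value < lo else value - hi
    return deviation / (hi - lo)

readings = {"battery_temp_c": 52, "motor_current_a": 12, "cpu_temp_c": 91}
report = {name: severity(name, value) for name, value in readings.items()}
# Simple vehicle-wide heuristic: the worst single severity decides.
status = "needs maintenance" if max(report.values()) > 0.1 else "healthy"
print(report, "->", status)

The real system would report a status like this to the Fleet Management System rather than printing it.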
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper studies the change in social diversity and interaction space from the Classic to the Postclassic period in the Mimbres Valley and East Mimbres Area. Between the Classic and Postclassic periods, the Mimbres region of the American Southwest exhibits an increase in the diversity of ceramic wares. Previous research suggests that increased ceramic diversity indicates a more diverse community, which could pose challenges to local social interaction (Nelson et al. 2011). I am interested in whether the architecture of plazas, focal points of communities' social structures, changed in response to this growing social diversity. To examine this, I quantify the diversity of painted ceramics at Classic and Postclassic villages, as well as the extent to which plazas are enclosed. I find that there is a definite shift toward greater plaza enclosure between the Classic and Postclassic periods. I conclude with a discussion of possible interpretations of this trend regarding the social reactions of Mimbres communities to the changes that reshaped the region between the two periods.
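The abstract does not state which diversity measure is used; a common choice for ceramic assemblages is the Shannon index, sketched here in Python with invented sherd counts purely to illustrate the calculation.

from math import log

def shannon(counts):
    """Shannon diversity H = -sum(p * ln p) over ware proportions."""
    total = sum(counts)
    return -sum(c / total * log(c / total) for c in counts if c)

# Hypothetical painted-ware sherd counts at one village per period.
classic = [120, 15, 5]          # dominated by a single ware
postclassic = [60, 45, 30, 25]  # a more even mix of wares
print("Classic H:", round(shannon(classic), 2))
print("Postclassic H:", round(shannon(postclassic), 2))

A higher H for the Postclassic assemblage would register the increase in ware diversity that the paper links to greater plaza enclosure.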
Created: 2015-05
Description
The Mimbres tradition of southwestern New Mexico underwent what can be characterized as continuity with change, a form of non-collapse transformation, from the Late Pithouse (ca. 550-1000 CE) to the Classic (ca. 1000-1130 CE) period. The transition is characterized by large-scale shifts in housing, settlement patterns, pottery, and mortuary customs. The goal of this thesis is to evaluate changes in the intrasite and inter-site frequencies of selected nonlocal items in Mimbres burial contexts dating to the Late Pithouse and Classic periods. Because those living in the Mimbres region seem to have dramatically changed the ways in which they lived and expressed their social identities, it is reasonable to assume that their mortuary use of these high-value objects might also have transformed.
Contributors: Owen, Paige Elizabeth (Author) / Nelson, Ben (Thesis director) / Nelson, Margaret (Committee member) / Russell, Will (Committee member) / Barrett, The Honors College (Contributor) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / School of Human Evolution and Social Change (Contributor)
Created: 2013-12
Description
Smartphones have become increasingly common over the past few years, and mobile games continue to be the most common type of application (Apple, Inc., 2013). For many people the social aspect of gaming is very important, so most mobile games include support for playing with multiple players. However, there is little common knowledge about which implementation of this functionality is most favorable from a development standpoint. In this study, we evaluate three types of multiplayer gameplay (pass-and-play, Bluetooth, and GameCenter) via development cost and user interviews. We find that pass-and-play, the most easily implemented mode, is not favored by players due to its inconvenience. We also find that GameCenter is not as well liked as expected, owing to the latency of GameCenter's servers, and that Bluetooth multiplayer is the most favored for social play due to its similarity to real-life play. Although developing and testing Bluetooth and GameCenter multiplayer carries a large overhead because of Apple's development process, this overhead is moot since professional developers must enroll in that process anyway. Therefore, the most effective multiplayer mode to develop is determined mostly by whether Internet play is desirable: Bluetooth if not, GameCenter if so. Future studies involving more complete development work and more types of multiplayer modes could yield more conclusive results.
Contributors: Bradley, Michael Robert (Author) / Collofello, James (Thesis director) / Wilkerson, Kelly (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-12