Description
Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to the learning of the most basic and essential computational concepts in computer science (CS). However, two of the most often heard complaints about such courses are that (1) they are divorced from the reality of application and (2) they make the learning of the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible. In general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale mixed-method study conducted during the fall 2009 semester of CS1 courses at Arizona State University. The study explored and assessed students' comprehension of three core computational concepts - abstraction, arrays of objects, and inheritance - in both algorithm design and problem solving. Through this investigation, students' profiles were categorized based on their scores, and their mistakes were categorized into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through the learning and application of the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge skills, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered.
Contributors: Billionniere, Elodie V (Author) / Collofello, James (Thesis advisor) / Ganesh, Tirupalavanam G. (Thesis advisor) / VanLehn, Kurt (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This honors thesis utilizes smart home components and concepts from Dr. Burleson's Game as Life, Life as Game (GaLLaG) systems. The thesis focuses on an automated lifestyle, in which individuals utilize technology, such as door sensors, appliance and lamp modules, and system notifications, to assist in daily activities. The findings from our efforts to date indicate that, after weeks of observations, there is no evidence that automated lifestyles create more productive and healthy lifestyles or lead to overall satisfaction in life; however, there are certain design principles that would assist future home automation applications.
Contributors: Rosales, Justin Bart (Author) / Burleson, Winslow (Thesis director) / Walker, Erin (Committee member) / Hekler, Eric (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2013-05
Description
The complexity of the systems that software engineers build has continuously grown since the inception of the field. What has not changed is the engineers' mental capacity to operate on about seven distinct pieces of information at a time. The widespread use of UML has led to more abstract software design activities; however, the same cannot be said for reverse engineering activities. The introduction of abstraction to reverse engineering will allow engineers to move farther away from the details of the system, increasing their ability to see the role that domain-level concepts play in the system. In this thesis, we present a technique that facilitates filtering of classes from existing systems at the source level based on their relationship to concepts in the domain, via a classification method using machine learning. We showed that concepts can be identified using a machine learning classifier based on source-level metrics. We developed an Eclipse plugin to assist with the process of manually classifying Java source code and collecting metrics and classifications into a standard file format. We developed an Eclipse plugin to act as a concept identifier that visually indicates whether a class is a domain concept or not. We minimized the size of training sets to ensure a useful approach in practice. This allowed us to determine that a training set of 7.5% to 10% is nearly as effective as a training set representing 50% of the system. We showed that random selection is the most consistent and effective means of selecting a training set. We found that KNN is the most consistent performer among the learning algorithms tested. We determined the optimal feature set for this classification problem. We discussed two possible structures besides a one-to-one mapping of domain knowledge to implementation. We showed that classes representing more than one concept are simply concepts at differing levels of abstraction. We also discussed composite concepts, in which a domain concept is implemented by more than one class. We showed that these composite concepts are difficult to detect because the problem is NP-complete.
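The nearest-neighbor classification idea described above can be illustrated with a minimal sketch. Everything here is hypothetical: the metric vector (lines of code, number of fields, fan-in), the training labels, and the choice of k are stand-ins for illustration, not the thesis's actual feature set or configuration.

```python
import math

# Hypothetical source-level metrics per class: (lines_of_code, num_fields, fan_in).
# Labels: True = domain concept, False = implementation detail.
training = [
    ((120, 8, 15), True),   # e.g. an entity-like class
    ((90, 6, 12), True),
    ((300, 2, 3), False),   # e.g. a large utility class
    ((60, 1, 2), False),
    ((40, 0, 1), False),
]

def knn_classify(metrics, train, k=3):
    """Label a class by majority vote among its k nearest training examples."""
    dists = sorted((math.dist(metrics, m), label) for m, label in train)
    votes = [label for _, label in dists[:k]]
    return votes.count(True) > votes.count(False)

print(knn_classify((100, 7, 14), training))  # prints True
```

The same majority-vote scheme extends directly to richer source-level metric vectors and to the small (7.5% to 10%) randomly selected training sets the study found effective.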
Contributors: Carey, Maurice (Author) / Colbourn, Charles (Thesis advisor) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Biological systems are complex in many dimensions as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. The progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e. recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation will first discuss the landscape for modeling the biological system, explore the identification of targets for intervention in Boolean network models of biological interactions, and explore context specificity both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation will explore a spectrum of biological modeling with a goal towards therapeutic intervention, with both formal and informal notions of biological context, in such a way that will enable future work to have an even greater impact in terms of direct patient benefit on an individualized level.
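As a rough illustration of the kind of Boolean network model discussed above, the sketch below iterates a toy network until it reaches an attractor. The genes, update rules, and initial state are entirely invented for illustration and are not drawn from the dissertation; finding attractors like this is a common first step before asking which perturbations would steer the system toward a more desirable state.

```python
# A toy Boolean network: each gene's next state is a Boolean function
# of the current states. Genes and rules are purely illustrative.
def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {"A": b and not c,   # A activated by B, repressed by C
            "B": a or c,        # B activated by A or C
            "C": not a}         # C repressed by A

def attractor(state, max_steps=20):
    """Iterate until a previously seen state repeats; return the cycle."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]   # fixed point or limit cycle
        seen.append(state)
        state = step(state)
    return None

print(attractor({"A": True, "B": False, "C": False}))
```

Here the network settles into a two-state limit cycle; in an intervention analysis, one would compare attractors reachable under different node perturbations.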
Contributors: Verdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Laboratory automation systems have seen many technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive, and thread-safe. The thesis also proposes a framework that has been developed to implement the ideas proposed by the architecture. The framework is used to develop software that is scalable, distributed, responsive, and thread-safe. The framework currently has components to control commonly used laboratory automation devices, such as mechanical stages and cameras, and to perform common laboratory automation functions such as imaging.
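A minimal sketch of the asynchronous message-passing style described above, assuming nothing about the actual framework's API: each component owns a thread-safe inbox and processes messages on its own thread, so senders never block. The `Component` class and the camera handler are illustrative inventions.

```python
import queue
import threading

class Component(threading.Thread):
    """An autonomous component that communicates only via message passing."""
    def __init__(self, name, handler):
        super().__init__(daemon=True)
        self.name, self.handler = name, handler
        self.inbox = queue.Queue()   # thread-safe mailbox

    def send(self, msg):
        self.inbox.put(msg)          # asynchronous: caller does not block

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:          # shutdown sentinel
                break
            self.handler(msg)

results = []
camera = Component("camera", lambda msg: results.append(f"imaging {msg}"))
camera.start()
camera.send("well A1")
camera.send(None)
camera.join()
print(results)                       # prints ['imaging well A1']
```

Because components interact only through their inboxes, they can be moved to separate processes or machines by swapping the queue for a network transport without changing the handler logic.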
Contributors: Kuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e. minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that is comprised mostly of dynamic measurements but that also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
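The threshold-exceeding path analysis can be sketched on a toy control-flow graph. The block names, per-block dollar costs, and graph shape below are invented for illustration; the thesis's actual WCET-style analysis operates on real program control-flow graphs and pricing models.

```python
# Hypothetical control-flow graph: each node is a basic block with an
# estimated monetary cost (e.g. per-transaction compute + I/O charges);
# edges are possible successors. Costs and structure are illustrative.
costs = {"entry": 0.01, "query": 0.05, "cache": 0.002,
         "render": 0.02, "exit": 0.0}
succ = {"entry": ["query", "cache"], "query": ["render"],
        "cache": ["render"], "render": ["exit"], "exit": []}

def paths_over_threshold(node, threshold, spent=0.0, path=()):
    """Enumerate entry-to-exit paths whose accumulated cost exceeds threshold."""
    spent += costs[node]
    path += (node,)
    if not succ[node]:                      # exit block reached
        return [path] if spent > threshold else []
    hits = []
    for nxt in succ[node]:
        hits.extend(paths_over_threshold(nxt, threshold, spent, path))
    return hits

print(paths_over_threshold("entry", 0.05))
```

On this graph, only the path through the expensive `query` block exceeds the $0.05 threshold, which is exactly the kind of path section a developer would target for cost optimization.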
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Along with the many technologies introduced over the past few years, gesture-based human-computer interactions are becoming the new phase in encompassing the creativity and abilities of users to communicate and interact with devices. Because the nature of defining free-space gestures influences users' preferences and how long gesture-driven devices remain usable, it is necessary to define low-stress, intuitive gestures for users to interact with gesture recognition systems. To measure stress, a Galvanic Skin Response instrument was used as a primary indicator, which provided evidence of the relationship between stress and intuitive gestures, as well as user preferences towards certain tasks and gestures during performance. Fifteen participants engaged in creating and performing their own gestures for specified tasks that would be required during the use of free-space gesture-driven devices. The tasks included "activation of the display," scroll, page, selection, undo, and "return to main menu." Participants were also asked to repeat each of their gestures for around ten seconds, which gave them time and further insight into whether their gestures were appropriate for them and for the given task. Surveys were given to the users at two different times: one after they had defined their gestures and another after they had repeated their gestures. In the surveys, they ranked their gestures based on comfort, intuition, and ease of communication. From those user-ranked gestures, the highest-ranked gestures, being those rated most comfortable and intuitive, were chosen as health-efficient gestures.
Contributors: Lam, Christine (Author) / Walker, Erin (Thesis director) / Danielescu, Andreea (Committee member) / Barrett, The Honors College (Contributor) / Ira A. Fulton School of Engineering (Contributor) / School of Arts, Media and Engineering (Contributor) / Department of English (Contributor) / Computing and Informatics Program (Contributor)
Created: 2015-05
Description
Technological advances in the past decade alone are calling for modifications to the usability of various devices. Physical human interaction is becoming a popular method to communicate with user interfaces. This ranges from touch-based devices such as an iPad or tablet to free-space gesture systems such as the Microsoft Kinect. With the rise in popularity of these types of devices comes an increased presence of them in public areas. Public areas frequently use walk-up-and-use displays, which give many people the opportunity to interact with them. Walk-up-and-use displays are intended to be simple enough that any individual, regardless of experience with similar technology, will be able to successfully maneuver the system. While this should be easy enough for the people using it, it is a more complicated task for the designers, who are in charge of creating an interface simple enough to use while also accomplishing the tasks it was built to complete. A serious issue that I address in this thesis is how a system designer knows which gestures the interface should be programmed to successfully respond to. Gesture elicitation is one widely used method to discover common, intuitive gestures that can be used with public walk-up-and-use interactive displays. In this paper, I present a study to extract common intuitive gestures for various tasks, an analysis of the responses, and suggestions for future designs of interactive, public, walk-up-and-use displays.
Contributors: Van Horn, Sarah Elizabeth (Author) / Walker, Erin (Thesis director) / Danielescu, Andreea (Committee member) / Economics Program in CLAS (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description
A study was undertaken to examine and test the effectiveness of a self-experimentation model, guided by a mobile app called PACO, in helping college students improve behaviors associated with sleep. Thirteen participants were enrolled in this study, and their nightly sleep quality and sleep duration were measured via PACO as they underwent three conditions: a baseline non-intervention phase; an expert-developed intervention phase, in which pre-made intervention examples were provided and used in PACO; and a self-experimentation phase, during which users were invited to develop their own sleep-behavior interventions using PACO. The participants were randomly placed into three groups, and the points of transition between phases were staggered across five weeks according to a multiple baseline design. The hypothesis was that sleep duration and sleep quality (sleep satisfaction) would improve in the final self-experimentation phase compared to the expert-developed intervention phase and the baseline phase, as well as in the expert-developed intervention phase compared to the baseline phase. The results show little change, and nearly no improvement, in the outcome measures between phases, leaving us unable to support the hypothesis. However, several limitations considered in retrospect, such as the small sample size, the short study period, and technical difficulties with the PACO application, mean that no concrete conclusions should be drawn regarding the effectiveness of the self-experimentation model or the usability of PACO. Additional research should be directed toward user motivation and toward modes of teaching the underlying behavioral science principles to casual users in order to increase effectiveness.
Contributors: Nazareno, Alexandra Nicole (Author) / Hekler, Eric (Thesis director) / Walker, Erin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Digital technologies are quickly being combined with, and are replacing, teacher curriculums and student resource tools. This is particularly true of advances in digital textbooks, which provide a medium for opportunity and growth in the nature of the textbook as it pertains to students in the classroom. Although great strides have been made in intelligent tutoring systems personalized to a student's needs, there seems to be an overall disconnect in that classrooms are not utilizing or adopting these technologies. In this paper I present both conflicting and comparable needs of teachers and students surrounding the textbook, to reveal the costs and benefits associated with technology adoption. Through four teacher interviews and four participatory prototyping sessions, I found that students and teachers desire the following elements in technology: 1) collaboration, 2) synchronicity, 3) adaptivity, and 4) automation. I discuss the implications of implementing such features and how they could be applied in an integrated Q&A system to encourage collaborative learning.
Contributors: Rodriguez, James Paul (Author) / Walker, Erin (Thesis director) / Finn, Edward (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2014-05