Matching Items (54)
Description
Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to learning the most basic and essential computational concepts in computer science (CS). However, two of the most frequently heard complaints about such courses are that (1) they are divorced from the reality of application and (2) they make learning the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible. In general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale, mixed-method study conducted during the fall 2009 semester of CS1 courses at Arizona State University. The study explored and assessed students' comprehension of three core computational concepts - abstraction, arrays of objects, and inheritance - in both algorithm design and problem solving. Through this investigation, students' profiles were categorized based on their scores, and their mistakes were categorized into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through learning and applying the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge skills, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered.
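For readers unfamiliar with the three core concepts assessed in the study, the following minimal Java sketch illustrates abstraction, arrays of objects, and inheritance together; it is an invented classroom-style example, not material drawn from the dissertation or the ASU CS1 curriculum.

```java
// Illustrative example of the three assessed CS1 concepts; not from the dissertation.
abstract class Shape {                        // abstraction: a common interface with no fixed implementation
    abstract double area();
}

class Circle extends Shape {                  // inheritance: a Circle is a Shape
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {                  // inheritance: a Square is a Shape
    private final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

public class ShapesDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };   // an array of objects
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();                // dynamic dispatch selects the right area() method
        }
        System.out.println("Total area: " + total);
    }
}
```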
Contributors: Billionniere, Elodie V (Author) / Collofello, James (Thesis advisor) / Ganesh, Tirupalavanam G. (Thesis advisor) / VanLehn, Kurt (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The complexity of the systems that software engineers build has continuously grown since the inception of the field. What has not changed is the engineers' mental capacity to operate on about seven distinct pieces of information at a time. The widespread use of UML has led to more abstract software design activities; however, the same cannot be said for reverse engineering activities. The introduction of abstraction to reverse engineering will allow the engineer to move farther away from the details of the system, increasing his ability to see the role that domain-level concepts play in the system. In this thesis, we present a technique that facilitates filtering of classes from existing systems at the source level based on their relationship to concepts in the domain, via a classification method using machine learning. We showed that concepts can be identified using a machine learning classifier based on source-level metrics. We developed an Eclipse plugin to assist with the process of manually classifying Java source code and collecting metrics and classifications into a standard file format. We developed another Eclipse plugin to act as a concept identifier that visually indicates whether a class is a domain concept or not. We minimized the size of training sets to ensure a useful approach in practice. This allowed us to determine that a training set of 7.5% to 10% of the system is nearly as effective as a training set representing 50% of the system. We showed that random selection is the most consistent and effective means of selecting a training set. We found that KNN is the most consistent performer among the learning algorithms tested. We determined the optimal feature set for this classification problem. We discussed two possible structures besides a one-to-one mapping of domain knowledge to implementation. We showed that classes representing more than one concept are simply concepts at differing levels of abstraction. We also discussed composite concepts, in which a domain concept is implemented by more than one class, and showed that these composite concepts are difficult to detect because the underlying problem is NP-complete.
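To make the classification idea above concrete, here is a small k-nearest-neighbor sketch that votes over a few source-level metrics to flag classes as domain concepts. The metric set, the value of k, and the tiny training sample are hypothetical; the thesis's actual feature set, training-set selection, and Eclipse tooling are not reproduced here.

```java
import java.util.*;

/** Hypothetical sketch: KNN over source-level metrics to flag domain-concept classes. */
public class ConceptKnn {
    /** A class summarized by illustrative metrics, e.g. {methods, fields, fan-out, LOC}. */
    record ClassMetrics(String name, double[] features, boolean isConcept) {}

    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    /** Majority vote among the k nearest labeled classes. */
    static boolean classify(List<ClassMetrics> training, double[] query, int k) {
        return training.stream()
                .sorted(Comparator.comparingDouble(c -> distance(c.features(), query)))
                .limit(k)
                .filter(ClassMetrics::isConcept)
                .count() > k / 2;
    }

    public static void main(String[] args) {
        // Tiny hypothetical training set: feature vectors are {methods, fields, fan-out, LOC}.
        List<ClassMetrics> training = List.of(
                new ClassMetrics("Order",       new double[]{12, 6, 4, 220}, true),
                new ClassMetrics("Customer",    new double[]{10, 8, 3, 180}, true),
                new ClassMetrics("StringUtils", new double[]{25, 0, 1, 400}, false),
                new ClassMetrics("JdbcHelper",  new double[]{ 8, 2, 9, 150}, false));

        double[] candidate = {11, 7, 4, 200};   // metrics for an unlabeled class
        System.out.println("Domain concept? " + classify(training, candidate, 3));
    }
}
```

In practice the labeled sample would come from manual classification (as the thesis's first plugin supports), and the metrics would typically be normalized before computing distances.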
Contributors: Carey, Maurice (Author) / Colbourn, Charles (Thesis advisor) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Biological systems are complex in many dimensions, as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. Progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e. recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation first discusses the landscape for modeling the biological system, then explores the identification of targets for intervention in Boolean network models of biological interactions, and examines context specificity both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation explores a spectrum of biological modeling aimed at therapeutic intervention, with both formal and informal notions of biological context, in a way that enables future work to have an even greater impact in terms of direct, individualized patient benefit.
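For context on the Boolean network models mentioned above, here is a minimal sketch of synchronous Boolean network simulation that iterates a state until it re-enters a previously seen state, i.e. reaches an attractor. The three-gene network and its update rules are invented for illustration and are not taken from the dissertation.

```java
import java.util.*;

/** Minimal sketch of a synchronous Boolean gene regulatory network (invented rules). */
public class BooleanNetworkDemo {
    // State: gene i is ON (true) or OFF (false).
    // Hypothetical update rules: g0' = NOT g2, g1' = g0 AND g2, g2' = g0 OR g1
    static boolean[] step(boolean[] s) {
        return new boolean[]{ !s[2], s[0] && s[2], s[0] || s[1] };
    }

    public static void main(String[] args) {
        boolean[] state = {true, false, false};
        Set<String> seen = new LinkedHashSet<>();
        // Iterate until a state repeats: the repeating cycle is an attractor,
        // often interpreted as a stable cellular phenotype in this kind of model.
        while (seen.add(Arrays.toString(state))) {
            state = step(state);
        }
        seen.forEach(System.out::println);
        System.out.println("Attractor re-entered at: " + Arrays.toString(state));
    }
}
```

Intervention-target analyses of the kind described above typically ask which node or rule changes steer such a network away from undesirable attractors.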
Contributors: Verdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Laboratory automation systems have seen many technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive and thread-safe. The thesis also proposes a framework that has been developed to implement the ideas proposed by the architecture. The framework is used to develop software that is scalable, distributed, responsive and thread-safe. The framework currently has components to control commonly used laboratory automation devices, such as mechanical stages and cameras, and to perform common laboratory automation functions such as imaging.
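A minimal sketch of the message-passing style described above follows; it is not the thesis's framework, and the component names and message type are invented. Each component owns an inbox and a single worker thread, so callers never block on device operations.

```java
import java.util.concurrent.*;

/** Illustrative sketch of asynchronous, message-passing components (not the thesis framework). */
public class MessagePassingDemo {
    record Message(String command, String payload) {}

    /** A fairly autonomous component: one inbox, one worker thread. */
    static class Component implements AutoCloseable {
        private final BlockingQueue<Message> inbox = new LinkedBlockingQueue<>();
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        Component(String name) {
            worker.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    Message m = inbox.take();          // blocks only this component's own thread
                    System.out.println(name + " handling " + m.command() + " -> " + m.payload());
                }
                return null;                           // submitted as a Callable so take() may throw
            });
        }

        void send(Message m) { inbox.offer(m); }       // non-blocking for the caller

        @Override public void close() { worker.shutdownNow(); }
    }

    public static void main(String[] args) throws InterruptedException {
        try (Component stage = new Component("StageController");
             Component camera = new Component("CameraController")) {
            stage.send(new Message("moveTo", "x=10,y=25"));
            camera.send(new Message("capture", "exposure=50ms"));
            Thread.sleep(200);                         // give the workers time to drain their inboxes
        }
    }
}
```

Because each component processes its inbox sequentially, per-device operations are serialized (thread-safe) while different devices run concurrently and could sit behind the same message abstraction on different machines.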
Contributors: Kuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Single cell analysis has become increasingly important in understanding disease onset, progression, treatment and prognosis, especially when applied to cancer, where cellular responses are highly heterogeneous. Through the advent of single cell computerized tomography (Cell-CT), researchers and clinicians now have the ability to obtain high resolution three-dimensional (3D) reconstructions of single cells. Yet to date, no live-cell compatible version of the technology exists. In this thesis, a microfluidic chip with the ability to rotate live single cells in hydrodynamic microvortices about an axis parallel to the optical focal plane has been demonstrated. The chip utilizes a novel 3D microchamber design arranged beneath a main channel, creating flow detachment into the chamber and producing recirculating flow conditions. Single cells are flowed through the main channel, held in the center of the microvortex by an optical trap, and rotated by the forces induced by the recirculating fluid flow. Computational fluid dynamics (CFD) was employed to optimize the geometry of the microchamber. Two methods for the fabrication of the 3D microchamber were devised: anisotropic etching of silicon and backside diffuser photolithography (BDPL). First, the optimization of the silicon etching conditions was demonstrated through design of experiments (DOE). In addition, a non-conventional method of soft lithography was demonstrated, which incorporates the use of two positive molds, one of the main channel and the other of the microchambers, compressed together during replication to produce a single ultra-thin (<200 µm) negative used for device assembly. Second, methods for using thick negative photoresists such as SU-8 with BDPL were developed, including a new, simple and effective method for promoting the adhesion of SU-8 to glass. An assembly method that bonds two individual ultra-thin (<100 µm) replications of the channel and the microfeatures has also been demonstrated. Finally, a pressure-driven pumping system with nanoliter-per-minute flow rate regulation, sub-second response times, and <3% flow variability has been designed and characterized. The fabrication and assembly of this device is inexpensive and utilizes simple variants of conventional microfluidic fabrication techniques, making it easily accessible to the single cell analysis community.
Contributors: Myers, Jakrey R (Author) / Meldrum, Deirdre (Thesis advisor) / Johnson, Roger (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
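To make the static side of this idea concrete, the following sketch computes the worst-case monetary cost over a small acyclic control-flow graph and checks it against a budget. The per-block costs, the graph, and the budget are invented, and the thesis's actual WCET-style algorithm and path-section reporting are not reproduced here.

```java
import java.util.*;

/** Illustrative sketch: worst-case monetary cost over an acyclic control-flow graph. */
public class WorstCaseCostDemo {
    // Hypothetical per-block costs in dollars (e.g. priced API or storage calls in each block).
    static final Map<String, Double> COST = Map.of(
            "entry", 0.001, "readDb", 0.004, "cache", 0.0005,
            "compute", 0.002, "writeDb", 0.006, "exit", 0.001);

    // Hypothetical control-flow edges (acyclic; loops would need bounds, as in WCET analysis).
    static final Map<String, List<String>> SUCC = Map.of(
            "entry", List.of("readDb", "cache"),
            "readDb", List.of("compute"),
            "cache", List.of("compute"),
            "compute", List.of("writeDb", "exit"),
            "writeDb", List.of("exit"),
            "exit", List.of());

    /** Worst-case cost from a block to program exit, via memoized longest-path search. */
    static double worstCost(String block, Map<String, Double> memo) {
        Double cached = memo.get(block);
        if (cached != null) return cached;
        double maxSucc = 0.0;
        for (String s : SUCC.get(block)) {
            maxSucc = Math.max(maxSucc, worstCost(s, memo));
        }
        double worst = COST.get(block) + maxSucc;
        memo.put(block, worst);
        return worst;
    }

    public static void main(String[] args) {
        double budget = 0.012;                 // hypothetical per-transaction cost threshold
        double worst = worstCost("entry", new HashMap<>());
        System.out.printf("Worst-case cost per transaction: $%.4f%n", worst);
        System.out.println(worst > budget
                ? "At least one control-flow path can exceed the budget"
                : "All paths stay within the budget");
    }
}
```

The hybrid approach described in the abstract would replace some of these static per-block costs with measured values weighted by an operational profile.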
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Gamification is the process of adding game mechanics to non-game activities, thus creating a more engaging environment. Loyals provides a gamification API that can be consumed to add Loyals (achievements) to any website, application, or mobile app. Loyals are used in two major ways: (1) to create an interactive environment where users are rewarded for completing tasks and (2) as contextual information useful for analyzing user interaction with the application. The interactive environment inspires users to continue using an application, while the contextual information can be used for improving the application to draw in new loyal visitors, ad targeting, creating user profiles, and much more.
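The Loyals API itself is not documented in this abstract, so the following is a purely hypothetical sketch of what a minimal achievements ("Loyals") service might look like from the consuming application's side; every name and method here is invented for illustration.

```java
import java.util.*;

/** Purely hypothetical sketch of an achievements ("Loyals") service; not the actual Loyals API. */
public class LoyalsSketch {
    record Achievement(String id, String title, int points) {}

    /** Tracks which achievements each user has earned; a real API would persist this server-side. */
    static class AchievementService {
        private final Map<String, Achievement> catalog = new HashMap<>();
        private final Map<String, Set<String>> earnedByUser = new HashMap<>();

        void register(Achievement a) { catalog.put(a.id(), a); }

        /** Awards an achievement once per user; returns true only on the first award. */
        boolean award(String userId, String achievementId) {
            if (!catalog.containsKey(achievementId)) return false;
            return earnedByUser.computeIfAbsent(userId, u -> new HashSet<>()).add(achievementId);
        }

        /** The kind of contextual information the abstract mentions for analytics or ad targeting. */
        int totalPoints(String userId) {
            return earnedByUser.getOrDefault(userId, Set.of()).stream()
                    .mapToInt(id -> catalog.get(id).points()).sum();
        }
    }

    public static void main(String[] args) {
        AchievementService service = new AchievementService();
        service.register(new Achievement("first-login", "Welcome aboard", 10));
        System.out.println("Awarded: " + service.award("user-42", "first-login"));        // true
        System.out.println("Awarded again: " + service.award("user-42", "first-login"));  // false
        System.out.println("Points: " + service.totalPoints("user-42"));                  // 10
    }
}
```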
Contributors: Claxton, Joshua Allen (Author) / Chen, Yinong (Thesis director) / Collofello, James (Committee member) / Irwin, Don (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description
Smartphones have become increasingly common over the past few years, and mobile games continue to be the most common type of application (Apple, Inc., 2013). For many people, the social aspect of gaming is very important, and thus most mobile games include support for playing with multiple players. However, there is a lack of common knowledge about which implementation of this functionality is most favorable from a development standpoint. In this study, we evaluate three different types of multiplayer gameplay (pass-and-play, Bluetooth, and GameCenter) via development cost and user interviews. We find that pass-and-play, the most easily implemented mode, is not favored by players due to its inconvenience. We also find that GameCenter is not as well favored as expected due to the latency of GameCenter's servers, and that Bluetooth multiplayer is the most favored for social play due to its similarity to real-life play. Although there is a large overhead in developing and testing Bluetooth and GameCenter multiplayer due to Apple's development process, this overhead is largely irrelevant since professional developers must enroll in that process anyway. Therefore, the most effective multiplayer mode to develop is determined mostly by whether Internet play is desirable: Bluetooth if not, GameCenter if so. Future studies involving more complete development work and more types of multiplayer modes could yield more promising results.
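Of the three modes compared above, pass-and-play is the only one that can be sketched without any platform APIs, which is also why its development cost is the lowest: a single device simply rotates the active player. The turn manager below is an invented illustration, not code from the study's game.

```java
import java.util.*;

/** Illustrative pass-and-play turn handling on a single device (not the study's game code). */
public class PassAndPlayDemo {
    static class TurnManager {
        private final List<String> players;
        private int current = 0;

        TurnManager(List<String> players) { this.players = List.copyOf(players); }

        String currentPlayer() { return players.get(current); }

        /** After each move, the device is physically handed to the next player. */
        void endTurn() { current = (current + 1) % players.size(); }
    }

    public static void main(String[] args) {
        TurnManager game = new TurnManager(List.of("Alice", "Bob"));
        for (int move = 1; move <= 4; move++) {
            System.out.println("Move " + move + ": pass the device to " + game.currentPlayer());
            game.endTurn();
        }
    }
}
```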
Contributors: Bradley, Michael Robert (Author) / Collofello, James (Thesis director) / Wilkerson, Kelly (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-12
Description
A new honors class created at Arizona State University utilizes a new "thinking" paradigm. The new paradigm approaches problem solving using deductive logic and natural laws in place of the traditional acquisition and use of detailed knowledge. When utilizing deductive logic, less time is required for students to learn, and students are able to resolve unique issues with minimal amounts of information. Students use their logic and processing skills to replace the traditional need of collecting large amounts of detailed information. The concepts taught in the class come from the industry success of the Best Value (BV) approach developed by a leading research group at Arizona State University over the last 17 years. The research group identified that the source of the industry's problem is the traditional business approach of management, direction and control (MDC). With over 1,500 tests conducted, delivering $5.7B of services, and results showing a 30% decrease in cost, a 30% increase in value, and customer satisfaction improvements of up to 140%, the Best Value (BV) approach has been identified as more efficient and able to deliver better-quality services than the traditional MDC approach. Through the research group's implementation of the new paradigm in higher education, the author identified a windfall effect that gave students understanding and an increased ability to cope with stressful situations, disease, and extraordinary complications. It also exposed students to potentially harmful practices in their lives and has helped them to change. The K-12 study demonstrated the potential value of exposing K-12 students to the paradigm and the impact it may have on future professionals. The author's results include a satisfaction rating of 9.5 (out of 10), increased career alignment by up to 113%, increased understanding of self by up to 70%, and a reduction of stress by up to 71%. The author's K-12 case studies aligned with the successful results shown in the industry and college classes run by the leading research group. The pattern of the new paradigm shows that as resistance to it decreases, productivity, efficiency, processing speed, understanding, and effectiveness all increase.
Contributors: Rivera, Alfredo (Author) / Kashiwagi, Dean (Thesis director) / Collofello, James (Committee member) / Nelson, Margaret (Committee member) / Barrett, The Honors College (Contributor) / Department of Management (Contributor) / Del E. Webb Construction (Contributor)
Created: 2013-12
Description
Background
Grading schemes for breast cancer diagnosis are predominantly based on pathologists' qualitative assessment of altered nuclear structure from 2D brightfield microscopy images. However, cells are three-dimensional (3D) objects with features that are inherently 3D and thus poorly characterized in 2D. Our goal is to quantitatively characterize nuclear structure in 3D, assess its variation with malignancy, and investigate whether such variation correlates with standard nuclear grading criteria.
Methodology
We applied micro-optical computed tomographic imaging and automated 3D nuclear morphometry to quantify and compare morphological variations between human cell lines derived from normal, benign fibrocystic or malignant breast epithelium. To reproduce the appearance and contrast in clinical cytopathology images, we stained cells with hematoxylin and eosin and obtained 3D images of 150 individual stained cells of each cell type at sub-micron, isotropic resolution. Applying volumetric image analyses, we computed 42 3D morphological and textural descriptors of cellular and nuclear structure.
Principal Findings
We observed four distinct nuclear shape categories, the predominant being a mushroom cap shape. Cell and nuclear volumes increased from normal to fibrocystic to metastatic type, but there was little difference in the volume ratio of nucleus to cytoplasm (N/C ratio) between the lines. Abnormal cell nuclei had more nucleoli, markedly higher density and clumpier chromatin organization compared to normal. Nuclei of non-tumorigenic, fibrocystic cells exhibited larger textural variations than metastatic cell nuclei. At p<0.0025 by ANOVA and Kruskal-Wallis tests, 90% of our computed descriptors statistically differentiated control from abnormal cell populations, but only 69% of these features statistically differentiated the fibrocystic from the metastatic cell populations.
Conclusions
Our results provide a new perspective on nuclear structure variations associated with malignancy and point to the value of automated quantitative 3D nuclear morphometry as an objective tool to enable development of sensitive and specific nuclear grade classification in breast cancer diagnosis.
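As a small illustration of one of the simpler descriptors referenced above, the nucleus-to-cytoplasm (N/C) volume ratio can be computed directly from segmented voxel counts at isotropic resolution; the voxel counts and voxel size below are invented, and this is not the study's analysis pipeline.

```java
/** Illustrative computation of the nucleus-to-cytoplasm (N/C) volume ratio from segmented voxel counts. */
public class NcRatioDemo {
    public static void main(String[] args) {
        // Hypothetical counts from a segmented 3D cell image at isotropic resolution.
        long nucleusVoxels = 1_200_000;
        long cellVoxels = 4_800_000;            // whole cell, including the nucleus
        double voxelSideUm = 0.35;              // sub-micron, isotropic voxel size in micrometers

        double voxelVolume = Math.pow(voxelSideUm, 3);             // cubic micrometers per voxel
        double nucleusVolume = nucleusVoxels * voxelVolume;
        double cytoplasmVolume = (cellVoxels - nucleusVoxels) * voxelVolume;
        double ncRatio = nucleusVolume / cytoplasmVolume;          // N/C ratio as defined in the abstract

        System.out.printf("Nucleus: %.0f um^3, cytoplasm: %.0f um^3, N/C ratio: %.2f%n",
                nucleusVolume, cytoplasmVolume, ncRatio);
    }
}
```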
Created: 2012-01-05