Matching Items (131)
Description
Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to the learning of the most basic and essential computational concepts in computer science (CS). However, two of the most frequently heard complaints in such courses are that (1) they are divorced from the reality of application and (2) they make the learning of the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible. In general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale mixed-method study conducted during the fall 2009 semester of CS1 courses at Arizona State University. This study explored and assessed students' comprehension of three core computational concepts - abstraction, arrays of objects, and inheritance - in both algorithm design and problem solving. Through this investigation, students' profiles were categorized based on their scores, and their mistakes were categorized into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through the learning and application of the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge skills, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered.
Contributors: Billionniere, Elodie V (Author) / Collofello, James (Thesis advisor) / Ganesh, Tirupalavanam G. (Thesis advisor) / VanLehn, Kurt (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Pathogenic Gram-negative bacteria employ a variety of molecular mechanisms to combat host defenses. Two-component regulatory systems (TCR systems) are the most ubiquitous signal transduction systems, which regulate many genes required for the virulence and survival of bacteria. In this study, I analyzed different TCR systems in two clinically relevant Gram-negative bacteria: the oral pathogen Porphyromonas gingivalis and the enterobacterium Escherichia coli. P. gingivalis is a major causative agent of periodontal disease as well as systemic illnesses such as cardiovascular disease. A microarray study found that the putative PorY-PorX TCR system controls the secretion and maturation of virulence factors, as well as loci involved in the PorSS secretion system, which secretes proteinases, i.e., gingipains, responsible for periodontal disease. Proteomic analysis (SILAC) was used to improve the microarray data, reverse-transcription PCR to verify the proteomic data, and primer extension assays to determine the promoter regions of specific PorX-regulated loci. I was able to characterize multiple genetic loci regulated by this TCR system, many of which play an essential role in hemagglutination and host-cell adhesion, and likely contribute to virulence in this bacterium. Enteric Gram-negative bacteria must withstand many host defenses such as digestive enzymes, low pH, and antimicrobial peptides (AMPs). The CpxR-CpxA TCR system of E. coli has been extensively characterized and shown to be required for protection against AMPs. Most recently, this TCR system has been shown to up-regulate the rfe-rff operon, which encodes genes involved in the production of enterobacterial common antigen (ECA) and confers protection against a variety of AMPs. In this study, I utilized primer extension and DNase I footprinting to determine how CpxR regulates the ECA operon. My findings suggest that CpxR modulates transcription by directly binding to the rfe promoter. Multiple genetic and biochemical approaches were used to demonstrate that specific TCR systems contribute to regulation of virulence factors and resistance to host defenses in P. gingivalis and E. coli, respectively. Understanding these genetic circuits provides insight into strategies for pathogenesis and resistance to host defenses in Gram-negative bacterial pathogens. Finally, these data provide compelling potential molecular targets for therapeutics to treat P. gingivalis and E. coli infections.
Contributors: Leonetti, Cori (Author) / Shi, Yixin (Thesis advisor) / Stout, Valerie (Committee member) / Nickerson, Cheryl (Committee member) / Sandrin, Todd (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The study of bacterial resistance to antimicrobial peptides (AMPs) is a significant area of interest as these peptides have the potential to be developed into alternative drug therapies to combat microbial pathogens. AMPs represent a class of host-mediated factors that function to prevent microbial infection of their host and serve as a first line of defense. To date, over 1,000 AMPs of various natures have been predicted or experimentally characterized. Their potent bactericidal activities and broad-based target repertoire make them a promising next-generation pharmaceutical therapy to combat bacterial pathogens. It is important to understand the molecular mechanisms, both genetic and physiological, that bacteria employ to circumvent the bactericidal activities of AMPs. This understanding will allow researchers to overcome challenges in the development of new drug therapies, as well as identify, at a fundamental level, how bacteria are able to adapt and survive within varied host environments. Here, results are presented from the first reported large-scale, systematic screen in which the Keio collection of ~4,000 Escherichia coli deletion mutants were challenged against physiologically significant AMPs to identify genes required for resistance. Fewer than 3% of the genes on the E. coli chromosome were determined to contribute to bacterial resistance to at least one AMP analyzed in the screen. Further, the screen implicated a single cellular component (enterobacterial common antigen, ECA) and a single transporter system (twin-arginine transporter, Tat) as being required for resistance to each AMP class. Using antimicrobial resistance as a tool to identify novel genetic mechanisms, subsequent analyses were able to identify a two-component system, CpxR/CpxA, as a global regulator in bacterial resistance to AMPs. Multiple previously characterized CpxR/A members, as well as members found in this study, were identified in the screen. Notably, CpxR/A was found to transcriptionally regulate the gene cluster responsible for the biosynthesis of the ECA. Thus, a novel genetic mechanism was uncovered that directly correlates with a physiologically significant cellular component that appears to globally contribute to bacterial resistance to AMPs.
Contributors: Weatherspoon-Griffin, Natasha (Author) / Shi, Yixin (Thesis advisor) / Clark-Curtiss, Josephine (Committee member) / Misra, Rajeev (Committee member) / Nickerson, Cheryl (Committee member) / Stout, Valerie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The complexity of the systems that software engineers build has continuously grown since the inception of the field. What has not changed is the engineers' mental capacity to operate on about seven distinct pieces of information at a time. The widespread use of UML has led to more abstract software design activities; however, the same cannot be said for reverse engineering activities. The introduction of abstraction to reverse engineering will allow the engineer to move farther away from the details of the system, increasing his ability to see the role that domain-level concepts play in the system. In this thesis, we present a technique that facilitates filtering of classes from existing systems at the source level based on their relationship to concepts in the domain via a classification method using machine learning. We showed that concepts can be identified using a machine learning classifier based on source-level metrics. We developed an Eclipse plugin to assist with the process of manually classifying Java source code and collecting metrics and classifications into a standard file format. We developed an Eclipse plugin to act as a concept identifier that visually indicates a class as a domain concept or not. We minimized the size of training sets to ensure a useful approach in practice. This allowed us to determine that a training set of 7.5 to 10% is nearly as effective as a training set representing 50% of the system. We showed that random selection is the most consistent and effective means of selecting a training set. We found that KNN is the most consistent performer among the learning algorithms tested. We determined the optimal feature set for this classification problem. We discussed two possible structures besides a one-to-one mapping of domain knowledge to implementation. We showed that classes representing more than one concept are simply concepts at differing levels of abstraction. We also discussed composite concepts representing a domain concept implemented by more than one class. We showed that these composite concepts are difficult to detect because the problem is NP-complete.
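The classification approach described above can be illustrated with a minimal k-nearest-neighbor sketch. The metric names, values, and labels below are hypothetical placeholders, not the feature set or data from the dissertation; they only show how a small labeled training set can vote on whether an unlabeled class represents a domain concept.

```python
import math
from collections import Counter

# Each class in a system is represented by a vector of source-level metrics.
# The metrics here (lines of code, fan-in, number of methods) and the labels
# are illustrative placeholders, not data from the dissertation.
training = [
    # (metric vector, label) -- True means "represents a domain concept"
    ((450, 12, 30), True),
    ((300, 9, 22), True),
    ((60, 2, 5), False),
    ((25, 1, 3), False),
]

def is_domain_concept(metrics, k=3):
    """Classify a class by majority vote of its k nearest neighbors
    (Euclidean distance) in the labeled training set."""
    neighbors = sorted((math.dist(metrics, m), label) for m, label in training)
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# An unlabeled class whose metrics resemble the concept-bearing classes.
print(is_domain_concept((280, 8, 18)))  # expected: True
```

In practice the feature vectors would come from the metrics collected by the Eclipse plugin, and only a small fraction of the system (on the order of 7.5 to 10%) would be labeled by hand as the training set.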
Contributors: Carey, Maurice (Author) / Colbourn, Charles (Thesis advisor) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Biological systems are complex in many dimensions as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. The progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e. recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation will first discuss the landscape for modeling the biological system, explore the identification of targets for intervention in Boolean network models of biological interactions, and explore context specificity both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation will explore a spectrum of biological modeling with a goal towards therapeutic intervention, with both formal and informal notions of biological context, in such a way that will enable future work to have an even greater impact in terms of direct patient benefit on an individualized level.
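As a rough illustration of the kind of Boolean network modeling discussed above, the sketch below simulates a toy three-gene network and enumerates the attractor reached from each initial state. The genes, update rules, and notion of intervention are invented for illustration and are not taken from the dissertation.

```python
from itertools import product

# A toy three-gene Boolean network; the update rules are invented.
def step(state):
    a, b, c = state
    return (
        b and not c,   # gene A is activated by B and repressed by C
        a,             # gene B follows A
        a or c,        # gene C stays on once A has turned it on
    )

def attractor(state, max_steps=64):
    """Iterate the synchronous update until a previously seen state recurs,
    returning the cycle (attractor) that the trajectory falls into."""
    seen = []
    while state not in seen and len(seen) < max_steps:
        seen.append(state)
        state = step(state)
    return seen[seen.index(state):]

# Enumerate the attractor reached from every initial state; a candidate
# intervention target is a node whose forced value changes this landscape.
for init in product((False, True), repeat=3):
    print(init, "->", attractor(init))
```

Comparing the attractor landscape with and without a node held at a fixed value is one simple way to reason about candidate intervention targets in models of this kind.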
Contributors: Verdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Laboratory automation systems have seen a lot of technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards are targeted to a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive and thread-safe. The thesis also presents a framework that has been developed to implement the ideas of the architecture. The framework is used to develop software that is scalable, distributed, responsive and thread-safe. The framework currently has components to control commonly used laboratory automation devices, such as mechanical stages and cameras, and to perform common laboratory automation functions such as imaging.
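The message-passing idea behind such an architecture can be sketched as follows. The component names, device types, and message formats are hypothetical; the abstract does not specify the framework's actual APIs.

```python
import queue
import threading

# A minimal sketch of loosely coupled components that communicate only by
# posting messages to each other's queues, so no component blocks another.
class Component(threading.Thread):
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.inbox = queue.Queue()

    def send(self, target, message):
        target.inbox.put((self, message))

    def run(self):
        while True:
            sender, message = self.inbox.get()
            self.handle(sender, message)

    def handle(self, sender, message):
        raise NotImplementedError

# Hypothetical devices; the real framework's components are not named above.
class Stage(Component):
    def handle(self, sender, message):
        if message[0] == "move":
            print(f"{self.name}: moving to {message[1]}")
            self.send(sender, ("moved", message[1]))

class Camera(Component):
    def handle(self, sender, message):
        if message[0] == "moved":
            print(f"{self.name}: imaging position {message[1]}")

stage, camera = Stage("stage"), Camera("camera")
stage.start(); camera.start()
camera.send(stage, ("move", (10, 20)))  # asynchronous request; nothing blocks
threading.Event().wait(0.5)             # give the worker threads time to run
```

Because each component processes its own inbox on its own thread and never calls into another component directly, the system stays responsive and avoids shared mutable state, which is the kind of thread-safety the architecture aims for.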
Contributors: Kuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Single cell analysis has become increasingly important in understanding disease onset, progression, treatment and prognosis, especially when applied to cancer where cellular responses are highly heterogeneous. Through the advent of single cell computerized tomography (Cell-CT), researchers and clinicians now have the ability to obtain high resolution three-dimensional (3D) reconstructions of single cells. Yet to date, no live-cell compatible version of the technology exists. In this thesis, a microfluidic chip with the ability to rotate live single cells in hydrodynamic microvortices about an axis parallel to the optical focal plane has been demonstrated. The chip utilizes a novel 3D microchamber design arranged beneath a main channel creating flow detachment into the chamber, producing recirculating flow conditions. Single cells are flowed through the main channel, held in the center of the microvortex by an optical trap, and rotated by the forces induced by the recirculating fluid flow. Computational fluid dynamics (CFD) was employed to optimize the geometry of the microchamber. Two methods for the fabrication of the 3D microchamber were devised: anisotropic etching of silicon and backside diffuser photolithography (BDPL). First, the optimization of the silicon etching conditions was demonstrated through design of experiment (DOE). In addition, a non-conventional method of soft-lithography was demonstrated which incorporates the use of two positive molds, one of the main channel and the other of the microchambers, compressed together during replication to produce a single ultra-thin (<200 µm) negative used for device assembly. Second, methods for using thick negative photoresists such as SU-8 with BDPL have been developed which include a new simple and effective method for promoting the adhesion of SU-8 to glass. An assembly method that bonds two individual ultra-thin (<100 µm) replications of the channel and the microfeatures has also been demonstrated. Finally, a pressure driven pumping system with nanoliter per minute flow rate regulation, sub-second response times, and < 3% flow variability has been designed and characterized. The fabrication and assembly of this device is inexpensive and utilizes simple variants of conventional microfluidic fabrication techniques, making it easily accessible to the single cell analysis community.
Contributors: Myers, Jakrey R (Author) / Meldrum, Deirdre (Thesis advisor) / Johnson, Roger (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that is composed mostly of dynamic measurements but that also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
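A minimal sketch of the static side of this kind of analysis is shown below: a WCET-style maximum-cost computation over an acyclic control-flow graph, plus enumeration of paths that exceed a cost threshold. The graph, per-block dollar costs, and function names are hypothetical inputs for illustration, not the algorithm or cost model from the dissertation.

```python
# Worst-case monetary cost over a small acyclic control-flow graph.
# Per-block costs (e.g., priced storage or API calls) are invented.
from functools import lru_cache

block_cost = {"entry": 0.0, "read": 0.002, "compute": 0.0005,
              "write": 0.004, "exit": 0.0}
successors = {"entry": ["read"], "read": ["compute", "write"],
              "compute": ["write"], "write": ["exit"], "exit": []}

@lru_cache(maxsize=None)
def worst_case_cost(block):
    """Maximum cost of any path from `block` to the exit node."""
    if not successors[block]:
        return block_cost[block]
    return block_cost[block] + max(worst_case_cost(s) for s in successors[block])

def paths_exceeding(threshold, block="entry", prefix=(), cost=0.0):
    """Enumerate control-flow paths whose accumulated cost exceeds the threshold."""
    cost += block_cost[block]
    prefix += (block,)
    if not successors[block]:
        if cost > threshold:
            yield prefix, cost
        return
    for s in successors[block]:
        yield from paths_exceeding(threshold, s, prefix, cost)

print("worst case:", worst_case_cost("entry"))
for path, cost in paths_exceeding(0.005):
    print(" -> ".join(path), f"${cost:.4f}")
```

A hybrid variant, as the abstract describes, would replace some of these statically assumed per-block costs with measurements taken from operational profiles.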
Contributors: Buell, Kevin, Ph.D (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Hepatitis C virus (HCV) causes a globally prevalent infection that is a main contributor to the global burden of liver disease. Due to the virus's ability to establish a chronic infection, and the limited usefulness of traditional neutralizing-antibody vaccine design in producing a protective immune response, a preventative vaccine has been notoriously difficult to develop. To overcome this, a vaccine that targets non-structural protein 3 (NS3) to elicit a T cell-specific immune response is thought to be a possible strategy for protection against hepatitis C infection. In this paper, a recombinant strain of measles virus (MV) that expresses HCV NS3 protein was analyzed. Analysis of the replication fitness of this recombinant virus indicates that the construct replicates at a higher rate than the parental measles strain. Western blot analysis of protein expression and immunofluorescence also demonstrate that this recombinant virus expresses both the inserted HCV NS3 protein and native measles proteins.
Contributors: Woell, Dana Marie (Author) / Reyes del Valle, Jorge (Thesis director) / Nickerson, Cheryl (Committee member) / Julik, Emily (Committee member) / Barrett, The Honors College (Contributor) / Department of Chemistry and Biochemistry (Contributor) / School of Human Evolution and Social Change (Contributor)
Created: 2015-05
Description
Clean water for drinking, food preparation, and bathing is essential for astronaut health and safety during long-duration habitation of the International Space Station (ISS), including future missions to Mars. Despite stringent water treatment and recycling efforts on the ISS, it is impossible to completely prevent microbial contamination of onboard water supplies. In this work, we used a spaceflight analogue culture system to better understand how the microgravity environment can influence the pathogenesis-related characteristics of Burkholderia cepacia complex (Bcc), an opportunistic pathogen previously recovered from the ISS water system. The results of the present study suggest that there may be important differences in how this pathogen can respond and adapt to spaceflight and other low fluid shear environments encountered during its natural life cycle. Future studies are aimed at understanding the underlying mechanisms responsible for these phenotypes.
Contributors: Kang, Bianca Younseon (Author) / Nickerson, Cheryl (Thesis director) / Barrila, Jennifer (Committee member) / Ott, Mark (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05