This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 21 - 30 of 39

Description

The U.S. scientific research community does not reflect America's diversity. Hispanics, African Americans, and Native Americans made up 31% of the general population in 2010, but they represented only 18% and 7% of science, technology, engineering, and mathematics (STEM) bachelor's and doctoral degrees, respectively, and 6% of STEM faculty members (National Science Foundation [NSF], 2013). Equity in the scientific research community is important for a variety of reasons: a diverse community of researchers can minimize the negative influence of bias in scientific reasoning, because people from different backgrounds approach a problem from different perspectives and can raise awareness regarding biases (Intemann, 2009). Additionally, by failing to be attentive to equity, we may exclude some of the best and brightest scientific minds and limit the pool of possible scientists (Intemann, 2009). Given this need for equity, how can our scientific research community become more inclusive?

Contributors: Bangera, Gita (Author) / Brownell, Sara (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-12-01

Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and to address challenges inherent in analyzing immunosignatures. These methods include exploratory and confirmatory factor analysis, classical significance testing, structural equation modeling, and mixture modeling.

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for screening and presymptomatic screening of disease. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for screening and presymptomatic screening of disease as well as antibody discovery.
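As a concrete illustration of the workflow described above, the sketch below simulates a small peptide-intensity matrix, extracts latent factors with exploratory factor analysis, and classifies samples by disease status from the factor scores. It is a minimal sketch under assumed data shapes and scikit-learn stand-ins, not the authors' actual analysis pipeline.

```python
# Minimal sketch, not the authors' pipeline: simulated immunosignature-like data,
# exploratory factor analysis, and classification from latent factor scores.
# All dimensions, factor counts, and library choices are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_peptides, n_factors = 60, 500, 3

# Simulate latent antibody-reactivity factors and peptide loadings, then build
# a noisy peptide-intensity matrix (samples x peptides).
scores = rng.normal(size=(n_samples, n_factors))
loadings = rng.normal(size=(n_factors, n_peptides))
intensities = scores @ loadings + rng.normal(scale=0.5, size=(n_samples, n_peptides))

# For illustration, let disease status be driven by the first latent factor.
status = (scores[:, 0] > 0).astype(int)

# Exploratory factor analysis recovers latent factors underlying the array data...
fa = FactorAnalysis(n_components=n_factors, random_state=0)
factor_scores = fa.fit_transform(intensities)

# ...and those factor scores are then used to classify samples by disease status.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, factor_scores, status, cv=5).mean()
print(f"Cross-validated accuracy from latent factors: {accuracy:.2f}")
```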

Contributors: Brown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created: 2011-08-19

Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody, and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and then classifying test samples using a small number of those features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic assumptions is independence, both at the probe level and at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways, where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies, and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.
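To make the conclusion concrete, the sketch below applies a Gaussian Naïve Bayes classifier to simulated peptide-intensity data; its per-feature conditional-independence assumption is what keeps it fast and stable when features vastly outnumber samples. The data, feature counts, and scikit-learn classifier are illustrative assumptions, not the study's datasets or its 17-algorithm comparison.

```python
# Minimal sketch, not the study's analysis: Gaussian Naive Bayes applied to
# simulated peptide-intensity features. Shapes and library choices are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, n_peptides = 40, 1000

# Two simulated classes whose mean binding intensities differ on a small subset
# of "informative" peptides, mimicking a disease-specific immunosignature.
healthy = rng.normal(size=(n_per_class, n_peptides))
disease = rng.normal(size=(n_per_class, n_peptides))
disease[:, :50] += 1.0

X = np.vstack([healthy, disease])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Naive Bayes treats each peptide as conditionally independent given the class,
# which makes it fast and robust with thousands of features and few samples.
accuracy = cross_val_score(GaussianNB(), X, y, cv=5).mean()
print(f"Cross-validated accuracy: {accuracy:.2f}")
```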

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21

Description

The rise in antibiotic resistance has led to an increased research focus on the discovery of new antibacterial candidates. While broad-spectrum antibiotics are widely pursued, there is evidence that resistance arises in part from the widespread use of these antibiotics. Our group has developed a system to produce protein affinity agents, called synbodies, which have high affinity and specificity for their target. In this report, we describe the adaptation of this system to produce new antibacterial candidates towards a target bacterium. The system functions by screening target bacteria against an array of 10,000 random-sequence peptides; using a combination of membrane labeling and intracellular dyes, we identified peptides with target-specific binding or killing functions. Binding and lytic peptides were identified in this manner, and in vitro tests confirmed the activity of the lead peptides. A peptide with antibacterial activity was linked to a peptide that specifically binds Staphylococcus aureus to create a synbody with increased antibacterial activity. Subsequent tests showed that this peptide could block S. aureus-induced killing of HEK293 cells in a co-culture experiment. These results demonstrate the feasibility of using the synbody system to discover new antibacterial candidate agents.

Contributors: Domenyuk, Valeriy (Author) / Loskutov, Andrey (Author) / Johnston, Stephen (Author) / Diehnelt, Chris (Author) / Biodesign Institute (Contributor)
Created: 2013-01-23

Description

Integrating research experiences into undergraduate life sciences curricula in the form of course-based undergraduate research experiences (CUREs) can meet national calls for education reform by giving students the chance to “do science.” In this article, we provide a step-by-step practical guide to help instructors assess their CUREs using best practices in assessment. We recommend that instructors first identify their anticipated CURE learning outcomes, then identify an assessment instrument that aligns with those learning outcomes, and finally critically evaluate the results from their course assessment. To make instructors aware of what instruments have been developed, we have also synthesized a table of “off-the-shelf” assessment instruments that instructors could use to assess their own CUREs. However, we acknowledge that each CURE is unique and that instructors may expect specific learning outcomes that cannot be assessed using existing instruments, so we recommend that instructors consider developing their own assessments that are tightly aligned to the context of their CURE.

Contributors: Shortlidge, Erin (Author) / Brownell, Sara (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2016-12

Description

As gesture interfaces become more mainstream, it is increasingly important to investigate the behavioral characteristics of these interactions, particularly in three-dimensional (3D) space. In this study, Fitts' method was extended to such input technologies, and the applicability of Fitts' law to gesture-based interactions was examined. The experiment included three gesture-based input devices that use different techniques to capture user movement and compared them to conventional input technologies such as the touchscreen and mouse. Participants completed a target-acquisition test in which they were instructed to move a cursor from a home location to a spherical target as quickly and accurately as possible. Three distances and three target sizes were tested six times in a randomized order for all input devices. A total of 81 participants completed all tasks. Movement time, error rate, and throughput were calculated for each input technology. Results showed that mean movement time was highly correlated with the target's index of difficulty for all devices, providing evidence that Fitts' law can be extended and applied to gesture-based devices. Throughput was significantly lower for the gesture-based devices than for the mouse and touchscreen, and as the index of difficulty increased, movement time increased significantly more for the gesture technologies. Error counts were statistically higher for all gesture-based input technologies than for the mouse. In addition, error counts for all inputs were highly correlated with target width, whereas movement distance had little effect. Overall, the findings suggest that gesture-based devices can be characterized by Fitts' law in a similar fashion to conventional 1D or 2D devices.
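For readers unfamiliar with the quantities mentioned above, the sketch below computes the index of difficulty, fits the Fitts' law regression, and derives throughput from example data. The Shannon formulation of the index of difficulty and all numbers shown are assumptions for illustration, not values reported in the study.

```python
# Hedged illustration of the Fitts' law quantities discussed in the abstract.
# The Shannon formulation and the sample values below are assumptions, not study data.
import numpy as np

# Example target distances D, target widths W (arbitrary units), and mean movement times (s).
D = np.array([10.0, 20.0, 40.0, 10.0, 20.0, 40.0])
W = np.array([2.0, 2.0, 2.0, 4.0, 4.0, 4.0])
MT = np.array([0.55, 0.72, 0.90, 0.42, 0.58, 0.75])

# Index of difficulty (bits), Shannon formulation: ID = log2(D / W + 1).
ID = np.log2(D / W + 1.0)

# Fitts' law: MT = a + b * ID. Fit intercept a and slope b by least squares,
# and check how strongly movement time correlates with difficulty.
b, a = np.polyfit(ID, MT, 1)
r = np.corrcoef(ID, MT)[0, 1]

# Throughput (bits/s) summarizes a device's speed-accuracy performance.
throughput = np.mean(ID / MT)

print(f"a = {a:.3f} s, b = {b:.3f} s/bit, r = {r:.3f}, throughput = {throughput:.2f} bits/s")
```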

Contributors: Burno, Rachael A. (Author) / Wu, Bing (Author) / Doherty, Rina (Author) / Colett, Hannah (Author) / Elnaggar, Rania (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-10-23

Description

Summer bridge programs are designed to help transition students into the college learning environment. Increasingly, bridge programs are being developed in science, technology, engineering, and mathematics (STEM) disciplines because of the rigorous content and lower student persistence in college STEM compared with other disciplines. However, to our knowledge, a comprehensive review of STEM summer bridge programs does not exist. To provide a resource for bridge program developers, we conducted a systematic review of the literature on STEM summer bridge programs. We identified 46 reports on 30 unique STEM bridge programs published over the past 25 years. In this review, we report the goals of each bridge program and whether the program was successful in meeting those goals. We identify 14 distinct bridge program goals that can be organized into three categories: academic success goals, psychosocial goals, and department-level goals. Building on the findings of the published reports, we present a set of recommendations for STEM bridge programs in the hope of developing better bridges into college.

Contributors: Ashley, Michael (Author) / Cooper, Katelyn (Author) / Cala, Jacqueline (Author) / Brownell, Sara (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2017-12-01

Description

Load-associated fatigue cracking is one of the major distress types occurring in flexible pavements. The flexural bending beam fatigue laboratory test has been used for several decades and is considered an integral part of the Superpave advanced characterization procedure. One of the most significant ways to extend the fatigue life of an asphalt mixture is to add sustainable materials such as rubber or polymers to the mixture. A laboratory testing program was performed on three gap-graded mixtures: unmodified, Asphalt Rubber (AR), and polymer-modified. Strain-controlled fatigue tests were conducted according to the AASHTO T321 procedure. The results from the beam fatigue tests indicated that the AR and polymer-modified gap-graded mixtures would have much longer fatigue lives compared to the reference (unmodified) mixture. In addition, a mechanistic analysis using the 3D-Move software, coupled with a cost-effectiveness analysis based on the fatigue performance of the three mixtures, was performed. Overall, the analysis showed that the AR and polymer-modified asphalt mixtures exhibited significantly higher cost-effectiveness compared to the unmodified HMA mixture. Although AR and polymer modification increase the cost of the material, the analysis showed that they are more cost-effective than the unmodified mixture.
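The closing point, that a costlier modified mixture can still be more cost-effective, can be made concrete with a toy calculation in which cost-effectiveness is taken as fatigue cycles sustained per dollar of material. Every number below is hypothetical, not a result from the study.

```python
# Toy illustration of the cost-effectiveness idea: a mixture with a higher unit cost
# can still win if its fatigue life is proportionally longer. All values are hypothetical.
mixtures = {
    # name: (fatigue life in load cycles, material cost in $ per ton) - assumed values
    "unmodified":       (2.0e5, 60.0),
    "asphalt_rubber":   (1.2e6, 75.0),
    "polymer_modified": (9.0e5, 72.0),
}

for name, (cycles, cost) in mixtures.items():
    # Cost-effectiveness here: fatigue cycles sustained per dollar of material.
    effectiveness = cycles / cost
    print(f"{name:>16}: {effectiveness:,.0f} cycles per dollar")
```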

Contributors: Souliman, Mena I. (Author) / Mamlouk, Michael (Author) / Eifert, Annie (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20

Description

The principles of a new project management model have been tested for the past 20 years. This model utilizes expertise instead of traditional management, direction, and control (MDC); it is a leadership-based model rather than a management model. Practicing the new model requires a change in paradigm and in project management structure. Practices of this new paradigm include minimizing the flow of information and communications to and from the project manager (including meetings, emails, and documents), eliminating technical communications, reducing client management, direction, and control of the vendor, and hiring vendors or personnel to do specific tasks. A vendor is hired only after they have clearly shown that they know what they are doing by demonstrating past performance on similar projects, that they understand how to create transparency to minimize risk they do not control, and that they can clearly outline their project plan using a detailed milestone schedule covering time, cost, and tasks, all communicated in the language of metrics.

Contributors: Rivera, Alfredo (Author) / Kashiwagi, Dean (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20

Description

For the past three decades, the Saudi construction industry (SCI) has exhibited poor performance. Many research efforts have tried to identify the problem and its potential causes, but there have been few publications identifying ways to mitigate the problem and describing testing to validate the proposed solution. This paper examines the research and development (R&D) approach in the SCI. A literature review was performed to identify the impact that R&D has had on the SCI, and a questionnaire was created to survey industry professionals and researchers. The results show evidence that SCI practice and academic research exist in separate silos. This study recommends a change of mindset in both the public and private sectors regarding R&D, since cooperation is required to create collaboration between the two sectors and improve the competitiveness of the country's economy.

Contributors: Alhammadi, Yasir (Author) / Algahtany, Mohammed (Author) / Kashiwagi, Dean (Author) / Sullivan, Kenneth (Author) / Kashiwagi, Jacob (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20