This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 71
Description

Critical flicker fusion thresholds (CFFTs) describe when quick amplitude modulations of a light source become undetectable as the frequency of the modulation increases and are thought to underlie a number of visual processing skills, including reading. Here, we compare the impact of two vision-training approaches, one involving contrast sensitivity training and the other directional dot-motion training, compared to an active control group trained on Sudoku. The three training paradigms were compared on their effectiveness for altering CFFT. Directional dot-motion and contrast sensitivity training resulted in significant improvement in CFFT, while the Sudoku group did not yield significant improvement. This finding indicates that dot-motion and contrast sensitivity training similarly transfer to effect changes in CFFT. The results, combined with prior research linking CFFT to high-order cognitive processes such as reading ability, and studies showing positive impact of both dot-motion and contrast sensitivity training in reading, provide a possible mechanistic link of how these different training approaches impact reading abilities.

ContributorsZhou, Tianyou (Author) / Nanez, Jose (Author) / Zimmerman, Daniel (Author) / Holloway, Steven (Author) / Seitz, Aaron (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created2016-10-26
Description

Although autism spectrum disorder (ASD) is a serious lifelong condition, its underlying neural mechanism remains unclear. Recently, neuroimaging-based classifiers for ASD and typically developed (TD) individuals were developed to identify the abnormality of functional connections (FCs). Due to over-fitting and interferential effects of varying measurement conditions and demographic distributions, no classifiers have been strictly validated for independent cohorts. Here we overcome these difficulties by developing a novel machine-learning algorithm that identifies a small number of FCs that separates ASD versus TD. The classifier achieves high accuracy for a Japanese discovery cohort and demonstrates a remarkable degree of generalization for two independent validation cohorts in the USA and Japan. The developed ASD classifier does not distinguish individuals with major depressive disorder and attention-deficit hyperactivity disorder from their controls but moderately distinguishes patients with schizophrenia from their controls. The results leave open the viable possibility of exploring neuroimaging-based dimensions quantifying the multiple-disorder spectrum.

ContributorsYahata, Noriaki (Author) / Morimoto, Jun (Author) / Hashimoto, Ryuichiro (Author) / Lisi, Giuseppe (Author) / Shibata, Kazuhisa (Author) / Kawakubo, Yuki (Author) / Kuwabara, Hitoshi (Author) / Kuroda, Miho (Author) / Yamada, Takashi (Author) / Megumi, Fukuda (Author) / Imamizu, Hiroshi (Author) / Nanez, Jose (Author) / Takahashi, Hidehiko (Author) / Okamoto, Yasumasa (Author) / Kasai, Kiyoto (Author) / Kato, Nobumasa (Author) / Sasaki, Yuka (Author) / Watanabe, Takeo (Author) / Kawato, Mitsuo (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created2016-04-14
Description

MicroRNAs (miRNAs) are short non-coding RNAs that regulate gene output at the post-transcriptional level by targeting degenerate elements primarily in 3′untranslated regions (3′UTRs) of mRNAs. Individual miRNAs can regulate networks of hundreds of genes, yet for the majority of miRNAs few, if any, targets are known. Misexpression of miRNAs is also a major contributor to cancer progression, thus there is a critical need to validate miRNA targets in high-throughput to understand miRNAs' contribution to tumorigenesis. Here we introduce a novel high-throughput assay to detect miRNA targets in 3′UTRs, called Luminescent Identification of Functional Elements in 3′UTRs (3′LIFE). We demonstrate the feasibility of 3′LIFE using a data set of 275 human 3′UTRs and two cancer-relevant miRNAs, let-7c and miR-10b, and compare our results to alternative methods to detect miRNA targets throughout the genome. We identify a large number of novel gene targets for these miRNAs, with only 32% of hits being bioinformatically predicted and 27% directed by non-canonical interactions. Functional analysis of target genes reveals consistent roles for each miRNA as either a tumor suppressor (let-7c) or oncogenic miRNA (miR-10b), and shows that each preferentially targets multiple genes within regulatory networks, suggesting that 3′LIFE is a rapid and sensitive method to detect miRNA targets in high-throughput.

ContributorsWolter, Justin (Author) / Kotagama, Kasuen (Author) / Pierre-Bez, Alexandra C. (Author) / Firago, Mari (Author) / Mangone, Marco (Author) / College of Liberal Arts and Sciences (Contributor)
Created2014-09-29
Description

Perchloroethylene (PCE) is a highly utilized solvent in the dry cleaning industry because of its cleaning effectiveness and relatively low cost to consumers. According to the 2006 U.S. Census, approximately 28,000 dry cleaning operations used PCE as their principal cleaning agent. Widespread use of PCE is problematic because of its adverse impacts on human health and environmental quality. As PCE use is curtailed, effective alternatives must be analyzed for their toxicity and impacts to human health and the environment. Potential alternatives to PCE in dry cleaning include dipropylene glycol n-butyl ether (DPnB) and dipropylene glycol tert-butyl ether (DPtB), both of which promise to pose relatively smaller risks. To evaluate these two alternatives to PCE, we established and scored performance criteria, including chemical toxicity, employee and customer exposure levels, impacts on the general population, costs of each system, and cleaning efficacy. The scores received for PCE were 5, 5, 3, 5, 3, and 3, respectively, and DPnB and DPtB scored 3, 1, 2, 2, 4, and 4, respectively. An aggregate sum of the performance criteria yielded a favorably low score of “16” for both DPnB and DPtB compared to “24” for PCE. We conclude that DPnB and DPtB are preferable dry cleaning agents, exhibiting reduced human toxicity and a lesser adverse impact on human health and the environment compared to PCE, with comparable capital investments, and moderately higher annual operating costs.
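The aggregate scoring described in this abstract is a simple sum of the six per-criterion scores, where a lower total indicates a preferable cleaning agent. A minimal sketch of that arithmetic, using only the score lists reported above (the dictionary layout is illustrative, not the study's own data structure):

```python
# Per-criterion scores as reported in the abstract; lower is better.
scores = {
    "PCE":  [5, 5, 3, 5, 3, 3],
    "DPnB": [3, 1, 2, 2, 4, 4],
    "DPtB": [3, 1, 2, 2, 4, 4],
}

def aggregate(criterion_scores):
    """Sum the per-criterion scores into a single comparison value."""
    return sum(criterion_scores)

totals = {agent: aggregate(vals) for agent, vals in scores.items()}
preferred = min(totals, key=totals.get)
```

Summing reproduces the abstract's aggregates of 24 for PCE and 16 for both alternatives, with DPnB/DPtB emerging as the lower-scoring (preferable) agents.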

ContributorsHesari, Nikou (Author) / Francis, Chelsea (Author) / Halden, Rolf (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2014-04-03
Description

A meta-analysis was conducted to inform the epistemology, or theory of knowledge, of contaminants of emerging concern (CECs). The CEC terminology acknowledges the existence of harmful environmental agents whose identities, occurrences, hazards, and effects are not sufficiently understood. Here, data on publishing activity were analyzed for 12 CECs, revealing a common pattern of emergence, suitable for identifying past years of peak concern and forecasting future ones: dichlorodiphenyltrichloroethane (DDT; 1972, 2008), trichloroacetic acid (TCAA; 1972, 2009), nitrosodimethylamine (1984), methyl tert-butyl ether (2001), trichloroethylene (2005), perchlorate (2006), 1,4-dioxane (2009), prions (2009), triclocarban (2010), triclosan (2012), nanomaterials (by 2016), and microplastics (2022 ± 4). CECs were found to emerge from obscurity to the height of concern in 14.1 ± 3.6 years, and subside to a new baseline level of concern in 14.5 ± 4.5 years. CECs can emerge more than once (e.g., TCAA, DDT) and the multifactorial process of emergence may be driven by inception of novel scientific methods (e.g., ion chromatography, mass spectrometry and nanometrology), scientific paradigm shifts (discovery of infectious proteins), and the development, marketing and mass consumption of novel products (antimicrobial personal care products, microplastics and nanomaterials). Publishing activity and U.S. regulatory actions were correlated for several CECs investigated.
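The year of peak concern for each contaminant is identified from its annual publication counts. A minimal sketch of that lookup, under the assumption that peak concern corresponds to the year of maximum publishing activity (the counts below are hypothetical placeholders, not the study's data; the abstract reports only the resulting peak years, e.g., 2008 for DDT):

```python
# Hypothetical annual publication counts for one CEC (illustrative only).
pub_counts = {
    2004: 3, 2005: 7, 2006: 15, 2007: 28, 2008: 41,
    2009: 36, 2010: 30, 2011: 22, 2012: 18,
}

# Peak concern = year with the highest publishing activity.
peak_year = max(pub_counts, key=pub_counts.get)
```

Applying this to each contaminant's publication time series yields the per-CEC peak years listed in the abstract.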

ContributorsHalden, Rolf (Author) / Biodesign Institute (Contributor)
Created2015-01-23
Description

Nanoscale zero-valent iron (nZVI) is a strong nonspecific reducing agent that is used for in situ degradation of chlorinated solvents and other oxidized pollutants. However, there are significant concerns regarding the risks posed by the deliberate release of engineered nanomaterials into the environment, which have triggered moratoria, for example, in the United Kingdom. This critical review focuses on the effect of nZVI injection on subsurface microbial communities, which are of interest due to their important role in contaminant attenuation processes. Corrosion of ZVI stimulates dehalorespiring bacteria, due to the production of H2 that can serve as an electron donor for reduction of chlorinated contaminants. Conversely, laboratory studies show that nZVI can be inhibitory to pure bacterial cultures, although toxicity is reduced when nZVI is coated with polyelectrolytes or natural organic matter. The emerging toolkit of molecular biological analyses should enable a more sophisticated assessment of combined nZVI/biostimulation or bioaugmentation approaches. While further research on the consequences of its application for subsurface microbial communities is needed, nZVI continues to hold promise as an innovative technology for in situ remediation of pollutants. It is particularly attractive for the remediation of subsurface environments containing chlorinated ethenes because of its ability to potentially elicit and sustain both physical–chemical and biological removal, despite its documented antimicrobial properties.

ContributorsBruton, Thomas (Author) / Pycke, Benny (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created2015-06-03
Description

This essay uses census data from the eighteenth century to examine the leadership role of caciques in the Guaraní missions. Cacique succession between 1735 and 1759 confirms that the position of cacique transitioned from the Guaraníes’ flexible interpretation of hereditary succession to the Jesuits’ rigid idea of primogeniture (father to eldest son) succession. This essay argues that scholars overstate the caciques’ leadership role in the Guaraní missions. Adherence to primogeniture did not take into account a candidate's leadership qualities, and thus, some caciques functioned as placeholders for organizing the mission population and calculating tribute and not as active leaders. An assortment of other Guaraní leadership positions compensated for this weakness by providing both access to leadership roles for non-caciques who possessed leadership qualities but not the proper bloodline and additional leadership opportunities for more capable caciques. By taking into account leadership qualities and not just descent, these positions provided flexibility and reflected continuity with pre-contact Guaraní ideas about leadership.

Created2013-11-30
Description

Faced with numerous seemingly intractable social and environmental challenges, many scholars and practitioners are increasingly interested in understanding how to actively engage and transform the existing systems holding such problems in place. Although a variety of analytical models have emerged in recent years, most emphasize either the social or ecological elements of such transformations rather than their coupled nature. To address this, first we have presented a definition of the core elements of a social-ecological system (SES) that could potentially be altered in a transformation. Second, we drew on insights about transformation from three branches of literature focused on radical change, i.e., social movements, socio-technical transitions, and social innovation, and gave consideration to the similarities and differences with the current studies by resilience scholars. Drawing on these findings, we have proposed a framework that outlines the process and phases of transformative change in an SES. Future research will be able to utilize the framework as a tool for analyzing the alteration of social-ecological feedbacks, identifying critical barriers and leverage points and assessing the outcome of social-ecological transformations.

ContributorsMoore, Michele-Lee (Author) / Tjornbo, Ola (Author) / Enfors, Elin (Author) / Knapp, Corrie (Author) / Hodbod, Jennifer (Author) / Baggio, Jacopo (Author) / Norstrom, Albert (Author) / Olsson, Per (Author) / Biggs, Duan (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created2013-11-30
Description

Widespread contamination of groundwater by chlorinated ethenes and their biological dechlorination products necessitates the reliable monitoring of liquid matrices; current methods approved by the U.S. Environmental Protection Agency (EPA) require a minimum of 5 mL of sample volume and cannot simultaneously detect all transformative products. This paper reports on the simultaneous detection of six chlorinated ethenes and ethene itself, using a liquid sample volume of 1 mL by concentrating the compounds onto an 85-µm carboxen-polydimethylsiloxane solid-phase microextraction fiber in 5 min and subsequent chromatographic analysis in 9.15 min. Linear increases in signal response were obtained over three orders of magnitude (∼0.05 to ∼50 µM) for simultaneous analysis with coefficient of determination (R2) values of ≥ 0.99. The detection limits of the method (1.3–6 µg/L) were at or below the maximum contaminant levels specified by the EPA. Matrix spike studies with groundwater and mineral medium showed recovery rates between 79–108%. The utility of the method was demonstrated in lab-scale sediment flow-through columns assessing the bioremediation potential of chlorinated ethene-contaminated groundwater. Owing to its low sample volume requirements, good sensitivity and broad target analyte range, the method is suitable for routine compliance monitoring and is particularly attractive for interpreting the bench-scale feasibility studies that are commonly performed during the remedial design stage of groundwater cleanup projects.
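The linearity criterion reported above (R2 ≥ 0.99 across three orders of magnitude) is the coefficient of determination of a least-squares calibration fit. A minimal sketch of that check, using hypothetical concentration/response pairs spanning roughly the ∼0.05 to ∼50 µM range (the data points are illustrative, not from the study):

```python
def r_squared(xs, ys):
    """Coefficient of determination of a least-squares line fit to (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical calibration points: concentration (µM) vs. detector signal.
conc = [0.05, 0.5, 5.0, 50.0]
resp = [12.0, 118.0, 1190.0, 11900.0]
linear_enough = r_squared(conc, resp) >= 0.99  # acceptance criterion
```

A calibration curve passing this check is considered linear over the tested range; in practice one would fit log-spaced standards bracketing the expected sample concentrations.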

ContributorsZiv-El, Michal (Author) / Kalinowski, Tomasz (Author) / Krajmalnik-Brown, Rosa (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created2014-02-01
Description

Aquaculture production has nearly tripled in the last two decades, bringing with it a significant increase in the use of antibiotics. Using liquid chromatography/tandem mass spectrometry (LC–MS/MS), the presence of 47 antibiotics was investigated in U.S. purchased shrimp, salmon, catfish, trout, tilapia, and swai originating from 11 different countries. All samples (n = 27) complied with U.S. FDA regulations and five antibiotics were detected above the limits of detection: oxytetracycline (in wild shrimp, 7.7 ng/g of fresh weight; farmed tilapia, 2.7; farmed salmon, 8.6; farmed trout with spinal deformities, 3.9), 4-epioxytetracycline (farmed salmon, 4.1), sulfadimethoxine (farmed shrimp, 0.3), ormetoprim (farmed salmon, 0.5), and virginiamycin (farmed salmon marketed as antibiotic-free, 5.2). A literature review showed that sub-regulatory levels of antibiotics, as found here, can promote resistance development; publications linking aquaculture to this have increased more than 8-fold from 1991 to 2013. Although this study was limited in size and employed sample pooling, it represents the largest reconnaissance of antibiotics in U.S. seafood to date, providing data on previously unmonitored antibiotics and on farmed trout with spinal deformities. Results indicate low levels of antibiotic residues and general compliance with U.S. regulations. The potential for development of microbial drug resistance was identified as a key concern and research priority.

ContributorsDone, Hansa (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created2015-01-23