This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Five immunocompetent C57BL/6-cBrd/cBrd/Cr (albino C57BL/6) mice were injected with GL261-luc2 cells, a cell line sharing characteristics of human glioblastoma multiforme (GBM). The mice were imaged using magnetic resonance (MR) at five separate time points to characterize growth and development of the tumor. After 25 days, the final tumor volumes of the mice varied from 12 mm³ to 62 mm³, even though mice were inoculated from the same tumor cell line under carefully controlled conditions. We generated hypotheses to explore the large variance in final tumor size and tested them with our simple reaction-diffusion model in both a 3-dimensional (3D) finite difference method and a 2-dimensional (2D) level set method. The parameters obtained from a best-fit procedure, designed to yield simulated tumors as close as possible to the observed ones, vary by an order of magnitude between the three mice analyzed in detail. These differences may reflect morphological and biological variability in tumor growth, as well as errors in the mathematical model, perhaps from an oversimplification of the tumor dynamics or nonidentifiability of parameters. Our fitted parameters match other experimental in vitro and in vivo measurements. Additionally, we calculate wave speed, which matches other rat and human measurements.
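
A minimal sketch of the kind of simulation such a reaction-diffusion model entails, assuming the common Fisher-KPP form u_t = D∇²u + ρu(1 − u), whose traveling-wave speed is 2√(Dρ); the grid, parameters, and initial seed are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

# 2D Fisher-KPP reaction-diffusion sketch via explicit finite differences.
# D (diffusion, mm^2/day) and rho (proliferation, 1/day) are hypothetical.
D, rho = 0.05, 0.5
n, h, dt, steps = 100, 0.1, 0.001, 5000        # grid points, spacing (mm), time step (day)

u = np.zeros((n, n))
u[n//2 - 2:n//2 + 2, n//2 - 2:n//2 + 2] = 0.5  # small initial tumor seed

for _ in range(steps):
    # five-point Laplacian; periodic boundaries via np.roll, for brevity
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    u += dt * (D * lap + rho * u * (1 - u))

print("area above half-max density:", (u > 0.5).sum() * h**2, "mm^2")
print("theoretical wave speed:", 2 * (D * rho) ** 0.5, "mm/day")
```

The explicit scheme is stable here because dt·D/h² ≪ 1; a best-fit procedure like the one described would wrap such a solver in an optimization loop over (D, ρ).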

Contributors: Rutter, Erica (Author) / Stepien, Tracy (Author) / Anderies, Barrett (Author) / Plasencia, Jonathan (Author) / Woolf, Eric C. (Author) / Scheck, Adrienne C. (Author) / Turner, Gregory H. (Author) / Liu, Qingwei (Author) / Frakes, David (Author) / Kodibagkar, Vikram (Author) / Kuang, Yang (Author) / Preul, Mark C. (Author) / Kostelich, Eric (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2017-05-31
Description

Critical flicker fusion thresholds (CFFTs) describe the point at which rapid amplitude modulations of a light source become undetectable as the modulation frequency increases, and they are thought to underlie a number of visual processing skills, including reading. Here, we compare the impact of two vision-training approaches, one involving contrast sensitivity training and the other directional dot-motion training, against an active control group trained on Sudoku. The three training paradigms were compared on their effectiveness for altering CFFT. Directional dot-motion and contrast sensitivity training resulted in significant improvement in CFFT, while Sudoku training did not. This finding indicates that dot-motion and contrast sensitivity training transfer similarly, effecting changes in CFFT. These results, combined with prior research linking CFFT to higher-order cognitive processes such as reading ability, and with studies showing a positive impact of both dot-motion and contrast sensitivity training on reading, suggest a possible mechanistic link between these different training approaches and reading ability.

Contributors: Zhou, Tianyou (Author) / Nanez, Jose (Author) / Zimmerman, Daniel (Author) / Holloway, Steven (Author) / Seitz, Aaron (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2016-10-26
Description

Although autism spectrum disorder (ASD) is a serious lifelong condition, its underlying neural mechanism remains unclear. Recently, neuroimaging-based classifiers for ASD and typically developed (TD) individuals were developed to identify the abnormality of functional connections (FCs). Due to over-fitting and the interferential effects of varying measurement conditions and demographic distributions, no classifiers have been strictly validated for independent cohorts. Here we overcome these difficulties by developing a novel machine-learning algorithm that identifies a small number of FCs that separate ASD from TD. The classifier achieves high accuracy for a Japanese discovery cohort and demonstrates a remarkable degree of generalization for two independent validation cohorts in the USA and Japan. The developed ASD classifier does not distinguish individuals with major depressive disorder and attention-deficit hyperactivity disorder from their controls but moderately distinguishes patients with schizophrenia from their controls. The results leave open the viable possibility of exploring neuroimaging-based dimensions quantifying the multiple-disorder spectrum.
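
The abstract does not specify the algorithm, but the core idea of isolating a small discriminative subset of FCs can be sketched with an L1-penalized (sparse) logistic regression; the data below are synthetic and all dimensions are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 180 subjects x 9,730 functional connections; labels 0 = TD, 1 = ASD.
rng = np.random.default_rng(0)
X = rng.standard_normal((180, 9730))
y = rng.integers(0, 2, size=180)

# The L1 penalty drives most FC weights to exactly zero, leaving a small
# discriminative subset; smaller C prunes more aggressively.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
clf.fit(X, y)
print(f"FCs retained: {np.count_nonzero(clf.coef_)} of {X.shape[1]}")
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```

Generalization of the kind reported would be assessed by fitting on the discovery cohort and scoring the frozen model on the independent validation cohorts.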

Contributors: Yahata, Noriaki (Author) / Morimoto, Jun (Author) / Hashimoto, Ryuichiro (Author) / Lisi, Giuseppe (Author) / Shibata, Kazuhisa (Author) / Kawakubo, Yuki (Author) / Kuwabara, Hitoshi (Author) / Kuroda, Miho (Author) / Yamada, Takashi (Author) / Megumi, Fukuda (Author) / Imamizu, Hiroshi (Author) / Nanez, Jose (Author) / Takahashi, Hidehiko (Author) / Okamoto, Yasumasa (Author) / Kasai, Kiyoto (Author) / Kato, Nobumasa (Author) / Sasaki, Yuka (Author) / Watanabe, Takeo (Author) / Kawato, Mitsuo (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2016-04-14
Description

Gompertz’s empirical equation remains the most popular one for describing cancer cell population growth in a wide spectrum of biomedical situations due to its good fit to data and its simplicity. Many efforts documented in the literature have aimed at understanding the mechanisms that may support Gompertz’s elegant model equation. One of the most convincing efforts was carried out by Gyllenberg and Webb. They divide the cancer cell population into proliferative cells and quiescent cells. In their two-dimensional model, dead cells are assumed to be removed from the tumor instantly. In this paper, we modify their model by keeping track of the dead cells remaining in the tumor. We perform mathematical and computational studies on this three-dimensional model and compare its dynamics to those of the Gyllenberg–Webb model. Our mathematical findings suggest that if an avascular tumor grows according to our three-compartment model, then as the death rate of quiescent cells decreases to zero, the percentage of proliferative cells also approaches zero. Moreover, a slowly dying quiescent population will increase the size of the tumor. On the other hand, while the tumor size does not depend on the dead cell removal rate, its early and intermediate growth stages are very sensitive to it.
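
A sketch of the three-compartment structure just described: proliferative cells P, quiescent cells Q, and dead cells D that remain in the tumor and are removed at rate delta. The transition-rate functions and constants below are assumed placeholder forms, not the paper's:

```python
from scipy.integrate import solve_ivp

# Hypothetical rate constants (1/day): birth, death of P, death of Q, dead-cell removal.
b, muP, muQ, delta = 1.0, 0.1, 0.05, 0.2

def rhs(t, y):
    P, Q, D = y
    N = P + Q + D                      # total tumor size includes retained dead cells
    r_out = b * N / (1 + N)            # P -> Q, rises with crowding (assumed form)
    r_in = 0.05 / (1 + N)              # Q -> P, falls with crowding (assumed form)
    dP = (b - muP - r_out) * P + r_in * Q
    dQ = r_out * P - (r_in + muQ) * Q
    dD = muP * P + muQ * Q - delta * D
    return [dP, dQ, dD]

sol = solve_ivp(rhs, (0, 100), [0.01, 0.0, 0.0])
P, Q, D = sol.y[:, -1]
print(f"final size {P + Q + D:.2f}, proliferative fraction {P / (P + Q + D):.3f}")
```

Varying muQ and delta in such a sketch is the computational analogue of the sensitivity findings quoted above.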

Contributors: Alzahrani, E. O. (Author) / Asiri, Asim (Author) / El-Dessoky, M. M. (Author) / Kuang, Yang (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-08-01
Description

Perchloroethylene (PCE) is a highly utilized solvent in the dry cleaning industry because of its cleaning effectiveness and relatively low cost to consumers. According to the 2006 U.S. Census, approximately 28,000 dry cleaning operations used PCE as their principal cleaning agent. Widespread use of PCE is problematic because of its adverse impacts on human health and environmental quality. As PCE use is curtailed, effective alternatives must be analyzed for their toxicity and impacts to human health and the environment. Potential alternatives to PCE in dry cleaning include dipropylene glycol n-butyl ether (DPnB) and dipropylene glycol tert-butyl ether (DPtB), both promising to pose a smaller risk. To evaluate these two alternatives to PCE, we established and scored performance criteria, including chemical toxicity, employee and customer exposure levels, impacts on the general population, costs of each system, and cleaning efficacy. The scores received for PCE were 5, 5, 3, 5, 3, and 3, respectively, and DPnB and DPtB scored 3, 1, 2, 2, 4, and 4, respectively. An aggregate sum of the performance criteria yielded a favorably low score of “16” for both DPnB and DPtB compared to “24” for PCE. We conclude that DPnB and DPtB are preferable dry cleaning agents, exhibiting reduced human toxicity and a lesser adverse impact on human health and the environment compared to PCE, with comparable capital investments and moderately higher annual operating costs.
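
The aggregate comparison quoted above is a plain unweighted sum of the six criterion scores (lower is better); a few lines make the arithmetic explicit:

```python
# Six criterion scores in the order listed above (lower = better performance).
criteria_scores = {
    "PCE":  [5, 5, 3, 5, 3, 3],
    "DPnB": [3, 1, 2, 2, 4, 4],
    "DPtB": [3, 1, 2, 2, 4, 4],
}
for solvent, scores in criteria_scores.items():
    print(solvent, "->", sum(scores))   # PCE -> 24, DPnB and DPtB -> 16
```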

Contributors: Hesari, Nikou (Author) / Francis, Chelsea (Author) / Halden, Rolf (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-04-03
Description

A meta-analysis was conducted to inform the epistemology, or theory of knowledge, of contaminants of emerging concern (CECs). The CEC terminology acknowledges the existence of harmful environmental agents whose identities, occurrences, hazards, and effects are not sufficiently understood. Here, data on publishing activity were analyzed for 12 CECs, revealing a common pattern of emergence, suitable for identifying past years of peak concern and forecasting future ones: dichlorodiphenyltrichloroethane (DDT; 1972, 2008), trichloroacetic acid (TCAA; 1972, 2009), nitrosodimethylamine (1984), methyl tert-butyl ether (2001), trichloroethylene (2005), perchlorate (2006), 1,4-dioxane (2009), prions (2009), triclocarban (2010), triclosan (2012), nanomaterials (by 2016), and microplastics (2022 ± 4). CECs were found to emerge from obscurity to the height of concern in 14.1 ± 3.6 years, and subside to a new baseline level of concern in 14.5 ± 4.5 years. CECs can emerge more than once (e.g., TCAA, DDT) and the multifactorial process of emergence may be driven by inception of novel scientific methods (e.g., ion chromatography, mass spectrometry and nanometrology), scientific paradigm shifts (discovery of infectious proteins), and the development, marketing and mass consumption of novel products (antimicrobial personal care products, microplastics and nanomaterials). Publishing activity and U.S. regulatory actions were correlated for several CECs investigated.
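
One simple way to operationalize a "year of peak concern" from annual publication counts is to fit a bell-shaped emergence curve; the sketch below assumes a Gaussian form and uses synthetic counts, not the study's bibliometric data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic annual publication counts peaking near 2005 (stand-in for real data).
years = np.arange(1990.0, 2015.0)
rng = np.random.default_rng(1)
counts = 120 * np.exp(-((years - 2005) ** 2) / (2 * 4.0 ** 2)) + rng.poisson(3, years.size)

def gauss(t, a, mu, sigma, b):
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2)) + b

(a, mu, sigma, b), _ = curve_fit(gauss, years, counts, p0=[100, 2004, 5, 1])
print(f"estimated peak-concern year: {mu:.0f}; rise/fall timescale ~{2 * sigma:.1f} y")
```

Fitting one curve per compound is what makes rise times (14.1 ± 3.6 y) and subsidence times (14.5 ± 4.5 y) comparable across CECs.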

Contributors: Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-01-23
Description

Nanoscale zero-valent iron (nZVI) is a strong nonspecific reducing agent that is used for in situ degradation of chlorinated solvents and other oxidized pollutants. However, there are significant concerns regarding the risks posed by the deliberate release of engineered nanomaterials into the environment, which have triggered moratoria, for example, in the United Kingdom. This critical review focuses on the effect of nZVI injection on subsurface microbial communities, which are of interest due to their important role in contaminant attenuation processes. Corrosion of ZVI stimulates dehalorespiring bacteria, due to the production of H2 that can serve as an electron donor for the reduction of chlorinated contaminants. Conversely, laboratory studies show that nZVI can be inhibitory to pure bacterial cultures, although toxicity is reduced when nZVI is coated with polyelectrolytes or natural organic matter. The emerging toolkit of molecular biological analyses should enable a more sophisticated assessment of combined nZVI/biostimulation or bioaugmentation approaches. While further research on the consequences of its application for subsurface microbial communities is needed, nZVI continues to hold promise as an innovative technology for in situ remediation of pollutants. It is particularly attractive for the remediation of subsurface environments containing chlorinated ethenes because of its ability to potentially elicit and sustain both physical–chemical and biological removal, despite its documented antimicrobial properties.
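
The H2 invoked above as an electron donor arises from the well-known anaerobic corrosion of zero-valent iron:

```latex
\mathrm{Fe^{0}} + 2\,\mathrm{H_{2}O} \longrightarrow \mathrm{Fe^{2+}} + \mathrm{H_{2}} + 2\,\mathrm{OH^{-}}
```

Dehalorespiring bacteria can then use this H2 to reductively dechlorinate the contaminants, which is the biostimulation pathway the review weighs against nZVI's antimicrobial effects.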

Contributors: Bruton, Thomas (Author) / Pycke, Benny (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-06-03
Description

This essay uses census data from the eighteenth century to examine the leadership role of caciques in the Guaraní missions. Cacique succession between 1735 and 1759 confirms that the position of cacique transitioned from the Guaraníes’ flexible interpretation of hereditary succession to the Jesuits’ rigid idea of primogeniture (father to eldest son) succession. This essay argues that scholars overstate the caciques’ leadership role in the Guaraní missions. Adherence to primogeniture succession did not take into account a candidate's leadership qualities, and thus, some caciques functioned as placeholders for organizing the mission population and calculating tribute rather than as active leaders. An assortment of other Guaraní leadership positions compensated for this weakness by providing both access to leadership roles for non-caciques who possessed leadership qualities but not the proper bloodline and additional leadership opportunities for more capable caciques. By taking into account leadership qualities and not just descent, these positions provided flexibility and reflected continuity with pre-contact Guaraní ideas about leadership.

Created: 2013-11-30
Description

Widespread contamination of groundwater by chlorinated ethenes and their biological dechlorination products necessitates the reliable monitoring of liquid matrices; current methods approved by the U.S. Environmental Protection Agency (EPA) require a minimum of 5 mL of sample volume and cannot simultaneously detect all transformative products. This paper reports on the simultaneous detection of six chlorinated ethenes and ethene itself, using a liquid sample volume of 1 mL, by concentrating the compounds onto an 85-µm carboxen-polydimethylsiloxane solid-phase microextraction fiber in 5 min and subsequent chromatographic analysis in 9.15 min. Linear increases in signal response were obtained over three orders of magnitude (∼0.05 to ∼50 µM) for simultaneous analysis, with coefficient of determination (R²) values ≥ 0.99. The detection limits of the method (1.3–6 µg/L) were at or below the maximum contaminant levels specified by the EPA. Matrix spike studies with groundwater and mineral medium showed recovery rates of 79–108%. The utility of the method was demonstrated in lab-scale sediment flow-through columns assessing the bioremediation potential of chlorinated ethene-contaminated groundwater. Owing to its low sample volume requirements, good sensitivity, and broad target analyte range, the method is suitable for routine compliance monitoring and is particularly attractive for interpreting the bench-scale feasibility studies that are commonly performed during the remedial design stage of groundwater cleanup projects.
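
A minimal sketch of the linear calibration check behind the reported R² ≥ 0.99, using entirely hypothetical peak areas over the stated ∼0.05–50 µM range:

```python
import numpy as np

# Hypothetical calibration points spanning roughly three orders of magnitude.
conc = np.array([0.05, 0.5, 5.0, 50.0])            # uM
signal = np.array([1.1e3, 9.8e3, 1.02e5, 9.9e5])   # made-up peak areas

slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)
print(f"slope {slope:.3g}, intercept {intercept:.3g}, R^2 = {r2:.4f}")
```

One such fit per analyte, together with matrix-spike recoveries, is the standard way linearity and accuracy claims like those above are substantiated.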

Contributors: Ziv-El, Michal (Author) / Kalinowski, Tomasz (Author) / Krajmalnik-Brown, Rosa (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2014-02-01
Description

Aquaculture production has nearly tripled in the last two decades, bringing with it a significant increase in the use of antibiotics. Using liquid chromatography/tandem mass spectrometry (LC–MS/MS), the presence of 47 antibiotics was investigated in U.S. purchased shrimp, salmon, catfish, trout, tilapia, and swai originating from 11 different countries. All samples (n = 27) complied with U.S. FDA regulations and five antibiotics were detected above the limits of detection: oxytetracycline (in wild shrimp, 7.7 ng/g of fresh weight; farmed tilapia, 2.7; farmed salmon, 8.6; farmed trout with spinal deformities, 3.9), 4-epioxytetracycline (farmed salmon, 4.1), sulfadimethoxine (farmed shrimp, 0.3), ormetoprim (farmed salmon, 0.5), and virginiamycin (farmed salmon marketed as antibiotic-free, 5.2). A literature review showed that sub-regulatory levels of antibiotics, as found here, can promote resistance development; publications linking aquaculture to this have increased more than 8-fold from 1991 to 2013. Although this study was limited in size and employed sample pooling, it represents the largest reconnaissance of antibiotics in U.S. seafood to date, providing data on previously unmonitored antibiotics and on farmed trout with spinal deformities. Results indicate low levels of antibiotic residues and general compliance with U.S. regulations. The potential for development of microbial drug resistance was identified as a key concern and research priority.

Contributors: Done, Hansa (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-01-23