The title “Regents’ Professor” is the highest faculty honor awarded at Arizona State University. It is conferred on ASU faculty who have made pioneering contributions in their areas of expertise, who have achieved a sustained level of distinction, and who enjoy national and international recognition for these accomplishments. This collection contains primarily open access works by ASU Regents' Professors.
Does the Growth Rate Hypothesis Apply across Temperatures? Variation in the Growth Rate and Body Phosphorus of Neotropical Benthic Grazers
The growth rate hypothesis predicts that organisms with higher maximum growth rates will also have higher body percent phosphorus (P) due to the increased demand for ribosomal RNA production needed to sustain rapid growth. However, this hypothesis was formulated for invertebrates growing at the same temperature. Within a biologically relevant temperature range, increased temperatures can lead to more rapid growth, suggesting that organisms in warmer environments might also contain more P per gram of dry mass. However, since higher growth rates at higher temperature can be supported by more rapid protein synthesis per ribosome rather than increased ribosome investment, increasing temperature might not lead to a positive relationship between growth and percent P. We tested the growth rate hypothesis by examining two genera of Neotropical stream grazers, the leptophlebiid mayfly Thraulodes and the bufonid toad tadpole Rhinella. We measured the body percent P of field-collected Thraulodes as well as the stoichiometry of periphyton resources in six Panamanian streams over an elevational gradient spanning approximately 1,100 m and 7°C in mean annual temperature. We also measured Thraulodes growth rates using in situ growth chambers in two of these streams. Finally, we conducted temperature manipulation experiments with both Thraulodes and Rhinella at the highest and lowest elevation sites and measured differences in percent P and growth rates. Thraulodes body percent P increased with temperature across the six streams, and average specific growth rate was higher in the warmer lowland stream. In the temperature manipulation experiments, both taxa exhibited higher growth rate and body percent P in the lowland experiments regardless of experimental temperature, but growth rate and body percent P of individuals were not correlated. 
Although we found that Thraulodes from warmer streams grew more rapidly and had higher body percent P, our experimental results suggest that the growth rate hypothesis does not apply across temperatures. Instead, our results indicate that factors other than temperature drive variation in organismal percent P among sites.
I present the case for a fire-centric scholarship and suggest that the transition between burning living landscapes and lithic ones (in the form of fossil fuels) would be a good demonstration of what such scholarship might do and what its value could be.
Thermionic energy conversion, a process that allows direct transformation of thermal to electrical energy, offers a means of efficient electrical power generation, since the hot and cold sides of the corresponding heat engine are separated by a vacuum gap. Conversion efficiencies approaching those of the Carnot cycle are possible if the material parameters of the active elements of the converter, i.e., the electron emitter or cathode and the collector or anode, are optimized for operation in the desired temperature range.
These parameters can be defined through the Richardson–Dushman law, which quantifies the ability of a material to release an electron current at a given temperature as a function of the emission barrier, or work function, and the emission (Richardson) constant. Engineering materials to defined parameter values is the key challenge in constructing practical thermionic converters. The elevated temperature regime of operation is a constraint that eliminates most semiconductors and identifies diamond, a wide band-gap semiconductor, as a suitable thermionic material through its unique material properties. At its surface, a configuration can be established, the negative electron affinity, that shifts the vacuum level below the conduction band minimum, eliminating the surface barrier for electron emission.
In addition, its ability to accept impurities as donor states allows materials engineering to control the work function and the emission constant. Single-crystal diamond electrodes with nitrogen levels at 1.7 eV and phosphorus levels at 0.6 eV were prepared by plasma-enhanced chemical vapor deposition, and the work function was controlled from 2.88 to 0.67 eV, one of the lowest thermionic work functions reported. This work function range was achieved through control of the doping concentration, where a relation to the amount of band bending emerged. Upward band bending that contributed to the work function was attributed to surface states, where lower-doped homoepitaxial films exhibited a surface state density of ∼3 × 10¹¹ cm⁻². With these optimized doped diamond electrodes, highly efficient thermionic converters are feasible, with a Schottky barrier at the diamond collector contact mitigated through operation at elevated temperatures.
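As a back-of-the-envelope illustration of the Richardson–Dushman law invoked above, the sketch below compares emission current densities for the two reported work-function extremes (2.88 eV and 0.67 eV). The operating temperature and the free-electron Richardson constant used here are placeholder assumptions for illustration, not values from the study; real diamond emitters have material-specific emission constants.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K
A_0 = 120.173         # free-electron Richardson constant, A/(cm^2 K^2) -- placeholder

def richardson_dushman(temperature_k, work_function_ev, a_const=A_0):
    """Emission current density J = A * T^2 * exp(-phi / (k_B * T))."""
    return a_const * temperature_k**2 * math.exp(-work_function_ev / (K_B * temperature_k))

# Compare the two work functions reported for the doped diamond films
# at an assumed emitter temperature of 900 K.
for phi in (2.88, 0.67):
    j = richardson_dushman(900.0, phi)
    print(f"phi = {phi:.2f} eV -> J ~ {j:.3e} A/cm^2 at 900 K")
```

Even at this modest temperature, the roughly 2.2 eV reduction in work function raises the predicted current density by many orders of magnitude, which is why the low-work-function surface matters for practical converters.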
Sequence Diversity of Pan troglodytes Subspecies and the Impact of WFDC6 Selective Constraints in Reproductive Immunity
Recent efforts have attempted to describe the population structure of the common chimpanzee, focusing on four subspecies: Pan troglodytes verus, P. t. ellioti, P. t. troglodytes, and P. t. schweinfurthii. However, few studies have examined the effects of natural selection in shaping their responses to pathogens and reproduction. Whey acidic protein (WAP) four-disulfide core domain (WFDC) genes and neighboring semenogelin (SEMG) genes encode proteins with combined roles in immunity and fertility. They display a strikingly high rate of amino acid replacement (dN/dS), indicative of adaptive pressures during primate evolution. In human populations, three signals of selection at the WFDC locus have been described, possibly influencing the proteolytic profile and antimicrobial activities of the male reproductive tract. To evaluate the patterns of genomic variation and selection at the WFDC locus in chimpanzees, we sequenced 17 WFDC genes and 47 autosomal pseudogenes in 68 chimpanzees (15 P. t. troglodytes, 22 P. t. verus, and 31 P. t. ellioti). We found a clear differentiation of P. t. verus and estimated the divergence of the P. t. troglodytes and P. t. ellioti subspecies at 0.173 Myr; further, at the WFDC locus we identified a signature of strong selective constraints common to the three subspecies in WFDC6, a recent paralog of the epididymal protease inhibitor EPPIN. Overall, chimpanzees and humans do not display similar footprints of selection across the WFDC locus, possibly due to different selective pressures between the two species related to immune response and reproductive biology.
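For readers unfamiliar with the dN/dS statistic mentioned above, a toy computation is sketched below; the substitution and site counts are made up for illustration, not data from this study.

```python
def dn_ds(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    """Ratio of nonsynonymous substitutions per nonsynonymous site (dN)
    to synonymous substitutions per synonymous site (dS)."""
    dn = nonsyn_subs / nonsyn_sites
    ds = syn_subs / syn_sites
    return dn / ds

# dN/dS > 1 is commonly read as a signal of positive (diversifying)
# selection; dN/dS << 1 as purifying constraint. Counts are invented.
print(dn_ds(12, 300, 3, 100))  # ~1.33, i.e. elevated replacement rate
```

Real analyses estimate these quantities with codon models rather than raw counts, but the ratio itself carries the same interpretation.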
Several forensic sciences, especially of the pattern-matching kind, are increasingly seen to lack the scientific foundation needed to justify continuing admission as trial evidence. Indeed, several have been abolished in the recent past. A likely next candidate for elimination is bitemark identification. A number of DNA exonerations have occurred in recent years for individuals convicted based on erroneous bitemark identifications. Intense scientific and legal scrutiny has resulted. An important National Academies review found little scientific support for the field. The Texas Forensic Science Commission recently recommended a moratorium on the admission of bitemark expert testimony. The California Supreme Court has a case before it that could start a national dismantling of forensic odontology. This article describes the (legal) basis for the rise of bitemark identification and the (scientific) basis for its impending fall. The article explains the general logic of forensic identification, the claims of bitemark identification, and reviews relevant empirical research on bitemark identification—highlighting both the lack of research and the lack of support provided by what research does exist. The rise and possible fall of bitemark identification evidence has broader implications—highlighting the weak scientific culture of forensic science and the law's difficulty in evaluating and responding to unreliable and unscientific evidence.
Diamond is considered an ideal material for high-field and high-power devices due to its high breakdown field, high carrier mobility at light doping, and high thermal conductivity. Modeling and simulation of diamond devices are therefore important to predict the performance of diamond-based devices. In this context, we use Silvaco® Atlas, a drift-diffusion-based commercial software package, to model diamond-based power devices. The models used in Atlas were modified to account for both variable-range and nearest-neighbor hopping transport in the impurity bands associated with the high activation energies of boron-doped and phosphorus-doped diamond. The models were fit to experimentally reported resistivity data over a wide range of doping concentrations and temperatures. We compare to recent data on depleted diamond Schottky PIN diodes demonstrating low turn-on voltages and high reverse breakdown voltages, which could be useful for high-power rectifying applications because the low turn-on voltage enables high forward current densities. Three-dimensional simulations of the depleted Schottky PIN diamond devices were performed, and the results were verified against experimental data at different operating temperatures.
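The steep, thermally activated conductivity that motivates the hopping-transport corrections above can be sketched with a deliberately simplified band-conduction estimate. The dopant density, mobility, and activation energies below are illustrative assumptions (boron ≈ 0.37 eV, phosphorus ≈ 0.57 eV are commonly quoted values), not the fitted Atlas parameters, and the exponential carrier-freeze-out form ignores compensation and the hopping channels the actual models include.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K
Q_E = 1.602176634e-19  # elementary charge, C

def band_conductivity(n_dopant_cm3, e_act_ev, mobility_cm2, temperature_k):
    """Simplified conductivity (S/cm) with thermally activated ionization:
    free-carrier density ~ N_dopant * exp(-E_act / (k_B * T))."""
    carriers = n_dopant_cm3 * math.exp(-e_act_ev / (K_B * temperature_k))
    return Q_E * carriers * mobility_cm2

# Large activation energies make resistivity fall steeply with temperature.
for label, e_act in (("boron", 0.37), ("phosphorus", 0.57)):
    rho_300 = 1.0 / band_conductivity(1e17, e_act, 100.0, 300.0)
    rho_600 = 1.0 / band_conductivity(1e17, e_act, 100.0, 600.0)
    print(f"{label}: rho(300 K) / rho(600 K) ~ {rho_300 / rho_600:.1e}")
```

This freeze-out behavior is exactly why pure band-transport models underestimate low-temperature conduction in heavily doped diamond and why the impurity-band hopping terms had to be added.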
A single-center study evaluating the effect of the controlled adverse environment (CAE℠) model on tear film stability
Purpose: To investigate use of an improved ocular tear film analysis protocol (OPI 2.0) in the Controlled Adverse Environment (CAE℠) model of dry eye disease, and to examine the utility of new metrics in the identification of subpopulations of dry eye patients.
Methods: Thirty-three dry eye subjects completed a single-center, single-visit, pilot CAE study. The primary endpoint was mean break-up area (MBA) as assessed by the OPI 2.0 system. Secondary endpoints included corneal fluorescein staining, tear film break-up time, and OPI 2.0 system measurements. Subjects were also asked to rate their ocular discomfort throughout the CAE. Dry eye endpoints were measured at baseline, immediately following a 90-minute CAE exposure, and again 30 minutes after exposure.
Results: The post-CAE measurements of MBA showed a statistically significant decrease from the baseline measurements. The decrease was relatively specific to those patients with moderate to severe dry eye, as measured by baseline MBA. Secondary endpoints, including palpebral fissure size, corneal staining, and redness, also showed significant changes when pre- and post-CAE measurements were compared. A correlation analysis identified specific associations between MBA, blink rate, and palpebral fissure size. Comparison of MBA responses allowed us to identify subpopulations of subjects who exhibited different compensatory mechanisms in response to CAE challenge. Of note, none of the measures of tear film break-up time showed statistically significant changes or correlations in pre- versus post-CAE measures.
Conclusion: This pilot study confirms that the tear film metric MBA can detect changes in the ocular surface induced by a CAE, and that these changes are correlated with other, established measures of dry eye disease. The observed decrease in MBA following CAE exposure demonstrates that compensatory mechanisms are initiated during the CAE exposure, and that this compensation may provide the means to identify and characterize clinically relevant subpopulations of dry eye patients.
Purpose: To evaluate a new method of measuring ocular exposure in the context of a natural blink pattern through analysis of the variables tear film breakup time (TFBUT), interblink interval (IBI), and tear film breakup area (BUA).
Methods: The traditional methodology (Forced-Stare [FS]) measures TFBUT and IBI separately. TFBUT is measured under forced-stare conditions by an examiner using a stopwatch, while IBI is measured as the subject watches television. The new methodology (video capture manual analysis [VCMA]) involves retrospective analysis of video data of fluorescein-stained eyes taken through a slit lamp while the subject watches television, and provides TFBUT and BUA for each IBI during the 1-minute video under natural blink conditions. The FS and VCMA methods were directly compared in the same set of dry-eye subjects. The VCMA method was evaluated for the ability to discriminate between dry-eye subjects and normal subjects. The VCMA method was further evaluated in the dry-eye subjects for the ability to detect a treatment effect before, and 10 minutes after, bilateral instillation of an artificial tear solution.
Results: Ten normal subjects and 17 dry-eye subjects were studied. In the dry-eye subjects, the two methods differed with respect to mean TFBUTs (5.82 seconds, FS; 3.98 seconds, VCMA; P = 0.002). The FS variables alone (TFBUT, IBI) were not able to successfully distinguish between the dry-eye and normal subjects, whereas the additional VCMA variables, both derived and observed (BUA, BUA/IBI, breakup rate), were able to successfully distinguish between the dry-eye and normal subjects in a statistically significant fashion. TFBUT (P = 0.034) and BUA/IBI (P = 0.001) were able to distinguish the treatment effect of artificial tears in dry-eye subjects.
Conclusion: The VCMA methodology provides a clinically relevant analysis of tear film stability measured in the context of a natural blink pattern.
The invention of the laser in the 1950s for visible light and microwaves, and the slow but steady recognition of its manifold uses, is a truly remarkable story in the history of science. But the severe λ³ dependence of the ratio of stimulated (mostly coherent) to spontaneous (incoherent) emission meant that efforts to build an X-ray laser seemed hopeless for decades. As so often happens in the history of science, the breakthrough eventually occurred at the interface of several fields: synchrotron science (and especially its insertion devices), laser physics, and work on microwave tubes for radar emerging from the Second World War. Synchrotrons themselves were an outgrowth of the particle accelerators of nuclear physics, whose X-ray radiation was considered a nuisance. All of this culminated recently in the construction of the first hard-X-ray laser, the US Department of Energy's Linac Coherent Light Source (LCLS), at its SLAC laboratory near Stanford. The first X-ray lasing occurred in that two-mile-long tunnel on April 21, 2009, at about 2 keV, in an all-or-nothing moment of intense excitement, as theoretical predictions proved spot-on. The new laser principle needed for hard-X-ray lasing, the free-electron laser (FEL), was first demonstrated in the infrared region at Stanford in 1975 in John Madey's group, following earlier theoretical work by Motz and Phillips on microwave tubes. Other FELs soon followed, in the microwave and visible regions, leading to the LCLS. The XFEL method provides brief pulses of X-ray laser radiation by the SASE (self-amplified spontaneous emission) process, using a resonant undulator driven by a linac electron accelerator. Each LCLS pulse, of 10 fs duration (repeated 120 times a second), contains about 10¹² hard-X-ray photons, about the same number that a synchrotron might generate in a second.
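The λ³ penalty described above follows from the Einstein coefficients: A₂₁/B₂₁ = 8πhν³/c³, so at fixed spectral energy density the stimulated-to-spontaneous ratio scales as ν⁻³, i.e. as λ³. A quick numerical check, with wavelengths chosen only for illustration, is:

```python
# Stimulated/spontaneous ratio at fixed spectral energy density scales
# as lambda^3 (from A21/B21 = 8*pi*h*nu^3/c^3). Illustrative wavelengths:
visible = 500e-9   # m, a typical visible laser line
xray = 1.5e-10     # m, a hard-X-ray wavelength

penalty = (xray / visible) ** 3
print(f"stimulated/spontaneous penalty going to hard X-rays: {penalty:.1e}")
```

A shortfall of roughly eleven orders of magnitude is why conventional mirror-cavity lasers were never a realistic route to hard X-rays, and why the mirrorless, single-pass SASE FEL scheme was needed.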
X-ray diffraction patterns from two-dimensional (2-D) protein crystals obtained using femtosecond X-ray pulses from an X-ray free-electron laser (XFEL) are presented. To date, it has not been possible to acquire transmission X-ray diffraction patterns from individual 2-D protein crystals due to radiation damage. However, the intense and ultrafast pulses generated by an XFEL permit a new method of collecting diffraction data before the sample is destroyed. Utilizing a diffract-before-destroy approach at the Linac Coherent Light Source, Bragg diffraction was acquired to better than 8.5 Å resolution for two different 2-D protein crystal samples each less than 10 nm thick and maintained at room temperature. These proof-of-principle results show promise for structural analysis of both soluble and membrane proteins arranged as 2-D crystals without requiring cryogenic conditions or the formation of three-dimensional crystals.