Description

Background:
The evidence that heat waves can result in both increased deaths and illness is substantial, and concern over this issue is rising because of climate change. Adverse health impacts from heat waves can be avoided, and epidemiologic studies have identified specific population and community characteristics that mark vulnerability to heat waves.

Objectives:
We situated vulnerability to heat in geographic space and identified potential areas for intervention and further research.

Methods:
We mapped and analyzed 10 vulnerability factors for heat-related morbidity/mortality in the United States: six demographic characteristics and two household air conditioning variables from the U.S. Census Bureau, vegetation cover from satellite images, and diabetes prevalence from a national survey. We performed a factor analysis of these 10 variables and assigned values of increasing vulnerability for the four resulting factors to each of 39,794 census tracts. We added the four factor scores to obtain a cumulative heat vulnerability index value.
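The index construction described above (factor scores per census tract, binned and summed into a cumulative value) can be sketched as follows. This is an illustrative stand-in, not the authors' code: it uses PCA via SVD in place of their factor analysis, random data in place of the 10 census variables, and an assumed sextile binning scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 10 standardized tract-level vulnerability variables
X = rng.normal(size=(500, 10))
Xc = X - X.mean(axis=0)

# PCA via SVD as a simple stand-in for the factor analysis used in the study
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:4].T               # one score per tract for each of 4 factors

# Bin each factor score into sextiles (1 = least, 6 = most vulnerable; assumed scheme)
cuts = [np.quantile(s, np.linspace(0, 1, 7)[1:-1]) for s in scores.T]
binned = np.stack([np.digitize(s, c) + 1 for s, c in zip(scores.T, cuts)], axis=1)

# Cumulative heat vulnerability index: sum of the four binned factor scores (4..24)
hvi = binned.sum(axis=1)
```

Summing binned scores rather than raw factor scores keeps each factor's contribution bounded, so no single factor dominates the cumulative index.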

Results:
Four factors explained > 75% of the total variance in the original 10 vulnerability variables: a) social/environmental vulnerability (combined education/poverty/race/green space), b) social isolation, c) air conditioning prevalence, and d) proportion elderly/diabetes. We found substantial spatial variability of heat vulnerability nationally, with generally higher vulnerability in the Northeast and Pacific Coast and the lowest in the Southeast. In urban areas, inner cities showed the highest vulnerability to heat.

Conclusions:
These methods provide a template for making local and regional heat vulnerability maps. After validation using health outcome data, interventions can be targeted at the most vulnerable populations.

Contributors: Reid, Colleen E. (Author) / O'Neill, Marie S. (Author) / Gronlund, Carina J. (Author) / Brines, Shannon J. (Author) / Brown, Daniel G. (Author) / Diez-Roux, Ana V. (Author) / Schwartz, Joel (Author)
Created: 2009-11-01
Description
We studied left ventricular flow patterns for a range of rotational orientations of a bileaflet mechanical heart valve (MHV) implanted in the mitral position of an elastic model of a beating left ventricle (LV). The valve was rotated through 3 angular positions (0, 45, and 90 degrees) about the LV long axis. Ultrasound scans of the elastic LV were obtained in four apical 2-dimensional (2D) imaging projections, each with 45 degrees of separation. Particle imaging velocimetry was performed during the diastolic period to quantify the in-plane velocity field obtained by computer tracking of diluted microbubbles in the acquired ultrasound projections. The resulting velocity field, vorticity, and shear stresses were statistically significantly altered by angular positioning of the mechanical valve, although the results did not show any specific trend with the valve angular position and were highly dependent on the orientation of the imaging plane with respect to the valve. We conclude that bileaflet MHV orientation influences hemodynamics of LV filling. However, determination of ‘optimal’ valve orientation cannot be made without measurement techniques that account for the highly 3-dimensional (3D) intraventricular flow.
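The in-plane quantities reported above (velocity field, vorticity, shear stress) are derived from PIV velocity data by finite differences. A minimal sketch, with an analytic velocity field and grid spacing standing in for actual PIV output:

```python
import numpy as np

# Hypothetical in-plane velocity field (u, v) on a regular PIV grid
h = 1e-3                                  # grid spacing in metres (assumed)
y, x = np.mgrid[0:64, 0:64] * h
L = 64 * h
u = -np.sin(np.pi * y / L) * np.cos(np.pi * x / L)
v = np.cos(np.pi * y / L) * np.sin(np.pi * x / L)

# np.gradient returns derivatives along axis 0 (y) first, then axis 1 (x)
du_dy, du_dx = np.gradient(u, h, h)
dv_dy, dv_dx = np.gradient(v, h, h)

vorticity = dv_dx - du_dy                 # out-of-plane component, omega_z
shear_rate = 0.5 * (du_dy + dv_dx)        # in-plane shear strain-rate component
```

Note that only the out-of-plane vorticity component is recoverable from a single 2D imaging plane, which is why the text stresses the dependence on imaging-plane orientation for this highly 3D flow.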
Created: 2015-06-26
Description
pH and fermentable substrates impose selective pressures on gut microbial communities and their metabolisms. We evaluated the relative contributions of pH, alkalinity, and substrate on microbial community structure, metabolism, and functional interactions using triplicate batch cultures started from fecal slurry and incubated with an initial pH of 6.0, 6.5, or 6.9 and 10 mM glucose, fructose, or cellobiose as the carbon substrate. We analyzed 16S rRNA gene sequences and fermentation products. Microbial diversity was driven by both pH and substrate type. Due to insufficient alkalinity, a drop in pH from 6.0 to ~4.5 clustered pH 6.0 cultures together and distant from pH 6.5 and 6.9 cultures, which experienced only small pH drops. Cellobiose yielded more acidity than alkalinity due to the amount of fermentable carbon, which moved cellobiose pH 6.5 cultures away from other pH 6.5 cultures. The impact of pH on microbial community structure was reflected by fermentative metabolism. Lactate accumulation occurred in pH 6.0 cultures, whereas propionate and acetate accumulations were observed in pH 6.5 and 6.9 cultures and independently from the type of substrate provided. Finally, pH had an impact on the interactions between lactate-producing and -consuming communities. Lactate-producing Streptococcus dominated pH 6.0 cultures, and acetate- and propionate-producing Veillonella, Bacteroides, and Escherichia dominated the cultures started at pH 6.5 and 6.9. Acid inhibition on lactate-consuming species led to lactate accumulation. Our results provide insights into pH-derived changes in fermenting microbiota and metabolisms in the human gut.
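The buffering mechanism described above (fermentation acids consuming bicarbonate alkalinity until pH crashes) can be caricatured with a Henderson-Hasselbalch sketch. Everything here is an illustrative assumption: the function, the millimolar inputs, and the ~4.5 floor are not the authors' model, only a qualitative restatement of why the pH 6.0 cultures crashed while the better-buffered cultures did not.

```python
import math

PKA_CARBONATE = 6.35   # H2CO3/HCO3- pKa at 25 degrees C

def buffered_ph(alkalinity_mM, acid_mM, co2_mM=1.0):
    """Crude sketch: fermentation acids convert bicarbonate alkalinity to CO2;
    once the alkalinity is spent, pH falls toward the ~4.5 observed floor."""
    hco3 = alkalinity_mM - acid_mM
    if hco3 <= 0:
        return 4.5                      # approximate floor seen in the cultures
    return PKA_CARBONATE + math.log10(hco3 / (co2_mM + acid_mM))
```

With 20 mM alkalinity, 5 mM of acid leaves the pH near neutral, while 25 mM exhausts the buffer entirely, mirroring the split between the pH 6.5/6.9 cultures and the pH 6.0 (and cellobiose) cultures.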
Created: 2017-05-03
Description
Purpose: To evaluate a new method of measuring ocular exposure in the context of a natural blink pattern through analysis of the variables tear film breakup time (TFBUT), interblink interval (IBI), and tear film breakup area (BUA).
Methods: The traditional methodology (Forced-Stare [FS]) measures TFBUT and IBI separately. TFBUT is measured under forced-stare conditions by an examiner using a stopwatch, while IBI is measured as the subject watches television. The new methodology (video capture manual analysis [VCMA]) involves retrospective analysis of video data of fluorescein-stained eyes taken through a slit lamp while the subject watches television, and provides TFBUT and BUA for each IBI during the 1-minute video under natural blink conditions. The FS and VCMA methods were directly compared in the same set of dry-eye subjects. The VCMA method was evaluated for the ability to discriminate between dry-eye subjects and normal subjects. The VCMA method was further evaluated in the dry-eye subjects for the ability to detect a treatment effect before, and 10 minutes after, bilateral instillation of an artificial tear solution.
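The per-interval bookkeeping implied above (TFBUT and BUA recorded for each IBI, then combined into derived variables such as BUA/IBI and breakup rate) can be sketched as follows. The record format, field names, and numbers are hypothetical, not the study's actual data structures.

```python
# Hypothetical per-interblink-interval records from one 1-minute video:
# (interblink interval in s, TFBUT in s or None if no breakup, breakup-area fraction)
records = [(4.2, 3.1, 0.08), (5.0, 3.6, 0.12), (3.4, None, 0.0)]

def vcma_summary(recs):
    """Aggregate VCMA-style variables: mean TFBUT over intervals with breakup,
    the derived exposure ratio BUA/IBI, and the per-interval breakup rate."""
    tfbuts = [t for _, t, _ in recs if t is not None]
    mean_tfbut = sum(tfbuts) / len(tfbuts)
    bua_over_ibi = sum(a for _, _, a in recs) / sum(i for i, _, _ in recs)
    breakup_rate = len(tfbuts) / len(recs)
    return mean_tfbut, bua_over_ibi, breakup_rate

mean_tfbut, ratio, rate = vcma_summary(records)
```

Normalizing breakup area by total interblink time is what lets the derived variables reflect ocular exposure under a natural blink pattern rather than a forced stare.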
Results: Ten normal subjects and 17 dry-eye subjects were studied. In the dry-eye subjects, the two methods differed with respect to mean TFBUTs (5.82 seconds, FS; 3.98 seconds, VCMA; P = 0.002). The FS variables alone (TFBUT, IBI) were not able to successfully distinguish between the dry-eye and normal subjects, whereas the additional VCMA variables, both derived and observed (BUA, BUA/IBI, breakup rate), were able to successfully distinguish between the dry-eye and normal subjects in a statistically significant fashion. TFBUT (P = 0.034) and BUA/IBI (P = 0.001) were able to distinguish the treatment effect of artificial tears in dry-eye subjects.
Conclusion: The VCMA methodology provides a clinically relevant analysis of tear film stability measured in the context of a natural blink pattern.
Created: 2011-09-21
Description
Purpose: To investigate use of an improved ocular tear film analysis protocol (OPI 2.0) in the Controlled Adverse Environment (CAE℠) model of dry eye disease, and to examine the utility of new metrics in the identification of subpopulations of dry eye patients.
Methods: Thirty-three dry eye subjects completed a single-center, single-visit, pilot CAE study. The primary endpoint was mean break-up area (MBA) as assessed by the OPI 2.0 system. Secondary endpoints included corneal fluorescein staining, tear film break-up time, and OPI 2.0 system measurements. Subjects were also asked to rate their ocular discomfort throughout the CAE. Dry eye endpoints were measured at baseline, immediately following a 90-minute CAE exposure, and again 30 minutes after exposure.
Results: The post-CAE measurements of MBA showed a statistically significant decrease from the baseline measurements. The decrease was relatively specific to those patients with moderate to severe dry eye, as measured by baseline MBA. Secondary endpoints, including palpebral fissure size, corneal staining, and redness, also showed significant changes when pre- and post-CAE measurements were compared. A correlation analysis identified specific associations between MBA, blink rate, and palpebral fissure size. Comparison of MBA responses allowed us to identify subpopulations of subjects who exhibited different compensatory mechanisms in response to CAE challenge. Of note, none of the measures of tear film break-up time showed statistically significant changes or correlations in pre- versus post-CAE measures.
Conclusion: This pilot study confirms that the tear film metric MBA can detect changes in the ocular surface induced by a CAE, and that these changes are correlated with other, established measures of dry eye disease. The observed decrease in MBA following CAE exposure demonstrates that compensatory mechanisms are initiated during the CAE exposure, and that this compensation may provide the means to identify and characterize clinically relevant subpopulations of dry eye patients.
Created: 2012-11-12
Description
Diamond is considered an ideal material for high field and high power devices due to its high breakdown field, high lightly doped carrier mobility, and high thermal conductivity. The modeling and simulation of diamond devices are therefore important to predict the performance of diamond-based devices. In this context, we use Silvaco® Atlas, drift-diffusion based commercial software, to model diamond-based power devices. The models used in Atlas were modified to account for both variable range and nearest neighbor hopping transport in the impurity bands associated with high activation energies for boron-doped and phosphorus-doped diamond. The models were fit to experimentally reported resistivity data over a wide range of doping concentrations and temperatures. We compare to recent data on depleted diamond Schottky PIN diodes demonstrating low turn-on voltages and high reverse breakdown voltages, which could be useful for high power rectifying applications due to the low turn-on voltage enabling high forward current densities. Three-dimensional simulations of the depleted Schottky PIN diamond devices were performed and the results are verified with experimental data at different operating temperatures.
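The "high activation energies" mentioned above are the crux of diamond device modeling: dopants are so deep that only a small fraction ionize at operating temperatures. A toy single-exponential freeze-out estimate, offered only as a stand-in for the full incomplete-ionization plus hopping-transport models fitted in Atlas:

```python
import math

K_B = 8.617e-5          # Boltzmann constant in eV/K

def ionized_fraction(Ea_eV, T_K):
    """Toy Boltzmann freeze-out factor for a deep dopant level; ignores
    degeneracy factors and the doping-dependent hopping contributions."""
    return math.exp(-Ea_eV / (K_B * T_K))

# Boron acceptors in diamond have an activation energy of roughly 0.37 eV,
# so only a tiny fraction of acceptors is ionized at room temperature.
f300 = ionized_fraction(0.37, 300.0)
f500 = ionized_fraction(0.37, 500.0)
```

The strong temperature dependence this implies is why resistivity fits must span a wide temperature range, and why hopping conduction in the impurity band matters at the high doping levels where band conduction alone underestimates the carrier supply.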
Created: 2016-06-08
Description
Electricity plays a special role in our lives and life. The dynamics of electrons allow light to flow through a vacuum. The equations of electron dynamics are nearly exact and apply from nuclear particles to stars. These Maxwell equations include a special term, the displacement current (of a vacuum). The displacement current allows electrical signals to propagate through space. Displacement current guarantees that current is exactly conserved, from inside atoms to between stars, as long as current is defined as the entire source of the curl of the magnetic field, as Maxwell did. We show that the Bohm formulation of quantum mechanics allows the easy definition of the total current, and its conservation, without the difficulties implicit in the orthodox quantum theory. The orthodox theory neglects the reality of magnitudes, like the currents, during times that they are not being explicitly measured. We show how conservation of current can be derived without mention of the polarization or dielectric properties of matter. We point out that displacement current is handled correctly in electrical engineering by 'stray capacitances', although it is rarely discussed explicitly. Matter does not behave as physicists of the 1800s thought it did. They could only measure on a time scale of seconds and tried to explain dielectric properties and polarization with a single dielectric constant, a real positive number independent of everything. Matter, and thus charge, moves in enormously complicated ways that cannot be described by a single dielectric constant when studied on the time scales important today for electronic technology and molecular biology. When classical theories could not explain complex charge movements, constants in equations were allowed to vary in solutions of those equations, in a way not justified by mathematics, with predictable consequences.
Life occurs in ionic solutions where charge is moved by forces not mentioned or described in the Maxwell equations, like convection and diffusion. These movements and forces produce crucial currents that cannot be described as classical conduction or classical polarization. Derivations of conservation of current involve oversimplified treatments of dielectrics and polarization in nearly every textbook. Because real dielectrics do not behave in that simple way (not even approximately), classical derivations of conservation of current are often distrusted or even ignored. We show that current is conserved inside atoms. We show that current is conserved exactly in any material, no matter how complex the properties of dielectric, polarization, or conduction currents. Electricity has a special role because conservation of current is a universal law. Most models of chemical reactions do not conserve current and need to be changed to do so. On the macroscopic scale of life, conservation of current necessarily links far-spread boundaries to each other, correlating inputs and outputs, and thereby creating devices. We suspect that correlations created by displacement current link all scales and allow atoms to control the machines and organisms of life. Conservation of current has a special role in our lives and life, as well as in physics. We believe models, simulations, and computations should conserve current on all scales, as accurately as possible, because physics conserves current that way. We believe models will be much more successful if they conserve current at every level of resolution, the way physics does. We surely need successful models as we try to control macroscopic functions by atomic interventions, in technology, life, and medicine. Maxwell's displacement current lets us see stars. We hope it will help us see how atoms control life.
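The conservation property invoked above follows directly from taking the divergence of the Ampère-Maxwell law; a compact restatement of that standard derivation:

$$\nabla \times \mathbf{B} = \mu_0\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right)$$

Since the divergence of a curl vanishes identically,

$$\nabla \cdot \mathbf{J}_{\mathrm{total}} = \nabla \cdot \mathbf{J} + \varepsilon_0 \frac{\partial}{\partial t}\left(\nabla \cdot \mathbf{E}\right) = 0,$$

so the total current (conduction plus displacement) is exactly conserved everywhere, which is the sense in which current is defined here as the entire source of the curl of the magnetic field.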
Created: 2017-10-28
Description
Recent studies indicate the presence of nano-scale titanium dioxide (TiO₂) as an additive in human foodstuffs, but a practical protocol to isolate and separate nano-fractions from soluble foodstuffs as a source of material remains elusive. As such, we developed a method for separating the nano and submicron fractions found in commercial-grade TiO₂ (E171) and E171 extracted from soluble foodstuffs and pharmaceutical products (e.g., chewing gum, pain reliever, and allergy medicine). Primary particle analysis of commercial-grade E171 indicated that 54% of particles were nano-sized (i.e., < 100 nm). Isolation and primary particle analysis of five consumer goods intended to be ingested revealed differences in the percent of nano-sized particles, from 32% to 58%. Separation and enrichment of nano- and submicron-sized particles from commercial-grade E171 and E171 isolated from foodstuffs and pharmaceuticals was accomplished using rate-zonal centrifugation. Commercial-grade E171 was separated into nano- and submicron-enriched fractions with nano:submicron ratios of approximately 0.45:1 and 3.2:1, respectively. E171 extracted from gum had nano:submicron ratios of 1.4:1 and 0.19:1 for the nano- and submicron-enriched fractions, respectively. We show a difference in particle adhesion to the cell surface, which was found to be dependent on particle size and epithelial orientation. Finally, we provide evidence that E171 particles are not immediately cytotoxic to the Caco-2 human intestinal epithelium model. These data suggest that this separation method is appropriate for studies interested in isolating the nano-sized particle fraction taken directly from consumer products, in order to study separately the effects of nano and submicron particles.
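The "percent nano-sized" figures above come down to counting primary particles below the conventional 100 nm cutoff. A minimal sketch with hypothetical diameters standing in for the study's electron-microscopy sizing data:

```python
# Hypothetical primary-particle diameters (nm) from electron-microscopy sizing
diameters_nm = [45, 80, 95, 110, 150, 60, 210, 98, 130, 70]

def percent_nano(diams, cutoff_nm=100.0):
    """Share of particles strictly below the conventional 100 nm 'nano'
    cutoff, the same criterion behind the ~54% reported for E171."""
    return 100.0 * sum(d < cutoff_nm for d in diams) / len(diams)

share = percent_nano(diameters_nm)
```

Because the metric is a number-based count rather than a mass fraction, a material can be majority nano by count while the nano fraction contributes only a small share of total mass, which is one reason the separate enriched fractions matter for toxicity studies.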
Created: 2016-10-31
Description
Background
Grading schemes for breast cancer diagnosis are predominantly based on pathologists' qualitative assessment of altered nuclear structure from 2D brightfield microscopy images. However, cells are three-dimensional (3D) objects with features that are inherently 3D and thus poorly characterized in 2D. Our goal is to quantitatively characterize nuclear structure in 3D, assess its variation with malignancy, and investigate whether such variation correlates with standard nuclear grading criteria.
Methodology
We applied micro-optical computed tomographic imaging and automated 3D nuclear morphometry to quantify and compare morphological variations between human cell lines derived from normal, benign fibrocystic or malignant breast epithelium. To reproduce the appearance and contrast in clinical cytopathology images, we stained cells with hematoxylin and eosin and obtained 3D images of 150 individual stained cells of each cell type at sub-micron, isotropic resolution. Applying volumetric image analyses, we computed 42 3D morphological and textural descriptors of cellular and nuclear structure.
Principal Findings
We observed four distinct nuclear shape categories, the predominant being a mushroom cap shape. Cell and nuclear volumes increased from normal to fibrocystic to metastatic type, but there was little difference in the volume ratio of nucleus to cytoplasm (N/C ratio) between the lines. Abnormal cell nuclei had more nucleoli, markedly higher density and clumpier chromatin organization compared to normal. Nuclei of non-tumorigenic, fibrocystic cells exhibited larger textural variations than metastatic cell nuclei. At p<0.0025 by ANOVA and Kruskal-Wallis tests, 90% of our computed descriptors statistically differentiated control from abnormal cell populations, but only 69% of these features statistically differentiated the fibrocystic from the metastatic cell populations.
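One of the morphometric descriptors above, the N/C ratio, is simply nuclear volume relative to cytoplasmic volume from the 3D segmentation. A small sketch with hypothetical volumes (the real pipeline computed 42 such descriptors per cell):

```python
# Hypothetical per-cell volumes from 3D segmentation, in cubic micrometres:
# each entry is (whole-cell volume, nuclear volume)
volumes = [(1800.0, 520.0), (2400.0, 700.0), (2100.0, 610.0)]

# N/C ratio compares nuclear volume to cytoplasmic (cell minus nucleus) volume
nc_ratios = [nuc / (cell - nuc) for cell, nuc in volumes]
mean_nc = sum(nc_ratios) / len(nc_ratios)
```

Because both volumes scale together from normal to metastatic lines, the ratio can stay nearly constant even as absolute volumes grow, which matches the finding that cell and nuclear volumes increased while the N/C ratio differed little between lines.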
Conclusions
Our results provide a new perspective on nuclear structure variations associated with malignancy and point to the value of automated quantitative 3D nuclear morphometry as an objective tool to enable development of sensitive and specific nuclear grade classification in breast cancer diagnosis.
Created: 2012-01-05
Description

Inhibition by ammonium at concentrations above 1000 mgN/L is known to harm the methanogenesis phase of anaerobic digestion. We anaerobically digested swine waste and achieved steady-state COD-removal efficiency of around 52% with no fatty-acid or H₂ accumulation. As the anaerobic microbial community adapted to the gradual increase of total ammonia-N (NH₃-N) from 890 ± 295 to 2040 ± 30 mg/L, the Bacterial and Archaeal communities became less diverse. Phylotypes most closely related to hydrogenotrophic Methanoculleus (36.4%) and Methanobrevibacter (11.6%), along with acetoclastic Methanosaeta (29.3%), became the most abundant Archaeal sequences during acclimation. This was accompanied by a sharp increase in the relative abundances of phylotypes most closely related to acetogens and fatty-acid producers (Clostridium, Coprococcus, and Sphaerochaeta) and syntrophic fatty-acid-oxidizing Bacteria (Syntrophomonas, Clostridium, Clostridiaceae species, and Cloacamonaceae species) that have metabolic capabilities for butyrate and propionate fermentation, as well as for reverse acetogenesis. Our results provide evidence countering a prevailing theory that acetoclastic methanogens are selectively inhibited when the total ammonia-N concentration is greater than ~1000 mgN/L. Instead, acetoclastic and hydrogenotrophic methanogens coexisted in the presence of total ammonia-N of ~2000 mgN/L by establishing syntrophic relationships with fatty-acid fermenters, as well as homoacetogens able to carry out forward and reverse acetogenesis.
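Ammonia inhibition is usually attributed to the un-ionized NH₃ fraction of total ammonia-N, which depends strongly on pH through the NH₄⁺/NH₃ equilibrium. The relation below is standard acid-base chemistry, but the digester pH of 7.8 used in the example is an assumption, not a value from the study:

```python
import math

def free_ammonia_fraction(pH, pKa=9.25):
    """Fraction of total ammonia-N present as un-ionized NH3 (the more
    inhibitory species), from the NH4+/NH3 equilibrium; pKa ~9.25 at 25 C."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

tan_mgN_L = 2040.0                                   # total ammonia-N reached here
fan_mgN_L = tan_mgN_L * free_ammonia_fraction(7.8)   # pH 7.8 is an assumed value
```

Because the free fraction rises roughly tenfold per pH unit near neutral pH, reactor pH is as important as total ammonia-N when comparing inhibition thresholds across studies.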

Created: 2016-08-11