Matching Items (179)
Description
Major Depressive Disorder (MDD) affects over 300 million people worldwide, with the hippocampus showing decreased volume and activity in patients with MDD. The current study investigated whether a novel preclinical model of depression, unpredictable intermittent restraint (UIR), would decrease hippocampal neuronal dendritic complexity. Adult Sprague Dawley rats (24 male, 24 female) were equally divided into 4 groups: control males (CON-M), UIR males (UIR-M), control females (CON-F) and UIR females (UIR-F). UIR groups received restraint and shaking on an orbital shaker on a randomized schedule for 30 or 60 minutes/day for two to six days in a row for 26 days (21 total UIR days) before behavioral testing commenced. UIR continued and was interspersed between behavioral test days. At the end of behavioral testing, brains were processed. The behavior is published and not part of my honors thesis; my contribution involved quantifying and analyzing neurons in the hippocampus. Several neuronal types are found in the CA3 subregion of the hippocampus, and I focused on short shaft (SS) neurons, which show different sensitivities to stress than the more common long shaft (LS) variety. Brain sections were mounted to slides and Golgi stained. SS neurons were drawn using a microscope with camera lucida attachment and quantified using the number of bifurcations and dendritic intersections as metrics for dendritic complexity in the apical and basal areas separately. The hypothesis that SS neurons in the CA3 region of the hippocampus would exhibit apical dendritic simplification in both sexes after UIR was not supported by our findings. In contrast, following UIR, SS apical dendrites were more complex in both sexes compared to controls. Although unexpected, we believe that the UIR paradigm was an effective stressor, robust enough to elicit neuronal adaptations. It appears that the time from the end of UIR to when the brain tissue was collected (the post-stress recovery period) and/or repeated behavioral testing may have played a role in the observed increase in neuronal complexity. Future studies are needed to parse out these potential effects.
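The intersection counts described above are typically obtained with a Sholl-style analysis of camera lucida tracings. The sketch below is a minimal, hypothetical illustration of how ring crossings might be tallied from digitized segments; the data structures, coordinates, and ring spacing are illustrative assumptions and not the study's actual workflow.

```python
import numpy as np

def sholl_intersections(segments, soma, radii):
    """Count how many dendritic segments cross each concentric ring.

    segments: iterable of ((x1, y1), (x2, y2)) line segments from a tracing
    soma:     (x, y) centre of the cell body
    radii:    1-D array of ring radii (e.g. every 20 micrometres)
    """
    soma = np.asarray(soma, dtype=float)
    counts = np.zeros(len(radii), dtype=int)
    for (p1, p2) in segments:
        d1 = np.linalg.norm(np.asarray(p1, dtype=float) - soma)
        d2 = np.linalg.norm(np.asarray(p2, dtype=float) - soma)
        lo, hi = min(d1, d2), max(d1, d2)
        # A segment crosses a ring when its endpoints straddle that radius.
        counts += ((radii >= lo) & (radii < hi)).astype(int)
    return counts

# Illustrative tracing: a short apical trunk with one bifurcation.
segments = [((0, 0), (0, 30)),     # trunk
            ((0, 30), (-15, 60)),  # daughter branch 1
            ((0, 30), (15, 60))]   # daughter branch 2
radii = np.arange(20, 81, 20)      # rings at 20, 40, 60, 80 um
print(sholl_intersections(segments, soma=(0, 0), radii=radii))
n_bifurcations = 1  # branch points are usually tallied directly from the tracing
```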
Contributors: Acuna, Amanda Marie (Author) / Conrad, Cheryl (Thesis director) / Corbin, William (Committee member) / Olive, M. Foster (Committee member) / School of Life Sciences (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description
Mathematical epidemiology, one of the oldest and richest areas in mathematical biology, has significantly enhanced our understanding of how pathogens emerge, evolve, and spread. Classical epidemiological models, the standard for predicting and managing the spread of infectious disease, assume that contacts between susceptible and infectious individuals depend on their relative frequency in the population. The behavioral factors that underpin contact rates are not generally addressed. There is, however, an emerging class of models that addresses the feedbacks between infectious disease dynamics and the behavioral decisions driving host contact. Referred to as “economic epidemiology” or “epidemiological economics,” the approach explores the determinants of decisions about the number and type of contacts made by individuals, using insights and methods from economics. We show how the approach has the potential both to improve predictions of the course of infectious disease and to support the development of novel approaches to infectious disease management.
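To make the behavioral feedback concrete, here is a minimal sketch contrasting a classical SIR model with one in which the contact rate falls as prevalence rises. The functional form beta0/(1 + k*I) and all parameter values are assumptions chosen purely for illustration; they are not taken from the article.

```python
import numpy as np

def simulate_sir(beta0, gamma, k, days, dt=0.1):
    """Simple SIR with a prevalence-dependent contact rate.

    beta0 : baseline transmission rate
    gamma : recovery rate
    k     : strength of behavioural response (k = 0 recovers the classic SIR)
    """
    S, I, R = 0.999, 0.001, 0.0
    trajectory = []
    for step in range(int(days / dt)):
        beta = beta0 / (1.0 + k * I)       # contacts drop as prevalence rises
        new_inf = beta * S * I * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        trajectory.append((step * dt, S, I, R))
    return np.array(trajectory)

classic = simulate_sir(beta0=0.4, gamma=0.2, k=0.0, days=200)
adaptive = simulate_sir(beta0=0.4, gamma=0.2, k=50.0, days=200)
print("peak prevalence, classic :", classic[:, 2].max())
print("peak prevalence, adaptive:", adaptive[:, 2].max())
```

Behavioral avoidance flattens and delays the epidemic peak relative to the classical model, which is the kind of effect these models are designed to capture.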
Created: 2015-12-01
Description
Preserving a system’s viability in the presence of diversity erosion is critical if the goal is to sustainably support biodiversity. Reduction in population heterogeneity, whether inter- or intraspecies, may increase population fragility, either decreasing its ability to adapt effectively to environmental changes or facilitating the survival and success of ordinarily rare phenotypes. The latter may result in over-representation of individuals who may participate in resource utilization patterns that can lead to over-exploitation, exhaustion, and, ultimately, collapse of both the resource and the population that depends on it. Here, we aim to identify regimes that can signal whether a consumer–resource system is capable of supporting viable degrees of heterogeneity. The framework used here is an expansion of a previously introduced consumer–resource type system of a population of individuals classified by their resource consumption. Application of the Reduction Theorem to the system enables us to evaluate the health of the system through tracking both the mean value of the parameter of resource (over)consumption and the population variance, as both change over time. The article concludes with a discussion highlighting the applicability of the proposed framework to the investigation of systems affected by particularly devastating, overly adapted populations, namely cancerous cells. Potential intervention approaches for system management are discussed in the context of cancer therapies.
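As a loose illustration of tracking the mean and variance of a consumption parameter over time, the sketch below simulates a population of discrete clones, each with its own consumption rate, competing for a shared regenerating resource. The equations and parameter values are generic stand-ins and do not reproduce the article's model or its Reduction Theorem machinery.

```python
import numpy as np

# Illustrative stand-in model: clones with consumption rate c grow faster when
# they consume more, but collectively deplete a logistically regenerating resource.
c = np.linspace(0.1, 1.0, 10)   # consumption parameter of each clone
x = np.full_like(c, 1.0)        # clone abundances
z = 10.0                        # shared resource level
dt, steps = 0.01, 20000
death, regen, cap = 0.5, 1.0, 10.0

for step in range(steps):
    growth = c * z / (1.0 + z) - death                       # per-capita growth
    x += x * growth * dt
    z += (regen * z * (1.0 - z / cap)
          - np.sum(c * x) * z / (1.0 + z)) * dt
    z = max(z, 0.0)
    if step % 4000 == 0:
        freq = x / x.sum()
        mean_c = np.sum(freq * c)
        var_c = np.sum(freq * (c - mean_c) ** 2)
        print(f"t={step * dt:6.1f}  mean c={mean_c:.3f}  var c={var_c:.4f}  resource={z:.3f}")
```

Over time the mean consumption parameter drifts upward while the variance collapses, the over-adapted, over-consuming phenotype crowding out the rest, which is the qualitative behavior the abstract describes.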
Created: 2015-02-01
Description
Background
In the weeks following the first imported case of Ebola in the U.S. on September 29, 2014, coverage of the very limited outbreak dominated the news media, in a manner quite disproportionate to the actual threat to national public health; by the end of October 2014, there were only four laboratory-confirmed cases of Ebola in the entire nation. Public interest in these events was high, as reflected in the millions of Ebola-related Internet searches and tweets performed in the month following the first confirmed case. Use of trending Internet searches and tweets has been proposed in the past for real-time prediction of outbreaks (a field referred to as “digital epidemiology”), but accounting for the biases of public panic has been problematic. In the case of the limited U.S. Ebola outbreak, we know that the Ebola-related searches and tweets originating in the U.S. during the outbreak were due only to public interest or panic, providing an unprecedented means to determine how these dynamics affect such data, and how news media may be driving these trends.
Methodology
We examine daily Ebola-related Internet search and Twitter data in the U.S. during the six-week period ending October 31, 2014. TV news coverage data were obtained from the daily number of Ebola-related news videos appearing on two major news networks. We fit the parameters of a mathematical contagion model to the data to determine whether the news coverage was a significant factor in the temporal patterns in Ebola-related Internet and Twitter data.
Conclusions
We find significant evidence of contagion, with each Ebola-related news video inspiring tens of thousands of Ebola-related tweets and Internet searches. Between 65% and 76% of the variance in all samples is explained by the news media contagion model.
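A media-driven contagion fit of this kind can be sketched as follows: each news video is assumed to spawn a burst of searches that decays exponentially, and the amplitude and decay rate are fitted to the observed series. The data below are synthetic placeholders, and the fitting scheme (grid search plus linear least squares) is only an illustrative stand-in for the study's actual estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
days = 42
news = rng.poisson(5, size=days).astype(float)   # hypothetical daily news-video counts

def media_driven(news, amplitude, decay):
    """Predicted daily searches: each news video spawns searches that decay exponentially."""
    t = np.arange(len(news))
    kernel = np.exp(-decay * t)
    return amplitude * np.convolve(news, kernel)[:len(news)]

# Hypothetical observed search counts (in reality these came from search/Twitter data).
observed = media_driven(news, amplitude=3e4, decay=0.5) + rng.normal(0, 5e3, size=days)

# Grid search over the decay rate; amplitude solved by linear least squares at each value.
best = None
for decay in np.linspace(0.05, 2.0, 200):
    basis = media_driven(news, 1.0, decay)
    amplitude = np.dot(basis, observed) / np.dot(basis, basis)
    sse = np.sum((observed - amplitude * basis) ** 2)
    if best is None or sse < best[0]:
        best = (sse, amplitude, decay)

sse, amplitude, decay = best
r_squared = 1.0 - sse / np.sum((observed - observed.mean()) ** 2)
print(f"searches per video ~ {amplitude:.0f}, decay {decay:.2f}/day, variance explained {r_squared:.2f}")
```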
Created: 2015-06-11
Description
Background
Seroepidemiological studies before and after the epidemic wave of H1N1-2009 are useful for estimating population attack rates with a potential to validate early estimates of the reproduction number, R, in modeling studies.
Methodology/Principal Findings
Since the final epidemic size, the proportion of individuals in a population who become infected during an epidemic, is not the result of a binomial sampling process (infection events are not independent of each other), we propose the use of an asymptotic distribution of the final size to compute approximate 95% confidence intervals of the observed final size. This allows the comparison of observed final sizes against predictions based on the modeling study (R = 1.15, 1.40 and 1.90), and it also yields simple formulae for determining sample sizes for future seroepidemiological studies. We examine a total of eleven published seroepidemiological studies of H1N1-2009 that took place after observing the peak incidence in a number of countries. Observed seropositive proportions in six studies appear to be smaller than those predicted from R = 1.40; four of the six studies sampled serum less than one month after the reported peak incidence. The comparison of the observed final sizes against R = 1.15 and 1.90 reveals that none of the eleven studies deviates significantly from the prediction with R = 1.15, but final sizes in nine studies indicate overestimation if the value R = 1.90 is used.
Conclusions
Sample sizes of published seroepidemiological studies were too small to assess the validity of model predictions except when R = 1.90 was used. We recommend the use of the proposed approach in determining the sample size of post-epidemic seroepidemiological studies, calculating the 95% confidence interval of the observed final size, and conducting relevant hypothesis testing, rather than relying on methods that assume a binomial proportion.
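The point that the observed final size is not a binomial sample can be illustrated with a quick stochastic simulation. The chain-binomial model below is a generic stand-in rather than the study's method, but it shows that the spread of final sizes across epidemics is several times wider than a binomial proportion with the same mean would suggest.

```python
import numpy as np

rng = np.random.default_rng(1)

def final_fraction(R, n, i0=1):
    """Chain-binomial (Reed-Frost-type) epidemic; returns the fraction ever infected."""
    S, I, cum = n - i0, i0, i0
    while I > 0 and S > 0:
        p = 1.0 - np.exp(-R * I / n)      # per-susceptible infection probability this generation
        new_inf = rng.binomial(S, p)
        S, I = S - new_inf, new_inf
        cum += new_inf
    return cum / n

R, n, runs = 1.4, 500, 2000
sizes = np.array([final_fraction(R, n) for _ in range(runs)])
major = sizes[sizes > 0.1]                # keep major outbreaks, ignore early die-outs
z = major.mean()
print(f"mean final size           {z:.3f}")
print(f"std across epidemics      {major.std():.3f}")
print(f"naive binomial std        {np.sqrt(z * (1 - z) / n):.3f}")
```

Because infections are positively correlated within an epidemic, the empirical standard deviation of the final size is markedly larger than the binomial value, which is why binomial confidence intervals understate the uncertainty.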
Created: 2011-03-24
Description
Background
Several past studies have found that media reports of suicides and homicides appear to subsequently increase the incidence of similar events in the community, apparently due to the coverage planting the seeds of ideation in at-risk individuals to commit similar acts.
Methods
Here we explore whether or not contagion is evident in more high-profile incidents, such as school shootings and mass killings (incidents with four or more people killed). We fit a contagion model to recent data sets related to such incidents in the US, with terms that take into account the fact that a school shooting or mass murder may temporarily increase the probability of a similar event in the immediate future, by assuming an exponential decay in contagiousness after an event.
Conclusions
We find significant evidence that mass killings involving firearms are incented by similar events in the immediate past. On average, this temporary increase in probability lasts 13 days, and each incident incites at least 0.30 new incidents (p = 0.0015). We also find significant evidence of contagion in school shootings, for which an incident is contagious for an average of 13 days, and incites an average of at least 0.22 new incidents (p = 0.0001). All p-values are assessed based on a likelihood ratio test comparing the likelihood of a contagion model to that of a null model with no contagion. On average, mass killings involving firearms occur approximately every two weeks in the US, while school shootings occur on average monthly. We find that state prevalence of firearm ownership is significantly associated with the state incidence of mass killings with firearms, school shootings, and mass shootings.
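One common way to formalize this kind of self-excitation is a Hawkes-type point process with an exponentially decaying kernel, fitted by maximum likelihood and compared with a constant-rate (Poisson) null via a likelihood ratio test. The sketch below shows the mechanics on made-up event dates; it is not the study's code, and the data, starting values, and the chi-square approximation for the test are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def hawkes_neg_loglik(params, times, horizon):
    """Negative log-likelihood of a Hawkes process with exponential kernel.

    intensity(t) = mu + alpha * omega * sum_{t_i < t} exp(-omega * (t - t_i)),
    so each past event contributes on average alpha 'offspring' events.
    """
    mu, alpha, omega = params
    if mu <= 0 or alpha < 0 or omega <= 0:
        return np.inf
    loglik = 0.0
    for i, t in enumerate(times):
        excitation = alpha * omega * np.sum(np.exp(-omega * (t - times[:i])))
        loglik += np.log(mu + excitation)
    # Compensator: integral of the intensity over [0, horizon].
    loglik -= mu * horizon + alpha * np.sum(1.0 - np.exp(-omega * (horizon - times)))
    return -loglik

# Hypothetical event dates (days since start of observation), not the study's data.
times = np.array([3., 5., 6., 20., 21., 22., 40., 41., 55., 70., 71., 72., 73., 90.])
horizon = 100.0

fit = minimize(hawkes_neg_loglik, x0=[0.1, 0.2, 0.1], args=(times, horizon),
               method="Nelder-Mead")
mu, alpha, omega = fit.x
null_loglik = len(times) * np.log(len(times) / horizon) - len(times)  # homogeneous Poisson MLE
lrt = 2.0 * (-fit.fun - null_loglik)
p_value = chi2.sf(lrt, df=2)   # rough approximation; boundary effects ignored
print(f"excess events per event ~ {alpha:.2f}, mean contagion period ~ {1 / omega:.1f} days, p ~ {p_value:.3f}")
```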
Created: 2015-07-02
Description
Major Depressive Disorder (MDD) is a widespread mood disorder that affects more than 300 million people worldwide, and yet high relapse rates persist. The current study used an animal model for depression, unpredictable intermittent restraint (UIR), to investigate changes in a subset of neurons within the hippocampus, a region of high susceptibility in MDD. Adult male and female Sprague-Dawley rats were randomly assigned to four treatment groups based on sex (n = 48, n = 12/group). Half of the rats underwent UIR, which involved restraint with orbital shaking (30 min or 1 h) for 2-6 consecutive days, followed by one or two days of no stressors; the other half of the rats were undisturbed (CON). UIR rats were stressed for 28 days (21 days of actual stressors) before behavioral testing began, with UIR continuing between testing days for nearly 70 days. Rats were then euthanized between 9 and 11 days after the last UIR session. Brains were processed for Golgi stain, and long-shaft (LS) neurons within the hippocampal CA3a and CA3b regions were quantified for dendritic complexity using a camera lucida attachment. Our findings failed to support our hypothesis that UIR would produce apical dendritic retraction in CA3 hippocampal LS neurons in both males and females. Given that UIR failed to produce CA3 apical dendritic retraction in males, which is commonly observed in the literature, we discuss several reasons for these findings, including the time from the end of UIR to when brains were sampled and the effects of repeated cognitive testing. Given our published findings that UIR impaired spatial ability in males, but not females, we believe that UIR holds validity as a chronic stress paradigm, as UIR attenuated body weight gain in both males and females and reduced thymus gland weight in UIR males. These findings corroborate UIR as an effective stressor in males and warrant further research into the timing of UIR-induced changes in hippocampal CA3 apical dendritic morphology.
Contributors: Reynolds, Cindy Marie (Author) / Conrad, Cheryl D. (Thesis director) / Olive, M. Foster (Committee member) / School of Molecular Sciences (Contributor) / Department of English (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description
Coffee is an important link between the United States and Latin America and an important part of Latin America’s culture and economy. This paper looks at the similarities and differences between coffee organizations in Colombia, Ecuador, Peru, and Guatemala. Colombia has the strongest coffee organizations with the most political power. Guatemala and Peru, to a lesser extent, have well-organized and powerful organizations making up their industries. Ecuador's industry, however, is significantly less organized. At their core, the countries share a similar structure. One organization at the national level looks out for the industry as a whole. Underneath that, smaller, often regional organizations made up of cooperatives pool their resources for export; they function in similar ways to the national organizations but have less reach. At the bottom are individual cooperatives and independent farmers, which do not have much reach or connection to international markets.
Contributors: Chabin, James Edward (Author) / Janssen, Marco (Thesis director) / Taylor, Keith (Committee member) / School of Sustainability (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Opioid use disorder (OUD) has reached epidemic proportions, making it important to investigate the mechanisms that drive it. This study established a self-administration paradigm to model and investigate the mechanisms of sequential polysubstance use, in conjunction with analysis of withdrawal symptomatology driven by opioid withdrawal. The independent variables were dichotomized into a control group (food/cocaine) and an experimental group (oxycodone/cocaine). We hypothesized that more cocaine would be self-administered on the first day of oxycodone withdrawal. In addition, we hypothesized that somatic signs of withdrawal would increase at 16 hours post-oxycodone self-administration. Finally, we hypothesized that cocaine intake during oxycodone withdrawal would potentiate subsequent oxycodone self-administration. Our findings revealed that animals readily discriminated between the active (food or oxycodone) and inactive levers, although more animals will be required to achieve appropriate statistical power. Further, average cocaine infusions across phases differed significantly between the oxycodone/cocaine and food/cocaine groups, with fewer infusions in food-experienced than in oxycodone-experienced animals. This suggests that the exacerbation of this sequential co-use pattern yields an increase in cocaine infusions that may be driven by oxycodone withdrawal. Further, to characterize withdrawal from oxycodone self-administration, somatic signs were examined at either 0 or 16 hours following completion of oxycodone self-administration. The oxycodone/cocaine group exhibited significantly lower body temperature at 16 hours of oxycodone withdrawal compared to 0 hours. No differences in somatic signs of withdrawal in the food/cocaine group were found between the two timepoints. Oxycodone withdrawal was not found to potentiate subsequent self-administration of oxycodone. Future research is needed to uncover the neurobiological underpinnings of motivated polysubstance use in order to discover novel pharmacotherapeutic treatments that decrease co-use of drugs of abuse. Overall, this study is important as the first to establish a working preclinical model of a clinically relevant pattern of polysubstance use; in doing so, it provides an exceptional opportunity to examine co-use in a highly controlled setting.
Contributors: Ulangkaya, Hanaa Corsino (Author) / Gipson-Reichardt, Cassandra (Thesis director) / Olive, M. Foster (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Background
The transmission dynamics of tuberculosis (TB) involve complex epidemiological and socio-economic interactions between individuals living in highly distinct regional conditions. The level of exogenous reinfection and first-time infection rates within high-incidence settings may influence the impact of control programs on TB prevalence. The impact that effective population size and the distribution of individuals’ residence times in different patches have on TB transmission and control is studied using selected scenarios in which risk is defined by the estimated or perceived first-time infection and/or exogenous reinfection rates.
Methods
This study aims to enhance the understanding of TB dynamics within simplified, two-patch, risk-defined environments, in the presence of short-term mobility and variations in reinfection and infection rates, via a mathematical model. The modeling framework captures the role of individuals’ ‘daily’ dynamics within and between places of residency, work, or business via the average proportion of time spent in residence and as visitors to TB-risk environments (patches). As a result, the effective population size of Patch i (home of i-residents) at time t must account for both residents and visitors present in Patch i at time t.
Results
The study identifies critical social-behavior mechanisms that can facilitate or eliminate TB infection in vulnerable populations. The results suggest that short-term mobility between heterogeneous patches contributes to significant overall increases in TB prevalence when risk is considered only in terms of direct new-infection transmission, compared to the effect of exogenous reinfection. Exogenous reinfection does, however, increase the risk that comes with large movements of individuals, due to catastrophes or conflict, into TB-free areas.
Conclusions
The study highlights that allowing infected individuals to move from high- to low-TB-prevalence areas (for example via the sharing of treatment and isolation facilities) may lead to a reduction in total TB prevalence in the overall population. The higher the population-size heterogeneity between distinct risk patches, the larger the benefit (lower overall prevalence) under the same “traveling” patterns. Policies need to account for population-specific factors (such as risks inherent in high levels of migration, local and regional mobility patterns, and first-time infection rates) in order to be long-lasting, effective, and to result in low numbers of drug-resistant cases.
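To make the residence-time idea concrete, here is a minimal two-patch sketch using a Lagrangian (residence-time) mixing matrix, in which the effective population of a patch counts both its residents and its visitors. It uses a simple SIS structure with made-up parameters purely for illustration; the article's TB model, with latency and exogenous reinfection, is more detailed.

```python
import numpy as np

# Minimal two-patch SIS sketch with a residence-time (Lagrangian) mixing matrix.
# P[j, i] = average fraction of time a resident of patch j spends in patch i
# (values below are illustrative, not estimates from the article).
P = np.array([[0.8, 0.2],
              [0.1, 0.9]])
N = np.array([5000.0, 1000.0])     # resident population sizes
beta = np.array([0.30, 0.08])      # per-patch transmission risk (patch 1 = high risk)
gamma = 0.05                       # recovery/treatment rate

I = np.array([10.0, 0.0])          # infected residents of each patch
dt, steps = 0.1, 20000
for _ in range(steps):
    eff_N = P.T @ N                # effective population present in each patch
    eff_I = P.T @ I                # infected individuals present in each patch
    # Force of infection felt by each residency group, weighted by time spent in each patch.
    lam = P @ (beta * eff_I / eff_N)
    S = N - I
    I += (lam * S - gamma * I) * dt

print("endemic prevalence by residence patch:", np.round(I / N, 3))
```

Changing the off-diagonal entries of P shows how short-term mobility between the high- and low-risk patches reshapes the prevalence carried by each resident group, which is the kind of scenario comparison the article performs.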
Created: 2017-01-11