The purpose of this study was to examine demographic and geographic disparities in the incidence of Neonatal Abstinence Syndrome (NAS) among newborns in the United States from 2012 to 2015. Specifically, I examined the prevalence of NAS by geographic location (i.e., urban versus rural) and race while controlling for mother’s insurance type, median household income, and trends over time. Additional analyses explored the relationship between NAS and delivery method, birth weight, and neonatal candidiasis leading to sepsis. Understanding the disparities in NAS and birth outcomes during this period (2012–2015) can help better target interventions for combating the health and economic burdens of NAS, since maternal opioid use has continued to rise since 2015. Additionally, existing research on geographic disparities in NAS has been region-specific. This study expands the scope of this literature by considering urban versus rural disparities across the country.
This project explores potential reasons for the discrepancies between state responses to the COVID-19 pandemic, with a particular focus on a possible correlation between political ideology and the timing of a state’s nonpharmaceutical intervention policies. In addition to outlining the current literature on the preferences associated with conservative and liberal ideology, examples of both past and present science-based pandemic responses are described. Given the current understanding of the social and economic dimensions of conservative and liberal political ideology, it was hypothesized that there may be a positive correlation between conservative ideology and premature action by a state. Data were collected on the current ideological landscape and the daily COVID-19 case counts of each state, in addition to tracking each state’s policy changes. Two correlation tests were performed; neither found a significant positive or negative correlation between the two variables.
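The two correlation tests can be sketched in a few lines of self-contained code. The state-level values below are invented placeholders (a hypothetical per-state conservatism score and days until a first statewide intervention order), not the study's actual data:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson applied to the ranked values."""
    return pearson(ranks(x), ranks(y))

# Invented per-state placeholders (higher score = more conservative;
# days from first reported case to first statewide intervention order).
conservatism = [0.2, 0.8, 0.5, 0.9, 0.1, 0.6, 0.4, 0.7]
days_to_first_npi = [14, 21, 18, 25, 12, 16, 20, 15]

print(f"Pearson r   = {pearson(conservatism, days_to_first_npi):.3f}")
print(f"Spearman rho = {spearman(conservatism, days_to_first_npi):.3f}")
```

A significance test on these coefficients, as the study performed, would additionally require a p-value, e.g. from a permutation test or a statistics library.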
Despite differences in schooling and clinical experience prior to practice, advanced practice providers often have similar scopes of practice, which raises concerns about the quality of care being provided. In this paper, we explore whether prescribing patterns are comparable between provider types by comparing differences in time spent on pharmacological interventions in a simulated healthcare environment. Physicians (MDs and DOs), Nurse Practitioners (NPs), and Physician Assistants (PAs) actively practicing in Family Practice/Medicine or Internal Medicine and holding a U.S. state license/recognition were recruited at healthcare conferences and simulation centers. Participants were given 20 minutes to complete a consultation with a Standardized Patient (SP) presenting with a chief complaint of a post-hospitalization follow-up for heart failure, fatigue, and some edema. All encounters were recorded and uploaded for review by undergraduate evaluators, who quantified the amount of time participants spent on each task category, including pharmacologic interventions. Across the 46 participants in this study, the average time spent discussing pharmacological interventions per visit was 14.8 seconds for MDs/DOs, 29.2 seconds for NPs, and 38.8 seconds for PAs. The results suggest that, compared with physicians (MDs/DOs), PAs spent significantly more time discussing pharmacological interventions (p = 0.0028) and were significantly more likely to discuss them at all (p = 0.0243). It is important to note that the sample size of PAs was very small (N = 9), which could skew the results and may not be representative of the population. With limited literature examining whether time spent discussing pharmacological interventions is comparable across provider types, more simulated healthcare research is needed on this topic.
Methods: A standard NLP process was used for this study: a gold standard was reached through matched-pair annotations of the forum text in brat, and a neural network was trained on the content. Following the annotation process, adjudication was performed to increase inter-annotator agreement. Categories were developed by local physicians to describe the questions, and three pilots were run to test the best way to categorize them.
Results: For the annotation activity, inter-annotator agreement, calculated as an F-score at a 0.7 match threshold, was 0.378 before adjudication and increased to 0.560 after adjudication. Pilots 1, 2, and 3 of the categorization activity had inter-annotator agreements of 0.375, 0.500, and 0.966, respectively.
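As a rough sketch of how an F-score between two annotators can be computed, the snippet below compares two invented sets of (start, end, label) span annotations using exact matches; the study itself scored partial overlaps at a 0.7 threshold in brat, and the spans and labels here are illustrative only:

```python
def f_score(ann_a, ann_b):
    """F1 agreement between two sets of (start, end, label) annotations,
    treating one annotator as reference and the other as prediction."""
    if not ann_a or not ann_b:
        return 0.0
    matched = ann_a & ann_b          # exact span-and-label matches
    precision = len(matched) / len(ann_b)
    recall = len(matched) / len(ann_a)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Invented example spans; the second annotator's "Symptom" span ends
# one character early, so only 2 of 3 annotations match exactly.
annotator_1 = {(0, 12, "Question"), (20, 35, "Symptom"), (40, 55, "Drug")}
annotator_2 = {(0, 12, "Question"), (20, 34, "Symptom"), (40, 55, "Drug")}

print(f"F-score: {f_score(annotator_1, annotator_2):.3f}")  # F-score: 0.667
```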
Discussion: The inter-annotator agreement of the annotation activity may have been low initially because the annotators were students who may not have been invested enough in the project to annotate the text accurately. Also, because everyone interprets text slightly differently, individual interpretation may have contributed to the differences between the matched pairs’ annotations. The F-score variation in the categorization activity was partly attributable to how the instructions were delivered and partly to the participants’ area of study. The first pilot did not mandate use of the original context located in brat, and the instructions were provided as a downloadable document; the participants were computer science graduate students. The second pilot also delivered the instructions via a document, but it was strongly suggested that the context be used to gain an understanding of the questions’ meanings. The participants were again computer science graduate students, who, in a discussion of their results after the pilot, said they did not have a good understanding of the medical jargon in the posts. The final pilot used a combination of students with and without medical backgrounds, required use of the context, and paired verbal instructions with the written ones. The combination of these factors increased the F-score significantly. For a full-scale experiment, students with a medical background should be used to categorize the questions.
In this article, we explore how independently reported measures of subjects' cognitive capabilities, preferences, and sociodemographic characteristics relate to their behavior in a real-effort moral dilemma experiment. To do this, we use a unique dataset, the Chapman Preferences and Characteristics Instrument Set (CPCIS), which contains over 30 standardized measures of preferences and characteristics. We find that simple correlation analysis provides an incomplete picture of how individual measures relate to behavior. In contrast, clustering subjects into groups based on observed behavior in the real-effort task reveals important differences in individual characteristics across groups. However, while clustering uncovers more differences, these differences are not systematic and are difficult to interpret. These results indicate a need for a more comprehensive theory explaining how combinations of different individual characteristics impact behavior.
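The cluster-then-compare approach can be illustrated with a minimal k-means sketch: group subjects by two behavioral features, then compare an independently reported characteristic across the resulting groups. The behavioral features, the risk-tolerance measure, and all values below are invented placeholders, not CPCIS data:

```python
from statistics import mean

def kmeans(points, k, iters=50):
    """Minimal k-means with deterministic initialization (first k points)."""
    centers = [points[i] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centers[c])))
                  for p in points]
        # Recompute each center as the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(mean(col) for col in zip(*members))
    return labels

# Invented per-subject data: (normalized effort, share of "moral" choices),
# plus one independently reported characteristic (hypothetical risk tolerance).
behavior = [(0.90, 0.80), (0.20, 0.30), (0.85, 0.75),
            (0.25, 0.20), (0.80, 0.90), (0.30, 0.25)]
risk_tolerance = [6.1, 3.2, 5.8, 2.9, 6.4, 3.0]

labels = kmeans(behavior, 2)
for c in range(2):
    vals = [r for r, lab in zip(risk_tolerance, labels) if lab == c]
    print(f"cluster {c}: mean risk tolerance = {mean(vals):.2f}")
```

Comparing a characteristic's distribution across behavior-defined clusters (here via group means) mirrors the article's approach; a full analysis would test such differences statistically.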