Recent studies suggest a role for the microbiota in autism spectrum disorders (ASD), potentially arising from their role in modulating the immune system and gastrointestinal (GI) function, or from gut–brain interactions that are either dependent on or independent of the immune system. GI problems such as chronic constipation and/or diarrhea are common in children with ASD and significantly worsen their behavior and quality of life. Here we first summarize previously published data supporting that GI dysfunction is common in individuals with ASD, along with the role of the microbiota in ASD. Second, by comparison with other publicly available microbiome datasets, we provide some evidence that the shifted microbiota can be a result of westernization and that this shift could also be shaping an altered immune system. Third, we explore the possibility that gut–brain interactions could also be a direct result of microbially produced metabolites.
There is a growing body of scientific evidence that the health of the microbiome (the trillions of microbes that inhabit the human host) plays an important role in maintaining the health of the host and that disruptions in the microbiome may contribute to certain disease processes. An increasing number of research studies have provided evidence that the composition of the gut (enteric) microbiome (GM) in at least a subset of individuals with autism spectrum disorder (ASD) deviates from what is usually observed in typically developing individuals. Several lines of research suggest that specific changes in the GM could be causative of, or strongly associated with, core and associated ASD symptoms, pathology, and comorbidities, including gastrointestinal symptoms, although it is also possible that these changes, in whole or in part, are a consequence of underlying pathophysiological features associated with ASD. If the GM truly plays a causative role in ASD, however, then manipulation of the GM could potentially be leveraged as a therapeutic approach to improve ASD symptoms and/or comorbidities, including gastrointestinal symptoms.
One approach to investigating this possibility in greater detail is a highly controlled clinical trial in which the GM is systematically manipulated to determine its significance in individuals with ASD. To outline the important issues involved in designing such a study, a group of clinicians, research scientists, and parents of children with ASD participated in an interdisciplinary daylong workshop as an extension of the 1st International Symposium on the Microbiome in Health and Disease with a Special Focus on Autism (www.microbiome-autism.com). The group considered several aspects of designing clinical studies, including clinical trial design, treatments that could potentially be used in a clinical trial, appropriate ASD participants for the clinical trial, behavioral and cognitive assessments, important biomarkers, safety concerns, and ethical considerations. Overall, the group felt not only that this is a promising area of research and a promising avenue for potential treatment for the ASD population, but also that further basic and translational research is needed to clarify the clinical utility of such treatments and to elucidate the possible mechanisms responsible for a clinical response, so that new treatments and approaches may be discovered and/or fostered in the future.
Human activity recognition is the task of identifying a person's movement from sensors in a wearable device, such as a smartphone, smartwatch, or medical-grade device. Machine learning, the study of algorithms that learn and improve automatically from large amounts of data, is well suited to this task: classification models can accurately classify activities from the time-series data produced by accelerometers and gyroscopes. A significant way to improve the accuracy of these machine learning models is to preprocess the data, essentially augmenting it so that each activity, or class, is easier for the model to identify.

On this topic, this paper first explains the design of SigNorm, a new web application that lets users conveniently transform time-series data and view the effects of those transformations in a code-free, browser-based user interface. The second and final section presents my approach to a human activity recognition problem, which involves comparing a preprocessed dataset to an un-augmented one and measuring the difference in accuracy using a one-dimensional convolutional neural network to make classifications.
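As one concrete illustration of the kind of preprocessing transformation described above, the sketch below z-score normalizes each sensor channel within sliding windows of accelerometer data. This is a generic example under assumed array shapes, not SigNorm's actual code; the function and variable names are hypothetical.

```python
import numpy as np

def zscore_normalize(windows):
    """Z-score normalize each sensor channel within each window.

    windows: array of shape (n_windows, window_len, n_channels),
    e.g. sliding windows of tri-axial accelerometer samples.
    After normalization, each channel in each window has mean 0
    and standard deviation 1, which typically helps a 1D CNN train.
    """
    mean = windows.mean(axis=1, keepdims=True)
    std = windows.std(axis=1, keepdims=True)
    std[std == 0] = 1.0  # avoid division by zero on flat channels
    return (windows - mean) / std

# Example: 2 windows of 128 samples from a 3-axis accelerometer
data = np.random.default_rng(0).normal(loc=2.0, scale=5.0, size=(2, 128, 3))
norm = zscore_normalize(data)
```

Per-window normalization (rather than normalizing over the whole recording) keeps each training example self-contained, so the model sees the shape of the motion rather than its absolute offset.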
The purpose of this study is to determine the feasibility of three widely used wearable sensors in research settings for 24 h monitoring of sleep, sedentary, and active behaviors in middle-aged women.
Methods
Participants were 21 inactive, overweight (M Body Mass Index (BMI) = 29.27 ± 7.43) women, 30 to 64 years (M = 45.31 ± 9.67). Women were instructed to wear each sensor on the non-dominant hip (ActiGraph GT3X+), wrist (GENEActiv), or upper arm (BodyMedia SenseWear Mini) for 24 h/day and record daily wake and bed times for one week over the course of three consecutive weeks. Women received feedback about their daily physical activity and sleep behaviors. Feasibility (i.e., acceptability and demand) was measured using surveys, interviews, and wear time.
Results
Women felt the GENEActiv (94.7 %) and SenseWear Mini (90.0 %) were easier to wear and preferred their placement (68.4 and 80.0 %, respectively) compared to the ActiGraph (42.9 and 47.6 %, respectively). Mean wear time on valid days was similar across sensors (ActiGraph: M = 918.8 ± 115.0 min; GENEActiv: M = 949.3 ± 86.6; SenseWear: M = 928.0 ± 101.8) and well above that reported by other studies using wake-time-only protocols. Informational feedback was the biggest motivator, while appearance, comfort, and inconvenience were the biggest barriers to wearing sensors. Wear time was valid on 93.9 % (ActiGraph), 100 % (GENEActiv), and 95.2 % (SenseWear) of eligible days. Seven valid days of data were obtained for 61.9 % (ActiGraph), 95.2 % (GENEActiv), and 71.4 % (SenseWear) of participants.
Conclusion
Twenty-four hour monitoring over seven consecutive days is a feasible approach in middle-aged women. Researchers should consider participant acceptability and demand, in addition to validity and reliability, when choosing a wearable sensor. More research is needed across populations and study designs.
Inhibition by ammonium at concentrations above 1000 mgN/L is known to harm the methanogenesis phase of anaerobic digestion. We anaerobically digested swine waste and achieved steady-state COD-removal efficiency of around 52% with no fatty-acid or H₂ accumulation. As the anaerobic microbial community adapted to the gradual increase of total ammonia-N (NH₃-N) from 890 ± 295 to 2040 ± 30 mg/L, the Bacterial and Archaeal communities became less diverse. Phylotypes most closely related to hydrogenotrophic Methanoculleus (36.4%) and Methanobrevibacter (11.6%), along with acetoclastic Methanosaeta (29.3%), became the most abundant Archaeal sequences during acclimation. This was accompanied by a sharp increase in the relative abundances of phylotypes most closely related to acetogens and fatty-acid producers (Clostridium, Coprococcus, and Sphaerochaeta) and syntrophic fatty-acid Bacteria (Syntrophomonas, Clostridium, Clostridiaceae species, and Cloacamonaceae species) that have metabolic capabilities for butyrate and propionate fermentation, as well as for reverse acetogenesis. Our results provide evidence countering a prevailing theory that acetoclastic methanogens are selectively inhibited when the total ammonia-N concentration is greater than ~1000 mgN/L. Instead, acetoclastic and hydrogenotrophic methanogens coexisted in the presence of total ammonia-N of ~2000 mgN/L by establishing syntrophic relationships with fatty-acid fermenters, as well as homoacetogens able to carry out forward and reverse acetogenesis.
Validity of the Rapid Eating Assessment for Patients for assessing dietary patterns in NCAA athletes
Athletes may be at risk for developing adverse health outcomes due to poor eating behaviors during college. Because of the complex nature of the diet, it is difficult to attribute risk to the inclusion or exclusion of individual food items or specific food groups. Eating behavior patterns may better characterize the complex interactions between individual food items and specific food groups. The purpose of this study was to examine the Rapid Eating Assessment for Patients (REAP) survey as a valid tool for analyzing the eating behaviors of NCAA Division-I male and female athletes using pattern identification. A secondary aim was to investigate the relationships between the derived eating behavior patterns and body mass index (BMI) and waist circumference (WC), stratifying by sex and the aesthetic nature of the sport.
Methods
Two independent samples of male (n = 86; n = 139) and female (n = 64; n = 102) collegiate athletes completed the REAP in June-August 2011 (n = 150) and June-August 2012 (n = 241). Principal component analysis (PCA) determined possible factors using wave-1 athletes. Exploratory (EFA) and confirmatory factor analyses (CFA) determined factors accounting for error and confirmed model fit in wave-2 athletes. Wave-2 athletes' BMI and WC were recorded during a physical exam, and sport participation determined classification into aesthetic and non-aesthetic sports. Mean differences in eating behavior pattern scores were explored. Regression models examined interactions between pattern scores, participation in aesthetic or non-aesthetic sport, and BMI and WC, controlling for age and race.
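The principal component step described above can be sketched in a few lines. The example below is a generic illustration of PCA on an item correlation matrix with synthetic Likert-style data, not the study's actual analysis; the function and variable names are hypothetical, and the 60.3 % variance target is simply borrowed from the Results as a stopping criterion.

```python
import numpy as np

def pca_factors(responses, var_target=0.603):
    """PCA on survey item responses via the correlation matrix.

    responses: (n_respondents, n_items) array of item scores.
    Returns the eigenvalues in descending order and the number of
    components needed to reach the target share of explained variance.
    """
    corr = np.corrcoef(responses, rowvar=False)        # items x items
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
    explained = np.cumsum(eigvals) / eigvals.sum()
    n_keep = int(np.searchsorted(explained, var_target)) + 1
    return eigvals, n_keep

# Synthetic data: 150 respondents, 14 items driven by 5 latent factors
rng = np.random.default_rng(1)
latent = rng.normal(size=(150, 5))
items = latent @ rng.normal(size=(5, 14)) + 0.5 * rng.normal(size=(150, 14))
eigvals, n_keep = pca_factors(items)
```

In practice, retained components are then rotated and refined through EFA and confirmed with CFA, as the abstract describes; the sketch covers only the initial extraction step.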
Results
A 5-factor PCA solution accounting for 60.3% of sample variance determined fourteen questions for EFA and CFA. A confirmed solution revealed patterns of Desserts, Healthy food, Meats, High-fat food, and Dairy. Pattern score (mean ± SE) differences were found, as non-aesthetic sport males had a higher (better) Dessert score than aesthetic sport males (2.16 ± 0.07 vs. 1.93 ± 0.11). Female aesthetic athletes had a higher score compared to non-aesthetic female athletes for the Dessert (2.11 ± 0.11 vs. 1.88 ± 0.08), Meat (1.95 ± 0.10 vs. 1.72 ± 0.07), High-fat food (1.70 ± 0.08 vs. 1.46 ± 0.06), and Dairy (1.70 ± 0.11 vs. 1.43 ± 0.07) patterns.
Conclusions
REAP is a construct-valid tool for assessing dietary patterns in college athletes. Given the variation in dietary patterns, college athletes should be evaluated for both healthful and unhealthful eating behaviors.
protocols, including within sleep-focused studies. This study addresses the accuracy of accelerometer data in detecting the beginnings and ends of sleep bouts in young adults, with polysomnography (PSG) corroboration. An existing algorithm used to differentiate valid/invalid wear time and detect bouts of sleep was modified with the goal of maximizing the accuracy of sleep bout detection.
Methods
Three key decisions and thresholds of the algorithm were modified, with three experimental values tested for each. The main experimental variable, Sleepwindow, controls the amount of time before and after a detected bout of sleep that is searched for additional sedentary time to incorporate into the same sleep bout. Results were compared to PSG and sleep diary data for absolute agreement on sleep bout start time (START), end time (END), and time in bed (TIB). Adjustments were made for outliers as well as for sleep latency, snooze time, and the sum of both.
Results
Only adjustments to the sleep window variable altered the results. Among 5-, 15-, and 30-minute windows, the 15-minute window incurred the least error and the most agreement with the comparison measures for START, while the 5-minute window was best for END and TIB.
Discussion
Contrary to expectation, corrections for snooze, latency, and both did not substantially improve agreement with PSG. Algorithm-derived estimates of START and END consistently fell later than both sleep diary and PSG times, suggesting either that the onset and end of participants' sedentary behavior lagged their actual sleep and wake times, or that the algorithm consistently estimates later times than appropriate. The choice of sleep window length substantially affects results: a 15-minute window appears best for determining START, while a 5-minute window appears best for END and TIB. Further investigation into the optimal window length per demographic and condition is required.
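The role of a Sleepwindow-style parameter can be illustrated with a minimal sketch: detected sedentary bouts whose gaps fall within the window are folded into a single sleep bout. This is a simplified stand-in for the modified algorithm, not its actual implementation; the function name and the minute-based (start, end) representation are hypothetical.

```python
def merge_sleep_bouts(bouts, sleep_window=15):
    """Merge detected sedentary/sleep bouts whose gap is within
    `sleep_window` minutes, mimicking a sleep-window search that
    folds nearby sedentary time into one sleep bout.

    bouts: list of (start_min, end_min) tuples sorted by start time.
    """
    merged = [list(bouts[0])]
    for start, end in bouts[1:]:
        if start - merged[-1][1] <= sleep_window:
            # Gap is small enough: extend the current sleep bout
            merged[-1][1] = max(merged[-1][1], end)
        else:
            # Gap exceeds the window: begin a new bout
            merged.append([start, end])
    return [tuple(b) for b in merged]

# A 10-minute gap is absorbed; a 40-minute gap starts a new bout
bouts = merge_sleep_bouts([(0, 400), (410, 480), (520, 560)], sleep_window=15)
# → [(0, 480), (520, 560)]
```

A wider window absorbs more mid-night wake time into a single bout (raising TIB estimates), while a narrower window fragments sleep into multiple bouts, which is consistent with the trade-off observed between the 5- and 15-minute settings.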