Engineered pavements cover a large fraction of cities and offer significant potential for urban heat island mitigation. Although research efforts devoted to pavement materials have increased rapidly, thermal interactions between buildings and the ambient environment are mostly neglected. In this study, numerical models featuring a realistic representation of building-environment thermal interactions were applied to quantify the effect of pavements on the urban thermal environment at multiple scales. The performance of pavements inside the canyon was found to be largely determined by the canyon geometry. In a high-density residential area, modifying pavements had an insignificant effect on wall temperature and building energy consumption. At the regional scale, various pavement types were also found to have a limited cooling effect on land surface temperature and 2-m air temperature for metropolitan Phoenix. In the context of global climate change, the effect of pavement was evaluated in terms of equivalent CO2 emission. The equivalent CO2 emission offset by reflective pavements in urban canyons was only about 13.9–46.6% of that without building canopies, depending on the canyon geometry. This study reveals the importance of building-environment thermal interactions in determining thermal conditions inside the urban canopy.
In this synthesis, we hope to accomplish two things: 1) reflect on how the analysis of the new archaeological cases presented in this special feature adds to previous case studies by revisiting a set of propositions reported in a 2006 special feature, and 2) reflect on four main ideas that are more specific to the archaeological cases: i) societal choices are influenced by robustness–vulnerability trade-offs, ii) there is interplay between robustness–vulnerability trade-offs and robustness–performance trade-offs, iii) societies often get locked in to particular strategies, and iv) multiple positive feedbacks escalate the perceived cost of societal change. We then discuss whether these lock-in traps can be prevented or whether the risks associated with them can be mitigated. We conclude by highlighting how these long-term historical studies can help us to understand current society, societal practices, and the nexus between ecology and society.
What relationships can be understood between resilience and vulnerability in social-ecological systems? In particular, what vulnerabilities are exacerbated or ameliorated by different sets of social practices associated with water management? These questions have been examined primarily through the study of contemporary or recent historic cases. Archaeology extends scientific observation beyond all social memory and can thus illuminate interactions occurring over centuries or millennia. We examined trade-offs of resilience and vulnerability in the changing social, technological, and environmental contexts of three long-term, pre-Hispanic sequences in the U.S. Southwest: the Mimbres area in southwestern New Mexico (AD 650–1450), the Zuni area in northern New Mexico (AD 850–1540), and the Hohokam area in central Arizona (AD 700–1450). In all three arid landscapes, people relied on agricultural systems that depended on physical and social infrastructure that diverted adequate water to agricultural soils. However, investments in infrastructure varied across the cases, as did local environmental conditions. Zuni farming employed a variety of small-scale water control strategies, including centuries of reliance on small runoff agricultural systems; Mimbres fields were primarily watered by small-scale canals feeding floodplain fields; and the Hohokam area had the largest canal system in pre-Hispanic North America. The cases also vary in their historical trajectories: at Zuni, population and resource use remained comparatively stable over centuries, extending into the historic period; in the Mimbres and Hohokam areas, there were major demographic and environmental transformations. Comparisons across these cases thus allow an understanding of factors that promote vulnerability and influence resilience in specific contexts.
Human activity recognition is the task of identifying a person's movement from sensors in a wearable device, such as a smartphone, smartwatch, or medical-grade device. Machine learning is well suited to this task: classification models trained on time-series data from accelerometers and gyroscopes can classify activities accurately. A significant way to improve the accuracy of these models is to preprocess the data, augmenting it so that each activity, or class, is easier for the model to identify.

This paper first explains the design of SigNorm, a new web application that lets users conveniently transform time-series data and view the effects of those transformations in a code-free, browser-based user interface. The second and final section explains my approach to a human activity recognition problem: comparing a preprocessed dataset to an unaugmented one and measuring the difference in accuracy using a one-dimensional convolutional neural network for classification.
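As an illustration of the kind of preprocessing described above, here is a minimal sketch of per-channel z-score normalization of accelerometer windows before they are fed to a 1-D CNN. The array shapes and function name are assumptions for demonstration, not SigNorm's actual code:

```python
import numpy as np

def normalize_windows(x):
    """z-score each window and channel independently.

    x: array of shape (n_windows, n_timesteps, n_channels),
       e.g. segmented accelerometer data.
    Returns a copy with zero mean and unit variance along the time axis.
    """
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    # Guard against flat (constant) windows to avoid division by zero.
    return (x - mean) / np.maximum(std, 1e-8)

# Simulated data: 4 windows of 128 samples from a 3-axis accelerometer.
windows = np.random.default_rng(0).normal(loc=0.5, scale=2.0, size=(4, 128, 3))
z = normalize_windows(windows)
```

After this step every window has comparable scale, so the network does not have to learn away sensor offsets or per-session gain differences.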
The purpose of this study is to determine the feasibility of three widely used wearable sensors in research settings for 24 h monitoring of sleep, sedentary, and active behaviors in middle-aged women.
Methods
Participants were 21 inactive, overweight women (Body Mass Index (BMI): M = 29.27 ± 7.43) aged 30 to 64 years (M = 45.31 ± 9.67). Women were instructed to wear each sensor on the non-dominant hip (ActiGraph GT3X+), wrist (GENEActiv), or upper arm (BodyMedia SenseWear Mini) for 24 h/day and to record daily wake and bed times for one week, over the course of three consecutive weeks. Women received feedback about their daily physical activity and sleep behaviors. Feasibility (i.e., acceptability and demand) was measured using surveys, interviews, and wear time.
Results
Women felt the GENEActiv (94.7%) and SenseWear Mini (90.0%) were easier to wear and preferred their placement (68.4% and 80.0%, respectively) as compared to the ActiGraph (42.9% and 47.6%, respectively). Mean wear time on valid days was similar across sensors (ActiGraph: M = 918.8 ± 115.0 min; GENEActiv: M = 949.3 ± 86.6; SenseWear: M = 928.0 ± 101.8) and well above that of other studies using wake-time-only protocols. Informational feedback was the biggest motivator, while appearance, comfort, and inconvenience were the biggest barriers to wearing sensors. Wear time was valid on 93.9% (ActiGraph), 100% (GENEActiv), and 95.2% (SenseWear) of eligible days. Seven valid days of data were obtained for 61.9%, 95.2%, and 71.4% of participants for the ActiGraph, GENEActiv, and SenseWear, respectively.
Conclusion
Twenty-four hour monitoring over seven consecutive days is a feasible approach in middle-aged women. Researchers should consider participant acceptability and demand, in addition to validity and reliability, when choosing a wearable sensor. More research is needed across populations and study designs.
Validity of the Rapid Eating Assessment for Patients for assessing dietary patterns in NCAA athletes
Athletes may be at risk for developing adverse health outcomes due to poor eating behaviors during college. Because of the complex nature of the diet, it is difficult to isolate the effects of individual food items or specific food groups; eating behavior patterns may better characterize the complex interactions among them. The purpose of this study was to examine the Rapid Eating Assessment for Patients (REAP) survey as a valid tool for analyzing the eating behaviors of NCAA Division-I male and female athletes using pattern identification, and to investigate the relationships between the derived eating behavior patterns and body mass index (BMI) and waist circumference (WC), stratified by sex and by the aesthetic nature of the sport.
Methods
Two independent samples of male (n = 86; n = 139) and female (n = 64; n = 102) collegiate athletes completed the REAP in June-August 2011 (n = 150) and June-August 2012 (n = 241). Principal component analysis (PCA) determined possible factors using wave-1 athletes. Exploratory (EFA) and confirmatory factor analyses (CFA) determined factors accounting for error and confirmed model fit in wave-2 athletes. Wave-2 athletes' BMI and WC were recorded during a physical exam and sport participation determined classification in aesthetic and non-aesthetic sport. Mean differences in eating behavior pattern score were explored. Regression models examined interactions between pattern scores, participation in aesthetic or non-aesthetic sport, and BMI and waist circumference controlling for age and race.
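The factor-extraction step described above can be sketched as follows. This is an illustration on simulated data, not the study's analysis; the simulated item matrix and the 60% variance-retention threshold are assumptions chosen to mirror the reported 5-factor, 60.3%-variance solution:

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulated stand-in for wave-1 data: 150 athletes x 14 REAP item scores.
rng = np.random.default_rng(42)
items = rng.normal(size=(150, 14))

# Fit PCA on the item scores and inspect cumulative explained variance.
pca = PCA().fit(items)
cum = np.cumsum(pca.explained_variance_ratio_)

# Retain the smallest number of components explaining at least 60%
# of sample variance (an illustrative retention rule).
n_factors = int(np.searchsorted(cum, 0.60) + 1)
```

In practice a retained solution like this would then be carried into EFA/CFA on an independent sample, as the study does with the wave-2 athletes.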
Results
A 5-factor PCA solution accounting for 60.3% of sample variance identified fourteen questions for EFA and CFA. The confirmed solution revealed patterns of Desserts, Healthy food, Meats, High-fat food, and Dairy. Differences in pattern scores (mean ± SE) were found: non-aesthetic-sport males had a higher (better) Dessert score than aesthetic-sport males (2.16 ± 0.07 vs. 1.93 ± 0.11). Female aesthetic athletes had higher scores than non-aesthetic female athletes for the Dessert (2.11 ± 0.11 vs. 1.88 ± 0.08), Meat (1.95 ± 0.10 vs. 1.72 ± 0.07), High-fat food (1.70 ± 0.08 vs. 1.46 ± 0.06), and Dairy (1.70 ± 0.11 vs. 1.43 ± 0.07) patterns.
Conclusions
The REAP is a construct-valid tool for assessing dietary patterns in college athletes. Given these varying dietary patterns, college athletes should be evaluated for healthful and unhealthful eating behaviors.
protocols, including within sleep-focused studies. This study seeks to address the accuracy of accelerometer data in detecting the beginnings and ends of sleep bouts in young adults, with polysomnography (PSG) corroboration. An existing algorithm used to differentiate valid/invalid wear time and detect bouts of sleep was modified with the goal of maximizing the accuracy of sleep bout detection.

Methods: Three key decisions and thresholds of the algorithm were modified, with three experimental values tested for each. The main experimental variable, Sleepwindow, controls the amount of time before and after a detected sleep bout that is searched for additional sedentary time to incorporate into the same sleep bout. Results were compared to PSG and sleep diary data for absolute agreement of sleep bout start time (START), end time (END), and time in bed (TIB). Adjustments were made for outliers as well as for sleep latency, snooze time, and the sum of both.

Results: Only adjustments to the sleep window variable altered results. Among 5-, 15-, and 30-minute windows, the 15-minute window incurred the least error and the most agreement with the comparison measures for START, while the 5-minute window was best for END and TIB.

Discussion: Contrary to expectation, corrections for snooze, latency, and both did not substantially improve agreement with PSG. Algorithm-derived estimates of START and END always fell after both sleep diary and PSG, suggesting either that participants' sedentary behavior began and ended at a delay from sleep and wake times, or that the algorithm consistently estimates later times than appropriate. The inclusion of a sleep window variable yields substantial variation in results. A 15-minute window appears best for determining START, while a 5-minute window appears best for END and TIB. Further investigation of the optimal window length per demographic and condition is required.
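The sleep-window merging idea described above can be sketched as follows. The interval representation (start/end in minutes since midnight), the function name, and the default window are illustrative assumptions, not the study's actual implementation:

```python
def merge_with_window(bout, sedentary, window=15):
    """Extend a detected sleep bout with nearby sedentary time.

    bout: (start, end) of the detected sleep bout, in minutes.
    sedentary: list of (start, end) sedentary intervals, in minutes.
    window: how many minutes before/after the bout to search,
            analogous to the Sleepwindow variable.
    """
    start, end = bout
    changed = True
    # Repeat until no sedentary interval within `window` minutes of
    # either edge remains unmerged (merging can chain outward).
    while changed:
        changed = False
        for s, e in sedentary:
            # Sedentary time ending shortly before (or overlapping) the start.
            if (0 <= start - e <= window or s < start <= e) and s < start:
                start, changed = s, True
            # Sedentary time beginning shortly after (or overlapping) the end.
            if (0 <= s - end <= window or s <= end < e) and e > end:
                end, changed = e, True
    return start, end

# A bout from 23:00 (1380) to 06:00 (1800) absorbs sedentary time
# within 15 minutes of its edges.
extended = merge_with_window((1380, 1800), [(1360, 1375), (1805, 1830)])
```

With a larger `window`, more adjacent sedentary time is absorbed into the bout, which is consistent with the finding that the window length trades off accuracy between START and END/TIB estimates.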