Human activity recognition is the task of identifying a person's movement from sensors in a wearable device, such as a smartphone, smartwatch, or medical-grade device. Machine learning, the study of algorithms whose performance improves as they are exposed to data, is well suited to this task: classification models can accurately identify activities from the time-series data produced by accelerometers and gyroscopes. One significant way to improve the accuracy of these models is to preprocess the data, transforming the signals so that each activity, or class, is easier for the model to distinguish.

On this topic, this paper describes the design of SigNorm, a new web application that lets users conveniently transform time-series data and view the effects of those transformations in a code-free, browser-based user interface. The second and final section presents my approach to a human activity recognition problem: comparing a preprocessed dataset against an unaugmented one and measuring the difference in accuracy when a one-dimensional convolutional neural network makes the classifications.
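As a sketch of the kind of preprocessing described above, the snippet below z-score normalizes a simulated three-axis accelerometer trace and slices it into overlapping windows, the usual input shape for a one-dimensional convolutional neural network. This is illustrative only, not SigNorm's actual code; the sampling length, window size, and step are assumed values.

```python
import numpy as np

def zscore(signal):
    # Standardize each channel to zero mean and unit variance.
    mean = signal.mean(axis=0)
    std = signal.std(axis=0)
    return (signal - mean) / np.where(std == 0, 1.0, std)

def window(signal, size, step):
    # Slice a (samples, channels) time series into overlapping
    # windows of shape (size, channels).
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])

# Simulated 3-axis accelerometer trace: 500 samples, 3 channels.
trace = np.random.default_rng(0).normal(size=(500, 3))
windows = window(zscore(trace), size=128, step=64)
print(windows.shape)  # (6, 128, 3)
```

Each resulting window becomes one training example for the classifier, so the window size and overlap directly control how many examples the model sees.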
Drosophila melanogaster has been established as a model organism for investigating developmental gene interactions. The spatio-temporal gene expression patterns of Drosophila melanogaster can be visualized by in situ hybridization and documented as digital images. Automated and efficient tools for analyzing these expression images will provide biological insights into gene functions, interactions, and networks. To facilitate pattern recognition and comparison, many web-based resources have been created to conduct comparative analysis based on body part keywords and the associated images. With the fast accumulation of images from high-throughput techniques, manual inspection of images imposes a serious impediment on the pace of biological discovery. It is thus imperative to design an automated system for efficient image annotation and comparison.
Results
We present a computational framework to perform anatomical keyword annotation for Drosophila gene expression images. The spatial sparse coding approach is used to represent local patches of images, in comparison with the well-known bag-of-words (BoW) method. Three pooling functions, including max pooling, average pooling, and Sqrt (square root of mean squared statistics) pooling, are employed to transform the sparse codes into image features. Based on the constructed features, we develop both an image-level scheme and a group-level scheme to tackle the key challenges in annotating Drosophila gene expression pattern images automatically. To deal with the imbalanced data distribution inherent in image annotation tasks, the undersampling method is applied together with majority vote. Results on Drosophila embryonic expression pattern images verify the efficacy of our approach.
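The three pooling functions can be sketched as follows. The sparse codes and the use of absolute values in max and average pooling are illustrative assumptions; Sqrt pooling follows the definition given above (square root of the mean squared statistics).

```python
import numpy as np

# Sparse codes for one image: rows are local patches, columns are
# dictionary atoms (the sizes here are illustrative).
codes = np.array([[1.0, -2.0],
                  [3.0,  2.0]])

def max_pool(c):
    # Max pooling: largest absolute activation per dictionary atom.
    return np.max(np.abs(c), axis=0)

def avg_pool(c):
    # Average pooling: mean absolute activation per atom.
    return np.mean(np.abs(c), axis=0)

def sqrt_pool(c):
    # Sqrt pooling: square root of the mean squared statistics.
    return np.sqrt(np.mean(c ** 2, axis=0))

print(max_pool(codes))   # [3. 2.]
print(avg_pool(codes))   # [2. 2.]
print(sqrt_pool(codes))  # ≈ [2.236, 2.0]
```

Whatever the number of patches per image, each pooling function collapses the patch dimension, so every image yields a fixed-length feature vector with one entry per dictionary atom.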
Conclusion
In our experiment, the three pooling functions perform comparably well in feature dimension reduction. The undersampling with majority vote is shown to be effective in tackling the problem of imbalanced data. Moreover, combining sparse coding with the image-level scheme leads to consistent performance improvement in keyword annotation.
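A minimal sketch of undersampling with majority vote on a toy two-class dataset follows. The nearest-centroid rule stands in for the actual classifier and is purely illustrative, as are the dataset sizes.

```python
import numpy as np

def undersample_vote(X, y, n_models, rng):
    # Train several balanced classifiers by undersampling the majority
    # class, then combine their predictions by majority vote.
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    votes = []
    for _ in range(n_models):
        sampled = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sampled])
        # Placeholder "model": nearest-centroid rule on the balanced subset.
        c1 = X[idx][y[idx] == 1].mean(axis=0)
        c0 = X[idx][y[idx] == 0].mean(axis=0)
        d1 = np.linalg.norm(X - c1, axis=1)
        d0 = np.linalg.norm(X - c0, axis=1)
        votes.append((d1 < d0).astype(int))
    return (np.mean(votes, axis=0) >= 0.5).astype(int)

# Toy imbalanced data: 10 minority points near (5, 5), 100 majority near (0, 0).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(5.0, size=(10, 2)), rng.normal(0.0, size=(100, 2))])
y = np.array([1] * 10 + [0] * 100)
pred = undersample_vote(X, y, n_models=5, rng=rng)
```

Because each round draws a different random subset of the majority class, the vote across rounds uses more of the majority data than any single balanced subset would.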
The purpose of this study is to determine the feasibility of three widely used wearable sensors in research settings for 24 h monitoring of sleep, sedentary, and active behaviors in middle-aged women.
Methods
Participants were 21 inactive, overweight (M Body Mass Index (BMI) = 29.27 ± 7.43) women, 30 to 64 years (M = 45.31 ± 9.67). Women were instructed to wear each sensor on the non-dominant hip (ActiGraph GT3X+), wrist (GENEActiv), or upper arm (BodyMedia SenseWear Mini) for 24 h/day and record daily wake and bed times for one week over the course of three consecutive weeks. Women received feedback about their daily physical activity and sleep behaviors. Feasibility (i.e., acceptability and demand) was measured using surveys, interviews, and wear time.
Results
Women felt the GENEActiv (94.7 %) and SenseWear Mini (90.0 %) were easier to wear and preferred their placement (68.4 % and 80.0 %, respectively) as compared to the ActiGraph (42.9 % and 47.6 %, respectively). Mean wear time on valid days was similar across sensors (ActiGraph: M = 918.8 ± 115.0 min; GENEActiv: M = 949.3 ± 86.6; SenseWear: M = 928.0 ± 101.8) and well above that reported by other studies using wake-time-only protocols. Informational feedback was the biggest motivator, while appearance, comfort, and inconvenience were the biggest barriers to wearing the sensors. Wear time was valid on 93.9 % (ActiGraph), 100 % (GENEActiv), and 95.2 % (SenseWear) of eligible days. Seven valid days of data were obtained from 61.9 %, 95.2 %, and 71.4 % of participants for the ActiGraph, GENEActiv, and SenseWear, respectively.
Conclusion
Twenty-four hour monitoring over seven consecutive days is a feasible approach in middle-aged women. Researchers should consider participant acceptability and demand, in addition to validity and reliability, when choosing a wearable sensor. More research is needed across populations and study designs.
Weight gain during the childbearing years and failure to lose pregnancy weight after birth contribute to the development of obesity in postpartum Latinas.
Methods
Madres para la Salud [Mothers for Health] was a 12-month, randomized controlled trial exploring a social support intervention with moderate-intensity physical activity (PA) seeking to effect changes in body fat, fat tissue inflammation, and depression symptoms in sedentary postpartum Latinas. This report describes the efficacy of the Madres intervention.
Results
The results show that while social support increased during active intervention delivery, it declined to pre-intervention levels by the end of the intervention. There were significant increases in aerobic and total steps across the 12 months of the intervention, and declines in body adiposity assessed with bioelectrical impedance.
Conclusions
Social support from family and friends mediated increases in aerobic PA, resulting in a decrease in percent body fat.
Background:
Drosophila gene expression pattern images document the spatiotemporal dynamics of gene expression during embryogenesis. A comparative analysis of these images could provide a fundamentally important way of studying the regulatory networks governing development. To facilitate pattern comparison and searching, groups of images in the Berkeley Drosophila Genome Project (BDGP) high-throughput study were manually annotated with a variable number of anatomical terms from a controlled vocabulary. Considering that the number of available images is rapidly increasing, it is imperative to design computational methods to automate this task.
Results:
We present a computational method to annotate gene expression pattern images automatically. The proposed method uses the bag-of-words scheme to utilize the existing information on pattern annotation and annotates images using a model that exploits correlations among terms. The proposed method can annotate images individually or in groups (e.g., according to the developmental stage). In addition, the proposed method can integrate information from different two-dimensional views of embryos. Results on embryonic patterns from BDGP data demonstrate that our method significantly outperforms other methods.
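The bag-of-words representation over a controlled vocabulary can be sketched as follows; the vocabulary terms and image annotations here are hypothetical examples, not BDGP data.

```python
from collections import Counter

def bow_vector(group_terms, vocabulary):
    # Count how often each controlled-vocabulary term appears across
    # the annotations of a group of images, in fixed vocabulary order.
    counts = Counter(t for image_terms in group_terms for t in image_terms)
    return [counts[term] for term in vocabulary]

# Hypothetical vocabulary and per-image annotations for one group.
vocab = ["ventral nerve cord", "brain primordium", "midgut"]
group = [["brain primordium", "midgut"], ["midgut"], ["ventral nerve cord"]]
print(bow_vector(group, vocab))  # [1, 1, 2]
```

Representing each image group as such a fixed-length count vector is what lets a model learn and exploit correlations among vocabulary terms.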
Conclusion:
The proposed bag-of-words scheme is effective in representing a set of annotations assigned to a group of images, and the model employed to annotate images successfully captures the correlations among different controlled vocabulary terms. The integration of existing annotation information from multiple embryonic views improves annotation performance.