Critical flicker fusion thresholds (CFFTs) describe the frequency at which rapid amplitude modulations of a light source become undetectable, and are thought to underlie a number of visual processing skills, including reading. Here, we compare two vision-training approaches, contrast-sensitivity training and directional dot-motion training, against an active control group trained on Sudoku, evaluating the three paradigms on their effectiveness at altering CFFT. Both directional dot-motion and contrast-sensitivity training resulted in significant improvement in CFFT, while Sudoku training did not. This finding indicates that dot-motion and contrast-sensitivity training transfer similarly to effect changes in CFFT. The results, combined with prior research linking CFFT to higher-order cognitive processes such as reading ability, and with studies showing a positive impact of both dot-motion and contrast-sensitivity training on reading, suggest a mechanistic link for how these different training approaches affect reading abilities.
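Thresholds of this kind are typically estimated with an adaptive staircase procedure. As a rough illustration only (the study's actual psychophysical procedure is not described in the abstract, and the observer model, threshold value, and noise level below are invented), a simulated 1-up/1-down staircase that halves its step at each reversal might look like:

```python
import random

random.seed(7)

TRUE_CFFT = 42.0  # Hz; hypothetical observer threshold, not from the study

def observer_detects_flicker(freq_hz):
    """Simulated observer: flicker is visible below threshold, with trial noise."""
    return freq_hz + random.gauss(0.0, 1.5) < TRUE_CFFT

freq, step = 20.0, 4.0          # starting frequency (Hz) and step size
reversals, prev_move = [], None
while len(reversals) < 10:
    # Raise the frequency while flicker is still visible, lower it otherwise
    move = "up" if observer_detects_flicker(freq) else "down"
    if prev_move is not None and move != prev_move:
        reversals.append(freq)              # direction change: record a reversal
        step = max(step / 2.0, 0.5)         # shrink step at each reversal
    freq += step if move == "up" else -step
    prev_move = move

# Average the later reversal points as the threshold estimate (discard warm-up)
estimate = sum(reversals[2:]) / len(reversals[2:])
print(round(estimate, 1))
```

The staircase converges on the frequency where the flicker is detected on about half of trials, which is the operational definition of the fusion threshold.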
Although autism spectrum disorder (ASD) is a serious lifelong condition, its underlying neural mechanism remains unclear. Recently, neuroimaging-based classifiers for ASD and typically developed (TD) individuals were developed to identify the abnormality of functional connections (FCs). Due to over-fitting and interferential effects of varying measurement conditions and demographic distributions, no classifiers have been strictly validated for independent cohorts. Here we overcome these difficulties by developing a novel machine-learning algorithm that identifies a small number of FCs that separates ASD versus TD. The classifier achieves high accuracy for a Japanese discovery cohort and demonstrates a remarkable degree of generalization for two independent validation cohorts in the USA and Japan. The developed ASD classifier does not distinguish individuals with major depressive disorder and attention-deficit hyperactivity disorder from their controls but moderately distinguishes patients with schizophrenia from their controls. The results leave open the viable possibility of exploring neuroimaging-based dimensions quantifying the multiple-disorder spectrum.
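As an illustration of the general idea, not the authors' actual algorithm, selecting a small number of discriminative functional connections (FCs) can be sketched as L1-penalized logistic regression fit by proximal gradient descent. All data, dimensions, labels, and hyperparameters below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the cohort data: 100 participants, 200 FC features,
# of which only the first 5 actually differ between groups.
n, p, k = 100, 200, 5
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, n)        # 0 = TD, 1 = ASD (hypothetical labels)
X[y == 1, :k] += 1.0             # shift the informative FCs for one group

def fit_sparse_logreg(X, y, lam=0.1, lr=0.1, steps=2000):
    """L1-penalized logistic regression via proximal gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p_hat = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p_hat - y) / len(y)        # logistic-loss gradient
        w -= lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

w = fit_sparse_logreg(X, y)
selected = np.flatnonzero(w)     # the small set of FCs the penalty retains
acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
print(len(selected), round(float(acc), 2))
```

The L1 penalty zeroes out uninformative connections, leaving a compact feature set, which is the property that makes such a classifier plausible to generalize across independent cohorts.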
Human activity recognition is the task of identifying a person’s movement from sensors in a wearable device, such as a smartphone, smartwatch, or medical-grade device. Machine learning is well suited to this task: classification models trained on large amounts of labeled sensor data can accurately classify activities from the time-series output of accelerometers and gyroscopes. A significant way to improve the accuracy of these models is preprocessing the data, essentially transforming it so that each activity, or class, is easier for the model to identify.

On this topic, this paper first explains the design of SigNorm, a new web application that lets users conveniently transform time-series data and view the effects of those transformations in a code-free, browser-based user interface. The second and final section presents my approach to a human activity recognition problem: comparing a preprocessed dataset to an unaugmented one and measuring the difference in accuracy when a one-dimensional convolutional neural network makes the classifications.
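As a minimal sketch of the kind of transformation such preprocessing involves (SigNorm's actual feature set is not detailed here, and the window shapes, scales, and offsets below are invented), per-channel z-score standardization of windowed accelerometer data looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical accelerometer data: 8 windows of 128 time steps across 3 axes
# (x, y, z), with made-up per-axis scales and offsets (z carries gravity).
windows = rng.standard_normal((8, 128, 3)) * [2.0, 0.5, 9.8] + [0.1, -0.3, 9.8]

def standardize(w):
    """Z-score each channel within each window so all axes share one scale."""
    mean = w.mean(axis=1, keepdims=True)
    std = w.std(axis=1, keepdims=True)
    return (w - mean) / (std + 1e-8)   # epsilon guards against flat channels

normalized = standardize(windows)
print(normalized.shape, round(float(normalized.std()), 3))
```

After standardization every channel has zero mean and unit variance within each window, so no single axis (such as the gravity-dominated z axis) dominates what a convolutional model learns.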