Matching Items (5)
Description
Most people are experts in some area of information; however, they may not be knowledgeable about other closely related areas. How knowledge is generalized to hierarchically related categories was explored. Past work has found little to no generalization to categories closely related to learned categories. These results do not fit well with other work focusing on attention during and after category learning. The current work attempted to merge these two areas of research by creating a category structure with the best chance of detecting generalization. Participants learned order-level bird categories and family-level wading bird categories. Then participants completed multiple measures to test generalization to old wading bird categories, new wading bird categories, owl and raptor categories, and lizard categories. As expected, the generalization measures converged on a single overall pattern of generalization. No generalization was found, except for already learned categories. This pattern fits well with past work on generalization within a hierarchy, but it does not fit well with theories of dimensional attention. Reasons for this mismatch are discussed, as well as directions for future research.
Contributors: Lancaster, Matthew E (Author) / Homa, Donald (Thesis advisor) / Glenberg, Arthur (Committee member) / Chi, Michelene (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Adapting to one novel condition of a motor task has been shown to generalize to other naïve conditions (i.e., motor generalization). In contrast, learning one task can affect proficiency on another task that is altogether different (i.e., motor transfer). Much more is known about motor generalization than about motor transfer, despite decades of behavioral evidence. Moreover, motor generalization is studied as a probe for understanding how movements in novel situations are affected by previous experiences. Thus, one could assume that the mechanisms underlying transfer from trained to untrained tasks are the same as those known to underlie motor generalization. However, a direct relationship between transfer and generalization has not yet been shown, thereby limiting the assumption that the two rely on the same mechanisms. The purpose of this study was to test whether there is a relationship between motor generalization and motor transfer. To date, ten healthy young adult subjects have been scored on their motor generalization ability and motor transfer ability on various upper extremity tasks. Although the current sample size is too small to clearly identify whether there is a relationship between generalization and transfer, Pearson product-moment correlation results and an a priori power analysis suggest that a significant relationship will be observed once the sample size is increased by 30%. If so, this would suggest that the mechanisms of transfer may be similar to those of motor generalization. (A minimal sketch of this kind of correlation and power analysis follows this record.)
Contributors: Sohani, Priyanka (Author) / Schaefer, Sydney (Thesis advisor) / Daliri, Ayoub (Committee member) / Honeycutt, Claire (Committee member) / Arizona State University (Publisher)
Created: 2018
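
The correlation and power analysis described in the abstract above can be illustrated with a short Python sketch. This is not the thesis's actual code or data: the scores are simulated placeholders, and the sample-size estimate uses a standard Fisher-z approximation as one plausible form of a priori power analysis for a Pearson correlation.

```python
# Hedged sketch: correlate per-subject generalization and transfer scores,
# then approximate the sample size needed to detect the observed correlation.
import numpy as np
from scipy.stats import pearsonr, norm

rng = np.random.default_rng(1)
generalization = rng.normal(size=10)                   # n = 10 subjects (placeholder scores)
transfer = 0.5 * generalization + rng.normal(size=10)  # loosely related placeholder scores

r, p = pearsonr(generalization, transfer)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

def required_n(r, alpha=0.05, power=0.80):
    """Approximate n to detect correlation r (two-tailed) via Fisher's z:
    n = ((z_alpha + z_beta) / z_r)^2 + 3."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    z_r = 0.5 * np.log((1 + r) / (1 - r))  # Fisher z-transform of the effect size
    return int(np.ceil(((z_alpha + z_beta) / z_r) ** 2 + 3))

print("approx. n for 80% power:", required_n(r))
```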
Description
Reactive step and treadmill perturbation training have been shown to improve first-step measurements and reduce falls. However, the effect of variable training on the efficacy of generalization is poorly understood. The objective of this study was to measure whether adding variability to the perturbation training protocol can increase the amount of generalization seen in forward perturbations. The study included 28 young, healthy adults between the ages of 20 and 35 with no known significant medical history. Fifteen participants underwent constant training in one direction with the same belt acceleration (4 m/s²), and thirteen participants underwent variable training in which their foot position and belt acceleration (3 m/s², 4 m/s², 5 m/s²) were randomized throughout the collections. All slips were done in the forward direction, requiring a forward reactive step. To assess the effects of variable training, an independent-samples t-test of the differences in generalization between the two groups was calculated. Primary outcome variables in both groups were margin of stability (MOS), step length, and step latency. Results indicated that variable training produced no significant improvement in generalization across these variables at the p < 0.05 level; the p-values for the difference in generalization of MOS, step length, and step latency were 0.635, 0.225, and 0.148, respectively. Despite the lack of significant evidence for improved generalization with variable training, further investigation is warranted to develop training methods capable of reducing falls in at-risk populations. (A minimal sketch of this kind of group comparison follows this record.)
Contributors: Arroyo, Randall Adrian (Author) / Peterson, Daniel (Thesis director) / Ofori, Edward (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
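
The group comparison in the abstract above is an independent-samples t-test on between-group generalization differences. The sketch below shows that comparison for one outcome (margin of stability); the data are simulated placeholders, not the study's measurements.

```python
# Hedged sketch: compare generalization of margin of stability (MOS) between
# the constant-training (n = 15) and variable-training (n = 13) groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
mos_constant = rng.normal(loc=0.05, scale=0.02, size=15)  # placeholder MOS changes
mos_variable = rng.normal(loc=0.06, scale=0.02, size=13)  # placeholder MOS changes

t_stat, p_value = ttest_ind(mos_constant, mos_variable)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # judged against alpha = 0.05
```

The same test would be repeated for step length and step latency, yielding one p-value per outcome as reported in the abstract.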
Description
Machine learning models can pick up biases and spurious correlations from training data and then project and amplify these biases during inference, posing significant challenges in real-world settings. One approach to mitigating this is a class of methods that identify and filter out bias-inducing samples from the training dataset, so that models avoid being exposed to biases. However, this filtering wastes considerable resources, as most of the dataset that was created is discarded as biased. This work addresses that waste by identifying and quantifying the biases. I further elaborate on the implications of dataset filtering for robustness (to adversarial attacks) and generalization (to out-of-distribution samples). The findings suggest that while dataset filtering does help to improve out-of-distribution (OOD) generalization, it has a significant negative impact on robustness to adversarial attacks. They also show that transforming bias-inducing samples into adversarial samples (instead of eliminating them from the dataset) can significantly boost robustness without sacrificing generalization. (A minimal sketch of this transformation follows this record.)
Contributors: Sachdeva, Bhavdeep Singh (Author) / Baral, Chitta (Thesis advisor) / Liu, Huan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
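
The closing idea in the abstract above, keeping bias-inducing samples but converting them into adversarial samples, can be sketched in PyTorch as follows. This is an illustrative sketch under stated assumptions: a one-step FGSM-style attack, inputs scaled to [0, 1], and a precomputed boolean `bias_mask` from some upstream bias-detection method. It is not the thesis's actual pipeline.

```python
# Hedged sketch: transform flagged bias-inducing samples into adversarial
# samples instead of filtering them out of the training batch.
import torch
import torch.nn.functional as F

def fgsm_transform(model, x, y, epsilon=0.03):
    """One-step FGSM perturbation of x (assumes inputs scaled to [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]  # gradient w.r.t. the inputs only
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()

def debias_batch(model, x, y, bias_mask, epsilon=0.03):
    """Keep bias-inducing samples (bias_mask == True) in the batch, replacing
    them with adversarial versions rather than discarding them."""
    x_out = x.clone()
    if bias_mask.any():
        x_out[bias_mask] = fgsm_transform(model, x[bias_mask], y[bias_mask], epsilon)
    return x_out, y
```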
Description
This dissertation presents novel solutions for improving the generalization capabilities of deep-learning-based computer vision models. Neural networks are known to suffer a large drop in performance when tested on samples from a different distribution than the one on which they were trained. The proposed solutions, based on latent-space geometry and meta-learning, address this issue by improving the robustness of these models to distribution shifts. Through the use of geometrical alignment, state-of-the-art domain adaptation and source-free test-time adaptation strategies are developed. Additionally, geometrical alignment allows classifiers to be progressively adapted to new, unseen test domains without requiring retraining of the feature extractors. The dissertation also presents algorithms for enabling in-the-wild generalization without needing access to any samples from the target domain. Other causes of poor generalization, such as data scarcity in critical applications and training data with high levels of noise and variance, are also explored. To address data scarcity in fine-grained computer vision tasks such as object detection, novel context-aware augmentations are suggested. While the first four chapters focus on general-purpose computer vision models, strategies are also developed to improve robustness in specific applications. The efficiency of training autonomous agents for visual navigation is improved by incorporating semantic knowledge, and the integration of domain experts' knowledge allows for the realization of a low-cost, minimally invasive, generalizable automated rehabilitation system. Lastly, new tools for explainability and model introspection are presented, using counterfactual explainers trained through interval-based uncertainty calibration objectives. (A minimal sketch of one generic form of test-time feature alignment follows this record.)
Contributors: Thopalli, Kowshik (Author) / Turaga, Pavan (Thesis advisor) / Thiagarajan, Jayaraman J (Committee member) / Li, Baoxin (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
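
As one concrete, generic illustration of the test-time alignment theme in the abstract above, the sketch below shifts target-domain feature statistics toward stored source-domain statistics without retraining the feature extractor. This is a deliberately simplified stand-in, not the dissertation's latent-space geometry methods; the backbone/classifier split and the stored source statistics are assumptions.

```python
# Hedged sketch: source-free test-time feature alignment via per-dimension
# statistics matching (whiten with target stats, re-color with source stats).
import torch

def align_features(feats, source_mean, source_std, eps=1e-6):
    """Align a batch of target features (batch x dim) to source statistics."""
    t_mean = feats.mean(dim=0)
    t_std = feats.std(dim=0) + eps
    return (feats - t_mean) / t_std * source_std + source_mean

# Usage (names assumed): feats = backbone(x_target)
#                        logits = classifier(align_features(feats, mu_src, sd_src))
```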