Matching Items (127)
Description
The purpose of this study was to investigate the effect of partial exemplar experience on category formation and use. Participants had either complete or limited access to the three dimensions that defined the categories, with the dimensions presented within different modalities. The concept of a "crucial dimension" was introduced and the role it plays in category definition was explained. It was hypothesized that the effects of partial experience are explained not by a shifting of attention between dimensions (Taylor & Ross, 2009) but rather by an increased reliance on prototypical values used to fill in missing information during incomplete experiences. Results indicated that participants (1) do not fill in missing information with prototypical values, (2) integrate information less efficiently between different modalities than within a single modality, and (3) have difficulty learning only when partial experience prevents access to diagnostic information.
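The "fill-in with prototypical values" hypothesis tested here can be sketched in code. This is an illustrative assumption-laden sketch, not the study's materials: the three-dimension stimuli, the two category prototypes, and all names below are hypothetical. A missing stimulus dimension is replaced by a candidate category prototype's value before a nearest-prototype classification is made.

```python
import numpy as np

def fill_in(stimulus, prototype):
    """Replace missing (NaN) dimensions with the prototype's values."""
    stimulus = np.asarray(stimulus, dtype=float)
    return np.where(np.isnan(stimulus), prototype, stimulus)

def classify(stimulus, prototypes):
    """Nearest-prototype classification after prototype-based fill-in."""
    dists = {
        label: np.linalg.norm(fill_in(stimulus, proto) - proto)
        for label, proto in prototypes.items()
    }
    return min(dists, key=dists.get)

# Two hypothetical categories defined over three dimensions.
prototypes = {"A": np.array([1.0, 1.0, 1.0]),
              "B": np.array([5.0, 5.0, 5.0])}

# The middle dimension is unavailable (partial exemplar experience).
print(classify([1.2, np.nan, 0.8], prototypes))  # → A
```

Because a filled-in dimension always matches the prototype exactly, fill-in effectively restricts the distance computation to the observed dimensions, which is what makes the hypothesis empirically separable from attention-shifting accounts.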
Contributors: Crawford, Thomas (Author) / Homa, Donald (Thesis advisor) / McBeath, Michael (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Intuitive decision making refers to decision making based on situational pattern recognition, which happens without deliberation. It is a fast and effortless process that occurs without complete awareness. Moreover, it is believed that implicit learning is one means by which a foundation for intuitive decision making is developed. Accordingly, the present study investigated several factors that affect implicit learning and the development of intuitive decision making in a simulated real-world environment: (1) simple versus complex situational patterns; (2) the diversity of the patterns to which an individual is exposed; (3) the underlying mechanisms. The results showed that simple patterns led to higher levels of implicit learning and intuitive decision-making accuracy than complex patterns; increased diversity enhanced implicit learning and intuitive decision-making accuracy; and an embodied mechanism, labeling, contributed to the development of intuitive decision making in a simulated real-world environment. The results suggest that simulated real-world environments can provide the basis for training intuitive decision making, that diversity is influential in that training process, and that labeling contributes to the development of intuitive decision making. These results are interpreted in the context of applied situations such as military applications involving remotely piloted aircraft.
Contributors: Covas-Smith, Christine Marie (Author) / Cooke, Nancy J. (Thesis advisor) / Patterson, Robert (Committee member) / Glenberg, Arthur (Committee member) / Homa, Donald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Real-world environments are characterized by non-stationary and continuously evolving data. Learning a classification model on this data would require a framework that is able to adapt itself to newer circumstances. Under such circumstances, transfer learning has come to be a dependable methodology for improving classification performance with reduced training costs and without the need for explicit relearning from scratch. In this thesis, a novel instance transfer technique that adapts a "Cost-sensitive" variation of AdaBoost is presented. The method capitalizes on the theoretical and functional properties of AdaBoost to selectively reuse outdated training instances obtained from a "source" domain to effectively classify unseen instances occurring in a different, but related, "target" domain. The algorithm is evaluated on real-world classification problems, namely accelerometer-based 3D gesture recognition, smart home activity recognition, and text categorization. The performance on these datasets is analyzed and evaluated against popular boosting-based instance transfer techniques. In addition, supporting empirical studies that investigate some of the less explored bottlenecks of boosting-based instance transfer methods are presented, to help assess the suitability and effectiveness of this form of knowledge transfer.
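The core idea of boosting-based instance transfer can be sketched as follows, in the style of TrAdaBoost (a hedged baseline sketch of the family of techniques the thesis compares against, not the cost-sensitive variant proposed in it): source-domain instances that the weak learner misclassifies are treated as outdated and down-weighted, while misclassified target instances are up-weighted as in ordinary AdaBoost.

```python
import numpy as np

def decision_stump(X, y, w):
    """Best single-feature threshold classifier under instance weights w."""
    best_err, best = np.inf, None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = np.where(X[:, f] >= thr, sign, -sign)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (f, thr, sign)
    f, thr, sign = best
    return (lambda Z, f=f, thr=thr, sign=sign:
            np.where(Z[:, f] >= thr, sign, -sign)), best_err

def instance_transfer_boost(Xs, ys, Xt, yt, rounds=10):
    """TrAdaBoost-style transfer; labels in {-1, +1}, Xs/ys from the source."""
    X, y = np.vstack([Xs, Xt]), np.concatenate([ys, yt])
    n_s = len(ys)
    w = np.ones(len(y))
    beta_s = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / rounds))
    learners = []
    for _ in range(rounds):
        h, err = decision_stump(X, y, w / w.sum())
        err = min(max(err, 1e-10), 0.499)
        beta_t = err / (1.0 - err)
        wrong = h(X) != y
        w[:n_s] *= np.where(wrong[:n_s], beta_s, 1.0)        # fade outdated source data
        w[n_s:] *= np.where(wrong[n_s:], 1.0 / beta_t, 1.0)  # boost hard target data
        learners.append((h, np.log(1.0 / beta_t)))
    return lambda Z: np.where(sum(a * h(Z) for h, a in learners) >= 0, 1, -1)
```

The asymmetric weight update is the transfer mechanism: a source instance repeatedly inconsistent with the target concept has its influence decay geometrically, while the ensemble otherwise behaves like AdaBoost on the target domain.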
Contributors: Venkatesan, Ashok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Li, Baoxin (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Typically, the complete loss or severe impairment of a sense such as vision and/or hearing is compensated through sensory substitution, i.e., the use of an alternative sense for receiving the same information. For individuals who are blind or visually impaired, the alternative senses have predominantly been hearing and touch. For movies, visual content has been made accessible to visually impaired viewers through audio descriptions -- an additional narration that describes scenes, the characters involved and other pertinent details. However, as audio descriptions should not overlap with dialogue, sound effects and musical scores, there is limited time to convey information, often resulting in stunted and abridged descriptions that leave out many important visual cues and concepts. This work proposes a promising multimodal approach to sensory substitution for movies by providing complementary information through haptics, pertaining to the positions and movements of actors, in addition to a film's audio description and audio content. In a ten-minute presentation of five movie clips to ten individuals who were visually impaired or blind, the novel methodology was found to provide an almost two-fold increase in the perception of actors' movements in scenes. Moreover, participants appreciated the overall concept of providing a visual perspective to film through haptics and found it useful.
Contributors: Viswanathan, Lakshmie Narayan (Author) / Panchanathan, Sethuraman (Thesis advisor) / Hedgpeth, Terri (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Learning and transfer were investigated for a categorical structure in which relevant stimulus information could be mapped without loss from one modality to another. The category space was composed of three non-overlapping, linearly-separable categories. Each stimulus was composed of a sequence of on-off events that varied in duration and number of sub-events (complexity). Categories were learned visually, haptically, or auditorily, and transferred to the same or an alternate modality. The transfer set contained old, new, and prototype stimuli, and subjects made both classification and recognition judgments. The results showed an early learning advantage in the visual modality, with transfer performance varying among the conditions in both classification and recognition. In general, classification accuracy was highest for the category prototype, with false recognition of the category prototype higher in the cross-modality conditions. The results are discussed in terms of current theories in modality transfer, and shed preliminary light on categorical transfer of temporal stimuli.
Contributors: Ferguson, Ryan (Author) / Homa, Donald (Thesis advisor) / Goldinger, Stephen (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of humongous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort in inducing classification models. To utilize the possible presence of multiple labeling agents, there have been attempts towards a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real world multimedia pattern recognition applications.
Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question, (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is an inherent imprecision and vagueness in the class label definitions, (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality, and (iv) an active matrix completion algorithm and its application to solve several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on the face recognition and facial expression recognition problems (which are commonly encountered in real world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
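The basic batch-mode active-learning step that these contributions build on can be illustrated with a simple baseline: select a batch of the k most uncertain unlabeled instances for annotation. This sketch is only an illustrative fixed-size, entropy-based heuristic; the dissertation's formulations additionally adapt the batch size and solve relaxed integer programs.

```python
import numpy as np

def select_batch(probs, k):
    """probs: (n_unlabeled, n_classes) predicted class probabilities.
    Returns indices of the k highest-entropy (most uncertain) instances."""
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)      # predictive entropy per instance
    return np.argsort(entropy)[::-1][:k]        # top-k most uncertain

probs = np.array([[0.90, 0.10],    # fairly confident
                  [0.50, 0.50],    # maximally uncertain
                  [0.60, 0.40],
                  [0.99, 0.01]])   # very confident
print(select_batch(probs, 2))  # → [1 2]
```

A known weakness of this greedy top-k rule is that the selected instances can be mutually redundant; formulating the selection jointly, as in the IQP-based contribution above, is one way to trade off informativeness against diversity within the batch.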
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In recent years, machine learning and data mining technologies have received growing attention in several areas such as recommendation systems, natural language processing, speech and handwriting recognition, image processing and the biomedical domain. Many of these applications that deal with physiological and biomedical data require person-specific or person-adaptive systems. The greatest challenge in developing such systems is subject-based variability in physiological and biomedical data, which leads to differences in data distributions, making the task of modeling these data using traditional machine learning algorithms complex and challenging. As a result, despite the wide application of machine learning, efficient deployment of its principles to model real-world data is still a challenge. This dissertation addresses the problem of subject-based variability in physiological and biomedical data and proposes person-adaptive prediction models based on novel transfer and active learning algorithms, emerging fields in machine learning. One of the significant contributions of this dissertation is a person-adaptive method, for early detection of muscle fatigue using Surface Electromyogram signals, based on a new multi-source transfer learning algorithm. This dissertation also proposes a subject-independent algorithm for grading the progression of muscle fatigue in a test subject on a scale from 0 to 1, during isometric or dynamic contractions, in real time. Besides subject-based variability, biomedical image data also vary due to variations in imaging techniques, leading to distribution differences between image databases. Hence a classifier learned on one database may perform poorly on another.
Another significant contribution of this dissertation has been the design and development of an efficient biomedical image data annotation framework, based on a novel combination of transfer learning and a new batch-mode active learning method, capable of addressing the distribution differences across databases. The methodologies developed in this dissertation are relevant and applicable to a large set of computing problems where there is a high variation of data between subjects or sources, such as face detection, pose detection and speech recognition. From a broader perspective, these frameworks can be viewed as a first step towards design of automated adaptive systems for real world data.
Contributors: Chattopadhyay, Rita (Author) / Panchanathan, Sethuraman (Thesis advisor) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Most people are experts in some area of information; however, they may not be knowledgeable about other closely related areas. How knowledge is generalized to hierarchically related categories was explored. Past work has found little to no generalization to categories closely related to learned categories. These results do not fit well with other work focusing on attention during and after category learning. The current work attempted to merge these two areas of research by creating a category structure with the best chance of detecting generalization. Participants learned order-level bird categories and family-level wading bird categories. Then participants completed multiple measures to test generalization to old wading bird categories, new wading bird categories, owl and raptor categories, and lizard categories. As expected, the generalization measures converged on a single overall pattern of generalization. No generalization was found, except for already learned categories. This pattern fits well with past work on generalization within a hierarchy, but does not fit well with theories of dimensional attention. Reasons why these findings do not match are discussed, as well as directions for future research.
Contributors: Lancaster, Matthew E. (Author) / Homa, Donald (Thesis advisor) / Glenberg, Arthur (Committee member) / Chi, Michelene (Committee member) / Brewer, Gene (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality--particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities--those modalities that today's computerized devices and displays largely engage--have become overloaded, creating possibilities for distractions, delays and high cognitive load, which in turn can lead to a loss of situational awareness, increasing chances for life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate given that it is our largest sensory organ with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations including high learning curves, limited applicability and/or limited expression. This is largely due to the lack of a versatile, comprehensive design theory--specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework inspired by natural, spoken language is proposed, called Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very different application areas: audio described movies and motor learning.
These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
Contributors: McDaniel, Troy Lee (Author) / Panchanathan, Sethuraman (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Medical images constitute a special class of images that are captured to allow diagnosis of disease, and their "correct" interpretation is vitally important. Because they are not "natural" images, radiologists must be trained to visually interpret them. This training process includes implicit perceptual learning that is gradually acquired over an extended period of exposure to medical images. This dissertation proposes novel computational methods for evaluating and facilitating perceptual training in radiologists. Part 1 of this dissertation proposes eye-tracking-based metrics for measuring the training progress of individual radiologists. Six metrics were identified as potentially useful: time to complete task, fixation count, fixation duration, consciously viewed regions, subconsciously viewed regions, and saccadic length. Part 2 of this dissertation proposes an eye-tracking-based entropy metric for tracking the rise and fall in the interest level of radiologists, as they scan chest radiographs. The results showed that entropy was significantly lower when radiologists were fixating on abnormal regions. Part 3 of this dissertation develops a method that allows extraction of Gabor-based feature vectors from corresponding anatomical regions of "normal" chest radiographs, despite anatomical variations across populations. These feature vectors are then used to develop and compare transductive and inductive computational methods for generating overlay maps that show atypical regions within test radiographs. The results show that the transductive methods produced much better maps than the inductive methods for 20 ground-truthed test radiographs. Part 4 of this dissertation uses an Extended Fuzzy C-Means (EFCM) based instance selection method to reduce the computational cost of transductive methods. The results showed that EFCM substantially reduced the computational cost without a substantial drop in performance.
The dissertation then proposes a novel Variance Based Instance Selection (VBIS) method that also reduces the computational cost, but allows for incremental incorporation of new informative radiographs, as they are encountered. Part 5 of this dissertation develops and demonstrates a novel semi-transductive framework that combines the superior performance of transductive methods with the reduced computational cost of inductive methods. The results showed that the semi-transductive approach provided both an effective and efficient framework for detection of atypical regions in chest radiographs.
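The entropy metric of Part 2 can be illustrated with a hedged sketch (the region encoding and sample scanpaths below are assumptions for illustration, not the dissertation's implementation): fixations dispersed across many regions yield high Shannon entropy, while fixations concentrated on a single region, as when dwelling on an abnormality, yield low entropy.

```python
import math
from collections import Counter

def fixation_entropy(region_ids):
    """Shannon entropy (bits) of the distribution of fixations over regions."""
    counts = Counter(region_ids)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

scanning = ["r1", "r2", "r3", "r4", "r1", "r5", "r6", "r7"]   # dispersed scan
dwelling = ["r3", "r3", "r3", "r3", "r3", "r3", "r2", "r3"]   # focused dwell
print(fixation_entropy(scanning) > fixation_entropy(dwelling))  # → True
```

Computed over a sliding window of fixations, such an entropy signal would fall as a reader's gaze converges on a suspicious region, matching the reported finding that entropy was lower over abnormal regions.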
Contributors: Alzubaidi, Mohammad A. (Author) / Panchanathan, Sethuraman (Thesis advisor) / Black, John A. (Committee member) / Ye, Jieping (Committee member) / Patel, Ameet (Committee member) / Arizona State University (Publisher)
Created: 2012