Matching Items (26)
Description
It is commonly known that the left hemisphere (LH) of the brain is more efficient than the right hemisphere (RH) at processing verbal information. One proposal suggests that hemispheric asymmetries in verbal processing are due in part to the efficient use of top-down mechanisms by the LH. Most evidence for this comes from hemispheric semantic priming; fewer studies have investigated verbal memory in the cerebral hemispheres. The goal of the current investigations was to examine how top-down mechanisms influence hemispheric asymmetries in verbal memory and to determine the specific nature of the hypothesized top-down mechanisms. Five experiments were conducted to explore the influence of top-down mechanisms on hemispheric asymmetries in verbal memory. Experiments 1 and 2 used item-method directed forgetting to examine maintenance and inhibition mechanisms. In Experiment 1, participants were cued to remember or forget certain words, and cues were presented either simultaneously with or after the presentation of target words. In Experiment 2, participants were again cued to remember or forget words, but each word was repeated once or four times. Experiments 3 and 4 examined the influence of cognitive load on hemispheric asymmetries in true and false memory: in Experiment 3, cognitive load was imposed during memory encoding, while in Experiment 4 it was imposed during memory retrieval. Finally, Experiment 5 investigated the association between controlled processing in hemispheric semantic priming and the top-down mechanisms used for hemispheric verbal memory. Across all experiments, divided visual field presentation was used to probe verbal memory in the cerebral hemispheres. The results revealed several important findings. First, top-down mechanisms in the LH are primarily used to facilitate verbal processing, but also operate in a domain-general manner in the face of increasing processing demands.
Second, evidence indicates that the RH uses top-down mechanisms minimally and processes verbal information in a more bottom-up manner. These data help clarify the nature of top-down mechanisms used in hemispheric memory and language processing, and build upon current theories that attempt to explain hemispheric asymmetries in language processing.
Contributors: Tat, Michael J (Author) / Azuma, Tamiko (Thesis advisor) / Goldinger, Stephen D (Committee member) / Liss, Julie M (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Frequency effects favoring high print-frequency words have been observed in frequency judgment memory tasks. Healthy young adults performed frequency judgment tasks; one group performed a single task while another group did the same task while alternating their attention to a secondary task (mathematical equations). Performance was assessed by correct and error responses, reaction times, and accuracy. Accuracy and reaction times were analyzed in terms of memory load (task condition), number of repetitions, effect of high vs. low print-frequency, and correlations with working memory span. Multinomial tree analyses were also conducted to investigate source vs. item memory; they revealed a mirror effect in episodic memory experiments (source memory) but a frequency advantage in span tasks (item memory). Interestingly, we did not observe an advantage for high working memory span individuals in frequency judgments, even when participants split their attention during the dual task (similar to a complex span task). However, we concluded that both the amount of attentional resources allocated and prior experience with an item affect how it is stored in memory.
Contributors: Peterson, Megan Paige (Author) / Azuma, Tamiko (Thesis advisor) / Gray, Shelley (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
When people look for things in their environment they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine if they have found that for which they are searching. However, unlike participants in laboratory experiments, searchers in the real world rarely have perfect knowledge regarding the appearance of their target. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous, unhelpful features to the template. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template, and highlight the importance of examining template precision in future research.
Contributors: Hout, Michael C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Reichle, Erik (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Previous research has shown that people can implicitly learn repeated visual contexts and use this information when locating relevant items. For example, when people are presented with repeated spatial configurations of distractor items or distractor identities in visual search, they become faster to find target stimuli in these repeated contexts over time (Chun & Jiang, 1998, 1999). Given that people learn these repeated distractor configurations and identities, might they also implicitly encode semantic information about distractors, if this information is predictive of the target location? We investigated this question with a series of visual search experiments using real-world stimuli within a contextual cueing paradigm (Chun & Jiang, 1998). Specifically, we tested whether participants could learn, through experience, that the target images they are searching for are always located near specific categories of distractors, such as food items or animals. We also varied the spatial consistency of target locations in order to rule out implicit learning of repeated target locations. Results suggest that participants implicitly learned the target-predictive categories of distractors and used this information during search, although these results failed to reach significance. This lack of significance may have been due to the relative simplicity of the search task, however, and several new experiments are proposed to further investigate whether repeated category information can benefit search.
Contributors: Walenchok, Stephen C (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and show resulting difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
Contributors: Helms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Despite the various driver assistance systems and electronics, the threat to the lives of drivers, passengers, and other people on the road persists. With the growth in technology, the use of in-vehicle devices with a plethora of buttons and features is increasing, resulting in greater distraction. Recently, speech recognition has emerged as a less distracting alternative with the potential to be beneficial. However, because the automotive environment is dynamic and noisy, distraction may arise not from manual interaction but from cognitive load. Hence, speech recognition alone cannot be a reliable mode of communication.

This thesis proposes a simultaneous multimodal approach to designing the interface between driver and vehicle, with the goal of enabling the driver to stay attentive to driving tasks and spend less time fiddling with distracting tasks. By analyzing human-human multimodal interaction techniques, new modes especially suitable for the automotive context were identified and tested. The identified modes are touch, speech, graphics, voice-tip, and text-tip. The multiple modes are intended to work collectively to make the interaction more intuitive and natural. In order to obtain a minimalist, user-centered design for the center stack, various design principles, such as the 80/20 rule, contour bias, affordance, and the flexibility-usability trade-off, were applied to the prototypes. The prototype was developed on the Android platform, using the Dragon software development kit for speech recognition.

In the present study, driver behavior was investigated in an experiment conducted on the DriveSafety DS-600s driving simulator. Twelve volunteers drove the simulator under two conditions: (1) accessing the center stack applications using touch only and (2) accessing the applications using speech with an offered text-tip. The duration for which the user looked away from the road (eyes-off-road time) was measured manually for each scenario. Comparison of the results showed that eyes-off-road time was lower in the second scenario. The minimalist design, with 8-10 icons per screen, proved effective, as all readings were within the driver distraction recommendations (eyes-off-road time < 2 s per screen) defined by NHTSA.
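The eyes-off-road comparison above is a within-subject contrast across the twelve drivers. As a minimal sketch (with hypothetical numbers; the abstract does not state which statistical test, if any, was applied), the two conditions could be compared with a paired t statistic:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(touch_times, speech_times):
    """Paired t statistic for per-driver eyes-off-road times (seconds).

    Each driver contributes one measurement per condition, so the
    difference scores are the unit of analysis.
    """
    diffs = [t - s for t, s in zip(touch_times, speech_times)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical eyes-off-road times (seconds) for four of the drivers:
t_stat = paired_t([2.1, 2.4, 1.9, 2.6], [1.5, 1.8, 1.6, 2.0])
```

A positive t statistic indicates longer eyes-off-road times in the touch-only condition, consistent with the direction of the reported result.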
Contributors: Mittal, Richa (Author) / Gaffar, Ashraf (Thesis advisor) / Femiani, John (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Medical errors are now estimated to be the third leading cause of death in the United States (Makary & Daniel, 2016). Look-alike, sound-alike prescription drug mix-ups contribute to this figure. The US Food and Drug Administration (FDA) and the Institute for Safe Medication Practices (ISMP) have recommended the use of Tall Man lettering since 2008, in which dissimilar portions of confusable drug name pairs are capitalized in order to make them more distinguishable. Research on the efficacy of Tall Man lettering in differentiating confusable drug name pairs has been inconclusive, and it is imperative to investigate its potential efficacy further considering the clinical implications (Lambert, Schroeder & Galanter, 2015). The present study aimed to add to the body of research on Tall Man lettering while also investigating another possible mechanism behind Tall Man's efficacy, if it in fact exists. Studies indicate that the first letter in a word offers an advantage over other positions, resulting in more accurate and faster recognition (Adelman, Marquis & Sabatos-DeVito, 2010; Scaltritti & Balota, 2013). The present study used a 2x3 repeated measures design to analyze the effect of position on the efficacy of Tall Man lettering. Participants were shown a prime drug name, followed by a brief mask, and then either the same drug name or its confusable pair, and were asked to identify whether the two were the same or different. All participants completed both lowercase and Tall Man conditions. Overall performance, measured by accuracy and reaction time, revealed lowercase to be more effective than Tall Man lettering. With regard to the position of Tall Man letters, a first-position advantage was seen in both accuracy and reaction time. A first-position advantage was seen in the lowercase condition as well, suggesting that the location of the differing portion of the word matters more than the format used.
These findings add to the body of inconclusive research on the efficacy of Tall Man lettering in drug name confusion. Considering its impact on patient safety, more research should be conducted to definitively answer the question of whether Tall Man lettering should be used in practice.
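The capitalization scheme itself is straightforward to illustrate. As a minimal sketch (the function and its heuristic are illustrative; FDA/ISMP Tall Man pairs are curated by hand rather than generated algorithmically), one can uppercase the letters that fall outside the shared prefix and suffix of a confusable pair:

```python
def tall_man(name_a, name_b):
    """Uppercase the dissimilar middle portion of two confusable drug names."""
    # Length of the shared prefix.
    p = 0
    while p < min(len(name_a), len(name_b)) and name_a[p] == name_b[p]:
        p += 1
    # Length of the shared suffix, not overlapping the prefix.
    s = 0
    while (s < min(len(name_a), len(name_b)) - p
           and name_a[len(name_a) - 1 - s] == name_b[len(name_b) - 1 - s]):
        s += 1

    def fmt(name):
        end = len(name) - s
        return name[:p] + name[p:end].upper() + name[end:]

    return fmt(name_a), fmt(name_b)

# The ISMP pair hydroxyzine / hydralazine:
pair = tall_man("hydroxyzine", "hydralazine")  # ('hydrOXYzine', 'hydrALAzine')
```

For this particular pair the heuristic happens to reproduce ISMP's published rendering, but published pairs do not always follow a mechanical prefix/suffix rule.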
Contributors: Knobloch, Ashley (Author) / Branaghan, Russell (Thesis advisor) / Cooke, Nancy J. (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Using a modified news media brand personality scale developed by Kim, Baek, and Martin (2010), this study measured the personalities of eight news media outlets and combined them into the same associative network with participants' self-image via the Pathfinder tool (Schvaneveldt, Durso, & Dearholt, 1989). Using these networks, this study was able both to explore the personality associations of participants and to observe whether self-congruity, measured by the distance between the self-image node and a brand, is significantly related to participant preference for that brand. Self-congruity was found to be significantly related to preference. However, this relationship was mediated by participants' fiscal and social orientation. Overall, using Pathfinder to generate associative networks and measure self-congruity could be a useful approach for understanding how people perceive and relate to different news media outlets.
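The Pathfinder procedure cited above prunes a proximity matrix so that a link survives only if no indirect path between its endpoints is "shorter" under a Minkowski path metric. As a minimal sketch of the common PFNET(r = ∞, q = n − 1) special case, where a path's cost is its largest single link (the function name and the restriction to this special case are mine):

```python
def pfnet_links(dist, eps=1e-9):
    """Return a boolean matrix of links kept by PFNET(r=inf, q=n-1).

    dist is a symmetric matrix of pairwise distances (e.g., derived
    from brand-personality ratings).  A direct link i-j is kept only
    if it matches the minimax path distance between i and j.
    """
    n = len(dist)
    minimax = [row[:] for row in dist]
    # Floyd-Warshall variant: a path's cost is its largest edge.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = max(minimax[i][k], minimax[k][j])
                if via_k < minimax[i][j]:
                    minimax[i][j] = via_k
    return [[i != j and abs(dist[i][j] - minimax[i][j]) < eps
             for j in range(n)] for i in range(n)]

# Three nodes: the direct 0-2 link (weight 3) is pruned because the
# path 0-1-2 has a smaller minimax cost (1).
links = pfnet_links([[0, 1, 3], [1, 0, 1], [3, 1, 0]])
```

In the study's networks, self-congruity would then correspond to the graph distance between the self-image node and each brand node over the surviving links.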
Contributors: Willinger, Jacob T (Author) / Branaghan, Russel (Thesis advisor) / Craig, Scotty (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Older adults often experience communication difficulties, including poorer comprehension of auditory speech when it contains complex sentence structures or occurs in noisy environments. Previous work has linked cognitive abilities and the engagement of domain-general cognitive resources, such as the cingulo-opercular and frontoparietal brain networks, in response to challenging speech. However, the degree to which these networks can support comprehension remains unclear. Furthermore, how hearing loss may be related to the cognitive resources recruited during challenging speech comprehension is unknown. This dissertation investigated how hearing, cognitive performance, and functional brain networks contribute to challenging auditory speech comprehension in older adults. Experiment 1 characterized how age and hearing loss modulate resting-state functional connectivity between Heschl's gyrus and several sensory and cognitive brain networks. The results indicate that older adults exhibit decreased functional connectivity between Heschl's gyrus and sensory and attention networks compared to younger adults. Within older adults, greater hearing loss was associated with increased functional connectivity between right Heschl's gyrus and the cingulo-opercular and language networks. Experiments 2 and 3 investigated how hearing, working memory, attentional control, and fMRI measures predict comprehension of complex sentence structures and of speech in noisy environments. Experiment 2 utilized resting-state functional magnetic resonance imaging (fMRI) and behavioral measures of working memory and attentional control. Experiment 3 used activation-based fMRI to examine the brain regions recruited in response to sentences featuring both complex structures and noisy background environments, as a function of hearing and cognitive abilities.
The results suggest that working memory abilities and the functionality of the frontoparietal and language networks support the comprehension of speech in multi-speaker environments. Conversely, attentional control and the cingulo-opercular network were shown to support comprehension of complex sentence structures. Hearing loss was shown to decrease activation within right Heschl’s gyrus in response to all sentence conditions and increase activation within frontoparietal and cingulo-opercular regions. Hearing loss also was associated with poorer sentence comprehension in energetic, but not informational, masking. Together, these three experiments identify the unique contributions of cognition and brain networks that support challenging auditory speech comprehension in older adults, further probing how hearing loss affects these relationships.
Contributors: Fitzhugh, Megan (Author) / (Reddy) Rogalsky, Corianne (Thesis advisor) / Baxter, Leslie C (Thesis advisor) / Azuma, Tamiko (Committee member) / Braden, Blair (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Previous research from Rajsic et al. (2015, 2017) suggests that a visual form of confirmation bias arises during visual search for simple stimuli, under certain conditions, wherein people are biased to seek stimuli matching an initial cue color even when this strategy is not optimal. Furthermore, recent research from our lab suggests that varying the prevalence of cue-colored targets does not attenuate the visual confirmation bias, although people still fail to detect rare targets regardless of whether they match the initial cue (Walenchok et al., under review). The present investigation examines the boundary conditions of the visual confirmation bias under conditions of equal, low, and high cued-target frequency. Across experiments, I found that: (1) People are strongly susceptible to the low-prevalence effect, often failing to detect rare targets regardless of whether they match the cue (Wolfe et al., 2005). (2) However, they are still biased to seek cue-colored stimuli, even when such targets are rare. (3) Regardless of target prevalence, people employ strategies when search is made sufficiently burdensome with distributed items and large search sets. These results further support previous findings that the low-prevalence effect arises from a failure to perceive rare items (Hout et al., 2015), while the visual confirmation bias is a bias of attentional guidance (Rajsic et al., 2015, 2017).
Contributors: Walenchok, Stephen Charles (Author) / Goldinger, Stephen D (Thesis advisor) / Azuma, Tamiko (Committee member) / Homa, Donald (Committee member) / Hout, Michael C (Committee member) / McClure, Samuel M. (Committee member) / Arizona State University (Publisher)
Created: 2018