Description
When a rolling ball exits a spiral tube, it typically maintains its final inertial state and travels along a straight line, in accordance with Newton's first law of motion. Yet most people predict that the ball will curve, a "naive physics" misconception called the curvilinear impetus (CI) bias. In the current paper, we explore the ecological hypothesis that the CI bias arises from overgeneralization of the correct motion of biological agents. Previous research has established that humans curve when exiting a spiral maze, and that college students believe this motion is the same for balls and humans. The current paper consists of two follow-up experiments. The first experiment tested the exiting behavior of rodents from a spiral rat maze. Though there were weaknesses in the design and procedures of the maze, the findings support that rats do not behave like humans, who exhibit the CI bias when exiting a spiral maze. These results are consistent with the CI bias being an overgeneralization of human motion, rather than of generic biological motion. The second experiment tested physics teachers on their conception of how humans and balls behave when exiting a spiral tube. Teachers demonstrated correct knowledge of the straight trajectory of a ball, but generalized the ball's behavior to human motion. Thus, physics teachers exhibit the opposite bias from college students and presume that all motion is like inanimate motion. This evidence supports that this type of naive physics inertial bias is at least partly due to participants overgeneralizing both inanimate and animate motion to be the same, perhaps in an effort to minimize cognitive reference-memory load. In short, physics training appears not to eliminate the bias, but rather to shift it from the presumption of stereotypical animate to stereotypical inanimate behavior.
Contributors: Dye, Rosaline (Author) / Mcbeath, Michael K (Thesis advisor) / Sanabria, Federico (Committee member) / Megowan, Colleen (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Auditory scene analysis (ASA) is the process through which listeners parse and organize their acoustic environment into relevant auditory objects. ASA functions by exploiting natural regularities in the structure of auditory information. The current study investigates spectral envelope and its contribution to the perception of changes in pitch and loudness. Experiment 1 constructs a perceptual continuum of twelve f0- and intensity-matched vowel phonemes (i.e., a pure timbre manipulation) and reveals spectral envelope as a primary organizational dimension. The extremes of this dimension are i (as in “bee”) and Ʌ (as in “bun”). Experiment 2 measures the strength of the relationship between produced f0 and the previously observed phonetic-pitch continuum at three different levels of phonemic constraint. Scat performances and, to a lesser extent, recorded interviews were found to exhibit changes in accordance with the natural regularity; specifically, f0 changes were correlated with the phoneme pitch-height continuum. The more constrained case of lyrical singing did not exhibit the natural regularity. Experiment 3 investigates participant ratings of pitch and loudness as stimuli vary in f0, intensity, and the phonetic-pitch continuum. Psychophysical functions derived from the results reveal that moving from i to Ʌ is equivalent to a 0.38-semitone decrease in f0 and a 0.75 dB decrease in intensity. Experiment 4 examines the potentially functional aspect of the pitch, loudness, and spectral envelope relationship. Detection thresholds of stimuli in which all three dimensions change congruently (f0 increase, intensity increase, Ʌ to i) or incongruently (no f0 change, intensity increase, i to Ʌ) are compared using an objective version of the method of limits. Congruent changes did not provide a detection benefit over incongruent changes; however, when the contribution of phoneme change was removed, congruent changes did offer a slight detection benefit, as in previous research.
While this relationship does not offer a detection benefit at threshold, there is a natural regularity whereby humans produce phonemes at higher f0s according to their relative position on the pitch-height continuum. Likewise, humans are biased to detect pitch and loudness changes in phoneme sweeps in accordance with this natural regularity.
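The equivalence reported in Experiment 3 can be restated with the standard psychoacoustic conversions: a semitone corresponds to a frequency ratio of 2^(1/12), and 10 dB corresponds to a tenfold power ratio. A minimal sketch of the arithmetic, with hypothetical helper names:

```python
# Illustrative conversions for the Experiment 3 equivalence.
# Helper names are hypothetical; the constants follow standard
# psychoacoustic definitions, not code from the study itself.

def semitone_ratio(semitones):
    """Frequency ratio for a pitch change in semitones (1 semitone = 2**(1/12))."""
    return 2 ** (semitones / 12)

def db_ratio(decibels):
    """Power (intensity) ratio for a level change in dB (10 dB = 10x power)."""
    return 10 ** (decibels / 10)

# Moving from i to Ʌ was equivalent to a 0.38-semitone decrease in f0
# and a 0.75 dB decrease in intensity:
f0_ratio = semitone_ratio(-0.38)     # ≈ 0.978, i.e., about a 2.2% drop in f0
intensity_ratio = db_ratio(-0.75)    # ≈ 0.841
```

So the timbre shift alone is perceptually worth only a small fraction of a semitone and under one decibel, which underscores how subtle the spectral-envelope contribution is relative to direct f0 and intensity changes.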
Contributors: Patten, K. Jakob (Author) / Mcbeath, Michael K (Thesis advisor) / Amazeen, Eric L (Committee member) / Glenberg, Arthur W (Committee member) / Zhou, Yi (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
While various collision warning studies in driving have been conducted, only a handful have investigated the effectiveness of warnings with a distracted driver. Across four experiments, the present study aimed to address this apparent gap in the literature, specifically by studying various warnings presented to drivers while they were operating a smartphone. Experiment One attempted to determine which smartphone tasks (text vs. image, self-paced vs. other-paced) are the most distracting to a driver. Experiment Two compared the effectiveness of different smartphone-based applications (apps) for mitigating driver distraction. Experiment Three investigated the effects of informative auditory and tactile warnings designed to convey directional information to a distracted driver (moving towards or away). Lastly, Experiment Four extended the research into the area of autonomous driving by investigating the effectiveness of different auditory take-over request signals. Novel to both Experiments Three and Four was that the warnings were delivered from the source of the distraction (i.e., either a sound triggered at the smartphone location or a vibration given on the wrist of the hand holding the smartphone). This warning placement was an attempt to break the driver’s attentional focus on the smartphone and to determine how best to re-orient the driver in order to improve the driver’s situational awareness (SA). The overall goal was to explore these novel methods of improving SA so drivers may more quickly and appropriately respond to a critical event.
Contributors: McNabb, Jaimie Christine (Author) / Gray, Dr. Rob (Thesis advisor) / Branaghan, Dr. Russell (Committee member) / Becker, Dr. Vaughn (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Research has demonstrated that observers have a generic bias for top saliency in object identification, such that random shapes appear more similar to ones that share the same tops than to ones that share the same bottoms (Chambers et al., 1999). These findings are consistent with the idea that in nature, the tops of most important objects and living things tend to be the most informative locations with respect to intentionality and functionality, leading observers to favor attending to the top. Yet such a bias may also imply a generic downward vantage bias, suggesting that, unlike natural objects, the more informative aspects of scenes tend to lie below their horizon midpoints. In two experiments, saliency bias was investigated for objects and scenes with both information-balanced and naturalistic stimuli. Experiment 1 replicates and extends the study of the top-saliency effect for information-balanced objects. Here, 91 participants made 80 similarity judgments between an information-balanced object and two comparison objects that contained either the same top or the same bottom. Participants also made 80 similarity judgments of information-balanced scenes in which the coordinates of the vertices of the random shapes were replaced with little objects to create a scene. Experiment 2 extends Chambers et al. (1999) by examining the top-saliency bias in naturalistic object perception: 91 participants made similarity judgments between a photographed test object and two comparison objects that contained either the same top or the same bottom. Experiment 2 also tests the idea of a downward vantage bias by predicting that naturalistic scenes will be judged more similar when the portions that lie below the horizon are identical than when the portions above are the same.
Results of the two experiments confirm that observers tend to assume a downward vantage when viewing pictures of objects and of objects within scenes, which supports that saliency varies as a function of the informative aspect of the visually attended component.
Contributors: Langley, Matthew (Author) / Mcbeath, Michael K (Thesis advisor) / Brewer, Gene A (Committee member) / Lucca, Kelsey (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Watanabe, Náñez, and Sasaki (2001) introduced a phenomenon they named “task-irrelevant perceptual learning,” in which near-threshold stimuli that are not essential to a given task can be associatively learned when consistently and concurrently paired with the focal task. The present study employs a visual paired-shapes recognition task, using colored polygon targets as salient attended focal stimuli, with the goal of comparing the increases in perceptual sensitivity observed when near-threshold stimuli are temporally paired in varying manners with focal targets. Experiment 1 separated and compared the target-acquisition and target-recognition phases and revealed that sensitivity improved most when the near-threshold motion stimuli were paired with the focal target-acquisition phase. The measures of sensitivity improvement were motion detection, critical flicker fusion threshold (CFFT), and letter-orientation decoding. Experiment 2 tested perceptual learning of near-threshold stimuli when they were offset from the focal stimulus presentation by ±350 ms. Performance improvements in motion detection, CFFT, and decoding were significantly greater for the group in which near-threshold motion was presented after the focal target. Experiment 3 showed that participants with reading difficulties who were exposed to focal target-acquisition training improved in sensitivity on all visual measures. Experiment 4 tested whether near-threshold stimulus learning occurred cross-modally with auditory stimuli and served as an active control for the first three experiments. Here, a tone was paired with all focal stimuli, but the tone was 1 Hz higher or lower when paired with the targeted focal stimuli associated with recognition. In Experiment 4, there was no improvement in visual sensitivity, but there was significant improvement in tone discrimination.
Thus, this study as a whole confirms that pairing near-threshold stimuli with focal stimuli can improve performance, whether in tone discrimination alone or in motion detection, CFFT, and letter decoding. The findings further support the thesis that the act of trying to remember a focal target elicits greater associative learning of a correlated near-threshold stimulus than the act of recognizing a target. Finally, these findings support that we have developed a visual learning paradigm that may potentially mitigate some of the visual deficits often experienced by the reading disabled.
Contributors: Holloway, Steven Robert (Author) / Mcbeath, Michael K (Thesis advisor) / Macknik, Stephen (Committee member) / Homa, Donald (Committee member) / Náñez, Sr., José E (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Temporal-order judgments can require integration of self-generated action-events and external sensory information. In a previous study, participants were found to be biased to perceive their own action-events as occurring prior to simultaneous external events. This phenomenon, named the “Egocentric Temporal Order Bias,” or ETO bias, was demonstrated as a 67% probability of participants reporting self-generated events as occurring prior to simultaneous externally-determined events. These results were interpreted as supporting a feed-forward, constructive model of perception. However, the empirical data could support many potential mechanisms. The present study tests whether the ETO bias is driven by attentional differences, feed-forward predictability, or action. The findings support that participants exhibit a bias due to both feed-forward predictability and action, and a Bayesian analysis supports that these effects are quantitatively unique. Therefore, the results indicate that the ETO bias is largely driven by one’s own action, over and above feed-forward predictability.
Contributors: Tang, Tim (Author) / Mcbeath, Michael K (Thesis advisor) / Brewer, Gene A. (Committee member) / Sanabria, Federico (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Color perception has been widely studied and well modeled with respect to combining visible electromagnetic frequencies, yet new technology provides the means to better explore and test novel temporal-frequency characteristics of color perception. Experiment 1 tests how reliably participants categorize static spectral rainbow colors, which can be a useful tool for efficiently identifying those with functional dichromacy, trichromacy, and tetrachromacy. The findings confirm that all individuals discern the four principal opponent-process colors (red, yellow, green, and blue), with normal observers and potential tetrachromats seeing more distinct colors than color-blind individuals. Experiment 2 tests the moving flicker-fusion rate of the central electromagnetic frequencies within each color category found in Experiment 1, as a test of the Where system. It then compares this to the maximum temporal processing rate for discriminating the direction of hue change with colors displayed serially, as a test of the What system. The findings confirm respective processing thresholds of about 20 Hz for the Where system and 2-7 Hz for the What system. Experiment 3 tests conditions that optimize false colors based on the spinning Benham’s Top illusion. Findings indicate that the same four principal colors emerge as in Experiment 1, but at low saturation levels for trichromats that diminish further for dichromats. Taken together, the three experiments provide an overview of the common categorical boundaries and temporal processing limits of human color vision.
Contributors: Krynen, Richard Chandler (Author) / Mcbeath, Michael K (Thesis advisor) / Homa, Donald (Committee member) / Newman, Nathan (Committee member) / Stone, Greg (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
This project investigates the gleam-glum effect, a well-replicated phonetic emotion association in which words with the [i] vowel sound (as in “gleam”) are judged more emotionally positive than words with the [Ʌ] vowel sound (as in “glum”). The effect is observed across different modalities and languages and is moderated by mouth movements relevant to word production. This research presents and tests an articulatory explanation for this association in three experiments. Experiment 1 supported the articulatory explanation by comparing recordings of 71 participants completing an emotional recall task and a word read-aloud task, showing that oral movements were more similar between positive emotional expressions and [i] articulation, and between negative emotional expressions and [Ʌ] articulation. Experiment 2 partially supported the explanation with 98 YouTube recordings of natural speech. In Experiment 3, 149 participants judged emotions expressed by a speaker during [i] and [Ʌ] articulation. Contradicting the robust phonetic emotion association, participants more frequently judged the speaker’s [Ʌ] articulatory movements to be positive emotional expressions and the [i] articulatory movements to be negative emotional expressions, likely due to visual emotional cues unrelated to oral movements and to the order of the word lists read by the speaker. Overall, the findings from the current project support an articulatory explanation for the gleam-glum effect, which has major implications for language and communication.
Contributors: Yu, Shin-Phing (Author) / Mcbeath, Michael K (Thesis advisor) / Glenberg, Arthur M (Committee member) / Stone, Greg O (Committee member) / Coza, Aurel (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2023