Matching Items (5)

Item 153054
Description

During attempted fixation, the eyes are not still but continue to produce so-called "fixational eye movements", which include microsaccades, drift, and tremor. Microsaccades are thought to help prevent vision loss and restore vision during fixation, and to correct fixation errors, but how they contribute to these functions remains a matter of debate. This dissertation presents the results of four experiments conducted to address current controversies concerning the role of microsaccades in visibility and oculomotor control.

The first two experiments set out to correlate microsaccade production with the visibility of foveal and peripheral targets of varied spatial frequencies during attempted fixation. The results indicate that microsaccades restore the visibility of both peripheral targets and targets presented entirely within the fovea, as a function of their spatial frequency characteristics.
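The abstract does not say how microsaccades were detected; a widely used approach in this literature is a velocity-threshold algorithm in the style of Engbert and Kliegl (2003). The sketch below is a minimal, illustrative implementation under that assumption: it expects two-dimensional gaze traces in degrees of visual angle at a known sampling rate, and the threshold multiplier and minimum event duration are placeholder values, not the settings used in the dissertation.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection (Engbert & Kliegl style).

    x, y : gaze position traces in degrees of visual angle
    fs   : sampling rate in Hz
    lam  : threshold multiplier applied to the median-based velocity SD
    """
    # 5-sample moving-window velocity estimate (deg/s)
    vx = fs * (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / 6.0
    vy = fs * (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) / 6.0

    # Robust (median-based) estimate of the velocity standard deviation
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)

    # Elliptic threshold: a sample is saccadic if its normalized speed exceeds 1
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

    # Group consecutive supra-threshold samples into candidate microsaccades
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above) - 1))
    return events

# Synthetic example: slow drift plus one injected 0.3-degree jump at t = 1 s
fs = 500.0
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, 0.001, 1000))
y = np.cumsum(rng.normal(0.0, 0.001, 1000))
x[500:] += 0.3
print(detect_microsaccades(x, y, fs))
```

Detected events could then be aligned with observers' visibility reports on a trial-by-trial basis to relate microsaccade production to target visibility.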

The last two experiments set out to determine the roles of microsaccades and drifts in the correction of gaze-position errors due to blinks in human and non-human primates, and to characterize microsaccades forming square-wave jerks (SWJs) in non-human primates. The results showed that microsaccades, but not drifts, correct gaze-position errors due to blinks, and that SWJ production and dynamic properties are equivalent in human and non-human primates.

These combined findings suggest that microsaccades, like saccades, serve multiple and non-exclusive functional roles in vision and oculomotor control, as opposed to having a single specialized function.
Contributors: Costela, Francisco M (Author) / Crook, Sharon M (Committee member) / Martinez-Conde, Susana (Committee member) / Macknik, Stephen L. (Committee member) / Baer, Stephen (Committee member) / McCamy, Michael B (Committee member) / Arizona State University (Publisher)
Created: 2014
Item 150499
Description

The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals that underlie sensorimotor processing. The specific influences and interactions of these noise processes remain unclear, and several studies were therefore performed to elucidate the role and influence of sensorimotor noise on movement variability.

The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that additional factors beyond vision and proprioception influence the planning of three-dimensional movements.

Whereas the first study investigated the role of noise in sensorimotor integration, the second and third studies investigate the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluate how the characteristics of the neural processing that underlies movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role for visual feedback noise in shaping patterns of variability and in determining the relative influence of planning- and execution-related noise sources.

The final work takes a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise in reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
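The abstract does not spell out the multi-modal feedback control model, so the following is only a rough sketch of the general idea: a simulated two-dimensional reach in which planning noise perturbs the intended target once per trial, signal-dependent execution noise corrupts each motor command, and noisy visual and proprioceptive feedback drive online corrections. Every parameter value and function name here is a hypothetical placeholder, not a quantity from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reach(target, n_steps=50, use_vision=True,
                   plan_sd=0.5, exec_cv=0.05, vis_sd=0.2, prop_sd=0.6, gain=0.3):
    """One simulated 2D reach with planning, execution, and feedback noise.

    plan_sd : SD (cm) of the planned-target error, drawn once per trial
    exec_cv : signal-dependent execution noise (SD proportional to command size)
    vis_sd  : SD (cm) of visual feedback of hand position
    prop_sd : SD (cm) of proprioceptive feedback of hand position
    gain    : fraction of the estimated remaining error corrected per step
    """
    planned_target = target + rng.normal(0.0, plan_sd, size=2)  # planning noise
    hand = np.zeros(2)
    for _ in range(n_steps):
        # Noisy estimate of hand position: vision and proprioception combined
        # by inverse-variance weighting, or proprioception alone without vision
        if use_vision:
            est_sd = 1.0 / np.sqrt(1.0 / vis_sd**2 + 1.0 / prop_sd**2)
        else:
            est_sd = prop_sd
        hand_estimate = hand + rng.normal(0.0, est_sd, size=2)
        # Corrective command toward the (noisily) planned target,
        # corrupted by signal-dependent execution noise
        command = gain * (planned_target - hand_estimate)
        hand = hand + command + rng.normal(0.0, exec_cv * np.linalg.norm(command), size=2)
    return hand

target = np.array([20.0, 0.0])  # a 20 cm reach along the x axis
for vision in (True, False):
    endpoints = np.array([simulate_reach(target, use_vision=vision)
                          for _ in range(2000)])
    print("vision:", vision)
    print(np.cov(endpoints.T))  # endpoint covariance (anisotropy of variability)
```

Comparing the printed endpoint covariances with and without vision illustrates, in miniature, how the availability and precision of feedback reshape the pattern of reaching variability.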
Contributors: Apker, Gregory Allen (Author) / Buneo, Christopher A (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Item 156586
Description

Image Understanding is a long-established discipline in computer vision that encompasses a body of advanced image processing techniques used to locate (“where”), characterize, and recognize (“what”) objects, regions, and their attributes in an image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty; thus, the expected functionalities of an intelligent Image Understanding system can be expressed in terms of the functionalities required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question that asks beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning.

Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning to understand images; this survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods for several applications and show that these approaches benefit, in terms of accuracy and interpretability, from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that can combine vision, knowledge, and reasoning modules and achieve large performance boosts over state-of-the-art methods.
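As a toy illustration of why background knowledge matters for answering questions about images, the sketch below ranks candidate answers by combining detector confidences with knowledge-base relatedness scores in a simple log-linear score. This is not the probabilistic logical reasoning engine developed in the thesis; the detections, knowledge entries, and weights are invented purely for illustration.

```python
from math import log

# Toy detector output for one image: object labels with confidences
detections = {"snow": 0.92, "person": 0.88, "skis": 0.75, "tree": 0.40}

# Toy commonsense knowledge: how related each candidate answer is to a concept
# (in a real system this would come from a large knowledge base; these entries
# and weights are invented for illustration)
knowledge = {
    "skiing":  {"snow": 0.9, "skis": 0.95, "person": 0.5},
    "surfing": {"person": 0.5, "wave": 0.9},
    "hiking":  {"person": 0.5, "tree": 0.6, "trail": 0.8},
}

def score(answer):
    """Log-linear score combining visual evidence with background knowledge."""
    total = 0.0
    for concept, relatedness in knowledge[answer].items():
        confidence = detections.get(concept, 0.01)  # small floor for unseen concepts
        total += relatedness * log(confidence)
    return total

question = "What activity is the person doing?"
ranked = sorted(knowledge, key=score, reverse=True)
print(question, "->", ranked[0])
print([(answer, round(score(answer), 2)) for answer in ranked])
```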
Contributors: Aditya, Somak (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Thesis advisor) / Aloimonos, Yiannis (Committee member) / Lee, Joohyung (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2018
Item 156810
Description

A growing understanding of the neural code, and of how to speak it, has allowed for notable advancements in neural prosthetics. With commercially available implantable systems with bidirectional neural communication on the horizon, there is an increasing imperative to develop high-resolution interfaces that can survive the environment and be well tolerated by the nervous system under chronic use. The sensory-encoding side of such an interface should operate at a scale sufficient to evoke perception, yet remain focal enough to maximize resolution and evoke more complex and nuanced sensations. Microelectrode arrays can maintain high spatial density, operating on the scale of cortical columns, and can be either penetrating or non-penetrating. The non-penetrating subset sits on the tissue surface without puncturing the parenchyma and is known to engender minimal tissue response and less damage than its penetrating counterpart, improving long-term viability in vivo. Provided that non-penetrating microelectrodes can consistently evoke perception and maintain a localized region of activation, they may provide an ideal platform for a high-performing neural prosthesis; this dissertation explores their functional capacity.

The scale at which non-penetrating electrode arrays can interface with cortex is evaluated in the context of extracting useful information. Articulate movements were decoded from surface microelectrodes, and additional spatial analysis revealed unique signal content despite dense electrode spacing. With a basis for data extraction established, the focus shifts toward the information-encoding half of neural interfaces. Finite element modeling was used to compare tissue recruitment under surface stimulation across electrode scales. The results indicated that charge-density-based metrics provide a reasonable approximation of the current levels required to evoke a visual sensation, and showed that tissue recruitment increases exponentially with electrode diameter. Micro-scale electrodes (0.1–0.3 mm diameter) could sufficiently activate layers II/III in a model tuned to striate cortex while maintaining focal radii of activated tissue.

In vivo testing proceeded in a nonhuman primate model. Stimulation consistently evoked visual percepts at safe current thresholds, and perception thresholds tracked across one year remained stable with minimal fluctuation. Modulating waveform parameters proved useful in reducing the charge required to evoke perception: pulse frequency and phase asymmetry were each used to reduce thresholds, improve charge efficiency, and lower the charge-per-phase and charge-density metrics associated with tissue damage. No impairments to photic perception were observed during the course of the study, suggesting limited tissue damage from array implantation or electrically induced neurotoxicity. The subject consistently identified stimulation on closely spaced electrodes (2 mm center-to-center) as separate percepts, indicating that sub-visual-degree discrete resolution may be feasible with this platform. Although continued testing is necessary, these preliminary results support epicortical microelectrode arrays as a stable platform for interfacing with neural tissue and a viable option for bidirectional BCI applications.
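For readers unfamiliar with the charge metrics mentioned above, the sketch below computes charge per phase and geometric charge density for a disc electrode and evaluates the commonly cited Shannon criterion, log10(D) = k - log10(Q), which is often used as a rough tissue-safety boundary (k of roughly 1.5 to 1.85). The stimulation parameters in the example are illustrative and are not values reported in the dissertation.

```python
import math

def charge_metrics(current_uA, pulse_width_us, diameter_mm):
    """Charge per phase and geometric charge density for a disc electrode.

    current_uA     : pulse amplitude in microamps
    pulse_width_us : duration of a single phase in microseconds
    diameter_mm    : electrode disc diameter in millimetres
    """
    charge_uC = current_uA * pulse_width_us * 1e-6          # uA * s -> uC
    area_cm2 = math.pi * (diameter_mm / 10.0 / 2.0) ** 2    # disc area in cm^2
    density_uC_cm2 = charge_uC / area_cm2                   # charge density
    return charge_uC, density_uC_cm2

def shannon_k(charge_uC, density_uC_cm2):
    """Shannon criterion: log10(D) = k - log10(Q); k values above roughly
    1.5-1.85 are commonly associated with tissue damage."""
    return math.log10(density_uC_cm2) + math.log10(charge_uC)

# Illustrative parameters for a 0.2 mm diameter surface electrode
q, d = charge_metrics(current_uA=80.0, pulse_width_us=200.0, diameter_mm=0.2)
print(f"charge/phase = {q:.3f} uC, charge density = {d:.1f} uC/cm^2, "
      f"Shannon k = {shannon_k(q, d):.2f}")
```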
Contributors: Oswalt, Denise (Author) / Greger, Bradley (Thesis advisor) / Buneo, Christopher (Committee member) / Helms-Tillery, Stephen (Committee member) / Mirzadeh, Zaman (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2018
Item 154916
Description

Why do many animals possess multiple classes of photoreceptors that vary in the wavelengths of light to which they are sensitive? Multiple spectral photoreceptor classes are a requirement for true color vision. However, animals may have unconventional vision, in which multiple spectral channels broaden the range of wavelengths that can be detected, or in which only a subset of receptors is used for specific behaviors. Branchiopod crustaceans are of interest for the study of unconventional color vision because they express multiple visual pigments in their compound eyes, have a simple repertoire of visually guided behavior, inhabit unique and highly variable light environments, and possess secondary neural simplifications.

I first tested the behavioral responses of two representative species of branchiopods from separate orders, Streptocephalus mackini (Anostraca; fairy shrimp) and Triops longicaudatus (Notostraca; tadpole shrimp). I found that they maintain vertical position in the water column over a broad range of intensities and wavelengths, and respond behaviorally even at intensities below those of starlight. Accordingly, light intensities in their habitats at shallow depths tend to be dimmer than those of terrestrial habitats under starlight. Using models of how their compound eyes and the first neuropil of their optic lobe process visual cues, I infer that both orders of branchiopods use spatial summation across multiple compound-eye ommatidia to respond at low intensities.

Then, to understand whether branchiopods use unconventional vision to guide these behaviors, I took electroretinographic recordings (ERGs) from their compound eyes and used models of spectral absorptance in a multimodel selection approach to make inferences about the number of photoreceptor classes in their eyes. I infer that both species have four spectral classes of photoreceptors that contribute to their ERGs, suggesting that unconventional vision guides the described behavior. I extended the same modeling approach to other organisms and found that the model inferences align with the empirically determined number of photoreceptor classes for this diverse set of organisms. This dissertation expands the conceptual framework of color vision research, indicating that unconventional vision is more widespread than previously considered, and explains why some organisms have more spectral classes than would be expected from their behavioral repertoire.
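The multimodel selection analysis is only named in the abstract; one common recipe for inferring the number of photoreceptor classes is to fit weighted sums of visual-pigment absorbance templates (for example, the A1 alpha-band template of Govardovskii et al. 2000) to a measured spectral sensitivity curve and compare the fits with an information criterion such as AIC. The sketch below follows that general recipe on synthetic data; the wavelength range, noise level, and parameter bounds are assumptions, not the dissertation's actual analysis.

```python
import numpy as np
from scipy.optimize import least_squares

def a1_template(wavelengths, lam_max):
    """Alpha-band absorbance of an A1 visual pigment (Govardovskii et al. 2000)."""
    x = lam_max / wavelengths
    a = 0.8795 + 0.0459 * np.exp(-(lam_max - 300.0) ** 2 / 11940.0)
    return 1.0 / (np.exp(69.7 * (a - x)) + np.exp(28.0 * (0.922 - x))
                  + np.exp(-14.9 * (1.104 - x)) + 0.674)

def fit_k_classes(wavelengths, sensitivity, k):
    """Fit a normalized weighted sum of k pigment templates; return the residual
    sum of squares and the number of free parameters (k lambda-max + k weights)."""
    def residuals(params):
        lams, weights = params[:k], params[k:]
        model = sum(w * a1_template(wavelengths, l) for l, w in zip(lams, weights))
        return model / model.max() - sensitivity
    guess = np.concatenate([np.linspace(380.0, 560.0, k), np.ones(k)])
    bounds = ([320.0] * k + [1e-3] * k, [640.0] * k + [10.0] * k)
    fit = least_squares(residuals, guess, bounds=bounds)
    return np.sum(fit.fun ** 2), 2 * k

# Synthetic "measured" spectral sensitivity generated from two pigment classes
wl = np.arange(350.0, 651.0, 10.0)
truth = 0.6 * a1_template(wl, 430.0) + 0.8 * a1_template(wl, 530.0)
sens = truth / truth.max() + np.random.default_rng(1).normal(0.0, 0.02, wl.size)

n = wl.size
for k in range(1, 5):
    rss, p = fit_k_classes(wl, sens, k)
    aic = n * np.log(rss / n) + 2 * p   # AIC for a least-squares fit
    print(f"{k} photoreceptor class(es): AIC = {aic:.1f}")
```

In this kind of analysis, the model whose number of template classes minimizes the information criterion is taken as the best-supported estimate of the number of spectral photoreceptor classes.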
Contributors: Lessios, Nicolas (Author) / Rutowski, Ronald L (Thesis advisor) / Cohen, Jonathan H (Thesis advisor) / Harrison, John (Committee member) / Neuer, Susanne (Committee member) / McGraw, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2016