Matching Items (9)

Description
1. Aposematic signals advertise prey distastefulness or metabolic unprofitability to potential predators and have evolved independently in many prey groups as a means of protection from predation. Most aposematic signals investigated to date exhibit highly chromatic patterning; however, relatives in these toxic groups with patterns of very low chroma have been largely overlooked. 2. We propose that bright displays with low chroma arose in toxic prey species because they were more effective at deterring predation than their chromatic counterparts, especially when viewed in relatively low-light environments such as forest understories. 3. To test the prediction that warning-signal chroma and perceived chromaticity should be higher, and brightness lower, in species that fly in open environments than in those that fly in forested environments, we analyzed the reflectance and radiance of color patches on the wings of 90 tropical butterfly species that belong to groups with documented toxicity and that vary in their habitat preferences. 4. Analyses of the reflectance and radiance of warning color patches, together with predator visual modeling, support this prediction. Moreover, phylogenetic tests, which correct for the statistical non-independence of related test species, also support the hypothesis of an evolutionary correlation between the perceived chromaticity of aposematic signals and the flight habits of the butterflies that exhibit these signals.
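The abstract does not spell out how brightness and chroma are computed from reflectance spectra; one common approach in color research is Endler's (1990) segment classification. The Python sketch below is a minimal illustration under that assumption — the function name, wavelength range, and example spectrum are hypothetical, not taken from the thesis.

```python
import numpy as np

def segment_metrics(wavelengths, reflectance):
    """Mean brightness and chroma of a reflectance spectrum via
    Endler's (1990) segment classification (an assumed metric;
    the thesis does not specify its exact formulas)."""
    mask = (wavelengths >= 400) & (wavelengths <= 700)  # visible range
    wl, refl = wavelengths[mask], reflectance[mask]
    total = refl.sum()                        # total reflectance
    # Four equal spectral segments: blue, green, yellow, red.
    bounds = [400, 475, 550, 625, 701]
    b, g, y, r = [refl[(wl >= lo) & (wl < hi)].sum() / total
                  for lo, hi in zip(bounds[:-1], bounds[1:])]
    chroma = np.hypot(r - g, y - b)           # distance from achromatic point
    return total / len(refl), chroma          # mean brightness, chroma

# Example: a bright but low-chroma (whitish) patch reflects evenly.
wl = np.arange(300, 701, dtype=float)
flat = np.full_like(wl, 60.0)                 # ~60% reflectance everywhere
print(segment_metrics(wl, flat))              # high brightness, chroma near 0
```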
Contributors: Douglas, Jonathan Marion (Author) / Rutowski, Ronald L. (Thesis advisor) / Gadau, Juergen (Committee member) / McGraw, Kevin J. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and to interact more fully with their environment. There has been a clinical shift toward bilateral implantation and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in speech perception; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how, exactly, vision provides that benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions, and recorded their perception of the input. The data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision improved speech perception for both bilateral and bimodal cochlear implant participants: each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants made fewer consonant place errors and demonstrated increased use of the syllabic stress cues involved in lexical segmentation. These results suggest that vision may benefit bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. Vision did not, however, provide the bimodal participants with significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision remains unknown. These results point to the complexities of audiovisual integration during speech perception and to the need for continued research on the benefit vision provides to bilateral and bimodal cochlear implant users.
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
During attempted fixation, the eyes are not still but continue to produce so-called "fixational eye movements", which include microsaccades, drift, and tremor. Microsaccades are thought to help prevent and restore vision loss during fixation and to correct fixation errors, but how they contribute to these functions remains a matter of debate. This dissertation presents the results of four experiments conducted to address current controversies concerning the role of microsaccades in visibility and oculomotor control.

The first two experiments set out to correlate microsaccade production with the visibility of foveal and peripheral targets of varied spatial frequencies during attempted fixation. The results indicate that microsaccades restore the visibility of both peripheral targets and targets presented entirely within the fovea, as a function of their spatial frequency characteristics.

The last two experiments set out to determine the role of microsaccades and drifts in the correction of gaze-position errors due to blinks in human and non-human primates, and to characterize the microsaccades that form square-wave jerks (SWJs) in non-human primates. The results showed that microsaccades, but not drifts, correct gaze-position errors due to blinks, and that SWJ production and dynamic properties are equivalent in human and non-human primates.

These combined findings suggest that microsaccades, like saccades, serve multiple and non-exclusive functional roles in vision and oculomotor control, as opposed to having a single specialized function.
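The dissertation does not state which detection algorithm produced its microsaccade data, but a widely used choice in this literature is the velocity-threshold detector of Engbert & Kliegl (2003). A minimal sketch, with illustrative sampling rate and threshold parameters:

```python
import numpy as np

def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection in the style of
    Engbert & Kliegl (2003). x, y are gaze positions in degrees;
    parameter defaults here are illustrative assumptions."""
    dt = 1.0 / fs
    # Velocity smoothed over a 5-sample moving window.
    vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / (6.0 * dt)
    vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) / (6.0 * dt)
    # Robust (median-based) velocity SD defines an elliptic threshold.
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    # Keep supra-threshold runs lasting at least min_samples.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start + 2, i + 2))  # shift for window offset
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start + 2, len(above) + 2))
    return events  # list of (onset, offset) sample indices
```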
Contributors: Costela, Francisco M. (Author) / Crook, Sharon M. (Committee member) / Martinez-Conde, Susana (Committee member) / Macknik, Stephen L. (Committee member) / Baer, Stephen (Committee member) / McCamy, Michael B. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals that underlie sensorimotor processing. The specific influences and interactions of these noise processes remain unclear, so several studies were performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that factors beyond vision and proprioception influence the planning of 3-dimensional movements. Whereas the first study investigated the role of noise in sensorimotor integration, the second and third studies investigate the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluate how the characteristics of the neural processing that underlies movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role for visual feedback noise in shaping patterns of variability and in determining the relative influence of planning- and execution-related noise sources. The final work takes a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
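As a rough illustration of this kind of model (a toy sketch, not the multi-modal feedback control model developed in the thesis), the simulation below drives a 2-D reach with a proportional controller: planning noise perturbs the internal target, execution noise scales with the motor command, and state-estimate noise grows when visual feedback is removed. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reach_endpoints(n=1000, steps=50, vision=True,
                    plan_sd=0.5, exec_sd=0.1, sense_sd=0.2):
    """Toy 2-D reaching simulation: planning noise perturbs the
    internal target, execution noise corrupts each motor command,
    and sensing noise grows when visual feedback is unavailable."""
    target = np.array([10.0, 0.0])
    sense = sense_sd if vision else 3.0 * sense_sd  # proprioception-only
    ends = np.empty((n, 2))
    for i in range(n):
        goal = target + rng.normal(0.0, plan_sd, 2)  # planning noise
        pos = np.zeros(2)
        for _ in range(steps):
            est = pos + rng.normal(0.0, sense, 2)    # noisy state estimate
            u = 0.2 * (goal - est)                   # proportional control
            # signal-dependent execution noise
            pos = pos + u + rng.normal(0.0, exec_sd * np.linalg.norm(u), 2)
        ends[i] = pos
    return ends

for vision in (True, False):
    sd = reach_endpoints(vision=vision).std(axis=0)
    print(f"vision={vision}: endpoint SD = {sd.round(3)}")
```

Because the controller drives the hand toward its noisy internal goal, planning noise persists in the endpoints in both conditions, whereas noisier sensing without vision inflates variability further — the qualitative pattern the studies dissect.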
Contributors: Apker, Gregory Allen (Author) / Buneo, Christopher A. (Thesis advisor) / Helms Tillery, Stephen (Committee member) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Image Understanding is a long-established discipline in computer vision that encompasses a body of advanced image processing techniques used to locate ("where") and to characterize and recognize ("what") objects, regions, and their attributes in an image. However, the notion of "understanding" (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty, so the expected functionalities of an intelligent Image Understanding system can be expressed in terms of the functionalities required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question that asks beyond what can be directly seen requires modeling of commonsense (background, ontological, or factual) knowledge and reasoning.

Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images; this survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods for several applications and show that these approaches benefit, in both accuracy and interpretability, from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that combine vision, knowledge, and reasoning modules and achieve large performance boosts over state-of-the-art methods.
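As a toy illustration of knowledge-augmented reasoning (hypothetical scores and rules; not the probabilistic logic engine implemented in the thesis), the sketch below re-ranks candidate visual-question answers by combining detector confidences with weighted commonsense rules in log-linear fashion:

```python
import math

# Hypothetical detector confidences for the question "What is being thrown?"
detector_scores = {"frisbee": 0.45, "pizza": 0.40, "kite": 0.15}
# Commonsense rules: (scene concept, supported answer, rule weight).
rules = [("park", "frisbee", 1.2), ("park", "kite", 0.8),
         ("table", "pizza", 1.5)]
scene_concepts = {"park"}  # hypothetical scene-recognition output

def rerank(scores, rules, concepts):
    """Boost each answer's log-score by the weights of the
    commonsense rules whose premises hold, then renormalize."""
    logits = {a: math.log(s) for a, s in scores.items()}
    for premise, answer, w in rules:
        if premise in concepts and answer in logits:
            logits[answer] += w            # rule fires: boost the answer
    z = sum(math.exp(v) for v in logits.values())
    return {a: math.exp(v) / z for a, v in logits.items()}

print(rerank(detector_scores, rules, scene_concepts))
# "frisbee" and "kite" gain probability mass because the scene is a park.
```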
Contributors: Aditya, Somak (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Thesis advisor) / Aloimonos, Yiannis (Committee member) / Lee, Joohyung (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
A growing understanding of the neural code and how to speak it has allowed notable advances in neural prosthetics. With commercially available implantable systems with bi-directional neural communication on the horizon, there is an increasing imperative to develop high-resolution interfaces that can survive the implant environment and be well tolerated by the nervous system under chronic use. For sensory encoding, the interface should optimally operate at a scale sufficient to evoke perception yet focal enough to maximize resolution and evoke more complex and nuanced sensations. Microelectrode arrays can maintain high spatial density, operating on the scale of cortical columns, and can be either penetrating or non-penetrating. The non-penetrating subset sits on the tissue surface without puncturing the parenchyma and is known to engender minimal tissue response and less damage than its penetrating counterpart, improving long-term viability in vivo. Provided non-penetrating microelectrodes can consistently evoke perception and maintain a localized region of activation, they may offer an ideal platform for a high-performing neural prosthesis; this dissertation explores their functional capacity.

The scale at which non-penetrating electrode arrays can interface with cortex is first evaluated in the context of extracting useful information. Articulate movements were decoded from surface microelectrodes, and spatial analysis revealed unique signal content despite dense electrode spacing. With a basis for data extraction established, the focus shifts to the information-encoding half of neural interfaces. Finite element modeling was used to compare tissue recruitment under surface stimulation across electrode scales. Results indicated that charge-density-based metrics provide a reasonable approximation of the current levels required to evoke a visual sensation, and that tissue recruitment increases exponentially with electrode diameter. Micro-scale electrodes (0.1–0.3 mm diameter) could sufficiently activate layers II/III in a model tuned to striate cortex while maintaining focal radii of activated tissue.

In vivo testing proceeded in a nonhuman primate model. Stimulation consistently evoked visual percepts at safe current thresholds, and perception thresholds tracked across one year remained stable with minimal fluctuation. Modulating waveform parameters proved useful in reducing the charge required to evoke perception: pulse frequency and phase asymmetry were each used to reduce thresholds, improve charge efficiency, and lower the charge-per-phase and charge-density metrics associated with tissue damage. No impairments to photic perception were observed during the course of the study, suggesting limited tissue damage from array implantation or electrically induced neurotoxicity. The subject consistently identified stimulation on closely spaced electrodes (2 mm center-to-center) as separate percepts, indicating that discrete resolution below one visual degree may be feasible with this platform. Although continued testing is necessary, these preliminary results support epicortical microelectrode arrays as a stable platform for interfacing with neural tissue and a viable option for bi-directional BCI applications.
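The charge metrics above follow directly from the stimulation parameters. The sketch below computes charge per phase and charge density for a disc electrode and evaluates the Shannon (1992) criterion often used as a tissue-damage boundary; the example current, pulse width, and diameter are hypothetical values in the micro-scale range discussed, not figures from the dissertation.

```python
import math

def stim_safety(current_ua, pulse_us, diameter_mm):
    """Charge per phase and charge density for a disc electrode,
    plus the Shannon (1992) criterion k = log10(D) + log10(Q),
    with k <= ~1.85 commonly treated as the safe boundary."""
    q_uc = current_ua * pulse_us * 1e-6           # uA * us -> uC per phase
    area_cm2 = math.pi * (diameter_mm / 20.0)**2  # radius in cm = mm / 20
    d = q_uc / area_cm2                           # uC / cm^2 / phase
    k = math.log10(d) + math.log10(q_uc)
    return q_uc, d, k

# Hypothetical example: 100 uA, 200 us/phase, 0.3 mm diameter electrode.
q, d, k = stim_safety(100, 200, 0.3)
print(f"Q = {q:.3f} uC/phase, D = {d:.1f} uC/cm^2, Shannon k = {k:.2f}")
# Q = 0.020 uC/phase, D ~ 28 uC/cm^2, k ~ -0.25 (below the 1.85 boundary)
```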
Contributors: Oswalt, Denise (Author) / Greger, Bradley (Thesis advisor) / Buneo, Christopher (Committee member) / Helms-Tillery, Stephen (Committee member) / Mirzadeh, Zaman (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Why do many animals possess multiple classes of photoreceptors that vary in the wavelengths of light to which they are sensitive? Multiple spectral photoreceptor classes are a requirement for true color vision. However, animals may have unconventional vision, in which multiple spectral channels broaden the range of wavelengths that can be detected, or in which only a subset of receptors is used for specific behaviors. Branchiopod crustaceans are of interest for the study of unconventional color vision because they express multiple visual pigments in their compound eyes, have a simple repertoire of visually guided behavior, inhabit unique and highly variable light environments, and possess secondary neural simplifications. I first tested the behavioral responses of two representative branchiopod species from separate orders: the fairy shrimp Streptocephalus mackini (Anostraca) and the tadpole shrimp Triops longicaudatus (Notostraca). I found that they maintain vertical position in the water column over a broad range of intensities and wavelengths, and respond behaviorally even at intensities below those of starlight. Accordingly, the light intensities of their habitats at shallow depths tend to be dimmer than those of terrestrial habitats under starlight. Using models of how their compound eyes and the first neuropil of the optic lobe process visual cues, I infer that both orders of branchiopods use spatial summation across multiple compound-eye ommatidia to respond at low intensities. Then, to understand whether branchiopods use unconventional vision to guide these behaviors, I took electroretinographic recordings (ERGs) from their compound eyes and used models of spectral absorptance in a multimodel selection approach to make inferences about the number of photoreceptor classes in their eyes. I infer that both species have four spectral classes of photoreceptors that contribute to their ERGs, suggesting that unconventional vision guides the described behavior. I extended the same modeling approach to other organisms, finding that the model inferences align with the empirically determined number of photoreceptor classes across this diverse set. This dissertation expands the conceptual framework of color vision research, indicating that unconventional vision is more widespread than previously considered, and explains why some organisms have more spectral classes than would be expected from their behavioral repertoire.
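Spectral-absorptance models of this kind are commonly built from visual pigment templates. The sketch below uses the widely cited Govardovskii et al. (2000) A1 alpha-band template (beta-band omitted) and compares one- and two-pigment models by AIC on synthetic data; it illustrates the multimodel selection idea with assumed lambda-max values and weights, not the thesis' actual fitting code.

```python
import numpy as np

def govardovskii_a1(wl, lmax):
    """Alpha-band absorbance template for an A1 visual pigment
    (Govardovskii et al., 2000); wl and lmax in nm."""
    x = lmax / wl
    a = 0.8795 + 0.0459 * np.exp(-(lmax - 300.0)**2 / 11940.0)
    return 1.0 / (np.exp(69.7 * (a - x)) + np.exp(28.0 * (0.922 - x))
                  + np.exp(-14.9 * (1.104 - x)) + 0.674)

def aic_for_model(wl, sensitivity, lmaxs, weights):
    """AIC of a weighted sum of pigment templates, evaluated at
    given parameters (a minimal sketch; real use would fit them)."""
    model = sum(w * govardovskii_a1(wl, lm) for w, lm in zip(weights, lmaxs))
    rss = np.sum((sensitivity - model)**2)
    n, k = len(wl), 2 * len(lmaxs)          # one lmax + one weight per class
    return n * np.log(rss / n) + 2 * k

wl = np.arange(350.0, 651.0, 5.0)
rng = np.random.default_rng(1)
# Synthetic spectral sensitivity from two hypothetical pigment classes:
sens = (0.6 * govardovskii_a1(wl, 440.0) + 0.4 * govardovskii_a1(wl, 560.0)
        + rng.normal(0.0, 0.02, wl.size))
print(aic_for_model(wl, sens, [500.0], [1.0]))              # one class
print(aic_for_model(wl, sens, [440.0, 560.0], [0.6, 0.4]))  # two: lower AIC
```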
Contributors: Lessios, Nicolas (Author) / Rutowski, Ronald L. (Thesis advisor) / Cohen, Jonathan H. (Thesis advisor) / Harrison, John (Committee member) / Neuer, Susanne (Committee member) / McGraw, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The exponential rise of unmanned aerial vehicles has created a need for accurate pose estimation under extreme conditions. Visual odometry (VO) is the estimation of a vehicle's position and orientation from analysis of a sequence of images captured by a camera mounted on it. VO offers a cheap and relatively accurate alternative to conventional odometry techniques such as wheel odometry, inertial measurement systems, and the global positioning system (GPS). This thesis implements and analyzes the performance of a two-camera, stereo-based visual odometry (SVO) system in the presence of deterrent factors such as shadows, extremely bright outdoor scenes, and wet conditions. To allow the implementation of VO on any generic vehicle, the porting of the VO algorithm to Android handsets is also discussed. The SVO is implemented in three steps. In the first step, a dense disparity map for the scene is computed using the sum-of-absolute-differences technique for stereo matching on rectified and pre-filtered stereo frames; epipolar geometry is used to simplify the matching problem. The second step involves feature detection and temporal matching: features are detected with the Harris corner detector and matched between consecutive frames using the Lucas-Kanade feature tracker. In the third step, the 3D coordinates of the matched features are computed from the disparity map obtained in the first step, and the rotation and translation mapping them between frames are computed by least-squares minimization with the aid of singular value decomposition, with random sample consensus (RANSAC) used for outlier rejection. The accuracy of the algorithm is quantified by the final position error, the difference between the final position computed by the SVO algorithm and the ground-truth position obtained from GPS. The SVO showed an error of around 1% under normal conditions over a 60 m path and around 3% in bright conditions over a 130 m path. The algorithm suffered in the presence of shadows and vibrations, with errors of around 15% over path lengths of 20 m and 100 m, respectively.
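The core of the third step — a least-squares rigid transform from matched 3-D points via singular value decomposition, wrapped in RANSAC — can be sketched as follows. This is a minimal illustration using the classic Arun et al. (1987) solution; the threshold and iteration count are assumptions, not values from the thesis.

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares R, t mapping point set P onto Q via SVD
    (Arun et al., 1987). P, Q are (N, 3) arrays of matched 3-D
    feature positions from consecutive frames."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def ransac_pose(P, Q, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
    """RANSAC wrapper: fit on minimal 3-point samples, keep the pose
    with the most inliers, then refit on all inliers."""
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = rigid_transform_3d(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        n_in = int((err < thresh).sum())
        if n_in > best_inliers:
            best, best_inliers = (R, t), n_in
    inliers = np.linalg.norm((P @ best[0].T + best[1]) - Q, axis=1) < thresh
    return rigid_transform_3d(P[inliers], Q[inliers])
```

Composing the per-frame (R, t) estimates yields the vehicle trajectory whose endpoint is compared against the GPS ground truth.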
Contributors: Dhar, Anchit (Author) / Saripalli, Srikanth (Thesis advisor) / Li, Baoxin (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
One of the most pronounced issues affecting fisheries management today is bycatch, the unintentional capture of non-target species of marine life. Bycatch has proven detrimental to many species, including marine megafauna and pelagic fishes. One method of reducing bycatch is illuminated gillnets, which exploit differences in visual capabilities and behavior between bycatch species and target fish catch. To date, all studies of the effects of net illumination on bycatch and target fish catch have been conducted at night. In this study, the effects of net illumination on bycatch, target fish catch, and market value were compared between night and day periods in Baja California Sur, Mexico. It was found that i) net illumination is effective (p < 0.05) at reducing finfish bycatch both during the day and at night; ii) net illumination at night is more effective (p < 0.05) at reducing bycatch of elasmobranchs, Humboldt squid, and aggregate bycatch than during the day; iii) time of day had no effect (p > 0.05) on sea turtle bycatch; and iv) net illumination did not significantly (p > 0.05) affect target catch or market value at night or during the day. These results suggest that net illumination may be an effective strategy for reducing finfish bycatch in fisheries that operate during the day or across 24 h periods, and is especially effective for reducing elasmobranch, Humboldt squid, and total bycatch biomass at night.
Contributors: Denton, Kyli Elise (Author) / Senko, Jesse (Thesis advisor) / Neuer, Susanne (Thesis advisor) / Pratt, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2021