Matching Items (11)

134241-Thumbnail Image.png

Bulgakov's Leap of Faith

Description

Mikhail Bulgakov's The Master and Margarita has been misunderstood by some scholars in their interpretation of the character of Ivan, otherwise known as Homeless. Past researchers have looked at Ivan and found him to be a spiritual failure for not carrying on the tale of the Master's novel. In this paper, it is argued that Ivan the poet actually achieves a spiritual state superior to that of the other characters in the novel. By the novel's end, readers witness a rebirth of Ivan into one who yearly has an intimate spiritual vision involving Yeshua. Meanwhile, other characters like the Master or his love Margarita end the novel in only a purgatory-like space, which leaves them neither in the arms of a spiritual experience nor on earth. Ivan, by contrast, is able to experience both the spiritual and the physical at the same time. Conclusions of this nature are drawn from the writings of the Orthodox Father Pavel Florensky, whose writings on aesthetics are heavily reflected in the novel. Ivan comes to embody the standard of Fr. Florensky's writings, which glorify a person who has turned toward God. This, then, is the ultimate in beauty: the reflection of God, who is the standard of goodness and beauty. At the novel's end Ivan is able to experience a sense of closeness with God through a vision in which he sees a Christlike figure. The dream leaves him calm and at peace. He has in effect turned toward God and so embodies the goodness of God. Therefore, Ivan's spiritual state is that of one who has garnered a closer connection with God than any other character in the novel.

Date Created
  • 2017-05

132947-Thumbnail Image.png

Comparison of Widespread State Variation in Optometric Care

Description

Optometry is a field in the United States dedicated to analyzing the health of eyes and offering corrective lenses and/or treatments to improve a patient's ocular health and vision. Since its origin in the U.S. in the late 19th century, the field of optometry has been met with strong opposition from the medical community, ophthalmologists in particular. This ongoing feud between optometrists and ophthalmologists, medical doctors who also specialize in eye health and perform eye surgeries, continues today as ophthalmologists push back against optometrists' attempts to expand their scope of practice. Expanding that scope to include certain eye surgeries would save patients both time and money. This is just one factor impacting patients; another is the widely varied state laws surrounding eye health. The procedures optometrists are able to perform are decided by state laws, which leads to vast discrepancies: optometrists in one state can perform laser eye surgeries, while optometrists in a nearby state cannot even provide simple treatments for ocular diseases they diagnose. In this study, three states were analyzed to showcase these variations in available treatment and to demonstrate both the positive and negative impacts they are having on patients. First was Massachusetts, which has one of the best medical care systems in the U.S. but one of the worst for vision care. As the only state that does not allow optometrists to treat glaucoma, and one of two states that do not allow optometrists to prescribe medications, Massachusetts forces patients to visit an ophthalmologist for treatment, which adds cost and delay and can allow conditions to worsen. Second was Oklahoma, which in 1998 became the first U.S. state to allow optometrists to perform laser eye surgeries.
This legislation expanded Oklahoma residents' access to treatment; before it, patients would have to travel to other cities or counties to visit one of the few ophthalmologists in the state. Lastly was Maine, which in 2015 passed legislation allowing optometrists to regain control of their field from vision insurance companies, which can no longer dictate the fees patients are charged for services the companies do not cover. This study concluded that the U.S. needs a universal vision care system that expands optometrists' scope of practice and puts them, rather than state governments or vision insurance companies, in control of their own field.

Date Created
  • 2019-05

135494-Thumbnail Image.png

The Role of Visual Attention In Auditory Localization

Description

Hearing and vision are two senses that most individuals use on a daily basis. The simultaneous presentation of competing visual and auditory stimuli often affects our sensory perception. Vision is often believed to be the more dominant sense over audition in spatial localization tasks. Recent work suggests that visual information can influence auditory localization when the sound is emanating from a physical location or from a phantom location generated through stereophony (the so-called "summing localization"). The present study investigates the role of cross-modal fusion in an auditory localization task. The focus of the experiments is twofold: (1) to reveal the extent of fusion between auditory and visual stimuli and (2) to investigate how fusion is correlated with the amount of visual bias a subject experiences. We found that fusion often occurs when the light flash and "summing localization" stimuli were presented from the same hemifield. However, little correlation was observed between the magnitude of visual bias and the extent of perceived fusion between light and sound stimuli. In some cases, subjects reported distinctive locations for light and sound and still experienced visual capture.
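The stereophonic "summing localization" stimuli mentioned above are conventionally described by the tangent panning law. The sketch below is a generic illustration of that law, not the study's actual stimulus code; the loudspeaker angle and gain values are assumed for the example.

```python
import math

def phantom_angle(g_left, g_right, speaker_angle_deg=30.0):
    """Estimate the perceived azimuth of a phantom ("summing localization")
    source created by two loudspeakers at +/- speaker_angle_deg, using the
    stereophonic tangent panning law:
        tan(theta) / tan(theta0) = (gL - gR) / (gL + gR)
    """
    theta0 = math.radians(speaker_angle_deg)
    ratio = (g_left - g_right) / (g_left + g_right)
    return math.degrees(math.atan(ratio * math.tan(theta0)))

# Equal gains place the phantom source at the midline.
print(phantom_angle(1.0, 1.0))   # 0.0
# All signal in the left loudspeaker places it at the speaker itself.
print(phantom_angle(1.0, 0.0))   # ~30.0
```

Varying the gain ratio between the two channels sweeps the phantom source between the loudspeakers, which is how a sound can appear to emanate from a location where no physical source exists.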

Date Created
  • 2016-05

153054-Thumbnail Image.png

The significance of microsaccades for perception and oculomotor control

Description

During attempted fixation, the eyes are not still but continue to produce so-called "fixational eye movements", which include microsaccades, drift, and tremor. Microsaccades are thought to help prevent vision loss during fixation, to restore faded vision, and to correct fixation errors, but how they contribute to these functions remains a matter of debate. This dissertation presents the results of four experiments conducted to address current controversies concerning the role of microsaccades in visibility and oculomotor control.

The first two experiments set out to correlate microsaccade production with the visibility of foveal and peripheral targets of varied spatial frequencies, during attempted fixation. The results indicate that microsaccades restore the visibility of both peripheral targets and targets presented entirely within the fovea, as a function of their spatial frequency characteristics.
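Microsaccade production in experiments of this kind is commonly quantified with a velocity-threshold detection algorithm in the spirit of Engbert and Kliegl (2003). The sketch below is a simplified, single-axis illustration with made-up sample data, not the dissertation's analysis code.

```python
import math

def detect_microsaccades(x, fs=500.0, lam=6.0):
    """Flag candidate microsaccade samples in a 1-D gaze trace.

    x   : horizontal gaze positions in degrees, one per sample
    fs  : sampling rate in Hz
    lam : threshold multiplier applied to a median-based velocity spread
    Returns the sample indices whose |velocity| exceeds the threshold.
    """
    # Two-point central-difference velocity estimate (deg/s).
    v = [(x[i + 1] - x[i - 1]) * fs / 2.0 for i in range(1, len(x) - 1)]
    # Robust, median-based estimate of the velocity spread.
    med = sorted(v)[len(v) // 2]
    msd = math.sqrt(sorted((vi - med) ** 2 for vi in v)[len(v) // 2])
    thresh = lam * max(msd, 1e-6)  # guard: a noiseless toy trace gives msd == 0
    return [i + 1 for i, vi in enumerate(v) if abs(vi) > thresh]

# Flat fixation trace with one small, fast position jump inserted.
trace = [0.0] * 50 + [0.1, 0.2, 0.3] + [0.3] * 50
print(detect_microsaccades(trace))  # flags the samples around the jump
```

Consecutive flagged samples would then be merged into discrete microsaccade events and screened by amplitude, which is omitted here for brevity.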

The last two experiments set out to determine the role of microsaccades and drifts on the correction of gaze-position errors due to blinks in human and non-human primates, and to characterize microsaccades forming square-wave jerks (SWJs) in non-human primates. The results showed that microsaccades, but not drifts, correct gaze-position errors due to blinks, and that SWJ production and dynamic properties are equivalent in human and non-human primates.

These combined findings suggest that microsaccades, like saccades, serve multiple and non-exclusive functional roles in vision and oculomotor control, as opposed to having a single specialized function.

Date Created
  • 2014

154916-Thumbnail Image.png

Puzzling connections between behavior, spectral photoreceptor classes and visual system simplification: branchiopod crustaceans and unconventional color vision

Description

Why do many animals possess multiple classes of photoreceptors that vary in the wavelengths of light to which they are sensitive? Multiple spectral photoreceptor classes are a requirement for true color vision. However, animals may have unconventional vision, in which multiple spectral channels broaden the range of wavelengths that can be detected, or in which only a subset of receptors is used for specific behaviors. Branchiopod crustaceans are of interest for the study of unconventional color vision because they express multiple visual pigments in their compound eyes, have a simple repertoire of visually guided behavior, inhabit unique and highly variable light environments, and possess secondary neural simplifications. I first tested the behavioral responses of two representative species of branchiopods from separate orders: the anostracan Streptocephalus mackini (fairy shrimp) and the notostracan Triops longicaudatus (tadpole shrimp). I found that they maintain vertical position in the water column over a broad range of intensities and wavelengths, and respond behaviorally even at intensities below those of starlight. Accordingly, light intensities in their habitats at shallow depths tend to be dimmer than those of terrestrial habitats under starlight. Using models of how their compound eyes and the first neuropil of their optic lobe process visual cues, I infer that both orders of branchiopods use spatial summation across multiple compound eye ommatidia to respond at low intensities. Then, to understand whether branchiopods use unconventional vision to guide these behaviors, I took electroretinographic recordings (ERGs) from their compound eyes and used models of spectral absorptance in a multimodel selection approach to make inferences about the number of photoreceptor classes in their eyes. I infer that both species have four spectral classes of photoreceptors that contribute to their ERGs, suggesting that unconventional vision guides the described behavior.
I extended the same modeling approach to other organisms, finding that the model inferences align with the empirically determined number of photoreceptor classes across this diverse set of organisms. This dissertation expands the conceptual framework of color vision research, indicating that unconventional vision is more widespread than previously considered, and explains why some organisms have more spectral classes than would be expected from their behavioral repertoire.
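Spectral-absorptance modeling of this kind typically starts from a visual pigment template. The sketch below implements the alpha band of the widely used Govardovskii et al. (2000) A1 template (constants reproduced from that template); the beta band and the thesis's multimodel selection step are omitted, so this is an illustration rather than the dissertation's model code.

```python
import math

def a1_template(wavelength_nm, lmax):
    """Approximate alpha-band absorbance of an A1 visual pigment peaking at
    lmax, following the Govardovskii et al. (2000) template (beta band
    omitted for brevity). Returns a value near 1 at the peak wavelength.
    """
    x = lmax / wavelength_nm
    # Peak-dependent shape parameter from the template.
    a = 0.8795 + 0.0459 * math.exp(-((lmax - 300.0) ** 2) / 11940.0)
    return 1.0 / (math.exp(69.7 * (a - x)) +
                  math.exp(28.0 * (0.922 - x)) +
                  math.exp(-14.9 * (1.104 - x)) + 0.674)

# A 535-nm pigment absorbs maximally near 535 nm and falls off on both sides.
print(round(a1_template(535.0, 535.0), 2))
print(round(a1_template(450.0, 535.0), 2))
```

Summing several such templates with different lmax values, then comparing candidate sums against measured ERG spectral sensitivity via an information criterion such as AIC, is the general shape of the multimodel selection approach described above.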

Date Created
  • 2016

153419-Thumbnail Image.png

The impact of visual input on the ability of bilateral and bimodal cochlear implant users to accurately perceive words and phonemes in experimental phrases

Description

A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and to interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear if residual hearing is present. However, there is potentially more to subsequent speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal listeners vision plays a role, and Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions, and recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group experienced a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. Results therefore suggest that vision may provide perceptual benefits for bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not provide the bimodal participants significantly increased access to place and stress cues.
Therefore, the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
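The percent-words-correct measure used above can be illustrated with a minimal word-matching scorer. This is a toy convention with a made-up phrase, not the study's actual scoring protocol, which also classifies consonant errors and lexical boundary error types.

```python
def percent_words_correct(target, response):
    """Position-free word scoring: the percentage of words in the target
    phrase that also appear in the listener's response, with each target
    word credited at most once.
    """
    pool = list(response.lower().split())
    hits = 0
    for word in target.lower().split():
        if word in pool:
            pool.remove(word)  # consume the matched response word
            hits += 1
    return 100.0 * hits / len(target.split())

# Two of the five target words ("slower", "visit") survive intact.
print(percent_words_correct("amend the slower sunday visit",
                            "mend a slower sunny visit"))  # 40.0
```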

Date Created
  • 2015

156810-Thumbnail Image.png

Non-Penetrating Microelectrode Interfaces for Cortical Neuroprosthetic Applications with a Focus on Sensory Encoding: Feasibility and Chronic Performance in Striate Cortex

Description

Growing understanding of the neural code and how to speak it has allowed for notable advancements in neural prosthetics. With commercially available implantable systems offering bidirectional neural communication on the horizon, there is an increasing imperative to develop high-resolution interfaces that can survive the implant environment and be well tolerated by the nervous system under chronic use. The sensory encoding aspect optimally interfaces at a scale sufficient to evoke perception but focal in nature, to maximize resolution and evoke more complex and nuanced sensations. Microelectrode arrays can maintain high spatial density, operating on the scale of cortical columns, and can be either penetrating or non-penetrating. The non-penetrating subset sits on the tissue surface without puncturing the parenchyma and is known to engender minimal tissue response and less damage than its penetrating counterpart, improving long-term viability in vivo. Provided non-penetrating microelectrodes can consistently evoke perception and maintain a localized region of activation, they may provide an ideal platform for a high-performing neural prosthesis; this dissertation explores their functional capacity.

The scale at which non-penetrating electrode arrays can interface with cortex is evaluated in the context of extracting useful information. Articulate movements were decoded from surface microelectrodes, and additional spatial analysis revealed unique signal content despite dense electrode spacing. With a basis for data extraction established, the focus shifts toward the information-encoding half of neural interfaces. Finite element modeling was used to compare tissue recruitment under surface stimulation across electrode scales. Results indicated that charge density-based metrics provide a reasonable approximation of the current levels required to evoke a visual sensation, and showed that tissue recruitment increases exponentially with electrode diameter. Micro-scale electrodes (0.1–0.3 mm diameter) could sufficiently activate layers II/III in a model tuned to striate cortex while maintaining focal radii of activated tissue.

In vivo testing proceeded in a nonhuman primate model. Stimulation consistently evoked visual percepts at safe current thresholds. Tracking perception thresholds across one year revealed stable values with minimal fluctuation. Modulating waveform parameters proved useful in reducing the charge required to evoke perception: pulse frequency and phase asymmetry were each used to reduce thresholds, improve charge efficiency, and lower the charge-per-phase and charge-density metrics associated with tissue damage. No impairments to photic perception were observed during the course of the study, suggesting limited tissue damage from array implantation or electrically induced neurotoxicity. The subject consistently identified stimulation on closely spaced electrodes (2 mm center-to-center) as separate percepts, indicating that discrete resolution finer than one visual degree may be feasible with this platform. Although continued testing is necessary, preliminary results support epicortical microelectrode arrays as a stable platform for interfacing with neural tissue and a viable option for bidirectional BCI applications.
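The charge-per-phase and charge-density metrics above follow directly from the pulse parameters and electrode geometry. The sketch below computes them for a hypothetical disc electrode (all parameter values are illustrative assumptions, not the dissertation's stimulation settings), along with the Shannon (1992) metric k = log10(Q) + log10(QD) that is often used as a tissue-safety reference.

```python
import math

def charge_metrics(current_ua, pulse_width_us, diameter_mm):
    """Per-phase charge and charge density for a disc electrode.

    current_ua     : pulse amplitude in microamps
    pulse_width_us : duration of one phase in microseconds
    diameter_mm    : electrode disc diameter in millimetres
    Returns (charge in uC/phase, charge density in uC/cm^2/phase).
    """
    charge_uc = current_ua * pulse_width_us * 1e-6        # uA * us -> uC
    area_cm2 = math.pi * (diameter_mm / 10.0 / 2.0) ** 2  # mm -> cm radius
    return charge_uc, charge_uc / area_cm2

def shannon_k(charge_uc, density_uc_cm2):
    """Shannon (1992) metric k = log10(Q) + log10(QD); values well above
    ~1.85 have historically been associated with tissue damage."""
    return math.log10(charge_uc) + math.log10(density_uc_cm2)

# Hypothetical 0.3 mm electrode driven at 100 uA with 200 us phases:
q, qd = charge_metrics(100.0, 200.0, 0.3)
print(q, qd, shannon_k(q, qd))  # Q ~ 0.02 uC/phase, QD ~ 28 uC/cm^2/phase
```

Shrinking the electrode diameter at fixed charge raises the charge density sharply, which is why micro-scale electrodes demand the kind of waveform optimization (frequency, phase asymmetry) described above.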

Date Created
  • 2018

151901-Thumbnail Image.png

Ambient light environment and the evolution of brightness, chroma, and perceived chromaticity in the warning signals of butterflies

Description

1. Aposematic signals advertise prey distastefulness or metabolic unprofitability to potential predators, and have evolved independently in many prey groups over the course of evolutionary history as a means of protection from predation. Most aposematic signals investigated to date exhibit highly chromatic patterning; however, relatives in these toxic groups with patterns of very low chroma have been largely overlooked.
2. We propose that bright displays with low chroma arose in toxic prey species because they were more effective at deterring predation than were their chromatic counterparts, especially when viewed in relatively low light environments such as forest understories.
3. To test this prediction, we analyzed the reflectance and radiance of color patches on the wings of 90 tropical butterfly species that belong to groups with documented toxicity and that vary in their habitat preferences: warning signal chroma and perceived chromaticity are expected to be higher, and brightness lower, in species that fly in open environments compared to those that fly in forested environments.
4. Analyses of the reflectance and radiance of warning color patches and predator visual modeling support this prediction. Moreover, phylogenetic tests, which correct for statistical non-independence due to the phylogenetic relatedness of test species, also support the hypothesis of an evolutionary correlation between the perceived chromaticity of aposematic signals and the flight habits of the butterflies that exhibit them.
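The brightness and chroma measurements described above can be illustrated with a minimal pair of spectral summaries. This is one simple convention among several used in the color-signal literature, applied to made-up spectra; it is not this study's exact metric or its predator visual model.

```python
def brightness_and_chroma(reflectance):
    """Crude summaries of a reflectance spectrum sampled at evenly spaced
    wavelengths: brightness as mean reflectance, and chroma as the contrast
    between the long- and short-wavelength halves of the spectrum,
    normalized by total reflectance.
    """
    n = len(reflectance)
    total = sum(reflectance)
    brightness = total / n
    chroma = abs(sum(reflectance[n // 2:]) - sum(reflectance[:n // 2])) / total
    return brightness, chroma

# Hypothetical 400-700 nm spectra in ten equal bins:
grey = [0.5] * 10              # achromatic: bright but zero chroma
red = [0.05] * 5 + [0.9] * 5   # long-wavelength step: strong chroma
print(brightness_and_chroma(grey))  # (0.5, 0.0)
print(brightness_and_chroma(red))
```

A low-chroma, high-brightness warning signal corresponds to a spectrum like `grey`: it reflects much light overall but shows little spectral contrast for a receiver to perceive as hue.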

Date Created
  • 2013

156586-Thumbnail Image.png

Knowledge and Reasoning for Image Understanding

Description

Image Understanding is a long-established discipline in computer vision, encompassing a body of advanced image processing techniques that are used to locate ("where") and to characterize and recognize ("what") objects, regions, and their attributes in an image. However, the notion of "understanding" (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty. Thus, the expected functionalities of an intelligent Image Understanding system can be expressed in terms of the functionalities required to answer questions about an image. Answering questions about images requires three primary components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question that asks beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning.

Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images. This survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods to solve several applications and show that these approaches benefit, in terms of accuracy and interpretability, from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that can combine vision, knowledge, and reasoning modules and achieve large performance boosts over state-of-the-art methods.
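The combination of vision output with background knowledge can be illustrated with a deliberately tiny toy example: the detections and facts below are made up, and the lookup is far simpler than the probabilistic logical reasoning engine the thesis describes, but it shows how a question the detections alone cannot answer becomes answerable once commonsense facts are added.

```python
# Simulated "what/where" output of a vision module for one image.
detections = {"dog", "leash", "sidewalk"}

# A toy commonsense knowledge base of (subject, relation) -> value facts.
knowledge = {
    ("dog", "is_a"): "animal",
    ("leash", "used_for"): "walking a pet",
}

def answer(question):
    """Answer two fixed question forms by joining detections with facts."""
    if question == "Is there an animal in the image?":
        # Requires the is_a fact: "animal" never appears in the detections.
        return any(knowledge.get((obj, "is_a")) == "animal"
                   for obj in detections)
    if question == "What is the person likely doing?":
        # Requires affordance knowledge about a detected object.
        for obj in detections:
            activity = knowledge.get((obj, "used_for"))
            if activity:
                return activity
    return None

print(answer("Is there an animal in the image?"))   # True
print(answer("What is the person likely doing?"))   # walking a pet
```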

Date Created
  • 2018

150499-Thumbnail Image.png

Influence of sensorimotor noise on the planning and control of reaching in 3-dimensional space

Description

The ability to plan, execute, and control goal-oriented reaching and grasping movements is among the most essential functions of the brain. Yet these movements are inherently variable, a result of the noise pervading the neural signals underlying sensorimotor processing. The specific influences and interactions of these noise processes remain unclear. Thus, several studies were performed to elucidate the role and influence of sensorimotor noise on movement variability. The first study focuses on sensory integration and movement planning across the reaching workspace. An experiment was designed to examine the relative contributions of vision and proprioception to movement planning by measuring the rotation of the initial movement direction induced by a perturbation of the visual feedback prior to movement onset. The results suggest that the contribution of vision was relatively consistent across the evaluated workspace depths; however, the influence of vision differed between the vertical and lateral axes, indicating that additional factors beyond vision and proprioception influence the planning of 3-dimensional movements. Whereas the first study investigated the role of noise in sensorimotor integration, the second and third studies investigate the relative influence of sensorimotor noise on reaching performance. Specifically, they evaluate how the characteristics of the neural processing that underlies movement planning and execution manifest in movement variability during natural reaching. Subjects performed reaching movements with and without visual feedback throughout the movement, and the patterns of endpoint variability were compared across movement directions. The results of these studies suggest a primary role of visual feedback noise in shaping patterns of variability and in determining the relative influence of planning- and execution-related noise sources.
The final work considers a computational approach to characterizing how sensorimotor processes interact to shape movement variability. A model of multi-modal feedback control was developed to simulate the interaction of planning and execution noise on reaching variability. The model predictions suggest that anisotropic properties of feedback noise significantly affect the relative influence of planning and execution noise on patterns of reaching variability.
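The weighting of visual and proprioceptive information that such models formalize can be illustrated with the textbook maximum-likelihood cue-integration rule. This is a standard one-axis sketch with hypothetical variances, not the thesis's multi-modal feedback-control model.

```python
def integrate_cues(visual_est, visual_var, prop_est, prop_var):
    """Maximum-likelihood (inverse-variance weighted) fusion of a visual
    and a proprioceptive estimate of hand position along one axis.
    Returns the combined estimate and its variance; the combined variance
    is never larger than either input variance.
    """
    w_v = (1.0 / visual_var) / (1.0 / visual_var + 1.0 / prop_var)
    est = w_v * visual_est + (1.0 - w_v) * prop_est
    var = 1.0 / (1.0 / visual_var + 1.0 / prop_var)
    return est, var

# Hypothetical: precise vision (variance 1) vs noisy proprioception (variance 4).
est, var = integrate_cues(visual_est=0.0, visual_var=1.0,
                          prop_est=5.0, prop_var=4.0)
print(est, var)  # estimate pulled toward vision; variance below both inputs
```

Making one cue's noise anisotropic (different variances along the vertical and lateral axes) is the kind of manipulation that, in the full feedback-control model, reshapes the predicted patterns of endpoint variability.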

Date Created
  • 2012