Description
A problem of interest in theoretical physics is the evaporation of black holes via Hawking radiation on a fixed background. We approach this problem by considering an electromagnetic analogue in which Hawking radiation is replaced by the Schwinger effect. We treat the case of massless QED in 1+1 dimensions using the path integral approach to quantum field theory, and discuss the Feynman diagrams resulting from our analysis. The results of this thesis may help identify a version of the Schwinger effect that can be solved both exactly and perturbatively, which in turn may provide insights into the gravitational problem of Hawking radiation.
Contributors: Dhumuntarao, Aditya (Author) / Parikh, Maulik (Thesis director) / Davies, Paul C. W. (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Optogenetics enables control of membrane dynamics through transfected light-sensitive proteins (opsins) and light stimulation. However, as the field continues to grow, the original biological and stimulation tools have become dated or limited in their uses. Organic Light Emitting Diodes (OLEDs) offer optical stimulation with greater resolution, finer control of pixel arrays, and the added functionality of a flexible display, at the cost of lower irradiance power density. This study simulated genetic and optical methods for decreasing the threshold irradiance needed to initiate an action potential in a ChR2-expressing neuron. Simulations show that pulsatile stimulation can decrease threshold irradiance by increasing the overall stimulus duration while keeping individual pulse durations below 5 ms. Furthermore, redistributing Channelrhodopsin-2 (ChR2) to the apical dendrites and changing the stimulation wavelength to 625 nm both result in lower threshold irradiances. The model has notable limitations, with room for improvement in areas such as the light distribution model and the ChR2 dynamics. Nevertheless, the simulations offer valuable insight into new stimulation methods and revisions of existing protocols.
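As a rough illustration of the pulsatile protocol described above, the following Python sketch drives a first-order approximation of ChR2 opening with a pulse train whose individual pulses stay below 5 ms. All rate constants, the saturation law, and the irradiance scale are invented for illustration; this is not the thesis's fitted model.

```python
import numpy as np

# Hypothetical parameters -- not taken from the thesis model.
DT = 1e-4                        # integration step (s)
TAU_ON, TAU_OFF = 1e-3, 10e-3    # assumed ChR2 activation/deactivation constants (s)

def pulse_train(t, pulse_width=4e-3, period=10e-3, irradiance=1.0):
    """Pulsed stimulus: individual pulses under 5 ms, repeated to extend
    the overall stimulus duration, as in the pulsatile protocol above."""
    return irradiance * ((t % period) < pulse_width)

def chr2_open_fraction(t, light):
    """First-order (single-state) approximation of the ChR2 open fraction."""
    o = np.zeros_like(t)
    for i in range(1, len(t)):
        tau = TAU_ON if light[i] > 0 else TAU_OFF
        target = light[i] / (light[i] + 1.0)   # toy saturating light dependence
        o[i] = o[i - 1] + DT * (target - o[i - 1]) / tau
    return o

t = np.arange(0.0, 0.1, DT)
o = chr2_open_fraction(t, pulse_train(t))
print(f"peak open fraction: {o.max():.3f}")
```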
Contributors: Kyeh, James (Author) / Muthuswamy, Jitendran (Thesis director) / Crook, Sharon (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Parkinson's disease is a neurodegenerative disorder of the central nervous system that affects a host of daily activities and involves a variety of symptoms, including tremors, slurred speech, and rigid muscles. It is the second most common movement disorder globally. In Stage 3 of Parkinson's, afflicted individuals begin to develop an abnormal gait pattern known as freezing of gait (FoG), characterized by decreased step length, shuffling, and eventually complete loss of movement; patients become unable to move, which often results in a fall. Surface electromyography (sEMG) is a diagnostic tool that measures electrical activity in the muscles to assess overall muscle function. Most conventional EMG systems, however, are bulky, tethered to a single location, expensive, and primarily used in lab or clinical settings. This project explores an affordable, open-source, and portable platform called OpenBCI (Open Brain-Computer Interface). The proposed device detects gait patterns by leveraging surface EMG signals acquired with the OpenBCI and helps a patient overcome an episode using haptic feedback mechanisms. Previously designed devices with similar aims have used accelerometry for detection and audio or visual feedback mechanisms in their designs.
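To make the detection idea concrete, here is a minimal Python sketch of a windowed RMS-envelope check on one sEMG channel that triggers a haptic stub. The threshold, window length, and freeze criterion are hypothetical placeholders (only the 250 Hz Cyton sample rate is a real OpenBCI figure), and the project's actual detection logic is not reproduced here.

```python
import numpy as np

FS = 250                     # OpenBCI Cyton sample rate (Hz)
WINDOW = FS // 5             # 200 ms analysis window (assumed)
FOG_RMS_THRESHOLD = 20e-6    # hypothetical low-activity threshold (volts)

def rms(window):
    return np.sqrt(np.mean(np.square(window)))

def trigger_haptics():
    print("FoG detected: pulsing haptic motor")  # hardware call would go here

def detect_fog(emg_windows):
    """Flag a freezing-of-gait episode when the sEMG envelope stays below
    threshold for several consecutive windows (placeholder logic)."""
    quiet = 0
    for window in emg_windows:
        quiet = quiet + 1 if rms(window) < FOG_RMS_THRESHOLD else 0
        if quiet >= 5:       # ~1 s of suppressed activity
            trigger_haptics()
            quiet = 0

# Demo on synthetic low-amplitude noise standing in for a live stream.
stream = (np.random.randn(WINDOW) * 1e-6 for _ in range(20))
detect_fog(stream)
```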
Contributors: Anantuni, Lekha (Author) / McDaniel, Troy (Thesis director) / Tadayon, Arash (Committee member) / Harrington Bioengineering Program (Contributor) / School of Human Evolution and Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
It is unknown which regions of the brain are most or least active in golfers during a peak performance state (flow state, or "the zone") on the putting green. To address this issue, electroencephalographic (EEG) recordings were taken from 10 elite golfers while they performed a putting drill consisting of nine putts spaced uniformly around a hole, each five feet away. Data were collected at three time periods: before, during, and after each putt. Galvanic skin response (GSR) was also recorded for each subject. Three of the subjects additionally performed a visualization of the same putting drill, and their brain waves and GSR were recorded and compared with those from their actual performance of the drill. EEG data in the theta (4-7 Hz) and alpha (7-13 Hz) bandwidths at 11 locations across the head were analyzed, quantified using the relative power spectrum. The results showed a higher magnitude of power in both the theta and alpha bandwidths for missed putts than for made putts (p < 0.05), as well as a higher average power in the right hemisphere for made putts. There was not a higher power in the occipital region of the brain, nor a lower power level in the frontal cortical region, during made putts. The hypothesis that the mean power level would differ between actual performance and visualization was also supported.
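The relative power spectrum measure used above can be sketched in a few lines of Python. The sampling rate and the 1-30 Hz normalization band below are assumptions for illustration, not parameters from the study.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # hypothetical EEG sampling rate (Hz); the study's hardware is not specified

def relative_band_power(eeg, fs=FS, band=(4.0, 7.0), total=(1.0, 30.0)):
    """Relative power = band power / total power, from a Welch PSD estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    def integrate(lo, hi):
        mask = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[mask], freqs[mask])
    return integrate(*band) / integrate(*total)

# Example: theta (4-7 Hz) and alpha (7-13 Hz) relative power for one channel.
eeg = np.random.randn(FS * 10)  # stand-in for a 10 s recording
print("theta:", relative_band_power(eeg, band=(4, 7)))
print("alpha:", relative_band_power(eeg, band=(7, 13)))
```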
Contributors: Carpenter, Andrea (Co-author) / Hool, Nicholas (Co-author) / Muthuswamy, Jitendran (Thesis director) / Crews, Debbie (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The purpose of this study is to analyze the stereotypes surrounding four wind instruments (flutes, oboes, clarinets, and saxophones) and the ways in which those stereotypes propagate through various levels of musical professionalism in Western culture. To determine what these stereotypes might entail, several thousand social media and blog posts were analyzed, and direct quotations detailing the perceived stereotypical personality profiles of each instrument were collected. From these, the three most commonly mentioned characteristics were isolated for each instrument group: female gender, femininity, and giggliness for flutists; intelligence, studiousness, and demographics (specifically, being an Asian male) for clarinetists; quirkiness, eccentricity, and being seen as a misfit for oboists; and overconfidence, attention-seeking behavior, and coolness for saxophonists. From these traits, a survey was drafted that asked participating college-aged musicians multiple-choice, opinion-scale, and short-answer questions gauging how strongly they agreed or disagreed that each trait described the instrument from which it was derived. Their responses were then analyzed to determine how well the researched characteristics correlated with the opinions of modern musicians. It was determined that 75% of the traits isolated for a particular instrument were, in fact, recognized as true in the survey data, demonstrating that the stereotypes do exist and appear to be widely recognizable across many age groups, locations, and levels of musical skill. Further, 89% of participants admitted that the instrument they play has a certain stereotype associated with it, but only 38% identified with that profile. Overall, it was concluded that these stereotypes, which are overwhelmingly negative and gendered by nature, are indeed propagated, but musicians do not appear to want to identify with them, and the stereotypes reflect a more archaic and immature sensibility that does not match the trends observed in modern professional music.
Contributors: Allison, Lauren Nicole (Author) / Bhattacharjya, Nilanjana (Thesis director) / Ankeny, Casey (Committee member) / School of Life Sciences (Contributor) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
The Clean Power Plan seeks to reduce CO2 emissions from the energy industry, the largest source of CO2 emissions in the United States. To comply with the Clean Power Plan, electric utilities in Arizona will need to meet electricity demand while reducing the use of fossil fuel sources in generation. The study first outlines the organization of the power sector in the United States and the structural and price changes attempted in the industry during the period of restructuring. The recent final rule of the Clean Power Plan is then described in detail, with a narrowed focus on Arizona. Data from APS, a representative Arizona utility, are used for the remainder of the analysis to determine the price increase necessary to cut Arizona's CO2 emissions enough to meet the federal goal. The first regression models the variables that affect total demand, and thus generation load, from which the marginal effect of price on demand is estimated. The second regression models CO2 emissions as a function of different levels of generation, allowing the effect of generation on emissions to vary across load ranges, following the logic of the merit order of plants and the changing emissions rates of different sources. Two methods are used to find the percentage price increase needed to meet the CPP goals: one based on the mass-based goal for Arizona, and the other based on Arizona's required percentage reduction. Finally, a price increase is calculated for a future projection using known changes in energy supply.
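A minimal sketch of the two-regression setup, assuming hypothetical variable names and synthetic data in place of the APS dataset; the piecewise load segments stand in for the merit-order logic described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic placeholder data; column names are invented, not from the thesis.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "demand_mwh": rng.random(100) * 1e4,
    "price_cents_kwh": rng.random(100) * 15,
    "cooling_degree_days": rng.random(100) * 30,
    "co2_tons": rng.random(100) * 5e3,
})

# Regression 1: demand on price (plus a weather control) -> marginal price effect.
X1 = sm.add_constant(df[["price_cents_kwh", "cooling_degree_days"]])
demand_model = sm.OLS(df["demand_mwh"], X1).fit()
price_effect = demand_model.params["price_cents_kwh"]

# Regression 2: emissions on generation load, with the slope allowed to differ
# across load ranges (a crude merit-order approximation via piecewise segments).
df["load_low"] = df["demand_mwh"].clip(upper=4e3)
df["load_high"] = (df["demand_mwh"] - 4e3).clip(lower=0)
X2 = sm.add_constant(df[["load_low", "load_high"]])
emissions_model = sm.OLS(df["co2_tons"], X2).fit()

print("marginal effect of price on demand:", price_effect)
print(emissions_model.params)
```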
Contributors: Herman, Laura Alexandra (Author) / Silverman, Daniel (Thesis director) / Kuminoff, Nicolai (Committee member) / Department of Economics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
A semi-implicit, fourth-order time-filtered leapfrog numerical scheme is investigated for accuracy and stability, and applied to several test cases, including one-dimensional advection and diffusion, the anelastic equations to simulate the Kelvin-Helmholtz instability, and the global shallow water spectral model to simulate the nonlinear evolution of twin tropical cyclones. The leapfrog scheme leads to computational modes in the solutions to highly nonlinear systems, and time-filters are often used to damp these modes. The proposed filter damps the computational modes without appreciably degrading the physical mode. Its performance in these metrics is superior to the second-order time-filtered leapfrog scheme developed by Robert and Asselin.
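For context, here is a minimal Python sketch of leapfrog time-stepping with the classic second-order Robert-Asselin filter on 1-D linear advection, i.e., the baseline scheme the proposed fourth-order filter improves upon. The grid size, CFL number, and filter coefficient are illustrative choices; the fourth-order filter itself is not reproduced here.

```python
import numpy as np

# Leapfrog + Robert-Asselin filter for u_t + c u_x = 0 on a periodic domain.
N, c = 128, 1.0
dx = 1.0 / N
dt = 0.4 * dx / c   # CFL-stable step
nu = 0.1            # Asselin filter coefficient (illustrative value)

x = np.linspace(0.0, 1.0, N, endpoint=False)
u_prev = np.sin(2 * np.pi * x)                 # step n-1
u_curr = np.sin(2 * np.pi * (x - c * dt))      # step n (exact-solution start)

def ddx(u):
    """Second-order centered difference on a periodic grid."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

for _ in range(1000):
    u_next = u_prev - 2 * dt * c * ddx(u_curr)            # leapfrog step
    # Robert-Asselin filter: damp the computational mode at step n.
    u_curr = u_curr + nu * (u_next - 2 * u_curr + u_prev)
    u_prev, u_curr = u_curr, u_next

print("max |u| after 1000 steps:", np.abs(u_curr).max())
```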
Created: 2016-05
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to address maintenance-type issues before they fully appear or disrupt processes and daily operations. A key capability is predicting and foreshadowing failures in the system, so that they can be fixed before they turn into large issues. One area where preventive maintenance plays a large role in daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when the car signals that there is an issue (low oil levels, for example). Although this level of maintenance suffices when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that: instead of a human inspecting a car and diagnosing its issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system designed to meet these needs. It comprises a central computer that monitors all major electronic components in an autonomous vehicle through standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component is compared to pre-set operating conditions, chosen to encompass all normal ranges of output. If the sensor data falls outside these margins, the warning and deviation are recorded and a severity level is calculated. In addition to the individual component models, a vehicle-wide model predicts how urgently the vehicle needs maintenance. All of these results are analyzed by a simple heuristic algorithm, and a decision about the vehicle's health status is sent to the Fleet Management System. This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that determines maintenance needs. With it, human inspectors are no longer necessary for a fleet of autonomous vehicles; the Fleet Management System oversees inspections, and the system operator sets the parameters that decide when to send cars for maintenance. All the models used for sensor and component analysis are tailored specifically to the vehicle; the models and operating margins are created from empirical data collected during normal testing operations. The system is modular and can be used on a variety of vehicle platforms, including underwater and aerial autonomous vehicles.
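A minimal sketch of the margin-and-severity check described above; the component names, operating ranges, and severity scaling are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class OperatingRange:
    low: float
    high: float

# Hypothetical pre-set operating conditions for two components.
MARGINS = {
    "battery_voltage": OperatingRange(11.5, 14.8),
    "motor_temp_c": OperatingRange(-10.0, 95.0),
}

def check_reading(component: str, value: float):
    """Return (warning, deviation, severity) for one sensor reading,
    mirroring the margin-and-severity logic described above; returns
    None when the reading is within its normal range."""
    rng = MARGINS[component]
    if rng.low <= value <= rng.high:
        return None
    nearest = rng.low if value < rng.low else rng.high
    deviation = abs(value - nearest)
    severity = min(deviation / (rng.high - rng.low), 1.0)  # 0..1 scale
    return (f"{component} out of range", deviation, severity)

print(check_reading("motor_temp_c", 112.0))
print(check_reading("battery_voltage", 12.6))
```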
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Introduction: There are 350 to 400 pediatric heart transplants annually, according to the Pediatric Heart Transplant Database (Dipchand et al. 2014). Finding appropriate donors can be challenging, especially for the pediatric population. The current standard of care is a donor-to-recipient weight ratio, but this ratio is not necessarily directly indicative of the size of a heart, potentially leading to ill-fitting allografts (Tang et al. 2010). In this paper, a regression model is presented, developed by correlating total cardiac volume with non-invasive imaging parameters and patient characteristics, for use in determining ideal allograft fit with respect to total cardiac volume.
Methods: A virtual 3D library of clinically defined normal hearts was compiled from reconstructed CT and MR scans. Non-invasive imaging parameters and patient characteristics were collected and subjected to backward-elimination linear regression to define a model relating patient parameters to total cardiac volume. This regression model was then used to retrospectively accept or reject an ‘ideal’ donor graft from the library for three patients who had undergone heart transplantation. Oversized and undersized grafts were also virtually transplanted to qualitatively assess the specificity of virtual transplantation.
Results: The backward-elimination approach applied to the data for the 20 patients rejected BMI, BSA, sex, and both end-systolic and end-diastolic left ventricular measurements from echocardiography. Height and weight were included in the linear regression model, yielding an adjusted R-squared of 82.5%. Height and weight showed statistical significance, with p-values of 0.005 and 0.02, respectively. The final equation for the linear regression model was TCV = -169.320 + 2.874h + 3.578w ± 73 (h = height, w = weight, TCV = total cardiac volume).
Discussion: In the current regression model, height and weight significantly correlate with total cardiac volume. This regression model and the virtual normal-heart library make virtual transplantation and size-matching for transplantation possible. The study and regression model are, however, limited by a small sample size, and the lack of volumetric resolution in the MR datasets is another potentially limiting factor. Despite these limitations, the virtual library has the potential to be a critical tool for clinical care that will continue to grow as normal hearts are added.
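For illustration, the reported model can be evaluated directly; note that the abstract does not state the units of height and weight, so centimeters and kilograms are assumed here.

```python
# Evaluating the reported regression TCV = -169.320 + 2.874h + 3.578w +/- 73.
# Units are not given in the abstract; cm and kg are assumed for this example.
def predicted_tcv(height_cm, weight_kg):
    """Total cardiac volume estimate from the reported fitted model,
    returned with its +/- 73 band."""
    tcv = -169.320 + 2.874 * height_cm + 3.578 * weight_kg
    return tcv - 73, tcv, tcv + 73

low, mid, high = predicted_tcv(150.0, 45.0)
print(f"TCV estimate: {mid:.1f} (range {low:.1f} to {high:.1f})")
```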
Contributors: Sajadi, Susan (Co-author) / Lindquist, Jacob (Co-author) / Frakes, David (Thesis director) / Ryan, Justin (Committee member) / Harrington Bioengineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Glioblastoma multiforme (GBM) is a malignant, aggressive, and infiltrative cancer of the central nervous system, with a median survival of 14.6 months under standard care. Diagnosis of GBM is made using medical imaging such as magnetic resonance imaging (MRI) or computed tomography (CT). Treatment is informed by medical images and includes chemotherapy, radiation therapy, and surgical removal if the tumor is surgically accessible. Treatment seldom results in a significant increase in longevity, partly due to the lack of precise information regarding tumor size and location. This lack of information arises from the physical limitations of MR and CT imaging coupled with the diffusive nature of glioblastoma tumors: GBM tumor cells can migrate far beyond the visible boundaries of the tumor and will produce a recurring tumor if not killed or removed. Since medical images are the only readily available information about the tumor, we aim to improve mathematical models of tumor growth to better estimate the missing information. In particular, we investigate the effect of random variation in tumor cell behavior (anisotropy) using stochastic parameterizations of an established proliferation-diffusion model of tumor growth. To evaluate the performance of our mathematical model, we use MR images from an animal model consisting of murine GL261 tumors implanted in immunocompetent mice, which provides consistency in tumor initiation and location, immune response, genetic variation, and treatment. Compared to non-stochastic simulations, stochastic simulations showed improved volume accuracy when proliferation variability was high, but diffusion variability only marginally affected tumor volume estimates. Neither proliferation nor diffusion variability significantly affected the spatial distribution accuracy of the simulations. While certain stochastic parameterizations improved volume accuracy, they failed to significantly improve simulation accuracy overall. Both the non-stochastic and stochastic simulations failed to achieve over 75% spatial distribution accuracy, suggesting that the underlying structure of the model fails to capture one or more biological processes that affect tumor growth. Two biological features that are candidates for further investigation are angiogenesis and the anisotropy resulting from differences between white and gray matter. Time-dependent proliferation and diffusion terms could be introduced to model angiogenesis, and diffusion tensor imaging (DTI) could be used to differentiate between white and gray matter, which might allow for improved estimates of brain anisotropy.
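A minimal sketch of a 1-D proliferation-diffusion (Fisher-KPP) step with a stochastically drawn proliferation rate, as a toy analogue of the parameterizations described above; the grid, rates, and variability below are illustrative values, not the fitted parameters from the study.

```python
import numpy as np

# Explicit Euler step of u_t = D u_xx + rho u (1 - u), with rho redrawn each
# step to mimic a stochastic proliferation parameterization.
N, dx, dt = 200, 0.1, 0.01       # grid points, spacing (cm), step (days) -- illustrative
D_MEAN, RHO_MEAN, RHO_SD = 0.01, 0.2, 0.05

rng = np.random.default_rng(0)
u = np.zeros(N)
u[N // 2] = 0.5                   # initial tumor seed (normalized cell density)

for _ in range(5000):
    rho = rng.normal(RHO_MEAN, RHO_SD)                       # stochastic proliferation
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    u = u + dt * (D_MEAN * lap + rho * u * (1 - u))
    u = np.clip(u, 0.0, 1.0)

print("simulated tumor 'volume':", u.sum() * dx)
```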
Contributors: Anderies, Barrett James (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Stepien, Tracy (Committee member) / Harrington Bioengineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05