Matching Items (15)

Description
This study estimates the capitalization effect of golf courses in Maricopa County using the hedonic pricing method. It draws upon a dataset of 574,989 residential transactions from 2000 to 2006 to examine how the aesthetic, non-golf benefits of golf courses capitalize across a gradient of proximity measures. The measures for amenity value extend beyond home adjacency and include considerations for homes within a range of discrete walkability buffers of golf courses. The models also distinguish between public and private golf courses as a proxy for the level of golf course access perceived by non-golfers. Unobserved spatial characteristics of the neighborhoods around golf courses are controlled for by increasing the extent of spatial fixed effects from city, to census tract, and finally to 2,000-meter golf course ‘neighborhoods.’ The estimation results support two primary conclusions. First, golf course proximity is found to be highly valued for adjacent homes and homes up to 50 meters away from a course, still evident but minimal between 50 and 150 meters, and insignificant at all other distance ranges. Second, private golf courses do not command a higher proximity premium compared to public courses, with the exception of homes within 25 to 50 meters of a course, indicating that the non-golf benefits of courses capitalize similarly regardless of course type. The results of this study motivate further investigation into golf course features that signal access or add value to homes in the range of capitalization, particularly for near-adjacent homes between 50 and 150 meters previously thought not to capitalize.
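The discrete walkability buffers described above can be sketched as distance-band indicator variables for a hedonic regression. The cutoff distances below are illustrative stand-ins, not the study's exact bins:

```python
def proximity_band(distance_m, cutoffs=(0, 25, 50, 100, 150)):
    """Assign a home to a discrete proximity band by its distance
    (meters) to the nearest golf course. Band 0 is adjacency; homes
    beyond the last cutoff fall in the omitted farthest band."""
    for band, cutoff in enumerate(cutoffs):
        if distance_m <= cutoff:
            return band
    return len(cutoffs)

def band_dummies(distance_m, cutoffs=(0, 25, 50, 100, 150)):
    """Indicator variables for each buffer, with the farthest band
    serving as the omitted reference category in the regression."""
    band = proximity_band(distance_m, cutoffs)
    return [1 if band == i else 0 for i in range(len(cutoffs))]

assert proximity_band(0) == 0          # adjacent home
assert proximity_band(40) == 2         # within the 25-50 m buffer
assert band_dummies(40) == [0, 0, 1, 0, 0]
```

Interacting these dummies with a public/private course indicator would then allow the premia for the two course types to be compared band by band.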
Contributors: Joiner, Emily (Author) / Abbott, Joshua (Thesis director) / Smith, Kerry (Committee member) / Economics Program in CLAS (Contributor) / School of Sustainability (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the fewest number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems demonstrating that it is surprisingly effective.
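The coverage property at the heart of this problem can be checked with a short sketch (hypothetical helpers, not the thesis's implementation): a set of permutations covers a sequence if the sequence's symbols appear in order inside at least one permutation.

```python
from itertools import permutations as perms

def covers(perm, seq):
    """True if seq appears as a (not necessarily contiguous)
    subsequence of perm."""
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in zip(seq, seq[1:]))

def uncovered(perm_set, n, t):
    """All length-t sequences of distinct symbols from range(n) that
    no permutation in perm_set covers."""
    return [s for s in perms(range(n), t)
            if not any(covers(p, s) for p in perm_set)]

# A permutation and its reverse together cover every ordered pair
# (t = 2) over three symbols, so this is a sequence covering array.
sca = [(0, 1, 2), (2, 1, 0)]
assert uncovered(sca, 3, 2) == []
```

A post-optimization pass in the spirit of the thesis would repeatedly apply local changes that keep `uncovered` empty while concentrating coverage away from one permutation, until that permutation can be deleted.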
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
The purpose of this thesis is to examine the events surrounding the creation of the oboe and its rapid spread throughout Europe during the mid to late seventeenth century. The first section describes similar instruments that existed for thousands of years before the invention of the oboe. The following sections examine reasons and methods for the oboe's invention, as well as possible causes of its migration from its starting place in France to other European countries, as well as many other places around the world. I conclude that the oboe was invented to suit the needs of composers in the court of Louis XIV, and that it was brought to other countries by French performers who left France for many reasons, including to escape from the authority of composer Jean-Baptiste Lully and in some cases to promote French culture in other countries.
Contributors: Cook, Mary Katherine (Author) / Schuring, Martin (Thesis director) / Micklich, Albie (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Music (Contributor)
Created: 2015-05
Description
According to the Tax Policy Center, a joint project of the Brookings Institution and Urban Institute, the Earned Income Tax Credit (EITC) will provide 26 million households with 60 billion dollars of reduced taxes and refunds in 2015, resources that serve to lift millions of families above the federal poverty line. Responding to the popularity of EITC programs and recent discussion of its expansion for childless adults, I select three comparative case studies of state-level EITC reform from 2005 to 2013. Each state represents a different kind of policy reform: the creation of a supplemental credit in Connecticut, credit reduction in New Jersey, and finally credit expansion for childless adults in Maryland. For each case study, I use Current Population Survey panel data from the March Supplement to complete a differences-in-differences (DD) analysis of EITC policy changes. Specifically, I analyze effects of policy reform on total earned income, employment and usual hours worked. For comparison groups, I construct unique counterfactual populations of northeastern U.S. states, using people of color with less than a college degree as my treatment group for their increased sensitivity to EITC policy reform. I find no statistically significant effects of policy creation in Connecticut, significant decreases in employment and hours worked in New Jersey, and finally, significant increases in earnings and hours worked in Maryland. My work supports the findings of other empirical work, suggesting that awareness of new supplemental EITC programs is critical to their effectiveness while demonstrating that these types of programs can affect the labor supply and outcomes of eligible groups.
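The differences-in-differences design reduces, in its simplest two-group, two-period form, to comparing changes over time between a treated and a comparison group. A minimal sketch with toy numbers (not the CPS data):

```python
def did_estimate(data):
    """Differences-in-differences from (treated, post, outcome) rows:
    (treated change over time) minus (control change over time)."""
    def mean(xs):
        return sum(xs) / len(xs)
    groups = {}
    for treated, post, y in data:
        groups.setdefault((treated, post), []).append(y)
    return ((mean(groups[(1, 1)]) - mean(groups[(1, 0)]))
            - (mean(groups[(0, 1)]) - mean(groups[(0, 0)])))

# toy panel: treated group rises 10 -> 15, controls rise 8 -> 10,
# so the estimated policy effect is (15-10) - (10-8) = 3
toy = [(1, 0, 10.0), (1, 1, 15.0), (0, 0, 8.0), (0, 1, 10.0)]
assert did_estimate(toy) == 3.0
```

The control group's change stands in for what would have happened to the treated group absent the reform, which is why the choice of counterfactual population matters so much in the study above.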
Contributors: Richard, Katherine Rose (Author) / Dillon, Eleanor Wiske (Thesis director) / Silverman, Daniel (Committee member) / Herbst, Chris (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor)
Created: 2015-05
Description
Did the amount of media attention to the H1N1 flu, or the information that the Centers for Disease Control (CDC) disseminated about the H1N1 flu, influence individuals' decisions to avoid public locations during the 2009-2010 H1N1 influenza pandemic? I investigate this question using weekly confirmed H1N1 cases from the CDC, the American Time Use Survey (ATUS), and the Google Trends weekly search volume index for certain key terms. I find that individuals did exhibit some avoidance behaviour during the flu pandemic in response to the CDC data, but not to the measures of media attention. However, the magnitudes of these adjustments are small in comparison to other measures of avoidance behaviour, such as reduced time in public during extreme weather events.
Contributors: Gunn, Quentin Lee (Author) / Kuminoff, Nicolai (Thesis director) / Abbott, Joshua (Committee member) / Fenichel, Eli (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor)
Created: 2013-12
Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, utilizing iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, converting the problem to that of trace minimization, allowing for the use of convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if instead the problem is considered under a power constraint, a unimodular signal can be constructed starting from such an eigenvector that will have a greater return.
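The alternating-projections idea can be sketched in a Gerchberg-Saxton style loop, here assuming a real, nonnegative signal and measured Fourier magnitudes (the thesis's exact constraints and measurement model may differ):

```python
import numpy as np

def alternating_projections(mag, support_len, iters=500, seed=0):
    """Alternate between two constraint sets: in the Fourier domain,
    keep the current phase but enforce the measured magnitudes; in the
    signal domain, enforce a real, nonnegative signal."""
    rng = np.random.default_rng(seed)
    x = rng.random(support_len)
    for _ in range(iters):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))  # magnitude projection
        x = np.maximum(np.fft.ifft(X).real, 0.0)  # signal projection
    return x

true = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 0.0, 0.0, 0.0])
mag = np.abs(np.fft.fft(true))
rec = alternating_projections(mag, len(true))
# recovery is only unique up to trivial ambiguities (shift, flip),
# so we check the residual in the magnitude measurements instead
err = np.linalg.norm(np.abs(np.fft.fft(rec)) - mag) / np.linalg.norm(mag)
```

Each projection step enforces one constraint set exactly, so the iterates implicitly regularize the ill-posed magnitude-only inversion without ever forming an explicit penalty term.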
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
In a season that spans 162 games over the course of six months, MLB teams that travel more face additional fatigue and jet lag, which could negatively impact them on the field. To explore this issue, I tested the significance of different variables by creating four models, which compared travel with a team's ability to win games as well as its ability to hit home runs. Based on these models, it appears as though changing time zones does not affect the outcome of games. However, these results did indicate that visiting teams with a greater time zone advantage over their opponent are less likely to hit a home run in a game.
Contributors: Aronson, Sean Matthew (Author) / MacFie, Brian (Thesis director) / Eaton, John (Committee member) / Barrett, The Honors College (Contributor) / Department of Economics (Contributor) / WPC Graduate Programs (Contributor) / Department of Finance (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / W. P. Carey School of Business (Contributor)
Created: 2014-05
Description
Many forms of programmable matter have been proposed for various tasks. We use an abstract model of self-organizing particle systems for programmable matter which could be used for a variety of applications, including smart paint and coating materials for engineering or programmable cells for medical uses. Previous research using this model has focused on shape formation and other spatial configuration problems, including line formation, compression, and coating. In this work we study foundational computational tasks that exceed the capabilities of the individual constant memory particles described by the model. These tasks represent new ways to use these self-organizing systems, which, in conjunction with previous shape and configuration work, make the systems useful for a wider variety of tasks. We present an implementation of a counter using a line of particles, which makes it possible for the line of particles to count to and store values much larger than their individual capacities. We then present an algorithm that takes a matrix and a vector as input and then sets up and uses a rectangular block of particles to compute the matrix-vector multiplication. This setup also utilizes the counter implementation to store the resulting vector from the matrix-vector multiplication. Operations such as counting and matrix multiplication can leverage the distributed and dynamic nature of the self-organizing system to be more efficient and adaptable than on traditional linear computing hardware. Such computational tools also give the systems more power to make complex decisions when adapting to new situations or to analyze the data they collect, reducing reliance on a central controller for setup and output processing. Finally, we demonstrate an application of similar types of computations with self-organizing systems to image processing, with an implementation of an image edge detection algorithm.
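The counter idea, in which each constant-memory particle stores a single bit while the line as a whole represents an arbitrarily large value, can be mimicked sequentially. This is a centralized toy, not the distributed, asynchronous algorithm itself:

```python
class Particle:
    """A constant-memory cell holding one bit of the line's counter."""
    def __init__(self, bit=0):
        self.bit = bit

def increment(line):
    """Add one to the value stored across the line, propagating the
    carry from particle to particle; the line grows on overflow."""
    carry = 1
    for p in line:
        p.bit, carry = (p.bit + carry) % 2, (p.bit + carry) // 2
        if carry == 0:
            return
    line.append(Particle(1))

def value(line):
    """Read the counter: particle i holds bit i, least significant first."""
    return sum(p.bit << i for i, p in enumerate(line))

line = []
for _ in range(13):
    increment(line)
assert value(line) == 13 and len(line) == 4  # 13 = 0b1101
```

In the self-organizing setting the carry would travel by local message passing between neighboring particles rather than by a central loop, but the storage principle is the same: n particles jointly hold values up to 2^n - 1 that no single particle could store.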
Contributors: Porter, Alexandra Marie (Author) / Richa, Andrea (Thesis director) / Xue, Guoliang (Committee member) / School of Music (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (to be referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds for the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election, in which a particle system elects a single leader, since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
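The role of the bias parameter lambda can be sketched with the Metropolis-style acceptance rule such compression chains typically use. This is a schematic of the filter alone, not the thesis's full algorithm, which also enforces connectivity and local structural conditions:

```python
import random

def accept_move(neighbors_before, neighbors_after, lam, rng=random):
    """Metropolis filter: with lam > 1 the chain favors moves that gain
    neighbors (compression); a move losing k neighbors is accepted only
    with probability lam**(-k). With lam < 1 the bias reverses."""
    prob = min(1.0, lam ** (neighbors_after - neighbors_before))
    return rng.random() < prob

rng = random.Random(0)
# gaining a neighbor is always accepted when lam > 1
assert all(accept_move(2, 3, 4.0, rng) for _ in range(100))
# losing one neighbor is accepted with probability 1/4
freq = sum(accept_move(3, 2, 4.0, rng) for _ in range(10000)) / 10000
assert 0.22 < freq < 0.28
```

Because the acceptance probability depends only on the local change in neighbor count, each particle can run the rule with constant memory and purely local information, which is exactly what the model demands.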
Contributors: Daymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
We examine the bias resulting from temporal and spatial aggregation of weather variables in environmental economics. In order to include temporally and/or spatially continuous environmental variables (such as temperature and precipitation), many studies discretize them. The finer the scale of discretization chosen, the more difficult it can be to obtain a complete and reliable data set. Studies performed at very fine scales often find tighter and more dramatic relationships between variables such as temperature and income per capita. We examine this question by repeating the same empirical study at various temporal and spatial scales and comparing the resulting parameter estimates.
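The aggregation problem can be illustrated with a deterministic toy (illustrative numbers, not the study's data): when the outcome responds to extreme days rather than to the mean, a regression on aggregated annual means fits poorly, while the daily-scale variable recovers the true effect exactly.

```python
def ols(x, y):
    """Simple OLS of y on x; returns (slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sxx, sxy * sxy / (sxx * syy)

# Nine "years", each a deterministic ramp of daily temperatures with a
# given mean and spread. The outcome depends only on days above 30.
mean_temp, hot_days, outcome = [], [], []
for m in (16, 20, 24):
    for s in (4, 8, 12):
        temps = [m + s * (2 * d / 364 - 1) for d in range(365)]
        hot = sum(t > 30 for t in temps)
        mean_temp.append(m)
        hot_days.append(hot)
        outcome.append(2.0 * hot)  # true effect: 2 units per hot day

slope_fine, r2_fine = ols(hot_days, outcome)  # fine-scale regressor
slope_agg, r2_agg = ols(mean_temp, outcome)   # aggregated regressor
```

Here `slope_fine` is exactly 2 with a perfect fit, while the regression on annual mean temperature explains well under half the variation: years with the same mean but different spreads have very different exposure to extremes, and aggregation discards that information.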
Created: 2016-05