Matching Items (35)

Item 147863
Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one's palm. However, if an application does not require extensive computation power or accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers big. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells for a desired outcome, they must each run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine. Such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
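As a rough illustration only: the short Python sketch below shows one local rule a particle of this kind could follow, moving toward whichever neighboring grid cell senses the strongest target signal. The gradient-style sensing model and every name here are illustrative assumptions, not the algorithm developed in the thesis.

def step(position, sense, neighbors):
    """Move one grid step toward the strongest sensed target signal.

    position  -- this Syncell's current coordinate
    sense     -- maps a coordinate to a local signal strength (assumed model)
    neighbors -- coordinates this Syncell may move into
    """
    best = max(neighbors, key=sense)
    # Move only if a neighbor senses a strictly stronger signal;
    # otherwise stay put, so the swarm settles around the target.
    return best if sense(best) > sense(position) else position

# Example with a Manhattan-distance signal around a target at (0, 0):
sense = lambda p: -(abs(p[0]) + abs(p[1]))
print(step((3, 2), sense, [(2, 2), (4, 2), (3, 1), (3, 3)]))  # -> (2, 2)

Because each particle uses only its own position and its neighbors' readings, the rule stays fully local, matching the no-control-center constraint described above.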

Contributors: Martin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Item 147971
Description

This survey collects participants' beliefs about privacy security, their general digital knowledge, their demographics, and their willingness-to-pay points for deleting information from their social media, in order to see how an information treatment affects those payment points. The information treatment is meant to make half of the participants think about the deeper ramifications of the information they reveal. The initial hypothesis is that this treatment will make people willing to pay more to remove their information from the web, but the results find a surprising negative correlation with the treatment.

Contributors: Deitrick, Noah Sumner (Author) / Silverman, Daniel (Thesis director) / Kuminoff, Nicolai (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Item 148207
Description

Optimal foraging theory provides a suite of tools that model the best way that an animal will structure its searching and processing decisions in uncertain environments. It has been successful in characterizing real patterns of animal decision making, thereby providing insights into why animals behave the way they do. However, it does not speak to how animals make decisions that tend to be adaptive. Using simulation studies, prior work has shown empirically that a simple decision-making heuristic tends to produce prey-choice behaviors that, on average, match the predicted behaviors of optimal foraging theory. That heuristic chooses to spend time processing an encountered prey item if that prey item's marginal rate of caloric gain (in calories per unit of processing time) is greater than the forager's current long-term rate of accumulated caloric gain (in calories per unit of total searching and processing time). Although this heuristic may seem intuitive, a rigorous mathematical argument for why it tends to produce the theorized optimal foraging theory behavior has not been developed. In this thesis, an analytical argument is given for why this simple decision-making heuristic is expected to realize the optimal performance predicted by optimal foraging theory. This theoretical guarantee not only provides support for why such a heuristic might be favored by natural selection, but it also provides support for why such a heuristic might be a reliable tool for decision-making in autonomous engineered agents moving through theatres of uncertain rewards. Ultimately, this simple decision-making heuristic may provide a recipe for reinforcement learning in small robots with limited computational capabilities.
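The heuristic is compact enough to state in a few lines of code. The Python sketch below is a minimal rendering under assumed inputs (a stream of prey encounters given as calorie/handling-time pairs and a fixed search time per encounter); the names are illustrative, not taken from the thesis.

def forage(encounters, search_time=1.0):
    """Apply the prey-choice heuristic over a stream of encounters."""
    total_calories, total_time = 0.0, 0.0
    for calories, handling_time in encounters:
        total_time += search_time  # time spent finding this item
        long_term_rate = total_calories / total_time
        # Process the item only if its marginal rate of caloric gain
        # beats the current long-term rate of accumulated caloric gain.
        if calories / handling_time > long_term_rate:
            total_calories += calories
            total_time += handling_time
    return total_calories / total_time  # realized long-term rate

# Example: alternating rich (10 cal, 2 s) and lean (1 cal, 5 s) prey.
print(forage([(10, 2), (1, 5)] * 50))

On this example the forager quickly learns to skip the lean items, which is the optimal-diet behavior the thesis's analytical argument addresses.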

Contributors: Cothren, Liliaokeawawa Kiyoko (Author) / Pavlic, Theodore (Thesis director) / Brewer, Naala (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Item 135890
Description

This paper explores the history of sovereign debt default in developing economies and attempts to highlight the mistakes and accomplishments toward achieving debt sustainability. In the past century, developing economies have received considerable investment due to higher returns and a degree of disregard for the risks accompanying these investments. As former Citibank chairman Walter Wriston articulated, "Countries don't go bust" (This Time is Different, 51). Still, unexpected negative externalities have shattered this idea as the majority of developing economies follow a cyclical pattern of default. As coined by Reinhart and Rogoff, sovereign governments that fall into this continuous cycle have become known as serial defaulters. Most developed markets have not defaulted since World War II, thus escaping this persistent trap. Still, there have been developing economies that have been able to transition out of serial defaulting. These economies are able to leverage debt to compound growth without incurring the protracted consequences of a default. Although the cases are few, we argue that developing markets such as Chile, Mexico, Russia, and Uruguay have been able to escape this vicious cycle. Thus, our research indicates that collaborative debt restructurings coupled with long-term economic policies are imperative to transitioning out of debt intolerance and into a sustainable debt position. Successful economies are able to leverage debt to create strong foundational growth rather than gambling with debt in the hopes of achieving rapid catch-up growth.
Contributors: Pitt, Ryan (Co-author) / Martinez, Nick (Co-author) / Choueiri, Robert (Co-author) / Goegan, Brian (Thesis director) / Silverman, Daniel (Committee member) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Politics and Global Studies (Contributor) / W. P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Item 135739
Description

Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (to be referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes; (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains; and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds on the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election, in which a particle system elects a single leader, since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
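For context, the move filter at the core of a Markov chain approach of this kind can be written in one line. The Python sketch below shows a simplified Metropolis-style acceptance rule in which a proposed move's probability scales with lambda raised to the change in the moving particle's neighbor count; the connectivity checks and the exact proposal rule used in the actual algorithm are omitted, so treat this rendering as an assumption for illustration.

import random

def accept_move(old_neighbors, new_neighbors, lam):
    """Metropolis-style filter: accept a proposed move with probability
    min(1, lam ** (new_neighbors - old_neighbors)). With lam > 1, moves
    that gain neighbors (tighter packing) are favored; with lam < 1 the
    system tends to expand instead."""
    return random.random() < min(1.0, lam ** (new_neighbors - old_neighbors))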
Contributors: Daymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Item 136691
Description

Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems demonstrating that it is surprisingly effective.
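The redundancy test underlying this post-optimization can be sketched briefly. In the Python below, a permutation is essential only if it covers some t-subsequence that no other permutation in the set covers; the t = 3 setting and the helper names are illustrative assumptions, not the thesis's implementation.

from itertools import combinations

def covered(perm, t=3):
    """All length-t subsequences occurring in order within perm."""
    return set(combinations(perm, t))

def is_essential(perm, others, t=3):
    """True if perm covers some t-subsequence that no other permutation
    covers; a non-essential permutation can be removed without
    sacrificing total coverage."""
    rest = set()
    for p in others:
        rest |= covered(p, t)
    return not covered(perm, t) <= rest

# Example: the third permutation covers (0, 2, 1), which the others miss.
perms = [(0, 1, 2, 3), (3, 2, 1, 0), (0, 2, 1, 3)]
print(is_essential(perms[2], perms[:2]))  # -> True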
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Item 136723
Description

This paper explores how marginalist economics defines and inevitably constrains Victorian sensation fiction's content and composition. I argue that economic intuition implies that sensationalist heroes and antagonists, writers and readers all pursued a fundamental, "rational" aim: the attainment of pleasure. So although "sensationalism" took on connotations of moral impropriety in the Victorian age, sensation fiction primarily involves experiences of pain on the page that excite the reader's pleasure. As such, sensationalism as a whole can be seen as a conformist product, one which mirrors the effects of all commodities on the market, rather than as a rebellious one. Indeed, contrary to modern and contemporary critics' assumptions, sensation fiction may not be as scandalous as it seems.
Contributors: Fischer, Brett Andrew (Author) / Bivona, Daniel (Thesis director) / Looser, Devoney (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / School of Politics and Global Studies (Contributor) / Department of English (Contributor)
Created: 2014-12
Item 136966
Description

The purpose of this thesis is to examine the current atmosphere of genetic patent law and use economic theory to construct models which describe the consequences of the legal code. I intend to analyze four specific cases: Diamond v. Chakrabarty; Association for Molecular Pathology v. Myriad Genetics; Alzheimer's Institute of America v. Jackson Laboratory; and the harm caused by PGx Health's monopoly over the LQTS gene.
Contributors: Volz, Caleb Richard (Author) / DeSerpa, Allan (Thesis director) / Silverman, Daniel (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor) / Economics Program in CLAS (Contributor)
Created: 2014-05
Item 137020
Description

In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, utilizing iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, converting the problem to that of trace minimization, allowing for the use of convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on a basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if instead the problem is considered under a power constraint, a unimodular signal can be constructed starting from such an eigenvector that will have a greater return.
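For the first of these algorithms, a bare-bones version fits in a dozen lines. The NumPy sketch below alternates between imposing the measured Fourier magnitudes in the frequency domain and a nonnegativity constraint in the signal domain; the choice of signal-domain constraint is an illustrative assumption, not necessarily the regularization examined in the thesis.

import numpy as np

def alternating_projections(magnitudes, iters=500, seed=0):
    """Recover a signal from its Fourier magnitudes by alternating
    projections (Gerchberg-Saxton style)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(magnitudes.shape[0])  # random initial estimate
    for _ in range(iters):
        X = np.fft.fft(x)
        # Frequency domain: keep current phases, impose measured magnitudes.
        X = magnitudes * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft(X))
        # Signal domain: impose the (assumed) nonnegativity constraint.
        x[x < 0] = 0
    return x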
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Item 137021
Description

Economists, political philosophers, and others have often characterized social preferences regarding inequality by imagining a hypothetical choice of distributions behind "a veil of ignorance". Recent behavioral economics work has shown that subjects care about equality of outcomes, and are willing to sacrifice, in experimental contexts, some amount of personal gain in order to achieve greater equality. We review some of this literature and then conduct an experiment of our own, comparing subjects' choices in two risky situations: one a purely individualized lottery for themselves, and the other a choice among possible distributions to members of a randomly selected group. We find that choosing in the group situation makes subjects significantly more risk averse than when choosing an individual lottery. This supports the hypothesis that an additional preference for equality exists alongside ordinary risk aversion, and that in a hypothetical "veil of ignorance" scenario, such preferences may make subjects significantly more averse to unequal distributions of rewards than can be explained by risk aversion alone.
Contributors: Theisen, Alexander Scott (Co-author) / McMullin, Caitlin (Co-author) / Li, Marilyn (Co-author) / DeSerpa, Allan (Thesis director) / Schlee, Edward (Committee member) / Baldwin, Marjorie (Committee member) / Barrett, The Honors College (Contributor) / Department of Economics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / School of Historical, Philosophical and Religious Studies (Contributor)
Created: 2014-05