
Description

Over the years, advances in research have continued to shrink computers from the size of a room to a small device that fits in one's palm. However, if an application does not require extensive computational power or accessories such as a screen, the corresponding machine can be microscopic, only a few nanometers across. Researchers at MIT have successfully created Syncells, micro-scale robots with limited computation power and memory that communicate locally to achieve complex collective tasks. To control these Syncells for a desired outcome, each must run a simple distributed algorithm. Because they are capable only of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine: such a system could serve as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target within the environment.
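The abstract does not spell out the distributed algorithm, but the core idea of particles acting in rounds on purely local information can be illustrated with a deliberately simplified sketch. The greedy grid rule below, and the assumption that each particle can sense the direction of the target, are illustrative inventions, not the thesis's algorithm:

```python
def step_toward_target(pos, target):
    """One local move: step one unit along the axis with the greatest
    remaining distance to the target (a toy local rule that ignores
    collisions and inter-particle communication)."""
    x, y = pos
    tx, ty = target
    if abs(tx - x) >= abs(ty - y) and tx != x:
        return (x + (1 if tx > x else -1), y)
    if ty != y:
        return (x, y + (1 if ty > y else -1))
    return pos

def simulate(particles, target, max_steps=1000):
    """Apply the local rule in rounds until all particles reach the target."""
    for _ in range(max_steps):
        particles = [step_toward_target(p, target) for p in particles]
        if all(p == target for p in particles):
            break
    return particles
```

Running `simulate([(0, 0), (5, 3)], (2, 2))` moves both particles onto the target in a handful of rounds; a faithful version would use only neighbor-to-neighbor messages rather than a globally known target position.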

Contributors: Martin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Optimal foraging theory provides a suite of tools that model the best way for an animal to structure its searching and processing decisions in uncertain environments. It has been successful at characterizing real patterns of animal decision making, thereby providing insights into why animals behave the way they do. However, it does not speak to how animals make decisions that tend to be adaptive. Using simulation studies, prior work has shown empirically that a simple decision-making heuristic tends to produce prey-choice behaviors that, on average, match the predicted behaviors of optimal foraging theory. That heuristic chooses to spend time processing an encountered prey item if that prey item's marginal rate of caloric gain (in calories per unit of processing time) is greater than the forager's current long-term rate of accumulated caloric gain (in calories per unit of total searching and processing time). Although this heuristic may seem intuitive, a rigorous mathematical argument for why it tends to produce the theorized optimal foraging theory behavior has not been developed. In this thesis, an analytical argument is given for why this simple decision-making heuristic is expected to realize the optimal performance predicted by optimal foraging theory. This theoretical guarantee not only provides support for why such a heuristic might be favored by natural selection, but also for why it might be a reliable tool for decision-making in autonomous engineered agents moving through theatres of uncertain rewards. Ultimately, this simple decision-making heuristic may provide a recipe for reinforcement learning in small robots with little computational capability.
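The decision rule described above is concrete enough to sketch directly; the function name and its bookkeeping arguments below are my own, since the thesis analyzes the rule mathematically rather than as code:

```python
def should_process(calories, handling_time, total_gain, total_time):
    """Process an encountered prey item iff its marginal rate of caloric
    gain (calories per unit of processing time) exceeds the forager's
    current long-term average rate (accumulated calories per unit of
    total searching-plus-processing time)."""
    marginal_rate = calories / handling_time
    long_term_rate = total_gain / total_time if total_time > 0 else 0.0
    return marginal_rate > long_term_rate
```

For example, a forager averaging 3 calories per unit time would process an item yielding 5 calories per unit of handling time, but skip one yielding only 2.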

Contributors: Cothren, Liliaokeawawa Kiyoko (Author) / Pavlic, Theodore (Thesis director) / Brewer, Naala (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

With the rise of fast fashion and its now apparent effects on climate change, there is an evident need for change in terms of how we as individuals use our clothing and footwear. Our team has created Ray Fashion Inc., a sustainable footwear company that focuses on implementing the circular economy to reduce the amount of waste generated in shoe creation. We have designed a sandal that accommodates the rapid consumption element of fast fashion with a business model that promotes sustainability through a buy-back method to upcycle and retain our materials.

Contributors: Liao, Yuxin (Co-author) / Yang, Andrea (Co-author) / Suresh Kumar, Roshni (Co-author) / Byrne, Jared (Thesis director) / Marseille, Alicia (Committee member) / Jordan, Amanda (Committee member) / Department of Finance (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
In this paper, I analyze representations of nature in popular film, using the feminist/deconstructionist concept of a dualism to structure my critique. Using Val Plumwood's analysis of the logical structure of dualism and the 5 'features of a dualism' that she identifies, I critique 5 popular movies – Star Wars, Lord of the Rings, Brave, Grizzly Man, and Planet Earth – by locating within each of them one of the 5 features and explaining how the movie functions to reinforce the Nature/Culture dualism. By showing how the Nature/Culture dualism shapes and is shaped by popular cinema, I show how "Nature" is a social construct, created as part of this very dualism, and reified through popular culture. I conclude by introducing a number of 'subversive' pieces of visual art that undermine and actively deconstruct the Nature/Culture dualism and show the viewer a more honest presentation of the non-human world.
Contributors: Barton, Christopher Joseph (Author) / Broglio, Ron (Thesis director) / Minteer, Ben (Committee member) / Barrett, The Honors College (Contributor) / School of Sustainability (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Geographical Sciences and Urban Planning (Contributor)
Created: 2015-05
Description
Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (to be referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds for the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election--in which a particle system elects a single leader--since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
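The role of the bias parameter lambda can be sketched with a Metropolis-style acceptance step; the exact move set and acceptance probability used in the thesis may differ, so treat this as an illustrative assumption rather than the thesis's algorithm:

```python
import random

def accept_move(lmbda, neighbors_before, neighbors_after, rng=random):
    """Accept a candidate particle move with probability
    min(1, lmbda ** (neighbors_after - neighbors_before)).
    With lmbda > 1, moves that gain neighbors are always accepted and
    moves that lose neighbors are accepted only rarely, biasing the
    chain toward compressed configurations; lmbda < 1 biases toward
    expansion."""
    delta = neighbors_after - neighbors_before
    return rng.random() < min(1.0, lmbda ** delta)
```

A move that takes a particle from 1 neighbor to 3 is accepted with probability 1 when lambda = 4, whereas a move from 6 neighbors down to 1 is accepted with probability 4^-5, roughly 0.001.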
Contributors: Daymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems demonstrating that it is surprisingly effective.
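For concreteness, here is a small checker for the covering property that the post-optimization must preserve (the function names are mine; the thesis's local-change machinery is not reproduced here):

```python
from itertools import combinations, permutations

def covered(perm, t):
    """All length-t subsequences of one permutation, in order.
    combinations() of the permutation's entries preserves their order,
    so each result is an order-respecting subsequence."""
    return set(combinations(perm, t))

def is_covering(perms, t):
    """True iff every ordered t-sequence of distinct symbols appears as
    a subsequence of at least one permutation in the set."""
    symbols = set(perms[0])
    needed = set(permutations(symbols, t))
    got = set()
    for p in perms:
        got |= covered(p, t)
    return needed <= got
```

For instance, the pair (0,1,2) and (2,1,0) together cover all six ordered 2-sequences over three symbols, which is the classic minimal example; removing either permutation breaks coverage.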
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, utilizing iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, converting the problem to that of trace minimization, allowing for the use of convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if instead the problem is considered under a power constraint, a unimodular signal can be constructed starting from such an eigenvector that will have a greater return.
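A bare-bones version of the alternating-projections idea (in the Gerchberg–Saxton style) can be sketched as follows; the particular constraint set (known time-domain support), dimensions, and iteration count here are illustrative choices, not those of the thesis:

```python
import numpy as np

def alternating_projections(fourier_mag, support, n_iter=200, seed=0):
    """Attempt to recover a real signal from its Fourier magnitudes by
    alternately enforcing the measured magnitudes (keeping the current
    phase estimate) and a known time-domain support constraint."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(fourier_mag.shape) * support  # random start
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = fourier_mag * np.exp(1j * np.angle(X))  # magnitude projection
        x = np.fft.ifft(X).real * support           # support projection
    return x
```

Note that even on noiseless data this iteration can stagnate and only recovers the signal up to trivial ambiguities (shift, reflection, global sign), which is part of why robustness comparisons against convex phase-lifting methods are of interest.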
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Plastics continue to benefit society in innumerable ways, even though recent public focus on plastics has centered mostly on human health and environmental concerns, including their endocrine-disrupting properties and the long-term pollution they represent. The benefits of plastics are particularly apparent in medicine and public health. Plastics are versatile, cost-effective, require less energy to produce than alternative materials like metal or glass, and can be manufactured to have many different properties. Due to these characteristics, polymers are used in diverse health applications like disposable syringes and intravenous bags, sterile packaging for medical instruments, as well as in joint replacements, tissue engineering, etc. However, not all current uses of plastics are prudent and sustainable, as illustrated by the widespread, unwanted human exposure to endocrine-disrupting bisphenol A (BPA) and di-(2-ethylhexyl) phthalate (DEHP), problems arising from the large quantities of plastic being disposed of, and depletion of non-renewable petroleum resources as a result of the ever-increasing mass production of plastic consumer articles. Using the health-care sector as an example, this review concentrates on the benefits and downsides of plastics and identifies opportunities to change the composition and disposal practices of these invaluable polymers for a more sustainable future consumption. It highlights ongoing efforts to phase out DEHP and BPA in the health-care and food industry and discusses biodegradable options for plastic packaging, opportunities for reducing plastic medical waste, and recycling in medical facilities in the quest to reap a maximum of benefits from polymers without compromising human health or the environment in the process.
Contributors: North, Emily Jean (Co-author) / Halden, Rolf (Co-author, Thesis director) / Mikhail, Chester (Committee member) / Hurlbut, Ben (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Chemical Engineering Program (Contributor)
Created: 2013-05
Description
As society's energy crisis continues to become more imminent, many industries and niches are seeking new, sustainable, and renewable sources of electricity production. Like solar, wind, and tidal energy, kinetic energy has the potential to generate electricity as an extremely renewable source of energy generation. While stationary bicycles can generate small amounts of electricity, the idea behind this project was to expand energy generation into the more common weight-lifting side of exercising. The method for solving this problem was to find the average amount of power generated per user on a Smith machine and determine how much power was available from an accompanying energy generator. The generator consists of three phases: a copper coil and magnet generator, a full-wave bridge rectifying circuit, and a rheostat. These three phases working together formed a fully functioning, controllable generator. The resulting issue with the kinetic energy generator was that the system was too inefficient to serve as a viable means of electricity generation: its electrical production saved only about 2 cents per year based on current Arizona electricity rates. In the end it was determined that the project was not a sustainable energy generation system and did not warrant further experimentation.
Contributors: O'Halloran, Ryan James (Author) / Middleton, James (Thesis director) / Hinrichs, Richard (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / The Design School (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description

A statistical method is proposed to learn the diffusion coefficient at any point in space on a cell membrane. The method uses Bayesian nonparametrics to infer this value. Learning the diffusion coefficient may be useful for understanding more about cellular dynamics.
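The Bayesian nonparametric machinery itself is not detailed in this abstract. As a much simpler illustration of the underlying inference, here is a conjugate sketch that assumes a single, spatially constant D and 1-D Brownian increments; the function, its prior, and all parameter names are my own assumptions, not the thesis's method:

```python
def diffusion_posterior_mean(increments, dt, alpha=1.0, beta=1e-3):
    """Posterior mean of a constant diffusion coefficient D, assuming
    observed increments dx ~ Normal(0, 2*D*dt) and an
    inverse-gamma(alpha, beta) prior on the variance v = 2*D*dt
    (the standard conjugate update for a normal variance)."""
    n = len(increments)
    ssq = sum(dx * dx for dx in increments)
    alpha_post = alpha + n / 2.0
    beta_post = beta + ssq / 2.0
    var_mean = beta_post / (alpha_post - 1.0)  # mean of the posterior IG
    return var_mean / (2.0 * dt)
```

A spatially varying estimate, as in the thesis, would replace the single variance parameter with a function of position learned nonparametrically from many localized increments.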

Contributors: Gallimore, Austin Lee (Author) / Presse, Steve (Thesis director) / Armbruster, Dieter (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05