Matching Items (42)
Description

Over the years, advances in research have continued to decrease the size of computers from the size of a room to a small device that could fit in one’s palm. However, if an application does not require extensive computation power nor accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers big. Researchers at MIT have successfully created Syncells, which are micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. In order to control these Syncells for a desired outcome, they must each run a simple distributed algorithm. As they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells is able to find and converge to a specific target within the environment. The most direct applications of this problem are in medicine. Such a system could be used as a safer alternative to invasive surgery or could be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target present within the environment.
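
The thesis abstract does not reproduce the algorithm itself; as a loose illustration of the kind of local rule such a Python simulation might use, here is a minimal sketch in which each particle moves toward the strongest signal reading among itself and its local neighbors. The signal model, communication radius, and step size are all assumptions made for illustration, not the algorithm developed in the thesis.

```python
# Hypothetical sketch of a local rule for micro-robots converging on a target.
# NOT the thesis's algorithm: the signal field, communication radius, and step
# size are illustrative assumptions.
import math
import random

TARGET = (0.0, 0.0)          # unknown to the particles; they only sense a local signal
STEP = 0.05                  # assumed maximum move per round
COMM_RADIUS = 0.5            # assumed local communication range

def signal(pos):
    """Locally measurable scalar field that peaks at the target (assumed model)."""
    return 1.0 / (1.0 + math.dist(pos, TARGET))

def neighbors(i, positions):
    """Indices of particles within local communication range of particle i."""
    return [j for j, p in enumerate(positions)
            if j != i and math.dist(p, positions[i]) <= COMM_RADIUS]

def step(positions):
    """One synchronous round: each particle climbs toward the strongest nearby reading."""
    new_positions = []
    for i, pos in enumerate(positions):
        # Candidate readings: own position plus the positions of local neighbors.
        candidates = [pos] + [positions[j] for j in neighbors(i, positions)]
        best = max(candidates, key=signal)
        # Move a bounded step toward the best local reading, plus small noise.
        dx, dy = best[0] - pos[0], best[1] - pos[1]
        norm = math.hypot(dx, dy) or 1.0
        scale = min(STEP, norm) / norm
        new_positions.append((pos[0] + scale * dx + random.uniform(-0.01, 0.01),
                              pos[1] + scale * dy + random.uniform(-0.01, 0.01)))
    return new_positions

positions = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(50)]
for _ in range(500):
    positions = step(positions)
print("mean distance to target:",
      sum(math.dist(p, TARGET) for p in positions) / len(positions))
```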

Contributors: Martin, Rebecca Clare (Author) / Richa, Andréa (Thesis director) / Lee, Heewook (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Cancer rates vary between people, between cultures, and between tissue types, driven by clinically relevant distinctions in the risk factors that lead to different cancer types. Despite the importance of cancer location in human health, little is known about tissue-specific cancers in non-human animals. We can gain significant insight into how evolutionary history has shaped mechanisms of cancer suppression by examining how life history traits impact cancer susceptibility across species. Here, we perform a multi-level analysis to test how species-level life history strategies are associated with differences in neoplasia prevalence, and apply this to mammary neoplasia within mammals. We propose that the same patterns of cancer prevalence that have been reported across species will be maintained at the tissue-specific level. We used a combination of factor analysis and phylogenetic regression on 13 life history traits across 90 mammalian species to determine how each life history trait relates to mammary neoplasia prevalence. The factor analysis provided a way to quantify the underlying factors that contribute to the covariance of entangled life history variables. A greater risk of mammary neoplasia was found to be correlated most significantly with shorter gestation length. With this analysis, a framework is provided for how different life history modalities can influence cancer vulnerability. Additionally, statistical methods developed for this project present a framework for future comparative oncology studies and have the potential for many diverse applications.
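
A rough sketch of the two-stage pipeline described above (factor analysis on correlated life history traits, then a regression of mammary neoplasia prevalence on the factor scores) might look like the following. The file name, column names, and the use of ordinary least squares in place of the phylogenetic regression actually used are illustrative assumptions.

```python
# Illustrative two-stage analysis: factor analysis on life history traits,
# then regression of mammary neoplasia prevalence on the factor scores.
# File name, columns, and OLS (instead of phylogenetic regression) are assumed.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("mammal_life_history.csv")           # hypothetical input table
trait_cols = ["gestation_length", "litter_size", "adult_mass",
              "max_longevity", "weaning_age"]          # subset of the 13 traits
X = StandardScaler().fit_transform(df[trait_cols])

# Reduce entangled traits to a small number of underlying factors.
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(X)

# Regress neoplasia prevalence on the factor scores.  A phylogenetic regression
# would additionally model covariance arising from shared ancestry (not shown).
y = df["mammary_neoplasia_prevalence"]
model = sm.OLS(y, sm.add_constant(scores)).fit()
print(model.summary())
```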

Contributors: Fox, Morgan Shane (Author) / Maley, Carlo C. (Thesis director) / Boddy, Amy (Committee member) / Compton, Zachary (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Optimal foraging theory provides a suite of tools that model the best way that an animal will structure its searching and processing decisions in uncertain environments. It has been successful at characterizing real patterns of animal decision making, thereby providing insights into why animals behave the way they do. However, it does not speak to how animals make decisions that tend to be adaptive. Using simulation studies, prior work has shown empirically that a simple decision-making heuristic tends to produce prey-choice behaviors that, on average, match the predicted behaviors of optimal foraging theory. That heuristic chooses to spend time processing an encountered prey item if that prey item's marginal rate of caloric gain (in calories per unit of processing time) is greater than the forager's current long-term rate of accumulated caloric gain (in calories per unit of total searching and processing time). Although this heuristic may seem intuitive, a rigorous mathematical argument for why it tends to produce the theorized optimal foraging theory behavior has not been developed. In this thesis, an analytical argument is given for why this simple decision-making heuristic is expected to realize the optimal performance predicted by optimal foraging theory. This theoretical guarantee not only provides support for why such a heuristic might be favored by natural selection, but it also provides support for why such a heuristic might be a reliable tool for decision-making in autonomous engineered agents moving through theatres of uncertain rewards. Ultimately, this simple decision-making heuristic may provide a recipe for reinforcement learning in small robots with limited computational capabilities.
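
The accept rule of the heuristic described above is simple enough to state directly in code. The following sketch simulates it under assumed prey types and encounter timing; those numbers are placeholders, not data from the thesis.

```python
# Simulation of the prey-choice heuristic: process an encountered item only if
# its marginal rate (calories per unit handling time) exceeds the forager's
# current long-term rate (total calories per unit of total search + handling
# time).  Prey types and encounter timing are illustrative assumptions.
import random

PREY_TYPES = [(50.0, 10.0), (10.0, 1.0)]   # (calories, handling_time), hypothetical
SEARCH_TIME_PER_ENCOUNTER = 2.0

total_calories = 0.0
total_time = 1e-9            # avoid division by zero before the first encounter

for _ in range(100_000):
    total_time += SEARCH_TIME_PER_ENCOUNTER
    calories, handling = random.choice(PREY_TYPES)
    long_term_rate = total_calories / total_time
    marginal_rate = calories / handling
    if marginal_rate > long_term_rate:     # the heuristic's accept rule
        total_calories += calories
        total_time += handling

print("long-term caloric intake rate:", total_calories / total_time)
```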

Contributors: Cothren, Liliaokeawawa Kiyoko (Author) / Pavlic, Theodore (Thesis director) / Brewer, Naala (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Despite the 40-year war on cancer, very limited progress has been made in developing a cure for the disease. This failure has prompted the reevaluation of the causes and development of cancer. One resulting model, coined the atavistic model of cancer, posits that cancer is a default phenotype of the cells of multicellular organisms which arises when the cell is subjected to an unusual amount of stress. Since this default phenotype is similar across cell types and even organisms, it seems it must be an evolutionarily ancestral phenotype. We take a phylostratigraphical approach, but systematically add species divergence time data to estimate gene ages numerically and use these ages to investigate the ages of genes involved in cancer. We find that ancient disease-recessive cancer genes are significantly enriched for DNA repair and SOS activity, which seems to imply that a core component of cancer development is not the regulation of growth, but the regulation of mutation. Verification of this finding could drastically improve cancer treatment and prevention.
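
The numeric gene-age assignment described above can be illustrated with a small sketch: each gene is placed in the oldest phylostratum containing a detectable homolog, and that stratum is mapped to an estimated divergence time. The strata, divergence times, and gene assignments below are illustrative placeholders, not results from the thesis.

```python
# Sketch of numeric gene-age estimation via phylostratigraphy plus divergence
# times.  All values below are illustrative placeholders.
DIVERGENCE_TIME_MYA = {
    "cellular_organisms": 3500,
    "Eukaryota": 1800,
    "Metazoa": 800,
    "Vertebrata": 500,
    "Mammalia": 180,
    "Primates": 75,
}

# Oldest phylostratum with a detectable homolog for each gene (hypothetical).
GENE_STRATUM = {
    "TP53": "Metazoa",
    "BRCA1": "Eukaryota",
    "MLH1": "cellular_organisms",
}

def gene_age(gene):
    """Numeric age estimate (millions of years) for a gene."""
    return DIVERGENCE_TIME_MYA[GENE_STRATUM[gene]]

for g in GENE_STRATUM:
    print(g, gene_age(g), "Mya")
```
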
Contributors: Orr, Adam James (Author) / Davies, Paul (Thesis director) / Bussey, Kimberly (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor) / School of Life Sciences (Contributor)
Created: 2015-05
Description
Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (to be referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds for the bias parameter lambda which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election--in which a particle system elects a single leader--since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
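
At the core of the Markov Chain Algorithm for Compression is a Metropolis-style move acceptance rule biased by the parameter lambda. The sketch below illustrates only that acceptance step; the triangular-lattice geometry and connectivity checks of the full algorithm are omitted, and the example numbers are assumptions.

```python
# Sketch of the biased acceptance step in a Markov chain compression algorithm:
# a candidate particle move that changes the number of particle-particle
# adjacencies by delta_e is accepted with probability min(1, lam ** delta_e).
# With lam > 1, moves that gain neighbors are favored (compression); with
# lam < 1, the system tends to expand.  Lattice geometry is not modeled here.
import random

LAMBDA = 4.0   # bias parameter; the thesis improves the known bounds on lambda

def accept_move(edges_before: int, edges_after: int, lam: float = LAMBDA) -> bool:
    """Accept a candidate move with probability min(1, lam ** (edges_after - edges_before))."""
    delta_e = edges_after - edges_before
    return random.random() < min(1.0, lam ** delta_e)

# Example: a move losing two adjacencies is accepted with probability 1 / lam**2.
print(sum(accept_move(5, 3) for _ in range(100_000)) / 100_000)  # ~0.0625 for lambda = 4
```
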
Contributors: Daymude, Joshua Jungwoo (Author) / Richa, Andrea (Thesis director) / Kierstead, Henry (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Glioblastoma Multiforme (GBM) is an aggressive and deadly form of brain cancer with a median survival time of about a year with treatment. Due to the aggressive nature of these tumors and the tendency of gliomas to follow white matter tracts in the brain, each tumor mass has a unique growth pattern. Consequently, it is difficult for neurosurgeons to anticipate where the tumor will spread in the brain, making treatment planning difficult. Archival patient data including MRI scans depicting the progress of tumors have been helpful in developing a model to predict Glioblastoma proliferation, but limited scans per patient make the tumor growth rate difficult to determine. Furthermore, patient treatment between scan points can significantly compound the challenge of accurately predicting the tumor growth. A partnership with Barrow Neurological Institute has allowed murine studies to be conducted in order to closely observe tumor growth and potentially improve the current model to more closely resemble intermittent stages of GBM growth without treatment effects.
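
The abstract does not state the model explicitly; one commonly used starting point for glioma growth modeling is a proliferation-invasion (Fisher-KPP) reaction-diffusion equation, sketched here in one dimension with illustrative parameter values that are not taken from the thesis.

```python
# Sketch of a proliferation-invasion (Fisher-KPP) model often used for glioma
# growth: du/dt = D * d2u/dx2 + rho * u * (1 - u), solved in 1-D with explicit
# finite differences.  D, rho, grid, and duration are illustrative assumptions.
import numpy as np

D = 0.01       # diffusion coefficient (cm^2 / day), assumed
RHO = 0.05     # proliferation rate (1 / day), assumed
DX, DT = 0.1, 0.1
N_STEPS = 900                        # ~90 simulated days

x = np.arange(0, 10, DX)             # 10 cm 1-D domain
u = np.exp(-((x - 5.0) ** 2) / 0.1)  # small initial tumor cell density at the center

for _ in range(N_STEPS):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / DX ** 2
    u = u + DT * (D * lap + RHO * u * (1 - u))
    u[0], u[-1] = u[1], u[-2]        # approximate zero-flux boundaries

print("fraction of domain above detection threshold 0.1:", float((u > 0.1).mean()))
```
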
Contributors: Snyder, Lena Haley (Author) / Kostelich, Eric (Thesis director) / Frakes, David (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
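
The post-optimization idea rests on bookkeeping over which subsequences each permutation covers and which it covers uniquely ("essentially"). The sketch below shows that bookkeeping for a small example; the example permutations are arbitrary and the thesis's actual local-change rules are not reproduced here.

```python
# Coverage bookkeeping for sequence covering: a length-t sequence is "covered"
# by a permutation if its symbols appear in that relative order, and a
# permutation is essential for a sequence if it is the only one covering it.
from itertools import permutations as all_perms

def covers(perm, subseq):
    """True if the symbols of subseq appear in perm in the same relative order."""
    positions = {v: i for i, v in enumerate(perm)}
    return all(positions[a] < positions[b] for a, b in zip(subseq, subseq[1:]))

def essential_sequences(perm_set, t, n):
    """Map each permutation to the t-sequences that only it covers."""
    essential = {p: [] for p in perm_set}
    for subseq in all_perms(range(n), t):
        covering = [p for p in perm_set if covers(p, subseq)]
        if len(covering) == 1:
            essential[covering[0]].append(subseq)
    return essential

# Toy example: 3 permutations of 4 symbols, coverage of all ordered triples.
# A permutation with no essential triples could be removed without losing coverage.
perm_set = [(0, 1, 2, 3), (3, 2, 1, 0), (1, 3, 0, 2)]
for p, seqs in essential_sequences(perm_set, 3, 4).items():
    print(p, "is the only permutation covering", len(seqs), "triples")
```
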
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
This paper explores how marginalist economics defines and inevitably constrains Victorian sensation fiction's content and composition. I argue that economic intuition implies that sensationalist heroes and antagonists, writers and readers all pursued a fundamental, "rational" aim: the attainment of pleasure. So although "sensationalism" took on connotations of moral impropriety in the Victorian age, sensation fiction primarily involves experiences of pain on the page that excite the reader's pleasure. As such, sensationalism as a whole can be seen as a conformist product, one which mirrors the effects of all commodities on the market, rather than as a rebellious one. Indeed, contrary to modern and contemporary critics' assumptions, sensation fiction may not be as scandalous as it seems.
Contributors: Fischer, Brett Andrew (Author) / Bivona, Daniel (Thesis director) / Looser, Devoney (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / School of Politics and Global Studies (Contributor) / Department of English (Contributor)
Created: 2014-12
Description
In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, utilizing iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, converting the problem to one of trace minimization and allowing the use of convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was that of unimodular polyphase radar waveform design. Under a finite signal energy constraint, the maximal energy return of a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if the problem is instead considered under a power constraint, a unimodular signal can be constructed, starting from such an eigenvector, that will have a greater return.
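
As a rough illustration of the alternating-projections approach, the sketch below iterates between the signal and Fourier domains, enforcing the measured magnitudes in one and a support/non-negativity constraint in the other. The signal, constraints, and iteration count are assumptions for illustration, not the exact setup analyzed in the thesis.

```python
# Sketch of alternating projections for phase retrieval: alternately impose the
# measured Fourier magnitudes and a known signal-domain constraint (here,
# non-negativity on a known support).  All specifics are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 128
true_signal = np.zeros(n)
true_signal[40:60] = rng.random(20)          # unknown non-negative signal on a known support
measured_mag = np.abs(np.fft.fft(true_signal))

support = np.zeros(n, dtype=bool)
support[40:60] = True

x = rng.random(n)                            # random initial guess
for _ in range(500):
    X = np.fft.fft(x)
    X = measured_mag * np.exp(1j * np.angle(X))   # impose measured magnitudes
    x = np.real(np.fft.ifft(X))
    x[~support] = 0.0                             # impose support constraint
    x[x < 0] = 0.0                                # impose non-negativity

# May stagnate in local minima; robustness to noise is what the thesis compares.
print("relative error:", np.linalg.norm(x - true_signal) / np.linalg.norm(true_signal))
```
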
Contributors: Jones, Scott Robert (Author) / Cochran, Douglas (Thesis director) / Diaz, Rodolfo (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Previous research discusses students' difficulties in grasping an operational understanding of covariational reasoning. In this study, I interviewed four undergraduate students in calculus and pre-calculus classes to determine their ways of thinking when working on an animated covariation problem. With previous studies in mind and with the use of technology, I devised an interview method, which I structured using multiple phases of pre-planned support. With these interviews, I gathered information about two main aspects of students' thinking: how students think when attempting to reason covariationally and which of the identified ways of thinking are most propitious for the development of an understanding of covariational reasoning. I will discuss how, based on interview data, one of the five identified ways of thinking about covariational reasoning is highly propitious, while the other four are somewhat less propitious.
Contributors: Whitmire, Benjamin James (Author) / Thompson, Patrick (Thesis director) / Musgrave, Stacy (Committee member) / Moore, Kevin C. (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / T. Denny Sanford School of Social and Family Dynamics (Contributor)
Created: 2014-05