For machine learning acceleration, traditional SRAM- and DRAM-based systems suffer from low capacity, high latency, and high standby power. Emerging memories, such as Phase Change Random Access Memory (PRAM), Spin-Transfer Torque Magnetic Random Access Memory (STT-MRAM), and Resistive Random Access Memory (RRAM), are promising alternatives, offering low standby power, high data density, fast access, and excellent scalability. This dissertation proposes a hierarchical memory modeling framework and models PRAM and STT-MRAM at four levels of abstraction. With the proposed models, simulations are conducted to investigate performance, optimization, variability, reliability, and scalability.
Emerging memory devices such as RRAM can be organized as a 2-D crosspoint array to accelerate the multiply-accumulate operations at the core of machine learning algorithms. This dissertation proposes a new parallel programming scheme to achieve in-memory learning with an RRAM crosspoint array. The programming circuitry is designed and simulated in TSMC 65 nm technology, showing a 900× speedup on the dictionary learning task compared to CPU performance.
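The multiply-accumulate principle behind the crosspoint array can be sketched numerically: each cell's conductance encodes a matrix weight, input voltages drive the rows, and by Ohm's law and Kirchhoff's current law each column current is a full dot product computed in one analog step. The function and values below are illustrative, not taken from the dissertation's design.

```python
import numpy as np

def crossbar_mac(conductances, voltages):
    """Model the analog multiply-accumulate of an RRAM crosspoint array.

    Each column current is the sum over rows of (row voltage x cell
    conductance): Ohm's law per cell, Kirchhoff's current law per column.
    """
    # conductances: (rows, cols) matrix of cell conductances, in siemens
    # voltages: (rows,) vector of input voltages applied to the row wires
    return voltages @ conductances  # one dot product per column, in amperes

# Example: a 3x2 array evaluates two dot products in a single read step.
G = np.array([[1e-6, 2e-6],
              [3e-6, 1e-6],
              [2e-6, 4e-6]])
V = np.array([0.1, 0.2, 0.1])
I = crossbar_mac(G, V)  # column currents
```

Because every column integrates its currents simultaneously, an N×M array performs N×M multiplications in one step, which is the source of the speedup over a sequential CPU implementation.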
From the algorithm perspective, inspired by the high accuracy and low power of the brain, this dissertation proposes a bio-plausible feedforward inhibition spiking neural network with a Spike-Rate-Dependent Plasticity (SRDP) learning rule. It achieves more than 95% accuracy on the MNIST dataset, which is comparable to the sparse coding algorithm but requires far fewer computations. The role of inhibition in this network is systematically studied and shown to improve hardware efficiency during learning.
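The essence of a rate-dependent rule is that synaptic updates depend on pre- and post-synaptic firing rates rather than precise spike timing. The dissertation's exact SRDP rule is not given here, so the following is a simplified, hypothetical variant for illustration: a synapse is potentiated when both of its neurons fire above a rate threshold and depressed otherwise, with weights kept in a bounded range.

```python
import numpy as np

def srdp_update(w, pre_rate, post_rate, threshold=5.0, lr=0.01):
    """Toy rate-dependent plasticity: potentiate a synapse when both the
    pre- and post-synaptic firing rates exceed a threshold, else depress."""
    # w: (post, pre) weight matrix; rates given in spikes per second
    both_active = np.outer(post_rate > threshold, pre_rate > threshold)
    dw = np.where(both_active, lr, -lr)
    return np.clip(w + dw, 0.0, 1.0)  # keep weights bounded, as in hardware

# Two inputs, two outputs: only the co-active pair's synapse is strengthened.
w = np.zeros((2, 2))
w = srdp_update(w, pre_rate=np.array([10.0, 1.0]),
                   post_rate=np.array([10.0, 1.0]))
```

Rate-based rules like this are attractive for hardware because they need only running spike counts per neuron, not per-spike timestamps.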
The era of mass data collection is upon us, and only recently have people begun to consider the value of their data. Our clicks and likes have helped big tech companies build predictive models that tailor products to consumers' buying patterns. Big data collection has its advantages in increasing profitability and efficiency, but many are concerned about the lack of transparency in these technologies (Dwyer). The dependence on algorithms to make and influence decisions has become a growing concern in law enforcement, where this technology is commonly referred to as data-driven decision making, also known as predictive policing. These technologies are thought to reduce the biases of traditional policing by creating statistically sound, evidence-based models. However, many lawsuits have highlighted the fact that predictive technologies do more to reflect historical bias than to eradicate it. The clandestine measures behind the algorithms may conflict with the due process clause and the penumbra of privacy rights enumerated in the First, Third, Fourth, and Fifth Amendments.

Predictive policing technology has come under fire for over-policing historically Black and Latinx neighborhoods. GIS (Geographical Information Systems) is supposed to help officers identify where crime will likely happen over the next twelve hours. However, the LAPD's own internal audit of its program concluded that the technology did not help officers solve crimes or reduce the crime rate any better than traditional patrol methods (Puente). Similarly, other tools used to calculate recidivism risk for bond sentencing are disproportionately biased toward rating Black people as having a higher risk of reoffending (Angwin). Civil liberties groups have filed lawsuits against police departments that utilized these technologies.
This paper will examine the constitutional pitfalls of predictive technology and propose reforms the system could adopt to ameliorate these practices.
Optimal foraging theory provides a suite of tools that model the best way for an animal to structure its searching and processing decisions in uncertain environments. It has been successful in characterizing real patterns of animal decision making, thereby providing insights into why animals behave the way they do. However, it does not speak to how animals make decisions that tend to be adaptive. Using simulation studies, prior work has shown empirically that a simple decision-making heuristic tends to produce prey-choice behaviors that, on average, match the predicted behaviors of optimal foraging theory. That heuristic chooses to spend time processing an encountered prey item if that prey item's marginal rate of caloric gain (in calories per unit of processing time) is greater than the forager's current long-term rate of accumulated caloric gain (in calories per unit of total searching and processing time). Although this heuristic may seem intuitive, a rigorous mathematical argument for why it tends to produce the theorized optimal foraging behavior has not been developed. In this thesis, an analytical argument is given for why this simple decision-making heuristic is expected to realize the optimal performance predicted by optimal foraging theory. This theoretical guarantee not only provides support for why such a heuristic might be favored by natural selection, but also for why it might be a reliable tool for decision making in autonomous engineered agents moving through theatres of uncertain rewards. Ultimately, this simple decision-making heuristic may provide a recipe for reinforcement learning in small robots with limited computational capabilities.
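The rate-comparison heuristic described above can be made concrete in a few lines: on each encounter, process the item only if its marginal rate (calories per unit of handling time) exceeds the forager's running long-term rate (total calories over total elapsed time). The simulation below is a minimal sketch under assumed encounter dynamics (one Bernoulli encounter opportunity per search step), not the thesis's own model.

```python
import random

def forage(prey_types, horizon, seed=0):
    """Simulate the prey-choice heuristic over a fixed search horizon.

    prey_types: list of (encounter_prob, calories, handling_time) tuples.
    An encountered item is processed only if its marginal rate
    (calories / handling_time) exceeds the forager's current long-term
    rate (total calories / total searching-plus-processing time).
    """
    rng = random.Random(seed)
    gained, elapsed = 0.0, 0.0
    for _ in range(horizon):
        elapsed += 1.0  # one unit of search time
        for prob, calories, handling in prey_types:
            if rng.random() < prob:
                long_term_rate = gained / elapsed
                if calories / handling > long_term_rate:  # the heuristic
                    gained += calories
                    elapsed += handling
                break  # at most one encounter per search step
    return gained, elapsed

# A single always-encountered prey type: its marginal rate (10 cal/unit)
# always beats the long-term rate, so every item is processed.
gained, elapsed = forage([(1.0, 10.0, 1.0)], horizon=100)
```

Note that the long-term rate in this example converges to 5 calories per unit time (10 calories per 2 total time units), so the marginal rate of 10 always clears the bar and the forager never rejects the item, matching the zero-one structure of the optimal prey-choice policy.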
Over the years, advances in research have continued to shrink computers from the size of a room to a small device that fits in one's palm. However, if an application requires neither extensive computation power nor accessories such as a screen, the corresponding machine could be microscopic, only a few nanometers across. Researchers at MIT have successfully created Syncells, micro-scale robots with limited computation power and memory that can communicate locally to achieve complex collective tasks. To control these Syncells toward a desired outcome, each must run a simple distributed algorithm. Because they are only capable of local communication, Syncells cannot receive commands from a control center, so their algorithms cannot be centralized. In this work, we created a distributed algorithm that each Syncell can execute so that the system of Syncells finds and converges to a specific target within the environment. The most direct applications of this problem are in medicine: such a system could serve as a safer alternative to invasive surgery or be used to treat internal bleeding or tumors. We tested and analyzed our algorithm through simulation and visualization in Python. Overall, our algorithm successfully caused the system of particles to converge on a specific target within the environment.
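To illustrate the flavor of such a decentralized rule, here is a minimal Python sketch in which each particle uses only a locally sensed signal (assumed to grow stronger nearer the target) and keeps a small random move only when the reading improves. This hypothetical hill-climbing rule stands in for the thesis's actual algorithm, which additionally uses local communication between Syncells; the function names and signal model are assumptions for illustration.

```python
import random

def sense(pos, target):
    # Assumed signal model: strength decays with squared distance to target.
    dx, dy = pos[0] - target[0], pos[1] - target[1]
    return -(dx * dx + dy * dy)

def converge(positions, target, steps=200, step_size=0.5, seed=0):
    """Each particle repeatedly proposes a small random move and keeps it
    only if its locally sensed signal improves -- no central controller."""
    rng = random.Random(seed)
    pts = [list(p) for p in positions]
    for _ in range(steps):
        for p in pts:
            trial = [p[0] + rng.uniform(-step_size, step_size),
                     p[1] + rng.uniform(-step_size, step_size)]
            if sense(trial, target) > sense(p, target):
                p[0], p[1] = trial  # accept the improving move
    return pts

start = [[5.0, 0.0], [0.0, 4.0], [-3.0, -3.0]]
end = converge(start, target=(0.0, 0.0))
```

Each particle's decision depends only on its own sensor readings, so the rule runs identically on every agent with no global coordinates or shared state, which is the defining constraint of the Syncell setting.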
Within the last decade, there has been considerable hype surrounding the potential medical applications of artificial intelligence (AI) and machine learning (ML) technologies. Over the same timespan, big tech companies such as Microsoft, Apple, Amazon, and Google have entered the healthcare market as developers of health-based AI and ML technologies. This project aims to create a comprehensive map of the existing health-AI market landscape for the general biotech reader and to provide critical commentary on the existing market structure.