Matching Items (36)

Exterior Ballistics Simulation and Aiming Error Correction

Description

The purpose of this project was to create an algorithm to improve firearm aiming. To do so, a simulation of exterior ballistics – the bullet’s behavior between the firearm muzzle and the target – was created in MATLAB. The simulation of bullet trajectory accounted for three forces: gravity, air drag, and the Coriolis ‘force’. An overall equation of motion for the bullet in flight, incorporating the effects of these forces, was constructed using formulae and theory given in R. L. McCoy’s Modern Exterior Ballistics. A reference frame was defined based on the firearm muzzle and target positions, and an aim vector, specified by two angles, was defined to describe the direction of the firearm’s barrel. The trajectory simulation takes into account eleven parameters: the two aim angles, initial bullet speed (commonly referred to as muzzle velocity), the three Cartesian components of wind velocity, air density, bullet diameter, bullet mass, latitude of the firing area, and azimuth of fire (a quantified compass direction of fire).

The user inputs the target position, muzzle position, and estimated environmental parameters to the system, and an aim vector is then calculated to hit the target under the estimated conditions. Because the eleven trajectory parameters cannot all be precisely known, this solution will have some error. In real life, the system would use feedback from real shots of a firearm to correct for this error; for this project, a real-world proxy simulation was created with built-in random error and variation in the parameters. The correction algorithm uses the error data from all previous shots to calculate adjustments to the original aim vector, so that each successive shot becomes more accurate. The system was tested with specifications of a common rifle platform, using estimated parameters and variations for a location in Tempe, AZ (since data for an urban area is more readily available than for a point in the wilderness). Results from this testing revealed that the system can “hit” a 2-meter-radius circular target in under 30 shots. When the errors and variations in the parameters were halved for the real-world stand-in simulation, the system could “hit” a circular target with a 0.55-meter radius in fewer than 25 shots. Further analysis showed that the corrected aim angles converged toward stable values, suggesting that the correction algorithm functions as intended (taking into account all past shots). Generally, any reduction in the means and standard deviations of the parameter errors improved the system’s ability to hit smaller targets or to hit the same target with fewer shots.
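
As a rough illustration of the two components described above – a trajectory integrator and a correction step driven by accumulated miss data – the sketch below models only gravity and quadratic drag (Coriolis omitted) and uses hypothetical parameter names and default values; it is not the thesis’s MATLAB implementation.

```python
import numpy as np

def simulate_shot(elev, azim, v0=850.0, dt=1e-3, target_range=500.0,
                  rho=1.1, cd=0.3, d=7.62e-3, m=9.5e-3, wind=np.zeros(3)):
    """Point-mass trajectory under gravity and quadratic drag (Coriolis omitted).
    All names and default values are illustrative, not the thesis's."""
    g = np.array([0.0, 0.0, -9.81])
    area = np.pi * (d / 2) ** 2                       # bullet frontal area
    pos = np.zeros(3)
    vel = v0 * np.array([np.cos(elev) * np.cos(azim),
                         np.cos(elev) * np.sin(azim),
                         np.sin(elev)])
    while pos[0] < target_range and pos[2] > -50.0:   # march until the target plane
        v_rel = vel - wind                            # velocity relative to the air
        drag = -0.5 * rho * cd * area * np.linalg.norm(v_rel) * v_rel / m
        vel = vel + (g + drag) * dt
        pos = pos + vel * dt
    return pos[1:]                                    # (cross-range, height) at the target

def corrected_aim(aim0, miss_history, target_range=500.0, gain=0.5):
    """Nudge the two aim angles (elevation, azimuth) using the mean miss over
    all previous shots, via a crude small-angle conversion of linear miss."""
    if not miss_history:
        return np.asarray(aim0, dtype=float)
    mean_cross, mean_height = np.mean(miss_history, axis=0)
    correction = gain * np.array([mean_height, mean_cross]) / target_range
    return np.asarray(aim0, dtype=float) - correction
```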

Contributors

Created

Date Created
2019-05

Algorithmic Prediction of Binding Sites of TNFα/TNFR2 and PD-1/PD-L1

Description

Predicting the binding sites of proteins has historically relied on the determination of protein structural data. However, the ability to use binding data obtained from a simple assay and computationally make the same predictions using only sequence information would be more efficient in both time and resources. The purpose of this study was to evaluate the effectiveness of an algorithm developed to predict regions of high binding on proteins as it applies to determining the regions of interaction between binding partners. This approach was applied to tumor necrosis factor alpha (TNFα), its receptor TNFR2, programmed cell death protein-1 (PD-1), and one of its ligands, PD-L1. The algorithm accurately predicted the binding region between TNFα and TNFR2, in which the interacting residues are sequential on TNFα; however, it failed to predict discontinuous binding regions as accurately. The interface of PD-1 and PD-L1 contained continuous residues interacting with each other, but this region was predicted to bind more weakly than regions on the external portions of the molecules. Limitations of this approach include the use of a linear search window (resulting in an inability to predict discontinuous binding residues) and, in the case of PD-1 and PD-L1, the use of proteins with unnaturally exposed regions (resulting in observed interactions that would not occur normally). Overall, however, this method was very effective in using the available information to make accurate predictions. The combination of a microarray to obtain binding information and a computer algorithm to analyze it is a versatile tool that can be adapted to refine accuracy.
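
A minimal sketch of the linear search-window idea described above: slide a fixed-length window along the protein sequence, score each window by the mean per-residue binding signal, and take the top-scoring windows as candidate binding regions. The window length, data structures, and any signal values are assumptions for illustration, not the study’s actual algorithm or measurements.

```python
from typing import Dict, List, Tuple

def window_scores(sequence: str, residue_signal: Dict[int, float],
                  window: int = 10) -> List[Tuple[int, float]]:
    """Score every linear window by the mean binding signal of its residues.
    `residue_signal` maps residue index -> assay intensity (hypothetical data)."""
    scores = []
    for start in range(len(sequence) - window + 1):
        vals = [residue_signal.get(i, 0.0) for i in range(start, start + window)]
        scores.append((start, sum(vals) / window))
    # Highest-scoring windows are the predicted high-binding regions; a purely
    # linear window like this cannot capture discontinuous (conformational) epitopes.
    return sorted(scores, key=lambda s: s[1], reverse=True)
```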

Contributors

Agent

Created

Date Created
2018-05

The History and Application of Optical Communications in Deep Space

Description

Optical communications are currently a topic of great interest in the space engineering community. After successful projects like the Lunar Laser Communications Demonstration (LLCD), NASA has become interested in augmenting its current Deep Space Network (DSN) with optical communication links. One such link is Deep Space Optical Communications (DSOC), which will launch with the Psyche mission. To give a full understanding of the advantages of this network, this thesis will review the history and benefits of optical communications both on Earth and in space. The thesis will then examine NASA’s DSOC project in depth through an algorithmic implementation of the communications channel.

Contributors

Agent

Created

Date Created
2018-05

complexMovement

Description

Computer Science and Dance are choice-driven disciplines. The outputs of their processes are compositions of experience. Dancers are not computers and computers are not people, but there are comparable traces of humanity in the way each interprets and interacts with its respective inputs, outputs, and environments. These overlaps are perhaps not obvious, but in an increasingly specialized world it is important to discuss them. Dynamic Programming and improvisational movement exist within exclusive corners of their respective fields and are characterized by their inherent adaptation to change. Inspired by the work of Ivar Hagendoorn, John Cage, and other interdisciplinary artists, complexMovement is motivated by the need to create space for intersections between these two powerful disciplines and to find overlaps in the questions they ask to achieve their goals. Dance and Computer Science are just one example of a hidden partnership between seemingly distant fields. The two allow for ample side-by-side comparison, but for the purposes of this work we focus on two smaller sectors of their studies: improvisational movement and the design of Dynamic Programming algorithms.

Contributors

Agent

Created

Date Created
2016-05

Collaborative Computation in Self-Organizing Particle Systems

Description

Many forms of programmable matter have been proposed for various tasks. We use an abstract model of self-organizing particle systems for programmable matter that could be used for a variety of applications, including smart paint and coating materials for engineering or programmable cells for medical uses. Previous research using this model has focused on shape formation and other spatial configuration problems, including line formation, compression, and coating. In this work we study foundational computational tasks that exceed the capabilities of the individual constant-memory particles described by the model. These tasks represent new ways to use these self-organizing systems, which, in conjunction with previous shape and configuration work, make the systems useful for a wider variety of tasks. We present an implementation of a counter using a line of particles, which makes it possible for the line to count to and store values much larger than the capacity of any individual particle. We then present an algorithm that takes a matrix and a vector as input and sets up and uses a rectangular block of particles to compute the matrix-vector multiplication; this setup also uses the counter implementation to store the resulting vector. Operations such as counting and matrix multiplication can leverage the distributed and dynamic nature of the self-organizing system to be more efficient and adaptable than they would be on traditional linear computing hardware. Such computational tools also give the systems more power to make complex decisions when adapting to new situations or analyzing the data they collect, reducing reliance on a central controller for setup and output processing. Finally, we demonstrate an application of similar computations with self-organizing systems to image processing, with an implementation of an image edge detection algorithm.
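
The counter idea can be illustrated with a toy, centrally simulated version: each particle in the line holds only a single bit and at most one pending carry token, yet the line as a whole counts far beyond any individual particle’s capacity. This is a sketch of the concept under those assumptions, not the thesis’s distributed, asynchronous algorithm.

```python
class Particle:
    """One constant-memory particle: a single stored bit plus one carry flag."""
    def __init__(self):
        self.bit = 0
        self.carry = False

def increment(line):
    """Inject an increment token at the head of the line and let carries
    propagate particle by particle, using only each particle's local state."""
    line[0].carry = True
    for i, p in enumerate(line):
        if not p.carry:
            break
        p.carry = False
        if p.bit == 0:
            p.bit = 1                          # absorb the token
        else:
            p.bit = 0                          # overflow: pass a carry along
            if i + 1 < len(line):
                line[i + 1].carry = True

def value(line):
    """Read the counter out (head of the line is the least significant bit)."""
    return sum(p.bit << i for i, p in enumerate(line))

line = [Particle() for _ in range(8)]          # 8 one-bit particles count to 255
for _ in range(42):
    increment(line)
assert value(line) == 42
```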

Contributors

Created

Date Created
2016-12

Compression in Self-Organizing Particle Systems

Description

Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle uses local rules to create an overall convex structure containing no holes; (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains; and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds on the bias parameter lambda, which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election – in which a particle system elects a single leader – since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
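
The bias parameter lambda mentioned above can be illustrated with the move filter typically used in such Markov chain approaches: a particle’s proposed move is always kept if it gains (or keeps) neighbors and is otherwise kept only with probability lambda raised to the (negative) change in neighbor count, so lambda > 1 drives the system toward compression. This is a hedged sketch of the acceptance rule only; the actual algorithm also enforces local connectivity conditions that are omitted here.

```python
import random

def accept_move(neighbors_before: int, neighbors_after: int, lam: float) -> bool:
    """Metropolis-style filter biased by lam: moves that gain or keep neighbors
    are always accepted; moves that lose neighbors are accepted with probability
    lam ** (neighbors_after - neighbors_before), which is < 1 when lam > 1."""
    delta = neighbors_after - neighbors_before
    if delta >= 0:
        return True
    return random.random() < lam ** delta

# For example, with lam = 4 a move that loses one neighbor is kept ~25% of the time,
# so over many steps the system favors configurations with more neighboring pairs.
```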

Contributors

Created

Date Created
2016-05

Phase Recovery and Unimodular Waveform Design

Description

In many systems, it is difficult or impossible to measure the phase of a signal. Direct recovery from magnitude is an ill-posed problem. Nevertheless, with a sufficiently large set of magnitude measurements, it is often possible to reconstruct the original signal using algorithms that implicitly impose regularization conditions on this ill-posed problem. Two such algorithms were examined: alternating projections, which uses iterative Fourier transforms with manipulations performed in each domain on every iteration, and phase lifting, which converts the problem to one of trace minimization, allowing convex optimization algorithms to perform the signal recovery. These recovery algorithms were compared on the basis of robustness as a function of signal-to-noise ratio. A second problem examined was unimodular polyphase radar waveform design. Under a finite signal-energy constraint, the maximal energy return from a scene operator is obtained by transmitting the eigenvector of the scene Gramian associated with the largest eigenvalue. It is shown that if the problem is instead considered under a power constraint, a unimodular signal constructed starting from such an eigenvector can achieve a greater return.
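
The alternating-projections approach mentioned above can be sketched in a few lines: keep the phase produced by the previous step while re-imposing the measured magnitude in each domain on every iteration (a generic Gerchberg-Saxton-style loop, not necessarily the exact variant studied in the thesis).

```python
import numpy as np

def alternating_projections(mag_obj, mag_fourier, n_iter=200, seed=0):
    """Recover a complex signal from its object-domain and Fourier-domain
    magnitudes by alternating magnitude projections in the two domains."""
    rng = np.random.default_rng(seed)
    x = mag_obj * np.exp(1j * 2 * np.pi * rng.random(mag_obj.shape))  # random start phase
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag_fourier * np.exp(1j * np.angle(X))   # impose measured Fourier magnitude
        x = np.fft.ifft(X)
        x = mag_obj * np.exp(1j * np.angle(x))       # impose measured object magnitude
    return x

# Hypothetical self-check on a synthetic signal:
rng = np.random.default_rng(1)
s = rng.standard_normal(64) + 1j * rng.standard_normal(64)
rec = alternating_projections(np.abs(s), np.abs(np.fft.fft(s)))
```

Any recovery from magnitudes alone is at best unique up to trivial ambiguities (e.g., a global phase), so comparisons against the true signal should account for that.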

Contributors

Created

Date Created
2014-05

Post-Optimization of Permutation Coverings

Description

Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover every subsequence, and in finding an explicit construction of such a set of permutations whose size is close or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths, and most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations that has the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems, demonstrating that it is surprisingly effective.
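
A small sketch of the coverage bookkeeping behind the post-optimization idea described above: compute the set of length-t subsequences covered by the permutation set, and call a permutation non-essential when the remaining permutations still cover everything required. The value of t and the example permutations are illustrative only.

```python
from itertools import combinations

def covered(perm, t):
    """All ordered length-t subsequences appearing in one permutation
    (itertools.combinations preserves the original order of elements)."""
    return set(combinations(perm, t))

def coverage(perms, t):
    """Union of the t-subsequences covered by every permutation in the set."""
    cov = set()
    for p in perms:
        cov |= covered(p, t)
    return cov

def non_essential(perms, idx, t, required):
    """True if permutation `idx` can be removed while the rest of the set
    still covers every required t-subsequence."""
    rest = perms[:idx] + perms[idx + 1:]
    return required <= coverage(rest, t)

# Toy usage with t = 3 over the symbols {0, 1, 2, 3} (illustrative only):
perms = [(0, 1, 2, 3), (3, 2, 1, 0), (1, 0, 3, 2), (2, 3, 0, 1), (0, 2, 1, 3), (3, 1, 2, 0)]
required = coverage(perms, 3)
removable = [i for i in range(len(perms)) if non_essential(perms, i, 3, required)]
```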

Contributors

Created

Date Created
2014-12

Gram-ART Applied to Music Recommendation Services

Description

In this paper we explore the design, implementation, and analysis of two different approaches for providing music recommendations to targeted users by implementing the Gram-ART unsupervised learning algorithm. We provide a content-filtering approach using a dataset of one million songs, which includes various metadata tags, and a collaborative-filtering approach using the listening histories of over one million users. The two methods are evaluated by their results in the Million Song Dataset Challenge. While both placed near the top third of the 150 challenge participants, the knowledge gained from the experiments will help further refine the process and would likely produce much better results in a system with the potential to scale by several orders of magnitude.

Contributors

Agent

Created

Date Created
2015-05

A Research Review of The Trials and Errors of Predictive Policing

Description

The era of mass data collection is upon us, and only recently have people begun to consider the value of their data. All of our clicks and likes have helped big tech companies build predictive models to tailor their products to the buying patterns of the consumer. Big data collection has its advantages in increasing profitability and efficiency, but many are concerned about the lack of transparency in these technologies (Dwyer). The dependency on algorithms to make and influence decisions has become a growing concern in law enforcement. The use of this technology is commonly referred to as data-driven decision making, also known as predictive policing. These technologies are thought to reduce the biases held in traditional policing by creating statistically sound, evidence-based models. However, many lawsuits have highlighted the fact that predictive technologies do more to reflect historical bias than to eradicate it. The clandestine measures behind the algorithms may conflict with the due process clause and the penumbra of privacy rights enumerated in the First, Third, Fourth, and Fifth Amendments.

Predictive policing technology has come under fire for over-policing historically Black and Latinx neighborhoods. GIS (Geographical Information Systems) is supposed to help officers identify where crime will likely happen over the next twelve hours. However, the LAPD’s own internal audit of its program concluded that the technology did not help officers solve crimes or reduce the crime rate any better than traditional patrol methods (Puente). Similarly, other tools used to calculate recidivism risk for bond sentencing are disproportionately biased toward calculating Black people as having a higher risk of reoffending (Angwin). Lawsuits from civil liberties groups have been filed against the police departments that utilized these technologies. This paper will examine the constitutional pitfalls of predictive technology and propose ways that the system could work to ameliorate its practices.

Contributors

Agent

Created

Date Created
2021-05