For my Thesis Project, I worked to operationalize an algorithmic trading application called Trading Dawg. Over the year, I was able to implement several analysis models, including accuracy, performance, volume, and hyperparameter analysis. With these improvements, we are in a strong position to create valuable tools in the algorithmic trading space.
Bad actor reporting has recently grown in popularity as an effective method for social media attacks and harassment, but many mitigation strategies have yet to be investigated. In this study, we created a simulated social media environment of 500,000 users, and let those users create and review a number of posts. We then created four different post-removal algorithms to analyze the simulation, each algorithm building on previous ones, and evaluated them based on their accuracy and effectiveness at removing malicious posts. This thesis work concludes that a trust-reward structure within user report systems is the most effective strategy for removing malicious content while minimizing the removal of genuine content. This thesis also discusses how the structure can be further enhanced to accommodate real-world data and provide a viable solution for reducing bad actor online activity as a whole.
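The trust-reward structure can be illustrated with a minimal sketch. The thesis's exact mechanism and parameters are not specified here; the threshold, reward, and penalty values below are illustrative assumptions:

```python
class TrustReportSystem:
    """Minimal sketch of a trust-reward report system.

    Each user carries a trust score. A post is removed once the combined
    trust of its reporters crosses a threshold; reporters are then rewarded
    or penalized depending on whether the removal was justified.
    All numeric values are illustrative assumptions, not the thesis's.
    """

    def __init__(self, removal_threshold=2.0, reward=0.1, penalty=0.3):
        self.trust = {}  # user_id -> trust score
        self.removal_threshold = removal_threshold
        self.reward = reward
        self.penalty = penalty

    def report(self, post, reporter_id):
        """Register a report; remove the post once weighted reports suffice."""
        self.trust.setdefault(reporter_id, 1.0)
        reporters = post.setdefault("reporters", set())
        reporters.add(reporter_id)
        if sum(self.trust[r] for r in reporters) >= self.removal_threshold:
            post["removed"] = True

    def resolve(self, post, was_malicious):
        """Feed back ground truth: adjust each reporter's trust score."""
        for r in post.get("reporters", set()):
            delta = self.reward if was_malicious else -self.penalty
            self.trust[r] = max(0.0, self.trust[r] + delta)
```

Under a scheme like this, users whose reports are repeatedly validated gain influence, while users who weaponize the report button lose it, which is the property that drives the accuracy gains described above.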
This paper explores the inner workings of algorithms that computers may use to play Chess. First, we discuss the classical Alpha-Beta algorithm and several improvements, including Quiescence Search, Transposition Tables, and more. Next, we examine the state-of-the-art Monte Carlo Tree Search algorithm and relevant optimizations. After that, we consider a recent algorithm that transforms Alpha-Beta into a “Rollout” search, blending it with Monte Carlo Tree Search under the rollout paradigm. We then discuss our C++ Chess Engine, Homura, and explain its implementation of a hybrid algorithm combining Alpha-Beta with MCTS. Finally, we show that Homura can play master-level Chess at a strength currently exceeding that of our backtracking Alpha-Beta.
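As context for the classical half of that hybrid, here is a minimal negamax formulation of Alpha-Beta in Python. Homura itself is written in C++; this sketch assumes a hypothetical game interface with `legal_moves`, `make`/`unmake`, and `evaluate` methods, none of which are Homura's actual API:

```python
import math

def alpha_beta(state, depth, alpha=-math.inf, beta=math.inf):
    """Negamax Alpha-Beta: value of `state` for the side to move.

    Prunes branches proven irrelevant: once a move refutes the position
    (value >= beta), the remaining siblings cannot change the result.
    Assumes a hypothetical interface: legal_moves(), make(), unmake(), evaluate().
    """
    if depth == 0 or not state.legal_moves():
        return state.evaluate()  # a quiescence search would replace this call
    best = -math.inf
    for move in state.legal_moves():
        state.make(move)
        value = -alpha_beta(state, depth - 1, -beta, -alpha)
        state.unmake(move)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:  # beta cutoff: the opponent will avoid this line
            break
    return best
```

Transposition tables memoize these values by position hash, and the rollout reformulation re-expresses this same recursion as repeated root-to-leaf descents, which is what allows it to interleave with MCTS.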
The sudden turn to artificial intelligence has been widely supported because of the many proposed positive outcomes of using such technologies to support or replace humans. Automating tedious processes and removing potential human error is exciting for society, but some concerns must be addressed. This essay aims to understand how artificial intelligence is being used to automate domains that significantly impact underprivileged and underrepresented groups. It addresses the potentially devastating effects of algorithmic bias and AI's role in perpetuating economic inequality by surveying domains such as the justice system and the real estate industry. Without society broadly understanding the potential negative side effects on systems that matter, the rapid growth of artificial intelligence is a recipe for disaster. Everyone must become educated about AI's current and potential implications before it is too late to stop its damaging effects.
Keeping this in mind, non-invasive geophysical methods deserve attention. There is a lot of potential in this field of work to help even with environmental crises, since these methods are useful in places where water theft is prevalent and is conducted through leaks in the distribution system. Methods such as acoustic sensing and ground-penetrating radar (GPR) have shown good results, and the work done in this thesis helps us understand the limits within which they can be used in the Phoenix area.
The concrete pipes used by SRP would not generate acoustic signals strong enough to be effectively picked up by a hydrophone at the opening, so the GPR would be helpful in finding the initial location of the leak: the water escaping around the leak wets the surrounding sand and therefore shows a clear contrast on the GPR. After that, the frequency spectrum around that point can be checked and compared against a point where we know no leak is present.
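That spectral comparison step can be sketched as follows. This is a minimal illustration using NumPy; the sampling rate argument, the 300-3000 Hz band, and the decision threshold are assumptions for the sketch, not values from the thesis:

```python
import numpy as np

def band_energy(signal, fs, f_lo=300.0, f_hi=3000.0):
    """Energy of `signal` within a frequency band, via the magnitude spectrum.

    fs is the sampling rate in Hz. The 300-3000 Hz band is an assumed
    range for leak-induced noise, not a value taken from the thesis.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

def likely_leak(candidate, reference, fs, ratio_threshold=2.0):
    """Flag a leak if the candidate point carries markedly more band energy
    than a reference point known to be leak-free (threshold is illustrative)."""
    return band_energy(candidate, fs) > ratio_threshold * band_energy(reference, fs)
```

Comparing against a known leak-free reference, rather than an absolute level, helps factor out background noise that is common to both measurement points.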
The main contributions of this thesis are three-fold: First, a bi-criteria approximation algorithm is presented for this all-or-nothing multicommodity flow (ANF) problem. This algorithm is the first to achieve a constant approximation of the maximum throughput with an edge capacity violation ratio that is at most logarithmic in n, with high probability. The approach is based on a version of randomized rounding that keeps flows splittable, rather than approximating them via a non-splittable path for each commodity; this allows it to work for arbitrary directed edge-capacitated graphs, unlike most of the prior work on the ANF problem. The algorithm also works if a weighted throughput is considered, where the benefit gained by fully satisfying the demand for commodity i is determined by a given weight w_i > 0. Second, a derandomization of the algorithm is presented that maintains the same approximation bounds, using novel pessimistic estimators for Bernstein's inequality. In addition, it is shown how the framework can be adapted to achieve a polylogarithmic fraction of the maximum throughput while maintaining a constant edge capacity violation, provided the network capacity is large enough. Lastly, one important aspect of the randomized and derandomized algorithms is their simplicity, which lends itself to efficient implementations in practice. Implementations of both the randomized-rounding and derandomized algorithms for the ANF problem are presented, demonstrating their efficiency in practice.
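The core rounding step admits a short sketch. Assume the LP relaxation has already been solved, yielding for each commodity i a fractional acceptance value x_i in (0, 1] and a splittable flow given per edge; rounding then admits each commodity independently and rescales its flow. The data layout and the scaling factor alpha below are illustrative assumptions, not the thesis's exact formulation:

```python
import random

def round_anf(fractional, alpha=2.0, rng=random.random):
    """One randomized-rounding pass over an ANF fractional solution.

    `fractional` maps commodity i -> (x_i, flow_i), where x_i in (0, 1] is the
    LP acceptance value and flow_i maps edge -> flow amount. Each commodity is
    accepted with probability x_i / alpha and, if accepted, its splittable
    flow is scaled by 1 / x_i so that its full demand is satisfied.
    Layout and alpha are illustrative assumptions, not the thesis's.
    """
    accepted, load = [], {}
    for i, (x_i, flow_i) in fractional.items():
        if rng() <= x_i / alpha:
            accepted.append(i)
            for edge, f in flow_i.items():
                load[edge] = load.get(edge, 0.0) + f / x_i
    return accepted, load
```

Concentration bounds such as Bernstein's inequality control the resulting edge loads; the derandomization replaces the coin flips with deterministic choices guided by pessimistic estimators of that failure probability.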
This research will focus on improving approximations of the lower bound of $\tau$. Toward this end, we will examine algorithmic enumeration and series analysis for self-avoiding polygons.
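Algorithmic enumeration can be illustrated with a minimal (exponential-time) counter for self-avoiding polygons on the square lattice. This is a naive sketch usable only for small perimeters, not the optimized enumeration such research would rely on:

```python
def closed_saws(n):
    """Count closed self-avoiding walks of length n starting at the origin
    of the square lattice (n must be even and at least 4)."""
    count = 0
    visited = {(0, 0)}

    def dfs(x, y, steps):
        nonlocal count
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if steps + 1 == n:
                if (nx, ny) == (0, 0):  # walk closes exactly at step n
                    count += 1
            elif (nx, ny) not in visited:  # stay self-avoiding
                visited.add((nx, ny))
                dfs(nx, ny, steps + 1)
                visited.remove((nx, ny))

    dfs(0, 0, 0)
    return count

def polygons(n):
    """Self-avoiding polygons of perimeter n, counted up to translation.

    Each such polygon corresponds to 2n rooted, oriented closed walks
    through the origin (n choices of root vertex, 2 orientations)."""
    return closed_saws(n) // (2 * n)
```

This reproduces the known initial counts p_4 = 1, p_6 = 2 (the two orientations of the 1x2 rectangle), and p_8 = 7; the growth of such counts with perimeter is what series analysis extrapolates to bound the limiting constant from below.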