Arizona State course enrollment regularly reaches triple digits. Despite these large enrollments, the level of communication among students remains relatively low. Students often create Discord servers to keep in touch with classmates, but this requires each student to track down the invite link individually. The purpose of this project is to create an inviting chat service for students with minimal barriers to entry. This website, https://gibbl.io, offers a chat room for every class at ASU, making it simple for students to maintain communication.
Company X once dominated the server chip market, but its share has begun to diminish due to numerous competitors, product delays, and shrinking profit margins. This market will only keep growing as the advancement of and demand for server technologies continue to expand; regaining market share is therefore of utmost importance for Company X. This project analyzes how Company X can regain server market share by diverting funds into emerging markets. The paper highlights the importance of being an early entrant into a relatively untapped, promising regional market by addressing the economics, potential consumers, and competition. Analysis of these factors shows the potential net present value (NPV) that can be achieved by increasing investment in India.
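The NPV metric mentioned above discounts a stream of future cash flows back to the present. The abstract does not give the actual figures, so the sketch below uses purely hypothetical cash flows and a hypothetical 10% discount rate for illustration only:

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows.

    cash_flows[0] is the year-0 amount (typically a negative
    initial outlay); each later flow is discounted by (1+rate)^t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical figures, not from the paper: a $100M initial investment
# followed by five years of growing returns, discounted at 10%.
flows = [-100.0, 20.0, 30.0, 40.0, 50.0, 60.0]
print(round(npv(0.10, flows), 2))  # positive NPV => investment adds value
```

A positive NPV under these assumptions is what would justify the early-entrant investment the paper argues for.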
Throughout this project, I set a number of learning goals against which to measure its success. I needed to learn how to use the supporting libraries that would help me design this system. I also learned how to use the Twitter API and how to build the infrastructure behind it that would allow me to collect large amounts of data for machine learning. I needed to become familiar with common machine learning libraries in Python in order to create the algorithms and pipelines necessary to make predictions based on Twitter data.
This paper details the steps and decisions needed to determine how to collect this data and apply it to machine learning algorithms. I determined how to create labelled data using pre-existing Botometer ratings, and the confidence levels I needed in those ratings to label data for training. I used the scikit-learn library to create the algorithms that best detect these bots, and applied a number of pre-processing routines, including natural language processing and data analysis techniques, to refine the classifiers' precision. I eventually moved to remotely hosted versions of the system on Amazon Web Services (AWS) instances to collect larger amounts of data and train more advanced classifiers. This led to my final implementation of a user-facing server, hosted on AWS and interfacing with Gmail's IMAP server.
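The labelling-and-classification workflow described above can be sketched with scikit-learn. This is not the author's actual feature set or model; it is a minimal stand-in assuming tweet text as input, TF-IDF features, a logistic-regression classifier, and bot/human labels that would in practice come from thresholding Botometer scores. The tweets below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data. In practice, labels would be derived from
# Botometer ratings above/below chosen confidence thresholds, with
# uncertain accounts in between discarded.
tweets = [
    "Win a FREE iPhone now! Click here http://spam.example",
    "Just finished my morning run, feeling great",
    "FREE followers! Follow back instantly! Click http://spam.example",
    "Grading exams all weekend, coffee is my only friend",
]
labels = [1, 0, 1, 0]  # 1 = bot, 0 = human

# TF-IDF text features feeding a logistic-regression classifier;
# a simple placeholder for the scikit-learn classifiers described above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Click here for FREE prizes http://spam.example"]))
```

Natural-language pre-processing (tokenization, stop-word removal, and similar steps) would slot in before or inside the vectorizer to refine precision, as the abstract describes.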
Finally, the current and future development of this system is laid out, including more advanced classifiers, better data analysis, migration to third-party Twitter data-collection services, and user-facing features. I detail what I have learned from this exercise and what I hope to continue working on.