Matching Items (14)
Description
This paper details the specification and implementation of a single-machine blockchain simulator. It also includes a brief introduction to the history and underlying concepts of blockchain, with explanations of features such as decentralization, openness, trustlessness, and consensus. The introduction features a brief overview of public interest in blockchain and of current implementations, before stating potential use cases for blockchain simulation software. The paper then gives a brief literature review of blockchain's role as both a disruptive and a foundational technology. The literature review also addresses the potential and difficulties of using blockchain in Internet of Things (IoT) networks, and describes the limitations of blockchain in general regarding computational intensity, storage capacity, and network architecture. Next, the paper gives the specification for a generic blockchain structure, with summaries of the behaviors and purposes of transactions, blocks, nodes, miners, public- and private-key cryptography, signature validation, and hashing. Finally, the author gives an overview of their specific implementation of the blockchain using C/C++ and OpenSSL. The overview includes a brief description of all the classes and data structures involved in the implementation, including their function and behavior. While the implementation meets the requirements set forth in the specification, the results are more qualitative and intuitive, as time constraints did not allow for quantitative measurements of the network simulation. The paper concludes by discussing potential applications for the simulator and the possibility of future hardware implementations of blockchain.
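As an illustrative aside, and not the author's code (the thesis implements this in C/C++ with OpenSSL), the following is a minimal Python sketch of the hash-chaining behavior the specification describes, where each block commits to its transactions and the previous block's hash. All names here are hypothetical.

```python
import hashlib
import json
import time

class Block:
    """Minimal block: commits to its transactions and the previous block's hash."""
    def __init__(self, transactions, prev_hash):
        self.timestamp = time.time()
        self.transactions = transactions  # list of simple transaction dicts
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        # Serialize deterministically, then hash with SHA-256.
        payload = json.dumps(
            {"ts": self.timestamp, "txs": self.transactions, "prev": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

# Chain two blocks: any change to the first block changes its hash and
# breaks the second block's prev_hash link.
genesis = Block([{"from": "alice", "to": "bob", "amount": 5}], prev_hash="0" * 64)
second = Block([{"from": "bob", "to": "carol", "amount": 2}], prev_hash=genesis.hash)
assert second.prev_hash == genesis.compute_hash()
```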
Contributors: Rauschenbach, Timothy Rex (Author) / Vrudhula, Sarma (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
The fight for equal transgender rights is gaining traction in the public eye, but still has a lot of progress to make in the social and legal spheres. Since public opinion is critical in any civil rights movement, this study attempts to identify the most effective methods of eliciting public reactions in support of transgender rights. Topic analysis through Latent Dirichlet Allocation is performed on Twitter data, along with polarity sentiment analysis, to track which subjects gain the most effective reactions over time. Graphing techniques are used in an attempt to visually display the trends in topics. The topic analysis techniques are effective in identifying the positive and negative trends in the data, but the graphing algorithm lacks the ability to comprehensibly display complex, higher-dimensional data.
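A hedged sketch of the pipeline this abstract describes (the thesis's exact tooling is not specified here; the tweet list, topic count, and TextBlob are assumptions for illustration): LDA topic modeling over placeholder tweets, paired with a polarity score per tweet.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from textblob import TextBlob

tweets = [
    "Support trans rights and equal protection now",
    "New legislation threatens transgender healthcare access",
    "Celebrating transgender visibility and community today",
]

# Bag-of-words features, then LDA to discover latent topics.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(X)  # one topic distribution per tweet

# Polarity in [-1, 1]; pairing it with the dominant topic lets one track
# which subjects draw positive versus negative reactions over time.
for tweet, weights in zip(tweets, topic_weights):
    polarity = TextBlob(tweet).sentiment.polarity
    print(weights.argmax(), round(polarity, 2), tweet)
```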
Contributors: Wilmot, Christina Dory (Author) / Liu, Huan (Thesis director) / Bellis, Camellia (Committee member) / Sanford School of Social and Family Dynamics (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The explosive Web growth of the last decade has drastically changed the way billions of people around the globe create, share, and consume information. The massive amount of user-generated information encourages companies and service providers to collect users' information and use it to advance their own goals, as well as to provide personalized services to users. However, users' information contains private and sensitive details, and its collection can lead to breaches of users' privacy. Anonymizing users' information before publishing and using such data is vital to securing their privacy. Because user information takes many forms (e.g., network structure, interactions), different techniques are required to anonymize users' data. In this thesis, we first discuss different anonymization techniques for various types of user-generated data, i.e., network graphs, web browsing history, and user-item interactions. Our experimental results show the effectiveness of such techniques for data anonymization. We then briefly touch on securely and privately sharing information through blockchains.
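As one hypothetical illustration, and not the thesis's code (the abstract does not name specific techniques; k-anonymity is a common choice for tabular data, shown here under made-up column names): a check that every combination of quasi-identifier values covers at least k records.

```python
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_ids: list, k: int) -> bool:
    """True if every combination of quasi-identifier values occurs >= k times."""
    return int(df.groupby(quasi_ids).size().min()) >= k

records = pd.DataFrame({
    "age_range":  ["20-30", "20-30", "30-40", "30-40"],
    "zip_prefix": ["852**", "852**", "850**", "850**"],
    "purchase":   ["book", "phone", "shoes", "laptop"],
})

# Ages and ZIP codes have been generalized so each (age_range, zip_prefix)
# pair covers at least two records.
print(is_k_anonymous(records, ["age_range", "zip_prefix"], k=2))  # True
```

Graph and interaction data, also covered in the thesis, require different methods than this tabular example.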
Contributors: Nou, Alex Sheavin (Author) / Liu, Huan (Thesis director) / Beigi, Ghazaleh (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Third-party mixers are used to heighten the anonymity of Bitcoin users. The mixing techniques implemented by these tools are often untraceable on the blockchain, making them appealing to money launderers. This research aims to analyze mixers currently available on the deep web. In addition, an in-depth case study is done on an open-source Bitcoin mixer known as Penguin Mixer. A local version of Penguin Mixer was used to visualize mixer behavior under specific scenarios. This study could lead to the identification of vulnerabilities in mixing tools and the detection of these tools on the blockchain.
Contributors: Pakki, Jaswant (Author) / Doupe, Adam (Thesis director) / Shoshitaishvili, Yan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12
Description

Through my work with the Arizona State University Blockchain Research Lab (BRL) and JennyCo, one of the first HIPAA-compliant decentralized exchanges for healthcare information (HCI), I have had the opportunity to explore a unique cross-section of some of the most up-and-coming DLTs, including both DAGs and blockchains. During this research, four major technologies (including JennyCo's own systems) presented themselves as prime candidates for a comparative analysis of two models for implementing JennyCo's system architecture for the monetization of healthcare information exchanges (HIEs). These four technologies and their underlying mechanisms will be explored thoroughly throughout the course of this paper, and are listed with brief definitions as follows:

Polygon - "Polygon is a 'layer two' or 'sidechain' scaling solution that runs alongside the Ethereum blockchain. MATIC is the network's native cryptocurrency, which is used for fees, staking, and more" [8]. Polygon is the scalable layer involved in the L2SP architecture.

Ethereum - "Ethereum is a decentralized blockchain platform that establishes a peer-to-peer network that securely executes and verifies application code, called smart contracts" [9]. This foundational Layer-1 runs thousands of nodes and creates a unique decentralized ecosystem governed by Turing-complete automated programs. Ethereum is the foundational layer involved in the L2SP.

Constellation - A novel Layer-0, data-centric, peer-to-peer network that utilizes the "Hypergraph Transfer Protocol or HGTP, a DLT known as a [DAG] protocol with a novel reputation-based consensus model called Proof of Reputable Observation (PRO). Hypergraph is a feeless decentralized network that supports the transfer of $DAG cryptocurrency" [10].

JennyCo Protocol - Acts as a HIPAA-compliant decentralized HIE by allowing consumers, big businesses, and brands to access and exchange user health data on a secure, interoperable, and accessible platform via DLT. The JennyCo Protocol implements utility tokens to reward buyers and sellers for exchanging data. Its protocol nature comes from its DLT implementation, which governs the functioning of on-chain actions (e.g., smart contracts). Here, these actions consist of secure and transparent health data exchange and monetization that restore data ownership to those who generate the data [11].

Having worked closely with multiple companies behind the technologies listed, I have been exposed to the benefits and deficits of each of these technologies and their corresponding approaches. In this paper, I will use my experience with these technologies and their frameworks to explore two distributed ledger architecture protocols in order to determine the more effective model for implementing JennyCo's health data exchange. I will begin with an exploration of blockchain and directed acyclic graph (DAG) technologies to better understand their innate architectures and features. I will then move to an in-depth look at layered protocols and at healthcare data in the form of EHRs. Additionally, I will address the main challenges EHRs and HIEs face, to give a deeper understanding of the problems JennyCo is attempting to solve.
Finally, I will present my hypothesis: the Hypergraph Transfer Protocol (HGTP) model by Constellation offers significant advantages in scalability, interoperability, and external data security over the Layer-2 Scalability Protocol (L2SP) used by Polygon and Ethereum in implementing the JennyCo protocol. This will be argued through a thorough breakdown of each protocol, along with an analysis of relevant criteria including, but not limited to, security, interoperability, and scalability. In doing so, I hope to determine the best framework for running JennyCo's HIE protocol.

Contributors: Van Bussum, Alexander (Author) / Boscovic, Dragan (Thesis director) / Grando, Adela (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

This project aims to incorporate sentiment analysis into traditional stock analysis to enhance stock rating predictions by drawing on online opinion about individual stocks. Headlines from eight major news publications and conversations from Yahoo! Finance's "Conversations" feature were parsed through the Valence Aware Dictionary and sEntiment Reasoner (VADER) natural language processing package to determine numerical polarities representing positivity or negativity for a given stock ticker. These generated polarities were paired with stock metrics typically observed by stock analysts as the feature set for a Logistic Regression machine learning model. The model was trained on roughly 1,500 major stocks to determine a binary classification between a "Buy" or "Not Buy" rating for each stock, and the results of the model were inserted into the back-end of the Agora Web UI, which emulates search-engine behavior specifically for stocks listed on the NYSE and NASDAQ. The model reported an accuracy of 82.5%, and for most major stocks the model's prediction matched stock analysts' ratings. Given the volatility of the stock market and the propensity for hive-mind behavior in online forums, the performance of the Logistic Regression model would benefit from incorporating historical stock data and more sources of opinion to balance any subjectivity in the model.
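A minimal sketch of the described pipeline, assuming placeholder stock metrics and toy labels rather than the thesis's actual dataset: VADER compound polarity per headline, joined with conventional analyst metrics and fed to a Logistic Regression classifier.

```python
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.linear_model import LogisticRegression

analyzer = SentimentIntensityAnalyzer()
headlines = [
    "Acme Corp beats earnings expectations",
    "Acme Corp faces regulatory probe",
]
# The compound score in [-1, 1] summarizes each headline's polarity.
polarities = [analyzer.polarity_scores(h)["compound"] for h in headlines]

# Pair each polarity with made-up analyst metrics (say, P/E ratio and beta).
X = np.array([
    [polarities[0], 18.2, 1.1],
    [polarities[1], 35.7, 1.6],
])
y = np.array([1, 0])  # 1 = "Buy", 0 = "Not Buy" (toy labels)

model = LogisticRegression().fit(X, y)
print(model.predict(X))
```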

Contributors: Ramaraju, Venkat (Author) / Rao, Jayanth (Co-author) / Bansal, Ajay (Thesis director) / Smith, James (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2021-12
Description

This project aims to incorporate sentiment analysis into traditional stock analysis to enhance stock rating predictions by drawing on online opinion about individual stocks. Headlines from eight major news publications and conversations from Yahoo! Finance's "Conversations" feature were parsed through the Valence Aware Dictionary and sEntiment Reasoner (VADER) natural language processing package to determine numerical polarities representing positivity or negativity for a given stock ticker. These generated polarities were paired with stock metrics typically observed by stock analysts as the feature set for a Logistic Regression machine learning model. The model was trained on roughly 1,500 major stocks to determine a binary classification between a "Buy" or "Not Buy" rating for each stock, and the results of the model were inserted into the back-end of the Agora Web UI, which emulates search-engine behavior specifically for stocks listed on the NYSE and NASDAQ. The model reported an accuracy of 82.5%, and for most major stocks the model's prediction matched stock analysts' ratings. Given the volatility of the stock market and the propensity for hive-mind behavior in online forums, the performance of the Logistic Regression model would benefit from incorporating historical stock data and more sources of opinion to balance any subjectivity in the model.

Contributors: Rao, Jayanth (Author) / Ramaraju, Venkat (Co-author) / Bansal, Ajay (Thesis director) / Smith, James (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2021-12
Description
In this paper I defend the argument that public reaction to news headlines correlates with the short-term price direction of Bitcoin. I collected a month's worth of Bitcoin data consisting of news headlines, tweets, and the price of the cryptocurrency. I fed this data into a Long Short-Term Memory (LSTM) neural network and built a model that predicted Bitcoin price for a new timeframe. The model correctly predicted 75% of test-set price trends on 3.25-hour time intervals, higher than the 53.57% accuracy of a comparison model trained without sentiment data. I concluded that public reaction to Bitcoin news headlines has an effect on the short-term price direction of the cryptocurrency. Investors can use my model to help them in their decision-making process when making short-term Bitcoin investment decisions.
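A minimal sketch of the model family described, assuming an illustrative window length, layer size, and synthetic data rather than the author's exact architecture: an LSTM that maps a window of price-and-sentiment features to the next interval's price direction.

```python
import numpy as np
from tensorflow import keras

window, n_features = 8, 3  # 8 past intervals; price, headline, tweet scores
X = np.random.rand(200, window, n_features)  # placeholder training windows
y = np.random.randint(0, 2, size=200)        # 1 = price up, 0 = price down

model = keras.Sequential([
    keras.Input(shape=(window, n_features)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),  # P(price moves up)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```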
Contributors: Steinberg, Sam (Author) / Boscovic, Dragan (Thesis director) / Davulcu, Hasan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Behavioral economics suggests that emotions can affect an individual's decision making. Recent research applying this idea to large populations hints that there may be some correlation, or perhaps even a causal relationship, between public sentiment (at least as measured on Twitter) and the movement of the stock market. One major result of this line of research is that public sentiment is becoming increasingly accepted as a valid feature for stock-market-based machine learning models. While raw values typically serve as invaluable points of data, many practitioners also "engineer" new features for their models when training, deriving rates of change or range values to improve model accuracy.
Since attempting to use feature-extracted values costs little (if they do not help, one can always fall back on the original features), a question arises: how could feature extraction on values such as sentiment affect a model's ability to predict the movement of the stock market? This paper attempts to shine some light on the answer by deriving TextBlob sentiment values from Twitter data and using Granger causality tests along with logistic and linear regression to test whether there exists a correlation or causal relationship between the stock market and features extracted from public sentiment.
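A minimal sketch of the described tests, using synthetic placeholder series rather than the paper's data: TextBlob polarity aggregated per day, checked against a market series with statsmodels' Granger causality test.

```python
import numpy as np
import pandas as pd
from textblob import TextBlob
from statsmodels.tsa.stattools import grangercausalitytests

tweets_by_day = [
    ["stocks look great today", "feeling bullish"],
    ["markets are crashing hard", "terrible day for stocks"],
] * 15  # 30 synthetic days

sentiment = [
    np.mean([TextBlob(t).sentiment.polarity for t in day])
    for day in tweets_by_day
]
market_move = np.random.randn(len(sentiment))  # placeholder daily returns

# grangercausalitytests expects two columns: [effect, candidate cause].
data = pd.DataFrame({"market": market_move, "sentiment": sentiment})
grangercausalitytests(data[["market", "sentiment"]], maxlag=3)
```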
Contributors: Yu, James (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Machine learning is the process of training a computer with algorithms to learn from data and make informed predictions. In a world where large amounts of data are constantly collected, machine learning is an important tool for analyzing this data to find patterns and extract useful information. Machine learning applications span numerous fields; for this thesis, however, I chose to focus on machine learning from a business perspective, specifically e-commerce.

The e-commerce market utilizes information to target customers and drive business. More and more online services have become available, allowing consumers to make purchases and interact with an online system. For example, Amazon is one of the largest Internet-based retail companies. As people shop through its website, Amazon gathers huge amounts of data on its customers, from personal information to shopping and viewing history. After purchasing a product, a customer may leave a review and give a rating based on their experience. Performing analytics on all of this data can provide insights for making more informed business and marketing decisions that lead to business growth and also improve the customer experience.
For this thesis, I have trained binary classification models on a publicly available product review dataset from Amazon to predict whether a review has a positive or negative sentiment. The sentiment analysis process includes analyzing and encoding the text, then extracting the sentiment from the resulting values. In the business world, sentiment analysis provides value by revealing insights into customer opinions and behaviors. In this thesis, I explain how to perform a sentiment analysis and compare several machine learning models. The algorithms whose results I compared are KNN, Logistic Regression, Decision Trees, Random Forest, Naïve Bayes, Linear Support Vector Machines, and Support Vector Machines with an RBF kernel.
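A hedged sketch of this comparison, using toy reviews and default hyperparameters rather than the Amazon dataset: TF-IDF features scored across several of the listed classifiers via cross-validation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC, SVC
from sklearn.model_selection import cross_val_score

reviews = [
    "great product, works perfectly",
    "terrible, broke after a day",
    "love it, highly recommend",
    "waste of money, very disappointed",
] * 10
labels = [1, 0, 1, 0] * 10  # 1 = positive, 0 = negative sentiment

X = TfidfVectorizer().fit_transform(reviews)
models = {
    "Logistic Regression": LogisticRegression(),
    "Naive Bayes": MultinomialNB(),
    "Linear SVM": LinearSVC(),
    "RBF SVM": SVC(kernel="rbf"),
}
for name, clf in models.items():
    score = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```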
Contributors: Madaan, Shreya (Author) / Meuth, Ryan (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05