Matching Items (10)
Description
Blockchain scalability is one of the issues that concerns its current adopters. Today's popular blockchains were initially designed with imperfections that introduce fundamental bottlenecks, limiting their ability to achieve higher throughput and lower latency.

One of the major bottlenecks for existing blockchain technologies is block propagation speed. Faster block propagation enables a miner to reach a majority of the network within a time constraint, leading to a lower orphan rate and better profitability. To attain a throughput competitive with current state-of-the-art transaction processing while keeping block intervals the same as today, a 24.3-gigabyte block would be required every 10 minutes at an average transaction size of 500 bytes, which translates to 48,600,000 transactions every 10 minutes, or about 81,000 transactions per second.
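
Restating the arithmetic behind those figures:

```latex
\frac{24.3 \times 10^{9}\,\text{bytes}}{500\,\text{bytes/txn}} = 4.86 \times 10^{7}\ \text{txn per block},
\qquad
\frac{4.86 \times 10^{7}\ \text{txn}}{600\,\text{s}} = 81{,}000\ \text{txn/s}.
```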

In order to synchronize such large blocks quickly across the network while maintaining consensus by keeping the orphan rate below 50%, the thesis proposes to aggregate partial block data from multiple nodes using digital fountain codes. The advantage of using a fountain code is that all connected peers can send parts of the data in encoded form. Once the receiving peer has enough data, it decodes the information to reconstruct the block. Because peers send only partial information, the data can be relayed over UDP instead of TCP, improving on the propagation speed of current blockchains. The fountain codes applied in this research are Raptor codes, which allow an effectively unlimited number of encoding symbols to be generated. Applied to blockchains, this approach increases the success rate of block delivery when decoding failures occur.
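
The thesis uses Raptor codes, which layer a precode on top of an LT code; purely as an illustration of the fountain-code idea, here is a minimal LT-style encoder and peeling decoder in Python, with a toy uniform degree distribution standing in for the robust soliton distribution a real implementation would use:

```python
import random

def encode_symbol(blocks, rng):
    """XOR a random subset of source blocks into one encoded symbol."""
    degree = rng.randint(1, len(blocks))  # toy uniform degree distribution
    indices = set(rng.sample(range(len(blocks)), degree))
    symbol = bytearray(len(blocks[0]))
    for i in indices:
        for k, byte in enumerate(blocks[i]):
            symbol[k] ^= byte
    return indices, bytes(symbol)

def decode(symbols, n_blocks):
    """Peeling decoder: peel recovered blocks out of each symbol and
    resolve any symbol that drops to degree 1."""
    pending = [(set(idx), bytearray(data)) for idx, data in symbols]
    recovered = {}
    progress = True
    while progress and len(recovered) < n_blocks:
        progress = False
        for idx, data in pending:
            for i in [i for i in idx if i in recovered]:
                idx.discard(i)
                for k, byte in enumerate(recovered[i]):
                    data[k] ^= byte
            if len(idx) == 1:
                recovered[idx.pop()] = bytes(data)
                progress = True
    return recovered if len(recovered) == n_blocks else None

rng = random.Random(7)
chunks = [bytes([b]) * 32 for b in range(8)]              # a block split into 8 chunks
stream = [encode_symbol(chunks, rng) for _ in range(20)]  # rateless: make as many as needed
result = decode(stream, len(chunks))
print("reconstructed" if result == dict(enumerate(chunks)) else "need more symbols")
```
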
Contributors: Chawla, Nakul (Author) / Boscovic, Dragan (Thesis advisor) / Candan, Kasim S (Thesis advisor) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Cryptographic voting systems such as Helios rely heavily on a trusted party to maintain privacy or verifiability. This tradeoff can be eliminated by using distributed substitutes for the components that require a trusted party. By replacing the encryption, shuffle, and decryption steps described by Helios with Pedersen threshold encryption and the Neff shuffle, it is possible to obtain a distributed voting system that achieves both privacy and verifiability without trusting any of the contributors. This thesis examines existing approaches to this problem and their shortcomings, and provides empirical metrics for comparing different working solutions in detail.
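
For context rather than as the thesis's own exposition: in the threshold ElGamal setting that a Pedersen-style protocol sets up, the secret key $x$ (with public key $h = g^{x}$) is Shamir-shared among trustees so that no party ever holds it whole, and any qualified set $S$ of trustees can jointly decrypt without reconstructing $x$:

```latex
(c_1, c_2) = \bigl(g^{r},\; m \cdot h^{r}\bigr), \qquad
d_i = c_1^{x_i}, \qquad
m = \frac{c_2}{\prod_{i \in S} d_i^{\lambda_i}}, \quad
\lambda_i = \prod_{\substack{j \in S \\ j \neq i}} \frac{j}{j - i}.
```
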
Contributors: Bouck, Spencer Joseph (Author) / Bazzi, Rida (Thesis advisor) / Boscovic, Dragan (Committee member) / Shoshitaishvili, Yan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Bitcoin (BTC) shares many characteristics with traditional stocks, but it is much more volatile because the cryptocurrency market is unregulated. The high volatility makes BTC a high-risk, high-reward investment, and predictive analysis can be very useful for obtaining good returns while minimizing risk. Taking Cocco et al. [1] as the primary reference, this thesis tries to reproduce their findings by building two BTC price-forecasting models, a Long Short-Term Memory (LSTM) network and a Bayesian Neural Network (BNN), and finds that the Mean Absolute Percentage Error (MAPE) is lower for the initial BNN model than for the initial LSTM model. In addition to forecasting the value of BTC, a metric called trend% is developed to gauge the models' ability to capture how the price moves from one timestep to the next, and it is used to compare trend-prediction performance. Both initial models turn out to make random predictions for the trend. Improvements such as removing the stochastic component from the data and forecasting returns rather than price values show that the two models perform comparably in terms of both MAPE and trend%. The thesis concludes by discussing future work that could improve these models; one possibility is to use on-chain data from the BTC blockchain coupled with real-world knowledge of BTC exchanges as input features.
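
A sketch of the two evaluation metrics in Python; MAPE is standard, while the trend% formula below is my reading of the description above rather than the thesis's exact definition:

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def trend_pct(actual, predicted):
    """Share of timesteps where the forecast moves in the same direction
    (up/down from the previous actual value) as the realized price."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    actual_dir = np.sign(actual[1:] - actual[:-1])
    forecast_dir = np.sign(predicted[1:] - actual[:-1])
    return 100.0 * np.mean(actual_dir == forecast_dir)

btc = [19000.0, 19400.0, 19100.0, 19650.0]    # toy price series
pred = [19050.0, 19300.0, 19200.0, 19500.0]   # toy one-step forecasts
print(f"MAPE {mape(btc, pred):.2f}%  trend% {trend_pct(btc, pred):.0f}%")
```
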
Contributors: Mittal, Shivansh (Author) / Boscovic, Dragan (Thesis advisor) / Davulcu, Hasan (Committee member) / Candan, Kasim (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Blockchain technology enables peer-to-peer transactions by eliminating the need for a centralized entity to govern consensus. Rather than residing in a centralized database, the data is distributed across multiple computers, which enables crash fault tolerance and, through a distributed consensus algorithm, makes the system difficult to tamper with.

In this research, the potential of blockchain technology to manage energy transactions is examined. The energy production landscape is being reshaped by distributed energy resources (DERs): photovoltaic panels, electric vehicles, smart appliances, and battery storage. Distributed energy sources such as microgrids, household and community solar installations, and plug-in hybrid vehicles enable energy consumers to act as providers of energy themselves, making them 'prosumers' of energy.

Blockchain technology facilitates transactions between the prosumers involved by using 'smart contracts' to tokenize energy into assets. Better utilization of grid assets lowers costs and presents the opportunity to buy energy at a reasonable price while staying connected to the utility company. The technology acts as a backbone for two models of a transactional energy marketplace: a 'Real-Time Energy Marketplace' and 'Energy Futures'. In the first model, prosumers bid a price for energy within a stipulated period of time, while the utility company acts as the operating entity. In the second model, the marketplace is more liberal: the utility company is not involved as an operator. It provides the infrastructure and manages accounts for all users, but does not endorse or govern transactions related to energy bidding. These smart contracts are not time-bounded and can be suspended by the utility during periods of network instability.
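
The abstract does not specify how bids are matched; as a purely hypothetical sketch of one bidding round in the Real-Time Energy Marketplace model, here is a toy double auction in Python (the names, quantities, and midpoint-pricing rule are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    prosumer: str
    kwh: float            # energy offered (sell) or requested (buy)
    price_per_kwh: float  # limit price in $/kWh

def clear_round(sell_bids, buy_bids):
    """Match the cheapest sellers with the highest-paying buyers until
    prices cross; trades settle at the midpoint of the two limit prices."""
    sells = sorted(sell_bids, key=lambda b: b.price_per_kwh)
    buys = sorted(buy_bids, key=lambda b: -b.price_per_kwh)
    trades = []
    while sells and buys and sells[0].price_per_kwh <= buys[0].price_per_kwh:
        s, b = sells[0], buys[0]
        qty = min(s.kwh, b.kwh)
        trades.append((s.prosumer, b.prosumer, qty,
                       (s.price_per_kwh + b.price_per_kwh) / 2))
        s.kwh -= qty
        b.kwh -= qty
        if s.kwh == 0:
            sells.pop(0)
        if b.kwh == 0:
            buys.pop(0)
    return trades

sellers = [Bid("solar_A", 5.0, 0.08), Bid("ev_B", 3.0, 0.12)]
buyers = [Bid("home_C", 4.0, 0.15), Bid("home_D", 2.0, 0.09)]
for seller, buyer, kwh, price in clear_round(sellers, buyers):
    print(f"{seller} -> {buyer}: {kwh} kWh @ ${price:.3f}/kWh")
```
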
Contributors: Sadaye, Raj Anil (Author) / Candan, Kasim S (Thesis advisor) / Boscovic, Dragan (Committee member) / Zhao, Ming (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Microlending aims to provide low-barrier loans to small and medium-sized, family-run businesses that have historically been financially excluded. These borrowers are often in developing countries where traditional financing is not accessible. Lenders can be individual investors or institutions making risky investments, or parties willing to help people who cannot access traditional banks or lack the credit history to get loans from traditional sources. Microlending also has a charitable side, in which lenders are not primarily concerned with whether or how they are repaid.

This thesis aims to build a platform that supports both commercial microlending and charitable donation, the latter serving the original cause of microlending. The platform must ensure privacy and transparency in order to attract users. Because microlending involves monetary transactions, possible security threats to the system are also discussed.

Blockchain is one of the technologies that has revolutionized financial transactions, and microlending is built on monetary transactions; blockchain is therefore a viable foundation for a microlending platform. A permissioned blockchain restricts admission to the platform and provides an identity-management feature, which is required to ensure the security and privacy of the various types of participants on the microlending platform.
Contributors: Siddharth, Sourabh (Author) / Boscovic, Dragan (Thesis advisor) / Bansal, Srividya (Thesis advisor) / Sanchez, Javier Gonzalez (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Blockchain technology is defined as a decentralized, distributed ledger that records the origin of a digital asset and all of its updates without the need for any governing authority. In supply-chain management, blockchain can be used very effectively, leading to a more open and reliable supply chain, and in recent years different companies have begun to build blockchain-based supply chain solutions. Blockchain has been shown to improve transparency across the supply chain.

This research focuses on the supply-chain management of medical devices and supplies using blockchain technology. These devices are manufactured by authorized device manufacturers and supplied to different healthcare institutions on demand. The process is vulnerable because there is no way to track an individual product from the moment it is shipped until it is used; traceability of medical devices in this scenario is inefficient and untrustworthy. To address this issue, this work presents a blockchain-based solution for the medical device supply chain. The solution provides a distributed environment that can track medical devices from production to use. The finished product is recorded on the blockchain through its digital thread: required details are added over time, recording the entire virtual life-cycle of the device. This digital thread adds traceability to the existing supply chain, and keeping track of devices also makes it possible to return expired devices to the manufacturer for recycling.

The solution is composed of two phases. The first is the design of the blockchain network architecture, including the required smart contracts, implemented on the secure network of Hyperledger Fabric (HLF). The second is the deployment of the generated network on Kubernetes to make the system scalable and more available. To demonstrate and evaluate performance metrics, a prototype of the designed platform is implemented and deployed on Kubernetes. The research concludes with the benefits and shortcomings of the solution and the future work needed to make the platform perform better in all aspects.
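
The thesis implements this logic as Hyperledger Fabric chaincode; purely as an illustration of the digital-thread idea, here is a hash-linked, append-only event log per device in Python (all names and event types are hypothetical):

```python
import hashlib
import json
import time

class DigitalThread:
    """Toy stand-in for chaincode that records a device's life-cycle events."""

    def __init__(self, device_id, manufacturer):
        self.device_id = device_id
        self.events = []
        self.append("MANUFACTURED", {"by": manufacturer})

    def append(self, event_type, details):
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        record = {"device_id": self.device_id, "type": event_type,
                  "details": details, "ts": time.time(), "prev": prev_hash}
        # Hash-link each event to the previous one, as a ledger would.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.events.append(record)

thread = DigitalThread("device-001", "Acme Medical")
thread.append("SHIPPED", {"to": "General Hospital"})
thread.append("USED", {"procedure": "infusion"})
thread.append("RETURNED", {"reason": "expired; recycle"})
print(len(thread.events), "life-cycle events recorded")
```
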
Contributors: Mhalgi, Kaushal Sanjay (Author) / Boscovic, Dragan (Thesis advisor) / Candan, Kasim Selcuk (Thesis advisor) / Grando, Adela (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
This dissertation focused on the implementation of urine diversion systems in commercial and institutional (CI) buildings in the United States, with a focus on control of the urea hydrolysis reaction. Urine diversion is the process by which urine is separately collected at the source in order to realize system benefits, including water conservation, nutrient recovery, and pharmaceutical removal. Urine diversion systems depend greatly on the functionality of nonwater urinals and urine-diverting toilets, which are needed to collect undiluted urine. However, the urea hydrolysis reaction creates conditions that lead to precipitation in the fixtures, due to the increase in pH from 6 to 9 as ammonia and bicarbonate are produced.

Chapters 2 and 3 describe the creation and use of a cyber-physical system (CPS) to monitor and control urea hydrolysis in a urinal testbed. Two control logics were used to control urea hydrolysis under realistic restroom conditions: in the experiments, acid was added to inhibit urea hydrolysis during periods of high and low building occupancy. The results showed that acid should be added based on restroom use in order to inhibit urea hydrolysis efficiently. Chapter 4 advanced the results of Chapter 3 by testing the acid-addition control logics in a real restroom with the urinal-on-wheels. The results showed that adding acid during periods of high building occupancy required the least acid while still inhibiting urea hydrolysis. This study also analyzed the bacterial communities of the collected urine and found that acid addition changed their structure. Chapter 5 demonstrated the capabilities of a CPS implemented in CI buildings: the study used data-mining methods to predict chlorine residuals in the premise plumbing of a CI green building, and the results showed that advanced modeling methods modeled the system better than traditional methods. Together, these results show that CPS technology can be used to illuminate systems and provide the information needed to understand conditions within CI buildings.
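
A hypothetical rendering of an occupancy-based acid-addition control logic; the threshold, gain, and target pH below are invented for illustration, not taken from the dissertation:

```python
def acid_dose_ml(ph, occupancy_fraction,
                 occupancy_threshold=0.6, target_ph=4.0, gain_ml_per_ph=5.0):
    """Dose acid only during high-occupancy periods, in proportion to how
    far pH has drifted above target (urea hydrolysis pushes pH toward 9)."""
    if occupancy_fraction < occupancy_threshold or ph <= target_ph:
        return 0.0
    return gain_ml_per_ph * (ph - target_ph)

print(acid_dose_ml(ph=7.2, occupancy_fraction=0.8))  # 16.0 mL during high use
print(acid_dose_ml(ph=7.2, occupancy_fraction=0.2))  # 0.0 mL, occupancy too low
```
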
Contributors: Saetta, Daniella (Author) / Boyer, Treavor H (Thesis advisor) / Hamilton, Kerry (Committee member) / Ross, Heather M. (Committee member) / Boscovic, Dragan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The use of spatial data has become fundamental in today's world. Almost every application, from fitness trackers to food delivery services, records users' location information and requires clean geospatial data to enhance its features. As spatial data flows in from heterogeneous sources, various problems arise. Entity matching is a key step in producing clean, usable data; it is an amalgamation of various sub-processes, including blocking and matching, and at the end of an entity matching pipeline we get deduplicated records of the same real-world entity. Identifying multiple mentions of the same real-world location is known as spatial entity matching. While entity matching has received significant interest in the relational setting, the same cannot be said about spatial entity matching.

In this dissertation, I build an end-to-end Geospatial Entity Matching framework, GEM, exploring spatial entity matching from a novel perspective. Current state-of-the-art systems perform spatial entity matching on only one type of geometry. Instead of confining matching to spatial entities of point geometry, I extend the boundaries of spatial entity matching to the more generic polygon geometry as well, proposing a methodology that supports three matching scenarios across geometry types: point X point, point X polygon, and polygon X polygon.

Blocking, feature-vector creation, and classification are the core steps of the system. GEM comprises an efficient and lightweight blocking technique, GeoPrune, that uses geohash encoding to prune away obviously non-matching spatial entities. Geohashing converts a point location's coordinates to an alphanumeric code string, and it proves very effective and swift as a blocking mechanism. I leverage the Apache Sedona engine to create the feature vectors; Apache Sedona is a spatial database management system that can process spatial SQL queries over multiple geometry types without compromising their original coordinate vector representation. In this step, I re-purpose the spatial proximity operators (SQL queries) in Apache Sedona to create spatial feature dimensions that capture the proximity between a geospatial entity pair. The last step, matching or classification, is a pluggable component that consumes the feature vector for a spatial entity pair and determines whether the geolocations match. The component provides three machine learning models that consume the same feature vector and label the test data based on training.

I conduct experiments with the three classifiers on multiple large-scale geospatial datasets consisting of both spatial and relational attributes. The data comes from heterogeneous sources, and its schemas are pre-aligned manually. GEM achieves an F-measure of 1.0 for a point X point dataset with 176k total pairs, which is 42% higher than a state-of-the-art spatial EM baseline. It achieves F-measures of 0.966 and 0.993 for a point X polygon dataset with 302M total pairs and a polygon X polygon dataset with 16M total pairs, respectively.
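
The abstract does not detail GeoPrune's internals; a minimal sketch of geohash-based blocking for the point X point case, assuming the pygeohash package for encoding (a production version would also probe neighboring cells to handle boundary effects):

```python
from collections import defaultdict
from itertools import combinations

import pygeohash  # assumed encoder; any geohash implementation works

def geohash_blocks(points, precision=6):
    """Bucket point entities by geohash cell; only entities sharing a
    cell survive as candidate pairs, everything else is pruned."""
    buckets = defaultdict(list)
    for entity_id, (lat, lon) in points.items():
        cell = pygeohash.encode(lat, lon, precision=precision)
        buckets[cell].append(entity_id)
    return buckets

def candidate_pairs(buckets):
    for ids in buckets.values():
        yield from combinations(ids, 2)

points = {
    "cafe_osm":  (33.4255, -111.9400),
    "cafe_yelp": (33.4256, -111.9401),   # same cafe, second source
    "park":      (33.4484, -112.0740),
}
print(list(candidate_pairs(geohash_blocks(points))))
# Typically only ('cafe_osm', 'cafe_yelp') survives at this precision.
```
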
Contributors: Shah, Setu Nilesh (Author) / Sarwat, Mohamed (Thesis advisor) / Pedrielli, Giulia (Committee member) / Boscovic, Dragan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
This study examines how the consensus-based transactions, smart contracts, and interoperability provided by blockchain may benefit the blood plasma industry. Plasma fractionation is the process of separating blood into multiple components to garner the benefits of increased lifespan, specialized allocation, and decreased waste, thereby creating a more complex and flexible supply chain. Traditional applications of blockchain are developed on the basis of decentralization, an infeasible policy for this sector due to stringent government regulations such as HIPAA. However, the trusted nature of the relationships in the plasma industry's taxonomy makes private, centralized blockchains the viable alternative. Implementations of blockchain are widely seen across pharmaceutical supply chains to combat the falsification of potentially harmful drugs. Such a system is more difficult to manage for blood, due to its short shelf life, the tracking and tracing of recycled components, and the need for real-time metrics. Key attributes of private blockchains, such as digital identity, smart contracts, and authorized ledgers, could have a significant positive impact on the allocation and management functions of blood banks. Herein, we identify the economics and risks of the plasma ecosystem to extrapolate specific applications of blockchain technology. To understand the tangible effects of blockchain, we developed a proof-of-concept application that emulates the business logic of a modern plasma supply chain ecosystem while adopting a blockchain data structure. The application's testing simulates the supply chain via agent-based modeling to analyze the scalability, benefits, and limitations of blockchain for the plasma fractionation industry.
Contributors: Vallabhaneni, Saipavan K (Author) / Boscovic, Dragan (Thesis director) / Kellso, James (Committee member) / Department of Information Systems (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
In this work, I propose a novel unsupervised framework, SATLAB, to label satellite images given a classification task at hand. Existing models for satellite image classification, such as DeepSAT and DeepSAT-V2, rely on deep learning models that are label-hungry and require a significant amount of training data. Since manual curation of labels is expensive, I ensure that SATLAB requires zero training labels. SATLAB can work in conjunction with several generative and unsupervised machine learning models by allowing them to be seamlessly plugged into its architecture. I devise three operating modes for SATLAB - manual, semi-automatic, and automatic - which require varying levels of human intervention in creating the domain-specific labeling functions for each image; these functions can be utilized by candidate generative models such as Snorkel, as well as by other unsupervised learners in SATLAB. Unlike existing supervised learning baselines, which extract only textural features from satellite images, SATLAB supports the extraction of both textural and geospatial features, and I empirically show that geospatial features enhance the classification F1-score by 33%. I build SATLAB on top of Apache Sedona in order to leverage its rich set of spatial query processing operators for extracting geospatial features from satellite raster images.

I evaluate SATLAB on a target binary classification task that distinguishes slum from non-slum areas, using a repository of 100K satellite images captured by the Sentinel satellite program. My 5-fold cross-validation (CV) experiments show that SATLAB achieves competitive F1-scores (0.6) using 0% labeled data, while the best supervised learning baseline achieves an F1-score of 0.74 using 80% labeled data. I also show that Snorkel outperforms the alternative generative and unsupervised candidate models that can be plugged into SATLAB by 33% to 71% w.r.t. F1-score and by 3 to 73 times w.r.t. latency. Finally, downstream classifiers trained on the labels generated by SATLAB are comparable in quality (0.63 F1) to counterpart classifiers (0.74 F1) trained on manually curated labels.
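
As a hedged sketch of how per-image labeling functions feed Snorkel's label model in a pipeline like SATLAB's; the feature names and thresholds below are invented for illustration, not the thesis's actual labeling functions:

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SLUM, NON_SLUM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_dense_buildings(x):
    return SLUM if x.building_density > 0.7 else ABSTAIN

@labeling_function()
def lf_near_industry(x):
    return SLUM if x.dist_to_industrial_m < 500 else ABSTAIN

@labeling_function()
def lf_vegetated(x):
    return NON_SLUM if x.ndvi > 0.5 else ABSTAIN

lfs = [lf_dense_buildings, lf_near_industry, lf_vegetated]
df = pd.DataFrame([  # one row of extracted features per image tile
    {"building_density": 0.82, "dist_to_industrial_m": 300.0, "ndvi": 0.10},
    {"building_density": 0.20, "dist_to_industrial_m": 2500.0, "ndvi": 0.62},
    {"building_density": 0.75, "dist_to_industrial_m": 450.0, "ndvi": 0.05},
    {"building_density": 0.15, "dist_to_industrial_m": 4000.0, "ndvi": 0.55},
])
L = PandasLFApplier(lfs).apply(df)       # label matrix, one column per LF
label_model = LabelModel(cardinality=2)  # denoises and combines the LFs
label_model.fit(L)
print(label_model.predict(L))            # labels produced with zero ground truth
```
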
Contributors: Aggarwal, Shantanu (Author) / Sarwat, Mohamed (Thesis advisor) / Zou, Jia (Committee member) / Boscovic, Dragan (Committee member) / Arizona State University (Publisher)
Created: 2022