Matching Items (6)
Description
Implementing a distributed algorithm is more complicated than implementing a non-distributed algorithm. This is because distributed systems involve the coordination of different processes, each of which has only a partial view of the global system state. The only way to share information in a distributed system is by message passing. Tasks that are straightforward in a non-distributed system, like deciding on the value of a global system state, can be quite complicated to achieve in a distributed system [1]. On top of the difficulties caused by the distributed nature of the computations, distributed systems typically need to operate normally even if some of the nodes in the system are faulty, which further adds to the uncertainty that processes have about the global state. Many factors make the implementation of distributed algorithms difficult. Design patterns [2] are useful in simplifying the development of general algorithms. A design pattern describes a high-level solution to a common, abstract problem that many systems may face. Common structural, creational, and behavioral problems are identified and elegantly solved by design patterns. By identifying the features that an algorithm uses, and framing each feature as one of the common problems that a specific design pattern solves, designing a robust implementation of an algorithm becomes more manageable. In this way, design patterns can aid the implementation of algorithms. Unfortunately, design patterns are typically not discussed when developing distributed algorithms. Because correctly developing a distributed algorithm is difficult, many papers (e.g., [1], [3], [4]) focus on verifying the correctness of the developed algorithm. Papers that are more practical ([5], [6]) establish both the correctness of their algorithm and that it is efficient enough to be practical. However, papers on distributed algorithms usually make little mention of design patterns. The goal of this work was to gain experience implementing distributed systems, including learning to apply design patterns and related technical topics. This was achieved by implementing a currently unpublished algorithm that is tentatively called Bakery Consensus. Bakery Consensus is a replicated state-machine protocol that can tolerate servers with Byzantine faults, but assumes non-faulty clients. The algorithm also establishes non-skipping timestamps for each operation completed by the replicated state machine. The design of the structure, communication, and creation of the different system parts depended heavily upon the book Design Patterns [2]. After implementing the system, the success of its various parts was judged by their ability to satisfy the SOLID principles [7] as well as their ability to establish low coupling and high cohesion [8]. The rest of this paper is organized as follows. We begin by providing background information about distributed algorithms, including replicated state-machine protocols and the Bakery Consensus algorithm. Section 3 gives background on several design patterns and software engineering principles that were used in the development process. Section 4 discusses the well-designed parts of the system that used design patterns, and how these design patterns were chosen. Section 5 discusses well-designed system parts that relied upon other technical topics. Section 6 discusses system parts that need redesign.
The conclusion summarizes what was accomplished by the implementation process and the lessons learned about design patterns for distributed algorithms.
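As a rough illustration of how GoF design patterns can structure a replicated state-machine implementation, the sketch below uses a Strategy-style handler interface so that each message kind gets its own handler and the replica's dispatch loop stays closed to modification. All class, message, and field names are hypothetical; this is a generic sketch, not the Bakery Consensus implementation.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    kind: str
    payload: dict


class MessageHandler(ABC):
    """Strategy interface: one handler per message kind keeps each behavior isolated."""

    @abstractmethod
    def handle(self, msg: Message, state: dict) -> None:
        ...


class PrepareHandler(MessageHandler):
    def handle(self, msg: Message, state: dict) -> None:
        # Record the vote; a real protocol would also authenticate the sender.
        state.setdefault("prepares", set()).add(msg.sender)


class CommitHandler(MessageHandler):
    def handle(self, msg: Message, state: dict) -> None:
        state.setdefault("commits", set()).add(msg.sender)


class Replica:
    """The replica depends only on the handler abstraction (dependency inversion),
    so new message kinds can be added without modifying the dispatch loop."""

    def __init__(self, handlers):
        self.handlers = handlers
        self.state = {}

    def dispatch(self, msg: Message) -> None:
        handler = self.handlers.get(msg.kind)
        if handler is not None:
            handler.handle(msg, self.state)


replica = Replica({"prepare": PrepareHandler(), "commit": CommitHandler()})
replica.dispatch(Message(sender="s1", kind="prepare", payload={}))
```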
Contributors: Stoutenburg, Tristan Kaleb (Author) / Bazzi, Rida (Thesis director) / Richa, Andrea (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Large-scale $\ell_1$-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. In many applications, it remains challenging to apply the sparse learning model to large-scale problems that have massive data samples with high-dimensional features. One popular and promising strategy is to scale up the optimization problem in parallel. Parallel solvers run on multiple cores of a shared-memory system or in a distributed environment to speed up the computation, but their practical usage is limited by the huge dimensionality of the feature space and by synchronization problems.
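For concreteness, the generic form of such a problem can be written as follows (the notation is a standard rendering and is assumed, not quoted from the dissertation):

\[
\min_{\mathbf{w} \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big(\mathbf{w}^{\top}\mathbf{x}_i,\, y_i\big) + \lambda \lVert \mathbf{w} \rVert_1,
\]

where $\ell$ is a convex loss (squared loss for regression, logistic loss for classification) and $\lambda > 0$ controls the sparsity of the solution.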

In this dissertation, I carry out research along this direction, with particular focus on scaling up the optimization of sparse learning for supervised and unsupervised learning problems. For supervised learning, I first propose an asynchronous parallel solver to optimize the large-scale sparse learning model in a multithreading environment. Moreover, I propose a distributed framework to conduct the learning process when the dataset is stored in a distributed manner across different machines. The proposed model is then further extended to the study of genetic risk factors for Alzheimer's Disease (AD) across different research institutions, integrating a group feature selection framework to rank the top risk SNPs for AD. For the unsupervised learning problem, I propose a highly efficient solver, termed Stochastic Coordinate Coding (SCC), which scales up the optimization of dictionary learning and sparse coding problems. A common issue in medical imaging research is that the longitudinal features of patients across different time points are best studied together. To further improve the dictionary learning model, I propose a multi-task dictionary learning method that learns the different tasks simultaneously and utilizes shared and individual dictionaries to encode both consistent and changing imaging features.
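As a rough sketch of the kind of per-coordinate update that such sparse solvers parallelize, the following Python code implements textbook cyclic soft-thresholding coordinate descent for the lasso; an asynchronous parallel solver would run these updates concurrently on slightly stale residuals. The code is a generic illustration under those assumptions, not the dissertation's solver or its SCC algorithm.

```python
import numpy as np


def soft_threshold(z: float, t: float) -> float:
    """Proximal operator of t * |.|, the core of every l1 coordinate update."""
    return np.sign(z) * max(abs(z) - t, 0.0)


def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_w (1/2n)||Xw - y||^2 + lam * ||w||_1.
    An asynchronous parallel variant would update coordinates concurrently,
    tolerating slightly stale residuals between threads."""
    n, d = X.shape
    w = np.zeros(d)
    residual = y - X @ w
    col_norm = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(d):
            if col_norm[j] == 0.0:
                continue
            # Correlation of feature j with the partial residual that excludes w_j.
            rho = X[:, j] @ (residual + X[:, j] * w[j]) / n
            w_new = soft_threshold(rho, lam) / col_norm[j]
            residual += X[:, j] * (w[j] - w_new)
            w[j] = w_new
    return w


rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 20)), rng.standard_normal(50)
print(np.count_nonzero(lasso_coordinate_descent(X, y, lam=0.2)))
```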
Contributors: Li, Qingyang (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Thesis advisor) / He, Jingrui (Committee member) / Wang, Yalin (Committee member) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Imagine that we have a piece of matter that can change its physical properties, like its shape, density, conductivity, or color, in a programmable fashion based on either user input or autonomous sensing. This is the vision behind what is commonly known as programmable matter. Envisioned as systems of nano-scale sensor devices, programmable matter consists of systems of simple computational elements, called particles, that can establish and release bonds, compute, and actively move in a self-organized way. In this dissertation, the feasibility of solving fundamental problems relevant for programmable matter is investigated. As a model for such self-organizing particle systems (SOPS), the geometric amoebot model is introduced. In this model, particles only have local information and have modest computational power. They achieve locomotion by expanding and contracting, which resembles the behavior of amoebae. Under this model, efficient local-control algorithms for the leader election problem in SOPS are presented. As a central problem for programmable matter, shape formation problems are then studied. The limitations of solving the leader election problem and the shape formation problem on a more general version of the amoebot model are also discussed. The "smart paint" problem is also studied, which aims at having the particles self-organize in order to uniformly coat the surface of an object of arbitrary shape and size, forming multiple coating layers if necessary. A Universal Coating algorithm is presented and shown to be asymptotically worst-case optimal both in terms of time (with high probability) and work. In particular, the algorithm always terminates within a linear number of rounds with high probability. A linear lower bound on the competitive gap between fully local coating algorithms and coating algorithms that rely on global information is presented, which implies that the proposed algorithm is also optimal in a competitive sense. Simulation results show that the competitive ratio of the proposed algorithm may be better than linear in practice. The developed algorithms utilize only local control, require only particles with constant-size memory, and are asymptotically optimal in terms of the total number of particle movements needed to reach the desired shape configuration.
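The expand-and-contract locomotion primitive can be sketched in a few lines; the simplified square-grid coordinates and method names below are illustrative assumptions rather than the dissertation's geometric amoebot formalism, which is defined on the triangular lattice.

```python
from dataclasses import dataclass
from typing import Optional, Set, Tuple

Node = Tuple[int, int]


@dataclass
class Particle:
    """Toy amoebot-style particle: it occupies one node when contracted and two
    adjacent nodes (head and tail) when expanded."""
    head: Node
    tail: Optional[Node] = None

    @property
    def is_expanded(self) -> bool:
        return self.tail is not None

    def expand(self, target: Node, occupied: Set[Node]) -> bool:
        """Expand the head into an adjacent, unoccupied node."""
        if self.is_expanded or target in occupied:
            return False
        occupied.add(target)
        self.tail, self.head = self.head, target
        return True

    def contract_tail(self, occupied: Set[Node]) -> bool:
        """Contract by releasing the tail node, completing one unit of movement."""
        if not self.is_expanded:
            return False
        occupied.discard(self.tail)
        self.tail = None
        return True


occupied: Set[Node] = {(0, 0)}
p = Particle(head=(0, 0))
p.expand((0, 1), occupied)    # particle now spans (0,0)-(0,1)
p.contract_tail(occupied)     # particle has finished moving to (0,1)
```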
Contributors: Derakhshandeh, Zahra (Author) / Richa, Andrea (Thesis advisor) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Scheideler, Christian (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
This study aims to examine how the use of consensus-based transactions, smart contracts, and interoperability, provided by blockchain, may benefit the blood plasma industry. Plasma fractionation is the process of separating blood into multiple components to garner the benefits of increased lifespan, specialized allocation, and decreased waste, thereby creating a more complex and flexible supply chain. Traditional applications of blockchain are developed on the basis of decentralization, an infeasible policy for this sector due to stringent government regulations such as HIPAA. However, the trusted nature of the relations in the plasma industry's taxonomy makes private, centralized blockchains the viable alternative. Implementations of blockchain are widely seen across pharmaceutical supply chains to combat the falsification of possibly afflictive drugs. This system is more difficult to manage with blood, due to the short perishable window, the tracking and tracing of recycled components, and the necessity of real-time metrics. Key attributes of private blockchains, such as digital identity, smart contracts, and authorized ledgers, may have the possibility of providing a significant positive impact on the allocation and management functions of blood banks. Herein, we will identify the economics and risks of the plasma ecosystem to extrapolate specific applications for the use of blockchain technology. To understand the tangible effects of blockchain, we developed a proof-of-concept application aiming to emulate the business logic of modern plasma supply chain ecosystems while adopting a blockchain data structure. The application testing simulates the supply chain via agent-based modeling to analyze the scalability, benefits, and limitations of blockchain for the plasma fractionation industry.
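As a minimal sketch of the hash-linked, append-only ledger idea behind such a private blockchain, the Python code below records plasma-unit events in blocks whose integrity can be checked end to end; the record fields and class names are hypothetical and are not the schema of the thesis's proof-of-concept application.

```python
import hashlib
import json
import time


def _hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class PlasmaLedger:
    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(), "tx": [], "prev_hash": "0"}
        self.chain = [genesis]

    def add_block(self, transactions: list) -> dict:
        """Append a block of plasma-unit events (collection, fractionation, shipment)."""
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "tx": transactions,
            "prev_hash": _hash(prev),
        }
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        """Tampering with any earlier block breaks every hash link that follows it."""
        return all(
            self.chain[i]["prev_hash"] == _hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )


ledger = PlasmaLedger()
ledger.add_block([{"unit": "P-001", "event": "fractionated", "site": "Lab-A"}])
assert ledger.is_valid()
```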
Contributors: Vallabhaneni, Saipavan K (Author) / Boscovic, Dragan (Thesis director) / Kellso, James (Committee member) / Department of Information Systems (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Realistic lighting is important to improve immersion and make mixed reality applications seem more plausible. To properly blend AR objects into the real scene, it is important to study the lighting of the environment. The existing illumination frameworks proposed by Google's ARCore (Google's Augmented Reality Software Development Kit) and Apple's ARKit (Apple's Augmented Reality Software Development Kit) are computationally expensive and have very slow refresh rates, which makes them ill-suited to dynamic environments and low-end mobile devices. Recently, other illumination estimation frameworks such as GLEAM and Xihe have emerged, which aim to provide better illumination with faster refresh rates. GLEAM is an illumination estimation framework that understands the real scene by collecting pixel data from a reflecting spherical light probe. GLEAM uses this data to form environment cubemaps, which are later mapped onto a reflection probe to generate illumination for AR objects. From a single viewpoint, only one half of the light probe can be observed at a time, which does not give complete information about the environment. This leads to the idea of multi-viewpoint estimation for better performance. This thesis work analyzes the multi-viewpoint capabilities of AR illumination frameworks that use physical light probes to understand the environment. The current work builds networking on top of GLEAM using the TCP and UDP protocols. This thesis also documents how processor load sharing is performed while networking devices together and how that benefits the performance of GLEAM on mobile devices. Some enhancements using multi-threading have also been made to the existing GLEAM model to improve its performance.
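A minimal sketch of the kind of viewpoint-to-viewpoint networking described here is shown below: a background thread on the primary device receives probe samples over UDP while a secondary viewpoint sends its observation. The port number, packet format, and function names are assumptions for illustration; GLEAM's actual implementation, language, and protocols may differ.

```python
import socket
import threading
import time

PORT = 5005  # hypothetical port; the real networking layer may use a different one


def receive_probe_samples(merge_fn, stop: threading.Event) -> None:
    """Primary device: collect light-probe samples from secondary viewpoints
    on a background thread, so rendering is not blocked by networking."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", PORT))
        sock.settimeout(0.5)
        while not stop.is_set():
            try:
                data, addr = sock.recvfrom(65507)   # max UDP payload size
            except socket.timeout:
                continue
            merge_fn(addr, data)  # fold the remote samples into the local cubemap


def send_probe_samples(host: str, payload: bytes) -> None:
    """Secondary viewpoint: fire-and-forget its half of the probe observation.
    UDP favors freshness over reliability; TCP would be the reliable alternative."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, PORT))


stop = threading.Event()
threading.Thread(target=receive_probe_samples, args=(print, stop), daemon=True).start()
time.sleep(0.2)                                          # let the receiver bind first
send_probe_samples("127.0.0.1", b"probe-samples-from-viewpoint-2")
time.sleep(0.5)
stop.set()
```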
Contributors: Gurram, Sahithi (Author) / LiKamWa, Robert (Thesis advisor) / Jayasuriya, Suren (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This dissertation is an examination of collective systems of computationally limited agents that require coordination to achieve complex ensemble behaviors or goals. The design of coordination strategies can be framed as multiagent optimization problems, which are addressed in this work from both theoretical and practical perspectives. The primary foci of this study are models where computation is distributed over the agents themselves, which are assumed to possess onboard computational capabilities. There exist many assumption variants for distributed models, including fairness and concurrency properties. In general, there is a fundamental trade-off whereby weakening model assumptions increases the applicability of proposed solutions, while also increasing the difficulty of proving theoretical guarantees. This dissertation aims to produce a deeper understanding of this trade-off with respect to multiagent optimization and scalability in distributed settings. This study considers four multiagent optimization problems. The model assumptions begin with fully centralized computation for the all-or-nothing multicommodity flow problem, then progress to synchronous distributed models through examination of the unmapped multivehicle routing problem and the distributed target localization problem. The final model is again distributed but assumes an unfair asynchronous adversary in the context of the energy distribution problem for programmable matter. For these problems, a variety of algorithms are presented, each of which is grounded in a theoretical foundation that permits formal guarantees regarding correctness, running time, and other critical properties. These guarantees are then validated with in silico simulations and (in some cases) physical experiments, demonstrating empirically that they may carry over to the real world. Hence, this dissertation bridges a portion of the predictability-practicality gap with respect to multiagent optimization problems.
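As a toy illustration of the synchronous distributed model assumed for the middle two problems, the sketch below runs agents in lock-step rounds where every agent reads its neighbors' previous values and then all agents update at once; the averaging behavior (a simple consensus step) is an illustrative stand-in for the dissertation's actual algorithms.

```python
import random


def synchronous_rounds(values, neighbors, rounds=50):
    """In each round, every agent reads its neighbors' previous values,
    then all agents update simultaneously against that round's snapshot."""
    for _ in range(rounds):
        snapshot = dict(values)  # all reads see the same, previous-round state
        for agent in values:
            nbrs = neighbors[agent]
            values[agent] = (snapshot[agent] + sum(snapshot[n] for n in nbrs)) / (1 + len(nbrs))
    return values


# Ring of five agents with random initial values converging toward agreement.
agents = list(range(5))
values = {a: random.random() for a in agents}
neighbors = {a: [(a - 1) % 5, (a + 1) % 5] for a in agents}
print(synchronous_rounds(values, neighbors))
```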
Contributors: Weber, Jamison Wayne (Author) / Richa, Andréa W (Thesis advisor) / Bertsekas, Dimitri P (Committee member) / Murphey, Todd D (Committee member) / Jiang, Zilin (Committee member) / Arizona State University (Publisher)
Created: 2023