Matching Items (81)
Description
Project portfolio selection (PPS) is a significant problem faced by most organizations. How best to select from the many innovative ideas a company has developed, and to deploy them in a proper and sustained manner with a balanced allocation of resources over multiple time periods, is of vital importance to a company's goals. This dissertation details the steps involved in deploying a more intuitive portfolio selection framework that facilitates bringing analysts and management to a consensus on ongoing company efforts and buy-in to final decisions. A binary integer programming selection model is discussed that constructs an efficient frontier, allows the evaluation of portfolios on many different criteria, and lets decision makers (DMs) bring their experience and insight to the table when making a decision. A binary fractional integer program that provides additional choices by optimizing portfolios on cost-benefit ratios over multiple time periods is also presented. By combining this framework with an 'elimination by aspects' model of decision making, DMs evaluate portfolios on various objectives and ensure the selection of a portfolio most in line with their goals. By presenting a modeling framework to easily model a large number of project inter-dependencies, together with an evolutionary algorithm that is intelligently guided in the search for attractive portfolios by a beam search heuristic, practitioners are given a ready recipe to solve big problem instances and generate attractive project portfolios for their organizations. Finally, this dissertation attempts to address the problem of risk and uncertainty in project portfolio selection. After exploring the selection of portfolios based on trade-offs between a primary benefit and a primary cost, the third important dimension, the uncertainty of outcomes and the risk a decision maker is willing to take on in the quest to select the best portfolio for their organization, is examined.
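As a minimal illustration of the efficient-frontier idea (a toy sketch with made-up project costs and benefits, not the dissertation's actual model), the following enumerates every feasible binary portfolio for a small instance under a budget and keeps only the Pareto-efficient ones:

```python
from itertools import combinations

# Hypothetical projects as (cost, benefit) pairs; not taken from the dissertation.
projects = [(4, 7), (3, 5), (5, 9), (2, 3), (6, 8)]
budget = 10

# Enumerate every feasible portfolio (binary selection under a budget cap).
portfolios = []
for r in range(len(projects) + 1):
    for subset in combinations(range(len(projects)), r):
        cost = sum(projects[i][0] for i in subset)
        benefit = sum(projects[i][1] for i in subset)
        if cost <= budget:
            portfolios.append((cost, benefit, subset))

# Keep only Pareto-efficient portfolios: none is dominated by a portfolio that is
# at least as cheap and at least as beneficial, with one of the two strictly better.
frontier = [p for p in portfolios
            if not any(q[0] <= p[0] and q[1] >= p[1] and (q[0] < p[0] or q[1] > p[1])
                       for q in portfolios)]

for cost, benefit, subset in sorted(frontier):
    print(f"projects {subset}: cost={cost}, benefit={benefit}")
```

Decision makers could then apply their own criteria to choose among the frontier portfolios rather than accepting a single "optimal" answer.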
Contributors: Sampath, Siddhartha (Author) / Gel, Esma (Thesis advisor) / Fowler, John W. (Thesis advisor) / Kempf, Karl G. (Committee member) / Pan, Rong (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This research develops heuristics to manage both mandatory and optional network capacity reductions to better serve the network flows. The main application discussed relates to transportation networks, and flow cost corresponds to the travel cost of the network's users. Temporary mandatory capacity reductions are required by maintenance activities. The objective of managing maintenance activities and the attendant temporary network capacity reductions is to schedule the required segment closures so that all maintenance work can be completed on time, and the total flow cost over the maintenance period is minimized for different types of flows. The goal of optional network capacity reduction is to selectively reduce the capacity of some links to improve the overall efficiency of user-optimized flows, where each traveler takes the route that minimizes the traveler's trip cost. In this dissertation, the management of both mandatory and optional network capacity reductions is addressed with consideration of network-wide flow diversions due to changed link capacities.

This research first investigates maintenance scheduling in transportation networks with service vehicles (e.g., truck fleets and passenger transport fleets), where these vehicles are assumed to take the system-optimized routes that minimize the total travel cost of the fleet. This problem is solved with the randomized fixed-and-optimize heuristic developed in this research. This research also investigates maintenance scheduling in networks with multi-modal traffic that consists of (1) regular human-driven cars with user-optimized routing and (2) self-driving vehicles with system-optimized routing. An iterative mixed flow assignment algorithm is developed to obtain the multi-modal traffic assignment resulting from a maintenance schedule. A genetic algorithm with multi-point crossover is applied to obtain a good schedule.
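As an illustrative sketch of the multi-point crossover operator itself (the chromosome encoding and the schedules below are assumptions, not the dissertation's actual representation), consider:

```python
import random

def multi_point_crossover(parent_a, parent_b, num_points=2, rng=random):
    """Multi-point crossover for equal-length list chromosomes.

    Cut points split both parents into segments, and the child alternates
    segments from each parent. The encoding here (one gene per maintenance
    activity, e.g. its start period) is illustrative only.
    """
    assert len(parent_a) == len(parent_b)
    points = sorted(rng.sample(range(1, len(parent_a)), num_points))
    child, take_from_a, prev = [], True, 0
    for cut in points + [len(parent_a)]:
        source = parent_a if take_from_a else parent_b
        child.extend(source[prev:cut])
        take_from_a = not take_from_a
        prev = cut
    return child

# Example: two candidate schedules (start period of each of 8 maintenance activities).
schedule_1 = [0, 2, 1, 3, 0, 2, 4, 1]
schedule_2 = [1, 1, 3, 0, 2, 4, 0, 3]
print(multi_point_crossover(schedule_1, schedule_2))
```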

Based on Braess' paradox, in which removing some links may alleviate the congestion of user-optimized flows, this research generalizes the paradox to reduce the capacity of selected links and thereby improve the efficiency of the resultant user-optimized flows. A heuristic is developed to identify links whose capacity should be reduced, and the corresponding capacity reduction amounts, to obtain more efficient total flows. Experiments on real networks demonstrate that the generalized Braess' paradox exists in reality, and the heuristic developed solves real-world test cases even when commercial solvers fail.
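A worked example of the classic Braess network (a textbook illustration of the paradox itself, not one of the dissertation's test networks) makes the effect concrete:

```python
# Classic Braess network: 4000 travelers go from s to t. Two routes share
# congestible links with latency x/100 minutes (x = flow on the link) and
# fixed 45-minute links. A zero-cost shortcut a->b may or may not be available.
DEMAND = 4000.0

def equilibrium_time(with_shortcut: bool) -> float:
    if with_shortcut:
        # Route s->a->b->t weakly dominates both other routes for any flow split,
        # so at user equilibrium everyone uses it: time = x/100 + 0 + x/100.
        x = DEMAND
        return x / 100 + 0 + x / 100          # 80 minutes per traveler
    # Without the shortcut, symmetry splits travelers evenly between
    # s->a->t and s->b->t: time = x/100 + 45 with x = DEMAND / 2.
    x = DEMAND / 2
    return x / 100 + 45                        # 65 minutes per traveler

print("with shortcut:   ", equilibrium_time(True), "min/traveler")
print("without shortcut:", equilibrium_time(False), "min/traveler")
# Closing (or capping) the shortcut lowers every traveler's trip time, which is
# the effect the generalized capacity-reduction heuristic seeks to exploit.
```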
Contributors: Peng, Dening (Author) / Mirchandani, Pitu B. (Thesis advisor) / Sefair, Jorge (Committee member) / Wu, Teresa (Committee member) / Zhou, Xuesong (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Researchers and practitioners have widely studied road network traffic data in different areas such as urban planning, traffic prediction and spatial-temporal databases. For instance, researchers use such data to evaluate the impact of road network changes. Unfortunately, collecting large-scale high-quality urban traffic data requires tremendous efforts because participating vehicles must install Global Positioning System (GPS) receivers and administrators must continuously monitor these devices. Several urban traffic simulators attempt to generate such data with different features. However, they suffer from two critical issues: (1) Scalability: most of them only offer a single-machine solution, which is not adequate to produce large-scale data. Some simulators can generate traffic in parallel but do not balance the load well among machines in a cluster. (2) Granularity: many simulators do not consider microscopic traffic situations, including traffic lights, lane changing, and car following. This paper proposes GeoSparkSim, a scalable traffic simulator that extends Apache Spark to generate large-scale road network traffic datasets with microscopic traffic simulation. The proposed system seamlessly integrates with a Spark-based spatial data management system, GeoSpark, to deliver a holistic approach that allows data scientists to simulate, analyze and visualize large-scale urban traffic data. To implement microscopic traffic models, GeoSparkSim employs a simulation-aware vehicle partitioning method to partition vehicles among different machines such that each machine has a balanced workload. The experimental analysis shows that GeoSparkSim can simulate the movements of 200 thousand cars over an extensive road network (250 thousand road junctions and 300 thousand road segments).
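As a rough, hypothetical sketch of what simulation-aware vehicle partitioning means (plain Python for illustration, not GeoSparkSim's actual Spark-based implementation or API), one could keep each spatial grid cell of vehicles together and pack cells onto machines so their loads stay balanced:

```python
import heapq
from collections import Counter

def balanced_partition(vehicle_cells, num_machines):
    """Assign grid cells of vehicles to machines with balanced workloads.

    `vehicle_cells` maps vehicle id -> spatial grid cell. Whole cells are kept
    together (vehicles in the same cell interact), and cells are packed greedily,
    largest first, onto the currently least-loaded machine. This is a simplified
    stand-in for a simulation-aware partitioner, not GeoSparkSim's algorithm.
    """
    cell_sizes = Counter(vehicle_cells.values())
    machines = [(0, m, []) for m in range(num_machines)]   # (load, machine id, cells)
    heapq.heapify(machines)
    for cell, size in cell_sizes.most_common():
        load, m, cells = heapq.heappop(machines)
        heapq.heappush(machines, (load + size, m, cells + [cell]))
    return {m: cells for _, m, cells in machines}

vehicles = {f"v{i}": (i % 5, i % 3) for i in range(30)}    # toy vehicles on a 5x3 grid
print(balanced_partition(vehicles, num_machines=3))
```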
Contributors: Fu, Zishan (Author) / Sarwat, Mohamed (Thesis advisor) / Pedrielli, Giulia (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The shift in focus of manufacturing systems to high-mix and low-volume production poses a challenge to both efficient scheduling of manufacturing operations and effective assessment of production capacity. This thesis considers the problem of scheduling a set of jobs that require machine and worker resources to complete their manufacturing operations. Although planners in manufacturing contexts typically focus solely on machines, schedules that only consider machining requirements may be problematic during implementation because machines need skilled workers and cannot run unsupervised. The model used in this research will be beneficial to these environments as planners would be able to determine more realistic assignments and operation sequences to minimize the total time required to complete all jobs. This thesis presents a mathematical formulation for concurrent scheduling of machines and workers that can optimally schedule a set of jobs while accounting for changeover times between operations. The mathematical formulation is based on disjunctive constraints that capture the conflict between operations when trying to schedule them to be performed by the same machine or worker. An additional formulation extends the previous one to consider how cross-training may impact the production capacity and, for a given budget, provide training recommendations for specific workers and operations to reduce the makespan. If training a worker is advantageous to increase production capacity, the model recommends the best time window to complete it such that overlaps with work assignments are avoided. It is assumed that workers can perform tasks involving the recently acquired skills as soon as training is complete. As an alternative to the mixed-integer programming formulations, this thesis provides a math-heuristic approach that fixes the order of some operations based on Largest Processing Time (LPT) and Shortest Processing Time (SPT) procedures, while allowing the exact formulation to find the optimal schedule for the remaining operations. Computational experiments include the use of the solution for the no-training problem as a starting feasible solution to the training problem. Although the models provided are general, the manufacturing of Printed Circuit Boards is used as a case study.
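A minimal sketch of disjunctive (big-M) sequencing constraints, using PuLP for illustration with made-up processing times; this is a toy two-operation, single-machine fragment of that style of model, not the thesis's actual formulation (which also covers workers, changeovers, and training):

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

# Two operations compete for one machine; processing times are illustrative data.
p = {1: 4, 2: 6}
M = sum(p.values())                 # big-M: a valid upper bound on the horizon

prob = LpProblem("disjunctive_toy", LpMinimize)
s = {i: LpVariable(f"start_{i}", lowBound=0) for i in p}   # operation start times
y = LpVariable("op1_before_op2", cat=LpBinary)             # sequencing decision
cmax = LpVariable("makespan", lowBound=0)

# Disjunction: either operation 1 precedes operation 2 on the machine, or vice versa.
prob += s[2] >= s[1] + p[1] - M * (1 - y)   # binding when y = 1 (op 1 first)
prob += s[1] >= s[2] + p[2] - M * y         # binding when y = 0 (op 2 first)

for i in p:
    prob += cmax >= s[i] + p[i]             # makespan covers every completion time
prob += lpSum([cmax])                        # objective: minimize makespan

prob.solve()
print("makespan =", value(cmax))
print({v.name: v.varValue for v in prob.variables()})
```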
Contributors: Adams, Katherine Bahia (Author) / Sefair, Jorge (Thesis advisor) / Askin, Ronald (Thesis advisor) / Webster, Scott (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
I study the problem of locating Relay nodes (RN) to improve the connectivity of a set of already deployed sensor nodes (SN) in a Wireless Sensor Network (WSN). This is known as the Relay Node Placement Problem (RNPP). In this problem, one or more nodes called Base Stations (BS) serve as the collection point of all the information captured by SNs. SNs have limited transmission range and hence signals are transmitted from the SNs to the BS through multi-hop routing. As a result, the WSN is said to be connected if there exists a path from each SN to the BS through which signals can be hopped. The communication range of each node is modeled with a disk of known radius such that two nodes are said to communicate if their communication disks overlap. The goal is to locate a given number of RNs anywhere in the continuous space of the WSN to maximize the number of SNs connected (i.e., maximize the network connectivity). To solve this problem, I propose an integer programming based approach that iteratively approximates the Euclidean distance needed to enforce sensor communication. This is achieved through a cutting-plane approach with a polynomial-time separation algorithm that identifies distance violations. I illustrate the use of my algorithm on large-scale instances of up to 75 nodes, which can be solved in less than 60 minutes. The proposed method shows solution times many times faster than an alternative nonlinear formulation.
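A simplified sketch of the separation idea (hypothetical data and function names; the actual cut-generation procedure in the thesis may differ) is to scan a candidate solution for pairs claimed to communicate whose communication disks do not actually overlap:

```python
from math import hypot

def find_distance_violations(positions, claimed_links, radius):
    """Separation-style check for a candidate relay-placement solution.

    `positions` maps node id -> (x, y); `claimed_links` are pairs the candidate
    solution treats as communicating. Two nodes with disks of radius `radius`
    overlap iff their Euclidean distance is at most 2 * radius. Returns the
    pairs violating that condition, i.e. the links for which a distance cut
    would be added. Illustrative only.
    """
    violated = []
    for u, v in claimed_links:
        (x1, y1), (x2, y2) = positions[u], positions[v]
        if hypot(x2 - x1, y2 - y1) > 2 * radius:
            violated.append((u, v))
    return violated

positions = {"SN1": (0.0, 0.0), "RN1": (3.0, 4.0), "BS": (9.0, 4.0)}
print(find_distance_violations(positions, [("SN1", "RN1"), ("RN1", "BS")], radius=2.5))
```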
Contributors: Surendran, Vishal Sairam Jaitra (Author) / Sefair, Jorge (Thesis advisor) / Mirchandani, Pitu (Committee member) / Grubesic, Anthony (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Distant is a Game Design Document describing an original game by the same name. The game was designed around the principle of core aesthetics, where the user experience is defined first and then the game is built from that experience. Distant is an action-exploration game set on a huge megastructure floating in the atmosphere of Saturn. Players take on the role of HUE, an artificial intelligence trapped in the body of a maintenance robot, as he explores this strange world and uncovers its secrets. Using acrobatic movement abilities, players will solve puzzles, evade enemies, and explore the world from top to bottom. The world, known as the Strobilus Megastructure, is conical in shape, with living quarters and environmental systems in the upper sections and factories and resource mining in the lower sections. The game world is split up into 10 major areas and countless minor and connecting areas. Special movement abilities like wall running and anti-gravity allow players to progress further down in the world. These abilities also allow players to solve more complicated puzzles, and to find more difficult-to-reach items. The story revolves around six artificial intelligences that were created to maintain the station. Many centuries ago, these AI helped humankind maintain their day-to-day lives and helped researchers working on new scientific breakthroughs. This led to the discovery of faster-than-light travel, and humanity left the station and our solar system to explore the cosmos. HUE, the AI in charge of human relations, fell into depression and shut down. Awakening several hundred years in the future, HUE sets out to find the other AI. Along the way he helps them reconnect and discovers the history and secrets of the station. Distant is intended for players looking for three things: a fantastic world full of discovery, a rich, character-driven narrative, and challenging acrobatic gameplay. Players of any age or background are recommended to give it a try, but it will require investment and a willingness to improve. Distant is intended to change players, to force them to confront difficulty and different perspectives. Most games involve upgrading a character; Distant is a game that upgrades the player.
Contributors: Garttmeier, Colin Reiser (Author) / Collins, Daniel (Thesis director) / Amresh, Ashish (Committee member) / School of Arts, Media and Engineering (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This project concentrates on the game Defense of the Ancients 2 (Dota 2). In this game, players are constantly striving to improve their skills, which are fueled by the competitive nature of the game. The design influences the community to engage in this interaction as they play the game cooperatively. This thesis illustrates the importance of player interaction in influencing design as well as how imperative design is in affecting player interaction. These two concepts are not separate, but are deeply entwined. Every action performed within a game has to interact with some element of design. Both determine how games become defined as competitive, casual, or creative. Game designers can benefit from this study as it reinforces the basics of developing a game for players to interact with. However, it is impossible to predict exactly how players will react to a designed element. Designers should remember to tailor the game towards their audience, but also react and change the game depending on how players are using the elements of design. In addition, players should continue to push the boundaries of games to help designers adapt their product to their audience. If there is not constant communication between players and designers, games will not be tailored appropriately. Pushing the limits of a game benefits the players as well as the designers to make a more complete game. Designers do not solely create a game for the players. Rather, players design the game for themselves.
Keywords: game design, player interaction, affinity space, emergent behavior, Dota 2
Contributors: Larsen, Austin James (Author) / Gee, James Paul (Thesis director) / Holmes, Jeffrey (Committee member) / Kobayashi, Yoshihiro (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / School of Arts, Media and Engineering (Contributor)
Created: 2015-05
Description
Bots tamper with social media networks by artificially inflating the popularity of certain topics. In this paper, we define what a bot is, we detail different motivations for bots, we describe previous work in bot detection and observation, and then we perform bot detection of our own. For our bot detection, we are interested in bots on Twitter that tweet Arabic extremist-like phrases. A testing dataset is collected using the honeypot method, and five different heuristics are measured for their effectiveness in detecting bots. The model underperformed, but we have laid the groundwork for a largely untapped focus in bot detection: the diffusion of extremist ideals through bots.
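For illustration only, a toy rule-based score combining simple account features might look like the following; the three features used here (tweet volume, duplicate-text ratio, follower/following balance) are hypothetical examples and are not the five heuristics evaluated in the thesis:

```python
def bot_score(account):
    """Toy rule-based bot score in [0, 3]; higher means more bot-like.

    The features below are hypothetical examples, not the thesis's heuristics.
    """
    score = 0
    score += account["tweets_per_day"] > 100                     # implausibly high volume
    score += account["duplicate_tweet_ratio"] > 0.5               # mostly repeated text
    score += account["followers"] < 0.1 * account["following"]    # follow-spam pattern
    return score

suspect = {"tweets_per_day": 240, "duplicate_tweet_ratio": 0.8,
           "followers": 12, "following": 800}
print("bot score:", bot_score(suspect))                           # -> 3
```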
Contributors: Karlsrud, Mark C. (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
In 2010, two gamma-ray/X-ray bubbles were detected in the center of the Milky Way Galaxy. These bubbles extend symmetrically ≈30,000 light years above and below the Galactic Center, with a width of ≈27,000 light years. These bubbles emit gamma-rays at energies between 1 and 100 giga-electronvolts, have approximately uniform surface brightness, and are expanding at ≈30,000 km/s. We believe that these Fermi Bubbles are the result of an astrophysical jet pulse that occurred millions of years ago. Utilizing high-performance computing and Euler's gas dynamics equations, we hope to find a realistic simulation that will tell us more about the age of these Fermi Bubbles and better understand the mechanism that powers the bubbles.
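For reference, a standard conservation form of the compressible Euler gas dynamics equations (the general equations referred to here, independent of the specific jet-simulation setup) is:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0, \qquad
\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u}) + \nabla p = 0, \qquad
\frac{\partial E}{\partial t} + \nabla \cdot \big[(E + p)\,\mathbf{u}\big] = 0,
```

where ρ is the mass density, **u** the velocity, p the pressure, and E the total energy density; source terms such as gravity can be added on the right-hand sides depending on the model.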
Contributors: Wagner, Benjamin Leng (Author) / Gardner, Carl (Thesis director) / Jones, Jeremiah (Committee member) / Computing and Informatics Program (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Along with the many technologies introduced over the past few years, gesture-based human-computer interaction is becoming a new way for users to creatively communicate and interact with devices. Because the nature of defining free-space gestures influences users' preferences and how long gesture-driven devices remain usable, it is necessary to consider low-stress, intuitive gestures for users to interact with gesture recognition systems. To measure stress, a Galvanic Skin Response instrument was used as a primary indicator, which provided evidence of the relationship between stress and intuitive gestures, as well as user preferences towards certain tasks and gestures during performance. Fifteen participants engaged in creating and performing their own gestures for specified tasks that would be required during the use of free-space gesture-driven devices. The tasks include "activation of the display," scroll, page, selection, undo, and "return to main menu." They were also asked to repeat their gestures for around ten seconds each, which gave them time and further insight into whether their gestures were appropriate for them and for any given task. Surveys were given to the users at different times: one after they had defined their gestures and another after they had repeated their gestures. In the surveys, they ranked their gestures based on comfort, intuition, and ease of communication. Out of those user-ranked gestures, health-efficient gestures were chosen from the highest-ranked ones, given that the participants' rankings were based on comfort and intuition.
Contributors: Lam, Christine (Author) / Walker, Erin (Thesis director) / Danielescu, Andreea (Committee member) / Barrett, The Honors College (Contributor) / Ira A. Fulton School of Engineering (Contributor) / School of Arts, Media and Engineering (Contributor) / Department of English (Contributor) / Computing and Informatics Program (Contributor)
Created: 2015-05