Matching Items (8)
Description
Researchers and practitioners have widely studied road network traffic data in different areas such as urban planning, traffic prediction, and spatial-temporal databases. For instance, researchers use such data to evaluate the impact of road network changes. Unfortunately, collecting large-scale, high-quality urban traffic data requires tremendous effort because participating vehicles must install Global Positioning System (GPS) receivers and administrators must continuously monitor these devices. Several urban traffic simulators attempt to generate such data with different features. However, they suffer from two critical issues: (1) Scalability: most of them only offer single-machine solutions, which are not adequate to produce large-scale data, and the simulators that can generate traffic in parallel do not balance the load well among the machines in a cluster. (2) Granularity: many simulators do not consider microscopic traffic behavior, including traffic lights, lane changing, and car following. This paper proposes GeoSparkSim, a scalable traffic simulator that extends Apache Spark to generate large-scale road network traffic datasets with microscopic traffic simulation. The proposed system seamlessly integrates with a Spark-based spatial data management system, GeoSpark, to deliver a holistic approach that allows data scientists to simulate, analyze, and visualize large-scale urban traffic data. To implement microscopic traffic models, GeoSparkSim employs a simulation-aware vehicle partitioning method that partitions vehicles among different machines such that each machine has a balanced workload. The experimental analysis shows that GeoSparkSim can simulate the movements of 200 thousand cars over an extensive road network (250 thousand road junctions and 300 thousand road segments).
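To make the load-balancing idea concrete, here is a minimal, hypothetical sketch of grid-based vehicle partitioning in Python; the function name, the grid heuristic, and the greedy assignment are illustrative assumptions, not GeoSparkSim's actual implementation or API.

```python
# Hypothetical sketch: balance simulated vehicles across workers by road-network
# grid cell, so no single machine carries a disproportionate share of the traffic.
from collections import defaultdict

def partition_vehicles(vehicles, num_workers, cell_size=0.01):
    """vehicles: list of (vehicle_id, lon, lat). Returns worker_id -> [vehicle_id]."""
    # 1. Bucket vehicles by spatial grid cell (vehicles in one cell interact,
    #    e.g., via car following, so they should stay on the same worker).
    cells = defaultdict(list)
    for vid, lon, lat in vehicles:
        cell = (int(lon // cell_size), int(lat // cell_size))
        cells[cell].append(vid)

    # 2. Greedily assign the heaviest cells first to the currently lightest worker,
    #    approximating a balanced workload per machine.
    loads = [0] * num_workers
    assignment = defaultdict(list)
    for cell, vids in sorted(cells.items(), key=lambda kv: -len(kv[1])):
        w = loads.index(min(loads))
        assignment[w].extend(vids)
        loads[w] += len(vids)
    return assignment

# Example: 10 vehicles spread over a small area, balanced across 3 workers.
demo = [(i, -111.9 + 0.005 * (i % 5), 33.4 + 0.005 * (i // 5)) for i in range(10)]
print(partition_vehicles(demo, num_workers=3))
```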
Contributors: Fu, Zishan (Author) / Sarwat, Mohamed (Thesis advisor) / Pedrielli, Giulia (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The shift in focus of manufacturing systems to high-mix, low-volume production poses a challenge to both efficient scheduling of manufacturing operations and effective assessment of production capacity. This thesis considers the problem of scheduling a set of jobs that require machine and worker resources to complete their manufacturing operations. Although planners in manufacturing contexts typically focus solely on machines, schedules that only consider machining requirements may be problematic during implementation because machines need skilled workers and cannot run unsupervised. The model used in this research will benefit these environments, as planners would be able to determine more realistic assignments and operation sequences to minimize the total time required to complete all jobs. This thesis presents a mathematical formulation for the concurrent scheduling of machines and workers that can optimally schedule a set of jobs while accounting for changeover times between operations. The mathematical formulation is based on disjunctive constraints that capture the conflict between operations when trying to schedule them on the same machine or worker. An additional formulation extends the previous one to consider how cross-training may impact production capacity and, for a given budget, provides training recommendations for specific workers and operations to reduce the makespan. If training a worker is advantageous for increasing production capacity, the model recommends the best time window to complete it such that overlaps with work assignments are avoided. It is assumed that workers can perform tasks involving the newly acquired skills as soon as training is complete. As an alternative to the mixed-integer programming formulations, this thesis provides a math-heuristic approach that fixes the order of some operations based on Largest Processing Time (LPT) and Shortest Processing Time (SPT) procedures, while allowing the exact formulation to find the optimal schedule for the remaining operations. The computational experiments include using the solution of the no-training problem as a starting feasible solution for the training problem. Although the models provided are general, the manufacturing of printed circuit boards is used as a case study.
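For readers unfamiliar with disjunctive scheduling constraints, a standard big-M encoding of the conflict between two operations i and j that share a machine (or worker) is sketched below; the notation is assumed here and is not taken from the thesis.

```latex
% Illustrative big-M disjunctive pair for two operations i and j sharing a resource:
%   s_i, s_j : start times     p_i, p_j : processing times
%   c_ij     : changeover time when i immediately precedes j
%   y_ij = 1 if i precedes j on the shared resource, 0 otherwise
\begin{align}
  s_j &\ge s_i + p_i + c_{ij} - M\,(1 - y_{ij}) \\
  s_i &\ge s_j + p_j + c_{ji} - M\,y_{ij} \\
  y_{ij} &\in \{0,1\}
\end{align}
```

Exactly one of the two inequalities is active, depending on the value of the binary ordering variable, so the same resource never processes both operations at once.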
Contributors: Adams, Katherine Bahia (Author) / Sefair, Jorge (Thesis advisor) / Askin, Ronald (Thesis advisor) / Webster, Scott (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
I study the problem of locating relay nodes (RNs) to improve the connectivity of a set of already deployed sensor nodes (SNs) in a Wireless Sensor Network (WSN). This is known as the Relay Node Placement Problem (RNPP). In this problem, one or more nodes called Base Stations (BSs) serve as the collection point of all the information captured by the SNs. SNs have a limited transmission range, and hence signals are transmitted from the SNs to the BS through multi-hop routing. As a result, the WSN is said to be connected if there exists a path from each SN to the BS through which signals can be hopped. The communication range of each node is modeled as a disk of known radius, such that two nodes are said to communicate if their communication disks overlap. The goal is to locate a given number of RNs anywhere in the continuous space of the WSN to maximize the number of connected SNs (i.e., maximize the network connectivity). To solve this problem, I propose an integer-programming-based approach that iteratively approximates the Euclidean distance needed to enforce sensor communication. This is achieved through a cutting-plane approach with a polynomial-time separation algorithm that identifies distance violations. I illustrate the use of my algorithm on large-scale instances of up to 75 nodes, which can be solved in less than 60 minutes. The proposed method shows solution times many times faster than an alternative nonlinear formulation.
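As an illustration of the kind of polynomial-time separation check such a cutting-plane scheme relies on, the sketch below flags communication links whose Euclidean distance violates the disk-overlap condition; the function name, data layout, and the exact cut that would be generated are assumptions, not the thesis's formulation.

```python
import math

def find_distance_violations(positions, used_links, radius):
    """positions: node -> (x, y); used_links: iterable of (u, v) pairs that the
    candidate solution assumes can communicate; radius: common disk radius.
    Returns the links whose Euclidean distance exceeds 2*radius, i.e., the
    violated links for which a distance cut would be added."""
    violations = []
    for u, v in used_links:
        (xu, yu), (xv, yv) = positions[u], positions[v]
        if math.hypot(xu - xv, yu - yv) > 2 * radius:  # disks do not overlap
            violations.append((u, v))
    return violations

# Toy check: nodes 'a' and 'b' are too far apart to communicate with radius 1.
print(find_distance_violations({"a": (0, 0), "b": (3, 0)}, [("a", "b")], radius=1.0))
```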
Contributors: Surendran, Vishal Sairam Jaitra (Author) / Sefair, Jorge (Thesis advisor) / Mirchandani, Pitu (Committee member) / Grubesic, Anthony (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Coastal areas are susceptible to man-made disasters, such as oil spills, which not only have a dreadful impact on the lives of coastal communities and businesses but also have lasting and hazardous consequences. The United States coastal areas, especially the Gulf of Mexico, have witnessed devastating oil spills of varied sizes and durations that resulted in major economic and ecological losses. These disasters affected the oil, housing, forestry, tourism, and fishing industries, with overall costs exceeding billions of dollars (Baade et al. (2007); Smith et al. (2011)). Extensive research has been done on oil spill simulation techniques, spatial optimization models, and innovative strategies for spill response and planning. However, most of the research in these areas has been conducted independently, leaving a conceptual void between them.

This thesis presents a Spatial Decision Support System (SDSS) that efficiently integrates the independent facets of spill modeling and spatial optimization, enabling officials to investigate and explore the various options for cleaning up an offshore oil spill and make a more informed decision. The thesis utilizes the Blowout and Spill Occurrence Model (BLOSOM) developed by Sim et al. (2015) to simulate hypothetical oil spill scenarios, followed by the Oil Spill Cleanup and Operational Model (OSCOM) developed by Grubesic et al. (2017) to spatially optimize the response efforts. The results of this combination are visualized in the SDSS, featuring geographical maps, so the boat ramps from which the response should be launched can be easily identified, along with the amount of oil that hits the shore, thereby conveying the intensity of the spill's impact on coastal areas for various cleanup targets.
Contributors: Pydi Medini, Prannoy Chandra (Author) / Maciejewski, Ross (Thesis advisor) / Grubesic, Anthony (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Graphs are commonly used visualization tools in a variety of fields. Algorithms have been proposed that claim to improve the readability of graphs by reducing edge crossings, adjusting edge length, or other means. However, little research has been done to determine which of these algorithms best suit human perception of particular graph properties. This thesis explores four different graph properties: average local clustering coefficient (ALCC), global clustering coefficient (GCC), number of triangles (NT), and diameter. For each of these properties, three different graph layouts are applied to represent three different approaches to graph visualization: multidimensional scaling (MDS), force-directed (FD), and tsNET. In a series of studies conducted through the crowdsourcing platform Amazon Mechanical Turk, participants are tasked with discriminating between two graphs in order to determine their just noticeable differences (JNDs) for the four graph properties and three layout algorithms. These results are analyzed using previously established methods presented by Rensink et al. and Kay and Heer. The average JNDs are analyzed using a linear model that determines whether the property-layout pair seems to follow Weber's law, and the individual JNDs are run through a log-linear model to determine whether it is possible to model the individual variance of the participants' JNDs. The models are evaluated using the R² score to determine whether they adequately explain the data and are compared using pairwise Mann-Whitney U-tests to determine whether the layout has a significant effect on the perception of the graph property. These tests indicate that the data collected in the studies cannot always be modeled well with either the linear or the log-linear model, which suggests that some properties may not follow Weber's law. Additionally, the layout algorithm is not found to have a significant impact on the perception of some of these properties.
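As a rough illustration of the two models described above, the sketch below fits a linear (Weber) model to synthetic averaged JNDs and a log-linear model to their logarithms, then reports R² for each; the data and the simple least-squares fits are illustrative assumptions and do not reproduce the thesis's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: base property values and measured JNDs that roughly follow
# Weber's law (JND proportional to the base value), plus multiplicative noise.
base = np.linspace(0.1, 0.9, 20)
jnd = 0.15 * base * rng.lognormal(0.0, 0.1, base.size)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear (Weber) model on the averaged JNDs: jnd ~ k * base + b
k, b = np.polyfit(base, jnd, 1)
print("linear fit   R^2 =", r_squared(jnd, k * base + b))

# Log-linear model on the individual JNDs: log(jnd) ~ a * base + c
a, c = np.polyfit(base, np.log(jnd), 1)
print("log-linear   R^2 =", r_squared(np.log(jnd), a * base + c))
```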
Contributors: Clayton, Benjamin (Author) / Maciejewski, Ross (Thesis advisor) / Kobourov, Stephen (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Breeding seeds to include desirable traits (increased yield, drought/temperature resistance, etc.) is a growing and important method of establishing food security. However, besides breeder intuition, few decision-making tools exist that can provide breeders with credible evidence for deciding which seeds to progress to further stages of development. This thesis develops a chance-constrained knapsack optimization model, which breeders can use to make better decisions about seed progression and to reduce the level of risk in their selections. The model's objective is to select seed varieties out of a larger pool of varieties and maximize the average yield of the "knapsack" subject to meeting a risk criterion. Two models are created for different cases. The first is a risk-reduction model, which seeks to reduce the risk of obtaining a poor yield while still maximizing the total yield. The second model considers the possibility of adverse environmental effects and seeks to mitigate the negative effects they could have on the total yield. In practice, breeders can use these models to better quantify uncertainty when selecting seed varieties.
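For concreteness, a generic chance-constrained knapsack of this flavor can be written as follows; the notation (means, variances, yield target, risk level) is assumed here and is not necessarily the thesis's exact formulation.

```latex
% x_i = 1 if variety i is progressed; mu_i, sigma_i^2 = mean and variance of its
% (random) yield \tilde{y}_i; T = yield target; K = number of varieties to select;
% alpha = accepted risk level.
\begin{align}
  \max_{x \in \{0,1\}^n} \quad & \sum_{i=1}^{n} \mu_i x_i \\
  \text{s.t.} \quad & \Pr\Big(\sum_{i=1}^{n} \tilde{y}_i x_i \ge T\Big) \ge 1 - \alpha,
  \qquad \sum_{i=1}^{n} x_i \le K .
\end{align}
% Under (approximately) independent normal yields, the chance constraint has the
% standard deterministic equivalent
%   \sum_i \mu_i x_i - z_{1-\alpha} \sqrt{\sum_i \sigma_i^2 x_i} \ge T .
```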
Contributors: Ozcan, Ozkan Meric (Author) / Armbruster, Dieter (Thesis advisor) / Gel, Esma (Thesis advisor) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Computer vision and tracking have become areas of great interest for many reasons, including self-driving cars, identification of vehicles and drivers on roads, and security camera monitoring, all of which are expanding in the modern digital era. When working with practical systems that are constrained in multiple ways, such as video quality or viewing angle, algorithms that work well in theory can have a high error rate in practice. This thesis studies several ways in which that error can be minimized.

This thesis describes an application in a practical system: detecting, tracking, and counting people entering different lanes at an airport security checkpoint, using CCTV videos as the primary source. The thesis improves an existing algorithm that is not optimized for this particular problem and has a high error rate when its counts are compared with the true volume of users. The high error rate is caused by many people crowding into the security lanes at the same time. The camera from which footage was captured is located at a poor angle, so many people occlude each other, causing the existing algorithm to miss them. One solution is to count only heads; since heads are smaller than a full body, they occlude less, and, because the camera is angled from above, the heads in back appear higher and are not occluded by the people in front. One of the primary improvements to the algorithm is therefore to combine person detections and head detections to improve accuracy. The proposed algorithm also improves the accuracy of the detections themselves. The existing algorithm used the COCO training dataset, which works well in scenarios where people are visible and not occluded. However, the available video quality in this project was not very good, with people often blocking each other from the camera's view. Thus, a different training set was needed, one that could detect people even in poor-quality frames and under occlusion. The new training set is the first algorithmic improvement and, although occasionally performing worse, reduced the error by 7.25% on average.
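A minimal sketch of the person-plus-head combination idea is shown below; the box format, containment rule, and threshold are illustrative assumptions rather than the thesis's exact method.

```python
def count_people(person_boxes, head_boxes, contain_thresh=0.7):
    """Boxes are (x1, y1, x2, y2). Count each person box once, then add any head
    box that is not mostly contained in some person box (i.e., a person the body
    detector likely missed because of occlusion)."""
    def containment(head, person):
        hx1, hy1, hx2, hy2 = head
        px1, py1, px2, py2 = person
        ix1, iy1 = max(hx1, px1), max(hy1, py1)
        ix2, iy2 = min(hx2, px2), min(hy2, py2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        head_area = max(1e-9, (hx2 - hx1) * (hy2 - hy1))
        return inter / head_area  # fraction of the head covered by the person box

    extra_heads = sum(
        1 for h in head_boxes
        if all(containment(h, p) < contain_thresh for p in person_boxes)
    )
    return len(person_boxes) + extra_heads

# Toy frame: two detected bodies plus one head with no matching body detection -> 3.
print(count_people([(0, 0, 50, 120), (60, 0, 110, 120)],
                   [(10, 5, 30, 25), (130, 5, 150, 25)]))
```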
Contributors: Larsen, Andrei (Author) / Askin, Ronald (Thesis advisor) / Sefair, Jorge (Thesis advisor) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
When looking at drawings of graphs, questions about graph density, community structure, local clustering, and other graph properties may be of critical importance for analysis. While graph layout algorithms have focused on minimizing edge crossings, improving symmetry, and optimizing other layout properties, little is known about how these algorithms relate to a user's ability to perceive graph properties for a given layout. This study applies previously established methodologies for perceptual analysis to identify which graph drawing layout best helps the user perceive a particular graph property. A large-scale (n = 588) crowdsourced experiment is conducted to investigate whether the perception of two graph properties (graph density and average local clustering coefficient) can be modeled using Weber's law. Three graph layout algorithms from three representative classes (force-directed (FD), circular, and multidimensional scaling (MDS)) are studied, and the results of this experiment establish the precision of judgment for these graph layouts and properties. The findings demonstrate that the perception of graph density can be modeled with Weber's law. Furthermore, the perception of the average clustering coefficient can be modeled as an inverse of Weber's law, and the MDS layout showed a significantly different precision of judgment than the FD layout.
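For reference, Weber's law as invoked here states that the just noticeable difference grows in proportion to the base stimulus (standard statement, my notation, not a formula quoted from the thesis); one reading of the reported "inverse" behavior for the average local clustering coefficient is that JNDs shrink, rather than grow, as the base value increases.

```latex
% Weber's law for a graph property P: the JND is proportional to the base value,
% so the relative JND (the Weber fraction k) stays constant.
\Delta P = k \, P
\quad\Longleftrightarrow\quad
\frac{\Delta P}{P} = k .
```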
Contributors: Soni, Utkarsh (Author) / Maciejewski, Ross (Thesis advisor) / Kobourov, Stephen (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2018