Description
Graph coloring is about allocating resources that can be shared except where there are certain pairwise conflicts between recipients. The simplest coloring algorithm that attempts to conserve resources is called first fit. Interval graphs are used in models for scheduling (in computer science and operations research) and in biochemistry for one-dimensional molecules such as genetic material. It is not known precisely how much waste in the worst case is due to the first-fit algorithm for coloring interval graphs. However, after decades of research, the range is narrow. Kierstead proved that the performance ratio R is at most 40. Pemmaraju, Raman, and Varadarajan proved that R is at most 10. This can be improved to 8. Witsenhausen, and independently Chrobak and Slusarek, proved that R is at least 4. Slusarek improved this to 4.45. Kierstead and Trotter extended the method of Chrobak and Slusarek to obtain a lower bound of approximately 4.99999. The method relies on number sequences with a certain property of order. It is shown here that each sequence considered in the construction satisfies a linear recurrence; that R is at least 5; that the Fibonacci sequence is in some sense minimally useless for the construction; and that the Fibonacci sequence is a point of accumulation in some space for the useful sequences of the construction. Limitations of all earlier constructions are revealed.
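First fit is easy to state concretely: process the intervals in their arrival order and give each one the smallest color not already used by an earlier interval that overlaps it. A minimal Python sketch (the open-interval representation and the arrival order are illustrative assumptions, not details from the thesis):

```python
def first_fit(intervals):
    """Color intervals online: each (left, right) pair gets the smallest
    color unused among previously colored intervals that overlap it."""
    colors = []
    for left, right in intervals:
        used = {c for (l, r), c in zip(intervals, colors)
                if left < r and l < right}  # overlap test for open intervals
        c = 0
        while c in used:
            c += 1
        colors.append(c)
    return colors

# Three intervals where first fit reuses color 0 for the third interval.
print(first_fit([(0, 2), (1, 3), (2, 4)]))  # [0, 1, 0]
```

The performance ratio R studied here compares the worst-case number of colors this procedure uses against the optimum, which for interval graphs equals the maximum number of pairwise overlapping intervals.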
ContributorsSmith, David A. (Author) / Kierstead, Henry A. (Thesis advisor) / Czygrinow, Andrzej (Committee member) / Gelb, Anne (Committee member) / Hurlbert, Glenn H. (Committee member) / Kadell, Kevin W. J. (Committee member) / Arizona State University (Publisher)
Created2010
Description
In this work, we explore the potential for realistic and accurate generation of hourly traffic volume with machine learning (ML), using the ground-truth data of Manhattan road segments collected by the New York State Department of Transportation (NYSDOT). Specifically, we address the following question: can we develop an ML algorithm that generalizes the existing NYSDOT data to all road segments in Manhattan? We do so by introducing a supervised learning task of multi-output regression, where ML algorithms use road segment attributes to predict hourly traffic volume. We consider four ML algorithms (K-Nearest Neighbors, Decision Tree, Random Forest, and Neural Network) and tune their hyperparameters by evaluating the performance of each algorithm with 10-fold cross-validation. Ultimately, we conclude that neural networks are the best-performing models and require the least amount of testing time. Lastly, we provide insight into the quantification of “trustworthiness” in a model, followed by brief discussions on interpreting model performance, suggesting potential project improvements, and identifying the biggest takeaways. Overall, we hope our work can serve as an effective baseline for realistic traffic volume generation, and open new directions in the processes of supervised dataset generation and ML algorithm design.
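As an illustration of the multi-output regression task, the sketch below implements the simplest of the four candidate algorithms, K-Nearest Neighbors, in plain Python: a road segment's hourly volume profile is predicted as the mean profile of its k nearest training segments. The features and volumes are toy placeholders, not NYSDOT data:

```python
def knn_predict(X_train, Y_train, x, k=3):
    """Multi-output k-NN regression: predict a segment's hourly volume
    profile as the mean profile of its k nearest training segments."""
    by_distance = sorted(range(len(X_train)),
                         key=lambda i: sum((a - b) ** 2
                                           for a, b in zip(X_train[i], x)))
    nearest = by_distance[:k]
    hours = len(Y_train[0])
    return [sum(Y_train[i][h] for i in nearest) / k for h in range(hours)]

# Toy data: one feature per segment, two "hours" of volume per segment.
X = [[0.0], [1.0], [10.0]]
Y = [[0.0, 0.0], [2.0, 2.0], [10.0, 10.0]]
print(knn_predict(X, Y, [0.4], k=2))  # [1.0, 1.0]
```

In practice, k would be chosen by cross-validation, as the paper does with 10-fold evaluation for all four algorithms.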
ContributorsOtstot, Kyle (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created2022-05
Description
This dissertation focuses on creating a pluralistic approach to understanding and measuring interdisciplinarity at various scales to further the study of the evolution of knowledge and innovation. Interdisciplinarity is considered an important research component and is closely linked to higher rates of innovation. If the goal is to create more innovative research, we must understand how interdisciplinarity operates.

I begin by examining interdisciplinarity with a small scope, the research university. This study uses metadata to create co-authorship networks and examine how a change in university policies to increase interdisciplinarity can be successful. The New American University Initiative (NAUI) at Arizona State University (ASU) set forth the goal of making ASU a world hub for interdisciplinary research. This kind of interdisciplinarity is produced from a deliberate, engineered, reorganization of the individuals within the university and the knowledge they contain. By using a set of social network analysis measurements, I created an algorithm to measure the changes to the co-authorship networks that resulted from increased university support for interdisciplinary research.

The second case study increases the scope of interdisciplinarity from individual universities to a single scientific discourse, the Anthropocene. The idea of the Anthropocene began as a proposal for a new geological epoch and underwent unsupervised interdisciplinary expansion as climate change integrated itself into the core of the discourse. In contrast to the NAUI, which was specifically engineered to increase interdisciplinarity, the Anthropocene discourse grew without central direction. I use keyword co-occurrence networks to measure how the discourse increases its interdisciplinarity through unsupervised expansion after climate change becomes a core keyword within the network and behaves as an anchor point for new disciplines to connect and join the discourse.
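A keyword co-occurrence network of this kind can be built directly from publication metadata: keywords are vertices, and an edge's weight counts the publications on which both keywords appear. A minimal sketch with hypothetical keyword lists (not data from the dissertation):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(keyword_lists):
    """Weighted co-occurrence network: keywords are vertices; an edge's
    weight counts the publications on which both keywords appear."""
    edges = Counter()
    for keywords in keyword_lists:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

def degree(edges, keyword):
    """Number of distinct keywords linked to `keyword`."""
    return sum(1 for a, b in edges if keyword in (a, b))

papers = [["anthropocene", "climate change", "stratigraphy"],
          ["anthropocene", "climate change", "ecology"]]
edges = cooccurrence_edges(papers)
print(edges[("anthropocene", "climate change")])  # 2: co-occurs on both papers
print(degree(edges, "climate change"))            # 3: linked to every other keyword
```

An anchor keyword in the sense described above is one whose degree keeps growing as new disciplines attach their keywords to it.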

The scope of interdisciplinarity increases again with the final case study about the field of evolutionary medicine. Evolutionary medicine is a case of engineered interdisciplinary integration between evolutionary biology and medicine. The primary goal of evolutionary medicine is to better understand "why we get sick" through the lens of evolutionary biology. This makes it an excellent candidate for understanding large-scale interdisciplinarity. I show through multiple types of network and metadata analyses that evolutionary medicine successfully integrates the concepts of evolutionary biology into medicine.

By increasing our knowledge of interdisciplinarity at various scales and how it behaves under different initial conditions, we are better able to understand the elusive nature of innovation. Interdisciplinarity can mean different things depending on how it is defined. I show that a pluralistic approach to defining and measuring interdisciplinarity is not only appropriate but necessary if our goal is to increase interdisciplinarity, the frequency of innovations, and our understanding of the evolution of knowledge.
ContributorsPainter, Deryc T (Author) / Laubichler, Manfred D (Thesis advisor) / Maienschein, Jane (Committee member) / Bliss, Nadya T (Committee member) / Simeone, Michael P (Committee member) / Nesse, Randolph M. (Committee member) / Arizona State University (Publisher)
Created2019
Description
The chromatic number $\chi(G)$ of a graph $G=(V,E)$ is the minimum number of colors needed to color $V(G)$ such that no adjacent vertices receive the same color. The coloring number $\mathrm{col}(G)$ of a graph $G$ is the minimum number $k$ such that there exists a linear ordering of $V(G)$ for which each vertex has at most $k-1$ backward neighbors. It is well known that the coloring number is an upper bound for the chromatic number. The weak $r$-coloring number $\mathrm{wcol}_{r}(G)$ is a generalization of the coloring number, first introduced by Kierstead and Yang. The weak $r$-coloring number $\mathrm{wcol}_{r}(G)$ is the minimum integer $k$ such that for some linear ordering $L$ of $V(G)$, each vertex $v$ can reach at most $k-1$ smaller vertices $u$ (with respect to $L$) by a path of length at most $r$ in which $u$ is the smallest vertex. This dissertation proves that $\mathrm{wcol}_{2}(G)\le 23$ for every planar graph $G$.

The exact distance-$3$ graph $G^{[\natural 3]}$ of a graph $G=(V,E)$ is the graph with vertex set $V$ in which $xy\in E(G^{[\natural 3]})$ if and only if the distance between $x$ and $y$ in $G$ is exactly $3$. This dissertation improves the best known upper bound on the chromatic number of the exact distance-$3$ graphs $G^{[\natural 3]}$ of planar graphs $G$ from $105$ to $95$. It also improves the best known lower bound from $7$ to $9$.
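
Constructing the exact distance-$3$ graph is mechanical: run a breadth-first search from every vertex and join the pairs at distance exactly $3$. A small dependency-free Python sketch (the adjacency-dict representation is an assumption for illustration):

```python
from collections import deque

def exact_distance_edges(adj, r=3):
    """Edge set of the exact distance-r graph: pairs of vertices whose
    distance in G is exactly r, found by BFS from every vertex."""
    def bfs(source):
        dist = {source: 0}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        return dist
    return {frozenset((s, t)) for s in adj
            for t, d in bfs(s).items() if d == r}

# On the path 0-1-2-3-4, exactly the pairs {0,3} and {1,4} are at distance 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(exact_distance_edges(path))
```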

A class of graphs is nowhere dense if for every $r\ge 1$ there exists $t\ge 1$ such that no graph in the class contains a topological minor of the complete graph $K_t$ where every edge is subdivided at most $r$ times. This dissertation gives a new characterization of nowhere dense classes using generalized notions of the domination number.
ContributorsAlmulhim, Ahlam (Author) / Kierstead, Henry (Thesis advisor) / Sen, Arunabha (Committee member) / Richa, Andrea (Committee member) / Czygrinow, Andrzej (Committee member) / Fishel, Susanna (Committee member) / Arizona State University (Publisher)
Created2020
Description
Ensuring reliable operation of large power systems subjected to multiple outages is a challenging task because of the combinatorial nature of the problem. Traditional methods of steady-state security assessment in power systems involve contingency analysis based on AC or DC power flows. However, power flow based contingency analysis is not fast enough to evaluate all contingencies for real-time operations. Therefore, real-time contingency analysis (RTCA) only evaluates a subset of the contingencies (called the contingency list), and hence might miss critical contingencies that lead to cascading failures. This dissertation proposes a new graph-theoretic approach, called the feasibility test (FT) algorithm, for analyzing whether a contingency will create a saturated or over-loaded cut-set in a meshed power network; a cut-set denotes a set of lines which, if tripped, separates the network into two disjoint islands. A novel feature of the proposed approach is that it lowers the solution time significantly, making the approach viable for an exhaustive real-time evaluation of the system. Detecting saturated cut-sets in the power system is important because they represent the vulnerable bottlenecks in the network. The robustness of the FT algorithm is demonstrated on a 17,000+ bus model of the Western Interconnection (WI). Following the detection of post-contingency cut-set saturation, a two-component methodology is proposed to enhance the reliability of large power systems during a series of outages. The first component combines the proposed FT algorithm with RTCA to create an integrated corrective action (iCA), whose goal is to secure the power system against post-contingency cut-set saturation as well as critical branch overloads. The second component only employs the results of the FT to create a relaxed corrective action (rCA) that quickly secures the system against saturated cut-sets.
The first component is more comprehensive than the second, but the latter is computationally more efficient. The effectiveness of the two components is evaluated based upon the number of cascade triggering contingencies alleviated, and the computation time. Analysis of different case-studies on the IEEE 118-bus and 2000-bus synthetic Texas systems indicate that the proposed two-component methodology enhances the scope and speed of power system security assessment during multiple outages.
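The FT algorithm itself is developed in the dissertation; a basic fact it can exploit is max-flow/min-cut duality: the capacity of the tightest cut-set separating two parts of a network equals the maximum flow between them, so a post-contingency transfer requirement exceeding that value implies a saturated cut-set. As a toy illustration only (the 4-bus topology and MW limits are invented, and AC power-flow physics is ignored), here is a plain-Python Edmonds-Karp computation:

```python
from collections import deque

def max_flow(cap, source, sink):
    """Edmonds-Karp max flow on a capacity dict-of-dicts.  By max-flow /
    min-cut duality, the result equals the capacity of the tightest
    cut-set separating source from sink."""
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            v = queue.popleft()
            for u, c in cap.get(v, {}).items():
                if c > 0 and u not in parent:
                    parent[u] = v
                    queue.append(u)
        if sink not in parent:
            return total
        # Trace the path back and push the bottleneck amount along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][w] for u, w in path)
        for u, w in path:
            cap[u][w] -= push
            cap.setdefault(w, {})
            cap[w][u] = cap[w].get(u, 0) + push
        total += push

# Hypothetical 4-bus network with MW line limits; a line outage would be
# modeled by zeroing that line's capacity before the call.
lines = {'a': {'b': 5, 'c': 3}, 'b': {'d': 4}, 'c': {'d': 3}}
print(max_flow(lines, 'a', 'd'))  # 7
```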
ContributorsSen Biswas, Reetam (Author) / Pal, Anamitra (Thesis advisor) / Vittal, Vijay (Committee member) / Undrill, John (Committee member) / Wu, Meng (Committee member) / Zhang, Yingchen (Committee member) / Arizona State University (Publisher)
Created2021
Description
Extremal graph theory results often provide minimum degree conditions which guarantee that a copy of one graph exists within another. A perfect $F$-tiling of a graph $G$ is a collection $\mathcal{F}$ of subgraphs of $G$ such that every element of $\mathcal{F}$ is isomorphic to $F$ and every vertex of $G$ is in exactly one element of $\mathcal{F}$. Let $C^{3}_{t}$ denote the loose cycle on $t = 2s$ vertices, the $3$-uniform hypergraph obtained by replacing the edges $e = \{u, v\}$ of a graph cycle $C$ on $s$ vertices with edge triples $\{u, x_e, v\}$, where $x_e$ is uniquely assigned to $e$. This dissertation proves, for even $t \geq 6$, that any sufficiently large $3$-uniform hypergraph $H$ on $n \in t\mathbb{Z}$ vertices with minimum $1$-degree $\delta^1(H) \geq \binom{n-1}{2} - \binom{\Bsize}{2} + c(t,n) + 1$, where $c(t,n) \in \{0, 1, 3\}$, contains a perfect $C^{3}_{t}$-tiling. The result is tight, generalizing previous results on $C^3_4$ by Han and Zhao.

For an edge-colored graph $G$, let the minimum color degree $\delta^c(G)$ be the minimum number of distinctly colored edges incident to a vertex. Call $G$ rainbow if every edge has a unique color. For $\ell \geq 5$, this dissertation proves that any sufficiently large edge-colored graph $G$ on $n$ vertices with $\delta^c(G) \geq \frac{n + 1}{2}$ contains a rainbow cycle on $\ell$ vertices. The result is tight for odd $\ell$ and extends previous results for $\ell = 3$. In addition, for even $\ell \geq 4$, this dissertation proves that any sufficiently large edge-colored graph $G$ on $n$ vertices with $\delta^c(G) \geq \frac{n + c(\ell)}{3}$, where $c(\ell) \in \{5, 7\}$, contains a rainbow cycle on $\ell$ vertices. The result is tight when $6 \nmid \ell$. As a related result, this dissertation proves, for all $\ell \geq 4$, that any sufficiently large oriented graph $D$ on $n$ vertices with $\delta^+(D) \geq \frac{n + 1}{3}$ contains a directed cycle on $\ell$ vertices. This partially generalizes a result by Kelly, Kühn, and Osthus that uses minimum semidegree rather than minimum out-degree.
ContributorsOursler, Roy (Author) / Arizona State University (Publisher)
Created2019
Description
In the ordering game on a graph G, Alice and Bob take turns placing the vertices of G into a linear ordering. The score of the game is the maximum number of neighbors that any vertex has before it in the ordering. Alice's goal in the ordering game is to minimize the score, while Bob's goal is to maximize it. The game coloring number is the least score that Alice can always guarantee in the ordering game, no matter how Bob plays. This paper examines what happens to the game coloring number if Alice or Bob skip turns on the ordering game. Preliminary definitions and examples are provided to give context to the ordering game. These include a polynomial time algorithm to compute the coloring number, a non-competitive version of the game coloring number. The notion of the preordered game is introduced as the primary tool of the paper, along with its naturally defined preordered game coloring number. To emphasize the complex relationship between the coloring number and the preordered game coloring number, a non-polynomial time strategy is given to Alice and Bob that yields the preordered game coloring number on any graph. Additionally, the preordered game coloring number is shown to be monotonic, one of the most useful properties for turn-skipping. Techniques developed throughout the paper are then used to determine that Alice cannot reduce the score and Bob cannot improve the score by skipping any number of their respective turns. This paper can hopefully be used as a stepping stone towards bounding the score on graphs when vertices are removed, as well as in deciphering further properties of the asymmetric marking game.
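The coloring number itself admits a simple polynomial-time computation: repeatedly delete a minimum-degree vertex; the largest degree observed at deletion time is the graph's degeneracy, and the coloring number is one more (the reversed deletion order is an optimal ordering). The greedy sketch below is one standard way to do this, not necessarily the algorithm given in the paper:

```python
def coloring_number(adj):
    """col(G) = degeneracy + 1, computed by repeatedly deleting a
    minimum-degree vertex and recording the largest degree seen."""
    adj = {v: set(ns) for v, ns in adj.items()}  # mutable working copy
    degeneracy = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        degeneracy = max(degeneracy, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return degeneracy + 1

# A 5-cycle: any ordering leaves some vertex with two earlier neighbors.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(coloring_number(cycle))  # 3
```

The game coloring number can only be larger, since Bob's moves can force worse orderings than the optimal one this routine certifies.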
ContributorsGuglielmo, Jason A (Author) / Kierstead, Henry (Thesis director) / Czygrinow, Andrzej (Committee member) / School of Molecular Sciences (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Little is known about how cognitive and brain aging patterns differ in older adults with autism spectrum disorder (ASD). However, recent evidence suggests that individuals with ASD may be at greater risk of pathological aging conditions than their neurotypical (NT) counterparts. A growing body of research indicates that older adults with ASD may experience accelerated cognitive decline and neurodegeneration as they age, although studies are limited by their cross-sectional design in a population with strong age-cohort effects. Studying aging in ASD and identifying biomarkers to predict atypical aging is important because the population of older individuals with ASD is growing. Understanding the unique challenges faced as autistic adults age is necessary to develop treatments to improve quality of life and preserve independence. In this study, a longitudinal design was used to characterize cognitive and brain aging trajectories in ASD as a function of autistic trait severity. Principal components analysis (PCA) was used to derive a cognitive metric that best explains performance variability on tasks measuring memory ability and executive function. The slope of the integrated persistent feature (SIP) was used to quantify functional connectivity; the SIP is a novel, threshold-free graph theory metric which summarizes the speed of information diffusion in the brain. Longitudinal mixed models were used to predict cognitive and brain aging trajectories (measured via the SIP) as a function of autistic trait severity, sex, and their interaction. The sensitivity of the SIP was also compared with traditional graph theory metrics. It was hypothesized that older adults with ASD would experience accelerated cognitive and brain aging and, furthermore, that age-related changes in brain network topology would predict age-related changes in cognitive performance.
For both cognitive and brain aging, autistic traits and sex interacted to predict trajectories, such that older men with high autistic traits were most at risk for poorer outcomes. In men with autism, variability in SIP scores across time points trended toward predicting cognitive aging trajectories. Findings also suggested that autistic traits are more sensitive to differences in brain aging than diagnostic group and that the SIP is more sensitive to brain aging trajectories than other graph theory metrics. However, further research is required to determine how physiological biomarkers such as the SIP are associated with cognitive outcomes.
ContributorsSullivan, Georgia (Author) / Braden, Blair (Thesis advisor) / Kodibagkar, Vikram (Thesis advisor) / Schaefer, Sydney (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2022
Description
With the ability to observe the atmospheres of terrestrial exoplanets via transit spectroscopy on the near-term horizon, the possibility of atmospheric biosignatures has received considerable attention in astrobiology. While traditionally exoplanet scientists looking for life focused on biologically relevant trace gases such as O2 and CH4, this approach has raised the spectre of false positives. Therefore, to address these shortcomings, a new set of methods is required to provide higher confidence in life detection. One possible approach is measuring the topology of atmospheric chemical reaction networks (CRNs). To investigate and assess this approach, the ability of network-theoretic metrics to distinguish the distance from thermochemical equilibrium in the atmospheres of hot Jupiters was tested. After modeling the atmospheres of hot Jupiters over a range of initial conditions using the VULCAN modeling package, atmospheric CRNs were constructed from the converged models and their topology measured using the Python package NetworkX. I found that network metrics were able to predict the distance from thermochemical equilibrium better than atmospheric species abundances alone. Building on this success, I modeled 30,000 terrestrial worlds. These models fell into two categories: anoxic Archean Earth-like planets that varied in terms of CH4 surface flux (modeled as either biotic or abiotic in origin), and modern Earth-like planets with and without a surface flux of CCl2F2 (to represent the presence of industrial civilizations). I constructed atmospheric CRNs from the converged models and analyzed their topology. I found that network metrics could distinguish between atmospheres with and without the presence of life or technology. In particular, mean degree and average shortest path length consistently performed better at distinguishing between abiotic and biotic Archean-like atmospheres than CH4 abundance.
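Both metrics singled out above are cheap to compute once a CRN is represented as an adjacency structure: mean degree is the average neighbor count, and average shortest path length can be taken over all reachable ordered pairs via breadth-first search. A dependency-free sketch (NetworkX supplies equivalents; the three-species network here is a placeholder, not a VULCAN output):

```python
from collections import deque

def mean_degree(adj):
    """Average number of neighbors per vertex."""
    return sum(len(ns) for ns in adj.values()) / len(adj)

def average_shortest_path(adj):
    """Mean BFS distance over all ordered reachable pairs
    (assumes a connected network)."""
    total = pairs = 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Placeholder species network; a real CRN links species via shared reactions.
crn = {"CH4": ["CO2"], "CO2": ["CH4", "H2O"], "H2O": ["CO2"]}
print(mean_degree(crn), average_shortest_path(crn))  # both equal 4/3
```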
ContributorsFisher, Theresa Mason (Author) / Walker, Sara I (Thesis advisor) / Hartnett, Hilairy (Committee member) / Line, Michael (Committee member) / Shkolnik, Evgenya (Committee member) / Okie, Jordan (Committee member) / Arizona State University (Publisher)
Created2023