Matching Items (10)

Description
Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel generated streams. The first applies serial tests to the individual streams and then tests the resulting P-values for uniformity. The second turns all the parallel generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation-motivated multivariate tests and tests based on vector time series. These methods have been implemented in an extension to TestU01 built in C++, and the unique aspects of this extension are discussed. A variety of different generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependencies than the original TestU01 suites of tests.
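The two TestU01 schemes described above can be illustrated with a short sketch. The sketch below is not the dissertation's C++ extension; it is a minimal Python stand-in that uses a Kolmogorov-Smirnov test in place of the TestU01 batteries, with stream counts and lengths chosen arbitrarily.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_streams, stream_len = 16, 10_000
# Stand-in for parallel generators: independent uniform(0,1) streams.
streams = [rng.random(stream_len) for _ in range(n_streams)]

# Scheme 1: apply a serial test to each stream, then test the P-values for uniformity.
p_values = np.array([stats.kstest(s, "uniform").pvalue for s in streams])
scheme1_p = stats.kstest(p_values, "uniform").pvalue

# Scheme 2: turn all streams into one long vector, then apply a serial test to it.
scheme2_p = stats.kstest(np.concatenate(streams), "uniform").pvalue

print(f"Scheme 1 (uniformity of per-stream P-values): {scheme1_p:.3f}")
print(f"Scheme 2 (test of the concatenated stream):   {scheme2_p:.3f}")
```

Neither scheme directly measures correlation across streams, which is the gap the multivariate and vector-time-series tests are meant to fill.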
Contributors: Ismay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
By the von Neumann min-max theorem, a two-person zero-sum game with finitely many pure strategies has a unique value for each player (summing to zero), and each player has a non-empty set of optimal mixed strategies. If the payoffs are independent, identically distributed (iid) uniform (0,1) random variables, then with probability one, both players have unique optimal mixed strategies utilizing the same number of pure strategies with positive probability (Jonasson 2004). The pure strategies with positive probability in the unique optimal mixed strategies are called saddle squares. In 1957, Goldman evaluated the probability of a saddle point (a 1 by 1 saddle square), which was rediscovered by many authors including Thorp (1979). Thorp gave two proofs of the probability of a saddle point, one using combinatorics and one using a beta integral. In 1965, Falk and Thrall investigated the integrals required for the probabilities of a 2 by 2 saddle square for 2 × n and m × 2 games with iid uniform (0,1) payoffs, but they were not able to evaluate the integrals. This dissertation generalizes Thorp's beta integral proof of Goldman's probability of a saddle point, establishing an integral formula for the probability that an m × n game with iid uniform (0,1) payoffs has a k by k saddle square (k ≤ m, n). Additionally, the probabilities of a 2 by 2 and a 3 by 3 saddle square for a 3 × 3 game with iid uniform (0,1) payoffs are found. For these, the 14 integrals observed by Falk and Thrall are dissected into 38 disjoint domains, and the integrals are evaluated using the basic properties of the dilogarithm function. The final results for the probabilities of a 2 by 2 and a 3 by 3 saddle square in a 3 × 3 game are linear combinations of 1, π², and ln(2) with rational coefficients.
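As context for the saddle-square probabilities discussed above, Goldman's 1957 result gives the probability of a plain saddle point (a 1 by 1 saddle square) in an m × n game with iid continuous payoffs as m! n! / (m + n − 1)!. The Python sketch below is purely illustrative (it is not part of the dissertation) and checks that closed form by Monte Carlo simulation.

```python
import numpy as np
from math import factorial

def has_saddle_point(A: np.ndarray) -> bool:
    # A saddle point is an entry that is the minimum of its row and the maximum of its column.
    row_mins = A.min(axis=1, keepdims=True)
    col_maxs = A.max(axis=0, keepdims=True)
    return bool(np.any((A == row_mins) & (A == col_maxs)))

def goldman_probability(m: int, n: int) -> float:
    # Goldman's closed form for iid continuous payoffs: m! n! / (m + n - 1)!.
    return factorial(m) * factorial(n) / factorial(m + n - 1)

rng = np.random.default_rng(1)
m, n, trials = 3, 3, 100_000
hits = sum(has_saddle_point(rng.random((m, n))) for _ in range(trials))
print(f"Monte Carlo estimate: {hits / trials:.4f}   Goldman formula: {goldman_probability(m, n):.4f}")
```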
Contributors: Manley, Michael (Author) / Kadell, Kevin W. J. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Lohr, Sharon (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There are multiple mathematical models for alignment of individuals moving within a group. In a first class of models, individuals tend to relax their velocity toward the average velocity of other nearby neighbors. These models are motivated by the flocking behavior exhibited by birds. Another class of models has been introduced to describe rapid changes of individual velocity, referred to as jumps, which better describe the behavior of smaller agents (e.g. locusts, ants). In this second class, individuals randomly choose to align with another nearby individual, matching velocities. There are several open questions concerning these two types of behavior: which behavior is the most efficient for creating a flock (i.e. converging toward the same velocity)? Will flocking still emerge when the number of individuals approaches infinity? Analysis of these models shows that, in the homogeneous case where all individuals are capable of interacting with each other, the variance of the velocities in both the jump model and the relaxation model decays to 0 exponentially for any nonzero number of individuals. This implies the individuals in the system converge to an absorbing state where all individuals share the same velocity; therefore individuals converge to a flock even as the number of individuals approaches infinity. Further analysis focused on the case where interactions between individuals are determined by an adjacency matrix. The second eigenvalue of the Laplacian of this adjacency matrix (denoted λ2) provides a lower bound on the rate of decay of the variance. When λ2 is nonzero, the system is said to converge to a flock almost surely. Furthermore, when the adjacency matrix is generated by a random graph in which connections between individuals are formed with probability p (where 0 < p < 1), the system still converges to a flock provided p is sufficiently large relative to 1/N. λ2 is a good estimator of the rate of convergence of the system, in comparison to the value of p used to generate the adjacency matrix.
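For intuition on the role of λ2 in the analysis summarized above, the following illustrative sketch (not the thesis code) simulates a one-dimensional relaxation-type alignment model on an Erdős-Rényi interaction graph, computes the algebraic connectivity λ2 of the graph Laplacian, and tracks the decay of the velocity variance; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, dt, steps = 50, 0.2, 0.1, 400

# Erdős-Rényi interaction graph: an undirected edge appears with probability p.
A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Graph Laplacian and its second-smallest eigenvalue (algebraic connectivity, lambda_2).
L = np.diag(A.sum(axis=1)) - A
lambda2 = np.sort(np.linalg.eigvalsh(L))[1]

# Relaxation model: each velocity relaxes toward the average velocity of its neighbors.
v = rng.normal(size=N)
degrees = A.sum(axis=1)
variances = []
for _ in range(steps):
    neighbor_mean = np.where(degrees > 0, A @ v / np.maximum(degrees, 1.0), v)
    v = v + dt * (neighbor_mean - v)
    variances.append(v.var())

print(f"lambda_2 = {lambda2:.3f}, variance: {variances[0]:.3f} -> {variances[-1]:.2e}")
```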

Contributors: Trent, Austin L. (Author) / Motsch, Sebastien (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

This brief article, written for a symposium on "Collaboration and the Colorado River," evaluates the U.S. Department of the Interior's Glen Canyon Dam Adaptive Management Program ("AMP"). The AMP has been advanced as a pioneering collaborative and adaptive approach for both decreasing scientific uncertainty in support of regulatory decision-making and helping manage contentious resource disputes -- in this case, the increasingly thorny conflict over the Colorado River's finite natural resources. Though encouraging in some respects, the AMP serves as a valuable illustration of the flaws of existing regulatory processes purporting to incorporate collaboration and regulatory adaptation into the decision-making process. Born in the shadow of the law and improvised with too little thought as to its structure, the AMP demonstrates the need to attend to the design of the regulatory process and integrate mechanisms that compel systematic program evaluation and adaptation. As such, the AMP provides vital information on how future collaborative experiments might be modified to enhance their prospects of success.

Contributors: Camacho, Alejandro E. (Author)
Created: 2008-09-19
Description

With a focus on resources of the Colorado River ecosystem below Glen Canyon Dam, the Glen Canyon Dam Adaptive Management Program has included a variety of experimental policy tests, ranging from manipulation of water releases from the dam to removal of non-native fish within Grand Canyon National Park. None of these field-scale experiments has yet produced unambiguous results in terms of management prescriptions. But there has been adaptive learning, mostly from unanticipated or surprising resource responses relative to predictions from ecosystem modeling. Surprise learning opportunities may often be viewed with dismay by some stakeholders who might not be clear about the purpose of science and modeling in adaptive management. However, the experimental results from the Glen Canyon Dam program actually represent scientific successes in terms of revealing new opportunities for developing better river management policies. A new long-term experimental management planning process for Glen Canyon Dam operations, started in 2011 by the U.S. Department of the Interior, provides an opportunity to refocus management objectives, identify and evaluate key uncertainties about the influence of dam releases, and refine monitoring for learning over the next several decades. Adaptive learning since 1995 is critical input to this long-term planning effort. Embracing uncertainty and surprise outcomes revealed by monitoring and ecosystem modeling will likely continue the advancement of resource objectives below the dam, and may also promote efficient learning in other complex programs.

Contributors: Melis, Theodore S. (Author) / Walters, Carl (Author) / Korman, Josh (Author)
Created: 2015
Description

The Glen Canyon Dam Adaptive Management Program (AMP) has been identified as a model for natural resource management. We challenge that assertion, citing the lack of progress toward a long-term management plan for the dam, sustained extra-programmatic conflict, and a downriver ecology that is still in jeopardy, despite over ten years of meetings and an expensive research program. We have examined the primary and secondary sources available on the AMP’s design and operation in light of best practices identified in the literature on adaptive management and collaborative decision-making. We have identified six shortcomings: (1) an inadequate approach to identifying stakeholders; (2) a failure to provide clear goals and involve stakeholders in establishing the operating procedures that guide the collaborative process; (3) inappropriate use of professional neutrals and a failure to cultivate consensus; (4) a failure to establish and follow clear joint fact-finding procedures; (5) a failure to produce functional written agreements; and (6) a failure to manage the AMP adaptively and cultivate long-term problem-solving capacity.

Adaptive management can be an effective approach for addressing complex ecosystem-related processes like the operation of the Glen Canyon Dam, particularly in the face of substantial complexity, uncertainty, and political contentiousness. However, the Glen Canyon Dam AMP shows that a stated commitment to collaboration and adaptive management is insufficient. Effective management of natural resources can only be realized through careful attention to the collaborative design and implementation of appropriate problem-solving and adaptive-management procedures. It also requires the development of an appropriate organizational infrastructure that promotes stakeholder dialogue and agency learning. Though the experimental Glen Canyon Dam AMP is far from a success of collaborative adaptive management, the lessons from its shortcomings can foster more effective collaborative adaptive management in the future by Congress, federal agencies, and local and state authorities.

Contributors: Susskind, Lawrence (Author) / Camacho, Alejandro E. (Author) / Schenk, Todd (Author)
Created: 2010-03-23
Description
Understanding the evolution of opinions is a delicate task as the dynamics of how one changes their opinion based on their interactions with others are unclear.
Contributors: Weber, Dylan (Author) / Motsch, Sebastien (Thesis advisor) / Lanchier, Nicolas (Committee member) / Platte, Rodrigo (Committee member) / Armbruster, Dieter (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Modeling human survivorship is a core area of research within the actuarial community. With life insurance policies and annuity products as dominant financial instruments which depend on future mortality rates, there is a risk that observed human mortality experiences will differ from projected when they are sold. From an insurer's portfolio perspective, to curb this risk, it is imperative that models of human survivorship are constantly being updated and equipped to accurately gauge and forecast mortality rates. At present, the majority of actuarial research in mortality modeling involves factor-based approaches which operate at a global scale, placing little attention on the determinants and interpretable risk factors of mortality, specifically from a spatial perspective. With an abundance of research being performed in the field of spatial statistics and greater accessibility to localized mortality data, there is a clear opportunity to extend the existing body of mortality literature towards the spatial domain. It is the objective of this dissertation to introduce these new statistical approaches to equip the field of actuarial science to include geographic space into the mortality modeling context.

First, this dissertation evaluates the underlying spatial patterns of mortality across the United States, and introduces a spatial filtering methodology to generate latent spatial patterns which capture the essence of these mortality rates in space. Second, local modeling techniques are illustrated, and a multiscale geographically weighted regression (MGWR) model is generated to describe the variation of mortality rates across space in an interpretable manner which allows for the investigation of the presence of spatial variability in the determinants of mortality. Third, techniques for updating traditional mortality models are introduced, culminating in the development of a model which addresses the relationship between space, economic growth, and mortality. It is through these applications that this dissertation demonstrates the utility in updating actuarial mortality models from a spatial perspective.
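Since the multiscale geographically weighted regression (MGWR) model is central to the second contribution, a minimal numpy sketch of the simpler single-bandwidth GWR fit is given below for orientation. It is not the dissertation's model: the data are synthetic, the Gaussian kernel bandwidth is arbitrary, and MGWR would additionally select a separate bandwidth for each covariate.

```python
import numpy as np

def gwr_fit(coords, X, y, bandwidth):
    """Local weighted least-squares fit at every observation location.

    Returns an (n, k) array of local coefficient estimates. A fixed Gaussian
    kernel is used; MGWR would instead tune one bandwidth per covariate.
    """
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian kernel weights
        XtW = X.T * w
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)  # local weighted least squares
    return betas

# Synthetic example: a mortality-like response with a spatially varying covariate effect.
rng = np.random.default_rng(3)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
beta1 = 0.5 + 0.1 * coords[:, 0]                      # effect drifts across space
y = 1.0 + beta1 * x1 + rng.normal(scale=0.1, size=n)

local_betas = gwr_fit(coords, X, y, bandwidth=2.0)
print("local slope range:", local_betas[:, 1].min(), "to", local_betas[:, 1].max())
```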
Contributors: Cupido, Kyran (Author) / Jevtic, Petar (Thesis advisor) / Fotheringham, A. Stewart (Committee member) / Lanchier, Nicolas (Committee member) / Páez, Antonio (Committee member) / Reiser, Mark R. (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This dissertation consists of three papers about opinion dynamics. The first paper is in collaboration with Prof. Lanchier while the other two papers are individual works. Two models are introduced and studied analytically: the Deffuant model and the Hegselmann-Krause (HK) model. The main difference between the two models is that the Deffuant dynamics consists of pairwise interactions whereas the HK dynamics consists of group interactions. Translated into graph terms, each vertex stands for an agent in both models. In the Deffuant model, two graphs are combined: the social graph and the opinion graph. The social graph is assumed to be a general finite connected graph where each edge is interpreted as a social link, such as a friendship relationship, between two agents. At each time step, two social neighbors are randomly selected and interact if and only if their opinion distance does not exceed some confidence threshold, which results in the neighbors' opinions getting closer to each other. The main result about the Deffuant model is the derivation of a positive lower bound for the probability of consensus that is independent of the size and topology of the social graph but depends on the confidence threshold, the choice of the opinion space and the initial distribution. For the HK model, agent i updates its opinion x_i by taking the average opinion of its neighbors, defined as the set of agents with opinion at most ε apart from x_i. Here, ε > 0 is a confidence threshold. There are two types of HK models: the synchronous and the asynchronous HK models. In the former, all the agents update their opinions simultaneously at each time step, whereas in the latter, only one agent is selected uniformly at random to update its opinion at each time step. The mixed model is a variant of the HK model in which each agent can choose its degree of stubbornness and mix its opinion with the average opinion of its neighbors. The main results of this dissertation about HK models show conditions under which asymptotic stability holds or a consensus can be achieved, and give a positive lower bound for the probability of consensus as well as, in the one-dimensional case, an upper bound for the probability of consensus. I demonstrate the bounds for the probability of consensus on a unit cube and a unit interval.
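A brief illustrative sketch of the HK update rules described above follows (this is not the dissertation's code; the number of agents, the confidence threshold ε, and the iteration count are arbitrary).

```python
import numpy as np

def hk_step_synchronous(x: np.ndarray, eps: float) -> np.ndarray:
    # Every agent simultaneously moves to the average opinion of all agents
    # (including itself) whose opinions are within confidence distance eps.
    neighbors = np.abs(x[:, None] - x[None, :]) <= eps
    return (neighbors @ x) / neighbors.sum(axis=1)

def hk_step_asynchronous(x: np.ndarray, eps: float, rng) -> np.ndarray:
    # A single agent, chosen uniformly at random, updates its opinion.
    x = x.copy()
    i = rng.integers(len(x))
    close = np.abs(x - x[i]) <= eps
    x[i] = x[close].mean()
    return x

rng = np.random.default_rng(4)
x = rng.random(100)                  # initial opinions on the unit interval
for _ in range(50):
    x = hk_step_synchronous(x, eps=0.2)
print("opinion clusters:", np.unique(np.round(x, 3)))
```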
Contributors: Li, Hsin-Lun (Author) / Lanchier, Nicolas (Thesis advisor) / Camacho, Erika (Committee member) / Czygrinow, Andrzej (Committee member) / Fishel, Susanna (Committee member) / Motsch, Sebastien (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In baseball, a starting pitcher has historically been a more durable pitcher capable of lasting long into games without tiring. For the entire history of Major League Baseball, these pitchers have been expected to last 6 innings or more into a game before being replaced. However, with the advances in statistics and sabermetrics and their gradual acceptance by professional coaches, the role of the starting pitcher is beginning to change. Teams are experimenting with replacing starters more quickly, challenging the traditional role of the starting pitcher. The goal of this study is to use statistical analyses to determine whether there is an exact point at which a team would benefit from replacing a starting or relief pitcher with another pitcher. We use stepwise logistic regression to predict the likelihood of a team scoring a run, given the current game situation, depending on whether or not a substitution is made.
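A minimal sketch of the kind of stepwise logistic regression described above is shown below, assuming hypothetical game-state variables (inning, outs, runners on base, pitch count, and a substitution indicator) and simulated data; the study's actual covariates, data, and selection criteria are not reproduced here, and forward selection by AIC is used as one common stepwise criterion.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical game-state data; column names are illustrative, not the study's variables.
rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "inning": rng.integers(1, 10, n),
    "outs": rng.integers(0, 3, n),
    "runners_on": rng.integers(0, 4, n),
    "pitch_count": rng.integers(20, 120, n),
    "substitution": rng.integers(0, 2, n),
})
logit_p = -2.0 + 0.4 * df["runners_on"] - 0.3 * df["outs"] + 0.2 * df["substitution"]
df["run_scored"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Forward stepwise logistic regression: repeatedly add the predictor that most improves AIC.
remaining, selected = list(df.columns[:-1]), []
best_aic = np.inf
while remaining:
    aics = {c: sm.Logit(df["run_scored"], sm.add_constant(df[selected + [c]])).fit(disp=0).aic
            for c in remaining}
    candidate, aic = min(aics.items(), key=lambda kv: kv[1])
    if aic >= best_aic:
        break
    selected.append(candidate)
    remaining.remove(candidate)
    best_aic = aic

print("selected predictors:", selected)
```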
Contributors: Buckley, Nicholas J (Author) / Samara, Marko (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05