Matching Items (12)
Description

By the von Neumann min-max theorem, a two-person zero-sum game with finitely many pure strategies has a unique value for each player (summing to zero), and each player has a non-empty set of optimal mixed strategies. If the payoffs are independent, identically distributed (iid) uniform (0,1) random variables, then with probability one, both players have unique optimal mixed strategies utilizing the same number of pure strategies with positive probability (Jonasson 2004). The pure strategies with positive probability in the unique optimal mixed strategies are called saddle squares. In 1957, Goldman evaluated the probability of a saddle point (a 1 by 1 saddle square), a result rediscovered by many authors including Thorp (1979). Thorp gave two proofs of the probability of a saddle point, one using combinatorics and one using a beta integral. In 1965, Falk and Thrall investigated the integrals required for the probabilities of a 2 by 2 saddle square for 2 × n and m × 2 games with iid uniform (0,1) payoffs, but they were not able to evaluate the integrals. This dissertation generalizes Thorp's beta integral proof of Goldman's probability of a saddle point, establishing an integral formula for the probability that an m × n game with iid uniform (0,1) payoffs has a k by k saddle square (k ≤ m, n). Additionally, the probabilities of a 2 by 2 and a 3 by 3 saddle square for a 3 × 3 game with iid uniform (0,1) payoffs are found. For these, the 14 integrals observed by Falk and Thrall are dissected into 38 disjoint domains, and the integrals are evaluated using the basic properties of the dilogarithm function. The final results for the probabilities of a 2 by 2 and a 3 by 3 saddle square in a 3 × 3 game are linear combinations of 1, π², and ln(2) with rational coefficients.
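
As a quick numerical check of the saddle-point result cited above, the sketch below estimates by Monte Carlo the probability that a 3 × 3 matrix of iid uniform (0,1) payoffs has a saddle point and compares it with the classical closed form m!n!/(m+n-1)! attributed to Goldman, assuming the standard convention that a saddle point is an entry that is simultaneously the minimum of its row and the maximum of its column. This is an illustrative sketch only, not code from the dissertation.

```python
# Monte Carlo check of the classical saddle-point probability for an m x n game with
# iid uniform(0,1) payoffs: P(saddle point) = m! n! / (m + n - 1)!  (Goldman 1957).
# Illustrative sketch only; the dissertation's integral formulas are not reproduced.
import numpy as np
from math import factorial

def has_saddle_point(A):
    """True if some entry is both the minimum of its row and the maximum of its column."""
    row_min = A.min(axis=1, keepdims=True)   # shape (m, 1)
    col_max = A.max(axis=0, keepdims=True)   # shape (1, n)
    return bool(np.any((A == row_min) & (A == col_max)))

def estimate(m, n, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    hits = sum(has_saddle_point(rng.random((m, n))) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    m, n = 3, 3
    exact = factorial(m) * factorial(n) / factorial(m + n - 1)   # 36/120 = 0.3 for 3 x 3
    print(f"simulated: {estimate(m, n):.4f}   closed form: {exact:.4f}")
```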
ContributorsManley, Michael (Author) / Kadell, Kevin W. J. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Lohr, Sharon (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created2011
Description

Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel-generated streams. The first applies serial tests to the individual streams and then tests the resulting p-values for uniformity. The second turns all the parallel-generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation-motivated multivariate tests and tests based on vector time series. These methods have been implemented in an extension to TestU01 built in C++, and the unique aspects of this extension are discussed. A variety of different generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependencies than the original TestU01 suites of tests.
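
To make the first scheme concrete, here is a minimal Python analogue (not the actual TestU01 C API): each stream is subjected to a serial goodness-of-fit test, and the per-stream p-values are then themselves tested for uniformity. The test choices and stream construction are assumptions for illustration; the correlated-streams example at the end shows the kind of inter-stream dependence that per-stream testing can fail to flag.

```python
# Python analogue of the first TestU01 parallel-testing scheme (illustrative sketch,
# not the actual TestU01 C API): test each stream serially, then test the per-stream
# p-values for uniformity.
import numpy as np
from scipy import stats

def first_scheme(streams):
    """streams: iterable of 1-D arrays of numbers purportedly uniform on (0, 1)."""
    # Step 1: a serial goodness-of-fit test per stream (Kolmogorov-Smirnov vs U(0,1)).
    p_values = [stats.kstest(s, "uniform").pvalue for s in streams]
    # Step 2: under the null hypothesis, these p-values should themselves be U(0,1).
    return stats.kstest(p_values, "uniform").pvalue

if __name__ == "__main__":
    rng = np.random.default_rng(2013)
    independent = [rng.random(10_000) for _ in range(64)]
    print("independent streams:", first_scheme(independent))
    # Inter-stream dependence via a shared component: each stream is still marginally
    # uniform, so per-stream tests (and hence this scheme) can fail to flag it.
    shared = rng.random(10_000)
    correlated = [(shared + rng.random(10_000)) % 1.0 for _ in range(64)]
    print("correlated streams: ", first_scheme(correlated))
```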
ContributorsIsmay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created2013
Description

This brief article, written for a symposium on "Collaboration and the Colorado River," evaluates the U.S. Department of the Interior's Glen Canyon Dam Adaptive Management Program ("AMP"). The AMP has been advanced as a pioneering collaborative and adaptive approach for both decreasing scientific uncertainty in support of regulatory decision-making and helping manage contentious resource disputes -- in this case, the increasingly thorny conflict over the Colorado River's finite natural resources. Though encouraging in some respects, the AMP serves as a valuable illustration of the flaws of existing regulatory processes purporting to incorporate collaboration and regulatory adaptation into the decision-making process. Born in the shadow of the law and improvised with too little thought as to its structure, the AMP demonstrates the need to attend to the design of the regulatory process and integrate mechanisms that compel systematic program evaluation and adaptation. As such, the AMP provides vital information on how future collaborative experiments might be modified to enhance their prospects of success.

ContributorsCamacho, Alejandro E. (Author)
Created2008-09-19
Description

With a focus on resources of the Colorado River ecosystem below Glen Canyon Dam, the Glen Canyon Dam Adaptive Management Program has included a variety of experimental policy tests, ranging from manipulation of water releases from the dam to removal of non-native fish within Grand Canyon National Park. None of these field-scale experiments has yet produced unambiguous results in terms of management prescriptions. But there has been adaptive learning, mostly from unanticipated or surprising resource responses relative to predictions from ecosystem modeling. Surprise learning opportunities may often be viewed with dismay by some stakeholders who might not be clear about the purpose of science and modeling in adaptive management. However, the experimental results from the Glen Canyon Dam program actually represent scientific successes in terms of revealing new opportunities for developing better river management policies. A new long-term experimental management planning process for Glen Canyon Dam operations, started in 2011 by the U.S. Department of the Interior, provides an opportunity to refocus management objectives, identify and evaluate key uncertainties about the influence of dam releases, and refine monitoring for learning over the next several decades. Adaptive learning since 1995 is critical input to this long-term planning effort. Embracing uncertainty and surprise outcomes revealed by monitoring and ecosystem modeling will likely continue the advancement of resource objectives below the dam, and may also promote efficient learning in other complex programs.

ContributorsMelis, Theodore S. (Author) / Walters, Carl (Author) / Korman, Josh (Author)
Created2015
Description

The Glen Canyon Dam Adaptive Management Program (AMP) has been identified as a model for natural resource management. We challenge that assertion, citing the lack of progress toward a long-term management plan for the dam, sustained extra-programmatic conflict, and a downriver ecology that is still in jeopardy, despite over ten years of meetings and an expensive research program. We have examined the primary and secondary sources available on the AMP’s design and operation in light of best practices identified in the literature on adaptive management and collaborative decision-making. We have identified six shortcomings: (1) an inadequate approach to identifying stakeholders; (2) a failure to provide clear goals and involve stakeholders in establishing the operating procedures that guide the collaborative process; (3) inappropriate use of professional neutrals and a failure to cultivate consensus; (4) a failure to establish and follow clear joint fact-finding procedures; (5) a failure to produce functional written agreements; and (6) a failure to manage the AMP adaptively and cultivate long-term problem-solving capacity.

Adaptive management can be an effective approach for addressing complex ecosystem-related processes like the operation of the Glen Canyon Dam, particularly in the face of substantial complexity, uncertainty, and political contentiousness. However, the Glen Canyon Dam AMP shows that a stated commitment to collaboration and adaptive management is insufficient. Effective management of natural resources can only be realized through careful attention to the collaborative design and implementation of appropriate problem-solving and adaptive-management procedures. It also requires the development of an appropriate organizational infrastructure that promotes stakeholder dialogue and agency learning. Though the experimental Glen Canyon Dam AMP is far from a success of collaborative adaptive management, the lessons from its shortcomings can foster more effective collaborative adaptive management in the future by Congress, federal agencies, and local and state authorities.

ContributorsSusskind, Lawrence (Author) / Camacho, Alejandro E. (Author) / Schenk, Todd (Author)
Created2010-03-23
Description

In baseball, a starting pitcher has historically been a more durable pitcher capable of lasting long into games without tiring. For the entire history of Major League Baseball, these pitchers have been expected to last 6 innings or more into a game before being replaced. However, with the advances in statistics and sabermetrics and their gradual acceptance by professional coaches, the role of the starting pitcher is beginning to change. Teams are experimenting with replacing starters sooner, challenging the traditional role of the starting pitcher. The goal of this study is to use statistical analysis to determine whether there is a specific point at which a team would benefit from replacing a starting or relief pitcher with another pitcher. We will use stepwise logistic regression to predict the likelihood of a team scoring a run, given the current game situation, depending on whether or not a substitution is made.
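
A minimal sketch of this kind of model is shown below: a logistic regression for whether the batting team scores in the remainder of the inning, with the game state and a pitching-change indicator as predictors. The feature names and the synthetic data are hypothetical stand-ins for illustration; the study itself applies stepwise selection to real play-by-play data.

```python
# Hedged sketch of a scoring model: logistic regression on game state plus a
# substitution indicator. Features and data are hypothetical, not from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.integers(1, 10, n),     # inning
    rng.integers(0, 3, n),      # outs
    rng.integers(0, 4, n),      # runners on base
    rng.integers(40, 120, n),   # current pitcher's pitch count
    rng.integers(0, 2, n),      # 1 if a relief pitcher was brought in
])
# Synthetic response: high pitch counts without a substitution raise the scoring odds.
logits = -1.5 + 0.02 * (X[:, 3] - 80) * (1 - X[:, 4]) + 0.4 * X[:, 2] - 0.5 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary(xname=["const", "inning", "outs", "runners", "pitch_count", "sub"]))
```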
ContributorsBuckley, Nicholas J (Author) / Samara, Marko (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description

There are multiple mathematical models for the alignment of individuals moving within a group. In a first class of models, individuals tend to relax their velocity toward the average velocity of other nearby neighbors. These models are motivated by the flocking behavior exhibited by birds. Another class of models has been introduced to describe rapid changes of individual velocity, referred to as jumps, which better describe the behavior of smaller agents (e.g., locusts, ants). In this second class of models, individuals randomly choose to align with another nearby individual, matching velocities. There are several open questions concerning these two types of behavior: which behavior is the most efficient for creating a flock (i.e., for converging toward the same velocity)? Will flocking still emerge when the number of individuals approaches infinity? Analysis of these models shows that, in the homogeneous case where all individuals are capable of interacting with each other, the variance of the velocities in both the jump model and the relaxation model decays to 0 exponentially for any nonzero number of individuals. This implies the individuals in the system converge to an absorbing state where all individuals share the same velocity; therefore, individuals converge to a flock even as the number of individuals approaches infinity. Further analysis focused on the case where interactions between individuals are determined by an adjacency matrix. The second eigenvalue of the Laplacian of this adjacency matrix (denoted λ2) provided a lower bound on the rate of decay of the variance. When λ2 is nonzero, the system is said to converge to a flock almost surely. Furthermore, when the adjacency matrix is generated by a random graph in which connections between individuals are formed with probability p (where 0 < p ≤ 1), the system converges to a flock almost surely when p exceeds roughly 1/N. λ2 is a good estimator of the rate of convergence of the system, in comparison to the value of p used to generate the adjacency matrix.
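
The sketch below illustrates the role of λ2 numerically (it is not the thesis code): it builds the Laplacian of an Erdős-Rényi adjacency matrix with connection probability p, computes its second-smallest eigenvalue, and simulates the relaxation dynamics dv/dt = -Lv, whose velocity variance decays at least as fast as exp(-2 λ2 t). The graph size, p, and step size are illustrative choices.

```python
# Illustrative sketch (not the thesis code): for the relaxation/consensus dynamics
# dv/dt = -L v on an Erdos-Renyi graph, the velocity variance decays at least as fast
# as exp(-2 * lambda_2 * t), where lambda_2 is the second-smallest Laplacian eigenvalue.
import numpy as np

def erdos_renyi_laplacian(n, p, rng):
    A = np.triu(rng.random((n, n)) < p, 1).astype(int)
    A = A + A.T                              # symmetric 0/1 adjacency matrix
    return np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(0)
n, p = 100, 0.1                              # p well above 1/n; connected with high probability
L = erdos_renyi_laplacian(n, p, rng)
lam2 = np.linalg.eigvalsh(L)[1]              # eigenvalues are returned in ascending order
print(f"lambda_2 = {lam2:.3f}")

v = rng.standard_normal(n)                   # scalar velocities for simplicity
dt, steps = 0.01, 400
variances = []
for _ in range(steps):
    v = v - dt * (L @ v)                     # Euler step of the relaxation dynamics
    variances.append(np.var(v))
# Empirical exponential decay rate of the variance over the last half of the run.
half = steps // 2
rate = (np.log(variances[half]) - np.log(variances[-1])) / (dt * (steps - half))
print(f"empirical decay rate = {rate:.2f},  2 * lambda_2 = {2 * lam2:.2f}")
```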

ContributorsTrent, Austin L. (Author) / Motsch, Sebastien (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description

The increase in photovoltaic (PV) generation on distribution grids may cause reverse power flows and challenges such as service voltage violations and transformer overloading. To resolve these issues, utilities need situational awareness, e.g., PV-feeder mapping to identify the potential back-feeding feeders and meter-transformer mapping for transformer overloading. As circuit schematics are outdated, this work relies on data. In cases where advanced metering infrastructure (AMI) data is unavailable, e.g., due to analog meters or bandwidth limitations, the dissertation proposes to use feeder measurements from utilities and solar panel measurements from solar companies to identify the PV-feeder mapping. Several sequentially improved methods based on quantitative association rule mining (QARM) are proposed, and a lower bound is provided as a performance guarantee. However, binning the data in QARM leads to information loss, so bands are designed to replace bins for increased robustness. For cases where AMI data is available but solar PV data is unavailable, AMI voltage data and location data are used for situational awareness, i.e., meter-transformer mapping, to resolve voltage violations and transformer overloading. A density-based clustering method is proposed that leverages AMI voltage data and geographical information to efficiently segment utility meters such that each segment comprises meters from only a few transformers. Although this is helpful for utilities, it may not directly recover the meter-transformer connectivity, which requires transformer-wise segmentation. The proposed density-based method and other past methods ignore two common scenarios: a large distance between a meter and its parent transformer, or a high similarity between a meter's consumption pattern and those of a non-parent transformer's meters. However, hopping from meter to meter can still lead to the parent transformer's group of meters, because the similarity of intra-cluster meter voltages is usually stronger than the similarity of inter-cluster meter voltages. Therefore, a performance guarantee is provided via spectral embedding of the voltage data under a reasonable assumption. Moreover, the assumption is partially relaxed using location data. This work benefits utilities in many ways, e.g., by mitigating voltage violations through transformer tap settings and by identifying overloaded transformers.
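
As a rough illustration of the density-based segmentation idea (not the dissertation's actual algorithm or data), the sketch below clusters meters with DBSCAN using their pairwise voltage correlations together with GPS coordinates; the feature construction, parameters, and synthetic voltage profiles are assumptions made for the example.

```python
# Rough illustration of density-based meter segmentation: cluster meters using
# pairwise AMI voltage correlations plus location. All data here are synthetic and
# the feature choices are assumptions, not the dissertation's method.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def cluster_meters(voltages, locations, eps=2.0, min_samples=3):
    """voltages: (n_meters, n_timesteps) voltage series; locations: (n_meters, 2) coordinates."""
    corr = np.corrcoef(voltages)                      # meter-to-meter voltage similarity
    features = StandardScaler().fit_transform(np.hstack([corr, locations]))
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Two hypothetical transformer groups sharing low-frequency voltage behavior.
    t = np.linspace(0.0, 1.0, 96)
    group = np.repeat([0, 1], 10)                     # 10 meters per transformer group
    base = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    voltages = base[group] + 0.1 * rng.standard_normal((20, 96))
    locations = np.column_stack([group * 0.5, np.zeros(20)]) + 0.05 * rng.standard_normal((20, 2))
    print(cluster_meters(voltages, locations))        # cluster label per meter
```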
ContributorsSaleem, Muhammad Bilal (Author) / Weng, Yang (Thesis advisor) / Lanchier, Nicolas (Committee member) / Wu, Meng (Committee member) / Cook, Elizabeth (Committee member) / Arizona State University (Publisher)
Created2022
Description

As the impacts of climate change worsen in the coming decades, natural hazards are expected to increase in frequency and intensity, leading to increased loss and risk to human livelihood. The spatio-temporal statistical approaches developed and applied in this dissertation highlight the ways in which hazard data can be leveraged to understand loss trends, build forecasts, and study the societal impacts of losses. Specifically, this work makes use of the Spatial Hazard Events and Losses Database, which is an unparalleled source of loss data for the United States. The first portion of this dissertation develops accurate loss baselines that are crucial for mitigation planning, infrastructure investment, and risk communication. This is accomplished through a stationarity analysis of county-level losses following a normalization procedure. A wide variety of studies employ loss data without addressing stationarity assumptions or the possibility of spurious regression. This work enables the statistically rigorous application of such loss time series to modeling applications. The second portion of this work develops a novel matrix variate dynamic factor model for spatio-temporal loss data stratified across multiple correlated hazards or perils. The developed model is employed to analyze and forecast losses from convective storms, which constitute some of the highest losses covered by insurers. By adopting a factor-based approach, forecasts are achieved despite the complex and often unobserved underlying drivers of these losses. The developed methodology extends the literature on dynamic factor models to matrix variate time series. Specifically, a covariance structure is imposed that is well suited to spatio-temporal problems while significantly reducing model complexity. The model is fit via the EM algorithm and the Kalman filter. The third and final part of this dissertation investigates the impact of compounding hazard events on state and regional migration in the United States. Any attempt to capture trends in climate-related migration must account for the inherent uncertainties surrounding climate change, natural hazard occurrences, and socioeconomic factors. For this reason, I adopt a Bayesian modeling approach that enables the explicit estimation of the inherent uncertainty. This work can provide decision-makers with greater clarity regarding the extent of knowledge on climate trends.
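
A minimal sketch of the stationarity analysis described in the first portion is shown below, using an augmented Dickey-Fuller test on synthetic loss series; the county-level loss data from the Spatial Hazard Events and Losses Database and the dissertation's actual normalization procedure are not reproduced, and the series here are illustrative assumptions.

```python
# Hedged sketch of the stationarity check: an augmented Dickey-Fuller test applied to
# synthetic loss series. The actual loss data and normalization are not reproduced.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
years = 60
trending_losses = np.cumsum(rng.exponential(1.0, years))   # stochastic trend: nonstationary
normalized_losses = rng.exponential(1.0, years)            # stationary stand-in for normalized losses

for name, series in [("trending", trending_losses), ("normalized", normalized_losses)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name:>10}: ADF statistic = {stat:6.2f}, p-value = {pvalue:.3f}")
```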
ContributorsBoyle, Esther Sarai (Author) / Jevtic, Petar (Thesis advisor) / Lanchier, Nicolas (Thesis advisor) / Lan, Shiwei (Committee member) / Cheng, Dan (Committee member) / Fricks, John (Committee member) / Gall, Melanie (Committee member) / Cutter, Susan (Committee member) / McNicholas, Paul (Committee member) / Arizona State University (Publisher)
Created2023
Description
Understanding the evolution of opinions is a delicate task, as the dynamics of how individuals change their opinions based on their interactions with others are unclear.
ContributorsWeber, Dylan (Author) / Motsch, Sebastien (Thesis advisor) / Lanchier, Nicolas (Committee member) / Platte, Rodrigo (Committee member) / Armbruster, Dieter (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created2021