In 2010, for the first time in human history, more than half of the world's total population lived in cities; this share is expected to increase to 60% or more by 2050. The goal of this research effort is to create a comprehensive model and modelling framework for megacities, middleweight cities, and urban agglomerations, collectively referred to as dense urban areas. The motivation for this project comes from the United States Army's desire for readiness in all operating environments, including dense urban areas. Though the research offers valuable insight in support of Army operations, megacities are of unique interest to nearly every societal sector imaginable. Design of Experiments offers a novel approach for determining both main effects and interaction effects between factors within a dense urban area, providing insight into factor causation. Regression modelling can also be employed in the analysis of dense urban areas, providing wide-ranging insight into correlations between factors and their interactions. Past studies involving megacities concern themselves with general trends in cities and their operation. This study is unique in its effort to model a single megacity to enable decision support for military operational planning, as well as potential decision support for city planners seeking to increase the sustainability of these dense urban areas and megacities.
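The Design of Experiments approach described above can be illustrated with a minimal sketch: a two-level full factorial design whose least-squares fit separates main effects from the interaction effect. The factors, levels, and response values here are entirely hypothetical, invented only to show the mechanics.

```python
# Minimal sketch of a 2^2 full factorial Design of Experiments analysis.
# Factors and response values are hypothetical, for illustration only.
import numpy as np

# Coded levels (-1/+1) for two hypothetical urban factors,
# e.g. population density (A) and infrastructure investment (B).
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
y = np.array([10.0, 14.0, 12.0, 20.0])  # hypothetical response (e.g. a congestion index)

# Design matrix: intercept, both main effects, and the A*B interaction.
X = np.column_stack([np.ones(4), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# coef holds [intercept, main effect A, main effect B, interaction A*B]
print(dict(zip(["intercept", "A", "B", "A*B"], coef.round(3))))
```

With coded -1/+1 levels the columns are orthogonal, so each coefficient is an independent estimate of that factor's effect; a nonzero A*B coefficient signals that the factors interact rather than act additively.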
The current Enterprise Requirements and Acquisition Model (ERAM), a discrete event simulation of the major tasks and decisions within the DoD acquisition system, identifies several what-if intervention strategies to improve program completion time. However, the processes that contribute to program acquisition completion time were not explicitly identified in the simulation study. This research seeks to determine the acquisition processes that contribute significantly to total simulated program time in the acquisition system for all programs reaching Milestone C. Specifically, this research performs additional simulation analyses to examine the effects of increased scope management, increased technology maturity, and decreased variation and mean process times in post-Design Readiness Review contractor activities. Potential policies are formulated from the results to further improve program acquisition completion time.
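The intervention examined above (reducing the variation of process times while holding means fixed) can be sketched with a toy Monte Carlo model. This is not the ERAM simulation itself; the stage means, standard deviations, and distributional form are assumptions chosen only to illustrate why reduced variation tightens the completion-time distribution.

```python
# Toy Monte Carlo sketch (NOT the ERAM model): total program time as a sum of
# sequential stage durations, comparing baseline vs. reduced process-time variation.
# Stage means/SDs and the truncated-normal form are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated program replications

def total_time(stage_means, stage_sds):
    """Sum independent stage durations (months), truncated at zero."""
    stages = [rng.normal(m, s, n).clip(min=0) for m, s in zip(stage_means, stage_sds)]
    return np.sum(stages, axis=0)

baseline = total_time([12, 18, 24], [4, 6, 8])
reduced  = total_time([12, 18, 24], [2, 3, 4])  # variation halved, means unchanged

print(f"baseline: mean={baseline.mean():.1f}, sd={baseline.std():.1f}")
print(f"reduced:  mean={reduced.mean():.1f}, sd={reduced.std():.1f}")
```

Under these assumptions the mean completion time is essentially unchanged, but the spread shrinks, which is the sense in which variance-reduction policies improve schedule predictability even before mean process times are shortened.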
The widespread use of statistical analysis in sports, particularly baseball, has made it increasingly necessary for small- and mid-market teams to find ways to maintain their analytical advantages over large-market clubs. In baseball, an opportunity exists for teams with limited financial resources to sign players under team control to long-term contracts before other teams can bid for their services in free agency. If small- and mid-market clubs can successfully identify talented players early, they can save money, achieve cost certainty, and remain competitive for longer periods of time. These deals are also advantageous to players, since they receive job security and greater financial dividends earlier in their careers. The objective of this paper is to develop a regression-based predictive model that teams can use to forecast the performance of young baseball players with limited Major League experience. Several tasks were conducted to achieve this goal: (1) Data were obtained from Major League Baseball and Lahman's Baseball Database and sorted using Excel macros for easier analysis. (2) Players were separated into three positional groups with similar fielding requirements and offensive profiles: Group I comprised first and third basemen; Group II, second basemen, shortstops, and center fielders; and Group III, left and right fielders. (3) Based on the context of baseball and the nature of offensive performance metrics, only players who recorded more than 200 plate appearances within the first two years of their major league debut are included in this analysis. (4) The statistical software package JMP was used to create regression models for each group and to analyze the residuals for irregularities or normality violations. Once the models were developed, slight adjustments were made to improve the accuracy of the forecasts and to identify opportunities for future work.
It was discovered that Group I and Group III were the easiest player groupings to forecast, while Group II required several attempts to improve the model.
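The regression approach described above can be sketched in miniature: fit a linear model that predicts a later-career offensive metric from early-career statistics, then inspect the residuals. The abstract's models were built in JMP from real MLB and Lahman data; the predictors, coefficients, and data below are simulated stand-ins used only to show the workflow.

```python
# Hypothetical sketch of the paper's regression workflow. All data are simulated;
# the real study used JMP with MLB and Lahman's Baseball Database data.
import numpy as np

rng = np.random.default_rng(1)
n = 50  # hypothetical young players with >200 early-career plate appearances

obp_early = rng.normal(0.330, 0.025, n)   # on-base pct, first two seasons
slg_early = rng.normal(0.420, 0.040, n)   # slugging pct, first two seasons
age_debut = rng.integers(20, 27, n)       # age at MLB debut

# Simulated "future" performance with noise (true relationship invented here).
future_ops = 0.8 * obp_early + 0.9 * slg_early - 0.004 * age_debut \
             + rng.normal(0, 0.02, n)

# Ordinary least squares fit, then residual diagnostics.
X = np.column_stack([np.ones(n), obp_early, slg_early, age_debut])
coef, *_ = np.linalg.lstsq(X, future_ops, rcond=None)
resid = future_ops - X @ coef

print("coefficients [intercept, OBP, SLG, age]:", coef.round(3))
print("residual sd:", round(resid.std(), 4))
```

In the actual study the residual analysis is the step where normality violations and irregularities were checked before adjusting the group models.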
Chess has been a common research topic for expert-novice studies, and thus for learning science as a whole, because of its limited framework and longevity as a game. One factor is that chess studies are well suited to measuring how expert chess players use their memory and skills to approach a new chessboard configuration. Studies have shown that chess skill is based on memory, specifically on "chunks" of chess piece positions that players have previously encountered. However, debate exists over how these chunks are constructed in players' memory: they could be built from the proximity of pieces on the chessboard and their precise locations, or through attack-defense relations. The primary objective of this study is to determine which account, proximity or attack/defense, is more in line with chess players' actual memory-based chess abilities. This study replicates and extends an experiment conducted by McGregor and Howe (2002), which explored the argument that pieces are primed more by attack and defense relations than by proximity. Like their study, the present study examined novice and expert chess players' response times for correct and error responses by showing slides of game configurations. In addition to these metrics, the present study incorporated an eye-tracker to measure visual attention and EEG to measure affective and cognitive states. These were added to allow comparison of the subtle and unconscious behaviors of novice and expert chess players. Overall, most of McGregor and Howe's (2002) results were replicated, supporting their theory of chess expertise. This included statistical significance for skill in the error rates, with mean error rates on the piece recognition tests of 70.1% for novices and 87.9% for experts, as well as significance for the two-way interaction of relatedness and proximity, with error rates of 22.4% for unrelated/far, 18.8% for related/far, 15.8% for unrelated/near, and 29.3% for related/near. However, none of the response time effects reached statistical significance, including the interaction between skill and proximity that McGregor and Howe had found. Although the eye-tracking and EEG data neither supported nor refuted McGregor and Howe's theory of how chess players memorize chessboard configurations, these metrics did help build a secondary theory: novices typically rely on proximity when approaching chess, and new visual problems in general. This was exemplified by the statistically significant result for short-term excitement in the two-way interaction of skill and proximity, where the largest short-term excitement score occurred for novices viewing near-proximity slides. This may indicate that novices, because they lean toward using proximity to try to recall the pieces, experience a short burst of excitement when pieces are close to each other, since they are more likely to recall those configurations.
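The relatedness-by-proximity interaction reported above can be summarized as a simple difference-of-differences on the four cell error rates quoted in the text (a descriptive contrast only; the study's significance tests are separate):

```python
# Difference-of-differences summary of the reported relatedness x proximity
# error rates (percentages taken from the abstract above).
err = {("unrelated", "far"): 22.4, ("related", "far"): 18.8,
       ("unrelated", "near"): 15.8, ("related", "near"): 29.3}

# Effect of relatedness at each proximity level, in percentage points.
near_effect = err[("related", "near")] - err[("unrelated", "near")]
far_effect  = err[("related", "far")]  - err[("unrelated", "far")]
interaction = near_effect - far_effect  # nonzero => effects differ by proximity

print(f"relatedness effect, near: {near_effect:+.1f} pts")
print(f"relatedness effect, far:  {far_effect:+.1f} pts")
print(f"interaction contrast:     {interaction:+.1f} pts")
```

The contrast is large because relatedness raises errors for near pieces but slightly lowers them for far pieces, which is exactly the crossover pattern a significant two-way interaction reflects.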