Matching Items (5)
Description
The development of microsimulation approaches to urban systems modeling has occurred largely in three parallel streams of research, namely land use, travel demand, and traffic assignment. However, there are important dependencies and inter-relationships between these model systems that need to be accounted for to model the urban system accurately and comprehensively. Location choices affect household activity-travel behavior, household activity-travel behavior affects network level of service (performance), and network level of service, in turn, affects land use and activity-travel behavior. The development of conceptual designs and operational frameworks that represent such complex inter-relationships in a consistent fashion across behavioral units, geographical entities, and temporal scales has proven to be a formidable challenge. This research presents an integrated microsimulation modeling framework called SimTRAVEL (Simulator of Transport, Routes, Activities, Vehicles, Emissions, and Land) that integrates the component model systems in a behaviorally consistent fashion. The model system is designed such that the activity-travel behavior model and the dynamic traffic assignment model communicate with one another in continuous time, with a view to simulating emergent activity-travel patterns in response to dynamically changing network conditions. The dissertation describes the operational framework, presents the modeling methodologies, and offers an extensive discussion of the advantages that such a framework may provide for analyzing the impacts of severe network disruptions on activity-travel choices. A prototype of the model system is developed and implemented for a portion of the Greater Phoenix metropolitan area in Arizona to demonstrate its capabilities.
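As a purely schematic illustration of the continuous-time coupling this abstract describes (not SimTRAVEL's actual code), the following Python sketch shows an activity-travel model and a traffic model exchanging events on a shared simulation clock, so each departure experiences, and alters, the current network level of service; all names and numbers are hypothetical.

```python
import heapq

# Schematic sketch (not the actual SimTRAVEL implementation): travel events
# and network conditions interact in continuous time, so travel choices can
# respond to dynamically changing level of service.

class Network:
    def __init__(self):
        self.congestion = 1.0                    # illustrative level-of-service state

    def travel_time(self, base_minutes):
        return base_minutes * self.congestion

def simulate(events, network):
    heapq.heapify(events)                        # (time, person, action) tuples
    while events:
        time, person, action = heapq.heappop(events)
        if action == "depart":
            # The traffic model returns an arrival time under current conditions.
            arrival = time + network.travel_time(base_minutes=20.0)
            network.congestion += 0.05           # each departure loads the network
            heapq.heappush(events, (arrival, person, "arrive"))
        elif action == "arrive":
            network.congestion = max(1.0, network.congestion - 0.05)
            # An activity-travel model could reschedule later activities here,
            # reacting to the level of service the person just experienced.
            print(f"t={time:5.1f}  person {person} arrived")

simulate([(0.0, 1, "depart"), (5.0, 2, "depart"), (12.0, 3, "depart")], Network())
```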
Contributors: Konduri, Karthik Charan (Author) / Pendyala, Ram M. (Thesis advisor) / Ahn, Soyoung (Committee member) / Kuby, Michael (Committee member) / Kaloush, Kamil (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In the middle of the 20th century in the United States, transportation and infrastructure development became a priority on the national agenda, instigating the development of mathematical models that would predict transportation network performance. Approximately 40 years later, transportation planning models again became a national priority, this time instigating the development of highly disaggregate activity-based traffic models called microsimulations. These models predict travel on a network at the level of the individual decision-maker, but do so at the cost of large computational complexity and processing time. The vast resources and steep learning curve required to integrate microsimulation models into the general transportation plan have deterred planning agencies from incorporating these tools. By examining the stochastic variability in the results of a microsimulation model run with varying random number seeds, this paper evaluates the number of simulation trials, and therefore the computational effort, necessary for a planning agency to reach stable model outcomes. The microsimulation tool used in this research is the Transportation Analysis and Simulation System (TRANSIMS). The requirements for initiating a TRANSIMS simulation are described in the paper. Two analysis corridors are chosen in the Metropolitan Phoenix Area, and three roadway performance characteristics (volume, vehicle-miles of travel, and vehicle-hours of travel) are examined in each corridor under both congested and uncongested conditions. Both congested and uncongested simulations are completed in twenty trials, each with a unique random number seed. Performance measures are averaged for each trial, providing a distribution of average performance measures with which to test the stability of the system. The results show that the variability in outcomes increases with increasing congestion. Although twenty trials are sufficient to achieve stable solutions for the uncongested state, convergence in the congested state is not achieved. These results indicate that a highly congested urban environment requires more than twenty simulation runs for each tested scenario before reaching a solution that can be assumed to be stable. The computational effort needed for this type of analysis is something transportation planning agencies should take into consideration before beginning a traffic microsimulation program.
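The seed-variability analysis described here can be pictured with a short sketch. The Python fragment below is a hypothetical stand-in, not TRANSIMS itself: it runs twenty seeded trials, accumulates cumulative means of the three performance measures, and checks when those means stabilize. The performance-measure values are illustrative only.

```python
import numpy as np

# run_trial() stands in for one seeded TRANSIMS run returning corridor
# performance measures (volume, VMT, VHT); numbers are illustrative.
def run_trial(seed):
    rng = np.random.default_rng(seed)
    return rng.normal(loc=[1800.0, 42000.0, 1100.0], scale=[90.0, 2100.0, 160.0])

trials = np.array([run_trial(seed) for seed in range(20)])  # 20 unique seeds

# Cumulative means show whether averaged outcomes stabilize as trials accumulate.
cum_means = np.cumsum(trials, axis=0) / np.arange(1, 21)[:, None]

# One simple convergence check: the relative change in the cumulative mean
# between consecutive trials stays below a tolerance for all three measures.
rel_change = np.abs(np.diff(cum_means, axis=0)) / np.abs(cum_means[:-1])
stable = np.all(rel_change < 0.01, axis=1)
if stable.any():
    print(f"cumulative means stable (1% tolerance) from trial {stable.argmax() + 2}")
else:
    print("no convergence within 20 trials; more runs needed")
```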
Contributors: Ziems, Sarah Elia (Author) / Pendyala, Ram M. (Thesis advisor) / Ahn, Soyoung (Committee member) / Kaloush, Kamil (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
With the development of computer and sensing technology, rich datasets have become available in many fields such as health care, manufacturing, and transportation, just to name a few. Moreover, data often come from multiple heterogeneous sources or modalities, a common phenomenon in health care systems. While multi-modality data fusion is a promising research area, health care applications pose several special challenges. (1) The integration of biological and statistical models is a significant challenge. (2) It is commonplace that data from some modalities are not available for every patient due to cost, accessibility, and other reasons, resulting in a special missing data structure in which different modalities may be missing in "blocks"; how to train a predictive model on such a dataset poses a significant challenge to statistical learning. (3) It is well known that data from different modalities may contain different aspects of information about the response, and existing studies do not adequately address this. My dissertation develops new statistical learning models that address each of the aforementioned challenges, together with application case studies using real health care datasets, presented in three chapters (Chapters 2, 3, and 4), respectively. Collectively, the dissertation is expected to provide a new set of statistical learning models, algorithms, and theory for multi-modality heterogeneous data fusion, driven by the unique challenges in this area. Application of these new methods to important medical problems using real-world datasets is expected to provide solutions to those problems, thereby contributing to the application domains.
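The "block" missing data structure in challenge (2) is easy to picture in code. The sketch below is illustrative only, with hypothetical modality names and feature counts: each patient either has every feature of a modality or none of them, and patients can be grouped by their missingness pattern rather than imputed cell by cell.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
modalities = {"imaging": 5, "labs": 3, "genomics": 8}  # features per modality

records = []
for _ in range(8):  # eight hypothetical patients
    row = {}
    for name, n_features in modalities.items():
        if rng.random() < 0.7:  # this modality was acquired for this patient
            row.update({f"{name}_{j}": rng.normal() for j in range(n_features)})
        # otherwise the whole block of columns stays missing (NaN)
    records.append(row)

df = pd.DataFrame(records)

# Group patients by which modality blocks they have; a block-wise learning
# method would model each missingness pattern rather than impute cell-wise.
pattern = pd.DataFrame({
    name: df.filter(like=name).notna().any(axis=1) for name in modalities
})
print(pattern.value_counts())
```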
Contributors: Liu, Xiaonan (Ph.D.) (Author) / Li, Jing (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Fatyga, Mirek (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Combining data is the process of merging information from disjoint datasets that share at least a number of common variables. This process is commonly referred to as data fusion, with the main objective of creating a new dataset permitting more flexible analyses than the separate analysis of each individual dataset. Many data fusion methods have been proposed in the literature, although most utilize the frequentist framework. This dissertation investigates a new approach called Bayesian Synthesis, in which information obtained from one dataset acts as the prior for the analysis of the next. This process continues sequentially until a single posterior distribution is created using all available data. These informative, augmented data-dependent priors provide an extra source of information that may aid the accuracy of estimation. To examine the performance of the proposed Bayesian Synthesis approach, results from simulated data with known population values were first examined under a variety of conditions. These results were then compared to those from the traditional maximum likelihood approach to data fusion, as well as to the data fusion approach analyzed via Bayes. Parameter recovery under the proposed Bayesian Synthesis approach was evaluated using four criteria reflecting raw bias, relative bias, accuracy, and efficiency. Subsequently, empirical analyses were conducted with real data, fusing data from five longitudinal studies of mathematics ability that varied in their assessment of ability and in the timing of measurement occasions. Results are reported for the Bayesian Synthesis approach and for the data fusion approaches with combined data using Bayesian and maximum likelihood estimation. The results illustrate that Bayesian Synthesis with data-driven priors is a highly effective approach, provided that the sample sizes for the fused data are large enough to provide unbiased estimates. Bayesian Synthesis thus offers another beneficial approach to data fusion that can effectively enhance the validity of conclusions obtained from merging data from different studies.
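The sequential-prior mechanism behind Bayesian Synthesis can be illustrated with a conjugate normal model, where each posterior has closed form and becomes the prior for the next dataset. The sketch below is a minimal illustration with simulated data and made-up hyperparameters, not the dissertation's actual models.

```python
import numpy as np

def update_normal(prior_mean, prior_var, data, data_var):
    """Posterior for a normal mean given a normal prior (known data variance)."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / data_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
# Five simulated datasets standing in for five separate studies.
studies = [rng.normal(50.0, 10.0, size=n) for n in (40, 65, 30, 80, 55)]

mean, var = 0.0, 1e6  # diffuse initial prior
for data in studies:
    # The posterior from each study becomes the data-dependent prior
    # for the next, until all available data are synthesized.
    mean, var = update_normal(mean, var, data, data_var=100.0)

print(f"synthesized posterior: mean={mean:.2f}, sd={np.sqrt(var):.2f}")
```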
Contributors: Marcoulides, Katerina M. (Author) / Grimm, Kevin (Thesis advisor) / Levy, Roy (Thesis advisor) / MacKinnon, David (Committee member) / Suk, Hye Won (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
In today's global market, companies are facing unprecedented levels of uncertainty in supply, demand, and the economic environment. A critical issue for companies facing increasing competition is to monitor the changing business environment and manage disturbances and changes in real time. In this dissertation, an integrated framework using simulation and online calibration methods is proposed to enable the adaptive management of large-scale complex supply chain systems. The design, implementation, and verification of the integrated approach are studied. The research contributions are two-fold. First, this work enriches symbiotic simulation methodology by proposing a framework that combines simulation with advanced data fusion methods to improve simulation accuracy. Data fusion techniques optimally calibrate the simulation state and parameters by considering errors both in the simulation models and in measurements of the real-world system. Three data fusion methods (Kalman Filtering, Extended Kalman Filtering, and Ensemble Kalman Filtering) are examined and discussed under varied conditions of system chaotic level, data quality, and data availability. Second, the proposed framework is developed, validated, and demonstrated in proof-of-concept case studies on representative supply chain problems. In a case study of a simplified supply chain system, Kalman Filtering is applied to fuse simulation data and emulation data, effectively improving the accuracy of abnormality detection. In a case study of the "beer game" supply chain model, the system's chaotic level is identified as a key factor influencing simulation performance and the choice of data fusion method. Ensemble Kalman Filtering is found to be more robust than Extended Kalman Filtering in a highly chaotic system. With appropriate tuning, the improvement in simulation accuracy is up to 80% in a chaotic system and 60% in a stable system. In the last study, the integrated framework is applied to adaptive inventory control of a multi-echelon supply chain with non-stationary demand. It is worth pointing out that the framework proposed in this dissertation is useful not only in supply chain management but also in modeling other complex dynamic systems, such as healthcare delivery systems and energy consumption networks.
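The calibration idea underlying the framework can be illustrated with a minimal scalar Kalman filter, in which each simulation forecast is corrected by a noisy measurement of the real system. The sketch below uses an assumed linear model and illustrative noise levels, not the dissertation's supply chain models.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.95            # assumed linear state-transition coefficient
Q, R = 4.0, 9.0     # process and measurement noise variances (assumed)

x_true, x_est, P = 100.0, 80.0, 50.0  # true state, simulation estimate, its variance
for t in range(20):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(Q))  # real system evolves
    z = x_true + rng.normal(0.0, np.sqrt(R))           # noisy measurement

    # Predict: propagate the simulation state and its uncertainty.
    x_est, P = a * x_est, a * a * P + Q

    # Update: fuse prediction and measurement, weighted by their uncertainties.
    K = P / (P + R)                  # Kalman gain
    x_est = x_est + K * (z - x_est)  # calibrated simulation state
    P = (1.0 - K) * P

print(f"final error after fusion: {abs(x_true - x_est):.2f}")
```

The same recursion generalizes to the Extended and Ensemble variants mentioned above, which replace the linear prediction step with a linearized or sampled nonlinear model.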
Contributors: Wang, Shanshan (Author) / Wu, Teresa (Thesis advisor) / Fowler, John (Thesis advisor) / Pfund, Michele (Committee member) / Li, Jing (Committee member) / Pavlicek, William (Committee member) / Arizona State University (Publisher)
Created: 2010