Matching Items (119)
Description
This dissertation creates models of past potential vegetation in the Southern Levant during most of the Holocene, from the beginnings of farming through the rise of urbanized civilization (12 to 2.5 ka BP). The time scale encompasses the rise and collapse of the earliest agrarian civilizations in this region. The archaeological record suggests that increases in social complexity were linked to climatic episodes (e.g., favorable climatic conditions coincide with intervals of prosperity or marked social development such as the Neolithic Revolution ca. 11.5 ka BP, the Secondary Products Revolution ca. 6 ka BP, and the Middle Bronze Age ca. 4 ka BP). The opposite can be said about periods of climatic deterioration, when settled villages were abandoned as the inhabitants returned to nomadic or semi-nomadic lifestyles (e.g., abandonment of the largest Neolithic farming towns after 8 ka BP and collapse of Bronze Age towns and cities after 3.5 ka BP during the Late Bronze Age). This study develops chronologically refined models of past vegetation from 12 to 2.5 ka BP, at 500-year intervals, using GIS, remote sensing, and statistical modeling tools (MAXENT) that derive from species distribution modeling. Plants are sensitive to alterations in their environment and respond accordingly; because of this, they are valuable indicators of landscape change. An extensive database of historical and field-gathered observations was created. Using this database, as well as environmental variables that include temperature and precipitation surfaces for the whole study period (also at 500-year intervals), the potential vegetation of the region was modeled. In this way, a continuous chronology of potential vegetation of the Southern Levant was built. The resulting paleo-vegetation models generally agree with the proxy records. They indicate a gradual decline of forests and expansion of steppe and desert throughout the Holocene, interrupted briefly during the Mid-Holocene (ca. 4 ka BP, Middle Bronze Age). They also suggest that during the Early Holocene, forest areas were extensive, spreading into the Northern Negev. The two remaining forested areas in the Northern and Southern Plateau Region in Jordan were also connected during this time. The models also show general agreement with the major cultural developments, with forested areas either expanding or remaining stable during prosperous periods (e.g., Pre-Pottery Neolithic and Middle Bronze Age), and contracting significantly during moments of instability (e.g., Late Bronze Age).
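As a rough illustration of the species-distribution-modeling step described above, the hedged sketch below fits a presence/background model with logistic regression (a simple stand-in for MAXENT, which the dissertation actually uses) and projects habitat suitability onto a hypothetical paleoclimate surface for one time slice; all predictors, sample sizes, and values are invented for the example.

```python
# Hedged sketch: presence/background habitat model, logistic regression as a
# stand-in for MAXENT. All data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical environmental predictors (e.g., mean annual temperature in C,
# annual precipitation in mm) at presence points and random background points.
presence_env = rng.normal(loc=[15.0, 450.0], scale=[2.0, 80.0], size=(200, 2))
background_env = rng.normal(loc=[20.0, 250.0], scale=[4.0, 120.0], size=(1000, 2))

X = np.vstack([presence_env, background_env])
y = np.concatenate([np.ones(len(presence_env)), np.zeros(len(background_env))])

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Project habitat suitability onto a hypothetical paleoclimate surface for one
# 500-year time slice; each row stands for one grid cell.
paleo_grid = rng.normal(loc=[17.0, 350.0], scale=[3.0, 100.0], size=(500, 2))
suitability = model.predict_proba(paleo_grid)[:, 1]
print("mean modeled suitability:", round(float(suitability.mean()), 3))
```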
Contributors: Soto-Berelov, Mariela (Author) / Fall, Patricia L. (Thesis advisor) / Myint, Soe (Committee member) / Turner, Billie L. (Committee member) / Falconer, Steven (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Overcrowding of Emergency Departments (EDs) puts the safety of patients at risk. Decision makers implement Ambulance Diversion (AD) as a way to relieve congestion and ensure timely treatment delivery. However, ineffective design of AD policies reduces accessibility to emergency care, and adverse events may arise. The objective of this dissertation is to propose methods to design and analyze effective AD policies that consider performance measures related to patient safety. First, a simulation-based methodology is proposed to evaluate the mean performance and variability of single-factor AD policies in a single-hospital environment, considering the trade-off between average waiting time and percentage of time spent on diversion. Regression equations are proposed to obtain parameters of AD policies that yield a desired performance level. The results suggest that policies based on the total number of patients waiting are more consistent and provide high precision in predicting policy performance. Then, a Markov Decision Process model is proposed to obtain the optimal AD policy, assuming that information to start treatment in a neighboring hospital is available. The model is designed to minimize the average tardiness per patient in the long run. Tardiness is defined as the time that patients have to wait beyond a safety time threshold to start receiving treatment. Theoretical and computational analyses show that there exists an optimal policy that is of threshold type, and that diversion can be a good alternative to decrease tardiness when ambulance patients cause excessive congestion in the ED. Furthermore, implementation of the AD policies in a simulation model that relaxes several of the assumptions suggests that the model provides consistent policies under multiple scenarios. Finally, a genetic algorithm is combined with simulation to design effective policies for multiple hospitals simultaneously. The model has the objective of minimizing the time that patients spend in non-value-added activities, including transportation, waiting, and boarding in the ED. Moreover, the AD policies are combined with simple ambulance destination policies to create ambulance flow control mechanisms. Results show that effective ambulance management can significantly reduce the time that patients have to wait to receive an appropriate level of care.
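The sketch below is a minimal, hypothetical illustration of the kind of single-factor diversion rule evaluated in the first phase: a single-server ED where ambulance arrivals are diverted whenever the number of patients waiting reaches a threshold, with average waiting time and percentage of time on diversion estimated by simulation. The arrival and service rates, horizon, and thresholds are assumptions for illustration, not values from the dissertation.

```python
# Hedged sketch: queue-length-threshold ambulance diversion in a single-server ED.
import random

def simulate(threshold, lam=5.0, mu=6.0, horizon=50_000.0, seed=1):
    """Return (average wait of admitted patients, fraction of time on diversion)."""
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    queue = []                 # arrival times of patients waiting for the single bay
    in_service = False
    waits, diversion_time, last_t = [], 0.0, 0.0

    while t < horizon:
        t = min(next_arrival, next_departure)
        if len(queue) >= threshold:      # ED was on diversion over (last_t, t]
            diversion_time += t - last_t
        last_t = t
        if t == next_arrival:
            if len(queue) < threshold:   # accept; otherwise the ambulance is diverted
                if in_service:
                    queue.append(t)
                else:
                    in_service = True
                    waits.append(0.0)
                    next_departure = t + rng.expovariate(mu)
            next_arrival = t + rng.expovariate(lam)
        else:                            # treatment completion
            if queue:
                waits.append(t - queue.pop(0))
                next_departure = t + rng.expovariate(mu)
            else:
                in_service = False
                next_departure = float("inf")
    return sum(waits) / len(waits), diversion_time / t

for k in (2, 5, 10):
    avg_wait, pct_divert = simulate(threshold=k)
    print(f"threshold={k:2d}  avg wait={avg_wait:.3f}  time on diversion={pct_divert:.1%}")
```

Sweeping the threshold in this way exposes the waiting-time versus time-on-diversion trade-off that the dissertation's regression equations are built to summarize.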
Contributors: Ramirez Nafarrate, Adrian (Author) / Fowler, John W. (Thesis advisor) / Wu, Teresa (Thesis advisor) / Gel, Esma S. (Committee member) / Limon, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Hepatocellular carcinoma (HCC) is a malignant tumor and the seventh most common cancer in humans. Every year there is a significant rise in the number of patients suffering from HCC. Most clinical research has focused on early detection of HCC, since early detection greatly improves the chances of patient survival. Emerging advancements in functional and structural imaging techniques have provided the ability to detect microscopic changes in the tumor microenvironment and microstructure. The prime focus of this thesis is to validate the applicability of an advanced imaging modality, Magnetic Resonance Elastography (MRE), for HCC diagnosis. The research was carried out on data from three HCC patients, and three sets of experiments were conducted. The main focus was on the quantitative aspects of MRE in conjunction with Texture Analysis, an advanced image-processing pipeline, and a multivariate machine learning method for accurate HCC diagnosis. We analyzed techniques for handling unbalanced data and evaluated the efficacy of sampling techniques. Along with this, we studied different machine learning algorithms and developed models using them. Performance metrics such as prediction accuracy, sensitivity, and specificity were used to evaluate the final model. We were able to identify the significant features in the dataset, and the selected classifier was robust in predicting the response class variable with high accuracy.
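As a hedged illustration of the evaluation workflow described above (not the actual patient data or imaging pipeline), the sketch below oversamples the minority class in the training split, fits a generic classifier, and reports accuracy, sensitivity, and specificity; the synthetic features merely stand in for hypothetical MRE stiffness and texture features.

```python
# Hedged sketch: class-imbalance handling by oversampling, plus the three
# metrics named in the abstract. All data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(450, 5))   # e.g., non-tumor samples (majority)
X_minor = rng.normal(0.8, 1.0, size=(50, 5))    # e.g., tumor samples (minority)
X = np.vstack([X_major, X_minor])
y = np.array([0] * 450 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training split only.
idx_min = np.where(y_tr == 1)[0]
idx_maj = np.where(y_tr == 0)[0]
idx_min_up = resample(idx_min, replace=True, n_samples=len(idx_maj), random_state=0)
idx_bal = np.concatenate([idx_maj, idx_min_up])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[idx_bal], y_tr[idx_bal])
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("accuracy   :", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```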
Contributors: Bansal, Gaurav (Author) / Wu, Teresa (Thesis advisor) / Mitchell, Ross (Thesis advisor) / Li, Jing (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Two critical limitations of hyperspatial imagery are high imagery variance and large data size. Although object-based analyses with a multi-scale framework for diverse object sizes are a solution, they require more data sources and large amounts of testing at high cost. In this study, I used tree density segmentation as the key element of a three-level hierarchical vegetation framework for reducing those costs, and a three-step procedure was used to evaluate its effects. A two-step procedure, which involved environmental stratifications and the random walker algorithm, was used for tree density segmentation. I determined whether variation in tone and texture could be reduced within environmental strata, and whether tree density segmentations could be labeled by species associations. At the final level, two tree density segmentations were partitioned into smaller subsets using eCognition in order to label individual species or tree stands in two test areas of two tree densities, and the Z values of Moran's I were used to evaluate whether image objects have mean values that differ from those of neighboring segmentations, as a measure of segmentation accuracy. The two-step procedure was able to delineate tree density segments and label species types robustly, compared to previous hierarchical frameworks. However, eCognition was not able to produce detailed, reasonable image objects with optimal scale parameters for species labeling. This hierarchical vegetation framework is applicable for fine-scale, time-series vegetation mapping to develop baseline data for evaluating climate change impacts on vegetation at low cost, using widely available data and a personal laptop.
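The sketch below illustrates one way to obtain a Z value for Moran's I on segment mean values, using a permutation null rather than the analytical variance; the adjacency matrix and segment means are hypothetical and are not taken from the study.

```python
# Hedged sketch: global Moran's I and a permutation-based Z value for segment means.
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for `values` given a symmetric spatial weights matrix."""
    z = values - values.mean()
    return len(values) / weights.sum() * (z @ weights @ z) / (z @ z)

def morans_z(values, weights, n_perm=999, seed=0):
    """Z score of the observed Moran's I against a random-permutation null."""
    rng = np.random.default_rng(seed)
    observed = morans_i(values, weights)
    null = np.array([morans_i(rng.permutation(values), weights) for _ in range(n_perm)])
    return (observed - null.mean()) / null.std(ddof=1)

# Toy example: five segments in a row; neighbors share similar mean tones.
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
means = np.array([0.9, 1.0, 1.1, 3.0, 3.2])   # hypothetical segment mean values
print("Moran's I Z value:", round(morans_z(means, W), 2))
```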
Contributors: Liau, Yan-ting (Author) / Franklin, Janet (Thesis advisor) / Turner, Billie (Committee member) / Myint, Soe (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues that limit its application to large-scale decision problems, specifically: (1) the curse of dimensionality, that is, a large number of pairwise comparisons need to be elicited from a decision maker (DM); (2) inconsistent preferences; and (3) imprecise preferences, both of which may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions to address these limitations in three phases, as follows. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets. This is done to derive the global weights of the elements from the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, it is noted that the optimal number of subsets is provided subjectively by the DM and hence is subject to biases and judgment errors. The second phase proposes a trade-off PCM decomposition methodology to decompose a PCM into a number of optimally identified subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A Non-Linear Programming model is then developed that calculates PCM element weights that maximize the preferences of the DM while simultaneously minimizing inconsistency. Comparison experiments are conducted using datasets collected from the literature to validate the proposed methodology.
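For context, the sketch below shows the standard eigenvector-based derivation of priority weights and Saaty's consistency ratio from a small PCM; it illustrates the quantities the framework works with, not the proposed decomposition models, and the example matrix is invented.

```python
# Hedged sketch: AHP priority weights (principal eigenvector) and consistency ratio.
import numpy as np

def ahp_weights(pcm):
    """Return (normalized priority weights, lambda_max) for a PCM."""
    eigvals, eigvecs = np.linalg.eig(pcm)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

# Saaty's random-index values for the consistency ratio CR = CI / RI.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

pcm = np.array([[1,   3,   5,   7],
                [1/3, 1,   3,   5],
                [1/5, 1/3, 1,   3],
                [1/7, 1/5, 1/3, 1]])       # illustrative 4x4 comparison matrix
w, lam_max = ahp_weights(pcm)
n = pcm.shape[0]
ci = (lam_max - n) / (n - 1)               # consistency index
print("weights:", np.round(w, 3), " CR:", round(ci / RI[n], 3))
```

A 4x4 PCM like this needs only 6 judgments; the number of comparisons grows as n(n-1)/2, which is the curse of dimensionality the decomposition phases are designed to mitigate.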
Contributors: Jalao, Eugene Rex Lazaro (Author) / Shunk, Dan L. (Thesis advisor) / Wu, Teresa (Thesis advisor) / Askin, Ronald G. (Committee member) / Goul, Kenneth M. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Land transformation under conditions of rapid urbanization has significantly altered the structure and functioning of Earth's systems. Land fragmentation, a characteristic of land transformation, is recognized as a primary driving force in the loss of biological diversity worldwide. However, little is known about its implications in complex urban settings where interaction with social dynamics is intense. This research asks: How do patterns of land cover and land fragmentation vary over time and space, and what are the socio-ecological drivers and consequences of land transformation in a rapidly growing city? Using Metropolitan Phoenix as a case study, the research links pattern and process relationships between land cover, land fragmentation, and socio-ecological systems in the region. It examines population growth, water provision, and institutions as major drivers of land transformation, and the changes in bird biodiversity that result from land transformation. How to manage socio-ecological systems is one of the biggest challenges of moving towards sustainability. This research project provides a deeper understanding of how land transformation affects socio-ecological dynamics in an urban setting. It uses a series of indices to evaluate land cover and fragmentation patterns over the past twenty years, including patch number, contagion, shape, and diversity indices. It then generates empirical evidence on the linkages between land cover patterns and ecosystem properties by exploring the drivers and impacts of land cover change. An interdisciplinary approach that integrates social, ecological, and spatial analysis is applied in this research. Findings of the research provide a documented dataset that can help researchers study the relationship between human activities and biotic processes in an urban setting, and contribute to sustainable urban development.
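As a hedged illustration of two of the indices mentioned above, the sketch below counts land-cover patches and computes Shannon land-cover diversity on a toy raster; the class codes and grid are invented and unrelated to the Phoenix data.

```python
# Hedged sketch: patch count and Shannon diversity from a small land-cover grid.
import numpy as np
from scipy import ndimage

landcover = np.array([[1, 1, 2, 2, 2],
                      [1, 1, 2, 3, 3],
                      [4, 4, 2, 3, 3],
                      [4, 4, 4, 3, 3],
                      [4, 4, 4, 4, 3]])   # e.g., 1=residential, 2=desert, 3=agriculture, 4=commercial

# Patch number: connected components, summed over land-cover classes.
n_patches = sum(ndimage.label(landcover == c)[1] for c in np.unique(landcover))

# Shannon diversity of land-cover class proportions.
_, counts = np.unique(landcover, return_counts=True)
p = counts / counts.sum()
shannon = -(p * np.log(p)).sum()

print("patches:", n_patches, " Shannon diversity:", round(float(shannon), 3))
```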
Contributors: Zhang, Sainan (Author) / Boone, Christopher G. (Thesis advisor) / York, Abigail M. (Committee member) / Myint, Soe (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system, or full-system level. Two issues are considered in this work: (1) information about design ideas is incomplete, informal, and sketchy; (2) designers often work at multiple levels, and different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis looks at the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: (1) the typical types of information available after an ideation session; (2) the typical types of technical evaluations done in early stages; (3) how to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and that each entry is expressed as some combination of a sketch, text, and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables, and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created and users were given instructions about incorporating them. To support quick analysis, differential equations are transformed to algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies show how it works.
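The sketch below gives a minimal sense of the interval-based, steady-state evaluation described above: two simple algebraic "physical effects" share a variable, and imprecise parameters are propagated as intervals. The tool itself is implemented in MATLAB; this Python interval class and the chosen effects (Ohm's law and Joule heating) are illustrative assumptions only.

```python
# Hedged sketch: propagating imprecise design parameters through two chained
# physical effects with interval arithmetic.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "divisor interval must exclude zero"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def __repr__(self):
        return f"[{self.lo:.3g}, {self.hi:.3g}]"

# Effect 1 (Ohm's law): I = V / R, with imprecise voltage and resistance.
V = Interval(11.5, 12.5)      # volts
R = Interval(4.0, 5.0)        # ohms
I = V / R

# Effect 2 (Joule heating): P = V * I, networked through the shared variable I.
P = V * I
print("current:", I, " power:", P)
```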
Contributors: Khorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Identifying important variation patterns is a key step to identifying root causes of process variability. This gives rise to a number of challenges. First, the variation patterns might be non-linear in the measured variables, while the existing research literature has focused on linear relationships. Second, it is important to remove noise from the dataset in order to visualize the true nature of the underlying patterns. Third, in addition to visualizing the pattern (preimage), it is also essential to understand the relevant features that define the process variation pattern. This dissertation addresses these challenges. A base kernel principal component analysis (KPCA) algorithm transforms the measurements to a high-dimensional feature space where non-linear patterns in the original measurements can be handled through linear methods. However, the principal component subspace in feature space might not be well estimated (especially from noisy training data). An ensemble procedure is constructed in which the final preimage is estimated as the average over bagged samples drawn from the original dataset, to attenuate noise in kernel subspace estimation. This improves the robustness of any base KPCA algorithm. In a second method, successive iterations of denoising a convex combination of the training data and the corresponding denoised preimage are used to produce a more accurate estimate of the actual denoised preimage for noisy training data. The number of primary eigenvectors chosen in each iteration is also decreased at a constant rate. An efficient stopping criterion is used to reduce the number of iterations. A feature selection procedure for KPCA is constructed to find the set of relevant features from noisy training data. Data points are projected onto sparse random vectors. Pairs of such projections are then matched, and the differences in variation patterns within pairs are used to identify the relevant features. This approach provides robustness to irrelevant features by calculating the final variation pattern from an ensemble of feature subsets. Experiments are conducted using several simulated as well as real-life data sets. The proposed methods show significant improvement over competing methods.
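As a hedged sketch of the bagging idea described above (not the dissertation's exact algorithm), the code below fits kernel PCA on bootstrap samples of noisy data and averages the reconstructed preimages; the half-moon data, number of components, and kernel settings are arbitrary illustrations.

```python
# Hedged sketch: bagged KPCA preimage estimation on noisy 2-D data.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA

X, _ = make_moons(n_samples=300, noise=0.15, random_state=0)   # noisy training data
rng = np.random.default_rng(0)

def kpca_preimage(train, query, n_components=4, gamma=5.0):
    """Project `query` onto the kernel subspace fit on `train`, then reconstruct."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True).fit(train)
    return kpca.inverse_transform(kpca.transform(query))

# Ensemble: average the preimages obtained from bootstrap (bagged) fits.
n_bags = 25
preimages = np.mean(
    [kpca_preimage(X[rng.integers(0, len(X), size=len(X))], X) for _ in range(n_bags)],
    axis=0,
)
print("denoised preimage shape:", preimages.shape)
```

Averaging over bootstrap fits attenuates the variability of any single kernel-subspace estimate, which is the robustness argument made for the ensemble procedure.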
Contributors: Sahu, Anshuman (Author) / Runger, George C. (Thesis advisor) / Wu, Teresa (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Buildings (approximately half commercial and half residential) consume over 70% of the electricity among all consumption units in the United States. Buildings are also responsible for approximately 40% of CO2 emissions, which is more than any other industry sector. As a result, the smart building initiative, which aims not only to manage electrical consumption efficiently but also to reduce the damaging effect of greenhouse gases on the environment, has been launched. Another important technology being promoted by government agencies is the smart grid, which manages energy usage across a wide range of buildings in an effort to reduce cost and increase reliability and transparency. While a great amount of effort has been devoted to these two initiatives, by either exploring smart grid designs or developing technologies for smart buildings, research on how smart buildings and the smart grid coordinate, and thus use energy more efficiently, is currently lacking. In this dissertation, a "system-of-systems" approach is employed to develop an integrated building model that consists of a number of buildings (a building cluster) interacting with the smart grid. The buildings can function as both energy consumption units and energy generation/storage units. Memetic Algorithm (MA) and Particle Swarm Optimization (PSO) based decision frameworks are developed for building operation decisions. In addition, a Particle Filter (PF) is explored as a means of fusing online sensor and meter data so that adaptive decisions can be made in response to a dynamic environment. The dissertation is divided into three inter-connected research components. First, an integrated building energy model, including building consumption, storage, and generation sub-systems, is developed for the building cluster. Then a bi-level Memetic Algorithm (MA) based decentralized decision framework is developed to identify the Pareto-optimal operation strategies for the building cluster. The Pareto solutions not only enable multi-dimensional tradeoff analysis, but also provide valuable insight for determining pricing mechanisms and power grid capacity. Second, a multi-objective PSO based decision framework is developed to reduce the computational effort of the MA based decision framework without sacrificing accuracy. With the improved performance, the decision time scale can be refined to make hourly operation decisions possible. Finally, by integrating the multi-objective PSO based decision framework with the PF, an adaptive framework is developed for adaptive operation decisions for the smart building cluster. The adaptive framework not only enables me to develop a high-fidelity decision model but also enables the building cluster to respond to the dynamics and uncertainties inherent in the system.
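The sketch below is a loose, single-objective illustration of a PSO-based operation decision for a small building cluster (choosing hourly battery charge/discharge levels to reduce grid-energy cost); it is not the dissertation's bi-level or multi-objective framework, and the load profile, prices, and battery limits are invented.

```python
# Hedged sketch: plain particle swarm optimization of a toy 24-hour dispatch.
import numpy as np

rng = np.random.default_rng(0)
hours = 24
load = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, hours))    # kWh per hour (hypothetical)
price = 0.10 + 0.08 * (np.arange(hours) > 16)                # $/kWh, evening peak (hypothetical)

def cost(x):
    """x[h] = battery discharge (+) or charge (-) in kWh; the grid covers the rest."""
    grid = np.clip(load - x, 0.0, None)
    return (grid * price).sum() + 0.01 * np.abs(x).sum()     # small penalty on cycling

n_particles, iters = 30, 200
pos = rng.uniform(-10, 10, size=(n_particles, hours))        # candidate dispatch schedules
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -10, 10)                        # battery power limits
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best dispatch cost: $%.2f" % pbest_cost.min())
```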
Contributors: Hu, Mengqi (Author) / Wu, Teresa (Thesis advisor) / Weir, Jeffery (Thesis advisor) / Wen, Jin (Committee member) / Fowler, John (Committee member) / Shunk, Dan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This research studies the computational performance of four different mixed integer programming (MIP) formulations for single machine scheduling problems with varying complexity. These formulations are based on (1) start and completion time variables, (2) time-index variables, (3) linear ordering variables, and (4) assignment and positional date variables. The objective functions studied are total weighted completion time, maximum lateness, number of tardy jobs, and total weighted tardiness. Based on the computational results, discussion and recommendations are made on which MIP formulation might work best for these problems. The performance of these formulations depends strongly on the objective function, the number of jobs, and the sum of the processing times of all the jobs. Two sets of inequalities are presented that can be used to improve the performance of the formulation with assignment and positional date variables. Further, this research is extended to single machine bicriteria scheduling problems in which jobs belong to either of two disjoint sets, each set having its own performance measure. These problems have been referred to as interfering job sets in the scheduling literature and have also been called multi-agent scheduling, where each agent's objective function is to be minimized. In the first single machine interfering problem (P1), the criteria of minimizing total completion time and the number of tardy jobs for the two sets of jobs are studied. A Forward SPT-EDD heuristic is presented that attempts to generate the set of non-dominated solutions. This specific problem is NP-hard. The computational efficiency of the heuristic is compared against the pseudo-polynomial algorithm proposed by Ng et al. [2006]. In the second single machine interfering job sets problem (P2), the criteria of minimizing total weighted completion time and maximum lateness are studied. This is an established NP-hard problem for which a Forward WSPT-EDD heuristic is presented that attempts to generate the set of supported points, and the solution quality is compared with MIP formulations. For both of these problems, all jobs are available at time zero and the jobs are not allowed to be preempted.
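As background for the heuristics named above, the sketch below evaluates the two classic dispatching rules they build on, SPT for total completion time and EDD for maximum lateness, on a small hypothetical job set; it is not the Forward SPT-EDD heuristic itself, and the job data are invented.

```python
# Hedged sketch: SPT and EDD sequencing on a single machine, with the two
# criteria (sum of completion times, maximum lateness) computed per sequence.
def schedule_metrics(jobs, order):
    """jobs: list of (processing_time, due_date); returns (sum C_j, L_max)."""
    t, total_completion, max_lateness = 0, 0, float("-inf")
    for j in order:
        p, d = jobs[j]
        t += p                          # completion time C_j of job j
        total_completion += t
        max_lateness = max(max_lateness, t - d)
    return total_completion, max_lateness

jobs = [(4, 10), (2, 6), (7, 22), (3, 9), (5, 14)]   # hypothetical (p_j, d_j)

spt = sorted(range(len(jobs)), key=lambda j: jobs[j][0])   # shortest processing time first
edd = sorted(range(len(jobs)), key=lambda j: jobs[j][1])   # earliest due date first

print("SPT (sum C_j, L_max):", schedule_metrics(jobs, spt))
print("EDD (sum C_j, L_max):", schedule_metrics(jobs, edd))
```

SPT minimizes total completion time and EDD minimizes maximum lateness when each criterion is taken alone; the bicriteria problems above trade these objectives off between the two interfering job sets.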
Contributors: Khowala, Ketan (Author) / Fowler, John (Thesis advisor) / Keha, Ahmet (Thesis advisor) / Balasubramanian, Hari J. (Committee member) / Wu, Teresa (Committee member) / Zhang, Muhong (Committee member) / Arizona State University (Publisher)
Created: 2012