Matching Items (7)

Description

The growing use of synthetic populations, which are disaggregate representations of an area's population that approximate the real population at present or in the future, has motivated analysis of the sensitivity of the population generation procedure. New methods in PopGen have enhanced the generation of synthetic populations so that both household-level and person-level characteristics of interest can be matched in a computationally efficient manner. During setup, population synthesis procedures require sample records for households and persons in order to match marginal totals for a specific set of control variables, at either both the household and person levels or the household level only, for a specific geographic resolution. In this study, sensitivity was analyzed by varying the number of controls, with and without person controls. Alternative constraint specifications were applied to a sample of three hundred block groups in Maricopa County, Arizona. Two datasets were used: Census 2000 alone and a combination of the Census 2000 and ACS 2005-2009 datasets. The variation in results for two rounding methods, arithmetic and bucket rounding, was also examined. Finally, the combined sample prepared from the Census 2000 and ACS 2005-2009 datasets was used to investigate how the results differ when there is greater flexibility in drawing households. The study shows that fewer constraints at both the household and person levels match the aggregate total population more accurately but fail to match the distributions of individual attributes; a greater number of attributes at both levels therefore needs to be controlled. Where the number of controls is higher, bucket rounding improves the accuracy of the results at both the aggregate and disaggregate levels. Using the combined sample gives the software more flexibility as well as a richer seed matrix from which to draw households, which generates a more accurate synthetic population. The combined sample is therefore another potential option for improving accuracy in matching both aggregate- and disaggregate-level household and person distributions.
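
The distinction between the two rounding methods examined above can be sketched in a few lines. This is a hypothetical illustration, not code from the thesis or from PopGen; it only shows why bucket rounding, which carries the rounding residual forward, preserves aggregate totals better than independent arithmetic rounding when control counts are small.

```python
def arithmetic_round(weights):
    """Round each fractional weight independently to the nearest integer."""
    return [int(round(w)) for w in weights]

def bucket_round(weights):
    """Round sequentially, carrying the residual forward so the total is preserved."""
    rounded, residual = [], 0.0
    for w in weights:
        adjusted = w + residual
        r = int(round(adjusted))
        residual = adjusted - r   # carry the leftover fraction to the next weight
        rounded.append(r)
    return rounded

# Hypothetical fractional household weights produced by a synthesizer
weights = [0.4, 0.4, 0.4, 0.4, 0.4]
print(sum(weights), sum(arithmetic_round(weights)), sum(bucket_round(weights)))
# arithmetic rounding drops the total from 2.0 to 0; bucket rounding keeps it at 2
```
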
Contributors: Dey, Rumpa Rani (Author) / Pendyala, Ram M. (Thesis advisor) / Ahn, Soyoung (Committee member) / Mamlouk, Michael S. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Bank institutions employ several marketing strategies to maximize new customer acquisition as well as current customer retention. Telemarketing is one such approach, in which individual customers are contacted by bank representatives with offers. These telemarketing strategies can be improved by combining them with data mining techniques that allow prediction of customer information and interests. In this thesis, bank telemarketing data from a Portuguese banking institution were analyzed to determine the predictability of several client demographic and financial attributes and to identify the most influential factors for each. Data were preprocessed to ensure quality, and data mining models were then generated for the attributes with logistic regression, support vector machine (SVM), and random forest, using Orange as the data mining tool. Results were analyzed using precision, recall, and F1 score.
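
A workflow of the kind described (the thesis itself uses Orange) can be sketched with scikit-learn. This is a hedged illustration rather than the author's pipeline: the file name bank.csv and the target column y are assumptions modeled on the public UCI Bank Marketing dataset, and the preprocessing is reduced to simple one-hot encoding.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, f1_score

# Assumed file and column names, modeled on the UCI Bank Marketing data.
data = pd.read_csv("bank.csv", sep=";")
X = pd.get_dummies(data.drop(columns=["y"]))   # one-hot encode categorical attributes
y = (data["y"] == "yes").astype(int)           # 1 if the client subscribed a term deposit

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: P={precision_score(y_test, pred):.3f} "
          f"R={recall_score(y_test, pred):.3f} F1={f1_score(y_test, pred):.3f}")
```
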
Contributors: Ejaz, Samira (Author) / Davulcu, Hasan (Thesis advisor) / Balasooriya, Janaka (Committee member) / Candan, Kasim (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Researchers who conduct longitudinal studies are inherently interested in studying individual and population changes over time (e.g., mathematics achievement, subjective well-being). To answer such research questions, models of change (e.g., growth models) make the assumption of longitudinal measurement invariance. In many applied situations, key constructs are measured by a collection of ordered-categorical indicators (e.g., Likert scale items). To evaluate longitudinal measurement invariance with ordered-categorical indicators, a set of hierarchical models can be sequentially tested and compared. If the statistical tests of measurement invariance fail to be supported for one of the models, it is useful to have a method with which to gauge the practical significance of the differences in measurement model parameters over time. Drawing on studies of latent growth models and second-order latent growth models with continuous indicators (e.g., Kim & Willson, 2014a; 2014b; Leite, 2007; Wirth, 2008), this study examined the performance of a potential sensitivity analysis to gauge the practical significance of violations of longitudinal measurement invariance for ordered-categorical indicators using second-order latent growth models. The change in the estimate of the second-order growth parameters following the addition of an incorrect level of measurement invariance constraints at the first-order level was used as an effect size for measurement non-invariance. This study investigated how sensitive the proposed sensitivity analysis was to different locations of non-invariance (i.e., non-invariance in the factor loadings, the thresholds, and the unique factor variances) given a sufficient sample size. This study also examined whether the sensitivity of the proposed sensitivity analysis depended on a number of other factors including the magnitude of non-invariance, the number of non-invariant indicators, the number of non-invariant occasions, and the number of response categories in the indicators.
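
The sensitivity analysis described above uses a simple quantity as its effect size: the change in the second-order growth parameter estimates after an incorrect level of invariance is imposed at the first-order level. A schematic sketch of that computation follows; the parameter names and values are illustrative, not taken from the dissertation, and the two sets of estimates are assumed to come from already-fitted second-order growth models.

```python
def noninvariance_effect_size(correct_estimates, constrained_estimates):
    """Change in each second-order growth parameter (e.g., intercept and slope means)
    when an incorrect invariance constraint is added at the first-order level."""
    return {
        name: constrained_estimates[name] - correct_estimates[name]
        for name in correct_estimates
    }

# Illustrative values only: estimates from a correctly specified model vs. a model
# that wrongly constrains loadings/thresholds to be equal across occasions.
correct = {"intercept_mean": 0.00, "slope_mean": 0.50}
constrained = {"intercept_mean": 0.03, "slope_mean": 0.41}
print(noninvariance_effect_size(correct, constrained))  # e.g., slope_mean biased by -0.09
```
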
Contributors: Liu, Yu, Ph.D. (Author) / West, Stephen G. (Thesis advisor) / Tein, Jenn-Yun (Thesis advisor) / Green, Samuel (Committee member) / Grimm, Kevin J. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

City administrators and real-estate developers have been setting rather aggressive energy efficiency targets. This, in turn, has led building science research groups across the globe to focus on urban-scale building performance studies and the level of abstraction associated with such simulations. The increasing maturity of stakeholders with respect to energy efficiency and comfortable working environments has led researchers to develop methodologies and tools for addressing policy-driven interventions, whether urban-level energy systems, buildings' operational optimization, or retrofit guidelines. Typically, these large-scale simulations are carried out by grouping buildings based on their design similarities, i.e., standardization of the buildings. Such an approach does not necessarily yield working inputs that make decision-making effective. To address this, a novel approach is proposed in the present study.

The principal objective of this study is to propose, define, and evaluate a methodology that uses machine learning algorithms to define representative building archetypes for stock-level building energy modeling (SBEM) based on an operational parameter database. The study uses "Phoenix-climate"-based CBECS-2012 survey microdata for analysis and validation.

Using the database, parameter correlations are studied to understand the relation between the input parameters and energy performance. Contrary to precedent, the study establishes that energy performance is better explained by non-linear models.

This non-linear behavior is captured by advanced learning algorithms. Based on these algorithms, the buildings under study are grouped into meaningful clusters. The cluster medoids (statistically, the buildings that best represent the centroid of each cluster) are established to identify the level of abstraction acceptable for whole-building energy simulations and, subsequently, for retrofit decision-making. The methodology is further validated by conducting Monte Carlo simulations on 13 key input simulation parameters; the sensitivity analysis of these 13 parameters is used to identify the optimum retrofits.
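
The archetype idea above (cluster the building stock, then represent each cluster by a real "medoid" building) can be sketched with standard tooling. This is a minimal illustration, not the study's pipeline: it assumes a numeric feature matrix of operational parameters has already been assembled from the CBECS microdata, clusters it with k-means, and then picks the surveyed building closest to each centroid as the representative archetype.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def building_archetypes(features, n_clusters=5, random_state=0):
    """Cluster buildings on operational parameters and return the cluster labels plus,
    for each cluster, the index of the real building nearest the centroid
    (a medoid-like representative archetype)."""
    X = StandardScaler().fit_transform(features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X)
    archetypes = []
    for k, center in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == k)[0]
        nearest = members[np.argmin(np.linalg.norm(X[members] - center, axis=1))]
        archetypes.append(int(nearest))
    return km.labels_, archetypes

# Hypothetical operational-parameter matrix (rows = buildings, columns = parameters)
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 8))
labels, archetypes = building_archetypes(features, n_clusters=4)
print(archetypes)  # row indices of the representative buildings
```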

From the sample analysis, the building's energy use intensity (EUI) is found to be more sensitive to the envelope parameters than to the other inputs, and retrofit packages should therefore be directed at them to maximize the reduction in energy use.
Contributors: Pathak, Maharshi P. (Author) / Reddy, T. Agami (Thesis advisor) / Addison, Marlin (Committee member) / Bryan, Harvey (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

The modeling and simulation of airflow dynamics in buildings has many applications including indoor air quality and ventilation analysis, contaminant dispersion prediction, and the calculation of personal occupant exposure. Multi-zone airflow model software programs provide such capabilities in a manner that is practical for whole building analysis. This research addresses the need for calibration methodologies to improve the prediction accuracy of multi-zone software programs. Of particular interest is accurate modeling of airflow dynamics in response to extraordinary events, i.e. chemical and biological attacks. This research developed and explored a candidate calibration methodology which utilizes tracer gas (e.g., CO2) data. A key concept behind this research was that calibration of airflow models is a highly over-parameterized problem and that some form of model reduction is imperative. Model reduction was achieved by proposing the concept of macro-zones, i.e. groups of rooms that can be combined into one zone for the purposes of predicting or studying dynamic airflow behavior under different types of stimuli. The proposed calibration methodology consists of five steps: (i) develop a "somewhat" realistic or partially calibrated multi-zone model of a building so that the subsequent steps yield meaningful results, (ii) perform an airflow-based sensitivity analysis to determine influential system drivers, (iii) perform a tracer gas-based sensitivity analysis to identify macro-zones for model reduction, (iv) release CO2 in the building and measure tracer gas concentrations in at least one room within each macro-zone (some replication in other rooms is highly desirable) and use these measurements to further calibrate aggregate flow parameters of macro-zone flow elements so as to improve the model fit, and (v) evaluate model adequacy of the updated model based on some metric. The proposed methodology was first evaluated with a synthetic building and subsequently refined using actual measured airflows and CO2 concentrations for a real building. The airflow dynamics of the buildings analyzed were found to be dominated by the HVAC system. In such buildings, rectifying differences between measured and predicted tracer gas behavior should focus on factors impacting room air change rates first and flow parameter assumptions between zones second.
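
Step (iv) of the methodology relies on how tracer gas concentration responds to zone airflow. A minimal single-zone sketch of that relationship (a well-mixed CO2 mass balance, not the multi-zone model used in the research) shows how a measured decay curve maps to an air change rate; the parameter values are illustrative.

```python
import numpy as np

def co2_decay(c0, c_outdoor, ach, hours):
    """Well-mixed single-zone tracer decay: dC/dt = -ACH * (C - C_out)."""
    return c_outdoor + (c0 - c_outdoor) * np.exp(-ach * hours)

def estimate_ach(times_h, concentrations, c_outdoor):
    """Back out the air change rate from a measured decay curve via a log-linear fit."""
    y = np.log(np.asarray(concentrations) - c_outdoor)
    slope, _ = np.polyfit(times_h, y, 1)
    return -slope  # air changes per hour

# Illustrative "measurements": 2000 ppm initial CO2 decaying toward 400 ppm outdoors
t = np.linspace(0, 3, 13)
measured = co2_decay(2000.0, 400.0, ach=1.5, hours=t)
print(estimate_ach(t, measured, 400.0))  # recovers ~1.5 ACH
```
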
Contributors: Snyder, Steven Christopher (Author) / Reddy, T. Agami (Thesis advisor) / Addison, Marlin S. (Committee member) / Bryan, Harvey J. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Cancer is a worldwide burden in every aspect: physically, emotionally, and financially. A need for innovation in cancer research has led to a vast interdisciplinary effort to search for the next breakthrough. Mathematical modeling allows for a unique look into the underlying cellular dynamics and allows for testing treatment strategies without the need for clinical trials. This dissertation explores several iterations of a dendritic cell (DC) therapy model and correspondingly investigates what each iteration teaches about response to treatment.

In Chapter 2, motivated by the work of de Pillis et al. (2013), a mathematical model employing six ordinary differential equations (ODEs) and delay differential equations (DDEs) is formulated to understand the effectiveness of DC vaccines, accounting for cell trafficking with a blood and tumor compartment. A preliminary analysis is performed, with numerical simulations used to show the existence of oscillatory behavior. The model is then reduced to a system of four ODEs. Both models are validated using experimental data from melanoma-induced mice. Conditions under which the model admits rich dynamics observed in a clinical setting, such as periodic solutions and bistability, are established. Mathematical analysis proves the existence of a backward bifurcation and establishes thresholds for R0 that ensure tumor elimination or persistence. A sensitivity analysis determines which parameters most significantly impact the reproduction number R0. Identifiability analysis reveals parameters of interest for estimation. Results are framed in terms of treatment implications, including effective combination and monotherapy strategies.
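
The kind of model described here can be illustrated, in greatly simplified form, by a generic two-equation tumor–immune system integrated numerically. This is not the dissertation's six-equation DC model (nor the de Pillis et al. formulation); the equations and parameter values are placeholders chosen only to show the simulation workflow behind such analyses.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tumor_immune(t, y, r, K, a, s, d, k):
    """Toy tumor–immune interaction: logistic tumor growth killed by effector cells,
    effector cells recruited by tumor presence and cleared at a constant rate."""
    T, E = y
    dT = r * T * (1 - T / K) - a * T * E          # tumor: growth minus immune kill
    dE = s + k * T * E / (1 + T) - d * E          # effectors: source, recruitment, decay
    return [dT, dE]

params = dict(r=0.5, K=1e3, a=0.02, s=0.1, d=0.3, k=0.05)  # illustrative values only
sol = solve_ivp(tumor_immune, (0, 200), [50.0, 1.0],
                args=tuple(params.values()), dense_output=True, max_step=0.5)
print(sol.y[0, -1], sol.y[1, -1])  # final tumor and effector levels
```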

Chapter 3 studies whether the observed complexity can be represented by a simplified model. The DC model of Chapter 2 is reduced to a non-dimensional system of two DDEs. Mathematical and numerical analysis explore the impact of the immune response time on the stability and eradication of the tumor, including an analytical proof of the conditions necessary for the existence of a Hopf bifurcation. In a limiting case, conditions for global stability of the tumor-free equilibrium are outlined.

Lastly, Chapter 4 discusses future directions to explore. There still remain open questions to investigate and much work to be done, particularly involving uncertainty analysis. An outline of these steps is provided for future undertakings.
Contributors: Dickman, Lauren (Author) / Kuang, Yang (Thesis advisor) / Baer, Steven M. (Committee member) / Gardner, Carl (Committee member) / Gumel, Abba B. (Committee member) / Kostelich, Eric J. (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Global emissions of carbon dioxide have been reaching new heights every year since the Industrial Revolution, with fossil fuel consumption a major contributor. The consumption trend makes this clear. It has also strengthened the argument for cutting emissions and removing historical emissions through the implementation of Carbon Capture, Utilization, and Storage (CCUS) and Carbon Dioxide Removal (CDR) technologies, respectively, as required to control global warming. Direct Air Capture (DAC) is one of the CDR technologies. Extensive research and projections suggest that DAC has tremendous potential to help achieve global climate change mitigation goals. The feasibility of DAC is proven, but work is required to bridge gaps in DAC research to make it affordable and scalable. Process modelling is one approach used to address these concerns. Current DAC research in system design and modelling is fragmented, and existing models have limited use cases. This work is focused on the development of a generalized process mass transfer model for the capture stage of solid-sorbent DAC contactors. It provides flexibility in defining contactor geometry and selecting ambient conditions, and the versatility to plug in different sorbents for CO2 capture. The modelling procedure is explained, and a robustness check is performed to ensure model integrity. The results of the robustness check and sensitivity analysis are then presented. This research is part of a long-term effort to create a complete modelling package for the DAC community, boosting research and development toward large-scale deployments.
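
The capture-stage mass transfer the abstract describes can be illustrated with a very reduced sorbent-uptake sketch. This is not the generalized contactor model developed in the thesis: it is a single linear-driving-force (LDF) approximation with a linear isotherm and illustrative parameter values, meant only to show how a sorbent term would "plug in" to a larger process model.

```python
import numpy as np

def ldf_uptake(k_ldf, q_eq, q0, t):
    """Linear-driving-force uptake: dq/dt = k_ldf * (q_eq - q), solved analytically."""
    return q_eq - (q_eq - q0) * np.exp(-k_ldf * t)

def linear_isotherm(c_co2, henry_k):
    """Equilibrium loading for a (hypothetical) linear isotherm, mol CO2 / kg sorbent."""
    return henry_k * c_co2

# Illustrative ambient conditions and sorbent properties (assumed, not from the thesis)
c_co2 = 0.017                                  # mol/m^3, roughly 420 ppm air at ambient
q_eq = linear_isotherm(c_co2, henry_k=60.0)    # equilibrium loading, mol/kg
t = np.linspace(0, 3600, 7)                    # one hour of capture, seconds
print(ldf_uptake(k_ldf=2e-3, q_eq=q_eq, q0=0.0, t=t))
```
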
Contributors: Patel, Kshitij Mukeshbhai (Author) / Green, Matthew D. (Thesis advisor) / Lackner, Klaus S. (Committee member) / Cirucci, John (Committee member) / Arizona State University (Publisher)
Created: 2023