Matching Items (6)

Description
Under the framework of intelligent management of power grids by leveraging advanced information, communication and control technologies, a primary objective of this study is to develop novel data mining and data processing schemes for several critical applications that can enhance the reliability of power systems. Specifically, this study is broadly organized into the following two parts: I) spatio-temporal wind power analysis for wind generation forecast and integration, and II) data mining and information fusion of synchrophasor measurements toward secure power grids. Part I is centered around wind power generation forecast and integration. First, a spatio-temporal analysis approach for short-term wind farm generation forecasting is proposed. Specifically, using extensive measurement data from an actual wind farm, the probability distribution and the level crossing rate of wind farm generation are characterized using tools from graphical learning and time-series analysis. Built on these spatial and temporal characterizations, finite state Markov chain models are developed, and a point forecast of wind farm generation is derived using the Markov chains. Then, multi-timescale scheduling and dispatch with stochastic wind generation and opportunistic demand response is investigated. Part II focuses on incorporating the emerging synchrophasor technology into the security assessment and the post-disturbance fault diagnosis of power systems. First, a data-mining framework is developed for on-line dynamic security assessment (DSA) by using adaptive ensemble decision tree learning of real-time synchrophasor measurements. Under this framework, novel on-line DSA schemes are devised, aiming to handle various factors (including variations of operating conditions, forced system topology change, and loss of critical synchrophasor measurements) that can have a significant impact on the performance of conventional data-mining-based on-line DSA schemes. Then, in the context of post-disturbance analysis, fault detection and localization of line outages is investigated using a dependency graph approach. It is shown that a dependency graph for voltage phase angles can be built according to the interconnection structure of the power system, and line outage events can be detected and localized through networked data fusion of the synchrophasor measurements collected from multiple locations of the power grid. Along a more practical avenue, a decentralized networked data fusion scheme is proposed for efficient fault detection and localization.
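
As a rough illustration of the Markov-chain forecasting step described in Part I (the bin count, state discretization, and synthetic data below are assumptions for demonstration, not the author's actual models), a finite-state chain can be fit to a wind-power series and queried for a one-step point forecast:

```python
import numpy as np

def fit_markov_chain(power, n_states=10, p_max=1.0):
    """Quantize a wind-power series into states and estimate transition probabilities."""
    edges = np.linspace(0.0, p_max, n_states + 1)
    # State index of each sample (clip so p_max falls in the last bin).
    states = np.clip(np.digitize(power, edges) - 1, 0, n_states - 1)
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    # Row-normalize to transition probabilities (uniform row if a state never occurs).
    row_sums = counts.sum(axis=1, keepdims=True)
    trans = np.divide(counts, row_sums, out=np.full_like(counts, 1.0 / n_states),
                      where=row_sums > 0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return trans, centers, states

def point_forecast(trans, centers, current_state):
    """One-step point forecast: conditional mean over the next state's bin centers."""
    return float(trans[current_state] @ centers)

# Toy usage with synthetic data (a real study would use measured farm output).
rng = np.random.default_rng(0)
series = np.clip(np.cumsum(rng.normal(0, 0.05, 1000)) % 1.0, 0, 1)
T, c, s = fit_markov_chain(series, n_states=10, p_max=1.0)
print("forecast:", point_forecast(T, c, s[-1]))
```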
Contributors: He, Miao (Author) / Zhang, Junshan (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Hedman, Kory (Committee member) / Si, Jennie (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Given the importance of buildings as major consumers of resources worldwide, several organizations are working avidly to ensure the negative impacts of buildings are minimized. The U.S. Green Building Council's (USGBC) Leadership in Energy and Environmental Design (LEED) rating system is one such effort to recognize buildings that are designed to achieve superior performance in several areas, including energy consumption and indoor environmental quality (IEQ). The primary objectives of this study are to investigate the performance of LEED-certified facilities in terms of energy consumption and occupant satisfaction with IEQ, and to introduce a framework to assess the performance of LEED-certified buildings.

This thesis attempts to achieve the research objectives by examining the LEED certified buildings on the Arizona State University (ASU) campus in Tempe, AZ, from two complementary perspectives: the Macro-level and the Micro-level. Heating, cooling, and electricity data were collected from the LEED-certified buildings on campus, and their energy use intensity was calculated in order to investigate the buildings' actual energy performance. Additionally, IEQ occupant satisfaction surveys were used to investigate users' satisfaction with the space layout, space furniture, thermal comfort, indoor air quality, lighting level, acoustic quality, water efficiency, cleanliness and maintenance of the facilities they occupy.
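
For reference, the energy use intensity (EUI) metric behind the Macro-level energy comparison is simply annual site energy normalized by gross floor area; the sketch below uses placeholder figures and units (kBtu, square feet), not actual ASU data:

```python
def energy_use_intensity(heating_kbtu, cooling_kbtu, electricity_kbtu, floor_area_sqft):
    """EUI in kBtu per square foot per year: total site energy over gross floor area."""
    total_kbtu = heating_kbtu + cooling_kbtu + electricity_kbtu
    return total_kbtu / floor_area_sqft

# Placeholder figures for a hypothetical campus building (not actual ASU data).
eui = energy_use_intensity(heating_kbtu=1.2e6, cooling_kbtu=2.5e6,
                           electricity_kbtu=3.1e6, floor_area_sqft=120_000)
print(f"EUI: {eui:.1f} kBtu/sqft/yr")  # compared against a regional benchmark
```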

From a Macro-level perspective, the results suggest ASU LEED buildings consume less energy than regional counterparts, and exhibit higher occupant satisfaction than national counterparts. The occupant satisfaction results are in line with the literature on LEED buildings, whereas the energy results contribute to the inconclusive body of knowledge on energy performance improvements linked to LEED certification. From a Micro-level perspective, the data analysis suggests an inconsistency between the LEED points earned for the Energy & Atmosphere and IEQ categories, on the one hand, and the respective levels of energy consumption and occupant satisfaction on the other. Accordingly, this study showcases the variation in performance results when approached from different perspectives. This contribution highlights the need to consider the Macro-level and Micro-level assessments in tandem, and to assess LEED building performance from these two distinct but complementary perspectives in order to develop a more comprehensive understanding of actual building performance.
Contributors: Chokor, Abbas (Author) / El Asmar, Mounir (Thesis advisor) / Chong, Oswald (Committee member) / Parrish, Kristen (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Resilience is emerging as the preferred way to improve the protection of infrastructure systems beyond established risk management practices. Massive damages experienced during tragedies like Hurricane Katrina showed that risk analysis is incapable of preventing unforeseen infrastructure failures and shifted expert focus toward resilience to absorb and recover from adverse events. Recent exponential growth in research is now producing consensus on how to think about infrastructure resilience, centered on definitions and models from influential organizations like the US National Academy of Sciences. Despite widespread efforts, massive infrastructure failures in 2017 demonstrate that resilience is still not working, raising the question: are the ways people think about resilience producing resilient infrastructure systems?

This dissertation argues that established thinking harbors misconceptions about infrastructure systems that diminish attempts to improve their resilience. Widespread efforts based on the current canon focus on improving data analytics, establishing resilience goals, reducing failure probabilities, and measuring cascading losses. Unfortunately, none of these pursuits change the resilience of an infrastructure system, because none of them result in knowledge about how data is used, goals are set, or failures occur. Through the examination of each misconception, this dissertation develops practical, new approaches for infrastructure systems to respond to unforeseen failures via sensing, adapting, and anticipating processes. Specifically, infrastructure resilience is improved by sensing when data analytics include the modeler-in-the-loop, adapting to stress contexts by switching between multiple resilience strategies, and anticipating crisis coordination activities prior to experiencing a failure.

Overall, results demonstrate that current resilience thinking needs to change because it does not differentiate resilience from risk. The majority of research thinks resilience is a property that a system has, like a noun, when resilience is really an action a system does, like a verb. Treating resilience as a noun only strengthens commitment to risk-based practices that do not protect infrastructure from unknown events. Instead, switching to thinking about resilience as a verb overcomes prevalent misconceptions about data, goals, systems, and failures, and may bring a necessary, radical change to the way infrastructure is protected in the future.
Contributors: Eisenberg, Daniel Alexander (Author) / Seager, Thomas P. (Thesis advisor) / Park, Jeryang (Thesis advisor) / Alderson, David L. (Committee member) / Lai, Ying-Cheng (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Our daily life is becoming more and more reliant on services provided by infrastructures: power, gas, and communication networks. Ensuring the security of these infrastructures is of utmost importance. This task becomes ever more challenging as the inter-dependence among these infrastructures grows and a security breach in one infrastructure can spill over to the others. The implication is that the security practices and analyses recommended for these infrastructures should be carried out in coordination. This thesis, focusing on the power grid, explores strategies to secure the system that look into the coupling of the power grid to the cyber infrastructure used to manage and control it, and to the gas grid, which supplies an increasing amount of reserves to overcome contingencies.

The first part (Part I) of the thesis, including chapters 2 through 4, focuses on the coupling of the power grid and the cyber infrastructure that is used for its control and operations. The goal is to detect malicious attacks that gain information about the operation of the power grid in order to later attack the system. In chapter 2, we propose a hierarchical architecture that correlates the analysis of high-resolution Micro-Phasor Measurement Unit (microPMU) data with traffic analysis of the Supervisory Control and Data Acquisition (SCADA) packets, to infer the security status of the grid and detect the presence of possible intruders. An essential part of this architecture is tied to the analysis of the microPMU data. In chapter 3, we establish a set of anomaly detection rules on microPMU data that flag "abnormal behavior". A placement strategy of microPMU sensors is also proposed to maximize the sensitivity in detecting anomalies. In chapter 4, we focus on developing rules that can localize the source of events using microPMU data, to further check whether a cyber attack is causing the anomaly, by correlating SCADA traffic with the microPMU data analysis results. The thread that unifies the data analysis in this chapter is the fact that decisions are made without fully estimating the state of the system; on the contrary, decisions are made using a set of physical measurements that falls short, by orders of magnitude, of what is needed for observability. More specifically, in the first part of this chapter (sections 4.1-4.2), using microPMU data in the substation, methodologies for online identification of the source Thevenin parameters are presented. This methodology is used to identify reconnaissance activity on the normally-open switches in the substation, initiated by attackers to gauge their controllability over the cyber network. The application of this methodology to monitoring the voltage stability of the grid is also discussed. In the second part of this chapter (sections 4.3-4.5), we investigate the localization of faults. Since the number of PMU sensors available to carry out the inference is insufficient to ensure observability, the problem can be viewed as that of under-sampling a "graph signal"; the analysis leads to a PMU placement strategy that can achieve the highest resolution in localizing the fault for a given number of sensors. In both cases, the results of the analysis are leveraged in the detection of cyber-physical attacks, where microPMU data and relevant SCADA network traffic information are compared to determine whether a network breach has affected the integrity of the system information and/or operations.

In the second part of this thesis (Part II), the security analysis considers the adequacy and reliability of schedules for the gas and power networks. Scheduling supply jointly in gas and power networks is motivated by the increasing reliance of power grids on natural gas generators (and, indirectly, on gas pipelines) to provide critical reserves. Chapter 5 focuses on unveiling the challenges of this problem and providing solutions to it.
Contributors: Jamei, Mahdi (Author) / Scaglione, Anna (Thesis advisor) / Ayyanar, Raja (Committee member) / Hedman, Kory W (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The analysis of clinical workflow offers many challenges to clinical stakeholders and researchers, especially in environments characterized by dynamic and concurrent processes. Workflow analysis in such environments is essential for monitoring performance and finding bottlenecks and sources of error. Clinical workflow analysis has been enhanced with the inclusion of modern technologies. One such intervention is automated location tracking, a system that detects the movement of clinicians and equipment. Utilizing the data produced by automated location tracking technologies can lead to the development of novel workflow analytics that complement more traditional approaches such as ethnography and grounded-theory-based qualitative methods. The goals of this research are to: (i) develop a series of analytic techniques to derive deeper workflow-related insight in an emergency department setting, (ii) overlay data from disparate sources (quantitative and qualitative) to develop strategies that facilitate workflow redesign, and (iii) incorporate visual analytics methods to improve the targeted visual feedback received by providers based on the findings. The overarching purpose is to create a framework that demonstrates the utility of automated location tracking data used in conjunction with clinical data such as EHR logs, and its vital role in the future of clinical workflow analysis and analytics. This document is organized around the two primary aims of the research. The first aim deals with the use of automated location tracking data to develop a novel methodological and exploratory framework for clinical workflow. The second aim is to overlay the quantitative data generated from the previous aim on data from qualitative observation and shadowing studies (mixed methods) to develop a deeper view of clinical workflow that can be used to facilitate workflow redesign. The final sections of the document speculate on the direction of this work, discussing the potential of this research in the creation of fully integrated clinical environments, i.e., environments with state-of-the-art location tracking and other data collection mechanisms. The main purpose of this research is to demonstrate ways by which clinical processes can be continuously monitored, allowing for proactive adaptations in the face of technological and process changes to minimize any negative impact on the quality of patient care and provider satisfaction.
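
As a hypothetical illustration of the kind of quantitative workflow metric derivable from automated location tracking logs (the event schema, zone names, and clinician identifiers below are invented for demonstration and are not the study's data), dwell time per clinical zone can be aggregated from badge entry/exit events:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical badge-tracking events: (clinician, zone, entry time, exit time).
events = [
    ("RN_01", "triage",      "2018-03-01T08:00:00", "2018-03-01T08:12:00"),
    ("RN_01", "exam_room_3", "2018-03-01T08:12:00", "2018-03-01T08:40:00"),
    ("MD_02", "exam_room_3", "2018-03-01T08:20:00", "2018-03-01T08:35:00"),
]

def dwell_time_minutes(events):
    """Total minutes each clinician spends in each zone (illustrative metric only)."""
    totals = defaultdict(float)
    for who, zone, t_in, t_out in events:
        delta = datetime.fromisoformat(t_out) - datetime.fromisoformat(t_in)
        totals[(who, zone)] += delta.total_seconds() / 60.0
    return dict(totals)

print(dwell_time_minutes(events))
```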
Contributors: Vankipuram, Akshay (Author) / Patel, Vimla L. (Thesis advisor) / Wang, Dongwen (Thesis advisor) / Shortliffe, Edward H (Committee member) / Kaufman, David R. (Committee member) / Traub, Stephen J (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Text classification is a rapidly evolving area of data mining, while requirements engineering is a less-explored area of software engineering that deals with the process of defining, documenting and maintaining a software system's requirements. When researchers began to blend these two streams, work emerged on automating the classification of software requirements statements into categories easily comprehensible to developers for faster development and delivery, a task that until now was mostly done manually by software engineers and is indeed a tedious job. However, most of the research has focused on the classification of non-functional requirements pertaining to intangible features such as security, reliability, quality and so on. It is a challenging task to automatically classify functional requirements, those pertaining to how the system will function, especially those belonging to different and large enterprise systems. This requires exploitation of text mining capabilities. This thesis investigates the results of text classification applied to functional software requirements by creating a framework in R and making use of algorithms and techniques like k-nearest neighbors, support vector machines, and many others such as boosting, bagging, maximum entropy, neural networks and random forests in an ensemble approach. The study was conducted by collecting and visualizing relevant enterprise data that had previously been classified manually and was subsequently used for training the model. Key components for training included the frequency of terms in the documents and the level of cleanliness of the data. The model was applied to test data and validated for analysis by studying and comparing parameters like precision, recall and accuracy.
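
The thesis builds its framework in R; the sketch below shows a comparable ensemble text-classification pipeline in Python with scikit-learn (the toy requirement statements and category labels are assumptions for demonstration only, not the study's enterprise data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Toy functional-requirement statements with made-up category labels.
requirements = [
    "The system shall generate a monthly invoice for each active account.",
    "Users shall be able to reset their password via a registered email address.",
    "The application shall export order history as a CSV file.",
    "The system shall calculate sales tax based on the shipping address.",
]
labels = ["billing", "account", "reporting", "billing"]

# Hard-voting ensemble over kNN, linear SVM, and random forest on TF-IDF features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    VotingClassifier([
        ("knn", KNeighborsClassifier(n_neighbors=1)),
        ("svm", LinearSVC()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ], voting="hard"),
)
model.fit(requirements, labels)
preds = model.predict(requirements)
print(classification_report(labels, preds, zero_division=0))  # precision, recall, accuracy
```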
Contributors: Swadia, Japa (Author) / Ghazarian, Arbi (Thesis advisor) / Bansal, Srividya (Committee member) / Gaffar, Ashraf (Committee member) / Arizona State University (Publisher)
Created: 2016