Matching Items (218)
150422-Thumbnail Image.png
Description
Among the various end-use sectors, the commercial sector is expected to have the second-largest increase in total primary energy consumption from 2009 to 2035 (5.8 quadrillion Btu); with a growth rate of 1.1% per year, it is the fastest-growing end-use sector. To make major gains in reducing U.S. building energy use, commercial-sector buildings must be improved. Energy benchmarking of buildings gives the facility manager or building owner a quick evaluation of energy use and of the potential for energy savings. It is the process of comparing the energy performance of a building to standards and codes, to a set target performance, or to a range of energy performance values of similar buildings, in order to help assess opportunities for improvement. Commissioning of buildings is the process of ensuring that systems are designed, installed, functionally tested, and capable of being operated and maintained according to the owner's operational needs. It is the first stage in the building upgrade process after the building has been assessed using benchmarking tools. The staged approach accounts for the interactions among all the energy flows in a building and produces a systematic method for planning upgrades that increase energy savings. This research compares and analyzes selected benchmarking and retrocommissioning tools to validate their accuracy such that they could be used in the initial audit process of a building. The benchmarking study analyzes the Energy Use Intensities (EUIs) and ratings assigned by Portfolio Manager and the Oak Ridge National Laboratory (ORNL) spreadsheets. The 90.1 Prototype models and the Commercial Reference Building model for the Large Office building type were used for this comparative analysis. A case-study building from the DOE-funded Energize Phoenix program was also benchmarked for its EUI and rating.
The retrocommissioning study was conducted by modeling these prototype models and the case-study building in the Facility Energy Decision System (FEDS) tool to simulate their energy consumption and analyze the retrofits suggested by the tool. The results of the benchmarking study showed that a benchmarking tool can be used as a first step in the audit process, encouraging the building owner to conduct an energy audit and realize the energy savings potential. The retrocommissioning study established the validity of FEDS as a tool to simulate a building's energy performance using basic inputs and to accurately predict the energy savings achieved by the retrofits recommended on the basis of maximum life-cycle cost (LCC) savings.
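As a rough illustration of the benchmarking step described above, the sketch below computes a site EUI and a percentile-style rating against peer buildings. The numbers and the simple percentile rating are illustrative only; they are not Portfolio Manager's or ORNL's actual scoring methods.

```python
# Hypothetical EUI benchmarking sketch; building data and peer EUIs are
# invented for illustration, not from Portfolio Manager or the ORNL tools.

def site_eui(annual_kbtu, floor_area_sqft):
    """Energy Use Intensity in kBtu per square foot per year."""
    return annual_kbtu / floor_area_sqft

def benchmark(eui, peer_euis):
    """Percentile-style rating against peer EUIs (lower EUI is better)."""
    better = sum(1 for p in peer_euis if p >= eui)
    return 100.0 * better / len(peer_euis)

building_eui = site_eui(4_500_000, 50_000)   # 90 kBtu/ft2-yr
peers = [70, 85, 90, 100, 120, 140]          # illustrative peer group
rating = benchmark(building_eui, peers)
```

A low rating would flag the building as a candidate for an energy audit, which is the role the thesis assigns to benchmarking as the first step of the staged upgrade process.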
ContributorsAgnihotri, Shreya Prabodhkumar (Author) / Reddy, T Agami (Thesis advisor) / Bryan, Harvey (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created2011
150924-Thumbnail Image.png
Description
Approximately 1% of the world population suffers from epilepsy. Continuous long-term electroencephalographic (EEG) monitoring is the gold standard for recording epileptic seizures and assisting in the diagnosis and treatment of patients with epilepsy. However, this process still requires that seizures be visually detected and marked by experienced and trained electroencephalographers. The motivation for the development of an automated seizure detection algorithm in this research was to assist physicians in such a laborious, time-consuming and expensive task. Seizures in the EEG vary in duration (seconds to minutes), morphology and severity (clinical to subclinical, occurrence rate) within the same patient and across patients. The task of seizure detection is also made difficult by the presence of movement and other recording artifacts. An early approach toward the development of automated seizure detection algorithms, utilizing both EEG changes and clinical manifestations, resulted in a sensitivity of 70-80% and 1 false detection per hour. Approaches based on artificial neural networks have improved detection performance at the cost of training the algorithm. Measures of nonlinear dynamics, such as Lyapunov exponents, have been applied successfully to seizure prediction. Within the framework of this MS research, a seizure detection algorithm based on measures of linear and nonlinear dynamics, i.e., the adaptive short-term maximum Lyapunov exponent (ASTLmax) and the adaptive Teager energy (ATE), was developed and tested. The algorithm was tested on long-term (0.5-11.7 days) continuous EEG recordings from five patients (3 with intracranial and 2 with scalp EEG) and a total of 56 seizures, producing a mean sensitivity of 93% and a mean false detection rate of 0.048 false positives per hour. The developed seizure detection algorithm is data-adaptive, training-free and patient-independent.
It is expected that this algorithm will assist physicians in reducing the time spent on detecting seizures, lead to faster and more accurate diagnosis, better evaluation of treatment, and possibly to better treatments if it is incorporated on-line and real-time with advanced neuromodulation therapies for epilepsy.
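The Teager energy component of the algorithm is easy to illustrate. The sketch below implements the standard discrete Teager energy operator on a toy sinusoid; the adaptive (ATE) thresholding scheme of the thesis is not reproduced here.

```python
import numpy as np

# Discrete Teager energy operator, psi[x](n) = x(n)^2 - x(n-1)*x(n+1),
# a classic instantaneous-energy feature for spike/seizure detection.
# The test signal is a toy sinusoid, not patient EEG.

def teager_energy(x):
    """Teager energy of a 1-D signal; endpoints are padded by replication."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]
    return psi

# For x(n) = A*sin(w*n), the operator yields the constant A^2 * sin(w)^2,
# i.e., it tracks both amplitude and frequency of the oscillation.
n = np.arange(256)
x = 2.0 * np.sin(0.3 * n)
psi = teager_energy(x)
```

This amplitude-and-frequency sensitivity is what makes the operator useful for flagging high-energy transients such as spikes against background EEG.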
ContributorsVenkataraman, Vinay (Author) / Iasemidis, Leonidas (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2012
150942-Thumbnail Image.png
Description
The ease of use of mobile devices and tablets by students has generated a lot of interest in the area of engineering education. By using mobile technologies in signal analysis and applied mathematics, undergraduate-level courses can broaden the scope and effectiveness of technical education in classrooms. Current mobile devices have abundant memory and powerful processors, in addition to providing interactive interfaces. Therefore, these devices can support the implementation of non-trivial signal processing algorithms. Several existing visual programming environments, such as Java Digital Signal Processing (J-DSP), are built using the platform-independent infrastructure of Java applets. These enable students to perform signal-processing exercises over the Internet. However, some mobile devices do not support Java applets. Furthermore, mobile simulation environments rely heavily on establishing robust Internet connections with a remote server where the processing is performed. The interactive Java Digital Signal Processing tool (iJDSP) has been developed as a graphical mobile app on iOS devices (iPads, iPhones and iPod touches). In contrast to existing mobile applications, iJDSP has the ability to execute simulations directly on the mobile devices and is a completely stand-alone application. In addition to a substantial set of signal processing algorithms, iJDSP has a highly interactive graphical interface where block diagrams can be constructed using a simple drag-and-drop procedure. Functions such as visualization of the convolution operation and an interface to wireless sensors have been developed. The convolution module animates the continuous and discrete convolution operations, including time-shift and integration, so that users can observe and learn intuitively.
The current set of DSP functions in the application enables students to perform simulation exercises on continuous and discrete convolution, z-transform, filter design and the Fast Fourier Transform (FFT). The interface to wireless sensors in iJDSP allows users to import data from wireless sensor networks, and use the rich suite of functions in iJDSP for data processing. This allows users to perform operations such as localization, activity detection and data fusion. The exercises and the iJDSP application were evaluated by senior-level students at Arizona State University (ASU), and the results of those assessments are analyzed and reported in this thesis.
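The discrete convolution that the iJDSP module animates can be sketched directly. The implementation and the signals below are illustrative, not code from the app.

```python
import numpy as np

# Direct-form linear convolution, y[n] = sum_k x[k] * h[n-k] — the
# operation the iJDSP convolution module animates step by step.
# The input signals are arbitrary examples.

def conv(x, h):
    """Linear convolution of two 1-D sequences (length len(x)+len(h)-1)."""
    h = np.asarray(h, dtype=float)
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        # each input sample contributes a shifted, scaled copy of h
        y[k:k + len(h)] += xk * h
    return y

y = conv([1, 2, 3], [1, 1])   # a moving pairwise sum of the input
```

The shift-scale-accumulate loop mirrors the time-shift and summation steps that the module visualizes.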
ContributorsHu, Shuang (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Kostas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2012
150788-Thumbnail Image.png
Description
Interictal spikes, together with seizures, have been recognized as the two hallmarks of epilepsy, a brain disorder that 1% of the world's population suffers from. Even though the presence of spikes in the brain's electromagnetic activity has diagnostic value, their dynamics are still elusive. It was an objective of this dissertation to formulate a mathematical framework within which the dynamics of interictal spikes could be thoroughly investigated. A new epileptic spike detection algorithm was developed by employing data-adaptive morphological filters. The performance of the spike detection algorithm compared favorably with others in the literature. A novel measure of spike spatial synchronization was developed and tested on coupled spiking neuron models. Application of this measure to individual epileptic spikes in EEG from patients with temporal lobe epilepsy revealed long-term trends of increasing synchronization between pairs of brain sites before seizures and desynchronization after seizures, in the same patient as well as across patients, thus supporting the hypothesis that seizures may occur to break (reset) the abnormal spike synchronization in the brain network. Furthermore, based on these results, a separate spatial analysis of spike rates was conducted that shed light on conflicting results in the literature about the variability of spike rate before and after seizures. The ability to automatically classify seizures into clinical and subclinical was a result of the above findings. A novel method for epileptogenic focus localization from interictal periods based on spike occurrences was also devised, combining concepts from graph theory, such as eigenvector centrality, with the developed spike synchronization measure; it compared very favorably with the gold standard used in clinical practice for focus localization from seizure onset.
Finally, in another application of the resetting of brain dynamics at seizures, it was shown that it is possible to differentiate with high accuracy between patients with epileptic seizures (ES) and patients with psychogenic nonepileptic seizures (PNES). The above studies of spike dynamics have elucidated many unknown aspects of ictogenesis and are expected to contribute significantly to further understanding of the basic mechanisms that lead to seizures, and to the diagnosis and treatment of epilepsy.
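A minimal sketch of the graph-theoretic idea: eigenvector centrality of a spike-synchronization matrix scores each recording site, and the highest-scoring site is taken as the candidate focus. The synchronization matrix below is a toy example invented for illustration, not patient data or the thesis's measure.

```python
import numpy as np

# Eigenvector centrality of a symmetric, nonnegative synchronization
# matrix W: the principal eigenvector assigns each site a score, and
# strongly coupled sites score highest. W here is a toy 3-site example.

def eigenvector_centrality(W):
    """Principal eigenvector of W, normalized to unit sum."""
    vals, vecs = np.linalg.eigh(W)           # ascending eigenvalues
    v = np.abs(vecs[:, np.argmax(vals)])     # Perron (dominant) vector
    return v / v.sum()

# Site 0 is strongly synchronized with both other sites.
W = np.array([[0.0, 0.9, 0.8],
              [0.9, 0.0, 0.2],
              [0.8, 0.2, 0.0]])
scores = eigenvector_centrality(W)
focus = int(np.argmax(scores))   # candidate epileptogenic focus
```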
ContributorsKrishnan, Balu (Author) / Iasemidis, Leonidas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Spanias, Andreas (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created2012
150773-Thumbnail Image.png
Description
Photovoltaics (PV) is an important and rapidly growing area of research. With the advent of the power system monitoring and communication technology collectively known as the "smart grid," an opportunity exists to apply signal processing techniques to the monitoring and control of PV arrays. In this paper, a monitoring system which provides real-time measurements of each PV module's voltage and current is considered. A fault detection algorithm, formulated as a clustering problem and addressed using the robust minimum covariance determinant (MCD) estimator, is described; its performance on simulated instances of arc and ground faults is evaluated. The algorithm is found to perform well on many types of faults commonly occurring in PV arrays. Among several types of detection algorithms considered, only the MCD shows high performance on both types of faults.
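A simplified sketch of the detection idea: flag modules whose (voltage, current) measurements lie far from the bulk of the data in Mahalanobis distance. The thesis uses the robust MCD estimator; the classical sample covariance below is a non-robust stand-in, and the data and threshold are synthetic.

```python
import numpy as np

# Covariance-based outlier detection on per-module (voltage, current)
# measurements. Real MCD (e.g., sklearn.covariance.MinCovDet) fits a
# robust covariance; the classical estimate here only sketches the idea.

def mahalanobis_outliers(X, threshold=3.0):
    """Flag rows of X far from the sample mean in Mahalanobis distance."""
    mu = X.mean(axis=0)
    inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))
    return d > threshold

rng = np.random.default_rng(0)
normal = rng.normal([30.0, 8.0], [0.5, 0.2], size=(200, 2))  # healthy V, I
faulty = np.array([[15.0, 2.0]])                             # synthetic fault
X = np.vstack([normal, faulty])
flags = mahalanobis_outliers(X)
```

A robust estimator matters in practice because a classical covariance is itself inflated by the faults it is supposed to detect; that is the motivation for MCD.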
ContributorsBraun, Henry (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2012
150756-Thumbnail Image.png
Description
Energy-efficient design and management of data centers has seen considerable interest in recent years owing to its potential to reduce overall energy consumption and thereby the costs associated with it. Therefore, it is of utmost importance that new methods for improved physical design of data centers, resource management schemes for efficient workload distribution, and sustainable operation for improving energy efficiency be developed and tested before implementation in an actual data center. The BlueTool project provides such a state-of-the-art platform, both software and hardware, to design and analyze the energy efficiency of data centers. The software platform, GDCSim, uses a cyber-physical approach to study the physical behavior of the data center in response to management decisions by taking into account the heat recirculation patterns in the data center room. Such an approach yields the best possible energy savings owing to the characterization of cyber-physical interactions and the ability of the resource management to make decisions based on the physical behavior of the data center. GDCSim mainly uses two Computational Fluid Dynamics (CFD) based cyber-physical models, the Heat Recirculation Matrix (HRM) and the Transient Heat Distribution Model (THDM), for thermal predictions under different management schemes. They are generated using a model generator, BlueSim. To ensure the accuracy of the thermal predictions made using GDCSim, the HRM and THDM models and the BlueSim model generator need to be validated experimentally. For this purpose, the hardware platform of the BlueTool project, the BlueCenter, a mini data center, can be used. As part of this thesis, the HRM and THDM were generated using BlueSim and experimentally validated using the BlueCenter. An average error of 4.08% was observed for BlueSim, 5.84% for the HRM and 4.24% for the THDM.
Further, a high initial error was observed in the transient thermal predictions, due to the inability of BlueSim to account for the heat retained by server components.
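The HRM-style prediction can be sketched in a few lines: inlet temperatures are the supply temperature plus a recirculation matrix applied to the per-server power vector. The matrix and power values below are toy numbers, not BlueSim output.

```python
import numpy as np

# HRM-style thermal prediction: T_in = T_sup + D @ P, where D (degC/W)
# captures how each server's dissipated heat recirculates to each inlet.
# D and P are illustrative toy values, not from BlueSim.

def inlet_temperatures(T_sup, D, P):
    """Predicted server inlet temperatures (degC) given supply temp T_sup,
    recirculation matrix D, and per-server power vector P (W)."""
    return T_sup + D @ P

D = np.array([[1e-3, 5e-4],
              [5e-4, 1e-3]])         # degC per watt, toy coefficients
P = np.array([2000.0, 1000.0])      # power drawn by each server, W
T_in = inlet_temperatures(18.0, D, P)
```

Because the prediction is a single matrix-vector product, a resource manager can cheaply evaluate many candidate workload placements P, which is the point of the cyber-physical coupling described above.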
ContributorsGilbert, Rose Robin (Author) / Gupta, Sandeep K.S (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created2012
150530-Thumbnail Image.png
Description
With increased usage of green energy, the number of photovoltaic arrays used in power generation is increasing rapidly. Many of the arrays are located at remote locations, where faults that occur within the array often go unnoticed and unattended for long periods of time. Technicians sent to rectify the faults have to spend a large amount of time determining the location of the fault manually. Automated monitoring systems are needed to obtain information about the performance of the array and to detect faults. Such systems must monitor the DC side of the array in addition to the AC side to identify non-catastrophic faults. This thesis focuses on two of the requirements for DC-side monitoring in an automated PV array monitoring system. The first part of the thesis quantifies the advantages that higher-resolution data from a PV array offer for fault detection. Data for the monitoring system can be gathered for the array as a whole or from additional places within the array, such as individual modules and the ends of strings. The fault detection and false positive rates are compared for array-level, string-level and module-level PV data. Monte Carlo simulations are performed using PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) that can be used to visualize the PV array with module-level monitoring system information. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations are implemented to display information about the array as a whole or about individual modules, and to locate faults in the array.
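A toy Monte Carlo sketch of why finer-grained data helps: a fixed fault signature is diluted when only the whole-array measurement is available. The noise level, dilution model, and threshold below are invented for illustration and are far simpler than the Simulink/MATLAB models used in the thesis.

```python
import numpy as np

# Toy Monte Carlo comparison of detection rates. A fault that drops one
# module's normalized output by `drop` is diluted by a factor of
# n_modules in an array-level measurement; detection uses a simple
# 3-sigma threshold. All numbers are illustrative.

def detection_rate(n_modules, drop, noise, trials=2000, z=3.0, seed=1):
    """Fraction of trials in which the diluted fault exceeds z*noise."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        effect = drop / n_modules                 # dilution of the signature
        measured = effect + rng.normal(0.0, noise)
        hits += measured > z * noise
    return hits / trials

module_rate = detection_rate(n_modules=1, drop=0.2, noise=0.01)
array_rate = detection_rate(n_modules=60, drop=0.2, noise=0.01)
```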
ContributorsKrishnan, Venkatachalam (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Ayyanar, Raja (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2012
151028-Thumbnail Image.png
Description
In this thesis, we consider the problem of fast and efficient indexing techniques for time sequences which evolve on manifold-valued spaces. Using manifolds is a convenient way to work with complex features that often do not live in Euclidean spaces. However, computing standard notions of geodesic distance, mean, etc. can become very involved due to the underlying non-linearity of the space. As a result, a complex task such as manifold sequence matching would require a very large number of computations, making it hard to use in practice. We believe that one can devise smart approximation algorithms for several classes of such problems which take into account the geometry of the manifold and maintain the favorable properties of the exact approach. This problem has several applications in the areas of human activity discovery and recognition, where several features and representations are naturally studied in a non-Euclidean setting. We propose a novel solution to the problem of indexing manifold-valued sequences through an intrinsic approach that maps sequences to a symbolic representation. This is shown to enable the deployment of fast and accurate algorithms for activity recognition, motif discovery, and anomaly detection. Toward this end, we present generalizations of the key concepts of piecewise aggregation and symbolic approximation to the case of non-Euclidean manifolds. Experiments show that one can replace expensive geodesic computations with much faster symbolic computations with little loss of accuracy in activity recognition and discovery applications. The proposed methods are ideally suited for real-time systems and resource-constrained scenarios.
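The Euclidean versions of the two key ingredients, piecewise aggregation and symbolic quantization, can be sketched as below; the thesis generalizes these to manifold-valued data, which is not attempted here. The breakpoints and alphabet are illustrative.

```python
import numpy as np

# Euclidean sketch of piecewise aggregate approximation (PAA) followed by
# symbolic quantization (SAX-style). The breakpoints, alphabet, and test
# sequence are illustrative; the thesis's manifold generalization replaces
# these Euclidean means and bins with intrinsic operations.

def paa(x, n_segments):
    """Mean of each of n_segments equal-length chunks of x."""
    return np.asarray(x, dtype=float).reshape(n_segments, -1).mean(axis=1)

def symbolize(values, breakpoints, alphabet="abcd"):
    """Map each aggregated value to a symbol by its quantization bin."""
    bins = np.searchsorted(breakpoints, values)
    return "".join(alphabet[b] for b in bins)

x = np.array([0.1, 0.3, 2.0, 2.2, -1.5, -1.7, 0.0, 0.2])
word = symbolize(paa(x, 4), breakpoints=[-1.0, 0.0, 1.0])
```

Once sequences become short symbolic words, matching, motif discovery, and anomaly detection reduce to cheap string operations, which is the source of the speedups reported above.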
ContributorsAnirudh, Rushil (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2012
150428-Thumbnail Image.png
Description
Evacuated tube solar thermal collector arrays have a wide range of applications. While most of these applications are limited in performance due to relatively low maximum operating temperatures, these collectors can still be useful in low-grade thermal systems. An array of fifteen Apricus AP-30 evacuated tube collectors was designed, assembled, and tested on the Arizona State University campus in Tempe, AZ. An existing system model was reprogrammed and updated for increased flexibility and ease of use. The model predicts the outlet temperature of the collector array based on specified environmental conditions. The model was verified through a comparative analysis against the data collected during a three-month test period. The accuracy of this model was then compared against data calculated from the Solar Rating and Certification Corporation (SRCC) efficiency curve to determine relative performance. It was found that both the original and updated models were able to generate reasonable predictions of the performance of the collector array, with overall average percentage errors of 1.0% and 1.8%, respectively.
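A quasi-steady energy balance with an SRCC-style efficiency curve illustrates the kind of prediction such a model makes. The coefficients below are generic placeholders, not the Apricus AP-30 ratings or the thesis model.

```python
# Quasi-steady collector outlet-temperature sketch using an SRCC-style
# efficiency curve, eta = eta0 - a1*dT/G - a2*dT^2/G (dT = T_in - T_amb,
# approximating the mean fluid temperature by T_in). All coefficients and
# operating conditions are illustrative placeholders.

def collector_outlet_temp(T_in, T_amb, G, area, mdot, cp=4186.0,
                          eta0=0.70, a1=1.5, a2=0.01):
    """Outlet temperature (degC): useful gain Q = area*G*eta, divided
    by the fluid capacitance rate mdot*cp."""
    dT = T_in - T_amb
    eta = eta0 - a1 * dT / G - a2 * dT ** 2 / G
    Q = area * G * max(eta, 0.0)      # useful heat gain, W
    return T_in + Q / (mdot * cp)

T_out = collector_outlet_temp(T_in=40.0, T_amb=25.0, G=900.0,
                              area=4.2, mdot=0.05)
```

Comparing such curve-based predictions against measured outlet temperatures is essentially the verification exercise the abstract describes.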
ContributorsStonebraker, Matthew (Author) / Phelan, Patrick (Thesis advisor) / Reddy, Agami (Committee member) / Bryan, Harvey (Committee member) / Arizona State University (Publisher)
Created2011
150473-Thumbnail Image.png
Description
The heat recovery steam generator (HRSG) is a key component of Combined Cycle Power Plants (CCPP). The exhaust (flue gas) from the CCPP gas turbine flows through the HRSG; this gas typically contains a high concentration of NO and cannot be discharged directly to the atmosphere because of environmental restrictions. In the HRSG, one method of reducing the flue gas NO concentration is to inject ammonia into the gas at a plane upstream of the Selective Catalytic Reduction (SCR) unit through an ammonia injection grid (AIG); in the SCR, the NO is reduced to N2 and H2O. The amount and spatial distribution of the injected ammonia are key considerations for NO reduction while using the minimum possible amount of ammonia. This work had three objectives. First, a flow network model of the Ammonia Flow Control Unit (AFCU) was to be developed to calculate the quantity of ammonia released into the flue gas from each AIG perforation. Second, CFD simulation of the flue gas flow was to be performed to obtain the velocity, temperature, and species concentration fields in the gas upstream and downstream of the SCR. Finally, the performance characteristics of the ammonia injection system were to be evaluated. All three objectives were reached. The AFCU was modeled in Java, with a graphical user interface provided for the user. The commercial software Fluent was used for CFD simulation. To evaluate the efficacy of the ammonia injection system in reducing the flue gas NO concentration, the twelve butterfly valves in the AFCU ammonia delivery piping (risers) were throttled by various degrees in the model and the NO concentration distribution computed for each operational scenario. Keeping the valves fully open was found to lead to a more uniform reduction in NO concentration than throttling the valves so that the riser flows were equal.
Additionally, the SCR catalyst was consumed somewhat more uniformly, and ammonia slip (ammonia not consumed in reaction) was found to be lower. Ammonia use could be decreased by 10 percent while maintaining the NO concentration limit in the flue gas exhausted into the atmosphere.
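The parallel-riser flow split at the heart of the valve study can be sketched with a simple loss-coefficient network: for a common header pressure drop, each riser's flow scales as the inverse square root of its loss coefficient. The coefficients and flows below are illustrative, not AFCU model values.

```python
import numpy as np

# Toy parallel-riser flow split: risers share the same pressure drop dP,
# and each obeys dP = K_i * Q_i^2, so Q_i is proportional to 1/sqrt(K_i).
# Throttling a valve raises its K and diverts flow to the other risers.
# Loss coefficients and the total flow are illustrative values.

def riser_flows(total_flow, K):
    """Split total_flow among parallel risers with loss coefficients K."""
    w = 1.0 / np.sqrt(np.asarray(K, dtype=float))
    return total_flow * w / w.sum()

open_flows = riser_flows(12.0, [1.0, 1.0, 1.0, 1.0])  # equal K: equal split
throttled = riser_flows(12.0, [4.0, 1.0, 1.0, 1.0])   # riser 0 throttled
```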
ContributorsAdulkar, Sajesh (Author) / Roy, Ramendra (Thesis advisor) / Lee, Taewoo (Thesis advisor) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created2011