Description

Dynamic loading is the term used for one way of optimally loading a transformer. Dynamic loading means the utility takes into account the thermal time constant of the transformer, along with the cooling mode transitions, loading profile, and ambient temperature, when determining the time-varying loading capability of a transformer. Knowing the maximum dynamic loading rating can increase utilization of the transformer without reducing its life expectancy, delaying the replacement of the transformer. This document presents the progress on the transformer dynamic loading project sponsored by Salt River Project (SRP). A software application which performs dynamic loading for substation distribution transformers with appropriate transformer thermal models is developed in this project. Two kinds of thermal models that will be used in the application, the ASU models and the ANSI models, for both the hottest-spot temperature (HST) and the top-oil temperature (TOT), are presented. Brief validations of the ASU models are presented, showing that the ASU models are accurate in simulating the thermal processes of the transformers. For this production-grade application, both the ANSI and the ASU models are built and tested to select the most appropriate models to be used in the dynamic loading calculations. An existing application to build and select the TOT model was used as a starting point for the enhancements developed in this work. These enhancements include:
- adding the ability to develop HST models to the existing application,
- adding metrics to evaluate the models' accuracy and to select which model will be used in the dynamic loading calculation,
- adding the capability to perform dynamic loading calculations,
- producing a maximum dynamic load profile that the transformer can tolerate without acceleration of the insulation aging, and
- providing suitable output (plots and text) for the results of the dynamic loading calculation.
Other challenges discussed include modification of the input data format, data-quality control, and cooling mode estimation. Efforts to overcome these challenges are discussed in this work.
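For context, the first-order top-oil model of the ANSI loading guide (IEEE Std C57.91) drives the top-oil rise exponentially, with a thermal time constant, toward an ultimate rise determined by the per-unit load. The sketch below steps that model across a load and ambient-temperature profile; all parameter values are illustrative assumptions, not SRP's or the application's actual settings.

```python
import math

# Sketch of the ANSI/IEEE C57.91-style top-oil temperature (TOT) model.
# The top-oil rise over ambient moves exponentially toward an ultimate
# rise set by the per-unit load K. All parameter values are illustrative.

DT_TO_RATED = 55.0   # rated top-oil rise over ambient, deg C
TAU_TO = 3.0         # top-oil thermal time constant, hours
R = 4.5              # ratio of load loss to no-load loss at rated load
N = 0.9              # oil exponent for the assumed cooling mode

def ultimate_rise(k_pu):
    """Ultimate top-oil rise for a sustained per-unit load k_pu."""
    return DT_TO_RATED * ((k_pu**2 * R + 1.0) / (R + 1.0)) ** N

def simulate_tot(loads_pu, ambients_c, dt_hours=1.0, initial_rise=20.0):
    """Step the first-order TOT response across a load/ambient profile."""
    rise, tot = initial_rise, []
    for k, amb in zip(loads_pu, ambients_c):
        du = ultimate_rise(k)
        # First-order exponential response over one time step.
        rise = du + (rise - du) * math.exp(-dt_hours / TAU_TO)
        tot.append(amb + rise)
    return tot

# Example: a short overload embedded in a daily-style profile.
print([round(t, 1) for t in simulate_tot([0.7, 0.9, 1.2, 1.0, 0.8],
                                         [25, 27, 30, 29, 26])])
```

The same exponential-response structure, with a shorter winding time constant and a hot-spot gradient term, underlies HST models of this family.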
Contributors: Liu, Yi (Author) / Tylavsky, Daniel J (Thesis advisor) / Karady, George G. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Java is currently making its way into embedded systems and mobile devices such as Android phones. Programs written in Java are compiled into machine-independent binary class files (bytecode). A Java Virtual Machine (JVM) executes these classes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code that runs within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU ISA. JNI plays an important role in embedded systems, as it provides a mechanism to interact with libraries specific to the platform. This thesis addresses the overhead incurred in the JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access an object through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed in the new Java Native Interface.
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the data management systems' functionalities. A major difficulty in using structured data is the problem of easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users, as not only are these languages sophisticated, but the users also need to know the data schema. Keyword search provides an opportunity to conveniently access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguities, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as the efficiency challenges posed by a large search space. This dissertation performs an expansive study on keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, result summarization/query expansion, etc.; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
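To make the structural-ambiguity problem concrete, the toy sketch below (not the dissertation's algorithm; the schema, data, and matching rule are invented for illustration) shows how a keyword query with no structure can only be answered by joining tuples from different tables so that, together, they cover all the keywords.

```python
# Toy keyword search over relational data. A keyword query has no
# structure, so tuples from different tables must be joined to form a
# meaningful result. Schema and data are hypothetical.

authors = [{"aid": 1, "name": "Smith"}, {"aid": 2, "name": "Jones"}]
papers = [
    {"pid": 10, "aid": 1, "title": "Keyword Search on XML Data"},
    {"pid": 11, "aid": 2, "title": "Maintaining Materialized Views"},
]

def covers(joined, keyword):
    """A result covers a keyword if any attribute value contains it."""
    return any(keyword.lower() in str(v).lower() for v in joined.values())

def keyword_query(keywords):
    """Join author and paper tuples on the foreign key and keep only
    joined results that cover every query keyword."""
    results = []
    for a in authors:
        for p in papers:
            if p["aid"] == a["aid"]:          # join condition
                joined = {**a, **p}
                if all(covers(joined, k) for k in keywords):
                    results.append(joined)
    return results

# "Smith XML": the author tuple matches one keyword and the paper tuple
# the other; only their join covers both, so it is the meaningful result.
print(keyword_query(["Smith", "XML"]))
```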
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H V (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The development of a Solid State Transformer (SST) that incorporates a DC-DC multiport converter to integrate both photovoltaic (PV) power generation and battery energy storage is presented in this dissertation. The DC-DC stage is based on a quad-active-bridge (QAB) converter which provides isolation not only for the load, but also for the PV and storage. The AC-DC stage is implemented with a pulse-width-modulated (PWM) single-phase rectifier. A unified gyrator-based average model is developed for a general multi-active-bridge (MAB) converter controlled through phase-shift modulation (PSM). Expressions to determine the power rating of the MAB ports are also derived. The developed gyrator-based average model is applied to the QAB converter for faster simulations of the proposed SST during the control design process, as well as for deriving the state-space representation of the plant. Both linear quadratic regulator (LQR) and single-input-single-output (SISO) types of controllers are designed for the DC-DC stage. A novel technique that complements the SISO controller by taking into account the cross-coupling characteristics of the QAB converter is also presented herein. Cascaded SISO controllers are designed for the AC-DC stage. The QAB demanded power is calculated in the QAB controls and then fed into the rectifier controls in order to minimize the effect of the interaction between the two SST stages. The dynamic performance of the designed control loops based on the proposed control strategies is verified through extensive simulation of the SST average and switching models. The experimental results presented herein show that the transient responses for each control strategy match those from the simulation results, thus validating them.
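For reference, in the phase-shift-modulated active-bridge family the power exchanged between two bridge ports behaves, on average, like a gyrator: each port looks like a current source driven by the other port's voltage. A commonly used form of this result (standard dual-active-bridge analysis; the notation here is illustrative and not necessarily the dissertation's) is:

```latex
% Average power transferred between ports i and j of a phase-shift-
% modulated active-bridge pair (standard DAB result; illustrative notation):
%   \phi_{ij}: phase shift (rad), f_s: switching frequency, L_{ij}: link inductance
P_{ij} = \frac{V_i V_j \,\phi_{ij}\,(\pi - |\phi_{ij}|)}{2\pi^2 f_s L_{ij}}
% Gyrator form of the average model, with gyration gain g_{ij}:
I_i = g_{ij} V_j, \qquad I_j = g_{ij} V_i, \qquad
g_{ij} = \frac{\phi_{ij}\,(\pi - |\phi_{ij}|)}{2\pi^2 f_s L_{ij}}
```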
Contributors: Falcones, Sixifo Daniel (Author) / Ayyanar, Raja (Thesis advisor) / Karady, George G. (Committee member) / Tylavsky, Daniel (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance as compared to two well-known gene prioritization algorithms. Essentially no bias in the performance was seen as it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results. To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
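The abstract does not spell out its association mechanism; as an illustrative sketch of how network-based prioritization typically propagates evidence from known disease genes, the following implements random walk with restart on an adjacency matrix (a standard scheme in this literature, not this dissertation's specific method; the toy network, seed set, and restart probability are all assumptions for the example).

```python
import numpy as np

# Illustrative network-based gene prioritization via random walk with
# restart (RWR). A generic association-propagation scheme; the network
# and seed genes below are made up for demonstration.

def prioritize(adjacency, seeds, restart=0.3, tol=1e-8):
    """Rank genes by the steady-state probability of a walk that
    restarts at the known disease genes with probability `restart`."""
    col_sums = adjacency.sum(axis=0)
    col_sums[col_sums == 0] = 1.0            # guard isolated nodes
    W = adjacency / col_sums                 # column-stochastic transitions
    p0 = np.zeros(len(adjacency))
    p0[seeds] = 1.0 / len(seeds)             # restart at known genes
    p = p0.copy()
    while True:
        p_next = (1 - restart) * (W @ p) + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next                    # higher score = stronger association
        p = p_next

# Tiny 5-gene toy network; genes 0 and 1 are the known disease genes.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
print(np.argsort(-prioritize(A, seeds=[0, 1])))  # genes ranked by relevance
```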
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the error rate of classification by registering new images against previously obtained images of the same region before performing classification. The motivation behind this is the fact that images obtained in the same region that need to be classified will not differ significantly in characteristics. Hence, registration provides an image that matches the previously obtained image more closely, yielding better classification. To illustrate that the proposed method works, the naive Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and using a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help naive Bayes achieve better classification, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets used.
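A minimal sketch of the register-then-classify pipeline the abstract describes, using the two named algorithms (this is a generic point-set ICP plus Gaussian naive Bayes, not the thesis's implementation; the synthetic scene, features, and parameters are invented):

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.naive_bayes import GaussianNB

def icp(source, reference, iterations=20):
    """Bare-bones 2D point-set ICP: match each source point to its
    nearest reference point, then solve the best rigid transform by
    SVD (Kabsch; reflection guard omitted for brevity)."""
    tree = cKDTree(reference)
    src = source.copy()
    for _ in range(iterations):
        nearest = reference[tree.query(src)[1]]      # correspondences
        mu_s, mu_r = src.mean(0), nearest.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nearest - mu_r))
        R = (U @ Vt).T                               # optimal rotation
        src = (src - mu_s) @ R.T + mu_r              # apply transform
    return src

# Hypothetical scene: the new image's features are a rotated, shifted
# copy of the previously obtained image's features.
rng = np.random.default_rng(0)
reference_pts = rng.uniform(0, 10, (100, 2))
theta = 0.2                                          # unknown misalignment
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
new_pts = reference_pts @ Rot.T + 0.5
registered = icp(new_pts, reference_pts)             # registration stage

labels = (reference_pts[:, 0] > 5).astype(int)       # toy terrain classes
clf = GaussianNB().fit(reference_pts, labels)        # classification stage
print("accuracy after registration:", clf.score(registered, labels))
```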
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Mutations in the DNA of somatic cells, resulting from inaccuracies in DNA replication or exposure to harsh conditions (ionizing radiation, carcinogens), may be loss-of-function mutations, and the compounding of these mutations can lead to cancer. Such mutations can come in the form of thymine dimers, N-β-glycosyl bond hydrolysis, oxidation by hydrogen peroxide or other radicals, and deamination of cytosine to uracil. However, many cells possess the machinery to counteract the deleterious effects of such mutations. While eukaryotic DNA repair enzymes decrease the incidence of mutations from 1 mistake per 10^7 nucleotides to 1 mistake per 10^9 nucleotides, these mutations, however sparse, are problematic. Of particular interest is a mutation in which uracil is incorporated into DNA, either by spontaneous deamination of cytosine or by misincorporation. Such mutations occur in about one in every 10^7 cytidine residues in 24 hours. Uracil-DNA glycosylase (UDG) recognizes these mutations and cleaves the glycosidic bond, creating an abasic site. However, the rate of this form of DNA repair varies, depending on the nucleotides that surround the uracil. Most enzyme-DNA interactions depend on the sequence of the DNA (which may change the duplex twist), even if the enzyme only binds to the sugar-phosphate backbone. In the mechanism of uracil excision, UDG flips the uracil out of the DNA double helix, and this step may be impaired by the base pairs that neighbor the uracil. The deformability of certain regions of DNA may facilitate this step in the mechanism, causing these regions to be less mutable. In DNA, base stacking, a form of van der Waals interaction between the aromatic nucleic bases, may make these uracil inclusions more difficult to excise. Regions stabilized by base stacking interactions may be less susceptible to repair by glycosylases such as UDG and thus more prone to mutation.

Contributors: Ugaz, Bryan T (Author) / Levitus, Marcia (Thesis director) / Van Horn, Wade (Committee member) / Department of Physics (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The COVID-19 pandemic has resulted in preventative measures and has led to extensive changes in lifestyle for the vast majority of the American population. As the pandemic progresses, a growing amount of evidence shows that minority groups, such as the Deaf community, are often disproportionately and uniquely affected. Deaf people are directly affected in their ability to socialize in person and continue with daily routines. More specifically, this can include their ability to meet new people, connect with friends and family, and perform in their work or learning environment. It may also result in further mental health changes and an increased reliance on technology. The impact of COVID-19 on the Deaf community in clinical settings must also be considered. This includes changes in policies for in-person interpreters and a rise in telehealth. Often, these effects reflect the pre-existing low health literacy, frequent miscommunication, poor treatment, and inconvenience felt by Deaf people when trying to access healthcare. Ultimately, these effects on the Deaf community must be taken into account when attempting to create a full picture of the societal shift caused by COVID-19.

Contributors: Asuncion, David Leonard Esquiera (Co-author) / Dubey, Shreya (Co-author) / Patterson, Lindsey (Thesis director) / Lee, Lindsay (Committee member) / Harrington Bioengineering Program (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Stardust grains can provide useful information about the Solar System environment before the Sun was born. Stardust grains show distinct isotopic compositions that indicate their origins, such as the atmospheres of red giant stars, asymptotic giant branch stars, and supernovae (e.g., Bose et al. 2010). It has been argued that some stardust grains likely condensed in classical nova outbursts (e.g., Amari et al. 2001). These nova candidate grains are rich in the 13C, 15N, and 17O nuclides, which are produced by proton burning. However, these nuclides alone cannot constrain the stellar source of nova candidate grains. Nova ejecta are rich in 7Be, which decays to 7Li with a half-life of ~53 days. I want to measure the 6,7Li isotopes in nova candidate grains using the NanoSIMS 50L (nanoscale secondary ion mass spectrometry) to establish their nova origins without ambiguity. Several nova candidate grains were identified in the meteorite Acfer 094 on the basis of their oxygen isotopes. The identified silicate and oxide stardust grains are <500 nm in size and exist in the meteorite surrounded by meteoritic silicates. Therefore, 6,7Li isotopic measurements on these grains are hindered by the large 300-500 nm oxygen ion beam of the NanoSIMS. I devised a methodology to isolate stardust grains by performing Focused Ion Beam milling with the FIB – Nova 200 NanoLab (FEI) instrument. We showed that the current FIB instrument cannot be used to prepare stardust grains smaller than 1 μm because of its limited capabilities. For future analyses, we could either use the same milling technique with the new and improved FIB – Helios 5 UX, or use the recently constructed duoplasmatron on the NanoSIMS, which can achieve an oxygen ion beam size of ~75 nm.

Contributors: Duncan, Ethan Jay (Author) / Bose, Maitrayee (Thesis director) / Starrfield, Sumner (Committee member) / Desch, Steve (Committee member) / School of Earth and Space Exploration (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This research endeavor explores the 1964 reasoning of Northern Irish physicist John Bell and how it pertains to the provoking Einstein-Podolsky-Rosen (EPR) paradox. It is necessary to establish the machinations of formalisms ranging from conservation laws to quantum mechanical principles. The notion that locality cannot be reconciled with the quantum paradigm is upheld through analysis and by the subsequent Aspect experiments of 1980-1982. No matter its complexity, any local hidden variable theory is incompatible with the formulation of standard quantum mechanics. A number of strikingly ambiguous and abstract concepts are addressed in this pursuit to assess the validity of quantum mechanics, including separability and reality. "Elements of reality" characteristic of unique spaces are defined using basic terminology and logic from EPR. The discussion draws directly from Bell's succinct 1964 paper in Physics (volume 1) as well as numerous other useful sources. The fundamental insight gleaned is that quantum physics is indeed nonlocal; the door into its metaphysical and philosophical implications has long since been opened. Yet the nexus of information pertaining to Bell's inequality and EPR logic does nothing but assert the impeccable success of quantum physics in describing nature.
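For reference, the inequality at the heart of Bell's 1964 paper constrains the spin-correlation function P(a, b) of any local hidden-variable theory for measurement directions a, b, c, and quantum mechanics violates it for suitable settings:

```latex
% Bell's original 1964 inequality for a local hidden-variable theory's
% correlation function P(\vec a, \vec b):
1 + P(\vec b, \vec c) \;\ge\; \bigl|\, P(\vec a, \vec b) - P(\vec a, \vec c) \,\bigr|
% The quantum singlet-state prediction P(\vec a, \vec b) = -\vec a \cdot \vec b
% violates this bound for suitable choices of \vec a, \vec b, \vec c.
```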

Contributors: Rapp, Sean R (Author) / Foy, Joseph (Thesis director) / Martin, Thomas (Committee member) / School of Earth and Space Exploration (Contributor) / Department of Physics (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05