Matching Items (464)
Description
Dynamic loading is one way of optimally loading a transformer: the utility takes into account the transformer's thermal time constant, along with cooling mode transitions, the loading profile, and ambient temperature, when determining the time-varying loading capability of the transformer. Knowing the maximum dynamic loading rating can increase utilization of the transformer without reducing its life expectancy, delaying its replacement. This document presents progress on the transformer dynamic loading project sponsored by Salt River Project (SRP). A software application that performs dynamic loading for substation distribution transformers using appropriate transformer thermal models is developed in this project. Two kinds of thermal models that will be used in the application, hottest-spot temperature (HST) and top-oil temperature (TOT) models, are presented in two forms: the ASU HST/TOT models and the ANSI models. Brief validations of the ASU models are presented, showing that they are accurate in simulating the thermal processes of the transformers. For this production-grade application, both the ANSI and the ASU models are built and tested to select the most appropriate models for the dynamic loading calculations. An existing application for building and selecting the TOT model was used as the starting point for the enhancements developed in this work. These enhancements include: adding the ability to develop HST models to the existing application; adding metrics to evaluate model accuracy and to select which model will be used in the dynamic loading calculation; adding the capability to perform dynamic loading calculations; producing the maximum dynamic load profile that the transformer can tolerate without accelerated insulation aging; and providing suitable output (plots and text) for the results of the dynamic loading calculation. Other challenges discussed include modification of the input data format, data-quality control, and cooling mode estimation; efforts to overcome these challenges are described in this work.
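To make the thermal-model idea concrete, the sketch below evaluates the familiar exponential top-oil temperature rise response of the ANSI/IEEE C57.91 loading guide for an hourly load profile. It is a generic illustration rather than the project's production code; the rated rise, loss ratio, cooling exponent, time constant, and profile values are placeholder assumptions.

```python
import math

def top_oil_rise(load_pu, dt_hours, theta_init,
                 theta_rated_rise=55.0,   # assumed rated top-oil rise over ambient, deg C
                 loss_ratio=3.2,          # assumed ratio of rated load loss to no-load loss
                 n=0.8,                   # assumed cooling exponent (e.g., ~0.8 for ONAN)
                 tau_hours=3.0):          # assumed top-oil time constant, hours
    """One step of the exponential top-oil rise response (C57.91 Clause 7 form)."""
    # Ultimate (steady-state) rise for this load level
    theta_ult = theta_rated_rise * ((load_pu**2 * loss_ratio + 1) / (loss_ratio + 1)) ** n
    # First-order exponential approach from the initial rise toward the ultimate rise
    return theta_ult + (theta_init - theta_ult) * math.exp(-dt_hours / tau_hours)

# Example: march through an hourly load profile and add ambient temperature to get TOT
load_profile = [0.6, 0.7, 0.9, 1.1, 1.2, 1.0, 0.8]   # per-unit load, hourly (assumed)
ambient = [28, 29, 31, 33, 34, 33, 31]               # deg C, hourly (assumed)
rise = 20.0                                           # assumed initial top-oil rise
for hour, (load, amb) in enumerate(zip(load_profile, ambient)):
    rise = top_oil_rise(load, 1.0, rise)
    print(f"hour {hour}: top-oil temperature ~ {amb + rise:.1f} C")
```

A dynamic loading calculation of the kind described above would, in effect, invert this relationship: search for the largest load profile whose predicted TOT and HST stay within the aging limits.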
Contributors: Liu, Yi (Author) / Tylavsky, Daniel J. (Thesis advisor) / Karady, George G. (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Java is currently making its way into embedded systems and mobile devices such as Android phones. Programs written in Java are compiled into machine-independent binary class bytecodes, which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code running within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU's ISA. JNI plays an important role in embedded systems because it provides a mechanism for interacting with platform-specific libraries. This thesis addresses the overhead incurred in JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API for accessing objects through their references by pinning their memory locations. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users also need to know the data schema. Keyword search provides an opportunity to conveniently access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges arising from the large search space. This dissertation presents an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results from the relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, and result summarization/query expansion; and (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
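As a toy illustration of keyword search over structured data (not the dissertation's own algorithms), the sketch below finds tuples that match query keywords across two relations and joins them through a foreign key to assemble connected results. The relation names, columns, and rows are invented for the example.

```python
# Hypothetical relations: papers(id, title) and authors(paper_id, name).
papers = [(1, "Keyword search on XML data"), (2, "Materialized view maintenance")]
authors = [(1, "Liu"), (1, "Chen"), (2, "Smith")]

def keyword_search(keywords):
    """Return (paper, author) pairs whose combined text covers all query keywords."""
    results = []
    for pid, title in papers:
        for a_pid, name in authors:
            if a_pid != pid:
                continue  # join papers and authors on the foreign key
            text = f"{title} {name}".lower()
            if all(k.lower() in text for k in keywords):
                results.append((title, name))
    return results

print(keyword_search(["keyword", "chen"]))  # -> [('Keyword search on XML data', 'Chen')]
```

Real systems of this kind must additionally rank the joined results, decide what "return information" to show, and avoid enumerating the full join space, which is where the ambiguity and efficiency challenges described above arise.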
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S. (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H. V. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The development of a Solid State Transformer (SST) that incorporates a DC-DC multiport converter to integrate both photovoltaic (PV) power generation and battery energy storage is presented in this dissertation. The DC-DC stage is based on a quad-active-bridge (QAB) converter, which provides isolation not only for the load but also for the PV and storage. The AC-DC stage is implemented with a pulse-width-modulated (PWM) single-phase rectifier. A unified gyrator-based average model is developed for a general multi-active-bridge (MAB) converter controlled through phase-shift modulation (PSM). Expressions to determine the power rating of the MAB ports are also derived. The developed gyrator-based average model is applied to the QAB converter for faster simulation of the proposed SST during the control design process, as well as for deriving the state-space representation of the plant. Both linear quadratic regulator (LQR) and single-input-single-output (SISO) controllers are designed for the DC-DC stage. A novel technique that complements the SISO controller by taking into account the cross-coupling characteristics of the QAB converter is also presented herein. Cascaded SISO controllers are designed for the AC-DC stage. The power demanded by the QAB is calculated within the QAB controls and then fed to the rectifier controls in order to minimize the effect of the interaction between the two SST stages. The dynamic performance of the designed control loops based on the proposed control strategies is verified through extensive simulation of the SST average and switching models. The experimental results presented herein show that the transient responses for each control strategy match those from the simulation results, thus validating them.
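For context, the sketch below evaluates the commonly cited average power-transfer expression for one active-bridge pair under phase-shift modulation, which is the relation a gyrator-type average model is typically built around. It is not the dissertation's specific QAB model, and the voltages, frequency, inductance, and phase shift are placeholder values.

```python
def bridge_pair_power(v1, v2, d, f_sw, l_lk):
    """Average power between two bridges under phase-shift modulation.

    v1, v2 : DC port voltages referred to the same transformer side (V)
    d      : phase shift as a fraction of the half switching period (0 < d < 1)
    f_sw   : switching frequency (Hz)
    l_lk   : series (leakage) inductance linking the two bridges (H)
    """
    # Classic dual-active-bridge relation: P = v1*v2*d*(1-d) / (2*f_sw*l_lk)
    return v1 * v2 * d * (1.0 - d) / (2.0 * f_sw * l_lk)

# In a gyrator-based average model, each bridge pair is replaced by a gyrator whose
# port current is g times the other port's voltage, with g = d*(1-d)/(2*f_sw*l_lk);
# a multi-active-bridge converter then becomes a network of such gyrators.
p = bridge_pair_power(v1=400.0, v2=400.0, d=0.2, f_sw=20e3, l_lk=60e-6)
print(f"transferred power ~ {p/1e3:.1f} kW")   # placeholder operating point
```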
Contributors: Falcones, Sixifo Daniel (Author) / Ayyanar, Raja (Thesis advisor) / Karady, George G. (Committee member) / Tylavsky, Daniel (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method using synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.
To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
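As a generic illustration of association-based prioritization on a biological network, the sketch below ranks candidate genes by a random walk with restart from the known disease genes. This is a commonly used network prioritization scheme, not the specific integrated-network method of this work, and the tiny adjacency matrix is invented for the example.

```python
import numpy as np

def rank_by_rwr(adjacency, seed_idx, restart=0.3, iters=200):
    """Rank nodes by random walk with restart from a set of seed (known disease) genes."""
    w = adjacency / adjacency.sum(axis=0, keepdims=True)    # column-normalize transitions
    p0 = np.zeros(adjacency.shape[0])
    p0[seed_idx] = 1.0 / len(seed_idx)                       # restart distribution on seeds
    p = p0.copy()
    for _ in range(iters):
        p = (1.0 - restart) * w @ p + restart * p0           # RWR iteration
    return np.argsort(-p), p                                  # higher probability = higher rank

# Toy 5-gene network (symmetric weights = association confidence); genes 0 and 1 are the seeds.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 1, 0],
                [1, 1, 0, 0, 1],
                [0, 1, 0, 0, 1],
                [0, 0, 1, 1, 0]], dtype=float)
order, scores = rank_by_rwr(adj, seed_idx=[0, 1])
print("ranked genes:", order.tolist())
```

In an integrated-network setting, the edge weights would carry the source-specific relevance and reliability described above, rather than the uniform toy weights used here.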
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications through imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. The method reduces the classification error rate by registering each image against previously obtained images before performing classification. The motivation is that images obtained in the same region, which need to be classified, will not differ significantly in their characteristics; registration therefore produces an image that more closely matches the previously obtained one, yielding better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and on a real-life data set, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that ICP registration does help naïve Bayes classification, reducing the error rate by an average of about 10% on the synthetic data and by about 7% on the actual datasets used.
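To make the classification stage concrete, the sketch below trains and applies a small Gaussian naïve Bayes classifier on per-pixel feature vectors. It is a generic illustration with made-up feature data, not the thesis's implementation, and it assumes the incoming image has already been aligned to a previously obtained one, e.g., by an ICP-based registration step.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian naïve Bayes over real-valued feature vectors."""
    def fit(self, x, y):
        self.classes = np.unique(y)
        self.mean = np.array([x[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([x[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, x):
        # Log-likelihood of each sample under each class's independent Gaussians
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (x[:, None, :] - self.mean[None]) ** 2 / self.var[None]).sum(axis=2)
        return self.classes[np.argmax(ll + self.logprior[None], axis=1)]

# Made-up "terrain" features (e.g., color/texture per pixel) with two classes.
rng = np.random.default_rng(0)
x_train = np.vstack([rng.normal(0.2, 0.1, (100, 3)), rng.normal(0.7, 0.1, (100, 3))])
y_train = np.array([0] * 100 + [1] * 100)          # 0 = traversable, 1 = obstacle
model = GaussianNaiveBayes().fit(x_train, y_train)
print(model.predict(np.array([[0.25, 0.2, 0.15], [0.8, 0.7, 0.75]])))  # -> [0 1]
```

The registration step itself would align the new image to the stored reference before these features are extracted, so that corresponding pixels describe the same patch of terrain.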
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Something Like Human explores corporate social responsibility through a triple lens, providing a content analysis that uses previous literature and history as the standards for evaluation. Section I reviews the history of corporate social responsibility and how it is understood and employed today. Section II turns its focus to a specific socially conscious corporation, Lush Cosmetics, examining its practices in light of the concepts provided in Section I and performing a close analysis of its promotional materials. Section III consists of a mock marketing campaign designed for Lush in light of its social commitments. By the end of this thesis, the goal is for the reader to ask: Can major corporations be something like human?

Contributors: Dalgleish, Alayna Rose (Author) / Gruber, Diane (Thesis director) / Thornton, Leslie-Jean (Committee member) / School of Social and Behavioral Sciences (Contributor) / Walter Cronkite School of Journalism and Mass Comm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Coverage of Black soccer players by Italian media outlets perpetuates narratives rooted in anti-Black racism. These narratives reflect the country's changing attitude toward immigration. Historically a country from which citizens emigrated, Italy is now a recipient of immigrants from Africa. These changing demographics have also caused a shift in the focus of racism in Italy, from discrimination against southern Italians to anti-Black racism. As the country has explored what defines a unified Italian identity, Afro-Italians have been excluded. This study evaluates how these perceptions of Afro-Italian soccer players manifest according to various racial frames, and how frequently they do so, in three Italian sports dailies: La Gazzetta dello Sport, Corriere dello Sport – Stadio, and Tuttosport. In this context, Afro-Italian refers to an Italian citizen of African descent, and anti-Black racism denotes any form of discrimination, stereotyping, or racism that specifically impacts those of African descent. For this study, a representative sample of website coverage published by the three sports dailies was collected: articles devoted to Mario Balotelli that appeared between 2007 and 2014, and articles devoted to Moise Kean between 2016 and 2019. Three coders recorded the content of the sample articles on a spreadsheet organized by the type of racial frame applied to Black athletes. The analysis reveals that the players were frequently portrayed as incapable of self-determination and as having an innate, natural athletic ability rather than one honed through practice. The coders noted that, in addition to explicit racial framing, there were also instances of implicit and subtle ways in which these racial frames manifest. In future research, the coding procedure will need to be adapted to account for these more layered and nuanced manifestations of anti-Black racism.

Created: 2021-05
Description

In recent years, advanced metrics have dominated the game of Major League Baseball. One such metric, the Pythagorean Win-Loss formula, is commonly used by fans, reporters, analysts, and teams alike to estimate a team's expected winning percentage from its runs scored and runs allowed. However, this method is not perfect and shows notable room for improvement. One such weakness is that it can be affected drastically by a single blowout game, a game in which one team significantly outscores its opponent.

We hypothesize that meaningless runs scored in blowouts harm the predictive power of Pythagorean Win-Loss and similar win expectancy statistics such as the Linear Formula for Baseball and BaseRuns. We developed a win probability-based cutoff approach that recorded the score of each game at the point a certain win probability threshold was passed, effectively removing those meaningless runs from a team's season-long runs scored and runs allowed totals. These truncated totals were then inserted into the Pythagorean Win-Loss and Linear formulas and tested against the base models.

The preliminary results show that, while certain runs are more meaningful than others depending on the situation in which they are scored, the base models predicted future record more accurately than our truncated versions. For now, there is not enough evidence to either confirm or reject our hypothesis, and in this paper we suggest several potential strategies for improving the results.

At the end, we address how these results speak to the importance of responsibility and restraint when using advanced statistics in reporting.
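For reference, the sketch below computes the standard Pythagorean expectation and shows how truncated game scores (frozen when a win-probability cutoff is crossed) would feed into it. The game data and the exponent of 2 are placeholder assumptions; this is not the authors' exact implementation.

```python
def pythagorean_pct(runs_scored, runs_allowed, exponent=2.0):
    """Classic Pythagorean expectation: RS^x / (RS^x + RA^x)."""
    return runs_scored**exponent / (runs_scored**exponent + runs_allowed**exponent)

# Hypothetical per-game data: (final runs for, final runs against,
#                              score for/against when the win-probability cutoff was crossed).
games = [
    (5, 3, (5, 3)),     # close game: never crosses the cutoff, full score counts
    (14, 2, (7, 2)),    # blowout: runs after the cutoff are excluded from truncated totals
    (4, 6, (4, 6)),
]

full_rs = sum(g[0] for g in games)
full_ra = sum(g[1] for g in games)
trunc_rs = sum(g[2][0] for g in games)
trunc_ra = sum(g[2][1] for g in games)

print(f"full-season totals:        {pythagorean_pct(full_rs, full_ra):.3f}")
print(f"truncated (cutoff) totals: {pythagorean_pct(trunc_rs, trunc_ra):.3f}")
```

In the thesis's approach, the cutoff scores would come from a play-by-play win-probability model rather than the hand-picked values used here.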

Contributors: Iversen, Joshua Allen (Author) / Satpathy, Asish (Thesis director) / Kurland, Brett (Committee member) / Department of Information Systems (Contributor) / Walter Cronkite School of Journalism and Mass Comm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This thesis research aims to define, identify, and promote community theatre as a "third space" for disadvantaged youth. A third space is defined by the Oxford Dictionary as "...the in-between, or hybrid, spaces, where the first and second spaces work together to generate a new third space. First and second spaces are two different, and possibly conflicting, spatial groupings where people interact physically and socially: such as home (everyday knowledge) and school (academic knowledge)" (Oxford Dictionary, 2021). For disadvantaged youth, the creation of a third space in the theatre can give them a safe environment away from issues they may have at home or at school, further their learning about themselves and others, and help them feel a sense of belonging to a community larger than themselves. Because of these benefits, performing arts programs can clearly have a great impact on disadvantaged youth; however, many theatre companies struggle to market their programming to these communities. This may be due, in part, to low marketing budgets, a lack of labor resources dedicated specifically to youth programming, or ineffective marketing strategies and tactics.

To develop marketing recommendations for these organizations, primary research was conducted to determine the attitudes and beliefs surrounding youth participation in community theatre, as well as the marketing strategies and tactics currently used by programmers. Participants included program managers of youth theatre programs, as well as youth participants, from several major cities in the U.S. The secondary research aims to better understand the target demographic (disadvantaged youth), the benefits derived from participation in arts programming, and marketing strategies for the performing arts. Following the data analysis, several recommendations are given for the learning, planning, and implementation of marketing strategies by theatre programmers.

Contributors: Narducci, Emily Nicole (Co-author) / Feuerstein, Kaleigh (Co-author) / Gray, Nancy (Thesis director) / Woodson, Stephani (Committee member) / Department of Marketing (Contributor) / Department of Information Systems (Contributor) / Walter Cronkite School of Journalism and Mass Comm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05