Matching Items (6)
Description

The purpose of this project is to create a useful tool for musicians that utilizes the harmonic content of their playing to recommend new, relevant chords to play. This is done by training various Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) on the lead sheets of 100 different jazz standards. A total of 200 unique datasets were produced and tested, resulting in the prediction of nearly 51 million chords. A note-prediction accuracy of 82.1% and a chord-prediction accuracy of 34.5% were achieved across all datasets. Methods of data representation rooted in established music theory frameworks were found to increase the efficacy of harmonic prediction by up to 6%. Optimal LSTM input sizes were also determined for each method of data representation.
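A sketch of the data-preparation step such a model relies on: slicing a chord sequence into fixed-size input windows, each paired with the chord that follows it. The chord symbols, window size, and function names here are illustrative, not the project's actual data or code.

```python
# Sketch: turning a lead-sheet chord sequence into fixed-size
# (input window -> next chord) training pairs, the kind of dataset a
# next-chord LSTM is trained on. All names are hypothetical.

def make_vocab(chords):
    """Map each distinct chord symbol to an integer index."""
    return {c: i for i, c in enumerate(sorted(set(chords)))}

def make_windows(chords, window):
    """Sliding windows of `window` chords, each paired with the chord
    that follows it -- the prediction target."""
    pairs = []
    for i in range(len(chords) - window):
        pairs.append((chords[i:i + window], chords[i + window]))
    return pairs

# A ii-V-I progression as a toy "lead sheet".
progression = ["Dm7", "G7", "Cmaj7", "Cmaj7", "Dm7", "G7", "Cmaj7"]
vocab = make_vocab(progression)
pairs = make_windows(progression, window=2)
```

Varying the `window` argument is what a search over "optimal LSTM input sizes" would sweep.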

Contributors: Rangaswami, Sriram Madhav (Author) / Lalitha, Sankar (Thesis director) / Jayasuriya, Suren (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Lyric classification and generation are trending topics in the machine learning community. Long Short-Term Memory (LSTM) networks are effective tools for classifying and generating text. We explored their effectiveness in the generation and classification of lyrical data and proposed methods of evaluating their accuracy. We found that LSTM networks with dropout layers were effective at lyric classification, and that word-embedding LSTM networks were extremely effective at lyric generation.
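The dropout idea the classifier relies on can be illustrated with a minimal inverted-dropout function; this is a generic sketch of the technique, not the thesis's network code, and all names are hypothetical.

```python
import random

def dropout(xs, rate, training, rng=random):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and scale survivors by 1/(1-rate) so the
    expected activation is unchanged; at inference, pass through."""
    if not training:
        return list(xs)
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in xs]
```

In a full LSTM classifier this would sit between layers to reduce overfitting on a small lyric corpus.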
Contributors: Tallapragada, Amit (Author) / Ben Amor, Heni (Thesis director) / Caviedes, Jorge (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

With the continued increase in the amount of renewable generation in the form of distributed energy resources, reliability planning has progressively become a more challenging task for the modern power system. This is because with higher penetration of renewable generation, the system has to bear a higher degree of variability and uncertainty. One way to address this problem is by generating realistic scenarios that complement and supplement actual system conditions. This thesis presents a methodology to create such correlated synthetic scenarios for load and renewable generation using machine learning. Machine learning algorithms need to have ample amounts of data available to them for training purposes. However, real-world datasets are often skewed in the distribution of the different events in the sample space. Data augmentation and scenario generation techniques are often utilized to complement the datasets with additional samples or by filling in missing data points. Datasets pertaining to the electric power system are especially prone to having very few samples for certain events, such as abnormal operating conditions, as they are not very common in an actual power system. A recurrent generative adversarial network (GAN) model is presented in this thesis to generate solar and load scenarios in a correlated manner using an actual dataset obtained from a power utility located in the U.S. Southwest. The generated solar and load profiles are verified both statistically and by implementation on a simulated test system, and the performance of correlated scenario generation vs. uncorrelated scenario generation is investigated. Given the interconnected relationships between the variables of the dataset, it is observed that correlated scenario generation results in more realistic synthetic scenarios, particularly for abnormal system conditions.
When combined with actual but scarce abnormal conditions, the augmented dataset of system conditions provides a better platform for performing contingency studies for more thorough reliability planning. The proposed scenario generation method is scalable and can be modified to work with different time-series datasets. Moreover, when the model is trained in a conditional manner, it can be used to synthesize any number of scenarios for the different events present in a given dataset. In summary, this thesis explores scenario generation using a recurrent conditional GAN and investigates the benefits of correlated generation compared to uncorrelated synthesis of profiles for the reliability planning problem of power systems.
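The difference between correlated and uncorrelated scenario sampling can be shown with a toy sketch. The GAN in the thesis learns the correlation from data; here it is imposed directly via the 2x2 Cholesky factor of a correlation matrix, and all names are hypothetical.

```python
import math
import random

def correlated_pair_samples(n, rho, seed=0):
    """Draw n (solar, load) noise pairs with target correlation `rho`
    using the Cholesky factor of [[1, rho], [rho, 1]]. With rho = 0
    this reduces to uncorrelated sampling."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        pairs.append((z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2))
    return pairs

def pearson(pairs):
    """Empirical Pearson correlation, the statistical check applied
    to generated profiles."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs) / n)
    return cov / (sx * sy)
```

Verifying that the empirical correlation of generated profiles matches the data is one of the statistical checks described above.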
Contributors: Bilal, Muhammad (Author) / Pal, Anamitra (Thesis advisor) / Holbert, Keith (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Many real-world engineering problems require simulations to evaluate the design objectives and constraints. Often, due to the complexity of the system model, simulations can be prohibitive in terms of computation time. One approach to overcoming this issue is to construct a surrogate model, which approximates the original model. The focus of this work is on data-driven surrogate models, in which empirical approximations of the output are performed given the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs achieve excellent accuracy and are widely used, they pose their own challenges. This work addresses two common challenges: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability. The high demand for inference of deep NNs on cloud servers and edge devices calls for the design of low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then use architectural-level techniques to boost the performance. The proposed design is synthesized, placed, and routed as an application-specific integrated circuit (ASIC). The results demonstrate that this accelerator is 1.2X more energy-efficient and 3.6X more area-efficient than the baseline LSTM. In the second part of this work, a robust framework is developed based on an alternate data-driven surrogate model referred to as polynomial chaos expansion (PCE) for addressing UQ. In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients.
Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning or changing the order of the model. The framework is evaluated on several real-world applications from different domains and is extended to classification tasks as well.
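The idea of reading UQ off the expansion coefficients can be sketched on a one-dimensional toy problem with probabilists' Hermite polynomials. The coefficients here are estimated by plain Monte Carlo projection, and the test function and names are illustrative, not the thesis's framework.

```python
import random

def hermite(k, x):
    """Probabilists' Hermite polynomials He_0..He_2, the orthogonal
    basis for a standard-normal input."""
    return [1.0, x, x * x - 1.0][k]

def pce_coeffs(f, degree=2, n=100000, seed=0):
    """Estimate c_k = E[f(X) He_k(X)] / k! by Monte Carlo projection,
    with X ~ N(0,1) and k! the norm of He_k."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    fact = [1.0, 1.0, 2.0]  # k! for k = 0, 1, 2
    return [sum(f(x) * hermite(k, x) for x in xs) / (n * fact[k])
            for k in range(degree + 1)]

def pce_variance(coeffs):
    """Output variance follows analytically from the coefficients:
    Var = sum_{k>=1} c_k^2 * k! -- UQ by post-processing alone."""
    fact = [1.0, 1.0, 2.0]
    return sum(c * c * fact[k] for k, c in enumerate(coeffs) if k > 0)

# f(x) = x^2 has the exact expansion He_0 + He_2, so c = (1, 0, 1)
# and Var = 2, matching the variance of X^2 for X ~ N(0,1).
coeffs = pce_coeffs(lambda x: x * x)
```

Dropping a coefficient and recomputing the variance is the "incremental pruning" step: no new simulations are needed.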
Contributors: Azari, Elham (Author) / Vrudhula, Sarma (Thesis advisor) / Fainekos, Georgios (Committee member) / Ren, Fengbo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

An ontology is a vocabulary that provides a formal description of entities within a domain and their relationships with other entities. Along with basic schema information, it also captures metadata about cardinality, restrictions, hierarchy, and semantic meaning. With the rapid growth of semantic (RDF) data on the web, organizations such as DBpedia and the Earth Science Information Partners (ESIP) are publishing more and more data in RDF format. The ontology alignment task aims at linking two or more ontologies from the same domain or different domains. It is the process of finding semantic relationships between two or more ontological entities and/or instances. Information and data sharing among different systems is quite limited because of differences in syntax, structure, and semantics. Ontology alignment is used to overcome this limitation on the semantic interoperability of the vast distributed systems available on the Web. In spite of the availability of large hierarchical domain-specific datasets, automated ontology mapping is still a complex problem. Over the years, many techniques have been proposed for ontology instance alignment, schema alignment, and link discovery. Most of the available approaches require human intervention or work only within a specific domain. The first challenge involves representing an entity as a vector that encodes all context information of the entity, such as hierarchical information, properties, and constraints. The ontological representation is rich in comparison with a regular data schema because of its metadata about properties, constraints, and relationships to other entities within the domain. When finding similarities between entities, this metadata is often overlooked. The second challenge is that the comparison of two ontologies is an intense operation and depends heavily on the domain and the language in which the ontologies are expressed.
Most current methods require human intervention, which makes the process time-consuming and cumbersome and leaves the output prone to human error. The proposed unsupervised recursive neural network technique achieves an F-measure of 80.3% on the Anatomy dataset, and the proposed graph neural network technique achieves an F-measure of 81.0% on the same dataset.
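The F-measure used to score alignment systems is standard precision/recall over predicted versus reference entity pairs; a minimal sketch, with made-up entity pairs:

```python
def alignment_f_measure(predicted, gold):
    """F1 of a predicted set of entity alignments against a gold
    reference, as used on benchmarks such as the Anatomy dataset.
    Each alignment is a (source_entity, target_entity) pair."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # correctly found alignments
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```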
Contributors: Chakraborty, Jaydeep (Author) / Bansal, Srividya (Thesis advisor) / Sherif, Mohamed (Committee member) / Bansal, Ajay (Committee member) / Hsiao, Sharon (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

In a pursuit-evasion setup where one group of agents tracks down another adversarial group, vision-based algorithms have been known to use techniques such as linear dynamic estimation to determine the probable future location of an evader in a given environment. This gives a pursuer an edge over an evader that has conventionally benefited from the uncertainty of the pursuit. The pursuer can use this knowledge to capture the evader faster than a pursuer that only knows the evader's current location. Inspired by the function of dorsal anterior cingulate cortex (dACC) neurons in natural predators, a predictive model built with an encoder-decoder Long Short-Term Memory (LSTM) network is proposed that produces a more accurate estimate of the evader's future location, enabling an even quicker capture than previously used filtering-based methods. The effectiveness of the approach is evaluated by setting up these agents in an environment based on the Modular Open Robots Simulation Engine (MORSE). Cross-domain adaptability of the method, without the need to retrain the prediction model, is demonstrated by evaluating it in another domain.
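The filtering-based baseline the LSTM is compared against can be caricatured as constant-velocity extrapolation of the evader's track; a minimal sketch with hypothetical names, not the thesis's estimator.

```python
def constant_velocity_predict(track, steps):
    """Linear-dynamics baseline: estimate the evader's position
    `steps` ticks ahead by extrapolating the last observed velocity.
    `track` is a list of (x, y) observations, newest last. A learned
    encoder-decoder predictor would replace exactly this step."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # velocity per tick from last two fixes
    return (x1 + steps * vx, y1 + steps * vy)
```

The baseline is exact for straight-line motion but degrades as the evader maneuvers, which is where a learned predictor can help.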
Contributors: Godbole, Sumedh (Author) / Yang, Yezhou (Thesis advisor) / Srivastava, Siddharth (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2021