Matching Items (158)
Description


The research presented in this Honors Thesis develops machine learning models that predict future states of a system with unknown dynamics, based on observations of the system. Two case studies are presented: (1) a non-conservative pendulum and (2) a differential game describing a two-car uncontrolled-intersection scenario. In the paper we investigate how learning architectures can be tailored to problem-specific geometry. The results show that these problem-specific models are valuable for accurately learning and predicting the dynamics of physical systems.

In order to properly model the physics of a real pendulum, modifications were made to a prior architecture that was sufficient for modeling an ideal pendulum. The necessary modifications to the previous network [13] were problem-specific and not transferable to other non-conservative physics scenarios. The modified architecture successfully models real pendulum dynamics. This case study provides a basis for future research on augmenting the symplectic gradient of a Hamiltonian energy function to produce a generalized, non-conservative physics model.

A problem-specific architecture was also used to create an accurate model for the two-car intersection case. The Costate Network proved to be an improvement over the previously used Value Network [17], although this comparison should be interpreted cautiously because of slight implementation differences. The development of the Costate Network provides a basis for using characteristics to decompose functions and create a simplified learning problem.

This work creates new opportunities for developing physics models; the sample cases should be used as a guide for modeling other real and pseudo-physics systems. Although the focused models in this paper are not generalizable, these cases provide direction for future research.
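As context, the following is a minimal sketch (in PyTorch) of one way to augment the symplectic gradient of a learned Hamiltonian with a dissipative term for a non-conservative system; the class name, layer sizes, and form of the dissipation are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class DissipativeHNN(nn.Module):
    """Learns a scalar energy H(q, p); predicted dynamics follow the
    symplectic gradient of H plus a learned non-conservative correction."""
    def __init__(self, hidden=64):
        super().__init__()
        self.energy = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                    nn.Linear(hidden, 1))
        # Simple learned dissipation acting on momentum (illustrative).
        self.dissipation = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                         nn.Linear(hidden, 1))

    def forward(self, x):
        # x = [q, p]; gradients of H with respect to x are needed.
        x = x.requires_grad_(True)
        H = self.energy(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dq_dt = dH[:, 1:2]                          # dq/dt =  dH/dp
        dp_dt = -dH[:, 0:1] - self.dissipation(x)   # dp/dt = -dH/dq - friction
        return torch.cat([dq_dt, dp_dt], dim=1)

# Training would regress the predicted (dq/dt, dp/dt) onto finite-difference
# estimates computed from observed pendulum trajectories, e.g. with MSE loss.
```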

Contributors: Merry, Tanner (Author) / Ren, Yi (Thesis director) / Zhang, Wenlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


High-entropy alloys possessing mechanical, chemical, and electrical properties that far exceed those of conventional alloys have the potential to make a significant impact on many areas of engineering. Identifying element combinations and configurations to form these alloys, however, is a difficult, time-consuming, computationally intensive task. Machine learning has revolutionized many different fields due to its ability to generalize well to different problems and to produce computationally efficient, accurate predictions about the system of interest. In this thesis, we demonstrate the effectiveness of machine learning models applied to toy cases representative of simplified physics relevant to high-entropy alloy simulation. We show that these models are effective at learning nonlinear dynamics for single- and multi-particle cases, and that more work is needed to accurately represent complex cases in which the system dynamics are chaotic. This thesis serves as a demonstration of the potential benefits of machine learning applied to high-entropy alloy simulations to generate fast, accurate predictions of nonlinear dynamics.
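As an illustration of the kind of toy case described above, a minimal sketch of learning single-particle nonlinear dynamics follows; the potential, integrator, and model choices are assumptions for illustration only, not the thesis's actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy single-particle case (illustrative): a particle in a quartic potential,
# integrated with explicit Euler to build (state_t -> state_{t+1}) pairs.
def step(state, dt=0.01):
    x, v = state
    force = -4.0 * x**3 + 2.0 * x       # F = -dV/dx for V = x^4 - x^2
    return np.array([x + v * dt, v + force * dt])

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(5000, 2))
targets = np.array([step(s) for s in states])

# Fit a small MLP to the one-step dynamics and check held-out error.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(states[:4000], targets[:4000])
print("test MSE:", np.mean((model.predict(states[4000:]) - targets[4000:]) ** 2))
```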

Contributors: Daly, John H (Author) / Ren, Yi (Thesis director) / Zhuang, Houlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


Robots are often used in long-duration scenarios, such as on the surface of Mars, where they may need to adapt to environmental changes. Typically, robots have been built specifically for single tasks, such as moving boxes in a warehouse or surveying construction sites. However, there is a modern trend away from human hand-engineering and toward robot learning. To this end, the ideal robot is not engineered, but automatically designed for a specific task. This thesis focuses on robots which learn path-planning algorithms for specific environments. Learning is accomplished via genetic programming: path-planners are represented as Python code, which is optimized via Pareto evolution, and the evolved planners are encouraged to explore curiously and efficiently. This research asks the questions: "How can robots exhibit life-long learning where they adapt to changing environments in a robust way?" and "How can robots learn to be curious?"
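As a rough sketch of the Pareto-evolution step described above, the following shows non-dominated selection over multiple objectives (e.g., curiosity-driven coverage and path efficiency); the function names, objectives, and loop structure are illustrative assumptions, not the thesis's actual implementation.

```python
import random

def dominates(a, b):
    """a Pareto-dominates b when it is no worse on every objective
    (here: higher is better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population, scores):
    """Keep only the non-dominated individuals (the Pareto front)."""
    front = []
    for i, s_i in enumerate(scores):
        if not any(dominates(s_j, s_i) for j, s_j in enumerate(scores) if j != i):
            front.append(population[i])
    return front

def evolve(population, evaluate, mutate, generations=50):
    """Minimal Pareto-evolution loop: evaluate, keep the front, refill by mutation.
    `evaluate` returns a tuple of objectives, e.g. (coverage, -path_cost)."""
    for _ in range(generations):
        scores = [evaluate(p) for p in population]
        survivors = pareto_front(population, scores)
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(population) - len(survivors))]
    return population
```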

Contributors: Saldyt, Lucas P (Author) / Ben Amor, Heni (Thesis director) / Pavlic, Theodore (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description


Colorimetric assays are an important tool in point-of-care testing, offering several advantages over traditional testing methods such as rapid response times and low cost. A factor that currently limits the portability and accessibility of these assays is the method required to objectively determine their results. Current solutions consist of building a test reader that standardizes the conditions under which the strip is measured; however, this increases the cost and decreases the portability of these assays. The focus of this study is to create a machine learning algorithm that can objectively determine the results of colorimetric assays under varying conditions. To ensure that the approach is flexible across several types of colorimetric assays, three models were trained using the same convolutional neural network architecture on different datasets. The training images consist of positive and negative ETG, fentanyl, and HPV antibody test strips photographed under different lighting and background conditions. A fourth model was trained on an image set composed of all three strip types. The results show that these models can predict positive and negative results with a high level of accuracy.
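As context, a minimal sketch of a small binary CNN classifier of the kind described is shown below; the layer sizes, input handling, and class name are illustrative assumptions rather than the study's actual architecture.

```python
import torch
import torch.nn as nn

class StripClassifier(nn.Module):
    """Small CNN mapping an RGB test-strip photo to a positive/negative logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # single logit; pair with BCEWithLogitsLoss

    def forward(self, x):                    # x: (batch, 3, H, W)
        return self.classifier(self.features(x).flatten(1))

# Training sketch: one such model per strip type (ETG, fentanyl, HPV antibody),
# plus a fourth trained on the pooled image set.
# loss = nn.BCEWithLogitsLoss()(model(images), labels.float().unsqueeze(1))
```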

Contributors: Fisher, Rachel (Author) / Blain Christen, Jennifer (Thesis director) / Anderson, Karen (Committee member) / School of Life Sciences (Contributor) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Multi-label learning, which deals with data associated with multiple labels simultaneously, is ubiquitous in real-world applications. To overcome the curse of dimensionality in multi-label learning, in this thesis I study multi-label dimensionality reduction, which extracts a small number of features by removing irrelevant, redundant, and noisy information while considering the correlation among different labels. Specifically, I propose Hypergraph Spectral Learning (HSL) to perform dimensionality reduction for multi-label data by exploiting correlations among different labels using a hypergraph. The regularization effect on the classical dimensionality reduction algorithm known as Canonical Correlation Analysis (CCA) is elucidated in this thesis, and the relationship between CCA and Orthonormalized Partial Least Squares (OPLS) is investigated. To perform dimensionality reduction efficiently for large-scale problems, two efficient implementations are proposed for a class of dimensionality reduction algorithms, including CCA, OPLS, linear discriminant analysis, and hypergraph spectral learning. The first is a direct least-squares approach that allows the use of different regularization penalties but is applicable only under a certain assumption; the second is a two-stage approach that can be applied in the regularization setting without any assumption. Furthermore, an online implementation for the same class of dimensionality reduction algorithms is proposed for data that arrive sequentially. A Matlab toolbox for multi-label dimensionality reduction has been developed and released. The proposed algorithms have been applied successfully to Drosophila gene expression pattern image annotation, and experimental results on benchmark multi-label data sets demonstrate the effectiveness and efficiency of the proposed algorithms.
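For reference, the standard normalized hypergraph Laplacian (following the common construction of Zhou et al.) is shown below as an example of how label correlations can be encoded; this is the usual textbook form, not necessarily the exact formulation used in the thesis.

```latex
% Each label defines a hyperedge containing the samples annotated with that
% label. H is the vertex-hyperedge incidence matrix, W the diagonal hyperedge
% weights, and D_v, D_e the vertex and hyperedge degree matrices.
\[
  \mathcal{L} \;=\; I \;-\; D_v^{-1/2}\, H\, W\, D_e^{-1}\, H^{\top} D_v^{-1/2}
\]
% Hypergraph Spectral Learning then seeks a projection that preserves the
% spectral structure of \mathcal{L}, coupling correlated labels.
```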
Contributors: Sun, Liang (Author) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Liu, Huan (Committee member) / Mittelmann, Hans D. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Multi-task learning (MTL) aims to improve the generalization performance of the resulting classifiers by learning multiple related tasks simultaneously. Specifically, MTL exploits intrinsic task relatedness, based on which informative domain knowledge from each task can be shared across multiple tasks to facilitate individual task learning. Sharing domain knowledge among tasks is particularly desirable when there are a number of related tasks but only limited training data available for each one. Modeling the relationship of multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches that assume the tasks are intrinsically related via a shared low-dimensional feature space. The proposed approaches are developed for different scenarios and settings; each is formulated as a mathematical optimization problem that minimizes the empirical loss regularized by a different structure. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solutions efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate practical performance, I apply the proposed MTL approaches to real-world applications: (1) automated annotation of Drosophila gene expression pattern images and (2) categorization of Yahoo web pages. The experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
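As a point of reference, one standard way to encode the shared low-dimensional feature space assumption is trace-norm (nuclear-norm) regularization of the stacked task weight matrix, sketched below; this illustrates the family of regularized empirical-loss formulations, not necessarily the exact objectives proposed in the dissertation.

```latex
% Empirical loss over T tasks, with the task weight vectors stacked into
% W = [w_1, ..., w_T]; the trace norm ||W||_* encourages W to be low rank,
% i.e. the tasks to share a low-dimensional feature subspace.
\[
  \min_{W}\; \sum_{t=1}^{T} \frac{1}{n_t} \sum_{i=1}^{n_t}
      \ell\!\left(w_t^{\top} x_{t,i},\; y_{t,i}\right)
  \;+\; \lambda \, \lVert W \rVert_{*}
\]
```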
Contributors: Chen, Jianhui (Author) / Ye, Jieping (Thesis advisor) / Kumar, Sudhir (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description


The field of biomedical research relies on knowledge of the binding interactions between various proteins of interest to create novel molecular targets for therapeutic purposes. While many of these interactions remain a mystery, knowledge of these properties and interactions could have significant medical applications for understanding cell signaling and immunological defenses. Furthermore, there is evidence that machine learning and peptide microarrays can be used to make reliable predictions of where proteins could interact with each other without definitive knowledge of the interactions. In this case, a neural network was used to predict the unknown binding interactions of TNFR2 onto LT-α and TRAF2, and of PD-L1 onto CD80, based on binding data from a sampling of protein-peptide interactions on a microarray. The accuracy and reliability of these predictions rely on future research to confirm the interactions of these proteins, but the knowledge from these methods and predictions could have a future impact on rational and structure-based drug design.
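As a rough sketch of how microarray binding data could feed such a network, the following encodes peptide sequences and regresses their measured binding signals; the encoding, peptide length, and model are illustrative assumptions and not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide, length=12):
    """Fixed-length one-hot encoding of a peptide sequence (truncated/padded)."""
    x = np.zeros((length, len(AMINO_ACIDS)))
    for i, aa in enumerate(peptide[:length]):
        x[i, AMINO_ACIDS.index(aa)] = 1.0
    return x.ravel()

def train_binding_model(peptides, signals):
    """peptides: microarray peptide sequences; signals: measured binding
    intensities of the protein of interest (e.g. TNFR2) for each peptide."""
    X = np.array([one_hot(p) for p in peptides])
    model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=1000)
    model.fit(X, signals)
    return model   # model.predict(one_hot(seq)[None, :]) scores unseen sequences
```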

Contributors: Poweleit, Andrew Michael (Author) / Woodbury, Neal (Thesis director) / Diehnelt, Chris (Committee member) / Chiu, Po-Lin (Committee member) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The OMFIT (One Modeling Framework for Integrated Tasks) modeling environment and the BRAINFUSE module have been deployed on the PPPL (Princeton Plasma Physics Laboratory) computing cluster with modifications that make it possible to apply artificial neural networks (NNs) to the TRANSP databases for the JET (Joint European Torus), TFTR (Tokamak Fusion Test Reactor), and NSTX (National Spherical Torus Experiment) devices. This development has facilitated the investigation of NNs for predicting heat transport profiles in JET, TFTR, and NSTX, and has prompted additional investigations into how else NNs may be of use to scientists at PPPL. The primary goal of applying NNs to these devices is to reproduce the success shown by Meneghini et al. in using NNs for heat transport prediction in DIII-D. Reproducing these results is important because it would provide scientists at PPPL with a quick and efficient toolset for reliably predicting heat transport profiles much faster than existing computational methods allow; the progress toward this goal is outlined in this report, and potential additional applications of the NN framework are presented.
Contributors: Luna, Christopher Joseph (Author) / Tang, Wenbo (Thesis director) / Treacy, Michael (Committee member) / Meneghini, Orso (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2015-05
Description

Twitter, the microblogging platform, has grown in prominence to the point that the topics that trend on the network are often the subject of the news and other traditional media. By predicting trends on Twitter, it could be possible to predict the next major topic of interest to the public. With this motivation, this paper develops a model for trends leveraging previous work with k-nearest-neighbors and dynamic time warping. The development of this model provides insight into the length and features of trends, and the model successfully generalizes to identify 74.3% of trends in the time period of interest. The model developed in this work also offers understanding of why particular words trend on Twitter.
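As context, a minimal sketch of the kNN-with-dynamic-time-warping approach referenced above is shown below: a candidate topic's activity time series is compared to labeled reference series by DTW distance and classified by majority vote; the function names and voting scheme are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D activity time series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_predict(query, references, labels, k=5):
    """Label a candidate series by majority vote of its k DTW-nearest references
    (labels being, e.g., 'trend' / 'non-trend')."""
    dists = [dtw_distance(query, r) for r in references]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```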
Contributors: Marshall, Grant A (Author) / Liu, Huan (Thesis director) / Morstatter, Fred (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

Epilepsy affects numerous people around the world and is characterized by recurring seizures, motivating efforts to predict seizures so that precautionary measures may be employed. One promising algorithm extracts spatiotemporal-correlation-based features from intracranial electroencephalography signals for use with support vector machines. The robustness of this methodology is tested through a sensitivity analysis, which also provides insight into how to construct more effective feature vectors.
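As a rough sketch of the kind of feature extraction described, the following summarizes each windowed iEEG segment by the eigenvalue spectrum of its inter-channel correlation matrix and feeds the result to an SVM; the window handling, labels, and classifier settings are illustrative assumptions rather than the thesis's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

def correlation_eig_features(window):
    """window: (channels, samples) iEEG segment. Returns the descending
    eigenvalue spectrum of the inter-channel correlation matrix."""
    corr = np.corrcoef(window)
    eigvals = np.linalg.eigvalsh(corr)        # real, ascending
    return np.sort(eigvals)[::-1]

def build_dataset(segments, labels):
    """segments: list of (channels, samples) arrays; labels: 1 preictal, 0 interictal."""
    X = np.array([correlation_eig_features(s) for s in segments])
    return X, np.array(labels)

# clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)  # then evaluate on held-out windows
```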
Contributors: Ma, Owen (Author) / Bliss, Daniel (Thesis director) / Berisha, Visar (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2015-05