Matching Items (681)

Surface Mechanical Attrition Treatment (SMAT) of 7075 Aluminum Alloy to Induce a Protective Corrosion Resistant Layer

Description

This paper investigates Surface Mechanical Attrition Treatment (SMAT) and the influence of treatment temperature and initial sample surface finish on the corrosion resistance of 7075-T651 aluminum alloy. Ambient SMAT was performed on AA7075 samples polished to an 80-grit initial surface roughness. Potentiodynamic polarization and electrochemical impedance spectroscopy (EIS) tests were used to characterize the corrosion behavior of samples before and after SMAT. Electrochemical tests indicated improved corrosion resistance after application of the SMAT process. The observed improvements in corrosion properties are potentially due to microstructural changes in the material surface induced by SMAT, which encouraged the formation of a passive oxide layer. Further testing and research are required to understand the corrosion-related effects of cryogenic SMAT and initial surface finish, as the COVID-19 pandemic inhibited experimentation plans.
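
For background (standard electrochemistry, not equations taken from this thesis): potentiodynamic polarization curves are typically fit to the Tafel relation to extract the corrosion current density, which scales inversely with corrosion resistance.

```latex
% Tafel analysis of a potentiodynamic polarization curve (textbook
% background, not from the thesis). E_corr: corrosion potential;
% beta_a, beta_c: anodic and cathodic Tafel slopes; i_corr: corrosion
% current density, extracted by fitting the measured curve.
\[
  i = i_{\mathrm{corr}}
      \left[
        \exp\!\left(\frac{2.303\,(E - E_{\mathrm{corr}})}{\beta_a}\right)
        - \exp\!\left(-\frac{2.303\,(E - E_{\mathrm{corr}})}{\beta_c}\right)
      \right]
\]
```

A lower fitted i_corr after treatment would be consistent with the improved corrosion resistance reported above.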

Date Created
2020-05

Prescription Information Extraction from Electronic Health Records using BiLSTM-CRF and Word Embeddings

Description

Medical records are increasingly being recorded in the form of electronic health records (EHRs), with a significant amount of patient data recorded as unstructured natural language text. Consequently, being able to extract and utilize the clinical data present within these records is an important step in furthering clinical care. One important aspect of these records is the presence of prescription information. Existing techniques for extracting prescription information, which includes medication names, dosages, frequencies, reasons for taking, and mode of administration, from unstructured text have focused on the application of rule- and classifier-based methods. While state-of-the-art systems can be effective in extracting many types of information, they require significant effort to develop hand-crafted rules and conduct effective feature engineering. This paper presents a bidirectional LSTM with a CRF tagging layer (BiLSTM-CRF), initialized with precomputed word embeddings, for extracting prescription information from sentences without requiring significant feature engineering. The experimental results on the i2b2 2009 dataset achieve a macro F1 measure of 0.8562, with scores above 0.9449 on four of the six categories, indicating significant potential for this model.
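
The description above maps onto a standard BiLSTM-CRF tagger. A minimal sketch follows, assuming PyTorch with the pytorch-crf package (the thesis does not name its framework, so the library choice and all sizes here are illustrative):

```python
# Minimal BiLSTM-CRF sequence tagger, assuming PyTorch + pytorch-crf.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128,
                 pretrained=None):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        if pretrained is not None:          # precomputed word embeddings
            self.embed.weight.data.copy_(pretrained)
        self.lstm = nn.LSTM(emb_dim, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.proj = nn.Linear(hidden, num_tags)   # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions = self.proj(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def predict(self, tokens, mask):
        emissions = self.proj(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)  # Viterbi-best tags
```

At training time the CRF log-likelihood is maximized over labeled token sequences; at inference time, decode returns the most likely tag sequence per sentence.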

Date Created
2018-05

ReL GoalD (Reinforcement Learning for Goal Dependencies)

Description

In this project, the use of deep neural networks for selecting actions to execute within an environment to achieve a goal is explored. Scenarios like this are common in crafting-based games such as Terraria or Minecraft. Goals in these environments have recursive sub-goal dependencies, which form a dependency tree. An agent operating within these environments has access to little data about the environment before interacting with it, so it is crucial that the agent is able to effectively utilize a tree of dependencies and its environmental surroundings to make judgments about which sub-goals are most efficient to pursue at any point in time. A successful agent aims to minimize cost when completing a given goal. A deep neural network in combination with Q-learning techniques was employed to act as the agent in this environment. This agent consistently performed better than agents using alternate models (models that used dependency-tree heuristics or human-like approaches to make sub-goal-oriented choices), with an average performance advantage of 33.86% (standard deviation 14.69%) over the best alternate agent. This shows that machine learning techniques can be consistently employed to make goal-oriented choices within an environment with recursive sub-goal dependencies and little prior information.
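
As a concrete illustration of the Q-learning component described above, here is a minimal tabular sketch of sub-goal selection; the thesis replaces the table with a deep neural network, and the environment interface (reset/step/actions) is hypothetical:

```python
# Tabular Q-learning over sub-goals (illustrative sketch; the thesis uses
# a deep network in place of the Q table). reward = -cost, so maximizing
# return corresponds to the cost minimization described above.
import random

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = {}  # (state, sub-goal) -> estimated return
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy choice among currently feasible sub-goals
            if random.random() < eps:
                action = random.choice(actions(state))
            else:
                action = max(actions(state),
                             key=lambda a: Q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            best_next = max((Q.get((next_state, a), 0.0)
                             for a in actions(next_state)), default=0.0)
            target = reward + (0.0 if done else gamma * best_next)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (target - old)
            state = next_state
    return Q
```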

Date Created
2018-05

Evaluation of an Original Design for a Cost-Effective Wheel-Mounted Dynamometer for Road Vehicles

Description

This thesis evaluates the viability of an original design for a cost-effective wheel-mounted dynamometer for road vehicles. The goal is to show whether a device that generates torque and horsepower curves by processing accelerometer data collected at the edge of a wheel can yield results comparable to those obtained using a conventional chassis dynamometer. Torque curves were generated via the experimental method under a variety of circumstances and also obtained professionally by a precision engine testing company. Metrics were created to measure how precisely the experimental device could consistently generate torque curves and to compare the similarity of these curves to the professionally obtained torque curves. The results revealed that although the test device does not quite provide the same level of precision as the professional chassis dynamometer, it does create torque curves that closely resemble the chassis dynamometer curves and exhibit a consistency between trials comparable to the professional results, even on rough road surfaces. The results suggest that the test device provides enough accuracy and precision to satisfy the needs of most consumers interested in measuring their vehicle's engine performance, but probably lacks the level of accuracy and precision needed to appeal to professionals.
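
One plausible, simplified version of the computation such a device performs is sketched below (a no-slip, no-drag idealization; not necessarily the thesis's actual algorithm):

```python
# Recover torque and power curves from tangential acceleration sampled
# at the wheel rim. Drag, driveline losses, and rotational inertia are
# ignored; this is a sketch of the idea only.
import numpy as np

def torque_power(a_tangential, dt, r_wheel, vehicle_mass):
    """a_tangential: rim accelerometer samples (m/s^2); dt: sample period (s)."""
    alpha = a_tangential / r_wheel           # angular acceleration (rad/s^2)
    omega = np.cumsum(alpha) * dt            # angular speed by integration (rad/s)
    force = vehicle_mass * alpha * r_wheel   # tractive force, no-slip (N)
    torque = force * r_wheel                 # torque at the wheel (N*m)
    power = torque * omega                   # mechanical power (W)
    return omega, torque, power
```

In practice, smoothing the accelerometer signal and accounting for losses and rotating inertia would be needed for results comparable to a chassis dynamometer.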

Date Created
2018-05

Data Management Behind Machine Learning

Description

This thesis dives into the world of artificial intelligence by exploring the functionality of a single-layer artificial neural network through a simple housing-price classification example, while simultaneously considering its impact from a data management perspective at both the software and hardware levels. To begin this study, the universally accepted model of an artificial neuron is broken down into its key components and analyzed for functionality by relating it back to its biological counterpart. The role of a neuron is then described in the context of a neural network, with equal emphasis placed on how it undergoes training individually and as part of an entire network. Using supervised learning, the neural network is trained with three main factors for housing-price classification: the total number of rooms, the number of bathrooms, and the square footage. Once trained with most of the generated data set, it is tested for accuracy by introducing the remainder of the data set and observing how closely its computed output for each set of inputs compares to the target value. From a programming perspective, the artificial neuron is implemented in C so that it is more closely tied to the operating system, making the collected profiler data more precise during the program's execution. The program is designed to break each stage of the neuron's training process into distinct functions. In addition to utilizing functional code, the struct data type is used as the underlying data structure for this project, not only to represent the neuron but also to implement its training and test data. Once the neuron is fully trained, its test results are graphed to visually depict how well it learned from its sample training set. Finally, the profiler data is analyzed to describe how the program operated from a data management perspective at the software and hardware levels.
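
A compact Python sketch of the single-neuron training loop described above (the thesis implements this in C with a struct; the dataclass below plays that role, and all hyperparameters are illustrative):

```python
# Single sigmoid neuron trained by gradient descent on squared error.
# Inputs: (rooms, bathrooms, square_feet); target: price class in {0, 1}.
from dataclasses import dataclass, field
import math, random

@dataclass
class Neuron:
    weights: list = field(
        default_factory=lambda: [random.uniform(-1, 1) for _ in range(3)])
    bias: float = 0.0

    def output(self, x):
        z = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))          # sigmoid activation

    def train(self, data, lr=0.01, epochs=100):
        # supervised learning: descend the gradient of 0.5*(y - target)^2
        for _ in range(epochs):
            for x, target in data:
                y = self.output(x)
                err = (y - target) * y * (1.0 - y)  # dE/dz for sigmoid
                self.weights = [w - lr * err * xi
                                for w, xi in zip(self.weights, x)]
                self.bias -= lr * err
```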

Date Created
2018-05

In situ SEM Testing for Fatigue Crack Growth: Mechanical Investigation of Titanium

Description

Widespread knowledge of fracture mechanics is mostly based on previous models that generalize crack growth in materials over several loading cycles. The objective of this project is to characterize the crack growth that occurs in titanium alloys, specifically Grade 5 Ti-6Al-4V, at the sub-cycle scale, i.e., within a single loading cycle. Using scanning electron microscopy (SEM), imaging analysis is performed to observe crack behavior at ten loading steps throughout the loading and unloading paths. The analysis involves measuring the incremental crack growth and crack tip opening displacement (CTOD) of specimens at load ratios of 0.1, 0.3, and 0.5. This report defines the relationship between crack growth and the stress intensity factor, K, of the specimens, as well as the relationship between the R-ratio and the crack opening stress level. The crack closure phenomenon and the effect of microcracks are discussed as they influence crack growth behavior. This method has previously been used to characterize crack growth in Al 7075-T6, and the results for Ti-6Al-4V are compared to those previous findings in order to strengthen conclusions about crack growth behavior.
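
For reference, the standard fracture-mechanics quantities named above relate as follows (textbook definitions, not equations taken from the thesis):

```latex
% Delta K: stress intensity factor range; Y: geometry factor; a: crack
% length; R: load ratio; the Paris law relates macroscopic growth per
% cycle to Delta K, with material constants C and m.
\[
  \Delta K = Y\,\Delta\sigma\,\sqrt{\pi a},
  \qquad
  R = \frac{\sigma_{\min}}{\sigma_{\max}},
  \qquad
  \frac{da}{dN} = C\,(\Delta K)^{m}
\]
```

Sub-cycle measurements of the kind described above resolve how growth accumulates within a single cycle, which cycle-averaged relations like the Paris law do not capture.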

Date Created
2018-05

An Examination of the Impact of Support Design on 316 Stainless Steel Supports

Description

The removal of support material from metal 3D-printed objects is a laborious necessity in the post-processing of powder bed fusion (PBF) printing. Supports are typically removed mechanically by machining techniques. Sacrificial supports are necessary in PBF printing to relieve thermal stresses and support overhanging features, which often results in supports being included in regions of the part that are not easily accessed by mechanical removal methods. Recent innovations in PBF support removal include dissolvable metal supports removed through an electrochemical etching process. Dissolvable PBF supports have the potential to significantly reduce the costs and time associated with traditional support removal. However, the speed and effectiveness of this approach are inhibited by numerous factors, such as support geometry and metal powder entrapment within supports. To fully realize this innovative approach, it is necessary to model and understand the design parameters needed to optimize support structures for an electrochemical etching process. The objective of this study was to evaluate the impact of block additive manufacturing support parameters on key process outcomes of the dissolution of 316 stainless steel support structures. The parameters investigated included hatch spacing and perforation, and the outcomes of interest included the time required for completion, surface roughness, and the effectiveness of the etching process. Electrical current was also evaluated as an indicator of process completion. Analysis of the electrical current throughout the etching process showed that the dissolution is diffusion-limited to varying degrees, depending on support structure parameters. Activation and passivation behavior was observed during current leveling and appeared to be more pronounced in non-perforated samples with less dense hatch spacing. The correlation between electrical current and completion of the etching process was unclear, as the support structures became mechanically removable well before the current leveled. The etching process was shown to improve the surface finish of unsupported surfaces, but supports were shown to negatively impact surface finish. Tighter hatch spacing correlated with larger variation in surface finish due to the ridges left behind by the support structures. In future studies, it is recommended that current be more closely correlated with process completion and that more roughness data be collected to identify a trend between hatch spacing and surface roughness.
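
For background on the diffusion-limited behavior noted above (textbook electrochemistry, not from the thesis): under planar diffusion control, the Cottrell equation predicts a characteristic decay of current with time, which helps explain why current alone can be an ambiguous completion indicator.

```latex
% Cottrell equation for diffusion-limited current at a planar electrode.
% n: electrons transferred; F: Faraday constant; A: electrode area;
% c_0: bulk concentration; D: diffusion coefficient.
\[
  i(t) = \frac{n F A c_0 \sqrt{D}}{\sqrt{\pi t}}
\]
```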

Date Created
2018-05

Using Machine Learning to Predict the NBA

Description

Machine learning is one of the fastest growing fields, and it has applications in almost any industry. Predicting sports games is an obvious use case for machine learning: data is relatively easy to collect, the available data is generally complete, and outcomes are easily measurable. Predicting the outcomes of sports events can also be profitable, since predictions can be taken to a sportsbook and wagered on; a successful prediction model could easily turn a profit. The goal of this project was to build a model using machine learning to predict the outcomes of NBA games.
In order to train the model, data was collected from the NBA statistics website. The model was trained on games from the 2010 NBA season through the 2017 NBA season. Three separate models were built: one predicting the winner, one predicting the total points, and one predicting the margin of victory for a team. These models learned on 80 percent of the data, were validated on the other 20 percent, and were trained for 40 epochs with a batch size of 15.
The model for predicting the winner achieved an accuracy of 65.61 percent, just slightly below the accuracy of other experts in the field of predicting the NBA. The model for predicting total points performed decently as well, beating Las Vegas' prediction 50.04 percent of the time. The model for predicting margin of victory also did well, beating Las Vegas 50.58 percent of the time.
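
A minimal sketch of the winner-prediction setup, assuming Keras (the thesis does not name its framework; the feature count and layer sizes are illustrative, while the 80/20 split, 40 epochs, and batch size of 15 match the text):

```python
# Binary win/loss classifier over per-game statistics (illustrative).
from tensorflow import keras

def build_winner_model(num_features):
    model = keras.Sequential([
        keras.layers.Input(shape=(num_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # P(home team wins)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# X: per-game statistics array; y: 1 if the home team won
# model = build_winner_model(X.shape[1])
# model.fit(X, y, epochs=40, batch_size=15, validation_split=0.2)
```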

Date Created
2019-05

Twitch Streamer-Game Recommender System

Description

Abstract
Matrix factorization techniques have been proven to be more effective in recommender systems than standard user-based or item-based methods. Using this knowledge, Funk SVD and SVD++ are compared by the accuracy of their predictions on Twitch streamer data.

Introduction
As watching video games becomes more popular, viewers are turning to Twitch.tv, an online platform where guests watch streamers play video games and interact with them. A streamer is a person who broadcasts themselves playing a video game or performing some other activity for an audience (the guests of the website). The site allows a guest to first select a game/category and then displays the currently active streamers for the guest to select and watch. Twitch records the games that a streamer plays along with the amount of time the streamer spends streaming each game; this is how the score for a streamer's game is generated. These three fields form the streamer-game-score (user-item-rating) tuples that we use to train our models.
Our problem is similar in purpose to the Netflix Prize; however, instead of suggesting a movie to a user, the goal is to suggest a game to a streamer. We built a model to predict the score that a streamer will have for a game. The score field in our data is fundamentally different from a movie rating in Netflix because a user influences a game's score by actively streaming it, not by assigning a score based on opinion. The dataset used is the Twitch.tv dataset provided by Isaac Jones [1], and the only data used in training the models is the streamer-game-score (user-item-rating) tuples. We investigate whether these data points, with such limited information, can give an accurate prediction of a streamer's score for a game. SVD and SVD++ are the bases of the models being trained and tested. Scikit's Surprise library in Python 3 is used for the implementation of the models.

Date Created
2019-05

Detecting Propaganda Bots on Twitter Using Machine Learning

Description

Propaganda bots are malicious bots on Twitter that spread divisive opinions and support political accounts. This project is based on detecting propaganda bots on Twitter using machine learning. Once I began to observe patterns among the followers of propaganda accounts on Twitter, I determined that I could train algorithms to detect these bots. The paper focuses on my development process: training classifiers and using them to create a user-facing server that performs prediction functions automatically. The learning goals of this project centered on machine learning architecture. I needed to learn some aspect of large-scale data handling and how to maintain these datasets for training use. I also needed to develop a server that would execute these functionalities on command. I wanted to design a full-stack system covering every aspect of a user-facing server that can execute predictions using the classifiers I designed.
Throughout this project, I set a number of learning goals by which to consider it a success. I needed to learn how to use the supporting libraries that would help me design this system. I also learned how to use the Twitter API, as well as how to create the infrastructure behind it that would allow me to collect large amounts of data for machine learning. I needed to become familiar with common machine learning libraries in Python in order to create the necessary algorithms and pipelines for making predictions based on Twitter data.
This paper details the steps and decisions needed to determine how to collect this data and apply it to machine learning algorithms. I determined how to create labeled data using pre-existing Botometer ratings, along with the levels of confidence I needed to label data for training. I used the scikit-learn library to create algorithms that best detect these bots, and applied a number of pre-processing routines, including natural language processing and data analysis techniques, to refine the classifiers' precision. I eventually moved to remotely hosted versions of the system on Amazon Web Services instances to collect larger amounts of data and train more advanced classifiers. This led to my final implementation of a user-facing server, hosted on AWS and interfacing over Gmail's IMAP server.
Finally, the current and future development of this system is laid out, including more advanced classifiers, better data analysis, conversion to third-party Twitter data collection systems, and user features. I detail what I have learned from this exercise and what I hope to continue working on.
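
A minimal sketch of the classifier-training step with scikit-learn, which the text names (the feature columns, file name, and Botometer confidence cutoff are all hypothetical):

```python
# Train a bot/not-bot classifier from account features labeled via
# Botometer ratings (illustrative feature set and label threshold).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("labeled_accounts.csv")      # hypothetical file
X = df[["tweets_per_day", "followers", "following", "account_age_days"]]
y = df["botometer_score"] >= 0.8              # hypothetical label cutoff

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```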

Date Created
2019-05