Matching Items (6)

Predicting Outcome of a Pitch Given the Type of Pitch for any Baseball Scenario

Description

This thesis serves as a baseline for the potential of prediction through machine learning (ML) in baseball. Hopefully, it will also serve as motivation for future work to expand on and reach the potential of sabermetrics, advanced Statcast data, and machine learning. The problem this thesis attempts to solve is predicting the outcome of a pitch: given proper pitch data and situational data, is it possible to predict the result of a pitch? The result refers to the specific outcome of a pitch, beyond ball or strike; for example, if the hitter puts the ball in play for a double, this thesis shows how I attempted to predict that type of outcome. Before diving into my methods, I take a deep look into sabermetrics, advanced statistics, and the history of the two in Major League Baseball. After this, I describe my machine learning experiment. First, I found a dataset suitable for training a pitch prediction model; I then analyzed the features and used feature engineering to select a set of 16 features; and finally, I trained and tested a pair of ML models on the data: a decision tree classifier and a random forest classifier. Each classifier performed at around 60% accuracy. I also experimented with a neural network approach using a long short-term memory (LSTM) model to improve my score, but this approach requires more feature engineering to beat the simpler classifiers. In this thesis, I show examples of five hitters that I test the models on and the accuracy for each hitter. This work shows promise that advanced classification models (likely requiring more feature engineering) can provide even better prediction outcomes, perhaps with 70% accuracy or higher!
There is much potential for future work to improve on this thesis, mainly through the proper construction of a neural network, more in-depth feature analysis/selection/extraction, and data visualization.
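The two-classifier comparison described in this abstract can be sketched as follows. This is an illustrative example only: the feature values, outcome labels, and dataset are placeholders standing in for the thesis's 16 engineered pitch/situational features, not the actual data or results.

```python
# Hypothetical sketch of training a decision tree and a random forest
# to classify pitch outcomes. All data here is synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
# Stand-ins for the 16 engineered pitch/situational features
# (e.g., velocity, break, count) -- illustrative only.
X = rng.normal(size=(n, 16))
# Stand-in outcome labels (e.g., ball, strike, single, double, out).
y = rng.integers(0, 5, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```

With real pitch data in place of the random arrays, the held-out `score` calls correspond to the roughly 60% accuracy figures reported above.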

Date Created
2020-05

Modeling the Complexity of Sankey Diagrams

Description

In this Barrett Honors Thesis, I developed a model to quantify the complexity of Sankey diagrams, a visualization technique that shows flow between groups. To do this, I created a carefully controlled dataset of synthetic Sankey diagrams of varying sizes as study stimuli. Then, a pair of online crowdsourced user studies were conducted and analyzed. User performance for Sankey diagrams of varying size and features (number of groups, number of timesteps, and number of flow crossings) was algorithmically modeled as a formula to quantify the complexity of these diagrams. Model accuracy was measured against the performance of users in the second crowdsourced study. The results of my experiment demonstrate that the algorithmic complexity formula I created closely models the visual complexity of the Sankey diagrams in the dataset.
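The general shape of such a complexity formula can be sketched as a scoring function over the three diagram features the study varied. The weights below are illustrative placeholders, not the coefficients fitted in the thesis.

```python
# Hypothetical sketch: a weighted complexity score over the three
# Sankey-diagram features varied in the study. The weights are
# placeholders, not the fitted values from the thesis.
def sankey_complexity(groups: int, timesteps: int, crossings: int,
                      w_g: float = 1.0, w_t: float = 1.0,
                      w_c: float = 1.0) -> float:
    """Return a scalar complexity score for a Sankey diagram."""
    return w_g * groups + w_t * timesteps + w_c * crossings

# All else equal, more flow crossings yields a higher complexity score.
assert sankey_complexity(5, 3, 20) > sankey_complexity(5, 3, 10)
```

In the study, the weights would be fitted so that the score tracks measured user performance on the stimuli.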

Date Created
2021-05

Effect of Image Captioning with Description on the Working Memory

Description

Working memory plays an important role in human activities across academic, professional, and social settings. Working memory is defined as the memory extensively involved in goal-directed behaviors in which information must be retained and manipulated to ensure successful task execution. The aim of this research is to understand the effect of image captioning with image description on an individual's working memory. A study was conducted with eight neutral images comprising situations relatable to daily life, such that each image could have a positive or negative description associated with the outcome of the situation in the image. The study consisted of three rounds, where the first and second rounds involved two parts each and the third round consisted of one part. Each image was captioned a total of five times across the entire study. The findings highlighted that only 25% of participants were able to recall the captions they had written for an image after a span of 9-15 days; when comparing the recall rate of the captions, 50% of participants were able to recall the image caption from the previous round in the present round; and of the positive and negative descriptions associated with the images, 65% of participants recalled the former rather than the latter. The conclusions drawn from the study are that participants tend to retain information for longer than the expected duration of working memory, which may be because they were able to relate the images to their everyday life situations, and that, given a situation with both positive and negative information, the human brain is aligned towards positive information over negative information.

Date Created
2021

Examining User Engagement via Facial Expressions in Augmented Reality with Dynamic Time Warping

Description

Augmented Reality (AR) has progressively demonstrated its helpfulness for novices to learn highly complex and abstract concepts by visualizing details in an immersive environment. However, some studies show that similar results could also be obtained in environments that do not involve AR. To explore the potential of AR in advancing transformative engagement in education, I propose modeling facial expressions as implicit feedback when one is immersed in the environment. I developed a Unity application to record and log the users' application operations and facial images. A neural network-based model, Visual Geometry Group 19 (VGG19, Simonyan and Zisserman (2014)), is adopted to recognize emotions from the captured facial images. A within-subject user study was designed and conducted to assess the sentiment and user engagement differences between AR and non-AR tasks. To analyze the collected data, Dynamic Time Warping (DTW) was applied to identify the emotional similarities between AR and non-AR environments. The results indicate that users showed more varied emotion patterns and more application operations throughout the AR tasks than the non-AR tasks. The emotion patterns observed in the analysis show that non-AR tasks provide less implicit feedback than AR tasks. The DTW analysis reveals that users' emotion change patterns appear to be more distant from neutral emotions in AR than in non-AR tasks. Succinctly put, the users in the AR tasks demonstrated more active use of the application and yielded a wider range of emotions while operating it.
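The DTW comparison described in this abstract can be illustrated with the classic dynamic-programming formulation. The emotion-score series below are hypothetical placeholders (e.g., per-frame emotion intensities from the recognition model), not the study's data.

```python
# Illustrative sketch of Dynamic Time Warping (DTW), used above to
# compare emotion time series against a neutral baseline. The series
# values are hypothetical stand-ins for per-frame emotion scores.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ar_series = [0.1, 0.5, 0.9, 0.4]      # hypothetical AR emotion scores
non_ar_series = [0.1, 0.2, 0.3, 0.2]  # hypothetical non-AR scores
neutral = [0.0, 0.0, 0.0, 0.0]        # neutral-emotion baseline

# A larger distance from the neutral baseline indicates emotion
# patterns further from neutral, as reported for the AR tasks.
print(dtw_distance(ar_series, neutral), dtw_distance(non_ar_series, neutral))
```

With these placeholder series, the AR sequence is further from neutral than the non-AR one, mirroring the pattern the analysis reports.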

Date Created
2021

Exploring the Impact of Augmented Reality on Collaborative Decision-Making in Small Teams

Description

While significant qualitative, user study-focused research has been done on augmented reality, relatively few studies have been conducted on multiple, co-located synchronously collaborating users in augmented reality. Recognizing the need for more collaborative user studies in augmented reality and the value such studies present, a user study is conducted of collaborative decision-making in augmented reality to investigate the following research question: “Does presenting data visualizations in augmented reality influence the collaborative decision-making behaviors of a team?” This user study evaluates how viewing data visualizations with augmented reality headsets impacts collaboration in small teams compared to viewing together on a single 2D desktop monitor as a baseline. Teams of two participants performed closed and open-ended evaluation tasks to collaboratively analyze data visualized in both augmented reality and on a desktop monitor. Multiple means of collecting and analyzing data were employed to develop a well-rounded context for results and conclusions, including software logging of participant interactions, qualitative analysis of video recordings of participant sessions, and pre- and post-study participant questionnaires. The results indicate that augmented reality doesn’t significantly change the quantity of team member communication but does impact the means and strategies participants use to collaborate.

Date Created
2020

Augnosis: Self-Diagnosis in Augmented Reality

Description

Oftentimes, patients struggle to accurately describe their symptoms to medical professionals, which produces erroneous diagnoses, delaying and preventing treatment. My app, Augnosis, will streamline constructive communication between patient and doctor and allow for more accurate diagnoses. The goal of this project was to create an app capable of gathering data on visual symptoms of facial acne and categorizing it to differentiate between diagnoses using image recognition and identification. “Augnosis” is a combination of the words “Augmented Reality” and “Self-Diagnosis”, the former being the medium in which the app is immersed and the latter detailing its functionality.

Date Created
2022-05