Matching Items (5)
Description

The NBA yields billions of dollars each year and serves as a pastime and hobby for millions of Americans. However, many people do not have the time to watch several 2-hour games every week, especially when only a fraction of each game is actually exciting footage. The goal of Sports Summary is to take the “fluff” out of these games and create a distilled summary that includes only the most exciting and relevant events. The Sports Summary model records visual and auditory data, camera angles, and game-clock readings and correlates them with the game's play-by-play data. On average, a game lasting more than 2 hours is shortened to a summary of less than 20 minutes. This summary is then uploaded to the Sports Summary website, where users can filter by type of event, giving them more autonomy and a more comprehensive viewing experience than highlight reels. Additionally, the website allows users to submit footage they would like to watch for processing and later viewing. Sports Summary creates an enjoyable and accessible way to watch games.
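The abstract describes correlating on-screen game-clock readings with play-by-play data and letting viewers filter by event type, but it includes no code. A rough sketch of that correlation step, under heavy assumptions (the event names, record fields, and the clock_to_video_time mapping are hypothetical illustrations, not part of the thesis), might look like this:

```python
from dataclasses import dataclass

# Hypothetical play-by-play record: an event type plus the game-clock reading
# (period, seconds remaining) at which it occurred.
@dataclass
class PlayByPlayEvent:
    event_type: str      # e.g. "dunk", "three_pointer", "block"
    period: int
    clock_seconds: float

# Event types treated as "exciting" enough to keep (illustrative only).
EXCITING_EVENTS = {"dunk", "three_pointer", "block", "steal", "buzzer_beater"}

def build_summary(events, clock_to_video_time, clip_length=8.0, event_filter=None):
    """Return (start, end) video-time clips for the events worth keeping.

    clock_to_video_time: callable mapping (period, clock_seconds) -> seconds
    into the broadcast footage, i.e. the correlation the model derives from
    game-clock readings detected on screen.
    event_filter: optional subset of event types chosen by the viewer.
    """
    keep = event_filter if event_filter is not None else EXCITING_EVENTS
    clips = []
    for ev in events:
        if ev.event_type in keep:
            t = clock_to_video_time(ev.period, ev.clock_seconds)
            clips.append((max(0.0, t - clip_length / 2), t + clip_length / 2))
    return sorted(clips)
```

Passing event_filter={"dunk"} would mirror the website's per-event filtering; the hard part the thesis actually addresses is producing the clock-to-footage mapping from the recorded visual and auditory data.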

Contributors: Zimmerman, Kenna Marleen (Author) / Espanol, Malena (Thesis director) / Dahlberg, Samantha (Committee member) / Pasha, Mirjeta (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

When creating computer vision applications, it is important to have a clear image of what is represented so that further processing works from the best possible representation of the underlying data. A common factor that degrades image quality is blur, caused either by an intrinsic property of the camera lens or by motion introduced while the camera’s shutter is capturing an image. Possible solutions for reducing the impact of blur include cameras with faster shutter speeds or higher resolutions; however, both require more expensive equipment and neither helps when the images have already been captured. This thesis discusses an iterative solution for deblurring an image using an alternating minimization technique that combines regularization with point spread function (PSF) reconstruction. The alternating minimizer is then used to deblur a sample image of a pumpkin field to demonstrate its capabilities.
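The abstract does not spell out the formulation, but a generic alternating-minimization deblurring scheme of the kind it describes, combining Tikhonov regularization with PSF reconstruction, can be sketched as follows. The circular blur model, the closed-form Fourier-domain updates, the PSF projection, and all parameter values are illustrative assumptions rather than the thesis's actual method:

```python
import numpy as np

def deblur_alternating(b, psf_shape, n_iters=30, lam=1e-2, mu=1e-2):
    """Naive blind deconvolution of blurred image b by alternating minimization.

    Assumes a circular (periodic) blur model b = k * x so that each subproblem
    has a closed-form Tikhonov-regularized solution in the Fourier domain.
    lam and mu are regularization weights for the image and the PSF.
    """
    B = np.fft.fft2(b)
    # Initialize the PSF as a small uniform kernel embedded in a full-size array.
    k = np.zeros_like(b)
    h, w = psf_shape
    k[:h, :w] = 1.0 / (h * w)
    x = b.copy()  # initialize the image estimate with the blurred image

    for _ in range(n_iters):
        K = np.fft.fft2(k)
        # Image update: argmin_x ||k * x - b||^2 + lam * ||x||^2
        X = np.conj(K) * B / (np.abs(K) ** 2 + lam)
        x = np.real(np.fft.ifft2(X))

        X = np.fft.fft2(x)
        # PSF update: argmin_k ||k * x - b||^2 + mu * ||k||^2
        K = np.conj(X) * B / (np.abs(X) ** 2 + mu)
        k = np.real(np.fft.ifft2(K))
        # Project the PSF back to a physically plausible kernel:
        # non-negative entries that sum to one.
        k = np.clip(k, 0, None)
        k /= k.sum() + 1e-12
    return x, k
```

Inspecting x over the iterations shows the typical behavior of such schemes: early iterations sharpen edges, while too small a regularization weight eventually amplifies noise, which is why the regularization term is essential.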

Contributors: Smith, Zachary (Author) / Espanol, Malena (Thesis director) / Ozcan, Burcin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2023-05
Description
The goal of this thesis is to explore and present a range of approaches to “algorithmic choreography.” In the context of this thesis, algorithmic choreography is defined as choreography with computational influence or elements. Traditionally, “algorithmic choreography” has been used as an umbrella term for all works that involve computation, despite the wide variety of ways in which those works use it.
This thesis intends to show that the diversity of algorithmic choreography can be organized into more specific categories. Because algorithmic choreography is fundamentally intertwined with the concept of computation, it is natural to propose that works be placed along a spectrum defined by the extent of computational involvement in each piece.
Specifically, this thesis outlines three primary categories that algorithmic works can fall into: pieces with minimal computational influence, pieces that are entirely computationally generated, and pieces that lie in between. Three original works were created to reflect these categories and to provide examples of the various ways in which computation can influence and enhance choreography.
The first piece, entitled Rαinwater, displays a minimal amount of computational influence: the use of space in the piece was limited to random, computationally generated paths, from which the dancers extracted a narrative element. The result is a piece that explores the dancers’ emotional interaction within the context of a rainy environment. The second piece, entitled Mymec, uses an intermediary amount of computation. A dancer interacts with a projected display of an Ant Colony Optimization (ACO) algorithm, taking direct inspiration from the movement of the virtual ants and embodying the visualization of the algorithm. The final piece, entitled nSkeleton, exhibits maximal computational influence: Kinect position data was manipulated using iterative methods from computational mathematics to create computer-generated movement performed by a dancer on stage.
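The pieces themselves are the thesis's contribution, and no code accompanies the abstract. Purely as an illustration of the kind of algorithm Mymec projects, a textbook-style Ant Colony Optimization iteration on a complete graph could be sketched as follows; the graph representation, parameters, and pheromone-update rule are generic assumptions, not the visualization actually used:

```python
import numpy as np

def aco_step(pheromone, distances, n_ants=20, alpha=1.0, beta=2.0, rho=0.1):
    """One iteration of a basic Ant Colony Optimization step on a complete graph.

    pheromone, distances: square matrices over the nodes. Each ant builds a tour
    by probabilistically preferring edges with more pheromone and shorter
    distance; pheromone then evaporates, and each ant deposits new pheromone
    inversely proportional to its tour length.
    """
    n = len(distances)
    rng = np.random.default_rng()
    tours = []
    for _ in range(n_ants):
        node = rng.integers(n)
        tour, unvisited = [node], set(range(n)) - {node}
        while unvisited:
            cand = np.array(sorted(unvisited))
            weights = pheromone[node, cand] ** alpha * (1.0 / distances[node, cand]) ** beta
            node = rng.choice(cand, p=weights / weights.sum())
            tour.append(node)
            unvisited.remove(node)
        tours.append(tour)

    pheromone *= (1 - rho)  # evaporation
    for tour in tours:
        length = sum(distances[tour[i], tour[(i + 1) % n]] for i in range(n))
        for i in range(n):
            pheromone[tour[i], tour[(i + 1) % n]] += 1.0 / length  # deposit
    return pheromone, tours
```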
Each piece was originally intended to be presented to the public as part of an evening-length show. However, due to the COVID-19 pandemic, all public campus events were canceled and the government recommended that gatherings of more than 10 people be avoided entirely. The pieces will instead be presented in the form of a video published online, encompassing information about the creation of each piece as well as clips of the choreography.
Contributors: Jawaid, Zeeshan (Co-author) / Jackson, Naomi (Thesis director) / Curry, Nicole (Committee member) / Espanol, Malena (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Dean, W.P. Carey School of Business (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Deforestation in the Amazon rainforest has the potential to have devastating effects on ecosystems on both a local and global scale, making it one of the most environmentally threatening phenomena occurring today. In order to minimize deforestation in the Amazon and its consequences, it is helpful to analyze its occurrence using machine learning architectures such as the U-Net. The U-Net is a type of Fully Convolutional Network that has shown significant capability in performing semantic segmentation. It is built upon a symmetric series of downsampling and upsampling layers that propagate feature information into higher spatial resolutions, allowing for the precise identification of features on the pixel scale. Such an architecture is well-suited for identifying features in satellite imagery. In this thesis, we construct and train a U-Net to identify deforested areas in satellite imagery of the Amazon through semantic segmentation.
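The thesis does not reproduce its network code here; a minimal U-Net of the kind described (a symmetric series of downsampling and upsampling layers joined by skip connections, ending in one logit per pixel) might be sketched in PyTorch as follows. The depth, channel widths, and class names are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Minimal U-Net for binary (deforested / not deforested) segmentation."""

    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = double_conv(in_ch, base)
        self.enc2 = double_conv(base, base * 2)
        self.enc3 = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # one logit per pixel

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # logits; apply sigmoid + threshold for a mask
```

The concatenations in the decoder are the skip connections that carry feature information back into higher spatial resolutions, which is what enables the pixel-scale identification the abstract refers to.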
Contributors: Douglas, Liam (Author) / Giel, Joshua (Co-author) / Espanol, Malena (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description
Deforestation in the Amazon rainforest has the potential to have devastating effects on ecosystems on both a local and global scale, making it one of the most environmentally threatening phenomena occurring today. In order to minimize deforestation in the Amazon and its consequences, it is helpful to analyze its occurrence using machine learning architectures such as the U-Net. The U-Net is a type of Fully Convolutional Network that has shown significant capability in performing semantic segmentation. It is built upon a symmetric series of downsampling and upsampling layers that propagate feature information into higher spatial resolutions, allowing for the precise identification of features on the pixel scale. Such an architecture is well-suited for identifying features in satellite imagery. In this thesis, we construct and train a U-Net to identify deforested areas in satellite imagery of the Amazon through semantic segmentation.
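Complementing the architecture sketch under the companion entry above, a single training step for pixel-wise binary segmentation on satellite tiles could look roughly like this; the loss choice, tensor shapes, and function names are assumptions, not the authors' setup:

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, images, masks):
    """One optimization step.

    images: (N, 3, H, W) satellite tiles; masks: (N, 1, H, W) with values in {0, 1}
    marking deforested pixels.
    """
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    # Per-pixel binary cross-entropy on the raw logits.
    loss = nn.functional.binary_cross_entropy_with_logits(logits, masks.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```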
Contributors: Giel, Joshua (Author) / Douglas, Liam (Co-author) / Espanol, Malena (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Sustainability (Contributor)
Created: 2024-05