Matching Items (7)
Description
I, Christopher Negrich, am the sole author of this paper, but the tools described were designed in collaboration with Andrew Hoetker. ConstrictR (constrictor) and ConstrictPy are an R package and a Python tool designed together. ConstrictPy implements the functions and methods defined in ConstrictR and adds data handling, data parsing, input/output (I/O), and a user interface to increase usability. ConstrictR implements a variety of common data analysis methods used for statistical and subnetwork analysis. The majority of these methods are inspired by Lionel Guidi's 2016 paper, Plankton networks driving carbon export in the oligotrophic ocean. Additional methods were added to expand functionality, usability, and applicability to different areas of data science. Both ConstrictR and ConstrictPy are currently publicly available and usable; however, both are ongoing projects. ConstrictR is available at github.com/cnegrich and ConstrictPy is available at github.com/ahoetker. Currently, ConstrictR has implemented functions for descriptive statistics, correlation, covariance, rank, sparsity, and weighted correlation network analysis, with clustering, centrality, profiling, error handling, and data parsing methods to be released soon. ConstrictPy has fully implemented and integrated the features of ConstrictR, as well as functions for I/O and conversion between pandas and R data frames, with a full-featured user interface to be released soon. Both ConstrictR and ConstrictPy are designed to work with minimal dependencies and maximum available information on the algorithms implemented. As a result, ConstrictR depends only on base R (v3.4.4) functions, with no libraries imported, and ConstrictPy depends only on pandas, rpy2, and ConstrictR. This was done to increase the longevity and independence of these tools. Additionally, all mathematical information is documented alongside the code, increasing the available information on how these tools function. Although neither tool is in its final version, this paper documents the code, mathematics, and instructions for use, as well as plans for future work, for the current versions of ConstrictR (v0.0.1) and ConstrictPy (v0.0.1).
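
As an illustration of the pandas-to-R bridge that ConstrictPy's design implies, the following is a minimal sketch of a data frame round trip using rpy2's converter API (assuming rpy2 3.x; the data frame contents and the R expression are placeholders, not ConstrictPy's actual interface):

    import pandas as pd
    import rpy2.robjects as ro
    from rpy2.robjects import pandas2ri
    from rpy2.robjects.conversion import localconverter

    # Placeholder data frame standing in for a parsed input dataset.
    df = pd.DataFrame({"sample": ["a", "b", "c"], "abundance": [1.2, 0.4, 3.1]})

    # Round-trip between pandas and R data frames using rpy2's converter.
    with localconverter(ro.default_converter + pandas2ri.converter):
        r_df = ro.conversion.py2rpy(df)        # pandas -> R data.frame
        mean_abund = ro.r("function(d) mean(d$abundance)")(r_df)[0]
        back = ro.conversion.rpy2py(r_df)      # R data.frame -> pandas

    print(mean_abund, list(back.columns))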
Contributors: Negrich, Christopher Alec (Author) / Can, Huansheng (Thesis director) / Hansford, Dianne (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

When creating computer vision applications, it is important to start from a clear image so that further processing operates on the best possible representation of the underlying data. A common factor that degrades image quality is blur, caused either by an intrinsic property of the camera lens or by motion introduced while the camera's shutter is capturing an image. Possible ways to reduce the impact of blur include cameras with faster shutter speeds or higher resolutions; however, both require more expensive equipment and are infeasible for images that have already been captured. This thesis discusses an iterative solution for deblurring an image using an alternating minimization technique built on regularization and point spread function (PSF) reconstruction. The alternating minimizer is then used to deblur a sample image of a pumpkin field to demonstrate its capabilities.
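
The thesis's exact regularizers and boundary handling are not reproduced here, but a minimal sketch of an alternating minimization loop, using Tikhonov-regularized closed-form updates in the Fourier domain under periodic boundary assumptions (function and parameter names are illustrative), might look like:

    import numpy as np

    def tikhonov_update(G, K, lam):
        # Per-frequency closed-form minimizer of ||K*X - G||^2 + lam*||X||^2.
        return np.conj(K) * G / (np.abs(K) ** 2 + lam)

    def alternating_deblur(g, psf0, lam_x=1e-2, lam_k=1e-1, iters=25):
        G = np.fft.fft2(g)                     # blurred image spectrum
        K = np.fft.fft2(psf0, s=g.shape)       # initial PSF estimate spectrum
        for _ in range(iters):
            X = tikhonov_update(G, K, lam_x)   # image step, PSF held fixed
            K = tikhonov_update(G, X, lam_k)   # PSF step, image held fixed
        return np.real(np.fft.ifft2(X)), np.real(np.fft.ifft2(K))

Each half-step solves a linear least-squares problem with the other unknown frozen, which is what makes the alternating scheme tractable.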

Contributors: Smith, Zachary (Author) / Espanol, Malena (Thesis director) / Ozcan, Burcin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2023-05
Description
In this work, we explore the potential for realistic and accurate generation of hourly traffic volume with machine learning (ML), using ground-truth data for Manhattan road segments collected by the New York State Department of Transportation (NYSDOT). Specifically, we address the following question: can we develop an ML algorithm that generalizes the existing NYSDOT data to all road segments in Manhattan? We approach this by introducing a supervised learning task of multi-output regression, in which ML algorithms use road segment attributes to predict hourly traffic volume. We consider four ML algorithms (K-Nearest Neighbors, Decision Tree, Random Forest, and Neural Network) and tune hyperparameters by evaluating the performance of each algorithm with 10-fold cross-validation. Ultimately, we conclude that neural networks are the best-performing models and require the least testing time. Lastly, we provide insight into quantifying the "trustworthiness" of a model, followed by brief discussions on interpreting model performance, potential project improvements, and the biggest takeaways. Overall, we hope our work can serve as an effective baseline for realistic traffic volume generation and open new directions in supervised dataset generation and ML algorithm design.
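
A minimal sketch of the multi-output regression setup with 10-fold cross-validation, using scikit-learn on synthetic stand-in data (the NYSDOT features and hourly targets are not reproduced here):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))    # stand-in road-segment attributes
    Y = rng.normal(size=(500, 24))   # stand-in hourly volumes, one column per hour

    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    fold_mse = []
    for train_idx, test_idx in kf.split(X):
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[train_idx], Y[train_idx])   # native multi-output support
        fold_mse.append(mean_squared_error(Y[test_idx], model.predict(X[test_idx])))

    print(f"10-fold mean MSE: {np.mean(fold_mse):.3f}")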
Contributors: Otstot, Kyle (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description
Since the 20th century, Arizona has undergone shifts in agricultural practices, driven by urban expansion and crop irrigation regulations. These changes present environmental challenges, altering atmospheric processes and influencing climate dynamics. Given the potential threats of climate change and drought to water availability for agriculture, further modifications in the agricultural landscape are expected. To understand these land use changes and their impact on carbon dynamics, our study quantified aboveground carbon storage in both cultivated and abandoned agricultural fields. To accomplish this, we employed Python and various geospatial libraries in Jupyter Notebook files for dataset assembly and visual and quantitative analysis. We focused on nine counties known for high cultivation levels, primarily located in the lower latitudes of Arizona. Our analysis investigated carbon dynamics not only across abandoned and actively cultivated croplands but also across neighboring uncultivated land, whose extent we estimated, and we compared these trends with those observed in developed land areas. The findings revealed a hierarchy in aboveground carbon storage, with currently cultivated lands having the lowest levels, followed by abandoned croplands and uncultivated wilderness; developed lands ranked highest by median value. However, wilderness areas exhibited significant county-by-county variation in carbon storage compared to cultivated and abandoned lands. Despite county-wide variations, abandoned croplands generally contained more carbon than currently cultivated areas, and adjacent wilderness lands contained more than both. This trend suggests that cultivating croplands in the region reduces aboveground carbon stores, while abandonment allows some replenishment, though only to a limited extent. Enhancing carbon stores in Arizona can therefore be achieved through active restoration of abandoned cropland. By promoting native plant regeneration and boosting aboveground carbon levels, such measures can improve carbon sequestration, and we advocate for them to facilitate the regrowth of native plants and enhance overall carbon storage in the region.
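
As a hedged illustration of the per-class carbon comparison described above, the following sketch computes zonal statistics from two hypothetical, co-registered rasters (the file names and class codes are illustrative, not the study's actual layers):

    import numpy as np
    import rasterio

    # Hypothetical inputs: an aboveground-carbon raster and a land-class raster.
    with rasterio.open("carbon_mg_per_ha.tif") as src:
        carbon = src.read(1, masked=True)
    with rasterio.open("land_class.tif") as src:
        land = src.read(1)

    classes = {1: "cultivated", 2: "abandoned", 3: "uncultivated", 4: "developed"}
    for code, name in classes.items():
        values = carbon[land == code].compressed()   # drop nodata cells
        if values.size:
            print(f"{name}: median={np.median(values):.1f} Mg C/ha, n={values.size}")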
Contributors: Goodwin, Emily (Author) / Eikenberry, Steffen (Thesis director) / Kuang, Yang (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2024-05
Description
With advances in computational power, algorithmic trading has become one of the primary strategies for trading on the stock market. To understand why and how these strategies are effective, this project examined the complete process of creating tools and applications to analyze and predict stock prices in order to perform low-frequency trading. The project is composed of three main components. The first is integrating several public resources to acquire, process, and store financial trading data for use by the other components. The Alpha Vantage API, a freely available service, provides an accurate and comprehensive dataset of features for each requested stock ticker. The second component is researching, prototyping, and implementing various trading algorithms in code. We began with the mean reversion algorithm as a proof of concept for developing meaningful trading strategies and identifying patterns within our datasets. To augment our market prediction power ("alpha"), we implemented a Long Short-Term Memory (LSTM) recurrent neural network; neural networks are an effective but often complex tool used frequently in data science when traditional methods fall short. The last component is to optimize, analyze, compare, and contrast all of the algorithms and identify key features in order to assess the overall effectiveness of each. We were able to identify conclusively which aspects of each algorithm provided better alpha, and we created an entire pipeline to automate this process for live trading. A further reason for automation is to provide an educational framework so that anyone interested in quantitative finance can leverage this project to gain further insight.
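
As a hedged illustration of the mean reversion idea (a simplified z-score rule, not the thesis's tuned strategy or its Alpha Vantage data pipeline):

    import numpy as np
    import pandas as pd

    def mean_reversion_signals(prices, window=20, z_entry=1.0):
        # Z-score of price against its rolling mean: short when rich, long when cheap.
        mean = prices.rolling(window).mean()
        std = prices.rolling(window).std()
        z = (prices - mean) / std
        signal = pd.Series(0, index=prices.index)
        signal[z > z_entry] = -1    # price far above its mean: sell
        signal[z < -z_entry] = 1    # price far below its mean: buy
        return signal

    # Toy usage on a synthetic random-walk price series.
    prices = pd.Series(100 + np.cumsum(np.random.default_rng(1).normal(size=250)))
    print(mean_reversion_signals(prices).value_counts())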
Contributors: Yurowkin, Alexander (Co-author) / Kumar, Rohit (Co-author) / Welfert, Bruno (Thesis director) / Li, Baoxin (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The Tutoring Center Management System is a web-based application for ASU's University Academic Success Programs (UASP) department, particularly the Math Tutoring Center. It aims to provide a user-friendly interface for tracking queue requests from students visiting the tutoring centers and to convert that information into actionable data, with the potential to live-track and assess the performance of each tutoring center and each tutor. Numerous UASP processes, such as tutor scheduling, tutor search, shift coverage requests, and analytics, are streamlined into an efficient, integrated workflow. The intended users of the application include ASU students and UASP staff, including tutors and supervisors.
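
As a hedged sketch of the kind of queue-request record and wait-time metric such a system might track (the names and fields are hypothetical, not the application's actual schema):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class QueueRequest:
        student_id: str
        subject: str
        requested_at: datetime
        started_at: Optional[datetime] = None   # set when a tutor takes the request

        def wait_minutes(self) -> Optional[float]:
            # Wait time is undefined until a tutor picks up the request.
            if self.started_at is None:
                return None
            return (self.started_at - self.requested_at).total_seconds() / 60

    # Toy usage: one request picked up after eight minutes.
    req = QueueRequest("s123", "MAT 267", datetime(2019, 11, 1, 10, 0),
                       datetime(2019, 11, 1, 10, 8))
    print(f"wait: {req.wait_minutes():.0f} min")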
Contributors: Jain, Prakshal (Co-author) / Gulati, Sachit (Co-author) / Nakamura, Mutsumi (Thesis director) / Selgrad, Justin (Committee member) / Department of Information Systems (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description
Professor Alarcon's lab is producing proton beam detectors, and this project is focused on informing the decision as to which detector layout is more effective at producing an accurate backprojection for an equal number of data channels. The comparison is between "square pad" detectors and "wire pad" detectors. The square pad detector consists of a grid of identically sized square pads, each collecting its own data. The wire pad detector consists of large rectangular pads that span the entire detector in one direction, with two additional layers of identical pads, each rotated by 60° from the previous layer. To test each design, Python was used to simulate Gaussian beams of varying amplitudes, positions, and sizes and to integrate them according to each of the two pad layouts. The results were then backprojected and fit to a Gaussian function, and the error between the backprojected parameters and the original beam parameters was measured.
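
A minimal sketch of the square-pad integration step (simulating one Gaussian beam on a fine grid and summing the cells covered by each pad; the grid sizes and beam parameters are illustrative):

    import numpy as np

    def gaussian_beam(X, Y, amp, x0, y0, sigma):
        # Symmetric 2D Gaussian beam profile.
        return amp * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

    fine, pads = 512, 16                    # fine sampling grid; pads per side
    xs = np.linspace(-1, 1, fine)
    X, Y = np.meshgrid(xs, xs)
    beam = gaussian_beam(X, Y, amp=1.0, x0=0.1, y0=-0.2, sigma=0.15)

    # Integrate the beam over each square pad by summing the fine cells it covers.
    block = fine // pads
    pad_signal = beam.reshape(pads, block, pads, block).sum(axis=(1, 3))
    print(pad_signal.shape)                 # (16, 16) -> 256 data channels

The backprojection-and-fit step would then recover (amp, x0, y0, sigma) from pad_signal and compare them to the input parameters.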
Contributors: Foley, Brendan (Author) / Alarcon, Ricardo (Thesis director) / Galyaev, Eugene (Committee member) / Department of Physics (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05