Matching Items (109)
Description
Research has indicated that, with the rise of the digital age, social ability, emotional maturity, and the capability to empathize have decreased significantly in the newer generations (Generation X and Millennials) compared with previous generations. The primary purpose of this thesis was to discover a way to counteract the negative effects of constant screen time with a space that encourages face-to-face interactions while also contributing monetarily to the surrounding community.
This thesis explores the viability of creating a board game café in downtown Phoenix that would donate a percentage of its profits to local charities and other initiatives for the improvement of the Phoenix area. Using a combination of different entrepreneurship and business model templates, fourteen questions were answered to complete the business model, including questions about the resources and partnerships necessary for the venture’s success as well as what the cost structure and revenue streams would look like. These fourteen questions make up the fourteen different parts of the Lean Launch Business Model Canvas, the template primarily used to display the final business model. The business model canvas undergoes “cycles” – that is, different drafts of the canvas are created and added to or modified as needed. This particular business model canvas underwent as many as fifteen cycles before being finalized and receiving approval.
The completion of the business model canvas invites speculation about its actual viability, raising questions about financing, projected sales, and how long the venture could be sustained. “Pivots,” modifications of the business model that either increase revenue or decrease costs, are also explored at this point. While this particular business idea does have a sustainable competitive advantage in the Phoenix area as a first mover, it would be unwise to pursue the idea further, as the costs are far too high and the required activities far too numerous to be outweighed by the revenues and benefits. In addition, it would be difficult to obtain funding at a reasonable interest rate for a venture with such a high risk of failure. In this case, a pivot was considered that eliminated nearly all costs and risk while still relying on a very similar revenue stream. This pivot suggested a far simpler and more economical way of accomplishing the original goal of bettering the Phoenix metro community and giving customers the chance to rediscover in-person communication.
Contributors: Nahon, Rachel Ann (Author) / Westlake, Garret (Thesis director) / Manning, Michael (Committee member) / W. P. Carey School of Business (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Background: As the growth of social media platforms continues, making use of the constantly increasing amount of freely available, user-generated data they receive becomes of great importance. One apparent use of this content is public health surveillance, such as increasing understanding of substance abuse. In this study, Facebook was used to monitor nicotine addiction through the public support groups users can join to aid their quitting process. Objective: The main objective of this project was to gain a better understanding of the mechanisms of nicotine addiction online and provide content analysis of Facebook posts obtained from "quit smoking" support groups. Methods: Using the Facebook Application Programming Interface (API) for Python, a sample of 9,970 posts was collected in October 2015. Information regarding the user's name and the number of likes and comments each post received was also included. The crawled posts were then manually classified by one annotator into one of three categories: positive, negative, or neutral, where positive posts describe current quits, negative posts discuss relapsing, and neutral posts are those that would not be used to train the classifiers, such as posts where users have yet to attempt a quit, ads, and random questions. For this project, the performance of two machine learning algorithms on a corpus of manually labeled Facebook posts was compared. The classification goal was to test the plausibility of creating a natural language processing machine learning classifier that could distinguish between relapse (labeled negative) and quitting success (labeled positive) posts within a set of smoking-related posts. Results: Of the corpus of 9,970 posts that were manually labeled, 6,254 (62.7%) were labeled positive, 1,249 (12.5%) were labeled negative, and 2,467 (24.8%) were labeled neutral. Since the posts labeled neutral are those which are irrelevant to the classification task, 7,503 posts were used to train the classifiers: 83.4% positive and 16.6% negative. The SVM classifier was 84.1% accurate and 84.1% precise, had a recall of 1, and an F-score of 0.914. The MNB classifier was 82.8% accurate and 82.8% precise, had a recall of 1, and an F-score of 0.906. Conclusions: The Facebook surveillance results give a small peek into the behavior of those looking to quit smoking. Ultimately, what makes Facebook a great tool for public health surveillance is that it has an extremely large and diverse user base with information that is easily obtainable. This, together with the fact that so many people are willing to use Facebook support groups to aid their quitting process, demonstrates that the platform can be used to learn a great deal about quitting and smoking behavior.
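The abstract does not reproduce the implementation, but the SVM versus Multinomial Naive Bayes comparison it describes maps onto a standard text-classification pipeline. Below is a minimal Python sketch, assuming scikit-learn, bag-of-words features, and the accuracy/precision/recall/F1 metrics named above; the toy posts and labels are placeholders, not data from the study.

# Minimal sketch of comparing an SVM and a Multinomial Naive Bayes classifier on
# labeled "positive" / "negative" posts. The example posts are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

posts = ["day 30 smoke free and feeling great", "relapsed last night after a stressful day",
         "one week without a cigarette", "bought a pack again, so disappointed in myself"]
labels = ["positive", "negative", "positive", "negative"]

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.5, stratify=labels, random_state=0)

vectorizer = CountVectorizer()                 # bag-of-words features
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

for name, model in [("SVM", LinearSVC()), ("MNB", MultinomialNB())]:
    model.fit(X_train_vec, y_train)
    preds = model.predict(X_test_vec)
    print(name,
          "accuracy=%.3f" % accuracy_score(y_test, preds),
          "precision=%.3f" % precision_score(y_test, preds, pos_label="positive", zero_division=0),
          "recall=%.3f" % recall_score(y_test, preds, pos_label="positive", zero_division=0),
          "F1=%.3f" % f1_score(y_test, preds, pos_label="positive", zero_division=0))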
Contributors: Molina, Daniel Antonio (Author) / Li, Baoxin (Thesis director) / Tian, Qiongjie (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This thesis project focused on determining the primary causes of flight delays within the United States then building a machine learning model using the collected flight data to determine a more efficient flight route from Phoenix Sky Harbor International Airport in Phoenix, Arizona to Harry Reid International Airport in Las Vegas, Nevada. In collaboration with Honeywell Aerospace as part of the Ira A. Fulton Schools of Engineering Capstone Course, CSE 485 and 486, this project consisted of using open source data from FlightAware and the United States Bureau of Transportation Statistics to identify 5 primary causes of flight delays and determine if any of them could be solved using machine learning. The machine learning model was a 3-layer Feedforward Neural Network that focused on reducing the impact of Late Arriving Aircraft for the Phoenix to Las Vegas route. Evaluation metrics used to determine the efficiency and success of the model include Mean Squared Error (MSE), Mean Average Error (MAE), and R-Squared Score. The benefits of this project are wide-ranging, for both consumers and corporations. Consumers will be able to arrive at their destination earlier than expected, which would provide them a better experience with the airline. On the other side, the airline can take credit for the customer's satisfaction, in addition to reducing fuel usage, thus making their flights more environmentally friendly. This project represents a significant contribution to the field of aviation as it proves that flights can be made more efficient through the usage of open source data.
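As a rough illustration of the modeling setup described above (a small feedforward network scored with MSE, MAE, and R-squared), here is a hedged Python sketch using scikit-learn; the synthetic inbound-aircraft features are assumptions, not the FlightAware or Bureau of Transportation Statistics data.

# Sketch of a small feedforward regressor for delay prediction, evaluated with the
# MSE / MAE / R-squared metrics named above. Features and targets are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
inbound_delay = rng.uniform(0, 120, size=500)      # minutes the inbound aircraft is late
turnaround = rng.uniform(25, 60, size=500)         # scheduled turnaround time in minutes
X = np.column_stack([inbound_delay, turnaround])
y = np.maximum(inbound_delay - turnaround + rng.normal(0, 5, size=500), 0)  # departure delay

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32, 32),  # three hidden layers as a stand-in
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("MSE:", mean_squared_error(y_test, pred))
print("MAE:", mean_absolute_error(y_test, pred))
print("R2:", r2_score(y_test, pred))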
Created: 2024-05
Description
Manually determining the health of a plant requires time and expertise from a human. Automating this process using machine learning could provide significant benefits to the agricultural field. The detection and classification of health defects in crops by analyzing visual data with computer vision tools can accomplish this. In this paper, the task is completed using two different existing machine learning architectures, ResNet50 and CapsNet, which take images of crops as input and return a classification that denotes the health defect the crop suffers from. Specifically, the models analyze the images to determine if a nutritional deficiency or disease is present and, if so, identify it. The purpose of this project is to apply the proven deep learning architecture, ResNet50, to the data, which serves as a baseline for comparison of performance with the less researched architecture, CapsNet. This comparison highlights differences in the performance of the two architectures when applied to a complex dataset with a multitude of classes. This report details the data pipeline process, including dataset collection and validation, as well as preprocessing and application to the models. Additionally, methods of improving the accuracy of the models are recorded and analyzed to provide further insights into the comparison of the different architectures. The ResNet50 model achieved an accuracy of 100% after being trained on the nutritional deficiency dataset and an accuracy of 88.5% on the disease dataset. The CapsNet model achieved an accuracy of 90% on the nutritional deficiency dataset but only 70% on the disease dataset. In comparing the performance of the two models, the ResNet50 model outperformed CapsNet; however, the CapsNet model shows promise for future implementations. With larger, more complete datasets, as well as improvements to the design of capsule networks, capsule networks will likely provide exceptional performance for complex image classification tasks.
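The ResNet50 baseline described above corresponds to a standard transfer-learning setup in PyTorch/torchvision. A minimal sketch follows, assuming an ImageNet-pretrained backbone and an illustrative class count; the dummy batch stands in for the crop-image datasets.

# Load a pretrained ResNet50, replace the final fully connected layer with one sized
# to the number of health-defect classes, and run one training step on a dummy batch.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

num_classes = 6                                      # assumed number of deficiency/disease labels
model = resnet50(weights=ResNet50_Weights.DEFAULT)   # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)                 # stand-in for a batch of crop images
labels = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("training loss:", loss.item())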
Contributors: Christner, Drew (Author) / Carter, Lynn (Thesis director) / Ghayekhloo, Samira (Committee member) / Barrett, The Honors College (Contributor) / Computing and Informatics Program (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description
The performance of modern machine learning algorithms depends upon the selection of a set of hyperparameters; common examples are the learning rate and the number of layers in a dense neural network. Auto-ML is a branch of optimization that has produced important contributions in this area. Within Auto-ML, multi-fidelity approaches, which eliminate poorly performing configurations after evaluating them at low budgets, are among the most effective. However, the performance of these algorithms strongly depends on how effectively they allocate the computational budget to various hyperparameter configurations. We first present Parameter Optimization with Conscious Allocation 1.0 (POCA 1.0), a Hyperband-based algorithm for hyperparameter optimization that adaptively allocates the given budget to the hyperparameter configurations it generates following a Bayesian sampling scheme. We then present its successor, Parameter Optimization with Conscious Allocation 2.0 (POCA 2.0), which follows POCA 1.0’s successful philosophy while utilizing a time-series model to reduce wasted computational cost and providing a more flexible framework. We compare POCA 1.0 and 2.0 to their nearest competitor, BOHB, at optimizing the hyperparameters of a multi-layer perceptron and find that both POCA algorithms exceed BOHB in low-budget hyperparameter optimization while performing similarly in high-budget scenarios.
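POCA itself is not reproduced here, but the budget-allocation idea behind Hyperband-style, multi-fidelity methods (evaluate many configurations cheaply, keep the best, and re-evaluate them at larger budgets) can be sketched as a simple successive-halving loop; the objective function and learning-rate search space below are purely illustrative.

# Generic successive-halving step underlying Hyperband-style multi-fidelity methods.
# This is NOT the POCA algorithm, only the budget-allocation idea it builds on.
import random

def evaluate(config, budget):
    # Stand-in objective: a made-up "loss" that improves with budget and depends on the config.
    return (config["lr"] - 0.01) ** 2 + 1.0 / budget + random.gauss(0, 0.001)

def successive_halving(configs, min_budget=1, max_budget=27, eta=3):
    budget = min_budget
    while budget <= max_budget and len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(scored) // eta)]   # keep the top 1/eta configurations
        budget *= eta                                     # give survivors a larger budget
    return configs[0]

random.seed(0)
candidates = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(27)]
best = successive_halving(candidates)
print("best learning rate found:", best["lr"])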
Contributors: Inman, Joshua (Author) / Sankar, Lalitha (Thesis director) / Pedrielli, Giulia (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description
Buck-It is a budgeting application designed to meet the unique needs of college students. As financial literacy is crucial for developing good long-term financial habits, Buck-It aims to promote budgeting among college students through an appealing user interface, robust customization, and effective categorization.
Contributors: Doyle, Michael (Author) / Davitt, Ryan (Co-author) / Walle, Andrew (Co-author) / Vemuri, Rajeev (Co-author) / Baptista, Asher (Co-author) / Byrne, Jared (Thesis director) / Lee, Peggy (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2024-05
Description
In this thesis, I propose a framework for automatically generating custom orthotic insoles for Intelligent Mobility and ANDBOUNDS. Ultimately, the entire framework works together, with human quality checks, to ensure users receive the highest quality insoles. Three machine learning models were assembled: the Quality Model, the Meta-point Model, and the Multimodal Model. The Quality Model ensures that user-uploaded foot scans are high quality. The Meta-point Model ensures that the meta-point coordinates assigned to the foot scans are below the required tolerance for aligning an insole mesh onto a foot scan. The Multimodal Model uses customer foot pain descriptors together with the foot scan to customize an insole to the customer's ailments. The results demonstrate that this is a viable option for insole creation and has the potential to aid or replace human insole designers.
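The thesis does not detail the Multimodal Model's architecture here, but the description (pain descriptors plus a foot scan driving insole customization) suggests a late-fusion design. The following PyTorch sketch is an assumption-heavy illustration: the tiny CNN, bag-of-tokens text encoder, and "adjustment parameter" output head are hypothetical stand-ins, not the thesis's model.

# Hypothetical late-fusion model: encode the scan image and the pain-descriptor tokens
# separately, concatenate the embeddings, and predict insole adjustment parameters.
import torch
import torch.nn as nn

class MultimodalInsoleModel(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=32, num_adjustments=4):
        super().__init__()
        self.image_encoder = nn.Sequential(          # tiny CNN over a 64x64 scan image
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten()
        )
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim)  # bag of pain-descriptor tokens
        self.head = nn.Linear(8 * 4 * 4 + text_dim, num_adjustments)

    def forward(self, scan, token_ids):
        fused = torch.cat([self.image_encoder(scan), self.text_encoder(token_ids)], dim=1)
        return self.head(fused)                      # e.g., predicted insole adjustment parameters

model = MultimodalInsoleModel()
scan = torch.randn(2, 1, 64, 64)                     # placeholder foot-scan images
tokens = torch.randint(0, 1000, (2, 6))              # placeholder tokenized pain descriptors
print(model(scan, tokens).shape)                     # torch.Size([2, 4])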
Contributors: Nucuta, Raymond (Author) / Osburn, Steven (Thesis director) / Joseph, Jeshua (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description
This study presents a comparative analysis of machine learning models' ability to predict match outcomes in the English Premier League (EPL), focusing on optimizing prediction accuracy. The research leverages a variety of models, including logistic regression, decision trees, random forests, gradient boosting machines, support vector machines, k-nearest neighbors, and extreme gradient boosting, to predict the outcomes of soccer matches in the EPL. Utilizing a comprehensive dataset from Kaggle, the study uses the Sport Result Prediction CRISP-DM framework for data preparation and model evaluation, comparing the accuracy, precision, recall, F1-score, ROC-AUC score, and confusion matrix of each model. The findings reveal that ensemble methods, notably random forest and extreme gradient boosting, outperform the other models in accuracy, highlighting their potential in sports analytics. This research contributes to the field of sports analytics by demonstrating the effectiveness of machine learning in sports outcome prediction, while also identifying the challenges and complexities inherent in predicting EPL match outcomes. It not only highlights the significance of ensemble learning techniques in handling sports data complexities but also opens avenues for future exploration into advanced machine learning and deep learning approaches for enhancing predictive accuracy in sports analytics.
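The comparison loop described above can be sketched with scikit-learn estimators; the synthetic binary dataset below is a stand-in for the Kaggle EPL features, and the XGBoost model and CRISP-DM preparation steps are not reproduced.

# Fit several classifiers on the same train/test split and report accuracy, F1, and ROC-AUC.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)  # synthetic stand-in
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "k-NN": KNeighborsClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: acc={accuracy_score(y_test, preds):.3f} "
          f"f1={f1_score(y_test, preds):.3f} auc={roc_auc_score(y_test, proba):.3f}")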
Contributors: Tashildar, Ninad (Author) / Osburn, Steven (Thesis director) / Simari, Gerardo (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05
Description
Machine learning continues to grow in applications, and its influence is felt across the world. This paper builds on the foundations of machine learning used for sports analysis, and its specific implementations in tennis, by attempting to predict the winner of ATP men’s singles tennis matches. Tennis provides a unique challenge due to the individual nature of singles and the varying career lengths, experiences, and backgrounds of players from around the globe. Related work has explored prediction with features such as rank differentials, physical characteristics, and past performance. This work expands on those studies by including raw player statistics and relevant environment features. State-of-the-art models such as LightGBM and XGBoost, as well as a standard logistic regression, are trained and evaluated on a dataset containing matches from 1991 to 2023. All models surpassed the baseline, and each has its own strengths and weaknesses. Future work may involve expanding the feature space to include more robust features such as player profiles and Elo ratings, as well as utilizing deep neural networks to improve understanding of past player performance and better comprehend the context of a given match.
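As a hedged illustration of the kind of feature construction such a study relies on (a rank differential plus a surface indicator feeding a logistic regression baseline), here is a small Python sketch; the three example matches are invented, and the LightGBM/XGBoost models from the paper are not reproduced.

# Build simple match features (surface one-hot, rank differential) and fit a baseline.
import pandas as pd
from sklearn.linear_model import LogisticRegression

matches = pd.DataFrame({
    "p1_rank": [4, 55, 12],
    "p2_rank": [10, 3, 80],
    "surface": ["hard", "clay", "grass"],
    "p1_won":  [1, 0, 1],            # label: did player 1 win?
})

features = pd.get_dummies(matches[["surface"]])                   # one-hot encode the surface
features["rank_diff"] = matches["p2_rank"] - matches["p1_rank"]   # positive favors player 1

baseline = LogisticRegression().fit(features, matches["p1_won"])
print(baseline.predict_proba(features)[:, 1])                     # predicted win probability for player 1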
Contributors: Bandemer, Nathaniel (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2024-05