Matching Items (8)
Description
Machine learning tutorials often employ an application- and runtime-specific solution for a given problem, in which users are expected to have a broad understanding of data analysis and software programming. This thesis focuses on designing and implementing a new, hands-on approach to teaching machine learning by streamlining the process of generating Inertial Measurement Unit (IMU) data from multirotor flight sessions, training a linear classifier, and applying that classifier to solve Multi-rotor Activity Recognition (MAR) problems in an online lab setting. MAR labs leverage cloud computing and data storage technologies to host a versatile environment capable of logging, orchestrating, and visualizing the solution to an MAR problem through a user interface. MAR labs extend Arizona State University's Visual IoT/Robotics Programming Language Environment (VIPLE) as a control platform for the multirotors used in data collection. VIPLE is a platform developed for teaching computational thinking, visual programming, Internet of Things (IoT), and robotics application development. As part of this education platform, this work also develops a 3D simulator capable of simulating the programmable behaviors of a robot within a maze environment and builds a physical quadrotor for use in MAR lab experiments.
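As a rough illustration of the pipeline this abstract describes (log IMU data, extract windowed features, train a linear classifier for activity recognition), the sketch below uses scikit-learn. It is not the thesis code: the file name, column names, window size, and the choice of logistic regression as the linear model are all assumptions.

```python
# Hypothetical MAR-style pipeline: window an IMU flight log and train a linear classifier.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression  # stands in for "a linear classifier"
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_windows(df, window=100):
    """Split an IMU log into fixed-size windows and compute simple summary statistics."""
    cols = ["accel_x", "accel_y", "accel_z", "gyro_x", "gyro_y", "gyro_z"]  # assumed column names
    features, labels = [], []
    for start in range(0, len(df) - window, window):
        chunk = df.iloc[start:start + window]
        features.append(np.concatenate([chunk[cols].mean().values, chunk[cols].std().values]))
        labels.append(chunk["activity"].mode().iloc[0])  # majority activity label in the window
    return np.array(features), np.array(labels)

df = pd.read_csv("imu_log.csv")  # hypothetical log from a VIPLE-controlled multirotor flight
X, y = extract_windows(df)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```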
Contributors: De La Rosa, Matthew Lee (Author) / Chen, Yinong (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Breast cancer is one of the most common types of cancer worldwide. Early detection and diagnosis are crucial for improving the chances of successful treatment and survival. In this thesis, many different machine learning algorithms were evaluated and compared to predict breast cancer malignancy from diagnostic features extracted from digitized images of breast tissue samples, called fine-needle aspirates. Breast cancer diagnosis typically involves a combination of mammography, ultrasound, and biopsy. However, machine learning algorithms can assist in the detection and diagnosis of breast cancer by analyzing large amounts of data and identifying patterns that may not be discernible to the human eye. By using these algorithms, healthcare professionals can potentially detect breast cancer at an earlier stage, leading to more effective treatment and better patient outcomes. The results showed that the gradient boosting classifier performed the best, achieving an accuracy of 96% on the test set. This indicates that this algorithm can be a useful tool for healthcare professionals in the early detection and diagnosis of breast cancer, potentially leading to improved patient outcomes.
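A minimal sketch of the kind of evaluation described here, using scikit-learn's built-in Wisconsin breast cancer dataset (diagnostic features computed from digitized fine-needle aspirate images) and a gradient boosting classifier, the model family the thesis found to perform best. The split, tuning, and the 96% figure are the author's own; this is only an illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Diagnostic features extracted from digitized fine-needle aspirate images.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = GradientBoostingClassifier()  # default hyperparameters; the thesis compares and tunes many models
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```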

Contributors: Mallya, Aatmik (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description
Recent advances in quantum computing have broadened the techniques available for addressing existing computing problems. One area of interest is the emerging field of machine learning. The intersection of these fields, quantum machine learning, has the potential for high-impact work such as that in the health industry. Use cases seen in previous research include the detection of illnesses in medical imaging through image classification. In this work, we explore the utilization of a hybrid quantum-classical approach for the classification of brain Magnetic Resonance Imaging (MRI) images for brain tumor detection, utilizing public Kaggle datasets. More specifically, we aim to assess the performance and utility of a hybrid model composed of a classical pretrained portion and a quantum variational circuit. We compare these results to purely classical approaches, one utilizing transfer learning and one without, for the stated datasets. While more research should be done to establish generalized quantum advantage, our work shows potential quantum advantages in validation accuracy and sensitivity for the specified task, particularly when training with limited data availability in a minimally skewed dataset under specific conditions. Utilizing IBM's Qiskit Runtime Estimator with built-in error mitigation, our experiments on a physical quantum system confirmed some of the results generated through simulations.
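A minimal sketch of the quantum half of the hybrid model described here: a data-encoding feature map followed by a trainable variational circuit, built with Qiskit's circuit library. In the thesis the variational circuit sits on top of a classical pretrained portion; here the four inputs are simply assumed to be that portion's outputs, and the qubit count and circuit choices are illustrative.

```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes

n_qubits = 4
feature_map = ZZFeatureMap(feature_dimension=n_qubits, reps=1)  # encodes classical features into qubit states
ansatz = RealAmplitudes(num_qubits=n_qubits, reps=2)            # trainable variational layers

qc = QuantumCircuit(n_qubits)
qc.compose(feature_map, inplace=True)
qc.compose(ansatz, inplace=True)

print("trainable parameters:", ansatz.num_parameters)
print(qc.decompose().draw(output="text"))
```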
Contributors: Diaz, Maryannette (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description
In this work, we explore the potential for realistic and accurate generation of hourly traffic volume with machine learning (ML), using the ground-truth data of Manhattan road segments collected by the New York State Department of Transportation (NYSDOT). Specifically, we address the following question: can we develop an ML algorithm that generalizes the existing NYSDOT data to all road segments in Manhattan? We do so by introducing a supervised learning task of multi-output regression, in which ML algorithms use road segment attributes to predict hourly traffic volume. We consider four ML algorithms (K-Nearest Neighbors, Decision Tree, Random Forest, and Neural Network) and tune hyperparameters by evaluating the performance of each algorithm with 10-fold cross-validation. Ultimately, we conclude that neural networks are the best-performing models and require the least amount of testing time. Lastly, we provide insight into the quantification of "trustworthiness" in a model, followed by brief discussions on interpreting model performance, suggesting potential project improvements, and identifying the biggest takeaways. Overall, we hope our work can serve as an effective baseline for realistic traffic volume generation and open new directions in the processes of supervised dataset generation and ML algorithm design.
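A minimal sketch of the model-comparison setup described here: the same four scikit-learn regressor families evaluated with 10-fold cross-validation on a multi-output regression task. The feature and target matrices are random placeholders standing in for the NYSDOT road-segment attributes and 24 hourly volumes.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 12))   # placeholder: 12 road-segment attributes per segment
Y = rng.random((500, 24))   # placeholder: 24 hourly traffic volumes per segment

models = {
    "K-Nearest Neighbors": KNeighborsRegressor(),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "Neural Network": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
}

for name, model in models.items():
    # 10-fold cross-validation on the multi-output regression task
    scores = cross_val_score(model, X, Y, cv=10, scoring="neg_mean_squared_error")
    print(f"{name}: mean MSE = {-scores.mean():.4f}")
```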
Contributors: Otstot, Kyle (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description
The field of quantum computing is an exciting area of research that allows quantum mechanical phenomena such as superposition, interference, and entanglement to be utilized in solving complex computing problems. One real-world application of quantum computing involves applying it to machine learning problems. In this thesis, I explore the effects of choosing different circuit ansatze and optimizers on the performance of a variational quantum classifier tasked with binary classification.
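As a small illustration of what an ansatz choice changes, the sketch below compares two common variational ansatze from Qiskit's circuit library by depth and number of trainable parameters. The thesis pairs such ansatz choices with different classical optimizers inside a variational quantum classifier; this sketch only inspects the circuits themselves, and the qubit and repetition counts are assumptions.

```python
from qiskit.circuit.library import RealAmplitudes, EfficientSU2

n_qubits = 4
for name, ansatz in [
    ("RealAmplitudes", RealAmplitudes(num_qubits=n_qubits, reps=2)),
    ("EfficientSU2", EfficientSU2(num_qubits=n_qubits, reps=2)),
]:
    decomposed = ansatz.decompose()
    # More parameters and greater depth generally mean a more expressive but harder-to-train classifier.
    print(f"{name}: parameters={ansatz.num_parameters}, depth={decomposed.depth()}")
```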

Contributors: Hsu, Brightan (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-12
Description
Although Spotify's extensive library of songs is often broken up into "Top 100" lists and broad lyrical genres, these categories are based primarily on popularity, artist, and general mood alone. If users wanted to create a playlist based on specific or situational qualifiers from their own downloaded library, they would have to hand-pick songs that fit the mold and create a new playlist. This is a time-consuming process that may not produce the best result due to human error. The objective of this project, therefore, was to develop an application that streamlines this process, optimizes efficiency, and fills this user need.

Song Sift is an application built with Angular that allows users to filter and sort their song library to create specific playlists using the Spotify Web API. Utilizing the audio feature data that Spotify attaches to every song in its library, users can filter their downloaded Spotify songs based on four main attributes: (1) energy (how energetic a song sounds), (2) danceability (how danceable a song is), (3) valence (how happy a song sounds), and (4) loudness (the average volume of a song). Once users have created a playlist that fits their desired genre, they can easily export it to their Spotify account with the click of a button.
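A minimal, language-agnostic sketch of the filtering logic described above (the application itself is written in Angular against the Spotify Web API). The track dictionaries and threshold values are assumptions; the field names match the audio features Spotify reports for each track.

```python
def filter_tracks(tracks, min_energy=0.0, min_danceability=0.0,
                  min_valence=0.0, min_loudness=-60.0):
    """Keep only tracks whose audio features meet the chosen thresholds."""
    return [
        t for t in tracks
        if t["energy"] >= min_energy
        and t["danceability"] >= min_danceability
        and t["valence"] >= min_valence
        and t["loudness"] >= min_loudness
    ]

# Example: build an upbeat, danceable playlist from a saved library (hypothetical data).
library = [
    {"name": "Track A", "energy": 0.9, "danceability": 0.8, "valence": 0.7, "loudness": -5.0},
    {"name": "Track B", "energy": 0.3, "danceability": 0.4, "valence": 0.2, "loudness": -12.0},
]
playlist = filter_tracks(library, min_energy=0.7, min_danceability=0.6, min_valence=0.5)
print([t["name"] for t in playlist])
```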
Contributors: DiMuro, Louis (Author) / Balasooriya, Janaka (Thesis director) / Chen, Yinong (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
For my Honors Thesis, I created an artificial intelligence project that predicts fantasy NFL football points for players and team defenses. I built a TensorFlow Keras regression model, a Flask API that hosts the model, and a Django "Try It" page that lets users run it; these services are hosted on ASU's AWS infrastructure. The Flask API actively gathers data from Pro-Football-Reference and then calculates fantasy points. If the current year is 2022, for example, the model trains on all available data from 2000 to 2020 for each player, tests on the 2021 data, and predicts for the 2022 season. The Django website asks the user to input the current year; clicking the submit button runs the AI model through the process described above. The user then enters a player's name for the point prediction, and the website displays the last five rows, with the first four showing previous fantasy points and the fifth showing the prediction.
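A minimal sketch of the kind of TensorFlow Keras regression model described here. The feature count, layer sizes, and random placeholder data are assumptions; in the thesis the features come from Pro-Football-Reference statistics and the seasons are split as described above.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X_train = rng.random((1000, 8)).astype("float32")  # placeholder: per-player stats, seasons 2000-2020
y_train = rng.random((1000, 1)).astype("float32")  # placeholder: fantasy points scored
X_test = rng.random((100, 8)).astype("float32")    # placeholder: 2021 season held out for testing
y_test = rng.random((100, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                      # predicted fantasy points
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)
print("test MAE:", model.evaluate(X_test, y_test, verbose=0)[1])
```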

Contributors: Panikulam, Caleb (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-12
Description
The goal of this project is to measure the effects of using dynamic circuit technology within quantum neural networks. Quantum neural networks are a type of neural network that utilizes quantum encoding and manipulation techniques to learn to solve a problem using quantum or classical data. In their current form, these networks are linear in nature and do not allow for alternative execution paths, but with dynamic circuits they can be made nonlinear and can execute different paths. We measured the effects of these dynamic circuits on the training time, accuracy, and effective dimension of the quantum neural network across multiple trials to see the impacts of the nonlinear behavior.
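A minimal sketch of the dynamic-circuit idea described here: a mid-circuit measurement whose outcome conditionally changes the gates applied afterwards, giving the circuit a branching (nonlinear) execution path. The `if_test` control flow assumes a recent Qiskit version; this is an illustration, not the project's circuits.

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 1)
qc.h(0)                               # put qubit 0 in superposition
qc.measure(0, 0)                      # mid-circuit measurement
with qc.if_test((qc.clbits[0], 1)):   # branch executed only if the measurement returned 1
    qc.x(1)                           # conditionally flip qubit 1
qc.h(1)                               # further processing after the branch
print(qc.draw(output="text"))
```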
Contributors: Lynch, Brian (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-12