Matching Items (9)
Description
A model has been developed to modify Euler-Bernoulli beam theory for wooden beams, using visible properties of wood knot-defects. Treating knots in a beam as a system of two ellipses that change the local bending stiffness has been shown to improve the fit of a theoretical beam displacement function to edge-line deflection data extracted from digital imagery of experimentally loaded beams. In addition, an Ellipse Logistic Model (ELM) has been proposed, using L1-regularized logistic regression, to predict the impact of a knot on the displacement of a beam. By classifying a knot as severely positive or negative versus mildly positive or negative, ELM can flag knots that lead to large changes in beam deflection without over-emphasizing knots that may not be a problem. Using ELM with a regression-fit Young's modulus on three-point bending of Douglas fir, it is possible to estimate the effects a knot will have on the shape of the resulting displacement curve.
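
As a rough sketch of the modeling idea (not the thesis's actual code): the deflection of a simply supported beam under three-point bending can be computed numerically from a bending stiffness EI(x) that dips locally where a knot sits. The dimensions, load, and stiffness profile below are hypothetical placeholders, and a simple Gaussian dip stands in for the two-ellipse knot model.

```python
import numpy as np

def deflection_three_point(length, load, EI, n=2001):
    """Deflection of a simply supported beam under a central point load,
    with a bending stiffness EI(x) that may vary along the span."""
    x = np.linspace(0.0, length, n)
    # Bending moment for three-point bending with a central load.
    M = np.where(x <= length / 2, load * x / 2, load * (length - x) / 2)
    kappa = M / EI(x)  # curvature: v''(x) = M(x) / EI(x)
    # Integrate twice (trapezoid rule), then enforce v(0) = v(L) = 0.
    dx = np.diff(x)
    slope = np.concatenate([[0.0], np.cumsum((kappa[1:] + kappa[:-1]) / 2 * dx)])
    v = np.concatenate([[0.0], np.cumsum((slope[1:] + slope[:-1]) / 2 * dx)])
    return x, v - v[-1] * x / length

def EI_with_knot(x, EI0=2.0e5, center=0.6, width=0.05, severity=0.4):
    """Baseline stiffness with a smooth local dip standing in for a knot."""
    return EI0 * (1.0 - severity * np.exp(-((x - center) / width) ** 2))

x, v = deflection_three_point(length=1.0, load=1000.0, EI=EI_with_knot)
```
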
Created 2015-05
Description
Speech nasality disorders are characterized by abnormal resonance in the nasal cavity. Hypernasal speech is of particular interest: it is characterized by an inability to prevent improper nasalization of vowels and by poor articulation of plosive and fricative consonants, and it can lead to negative communicative and social consequences. It can be associated with a range of conditions, including cleft lip or palate, velopharyngeal dysfunction (a physical or neurological defect in the closure of the soft palate that regulates resonance between the oral and nasal cavities), dysarthria, or hearing impairment, and can also be an early indicator of developing neurological disorders such as ALS. Hypernasality is typically scored perceptually by a Speech Language Pathologist (SLP). Misdiagnosis can lead to inadequate treatment plans and poor treatment outcomes for a patient, and for some applications, particularly screening for early neurological disorders, the use of an SLP is not practical. Hence, this work demonstrates a data-driven approach to objective assessment of hypernasality through the use of Goodness of Pronunciation (GOP) features. These features capture the overall precision of articulation of a speaker on a phoneme-by-phoneme basis, allowing the demonstrated models to achieve a Pearson correlation coefficient of 0.88 on low-nasality speakers, the population of most interest for this sort of technique. These results are comparable to milestone methods in this domain.
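
For context, one common formulation of the GOP score compares an acoustic model's confidence in the intended phoneme against the best competing phoneme over a forced-aligned segment; the exact variant used in the thesis may differ. A minimal sketch:

```python
import numpy as np

def goodness_of_pronunciation(frame_log_probs, target, start, end):
    """GOP score for one phoneme segment from a forced alignment.

    frame_log_probs: (num_frames, num_phonemes) array of per-frame phoneme
    log-posteriors from an acoustic model; target is the intended phoneme's
    index; start/end are the segment's frame boundaries.
    """
    segment = frame_log_probs[start:end]
    target_score = segment[:, target].mean()  # confidence in the intended phoneme
    best_score = segment.max(axis=1).mean()   # confidence in the best competitor
    return target_score - best_score          # near 0 = precise, very negative = imprecise
```
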
Contributors Saxon, Michael Stephen (Author) / Berisha, Visar (Thesis director) / McDaniel, Troy (Committee member) / Electrical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created 2018-05
Description
In this paper, I will show that news headlines of global events can predict changes in stock price, using machine learning and eight years of data from r/WorldNews, a popular forum on Reddit.com. My data is confined to the top 25 daily posts on the forum, and due to the implicit filtering mechanism of the online community, these 25 posts represent the most popular news headlines and influential global events of each day. Hence, these posts shine a light on how large-scale social and political events affect the stock market. Using Logistic Regression and Naive Bayes classifiers, I am able to predict a binary change in stock price with approximately 85% accuracy, using term-feature vectors gathered from the news headlines. The accuracy, precision, and recall results closely rival the best models in this field of research. In addition to the results, I will also describe the mathematical underpinnings of the two models, preceded by a general investigation of the intersection between the multiple academic disciplines related to this project. These range from social science to computer science and from statistics to philosophy. The goal of this additional discussion is to further illustrate the interdisciplinary nature of the research and, hopefully, to inspire a non-monolithic mindset in further investigations.
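
A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn and entirely hypothetical headline data (the paper's exact features and preprocessing may differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins: one document per day (the top-25 headlines joined)
# and a binary label for whether the index closed up that day.
docs = ["markets rally as trade deal is signed ...",
        "central bank warns of global slowdown ..."]
ups = [1, 0]

# Term-feature vectors feeding each classifier.
logreg = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
nbayes = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
logreg.fit(docs, ups)
print(logreg.predict(["oil prices surge after surprise supply cut ..."]))
```
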
Created 2016-12
Description
In materials science, the development of GeSn alloys is a major current research interest for the production of efficient Group-IV photonics. These alloys are particularly interesting because next-generation semiconductors for ultrafast (terahertz) optoelectronic communication devices could be developed by integrating these novel alloys with industry-standard silicon technology. Unfortunately, incorporating a maximal amount of Sn into a Ge lattice has been difficult to achieve experimentally. At ambient conditions, pure Ge and Sn adopt cubic (α) and tetragonal (β) structures, respectively; however, to date the relative stability and structure of α- and β-phase GeSn alloys versus percent composition of Sn have not been thoroughly studied. In this research project, computational tools were used to perform state-of-the-art predictive quantum simulations to study the structural, bonding, and energetic trends in GeSn alloys in detail over a range of experimentally accessible compositions. Since recent X-ray and vibrational studies have raised some controversy about the nanostructure of GeSn alloys, the investigation was conducted with ordered, random, and clustered alloy models.
By means of optimized geometry analysis, pure Ge and Sn were found to adopt the α and β structures, respectively, as observed experimentally. For all theoretical alloys, the corresponding α-phase structure was found to have the lowest energy for Sn compositions up to 90%. However, at 50% Sn, the corresponding β-alloy energies are predicted to be only ~70 meV higher. The formation energy of the α-phase alloys was found to be positive for all compositions, whereas only two β-phase formation energies were negative. Bond length distributions were analyzed, and their dependence on Sn incorporation was found, perhaps surprisingly, not to be directly correlated with cell volume. It is anticipated that the data collected in this project may help to elucidate the observed complex vibrational properties in these systems.
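
For reference, a standard definition of alloy formation energy relative to the pure elemental phases is sketched below; the study's exact reference states may differ, and all numbers here are hypothetical.

```python
def formation_energy_per_atom(e_alloy, n_ge, n_sn, e_ge, e_sn):
    """Formation energy (per atom) of a Ge/Sn supercell relative to the
    pure elemental reference phases.

    e_alloy: total energy of the relaxed alloy supercell
    e_ge, e_sn: per-atom energies of the pure alpha-Ge and beta-Sn references
    """
    return (e_alloy - n_ge * e_ge - n_sn * e_sn) / (n_ge + n_sn)

# Hypothetical energies in eV: a positive value means the alloy is only
# metastable with respect to decomposition into the pure phases.
print(formation_energy_per_atom(-462.70, 54, 10, -7.85, -3.98))
```
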
Contributors Liberman-Martin, Zoe Elise (Author) / Chizmeshya, Andrew (Thesis director) / Sayres, Scott (Committee member) / Wolf, George (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created 2019-05
Description
This thesis surveys and analyzes applications of machine learning techniques to the fields of animation and computer graphics. Data-driven techniques utilizing machine learning have in recent years been successfully applied to many subfields of animation and computer graphics. These include, but are not limited to, fluid dynamics, kinematics, and character modeling. I argue that such applications offer significant advantages which will be pivotal in advancing the fields of animation and computer graphics. Further, I argue these advantages are especially relevant in real-time implementations when working with finite computational resources.
Contributors Saba, Raphael Lucas (Author) / Foy, Joseph (Thesis director) / Olson, Loren (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created 2019-05
Description
Non-Destructive Testing (NDT) is integral to preserving the structural health of materials. Techniques that fall under the NDT category are able to evaluate the integrity and condition of a material without permanently altering any of its properties. Additionally, they can typically be used while the material is in active use, rather than requiring downtime for inspection.
Structural health monitoring (SHM) systems fall into two general categories: passive and active. Active SHM systems use an input of energy to monitor the health of a structure (such as the sound waves used in ultrasonics), while passive systems do not; as such, passive SHM tends to be more desirable. A passive system could be permanently fixed to a critical location, where it accepts signals until it records a damage event, then localizes and characterizes the damage. This is the goal of acoustic emissions testing.
When certain types of damage occur, such as matrix cracking or delamination in composites, the corresponding release of energy creates sound waves, or acoustic emissions, that propagate through the material. Audio sensors fixed to the surface can pick up data from both the time and frequency domains of the wave. With proper data analysis, a time of arrival (TOA) can be calculated for each sensor allowing for localization of the damage event. The frequency data can be used to characterize the damage.
In traditional acoustic emissions testing, the TOA, combined with the wave velocity and information about signal attenuation in the material, is used to localize events. However, for complex geometries or anisotropic materials (such as carbon fibre composites), velocity and attenuation can vary widely with the direction of interest. In these cases, localization can instead be based on the differences in time of arrival between each pair of sensors. This technique is called Delta T mapping, and it is the main focus of this study.
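
A simplified sketch of the Delta T idea, assuming hypothetical arrival-time data: pairwise arrival-time differences recorded from artificial sources at known grid points form a map, and a real event is located at the best-matching grid point. Full Delta T mapping interpolates between grid points rather than snapping to the nearest one.

```python
import numpy as np
from itertools import combinations

def delta_t_signature(arrival_times):
    """Pairwise time-of-arrival differences for one event: one value per
    sensor pair, the quantity Delta T mapping is built on."""
    return np.array([arrival_times[i] - arrival_times[j]
                     for i, j in combinations(range(len(arrival_times)), 2)])

def locate(event_times, grid_points, grid_times):
    """Match an event's signature against a map built from artificial
    sources (e.g., pencil-lead breaks) at known grid points. No uniform
    wave velocity is assumed anywhere."""
    target = delta_t_signature(event_times)
    errors = [np.sum((delta_t_signature(t) - target) ** 2) for t in grid_times]
    return grid_points[int(np.argmin(errors))]
```
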
Contributors Briggs, Nathaniel (Author) / Chattopadhyay, Aditi (Thesis director) / Papandreou-Suppappola, Antonia (Committee member) / Skinner, Travis (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created 2019-05
Description
This project aspires to develop an AI capable of playing on a variety of maps in a Risk-like board game. While AI has been successfully applied to many other board games, such as Chess and Go, most research is confined to a single board and is inflexible to topological changes. Further, almost all of these games are played on a rectangular grid. In contrast, this project develops an AI player, referred to as GG-net, to play the online strategy game Warzone, which is based on the classic board game Risk and is played on a wide variety of irregularly shaped maps. Prior research has struggled to create an effective AI for Risk-like games due to the immense branching factor. The most successful attempts tended to rely on manually restricting the set of actions the AI considered while also engineering useful features for the AI to consider. GG-net uses no human knowledge; rather, it combines a genetic algorithm with a graph neural network. Together, these methods allow GG-net to perform competitively across a multitude of maps. GG-net outperformed the built-in rule-based AI by 413 Elo (representing an 80.7% chance of winning) and an approach based on AlphaZero using graph neural networks by 304 Elo (representing a 74.2% chance of winning), and this advantage holds across both seen and unseen maps. GG-net is a strong opponent on small and medium maps; however, on large maps with hundreds of territories, its inefficiencies become more significant and it struggles against the rule-based approach. Overall, GG-net was able to learn the game and generalize across maps of similar size, although further work is required for it to succeed on large maps.
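
A minimal sketch of the genetic-algorithm side of such an approach, with illustrative hyperparameters and a placeholder fitness function; GG-net's actual training setup is more involved.

```python
import numpy as np

def evolve(population, fitness_fn, generations=50, elite_frac=0.2, sigma=0.02):
    """Minimal genetic-algorithm loop over flat parameter vectors (e.g., the
    weights of a graph-neural-network policy). fitness_fn plays games with a
    candidate and returns a score such as win rate."""
    n_elite = max(1, int(elite_frac * len(population)))
    for _ in range(generations):
        scores = [fitness_fn(p) for p in population]
        order = np.argsort(scores)[::-1]  # best candidates first
        elites = [population[i] for i in order[:n_elite]]
        # Refill the population with Gaussian mutations of random elites.
        children = [elites[np.random.randint(n_elite)]
                    + sigma * np.random.randn(len(elites[0]))
                    for _ in range(len(population) - n_elite)]
        population = elites + children
    return elites[0]  # best candidate from the final selection
```
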
Contributors Bauer, Andrew (Author) / Yang, Yezhou (Thesis director) / Harrison, Blake (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created 2022-05
Description
For my Honors Thesis, I created an artificial intelligence project to predict fantasy NFL football points for players and team defenses. I built a TensorFlow Keras regression model, a Flask API that serves the model, and a Django try-it page that lets the user run it. These services are hosted on ASU's AWS service. The Flask API actively gathers data from Pro-Football-Reference and then calculates the fantasy points. If the current year is 2022, for example, the model trains on each player's data from 2000 through 2020, tests on 2021 data, and predicts for 2022. The Django website asks the user to input the current year; clicking the submit button runs the AI model through the process explained above. The user then enters a player's name, and the website displays the last five rows for that player, the first four being previous fantasy points and the fifth being the prediction.
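
A minimal sketch of a TensorFlow Keras regression model of this kind, with hypothetical feature dimensions and synthetic data standing in for the scraped Pro-Football-Reference statistics:

```python
import numpy as np
from tensorflow import keras

# Hypothetical feature matrix: one row per player-season (yards, touchdowns,
# receptions, ...), with that player's next-season fantasy points as target.
X_train = np.random.rand(500, 8).astype("float32")
y_train = (np.random.rand(500) * 300).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),  # single continuous output: projected points
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)
```
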
Contributors Panikulam, Caleb (Author) / De Luca, Gennaro (Thesis director) / Chen, Yinong (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created 2022-12
Description
In the age of information, collecting and processing large amounts of data is an integral part of running a business. From training artificial intelligence to driving decision making, the applications of data are far-reaching. However, many types of data are difficult to process; namely, unstructured data. Unstructured data is "information that either does not have a predefined data model or is not organized in a pre-defined manner" (Balducci & Marinova 2018). Such data are difficult to put into spreadsheets and relational databases due to their lack of numeric values, and they often come in the form of text fields written by consumers (Wolff, R. 2020). The goal of this project is to help develop a machine learning model to aid CommonSpirit Health and ServiceNow, which is why this approach using unstructured data was selected. This paper provides a general overview of the process of unstructured data management and explores some existing implementations and their efficacy. It then discusses our approach to converting unstructured cases into usable data, which were used to develop an artificial intelligence model estimated to be worth $400,000 and to save CommonSpirit Health $1,200,000 in organizational impact.
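
As one common first step for this kind of conversion (not necessarily the project's actual pipeline), free-text cases can be turned into a numeric feature matrix with TF-IDF; the case text below is hypothetical and assumes scikit-learn.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text support cases standing in for the unstructured
# fields described above.
cases = ["User cannot log in to the scheduling portal after a password reset",
         "Printer in radiology is offline and jobs are stuck in the queue"]

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(cases)  # sparse numeric matrix, one row per case
```
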
Contributors Bergsagel, Matteo (Author) / De Waard, Jan (Co-author) / Chavez-Echeagaray, Maria Elena (Thesis director) / Burns, Christopher (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created 2022-05