Matching Items (805)

Contributors: Chang, Ruihong (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-29
Description
Four Souvenirs for Violin and Piano was composed by Paul Schoenfeld (b. 1947) in 1990 as a showpiece, spotlighting the virtuosity of both the violin and piano in equal measure. Each movement is a modern interpretation of a folk or popular genre, re-envisioned over intricate jazz harmonies and rhythms. The work was commissioned by violinist Lev Polyakin, who specifically requested short pieces that could be performed in a local jazz establishment named Night Town in Cleveland, Ohio. The result is a work approximately fifteen minutes in length. Schoenfeld is a respected composer in the contemporary classical music community, whose Café Music (1986) for piano trio has recently become a staple of the standard chamber music repertoire. Many of his other works, however, remain in relative obscurity. The focus of this document is to shed light on at least one other notable composition: Four Souvenirs for Violin and Piano. The topics discussed include a brief history of the genesis of the composition, a structural summary of the entire work and each of its movements, and an appended practice guide based on interview and coaching sessions with the composer himself. With this project, I hope to provide a better understanding and appreciation of this work.
Contributors: Janczyk, Kristie Annette (Author) / Ryan, Russell (Thesis advisor) / Campbell, Andrew (Committee member) / Norton, Kay (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Modern, advanced statistical tools from data mining and machine learning have become commonplace in molecular biology in large part because of the “big data” demands of various kinds of “-omics” (e.g., genomics, transcriptomics, metabolomics, etc.). However, in other fields of biology where empirical data sets are conventionally smaller, more traditional statistical methods of inference are still very effective and widely used. Nevertheless, with the decrease in cost of high-performance computing, these fields are starting to employ simulation models to generate insights into questions that have been elusive in the laboratory and field. Although these computational models allow for exquisite control over large numbers of parameters, they also generate data at a qualitatively different scale than most experts in these fields are accustomed to. Thus, more sophisticated methods from big-data statistics have an opportunity to better facilitate the often-forgotten area of bioinformatics that might be called “in-silicomics”.

As a case study, this thesis develops methods for the analysis of large amounts of data generated from a simulated ecosystem designed to understand how mammalian biomechanics interact with environmental complexity to modulate the outcomes of predator–prey interactions. These simulations investigate which biomechanical parameters relating to the agility of animals in predator–prey pairs best predict pursuit outcomes. Traditional modelling techniques such as forward, backward, and stepwise variable selection are initially used to study these data, but the number of parameters and potentially relevant interaction effects render these methods impractical. Consequently, newer modelling techniques such as LASSO regularization are used and compared to the traditional techniques in terms of accuracy and computational complexity. Finally, the splitting rules and instances in the leaves of classification trees provide the basis for future simulations with an economical number of additional runs. In general, this thesis shows the increased utility of these sophisticated statistical techniques with simulated ecological data compared to the approaches traditionally used in these fields. These techniques, combined with methods from industrial Design of Experiments, will help ecologists extract novel insights from simulations that combine habitat complexity, population structure, and biomechanics.
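
The model-selection comparison described in this abstract can be sketched briefly in Python. The snippet below is an illustration only, not the thesis's actual pipeline: the data are synthetic (scikit-learn's make_regression stands in for simulation output), and all sample and feature counts are invented.

```python
# Illustrative sketch: contrast forward stepwise selection with LASSO on
# synthetic data standing in for simulation output (all sizes are invented).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LassoCV, LinearRegression

# Many candidate parameters, few truly informative ones.
X, y = make_regression(n_samples=500, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

# Forward stepwise selection: refits a model for each remaining candidate at
# every step, which becomes impractical as parameters and interactions multiply.
forward = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5,
                                    direction="forward").fit(X, y)
print("Stepwise selected:", np.flatnonzero(forward.get_support()))

# LASSO: a single regularized fit shrinks irrelevant coefficients to exactly
# zero, with the penalty strength chosen by cross-validation.
lasso = LassoCV(cv=5).fit(X, y)
print("LASSO retained:  ", np.flatnonzero(lasso.coef_))
```

The difference in computational cost is visible in the structure: stepwise selection performs a model refit per candidate feature per step, while LASSO reaches a sparse model along a single regularization path.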
Contributors: Seto, Christian (Author) / Pavlic, Theodore (Thesis advisor) / Li, Jing (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2018
Contributors: ASU Library. Music Library (Publisher)
Created: 2018-02-23
Description
Visualizations are an integral component for communicating and evaluating modern networks. As data becomes more complex, infographics require a balance between visual noise and effective storytelling that is often restricted by layouts unsuitable for scalability. The challenge then rests upon researchers to structure their information in a way that allows for flexible, transparent illustration. We propose network graphing as an operative alternative for demonstrating community behavior over traditional charts, which are unable to look past numeric data. In this paper, we explore methods for manipulating, processing, cleaning, and aggregating data in Python, a programming language tailored for handling structured data, which can then be formatted for analysis and modeling of social network tendencies in Gephi. We apply the Fruchterman-Reingold force-directed layout algorithm to datasets of Arizona State University’s research and collaboration network. The result is a visualization that analyzes the university’s infrastructure by providing insight into community behaviors between colleges. Furthermore, we highlight how the flexibility of this visualization provides a foundation for specific use cases by demonstrating centrality measures that find important liaisons connecting distant communities.
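
A rough sketch of this workflow is possible in a few lines of Python, assuming networkx and matplotlib rather than the Gephi toolchain the paper uses, and a built-in stand-in graph rather than ASU's actual collaboration data. networkx's spring_layout implements the same Fruchterman-Reingold force-directed algorithm, and betweenness centrality is one measure for surfacing liaison nodes:

```python
# Sketch: force-directed layout plus a centrality measure for liaison nodes.
# karate_club_graph is a stand-in dataset, not ASU's collaboration network.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.karate_club_graph()

# spring_layout is networkx's Fruchterman-Reingold force-directed layout.
pos = nx.spring_layout(G, seed=42)

# Betweenness centrality highlights nodes that bridge distant communities.
betweenness = nx.betweenness_centrality(G)
sizes = [50 + 3000 * betweenness[n] for n in G.nodes]

nx.draw(G, pos, node_size=sizes, with_labels=True)
plt.savefig("network.png")
```

Scaling node size by centrality makes the high-betweenness liaisons visually prominent, mirroring the use case the abstract describes.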
Contributors: McMichael, Jacob Andrew (Author) / LiKamWa, Robert (Thesis director) / Anderson, Derrick (Committee member) / Goshert, Maxwell (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
37,461 automobile accident fatalities occurred in the United States in 2016 ("Quick Facts 2016", 2017). Improving the safety of roads has traditionally been approached by governmental agencies, including the National Highway Traffic Safety Administration and State Departments of Transportation. In past literature, automobile crash data is analyzed using time-series prediction techniques to identify road segments and/or intersections likely to experience future crashes (Lord & Mannering, 2010). After dangerous zones have been identified, road modifications can be implemented, improving public safety. This project introduces a historical safety metric for evaluating the relative danger of roads in a road network. The historical safety metric can be used to update the routing choices of individual drivers, improving public safety by avoiding historically more dangerous routes. The metric is constructed using crash frequency, severity, location, and traffic information. An analysis of publicly available crash and traffic data in Allegheny County, Pennsylvania is used to generate the historical safety metric for a specific road network. Methods for evaluating routes based on the presented historical safety metric are included, using the Mann-Whitney U test to evaluate the significance of routing decisions. The evaluation method presented requires that routes have at least 20 crashes to be compared with significance testing. The safety of the road network is visualized using a heatmap to present the distribution of the metric throughout Allegheny County.
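
A minimal sketch of the route comparison described here, assuming invented per-crash metric values (the abstract's actual metric combines crash frequency, severity, location, and traffic data):

```python
# Sketch: compare two routes' historical safety metrics with the
# Mann-Whitney U test. All metric values below are invented.
from scipy.stats import mannwhitneyu

# Hypothetical per-crash metric samples along two candidate routes;
# the method requires at least 20 crashes per route.
route_a = [2.1, 3.4, 1.8, 2.9, 4.0, 2.2, 3.1, 1.5, 2.7, 3.8,
           2.0, 1.9, 3.3, 2.6, 2.4, 3.0, 1.7, 2.8, 3.6, 2.3]
route_b = [4.5, 5.1, 3.9, 4.8, 5.6, 4.2, 3.7, 5.0, 4.4, 4.9,
           5.3, 4.1, 4.7, 3.8, 5.2, 4.6, 4.3, 5.5, 4.0, 5.4]

# Nonparametric two-sided test: is one route's metric distribution
# shifted relative to the other's?
stat, p_value = mannwhitneyu(route_a, route_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")  # a small p flags a significant difference
```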
Contributors: Gupta, Ariel Meron (Author) / Bansal, Ajay (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
In the words of W. Edwards Deming, "the central problem in management and in leadership is failure to understand the information in variation." While many quality management programs propose instituting technical training in advanced statistical methods, this paper proposes that by understanding the fundamental information behind statistical theory, and by minimizing bias and variance while fully utilizing the available information about the system at hand, one can make valuable, accurate predictions about the future. Combining this knowledge with the work of quality gurus W. E. Deming, Eliyahu Goldratt, and Dean Kashiwagi yields a framework for making valuable predictions for continuous improvement. After this information is synthesized, it is concluded that the best way to make accurate, informative predictions about the future is to "balance the present and future," seeing the future through the lens of the present and thus minimizing bias, variance, and risk.
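
The bias and variance the abstract invokes can be made concrete with a small simulation. This is a generic textbook illustration, not the paper's framework; the data-generating function, sample sizes, and model degrees are all invented:

```python
# Generic illustration of the bias-variance trade-off (not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(x)  # invented "true" system behavior

x_test = 1.0  # point at which predictions are evaluated

for degree in (1, 3, 7):
    preds = []
    for _ in range(200):  # repeat the fit on fresh noisy samples
        x = rng.uniform(0, 3, 30)
        y = true_f(x) + rng.normal(0, 0.3, 30)
        preds.append(np.polyval(np.polyfit(x, y, degree), x_test))
    preds = np.array(preds)
    bias_sq = (preds.mean() - true_f(x_test)) ** 2
    print(f"degree {degree}: bias^2 = {bias_sq:.4f}, variance = {preds.var():.4f}")
```

Simple models tend toward high bias and low variance, flexible models toward the reverse; balancing the two is the statistical core of the predictions discussed above.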
Contributors: Synodis, Nicholas Dahn (Author) / Kashiwagi, Dean (Thesis director, Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Contributors: White, Aaron (Performer) / Kim, Olga (Performer) / Hammond, Marinne (Performer) / Shaner, Hayden (Performer) / Yoo, Katie (Performer) / Shoemake, Crista (Performer) / Gebe, Vladimir, 1987- (Performer) / Wills, Grace (Performer) / McKinch, Riley (Performer) / Freshmen Four (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-04-27
Contributors: Rosenfeld, Albor (Performer) / Pagano, Caio, 1940- (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-10-03
Contributors: ASU Library. Music Library (Publisher)
Created: 2018-10-04