Matching Items (52)
Description
This study compares the Hummel Concertos in A Minor, Op. 85, and B Minor, Op. 89, with the Chopin Concertos in E Minor, Op. 11, and F Minor, Op. 21. On first hearing Hummel's rarely played concertos, one immediately detects similarities with Chopin's concerto style. Upon closer examination, one discovers a substantial number of interesting and significant parallels with Chopin's concertos, many of which are highlighted in this research project. Hummel belongs to a generation of composers who shifted away from the Classical style, and Chopin, as an early Romantic, absorbed much from his immediate predecessors in establishing his highly distinctive style. I have chosen to focus on Chopin's concertos to demonstrate this association. The essay begins with a discussion of the historical background of Chopin's formative years as it pertains to the development of his compositional style, Hummel's role and influence in the contemporary musical arena, and the interactions between the two composers. It then provides the historical background of the aforementioned concertos, leading to a comparative analysis that includes structural, melodic, harmonic, and motivic parallels. With a better understanding of his stylistic influences, and of how Chopin assimilated them in the creation of his masterful works, the performer can adopt a more informed approach to the interpretation of these two concertos, which are among the most beloved masterpieces in the piano literature.
ContributorsYam, Jessica (Author) / Hamilton, Robert (Thesis advisor) / Levy, Benjamin (Committee member) / Ryan, Russell (Committee member) / Arizona State University (Publisher)
Created2013
Description
There has been a tremendous amount of innovation in policing over the last 40 years, from community and problem-oriented policing to hot spots and intelligence-led policing. Many of these innovations have been subjected to empirical testing, with mixed results on effectiveness. The latest innovation in policing is the Bureau of Justice Assistance's Smart Policing Initiative (SPI). Created in 2009, the SPI provides funding to law enforcement agencies to develop and test evidence-based practices to address crime and disorder. Researchers have not yet tested the impact of the SPI on the funded agencies, particularly with regard to core principles of the Initiative, the most notable of which is the collaboration between law enforcement agencies and their research partners. The current study surveyed SPI agencies and their research partners on key aspects of their Initiative, using mean score comparisons and qualitative responses to evaluate this partnership and determine the extent of its value and effect. It also sought to determine which areas of police agency crime analysis and research units are most in need of enhancement. Findings indicate that the research partners are actively involved in a wide range of problem-solving activities under the Smart Policing Initiative, that they have positively influenced police agencies' research and crime analysis functions, and, to a lesser extent, that they have positively affected police agencies' tactical operations. Additionally, personnel, technology, and training were found to be the main areas of the crime analysis and research units that still need to be enhanced. The thesis concludes with a discussion of the implications of these findings for police policy and practice.
ContributorsMartin-Roethele, Chelsie (Author) / White, Michael D. (Thesis advisor) / Ready, Justin (Committee member) / D'Anna, Matthew (Committee member) / Arizona State University (Publisher)
Created2013
Description
This research applies two methods of rhetorical analysis to State of the Union Addresses: 1. a computational linguistic analysis of all State of the Union Addresses from 1790 to 2007, and 2. close readings and rhetorical analyses of two addresses, one by President Truman and one by President Reagan. This research yields the following key findings: 1. through historical computational analyses of the content of all speeches, I am able to see general shifts in the authors' approaches to the State of the Union Address, and 2. through close readings, I can understand the impact of the author's ethos and the historical context on the addresses, something that would not be readily revealed in a computational analysis. The study starts with a historical computational linguistic analysis of all State of the Union Addresses between 1790 and 2007. It then offers close readings, in context, of two State of the Union Addresses from the early and late Cold War period: 1. Harry Truman's 1951 Address and 2. Ronald Reagan's 1986 Address. The main conclusions drawn from this research are that close readings of State of the Union Addresses cannot be replaced by computational analyses, but can work in tandem with computerized text analysis to reveal shifts in rhetorical and topical features. This paper argues that more close analyses must be conducted in coordination with large-scale text analysis in order to understand the complexities of rhetorical situations.
ContributorsWegner, Peter (Author) / Goggin, Maureen (Thesis advisor) / Boyd, Patricia (Committee member) / Goggin, Peter (Committee member) / Arizona State University (Publisher)
Created2013
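As a hedged illustration of the kind of historical computational linguistic analysis described in the abstract above, the sketch below tracks the relative frequency of a few hand-picked terms across a directory of plain-text State of the Union addresses. The directory layout, the year-keyed file naming, and the term list are assumptions made for the example, not details taken from the study.

```python
import re
from pathlib import Path
from collections import Counter

# Assumed layout: one plain-text file per address, named like "1951.txt".
CORPUS_DIR = Path("sotu_addresses")
TERMS = ["freedom", "economy", "war"]  # illustrative terms, not the study's

def term_frequencies(text: str, terms: list[str]) -> dict[str, float]:
    """Relative frequency (per 1,000 words) of each term in one address."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {t: 1000 * counts[t] / total for t in terms}

def corpus_trends(corpus_dir: Path, terms: list[str]) -> dict[int, dict[str, float]]:
    """Map each address year to its per-term relative frequencies."""
    trends = {}
    for path in sorted(corpus_dir.glob("*.txt")):
        year = int(path.stem)
        trends[year] = term_frequencies(path.read_text(encoding="utf-8"), terms)
    return trends

if __name__ == "__main__":
    for year, freqs in corpus_trends(CORPUS_DIR, TERMS).items():
        print(year, {t: round(f, 2) for t, f in freqs.items()})
```

Plotting these per-year frequencies is one simple way such an analysis can surface broad topical shifts that a close reading of individual addresses would then interpret in context.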
Description
Nino Rota was a prolific composer of twentieth-century film and concert music, including the Concerto for bassoon and orchestra in B-flat major. Composing over 150 film scores for directors such as Federico Fellini, Francis Ford Coppola, Henry Cass, King Vidor and Franco Zeffirelli, Rota received distinguished acclaim from several film institutions, professional film reviewers and film music experts for his contributions to the art form. Rota also composed a great deal of diverse repertoire for the concert stage (ballet, opera, incidental music, concerti, symphonies, as well as several chamber works). The purpose of this analysis is to emphasize the expressive charm and accessibility of his concerto within the bassoon repertoire. The analysis concentrates on a single concerto from his concert repertoire, the Concerto for bassoon and orchestra, completed in 1977, two years before Rota's death. The discussion includes a brief introduction to Nino Rota and his accomplishments as a musician and film composer, and a detailed outline of the motivic and structural events contained in each movement of the concerto. The shape of the work is analyzed both in detailed discussion and through charts, including reduced-score figures of excerpts of the piece, which illustrate significant thematic events and relationships. The analysis reveals how Rota uses lyrical thematic material consistently and develops the music by creating melodic sequences and varied repetitions of thematic material. He is comfortable writing in several forms, as indicated by the first movement, Toccata, a sonata-type form; the second movement, Recitativo, opening with a cadenza and followed by a theme and brief development; and the third movement, a theme (Andantino) and set of six variations. Rota's writing also includes contrapuntal techniques such as imitation, inversion, retrograde and augmentation, all creating expressive interest during thematic development. It is clear from the discussion that Rota is an accomplished, well-studied and lyrical composer. This analysis will inform the bassoonist and conductor, and aid in developing a fondness for the Concerto for bassoon and orchestra and perhaps other concert works.
ContributorsKluesener, Joseph (Author) / Micklich, Albie (Thesis advisor) / Hill, Gary (Committee member) / Levy, Benjamin (Committee member) / Russell, Timothy (Committee member) / Schuring, Martin (Committee member) / Arizona State University (Publisher)
Created2012
Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially correlated faults are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for the resource-efficient design of networks. It has been shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS), have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
ContributorsBanerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created2013
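The region-based metrics named in the abstract above (RBCDN and RBLCS) lend themselves to a small computational illustration. The sketch below is an interpretation of those definitions rather than the dissertation's own formulation: it removes every node falling inside a circular fault region of a given radius from a toy geometric graph and reports the worst case over an assumed candidate set of region centers (the node positions themselves).

```python
import math
import networkx as nx

def nodes_in_region(G, center, radius):
    """Nodes whose coordinates fall inside a circular fault region."""
    cx, cy = center
    return [n for n, (x, y) in nx.get_node_attributes(G, "pos").items()
            if math.hypot(x - cx, y - cy) <= radius]

def region_fault_metrics(G, radius):
    """Worst-case component count (RBCDN-style) and largest-component size
    (RBLCS-style) over fault regions centered at each node position."""
    worst_count, worst_largest = 1, G.number_of_nodes()
    for center in nx.get_node_attributes(G, "pos").values():
        H = G.copy()
        H.remove_nodes_from(nodes_in_region(G, center, radius))
        if H.number_of_nodes() == 0:
            continue  # the region wiped out the whole network
        components = list(nx.connected_components(H))
        worst_count = max(worst_count, len(components))
        worst_largest = min(worst_largest, max(len(c) for c in components))
    return worst_count, worst_largest

if __name__ == "__main__":
    G = nx.random_geometric_graph(100, 0.18, seed=1)  # toy wireless topology
    print(region_fault_metrics(G, radius=0.15))
```

Sweeping the region radius in such a sketch gives a feel for how prior knowledge of the maximum fault-region size changes the picture of post-fault connectivity.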
Description
The computation of the fundamental mode in structural moment frames provides valuable insight into the physical response of the frame to dynamic or time-varying loads. In standard practice, it is not necessary to solve for all n mode shapes in a structural system; it is therefore practical to limit the system to some determined number r of significant mode shapes. Current building codes, such as those of the American Society of Civil Engineers (ASCE), require certain classes of structures to achieve 90% effective mass participation as a way to estimate the accuracy of a solution for base shear motion. A parametric study was performed on data collected from the analysis of a large number of framed structures. The purpose of this study was the development of rules for the required number r of significant modes needed to meet the ASCE code requirements. The study was based on the implementation of an algorithm and a computer program developed in the past. The algorithm is based on Householder transformations, QR factorization, and inverse iteration, and it extracts a requested number s (s << n) of predominant mode shapes and periods; only the first r (r < s) of these modes are accurate. To verify the accuracy of the algorithm, a variety of building frames were analyzed using commercially available structural software (RISA 3D) as a benchmark. The salient features of the algorithm are presented briefly in this study.
ContributorsGrantham, Jonathan (Author) / Fafitis, Apostolos (Thesis advisor) / Attard, Thomas (Committee member) / Houston, Sandra (Committee member) / Hjelmstad, Keith (Committee member) / Arizona State University (Publisher)
Created2014
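The 90% effective mass participation criterion cited in the abstract above can be shown with a small worked example. The sketch below is an assumption-laden stand-in for the dissertation's Householder/QR/inverse-iteration algorithm: it uses SciPy's dense generalized eigensolver on a toy shear-building model (matrices invented for illustration) and counts how many modes r are needed to reach the 90% threshold.

```python
import numpy as np
from scipy.linalg import eigh

def modes_for_mass_participation(M, K, target=0.90):
    """Smallest number of modes r whose cumulative effective modal mass
    reaches `target` of the total mass (unit influence vector assumed)."""
    # Generalized eigenproblem K*phi = omega^2 * M*phi; eigh sorts ascending.
    w2, phi = eigh(K, M)
    r_vec = np.ones(M.shape[0])                       # rigid-body influence vector
    L = phi.T @ M @ r_vec                             # modal excitation factors
    m_gen = np.einsum("ij,jk,ki->i", phi.T, M, phi)   # generalized modal masses
    eff_mass = L**2 / m_gen                           # effective modal masses
    cum_ratio = np.cumsum(eff_mass) / (r_vec @ M @ r_vec)
    r = int(np.searchsorted(cum_ratio, target) + 1)
    return r, np.sqrt(w2), cum_ratio

if __name__ == "__main__":
    # Toy 5-story shear frame (lumped masses, story stiffnesses) -- illustrative only.
    m, k, n = 1.0, 100.0, 5
    M = m * np.eye(n)
    K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    K[-1, -1] = k                                     # free top story
    r, omega, ratios = modes_for_mass_participation(M, K)
    print(f"modes needed for 90% participation: {r}", np.round(ratios, 3))
```

For regular shear frames the fundamental mode typically dominates the participation sum, which is why a small r usually satisfies the code requirement.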
Description
The objective of this thesis is to investigate the various types of energy end uses to be expected in future high-efficiency single-family residences. For this purpose, this study analyzed monitored data from 14 houses in the 2013 Solar Decathlon competition and segregated the energy consumption patterns into various residential end uses (such as lights, refrigerators, washing machines, ...). The analysis was not straightforward, since these homes were operated according to schedules previously determined by the contest rules. The analysis approach allowed the isolation of the comfort energy use by the Heating, Ventilating and Air Conditioning (HVAC) systems. HVAC systems are the biggest contributors to energy consumption during the operation of a building, and therefore are a prime concern for energy performance during building design and operation. Both steady-state and dynamic models of comfort energy use, which take into account variations in indoor and outdoor temperatures, solar radiation, and the thermal mass of the building, were explicitly considered. Steady-state inverse models are frequently used for thermal analysis to evaluate HVAC energy performance: they are fast and accurate, offer great flexibility for mathematical modifications, and can be applied to a variety of buildings. The results are presented as a horizontal study that compares energy consumption across homes to arrive at a generic rather than unique model, to be used in future discussions in the context of ultra-efficient homes. It is suggested that similar analyses of the energy-use data, comparing the performance of a variety of ultra-efficient technologies, be conducted to provide more accurate indications of consumption by end use for future single-family residences. These can be used alongside the Residential Energy Consumption Survey (RECS) and the Leading Indicator for Remodeling Activity (LIRA) indices to assist in planning and policy making related to the residential energy sector.
ContributorsGarkhail, Rahul (Author) / Reddy, T Agami (Thesis advisor) / Bryan, Harvey (Committee member) / Addison, Marlin (Committee member) / Arizona State University (Publisher)
Created2014
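Steady-state inverse models of the kind mentioned above are often formulated as change-point regressions of energy use against outdoor temperature. The sketch below fits a simple three-parameter cooling model, E = b0 + b1*max(Tout - Tcp, 0), to synthetic daily data by grid-searching the change point; the data, parameter values, and model form are illustrative assumptions, not results or methods from the Solar Decathlon analysis.

```python
import numpy as np

def fit_3p_cooling(t_out, energy, cp_grid=None):
    """Fit E = b0 + b1 * max(T_out - T_cp, 0) by least squares,
    grid-searching the change point T_cp."""
    if cp_grid is None:
        cp_grid = np.linspace(t_out.min(), t_out.max(), 50)
    best = None
    for t_cp in cp_grid:
        x = np.maximum(t_out - t_cp, 0.0)
        A = np.column_stack([np.ones_like(x), x])
        coeffs, *_ = np.linalg.lstsq(A, energy, rcond=None)
        rss = np.sum((A @ coeffs - energy) ** 2)
        if best is None or rss < best[0]:
            best = (rss, t_cp, coeffs)
    rss, t_cp, (b0, b1) = best
    return {"T_cp": t_cp, "base_load": b0, "cooling_slope": b1, "RSS": rss}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t_out = rng.uniform(10, 40, 200)   # synthetic daily outdoor temperatures, deg C
    energy = 5.0 + 1.2 * np.maximum(t_out - 22.0, 0) + rng.normal(0, 0.5, 200)
    print(fit_3p_cooling(t_out, energy))
```

The fitted base load and cooling slope are the kind of per-home parameters a horizontal study can compare across houses to arrive at a generic model.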
Description
The uncertainty of change inherent in issues such as climate change and regional growth has created a significant challenge for public decision makers trying to decide what adaptation actions are needed to respond to these possible changes. This challenge threatens the resiliency, and thus the long-term sustainability, of our social-ecological systems. Using an embedded case study approach to explore the application of advanced scenario analysis methods to regional growth visioning projects in two regions, this dissertation provides empirical evidence that, for issues with high uncertainty, advanced scenario planning (ASP) methods are effective tools for helping decision makers anticipate and prepare to adapt to change.
ContributorsQuay, Ray (Author) / Pijawka, David (Thesis advisor) / Shangraw, Ralph (Committee member) / Holway, James (Committee member) / Arizona State University (Publisher)
Created2011
Description
The world of healthcare can be seen as dynamic, often an area where technology and science meet to achieve a greater good for humanity. This relationship has worked well for the last century, as evidenced by changes in average life expectancy. Over the greater part of the last five decades, average life expectancy at birth increased globally by almost 20 years. In the United States specifically, life expectancy grew from 50 years in 1900 to 78 years in 2009, a 56% increase in just over a century. As great as this increase sounds for humanity, it means there are soon to be real issues in the healthcare world. A larger, older population will need more healthcare services but will have fewer young professionals to provide those services. Technology and science will need to continue to push the boundaries in order to develop and provide the solutions needed to keep providing the aging world population sufficient healthcare. One solution sure to help provide a brighter future for healthcare is mobile health (m-health). M-health can provide a means for healthcare professionals to treat more patients with less work expenditure, and to do so with more personalized healthcare advice that will lead to better treatments. This paper discusses one area of m-health devices specifically: human breath analysis devices. The current laboratory methods of breath analysis, and why these methods are not adequate for common healthcare practices, are discussed in more detail. Mobile breath analysis devices are then discussed more specifically. The topic encompasses the challenges that need to be met in developing such devices, possible solutions to these challenges, two real examples of mobile breath analysis devices and, finally, possible future directions for m-health technologies.
ContributorsLester, Bryan (Author) / Forzani, Erica (Thesis advisor) / Xian, Xiaojun (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created2012
Description
Access control is one of the most fundamental security mechanisms used in the design and management of modern information systems. However, it remains an open question how formal access control models can be automatically analyzed and fully realized in secure system development. Furthermore, specifying and managing access control policies are often error-prone due to the lack of effective analysis mechanisms and tools. In this dissertation, I present an Assurance Management Framework (AMF) that is designed to cope with various assurance management requirements from both access control system development and policy-based computing. On the one hand, the AMF framework facilitates comprehensive analysis and thorough realization of formal access control models in secure system development; I demonstrate how this method can be applied to build role-based access control systems by adopting the NIST/ANSI RBAC standard as the underlying security model. On the other hand, the AMF framework ensures the correctness of access control policies in policy-based computing through automated reasoning techniques and anomaly management mechanisms. A systematic method is presented to formulate XACML in Answer Set Programming (ASP), which allows users to leverage off-the-shelf ASP solvers for a variety of analysis services. In addition, I introduce a novel anomaly management mechanism, along with a grid-based visualization approach, which enables systematic and effective detection and resolution of policy anomalies. I further evaluate the AMF framework by modeling and analyzing multiparty access control in Online Social Networks (OSNs). A MultiParty Access Control (MPAC) model is formulated to capture the essence of multiparty authorization requirements in OSNs. In particular, I show how AMF can be applied to OSNs for identifying and resolving privacy conflicts, and for representing and reasoning about the MPAC model and policies. To demonstrate the feasibility of the proposed methodology, a suite of proof-of-concept prototype systems is implemented as well.
ContributorsHu, Hongxin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Dasgupta, Partha (Committee member) / Ye, Nong (Committee member) / Arizona State University (Publisher)
Created2012
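As a hedged illustration of the policy anomaly detection described in the abstract above, the sketch below flags conflicting pairs of attribute-based rules, i.e. rules whose subject/resource/action conditions overlap but whose effects disagree (Permit vs. Deny). The rule representation and wildcard convention are simplified stand-ins invented for the example, not the dissertation's XACML-to-ASP encoding or its grid-based visualization.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A simplified attribute-based rule: an empty attribute set means 'any'."""
    name: str
    effect: str                       # "Permit" or "Deny"
    subjects: frozenset = field(default_factory=frozenset)
    resources: frozenset = field(default_factory=frozenset)
    actions: frozenset = field(default_factory=frozenset)

def _overlaps(a: frozenset, b: frozenset) -> bool:
    return not a or not b or bool(a & b)   # empty set acts as a wildcard

def conflicting_pairs(rules):
    """Pairs of rules whose applicability overlaps but whose effects disagree."""
    conflicts = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            if (r1.effect != r2.effect
                    and _overlaps(r1.subjects, r2.subjects)
                    and _overlaps(r1.resources, r2.resources)
                    and _overlaps(r1.actions, r2.actions)):
                conflicts.append((r1.name, r2.name))
    return conflicts

if __name__ == "__main__":
    rules = [
        Rule("R1", "Permit", frozenset({"nurse"}), frozenset({"record"}), frozenset({"read"})),
        Rule("R2", "Deny",   frozenset(),          frozenset({"record"}), frozenset({"read"})),
    ]
    print(conflicting_pairs(rules))   # [('R1', 'R2')]
```

In a full system, each flagged pair would then be resolved by a combining strategy (e.g. deny-overrides) or by tightening one rule's conditions, which is the resolution step the anomaly management mechanism automates.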