Description

Emission of CO2 into the atmosphere has become an increasingly concerning issue as we progress into the 21st century. Flue gas from coal-burning power plants accounts for 40% of all carbon dioxide emissions. The key to successful separation and sequestration is to separate CO2 directly from flue gas (10-15% CO2, 70% N2), whose temperature can range from a few hundred to as high as 1000°C. Conventional microporous membranes (carbons/silicas/zeolites) are capable of separating CO2 from N2 at low temperatures, but cannot achieve separation above 200°C. To overcome the limitations of microporous membranes, a novel ceramic-carbonate dual-phase membrane for high-temperature CO2 separation was proposed. The membrane was synthesized from porous La0.6Sr0.4Co0.8Fe0.2O3-δ (LSCF) supports infiltrated with molten carbonate (Li2CO3/Na2CO3/K2CO3). The CO2 permeation mechanism involves a reaction between gas-phase CO2 and oxide ions (O2-) from the solid phase to form carbonate ions (CO32-), which are then transported through the molten carbonate (liquid phase) to achieve separation. The effects of membrane thickness, temperature, and CO2 partial pressure were studied. Decreasing thickness from 3.0 to 0.375 mm increased the flux at 900°C from 0.186 to 0.322 mL·min-1·cm-2. CO2 flux increased with temperature from 700 to 900°C, and the activation energy for permeation was similar to that for oxygen ion conduction in LSCF. For partial pressures above 0.05 atm, the membrane exhibited a nearly constant flux. From these observations, it was determined that oxygen ion conductivity limits CO2 permeation and that the equilibrium oxygen vacancy concentration in LSCF depends on the partial pressure of CO2 in the gas phase. Finally, the dual-phase membrane was used as a membrane reactor. Separation at high temperatures can produce warm, highly concentrated streams of CO2 that could be used as a chemical feedstock for the synthesis of syngas (H2 + CO).
Towards this, three different membrane reactor configurations were examined: 1) blank system, 2) LSCF catalyst, and 3) 10% Ni/γ-alumina catalyst. Performance increased in the order blank system < LSCF catalyst < Ni/γ-alumina catalyst. Favorable conditions for syngas production were high temperature (850°C), low sweep gas flow rate (10 mL·min-1), and high methane concentration (50%) using the Ni/γ-alumina catalyst.
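As a quick illustration of the thickness result above (a back-of-the-envelope sketch using only the flux values reported in this abstract; the interpretation is a reading of the author's conclusion, not a calculation from the dissertation itself):

```python
# Reported CO2 fluxes at 900 °C (mL·min-1·cm-2) for two membrane thicknesses (mm).
flux = {3.0: 0.186, 0.375: 0.322}

thickness_gain = 3.0 / 0.375          # membrane is 8x thinner
flux_gain = flux[0.375] / flux[3.0]   # flux grew only ~1.73x

print(f"{thickness_gain:.0f}x thinner, {flux_gain:.2f}x more flux")
# Far from the ~8x gain expected if flux scaled inversely with thickness,
# which is consistent with the abstract's finding that a resistance other
# than molten-carbonate diffusion alone (oxygen ion conduction in LSCF)
# controls CO2 permeation.
```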
ContributorsAnderson, Matthew Brandon (Author) / Lin, Jerry (Thesis advisor) / Alford, Terry (Committee member) / Rege, Kaushal (Committee member) / Anderson, James (Committee member) / Rivera, Daniel (Committee member) / Arizona State University (Publisher)
Created2011
Description

Current economic conditions necessitate the extension of service lives for a variety of aerospace systems. As a result, there is an increased need for structural health management (SHM) systems to increase safety, extend life, reduce maintenance costs, and minimize downtime, lowering life cycle costs for these aging systems. The implementation of such a system requires a collaborative research effort in a variety of areas, such as novel sensing techniques, robust algorithms for damage interrogation, high-fidelity probabilistic progressive damage models, and hybrid residual life estimation models. This dissertation focuses on the sensing and damage estimation aspects of this multidisciplinary topic for application in metallic and composite material systems. The primary means of interrogating a structure in this work is Lamb wave propagation, which works well for the thin structures used in aerospace applications. Piezoelectric transducers (PZTs) were selected for this application since they can be used as both sensors and actuators of guided waves. Placement of these transducers is an important issue in wave-based approaches, as Lamb waves are sensitive to changes in material properties, geometry, and boundary conditions that may obscure the presence of damage if they are not taken into account during sensor placement. The placement scheme proposed in this dissertation arranges piezoelectric transducers in a pitch-catch mode so the entire structure can be covered using a minimum number of sensors. The stress distribution of the structure is also considered, so PZTs are placed in regions where they will not fail before the host structure. In order to process the data from these transducers, advanced signal processing techniques are employed to detect the presence of damage in complex structures. To provide a better estimate of the damage for accurate life estimation, machine learning techniques are used to classify the type of damage in the structure.
A data structure analysis approach is used to reduce the amount of data collected and increase computational efficiency. In the case of low velocity impact damage, fiber Bragg grating (FBG) sensors were used with a nonlinear regression tool to reconstruct the loading at the impact site.
ContributorsCoelho, Clyde (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Wu, Tong (Committee member) / Das, Santanu (Committee member) / Rajadas, John (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2011
Description

Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria onto manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, via screening to improve the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. 
Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters has the greatest influence on the downstream failure outcomes.
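A minimal sketch of the kind of classifier-plus-weighting pipeline the abstract describes: a linear SVM trained on upstream test data, with per-parameter weighting factors read off the hyperplane coefficients. The synthetic data, scikit-learn toolchain, and linear kernel are illustrative assumptions, not the dissertation's actual setup (which also employs L-moments and WECO-rule features).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for upstream test data: 200 units x 5 test parameters,
# where parameter 0 carries the signal that predicts downstream field failure.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0.8).astype(int)  # 1 = field failure

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)

# With a linear kernel, |coef_| acts as a per-parameter weighting factor,
# analogous to the weightings used here for root-cause investigation.
weights = np.abs(clf.named_steps["svc"].coef_).ravel()
print("most influential upstream parameter:", int(weights.argmax()))
```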
ContributorsMosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description

The focus of this study was the first Serbian opera, Na Uranku (At Dawn). It was written by Stanislav Binički (1872-1942) and was first performed in 1903 at the National Theatre in Belgrade. There were two objectives of this project: (1) a live concert performance of the opera, which produced an audio recording that can be found as an appendix; and (2) an accompanying document containing a history and an analysis of the work. While Binički's opera is recognized as an extraordinary artistic achievement and a new genre of musical enrichment for Serbian music, little had previously been written about either the composer or the work. At Dawn is a romantic opera in the verismo tradition with national elements. The significance of this opera lies not only in its artistic expression but also in how it helped the music of Serbia evolve. Early opera settings in Serbia in the mid-nineteenth to early twentieth century did not have the same wealth of history upon which to draw as had existed in the rich operatic oeuvre of Western Europe and Russia. Similarly, conditions for performance were not satisfactory, as there were no professional orchestras or singers. Furthermore, audiences were not accustomed to this type of art form. The opera served as an educational instrument for the audience, not only introducing them to a different type of music but also developing their national consciousness. Binički's opera was a foundation on which later generations of composers built. The document also emphasizes the artistic value of the opera: its musical language assimilates various influences from Western Europe and Russia, properly incorporated into the Serbian musical core. Audience reaction is discussed as a positive affirmation that Binički was moving in the right direction in establishing a path for the further development of Serbian musical culture. A synopsis of the work as well as the requisite performing forces is also included.
ContributorsMinov, Jana (Author) / Russell, Timothy (Thesis advisor) / Levy, Benjamin (Committee member) / Schildkret, David (Committee member) / Rogers, Rodney (Committee member) / Reber, William (Committee member) / Arizona State University (Publisher)
Created2011
Description

Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth requirements. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels significantly deteriorates due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include: (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and through emulations based on data collected in real-life UWA communications experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and to offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of different paths and/or users, overcome the disadvantages of the commonly used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems.
The unique design effectively mitigates interuser interference (IUI), opening up the possibility to exploit advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
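The multiple-resampling idea can be sketched in a few lines: each receiver branch resamples the received waveform by one path's (or user's) Doppler scaling factor, undoing that path's time compression before OFDM demodulation. The single tone, sampling rate, and deliberately exaggerated scaling factor below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np
from scipy.signal import resample

fs = 8000.0   # sampling rate in Hz (illustrative)
f0 = 1000.0   # transmitted tone standing in for one OFDM subcarrier
a = 1.05      # Doppler scaling factor of one path (exaggerated for clarity)

t = np.arange(0, 0.1, 1 / fs)
rx = np.cos(2 * np.pi * f0 * a * t)   # received tone, Doppler-shifted to 1050 Hz

# One resampling branch: stretch the waveform by a to undo this path's Doppler.
# A multiple-resampling receiver runs one such branch per distinct scaling
# factor (per user and/or path), unlike a single-resampling front end.
branch = resample(rx, int(round(len(rx) * a)))

peak = lambda x: np.abs(np.fft.rfft(x)).argmax() * fs / len(x)  # dominant frequency
print(peak(rx), peak(branch))   # 1050.0 Hz before, 1000.0 Hz after
```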
ContributorsTu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2011
Description

A systematic approach to composition has been used by a variety of composers to control an assortment of musical elements in their pieces. This paper begins with a brief survey of some of the important systematic approaches that composers have employed in their compositions, devoting particular attention to Pierre Boulez's Structures Ia. The purpose of this survey is to examine several systematic approaches to composition by prominent composers and their philosophies in adopting this type of approach. The next section of the paper introduces my own systematic approach to composition: the Take-Away System. The third section provides several musical applications of the system, citing my work Octulus for two pianos as an example. The appendix details theorems and observations within the system for further study.
ContributorsHarbin, Doug (Author) / Hackbarth, Glenn (Thesis advisor) / DeMars, James (Committee member) / Etezady, Roshanne, 1973- (Committee member) / Rockmaker, Jody (Committee member) / Rogers, Rodney (Committee member) / Arizona State University (Publisher)
Created2011
Description

During the twentieth century, the dual influence of nationalism and modernism in the eclectic music of Latin America promoted an idiosyncratic style which naturally combined traditional themes, popular genres, and secular music. The saxophone, commonly used as a popular instrument, started to develop a prominent role in Latin American classical music beginning in 1970. The lack of exposure and distribution of the Latin American repertoire has created a general perception that composers are not interested in the instrument and that Latin American repertoire for classical saxophone is minimal. However, there are more than 1100 works originally written for saxophone in the region, and the number continues to grow. This document, Modern Latin American Repertoire for Classical Saxophone: Recording Project and Performance Guide, establishes and exhibits seven works by seven representative Latin American composers. The recording includes works by Carlos Gonzalo Guzman (Colombia), Ricardo Tacuchian (Brazil), Roque Cordero (Panama), Luis Naón (Argentina), Andrés Alén-Rodriguez (Cuba), Alejandro César Morales (Mexico), and Jose-Luis Maúrtua (Peru), featuring a range of works from solo alto saxophone to alto saxophone with piano, alto saxophone with vibraphone, and tenor saxophone with electronic tape, thus forming an important selection of Latin American repertoire. Complete recorded performances of all seven pieces are supplemented by biographical, historical, and performance practice suggestions. The result is a written and audio guide to some of the most important pieces composed for classical saxophone in Latin America, with an emphasis on fostering interest in, and research into, composers who have contributed to the development of the instrument in Latin America.
ContributorsOcampo Cardona, Javier Andrés (Author) / McAllister, Timothy (Thesis advisor) / Spring, Robert (Committee member) / Hill, Gary (Committee member) / Pilafian, Sam (Committee member) / Rogers, Rodney (Committee member) / Gardner, Joshua (Committee member) / Arizona State University (Publisher)
Created2011
Description

Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome the high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, a frequency-pruning and a detector-pruning algorithm are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns.
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
ContributorsKrishnamoorthi, Harish (Author) / Spanias, Andreas (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2011
Description

This study examined attitudes and perspectives of classroom guitar students toward the reading of staff notation in music. The purpose of this qualitative research was to reveal these perceptions in the students' own words, and to compare them to those of orchestra and band students of comparable experience. Forty-seven students from four suburban middle and high schools on the East Coast were selected through purposeful sampling techniques. Research instruments included a Musical Background Questionnaire and a thirty-five-question Student Survey. Follow-up interviews were conducted with students to clarify or expound upon collected data. Guitar, orchestra, and band teachers were interviewed in order to provide their perspectives on the issues discussed. The Student Survey featured a five-point Likert-type scale, which measured how much students agreed or disagreed with various statements pertaining to their feelings about music, note-reading, or their class at school. Collected data were coded and used to calculate mean scores, standard deviations, and percentages of students in agreement or disagreement with each statement. Interviews were audio recorded and transcribed into a word processing document for analysis. The study found that while a variety of perspectives exist within a typical guitar class, some students do not find note-reading to be necessary for the types of music they desire to learn. Other findings included a perceived lack of relevance of the classical elements of the guitar programs in the schools, a lack of educational consistency between classroom curricula and private lesson objectives, and a general description of the struggle some guitarists experience with staff notation. Implications of the collected data were discussed, along with recommendations for better engaging these students.
ContributorsWard, Stephen Michael (Author) / Koonce, Frank (Thesis advisor) / Schmidt, Margaret (Thesis advisor) / Buck, Nancy (Committee member) / Rogers, Rodney (Committee member) / McLin, Katherine (Committee member) / Arizona State University (Publisher)
Created2011
Description

One necessary condition for the two-pass risk premium estimator to be consistent and asymptotically normal is that the beta matrix in a proposed linear asset pricing model has full column rank. I first investigate the asymptotic properties of the risk premium estimators and the related t-test and Wald test statistics when the full rank condition fails. I show that in asset pricing models omitting some true factors, the beta risk of useless factors, or of multiple proxy factors for a true factor, is priced more often than it should be at the nominal size, while under the null hypothesis that the risk premiums of the true factors equal zero, the beta risk of the true factors is priced less often than the nominal size. The simulation results are consistent with the theoretical findings. Hence, factor selection in a proposed factor model should not be based solely on the estimated risk premiums. In response to this problem, I propose an alternative estimation of the underlying factor structure. Specifically, I propose to use linear combinations of factors weighted by the eigenvectors of the inner product of the estimated beta matrix. I further propose a new method to estimate the rank of the beta matrix in a factor model. For this method, the idiosyncratic components of asset returns are allowed to be correlated both over different cross-sectional units and over different time periods. The estimator I propose is easy to use because it is computed from the eigenvalues of the inner product of an estimated beta matrix. Simulation results show that the proposed method works well even in small samples. The analysis of US individual stock returns suggests that there are six common risk factors among the thirteen factor candidates used. The analysis of portfolio returns reveals that the estimated number of common factors changes depending on how the portfolios are constructed.
The number of risk sources found from the analysis of portfolio returns is generally smaller than the number found in individual stock returns.
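A minimal sketch of the eigenvalue idea behind the rank estimator, on synthetic data. The dissertation's exact procedure and cutoff rule differ; this only illustrates counting the "large" eigenvalues of the inner product of an estimated beta matrix, with the data-generating process, PCA proxies, and 10% threshold as assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, k = 500, 50, 3   # time periods, assets, true number of common factors

# Synthetic returns with a k-factor structure (illustrative, not the thesis data).
beta_true = rng.normal(size=(N, k))
F = rng.normal(size=(T, k))
R = F @ beta_true.T + 0.5 * rng.normal(size=(T, N))

# First pass: regress demeaned returns on 6 candidate factor proxies
# (here, leading principal components), giving an estimated N x 6 beta matrix B.
Rc = R - R.mean(axis=0)
U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
proxies = U[:, :6] * np.sqrt(T)
B = np.linalg.lstsq(proxies, Rc, rcond=None)[0].T

# Key idea: among the eigenvalues of the inner product B'B, only as many are
# "large" as the rank of the beta matrix, so counting them estimates the
# number of common risk factors.
eigs = np.sort(np.linalg.eigvalsh(B.T @ B))[::-1]
rank_hat = int((eigs > 0.1 * eigs[0]).sum())
print(rank_hat)   # recovers the true rank, 3
```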
ContributorsWang, Na (Author) / Ahn, Seung C. (Thesis advisor) / Kallberg, Jarl G. (Committee member) / Liu, Crocker H. (Committee member) / Arizona State University (Publisher)
Created2011