Description
The magnetoplasmadynamic (MPD) thruster is an electromagnetic thruster that produces a higher specific impulse than conventional chemical rockets and greater thrust densities than electrostatic thrusters, but a well-known operational limit, referred to as "onset", imposes a severe limitation on efficiency and lifetime. This phenomenon is associated with large fluctuations in operating voltage, high rates of electrode erosion, and three-dimensional instabilities in the plasma flow-field which cannot be adequately represented by two-dimensional, axisymmetric models. Simulations of the Princeton Benchmark Thruster (PBT) were conducted using the three-dimensional version of the magnetohydrodynamic (MHD) code MACH. Validation of the numerical model is partially achieved by comparison to equivalent simulations conducted using the well-established two-dimensional, axisymmetric version of MACH. Comparisons with available experimental data were subsequently performed to further validate the model and gain insight into the physical processes of MPD acceleration. Thrust, plasma voltage, and plasma flow-field predictions were calculated for the PBT operating with applied currents in the range 6.5 kA < J < 23.25 kA and mass-flow rates of 1 g/s, 3 g/s, and 6 g/s. Comparisons of performance characteristics between the two versions of the code show excellent agreement, indicating that MACH3 can be expected to be as predictive as MACH2 has proven over multiple applications to MPD thrusters. Predicted thrust for operating conditions that exhibited no symptoms of the onset phenomenon experimentally also agreed with experiment well within the experimental uncertainty. At operating conditions beyond such values, however, there is a discrepancy of up to ~20%, which implies that certain significant physical processes associated with onset are not currently being modeled. Such processes also appear in the experimental total voltage data as the characteristic "voltage hash", but are absent from the predicted plasma voltage. Additionally, analysis of the predicted plasma flow-field shows no breakdown in azimuthal symmetry, which is expected to be associated with onset. This implies that certain physical processes are modeled by neither MACH2 nor MACH3; the latter result indicates that such phenomena may not be inherently three-dimensional and plasma-related, as suggested by other efforts, but rather a consequence of electrode material processes that have not been incorporated into the current models.
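For context beyond the abstract: self-field electromagnetic thrust in MPD thrusters is classically estimated with the Maecker relation, which exhibits the J-squared scaling behind thrust predictions over current ranges like the one quoted above. A minimal Python sketch evaluating it follows; the electrode radii are placeholders, not the PBT geometry, and the 3/4 term is the commonly used end-correction.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def maecker_thrust(J, r_anode, r_cathode):
    """Classical self-field electromagnetic thrust estimate (Maecker relation),
    T = (mu0 * J^2 / 4*pi) * (ln(ra/rc) + 3/4). An order-of-magnitude model,
    not the MACH simulation described in the abstract."""
    return (MU0 * J**2 / (4 * np.pi)) * (np.log(r_anode / r_cathode) + 0.75)

# Hypothetical electrode radii -- placeholders, NOT the PBT dimensions.
for J in np.linspace(6.5e3, 23.25e3, 4):
    print(f"J = {J/1e3:5.2f} kA -> T = {maecker_thrust(J, 0.05, 0.01):6.2f} N")
```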
Contributors: Parma, Brian (Author) / Mikellides, Pavlos G (Thesis advisor) / Squires, Kyle (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
ABSTRACT There is a body of literature, albeit largely from the UK and Australia, that examines the ways in which class and gender influence life course, including educational attainment; however, much of this literature offers explanations and analyses for why individuals choose the life course they do. By assuming a cause-effect relationship between class and gender and life course, these studies perpetuate the idea that life can be predicted and controlled. Such an approach implies there is but one way of viewing, or an "official reading" of, the experience of class and gender. This silences other readings. This study goes beneath these "interpretations" and explores the phenomenon of identity and identity making in women who grew up working-class. Included is an investigation into how these women recognize and participate in their own identity making, identifying the interpretations they created and apply to their experience and the ways in which they juxtapose their educative experience. Using semi-structured interviews, I interviewed 21 women with working-class habitus. The strategy of inquiry that corresponded best to the goal of this project was heuristics, a variant of empathetic phenomenology. Heuristics distinguishes itself in two ways. One, it includes the life experience of the researcher while still showing how different people may participate in an event in their lives and how these individuals may give it radically different meanings. This has two effects: (1) the researcher recognizes that their own life experience affects their interpretations of these stories, and (2) it elucidates the researcher's own life as it relates to identity formation and educational experience. Two, heuristics encourages different ways of presenting findings through a variety of art forms meant to enhance the immediacy and impact of an experience rather than offer any explanation of it. As a result of the research, four themes essential to locating the experience of women who grew up working-class were discovered: making, paying attention, taking care, and up. These themes have pedagogic significance as women with working-class habitus navigate from this social space; the downstream effect is how and what these women take up as education.
Contributors: Decker, Shannon Irene (Author) / Blumenfeld-Jones, Donald (Thesis advisor) / Richards-Young, Gillian (Committee member) / Sandlin, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Current economic conditions necessitate the extension of service lives for a variety of aerospace systems. As a result, there is an increased need for structural health management (SHM) systems to increase safety, extend life, reduce maintenance costs, and minimize downtime, lowering life-cycle costs for these aging systems. The implementation of such a system requires a collaborative research effort in a variety of areas such as novel sensing techniques, robust algorithms for damage interrogation, high-fidelity probabilistic progressive damage models, and hybrid residual life estimation models. This dissertation focuses on the sensing and damage estimation aspects of this multidisciplinary topic for application in metallic and composite material systems. The primary means of interrogating a structure in this work is Lamb wave propagation, which works well for the thin structures used in aerospace applications. Piezoelectric transducers (PZTs) were selected for this application since they can be used as both sensors and actuators of guided waves. Placement of these transducers is an important issue in wave-based approaches, as Lamb waves are sensitive to changes in material properties, geometry, and boundary conditions which may obscure the presence of damage if they are not taken into account during sensor placement. The placement scheme proposed in this dissertation arranges piezoelectric transducers in a pitch-catch mode so the entire structure can be covered using a minimum number of sensors. The stress distribution of the structure is also considered, so PZTs are placed in regions where they do not fail before the host structure. In order to process the data from these transducers, advanced signal processing techniques are employed to detect the presence of damage in complex structures. To provide a better estimate of the damage for accurate life estimation, machine learning techniques are used to classify the type of damage in the structure. A data structure analysis approach is used to reduce the amount of data collected and increase computational efficiency. In the case of low-velocity impact damage, fiber Bragg grating (FBG) sensors were used with a nonlinear regression tool to reconstruct the loading at the impact site.
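To illustrate the pitch-catch interrogation idea, here is a minimal sketch of a correlation-based damage index between a baseline and a current Lamb-wave signal. This is a common guided-wave SHM metric, not the dissertation's specific algorithm, and the toneburst parameters and echo amplitude are assumptions.

```python
import numpy as np

def damage_index(baseline, current):
    """Correlation-based damage index: 0 for identical signals, approaching 1
    as they decorrelate. A simple stand-in for the advanced signal processing
    the abstract describes."""
    rho = np.corrcoef(baseline, current)[0, 1]
    return 1.0 - abs(rho)

# Synthetic pitch-catch example: a Hann-windowed toneburst, with the
# "damaged" signal carrying a small delayed echo from a scatterer.
fs, f0 = 10e6, 200e3                           # sample rate and center frequency (assumed)
t = np.arange(0, 200e-6, 1 / fs)
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)
damaged = burst + 0.15 * np.roll(burst, 300)   # delayed, attenuated echo
print(f"damage index = {damage_index(burst, damaged):.3f}")
```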
Contributors: Coelho, Clyde (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Wu, Tong (Committee member) / Das, Santanu (Committee member) / Rajadas, John (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Proponents of current educational reform initiatives emphasize strict accountability, the standardization of curriculum and pedagogy, and the use of standardized tests to measure student learning and indicate teacher, administrator, and school performance. As a result, professional learning communities have emerged as a platform for teachers to collaborate with one another in order to improve their teaching practices, increase student achievement, and promote continuous school improvement. The primary purpose of this inquiry was to investigate how teachers respond to working in professional learning communities in which the discourses privilege the practice of regularly comparing evidence of students' learning and results. A second purpose was to raise questions about how the current focus on standardization, assessment, and accountability impacts teachers, their interactions and relationships with one another, their teaching practices, and school culture. Participants in this qualitative, ethnographic inquiry included fifteen teachers working within Green School District (a pseudonym). Initial interviews were conducted with all teachers, and responses were categorized in a typology borrowed from Barone (2008). Data analysis involved attending to the behaviors and experiences of these teachers and the meanings these teachers associated with those behaviors and events. Teachers of GSD responded differently to the various layers of expectations and pressures inherent in the policies and practices in education today. The experiences of the teachers from GSD confirm the body of research that illuminates the challenges and complexity of working in collaborative forms of professional development situated within the present era of accountability. Looking through lenses privileged by critical theorists, this study examined important intended and unintended consequences inherent in the educational practices of standardization and accountability. The inquiry revealed that a focus on certain "results" and the demand to achieve short-term gains may impede the creation of successful, collaborative, professional learning communities.
Contributors: Benson, Karen (Author) / Barone, Thomas (Thesis advisor) / Berliner, David (Committee member) / Enz, Billie (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Process variations have become increasingly important for scaled technologies starting at 45nm. The increased variations are primarily due to random dopant fluctuations, line-edge roughness, and oxide thickness fluctuation. These variations greatly impact all aspects of circuit performance and pose a grand challenge to future robust IC design. To improve robustness, an efficient methodology is required that considers the effect of variations in the design flow. Analyzing the timing variability of complex circuits with HSPICE simulations is very time consuming. This thesis proposes an analytical model to predict variability in CMOS circuits that is quick and accurate. There are several analytical models to estimate nominal delay performance, but very little work has been done to accurately model delay variability. The proposed model is comprehensive and estimates nominal delay and variability as a function of transistor width, load capacitance, and transition time. First, models are developed for library gates, and the accuracy of the models is verified with HSPICE simulations for the 45nm and 32nm technology nodes. The difference between predicted and simulated σ/μ for the library gates is less than 1%. Next, the accuracy of the model for nominal delay is verified for larger circuits, including ISCAS'85 benchmark circuits. For the 45nm technology node, the model-predicted results are within 4% of HSPICE simulation results and take a small fraction of the simulation time. Delay variability is analyzed for various paths, and it is observed that non-critical paths can become critical because of Vth variation. Variability analysis of the shortest paths shows that the rate of hold violations increases enormously with increasing Vth variation.
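To make the σ/μ idea concrete, here is a minimal sketch comparing a first-order analytical estimate of delay variability under Vth variation against Monte Carlo sampling. The alpha-power-law delay model and all parameter values are illustrative assumptions, not the thesis's fitted library-gate model.

```python
import numpy as np

# Toy alpha-power-law delay model: d = k * CL * Vdd / (Vdd - Vth)**alpha.
# Illustrative stand-in for an analytical gate-delay model; all values assumed.
k, CL, Vdd, Vth0, alpha = 1.0, 5e-15, 1.0, 0.35, 1.3

delay = lambda vth: k * CL * Vdd / (Vdd - vth)**alpha

# First-order (analytical) sigma/mu from the linearized sensitivity to Vth:
# dd/dVth = alpha * d / (Vdd - Vth), so sigma_d/mu_d ~ alpha*sigma_Vth/(Vdd-Vth0).
sigma_vth = 0.03
ratio_analytic = alpha * sigma_vth / (Vdd - Vth0)

# ...compared against Monte Carlo over random Vth samples.
vth = np.random.normal(Vth0, sigma_vth, 100_000)
d = delay(vth)
print(f"sigma/mu: analytic {ratio_analytic:.4f}, Monte Carlo {d.std()/d.mean():.4f}")
```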
Contributors: Gummalla, Samatha (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data-mining or pattern-recognition criteria on manufacturing process or upstream test data, by means of support vector machines (SVMs), in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, to improve the reliability of the product delivered to the customer through screening. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
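As a sketch of how L-moment features can feed an SVM classifier of upstream test traces, the following computes the first four sample L-moments from probability-weighted moments and trains scikit-learn's SVC on synthetic "good" versus heavy-tailed traces. The feature choice and the synthetic data are this sketch's assumptions, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def sample_lmoments(x):
    """First four sample L-moments via probability-weighted moments (PWMs):
    l1 = b0, l2 = 2b1 - b0, l3 = 6b2 - 6b1 + b0, l4 = 20b3 - 30b2 + 12b1 - b0."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    return [b0, 2*b1 - b0, 6*b2 - 6*b1 + b0, 20*b3 - 30*b2 + 12*b1 - b0]

# Synthetic upstream test traces: "bad" units get a heavier-tailed response.
rng = np.random.default_rng(0)
X = [sample_lmoments(rng.normal(0, 1, 200)) for _ in range(50)] + \
    [sample_lmoments(rng.standard_t(3, 200)) for _ in range(50)]
y = [0] * 50 + [1] * 50
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

For a linear kernel, clf.coef_ exposes per-feature weights, which is the kind of input-parameter weighting the abstract describes as useful in failure analysis and root-cause work.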
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A distinct characteristic of ferroelectric materials is the existence of a reversible spontaneous polarization with the application of an electric field. The relevant properties of ferroelectric lithium niobate surfaces include a low density of defects and external screening of the bound polarization charge. These properties result in a unique surface electric field distribution, with a strong electric field in the vicinity of domain boundaries; away from the boundaries, the field decreases rapidly. In this work, ferroelectric lithium niobate (LN) is used as a template to direct the assembly of metallic nanostructures via photo-induced reduction and as a substrate for deposition of ZnO semiconducting thin films via plasma-enhanced atomic layer deposition (PE-ALD). To understand the mechanism of the photo-induced deposition process, the following effects were considered: the illumination photon energy and intensity, the polarization screening mechanism of the lithium niobate template, and the chemical concentration. Depending on the UV wavelength, variations of the Ag deposition rate and boundary nanowire formation are observed and attributed to the unique surface electric field distribution of the polarity-patterned template and the penetration depth of the UV light. Oxygen implantation is employed to transition the surface from external screening to internal screening, which results in depressed boundary nanowire formation. The ratio of the photon flux to the Ag ion flux at the surface determines the deposition pattern: domain boundary deposition is enhanced at a high photon/Ag-ion flux ratio and depressed at a low ratio. These results also support the photo-induced deposition model in which the process is limited by carrier generation and the cation reduction occurs at the surface. These findings provide a foundational understanding for employing ferroelectric templates for assembly and patterning of inorganic, organic, biological, and integrated structures. ZnO films deposited on positive and negative domain surfaces of LN demonstrate different I-V behavior at different temperatures. At room temperature, ZnO deposited on positive domains exhibits almost two orders of magnitude greater conductance than on negative domains. The conductance of ZnO on positive domains decreases with increasing temperature, while the conductance of ZnO on negative domains increases with increasing temperature. These observations are interpreted in terms of the downward or upward band bending at the ZnO/LN interface, which is induced by the ferroelectric polarization charge. A possible application of this effect in non-volatile memory devices is proposed for future work.
Contributors: Sun, Yang (Author) / Nemanich, Robert (Thesis advisor) / Bennett, Peter (Committee member) / Sukharev, Maxim (Committee member) / Ros, Robert (Committee member) / McCartney, Martha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A new arrangement of the Concerto for Two Horns in E-flat Major, Hob. VIId/6, attributed by some to Franz Joseph Haydn, is presented here. The arrangement reduces the orchestral portion to ten wind instruments, specifically a double wind quintet, to facilitate performance of the work. A full score and a complete set of parts are included. In support of this new arrangement, a discussion of the early treatment of horns in pairs and the subsequent development of the double horn concerto in the eighteenth century provides historical context for the Concerto for Two Horns in E-flat major. A summary of the controversy concerning the identity of the composer of this concerto is followed by a description of the content and structure of each of its three movements. Some comments on the procedures of the arrangement complete the background information.
Contributors: Yeh, Guan-Lin (Author) / Ericson, John (Thesis advisor) / Holbrook, Amy (Committee member) / Micklich, Albie (Committee member) / Pilafian, J. Samuel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
One of the challenges in future semiconductor device design is the excessive rise of power dissipation and device temperature. With the introduction of new geometrically confined device structures like SOI, FinFETs, and nanowires, and the continuous incorporation of new materials with poor thermal conductivities into the device active region, the device thermal problem is expected to become more challenging in coming years. This work examines the degradation in the ON-current due to self-heating effects in 10 nm channel-length silicon nanowire transistors. As part of this dissertation, a 3D electrothermal device simulator is developed that self-consistently solves the electron Boltzmann transport equation with the 3D energy balance equations for both the acoustic and the optical phonons. This device simulator predicts temperature variations and other physical and electrical parameters across the device for different bias and boundary conditions. The simulation results show insignificant current degradation from nanowire self-heating because of the pronounced velocity overshoot effect. In addition, this work explores the role of various placements of the source and drain contacts on the magnitude of the self-heating effect in nanowire transistors. It also investigates the simultaneous influence of self-heating and random charge effects on the magnitude of the ON-current for both positively and negatively charged single charges. This research suggests that self-heating affects the ON-current in two ways: (1) by lowering the barrier at the source end of the channel, thus allowing more carriers to go through, and (2) via the screening effect of the Coulomb potential. To examine the effect of the temperature-dependent thermal conductivity of thin silicon films in nanowire transistors, Selberherr's thermal conductivity model is used in the device simulator. The simulation results show larger current degradation because of self-heating due to the decreased thermal conductivity. Crystallographic-direction-dependent thermal conductivity is also included in the device simulations. Larger degradation is observed in the current along the [100] direction than along the [110] direction, in agreement with the values for the thermal conductivity tensor provided by Zlatan Aksamija.
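For reference, Selberherr's bulk-silicon thermal conductivity model mentioned above takes the form κ(T) = 1/(a + bT + cT²). A minimal sketch with the commonly quoted bulk-silicon coefficients follows; the dissertation's exact fit and any thin-film corrections may differ.

```python
def kappa_si(T):
    """Selberherr-type temperature-dependent thermal conductivity of bulk
    silicon, kappa(T) = 1/(a + b*T + c*T^2) in W/(cm*K). Coefficients are
    the commonly quoted bulk values (an assumption of this sketch)."""
    a, b, c = 0.03, 1.56e-3, 1.65e-6   # cm*K/W, cm/W, cm/(W*K)
    return 1.0 / (a + b * T + c * T**2)

# Conductivity drops sharply as the lattice heats, compounding self-heating.
for T in (300, 400, 500, 600):
    print(f"T = {T} K -> kappa = {kappa_si(T):.3f} W/(cm K)")
```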
Contributors: Hossain, Arif (Author) / Vasileska, Dragica (Thesis advisor) / Ahmed, Shaikh (Committee member) / Bakkaloglu, Bertan (Committee member) / Goodnick, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis pursues a method to deregulate the electric distribution system and provide support to distributed renewable generation. A locational marginal price is used to determine prices across a distribution network in real-time. The real-time pricing may provide benefits such as a reduced electricity bill, decreased peak demand, and lower emissions. This distribution locational marginal price (D-LMP) determines the cost of electricity at each node in the electrical network. The D-LMP is comprised of the cost of energy, the cost of losses, and a renewable energy premium. The renewable premium is an adjustable function to compensate 'green' distributed generation. A D-LMP is derived and formulated from the PJM model, along with several alternative formulations. The logistics and infrastructure of an implementation are briefly discussed. This study also takes advantage of the D-LMP real-time pricing to implement distributed storage technology. A storage schedule optimization is developed using linear programming. Day-ahead LMPs and historical load data are used to form a predictive optimization. A test bed is created to represent a practical electric distribution system. Historical load, solar, and LMP data are used in the test bed to create a realistic environment. A power flow study and tabulation of the D-LMPs were conducted for twelve test cases. The test cases included various penetrations of solar photovoltaics (PV), system networking, and the inclusion of storage technology. Tables of the D-LMPs and network voltages are presented in this work. The final costs are summed and the basic economics are examined. The use of a D-LMP can lower costs across a system when advanced technologies are used. Storage improves system costs, decreases losses, improves the system load factor, and bolsters voltage. Solar energy provides many of these same attributes at lower penetrations, but high penetrations have a detrimental effect on the system. System networking also increases these positive effects. The D-LMP has a positive impact on residential customer cost, while greatly increasing costs for the industrial sector. The D-LMP appears to have many positive impacts on the distribution system, but proper cost allocation needs further development.
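A minimal sketch of the kind of linear-programming storage schedule described above, using scipy.optimize.linprog; the day-ahead prices, battery limits, and efficiency are placeholders, not values from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical day-ahead prices ($/MWh) for 24 hours -- placeholder data.
price = 30 + 15 * np.sin(np.arange(24) * 2 * np.pi / 24 - np.pi / 2)

T, p_max, e_max, eta = 24, 1.0, 4.0, 0.9       # MW limit, MWh capacity, efficiency
# Decision variables: x = [charge_0..23, discharge_0..23], all >= 0.
c = np.concatenate([price, -price])            # cost = sum over t of p*(charge - discharge)
bounds = [(0, p_max)] * (2 * T)

# State of charge must stay within [0, e_max] at every hour:
# 0 <= cumsum(eta*charge - discharge/eta) <= e_max.
L = np.tril(np.ones((T, T)))                   # lower-triangular cumulative-sum operator
A_ub = np.block([[ L * eta, -L / eta],         #  soc_t <= e_max
                 [-L * eta,  L / eta]])        # -soc_t <= 0
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
sched = res.x[:T] - res.x[T:]                  # net charge (+) / discharge (-) by hour
print("optimal cost: $%.2f" % res.fun, "| peak discharge hour:", np.argmin(sched))
```

The LP naturally charges at the cheapest hours and discharges at the most expensive ones, which is the peak-shaving behavior the abstract credits with improving system load factor.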
Contributors: Kiefer, Brian Daniel (Author) / Heydt, Gerald T (Thesis advisor) / Shunk, Dan (Committee member) / Hedman, Kory (Committee member) / Arizona State University (Publisher)
Created: 2011