Matching Items (1,263)

Description

A vortex tube is a device of simple structure with no moving parts that can be used to separate a compressed gas into a hot stream and a cold stream. Many studies have been carried out to identify the mechanisms of energy separation in the vortex tube. The recent rapid development of computational fluid dynamics provides a powerful tool for investigating the complex flow in the vortex tube. However, various issues remain in these numerical simulations, such as the choice of the most suitable turbulence model and the lack of systematic comparative analysis. LES models are rarely used for vortex tube simulation in the present literature, and the influence of parameters on the performance of the vortex tube has scarcely been studied. This study aims to determine the influence of various parameters on the performance of the vortex tube, the best geometric values for the vortex tube, and a practical method to reach the required cold outflow rate of 40 kg/s. First, an original 3-D simulation model of the vortex tube is set up. By comparing experimental results reported in the literature with our simulation results, the most suitable model for simulating the vortex tube is identified. Second, we perform simulations to optimize parameters that deliver a set of desired outputs, such as cold stream pressure, temperature, and flow rate. We also discuss the use of the cold air flow for petroleum engineering applications.
Contributors: Cang, Ruijin (Author) / Chen, Kangping (Thesis advisor) / Huang, Hueiping (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013
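
The required cold outflow rate mentioned in the entry above can be related to the hot- and cold-stream temperatures through a simple steady-flow energy balance. The sketch below is an illustrative calculation only: the inlet flow rate, inlet temperature, and cold-exit temperature are assumed values, not results from the thesis.

```python
# Illustrative steady-flow energy balance for a vortex tube.  With negligible
# heat loss and roughly constant cp, the inlet enthalpy splits between the
# cold and hot streams:
#   m_in * T_in = m_c * T_c + m_h * T_h,  with  m_in = m_c + m_h.

def hot_stream_temperature(T_in, T_cold, cold_fraction):
    """Hot-stream temperature implied by the energy balance.

    cold_fraction = m_c / m_in (mass fraction routed to the cold exit).
    """
    hot_fraction = 1.0 - cold_fraction
    return (T_in - cold_fraction * T_cold) / hot_fraction

# Example: the required 40 kg/s of cold flow out of an assumed 100 kg/s inlet
# gives a cold fraction of 0.4; inlet at 300 K, cold exit at 270 K (assumed).
m_cold_required = 40.0     # kg/s (target stated in the study)
m_in = 100.0               # kg/s (assumed inlet flow, illustrative only)
cf = m_cold_required / m_in
T_hot = hot_stream_temperature(T_in=300.0, T_cold=270.0, cold_fraction=cf)
print(f"cold fraction = {cf:.2f}, implied hot-stream temperature = {T_hot:.1f} K")
```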
Description

Life Cycle Assessment (LCA) quantifies the environmental impacts of products across raw material extraction, processing, manufacturing, distribution, use, and final disposal. The findings of an LCA can be used to improve industry practices, to aid in product development, and to guide public policy. Unfortunately, existing approaches to LCA are unreliable in the case of emerging technologies, where data are unavailable and rapid technological advances outstrip environmental knowledge. Previous studies have demonstrated several shortcomings of existing practices, including the masking of environmental impacts, the difficulty of selecting appropriate weight sets for multi-stakeholder problems, and difficulties in the exploration of variability and uncertainty. In particular, there is an acute need for decision-driven interpretation methods that can guide decision makers toward balanced, environmentally sound decisions in instances of high uncertainty. We propose the first major methodological innovation in LCA since its early establishment as the analytical perspective of choice in problems of environmental management. We propose to couple stochastic multi-criteria decision analytic tools with existing approaches to inventory building and characterization to create a robust approach to comparative technology assessment in the context of high uncertainty, rapid technological change, and evolving stakeholder values. Namely, this study introduces a novel method known as Stochastic Multi-attribute Analysis for Life Cycle Impact Assessment (SMAA-LCIA) that uses internal normalization by means of outranking and exploration of feasible weight spaces.
Contributors: Prado, Valentina (Author) / Seager, Thomas P (Thesis advisor) / Landis, Amy E. (Committee member) / Chester, Mikhail (Committee member) / White, Philip (Committee member) / Arizona State University (Publisher)
Created: 2013
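
A minimal sketch of the stochastic weight-space exploration behind an SMAA-style assessment like the one described above: sample weights uniformly over the feasible simplex and tally how often each alternative ranks first. The impact scores and alternatives are invented placeholders, and the pairwise outranking step used in SMAA-LCIA is simplified here to a rank-one acceptability tally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized impact scores (rows: alternatives, cols: impact
# categories); lower is better.  These numbers are illustrative only.
scores = np.array([
    [0.2, 0.7, 0.5],   # alternative A
    [0.6, 0.3, 0.4],   # alternative B
    [0.4, 0.5, 0.6],   # alternative C
])
names = ["A", "B", "C"]
n_alt, n_crit = scores.shape

# Explore the feasible weight space: sample weights uniformly on the simplex.
n_samples = 10_000
weights = rng.dirichlet(np.ones(n_crit), size=n_samples)

# For each sampled weight vector, count how often each alternative attains
# the best (lowest) weighted score.
weighted = weights @ scores.T                 # shape (n_samples, n_alt)
winners = weighted.argmin(axis=1)
acceptability = np.bincount(winners, minlength=n_alt) / n_samples

for name, acc in zip(names, acceptability):
    print(f"{name}: rank-1 acceptability = {acc:.2f}")
```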
Description

This thesis concerns the role of geometric imperfections in assemblies in which the location of a target part depends on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to geometric imperfections of other parts when stacked up in parallel paths. In this first part, parts are considered to be rigid (non-deformable). By understanding how much of the variation from the supporting parts contributes to variations of the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model the tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit perfectly together and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by design tolerances that control manufacturing variations. These manufacturing variations, if uncontrolled, may produce stresses high enough when the parts are assembled that premature failure occurs before the design life. The last part of the thesis relates the limits on the largest von Mises stress in one part to functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P (Committee member) / Arizona State University (Publisher)
Created: 2012
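
The parallel-path stack-up described in the entry above can be illustrated with a much simpler 1-D surrogate (not the T-Map model): deviations at two supports map to the target feature through lever-rule sensitivities, which can then be combined worst-case or statistically. The geometry and tolerance values below are assumed for illustration only.

```python
import math

# Simplified 1-D illustration: a target feature at axial position x_t on a
# rigid rotor supported at bearings located at x_1 and x_2.  A vertical
# deviation d1 at support 1 and d2 at support 2 shifts the target feature by
# linear interpolation (the "lever rule"):
#   d_target = d1*(x2 - xt)/(x2 - x1) + d2*(xt - x1)/(x2 - x1)

def target_sensitivities(x1, x2, xt):
    c1 = (x2 - xt) / (x2 - x1)
    c2 = (xt - x1) / (x2 - x1)
    return c1, c2

# Assumed geometry (mm): supports at 0 and 500, blades at 800 (overhung).
x1, x2, xt = 0.0, 500.0, 800.0
t1, t2 = 0.05, 0.05            # tolerance-limited deviations at each support

c1, c2 = target_sensitivities(x1, x2, xt)
worst_case = abs(c1) * t1 + abs(c2) * t2
rss = math.sqrt((c1 * t1) ** 2 + (c2 * t2) ** 2)
print(f"sensitivities: {c1:.2f}, {c2:.2f}")
print(f"worst-case shift = {worst_case:.3f} mm, RSS shift = {rss:.3f} mm")
```

For an overhung target feature the sensitivities exceed one in magnitude, so the supporting parts consume more of the tolerance budget than their own tolerances suggest.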
Description

This study investigates how well prominent behavioral theories from social psychology explain green purchasing behavior (GPB). I assess three prominent theories in terms of their suitability for GPB research, their attractiveness to GPB empiricists, and the strength of their empirical evidence when applied to GPB. First, a qualitative assessment of the Theory of Planned Behavior (TPB), Norm Activation Theory (NAT), and Value-Belief-Norm Theory (VBN) is conducted to evaluate a) how well the phenomenon and concepts in each theory match the characteristics of pro-environmental behavior and b) how well the assumptions made in each theory match common assumptions made in purchasing theory. Second, a quantitative assessment of these three theories is conducted in which r² values and methodological parameters (e.g., sample size) are collected from a sample of 21 empirical studies on GPB to evaluate the accuracy and generalizability of the empirical evidence. In the qualitative assessment, the results show each theory has its advantages and disadvantages. The results also provide a theoretically grounded roadmap for modifying each theory to be more suitable for GPB research. In the quantitative assessment, the TPB outperforms the other two theories in every aspect taken into consideration. It proves to 1) create the most accurate models, 2) be supported by the most generalizable empirical evidence, and 3) be the most attractive theory to empiricists. Although the TPB establishes itself as the best foundational theory for an empiricist to start from, it is clear that a more comprehensive model is needed to achieve consistent results and improve our understanding of GPB. NAT and the Theory of Interpersonal Behavior (TIB) offer pathways to extend the TPB. The TIB seems particularly apt for this endeavor, while VBN does not appear to have much to offer. Overall, the TPB has already proven to hold relatively high predictive value. But with the state of ecosystem services continuing to decline on a global scale, it is important for models of GPB to become more accurate and reliable. Better models have the capacity to help marketing professionals, product developers, and policy makers develop strategies for encouraging consumers to buy green products.
Contributors: Redd, Thomas Christopher (Author) / Dooley, Kevin (Thesis advisor) / Basile, George (Committee member) / Darnall, Nicole (Committee member) / Arizona State University (Publisher)
Created: 2012
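
The quantitative assessment described above pools r² values across empirical studies. A toy sketch of one way to summarize such evidence, weighting each study's r² by its sample size, is shown below; the studies and numbers are invented placeholders, not the data analyzed in the thesis.

```python
# Toy illustration of the quantitative assessment step: pool r-squared values
# reported by empirical GPB studies for each theory, weighted by sample size.
studies = [
    {"theory": "TPB", "r2": 0.45, "n": 320},   # placeholder entries
    {"theory": "TPB", "r2": 0.38, "n": 150},
    {"theory": "NAT", "r2": 0.22, "n": 210},
    {"theory": "VBN", "r2": 0.19, "n": 180},
]

def weighted_mean_r2(records, theory):
    rows = [r for r in records if r["theory"] == theory]
    total_n = sum(r["n"] for r in rows)
    return sum(r["r2"] * r["n"] for r in rows) / total_n

for theory in ("TPB", "NAT", "VBN"):
    print(theory, round(weighted_mean_r2(studies, theory), 3))
```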
Description

Urban water systems face sustainability challenges including water quality, leaks, over-use, energy consumption, and long-term supply concerns. Resiliency challenges include the capacity to respond to drought, managing pipe deterioration, responding to natural disasters, and preventing terrorism. One strategy to enhance sustainability and resiliency is the development and adoption of smart water grids. A smart water grid incorporates networked monitoring and control devices into its structure, which provide diverse, real-time information about the system as well as enhanced control. Data provide input for modeling and analysis, which informs control decisions, allowing for improvements in sustainability and resiliency. While smart water grids hold much potential, there are also potential tradeoffs and adoption challenges. More publicly available cost-benefit analyses are needed, as well as system-level research and application, rather than the current focus on individual technologies. This thesis seeks to fill one of these gaps by analyzing the costs and environmental benefits of smart irrigation controllers. Smart irrigation controllers can save water by adapting watering schedules to climate and soil conditions. The potential benefit of smart irrigation controllers is particularly high in southwestern U.S. states, where the arid climate makes water scarcer and increases the watering needs of landscapes. To inform the technology development process, a design for environment (DfE) method was developed, which overlays economic and environmental performance parameters under different operating conditions. This method is applied to characterize design goals for controller price and water savings that smart irrigation controllers must meet to yield life cycle carbon dioxide reductions and economic savings in southwestern U.S. states, accounting for regional variability in electricity and water prices and carbon overhead. Results from applying the model to smart irrigation controllers in the Southwest suggest that some areas are significantly easier to design for than others.
Contributors: Mutchek, Michele (Author) / Allenby, Braden (Thesis advisor) / Williams, Eric (Committee member) / Westerhoff, Paul (Committee member) / Arizona State University (Publisher)
Created: 2012
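
A minimal sketch of the design-for-environment breakeven logic described above: given an assumed regional water price and an assumed carbon overhead of delivered water, compute the annual water savings a controller must achieve to pay back its purchase price and embodied carbon over its service life. All parameter values are illustrative assumptions, not results from the thesis.

```python
# Hypothetical DfE breakeven sketch for a smart irrigation controller.

def breakeven_water_savings(price_usd, embodied_kg_co2, life_years,
                            water_price_usd_per_m3, water_kg_co2_per_m3):
    """Return (m3/yr for cost payback, m3/yr for carbon payback)."""
    cost_breakeven = price_usd / (life_years * water_price_usd_per_m3)
    carbon_breakeven = embodied_kg_co2 / (life_years * water_kg_co2_per_m3)
    return cost_breakeven, carbon_breakeven

# Assumed example values for a southwestern U.S. utility district.
cost_be, carbon_be = breakeven_water_savings(
    price_usd=250.0,               # controller purchase price (assumed)
    embodied_kg_co2=30.0,          # embodied carbon of the device (assumed)
    life_years=10.0,
    water_price_usd_per_m3=1.2,    # delivered water price (assumed)
    water_kg_co2_per_m3=0.5,       # carbon overhead of delivered water (assumed)
)
print(f"cost breakeven:   {cost_be:.1f} m^3/yr")
print(f"carbon breakeven: {carbon_be:.1f} m^3/yr")
```

A design that saves less water per year than the larger of the two breakeven values would fail to deliver both the economic and the carbon benefit in that region, which is the kind of regional screening the method above performs.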
Description

Although high-performance, lightweight composites are increasingly being used in applications ranging from aircraft and rotorcraft to weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking, and delamination. An important element in achieving reliable composite systems is a strong capability for assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated, reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage; this limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used. The effect of variation in this arrangement within the RUC has been studied, and results indicate this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure. The model data were verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states, such as fiber-matrix debonding, in composite structures with surface-bonded piezoelectric sensors.
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012
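
As a simplified stand-in for the micromechanics step described above (not the generalized method of cells used in MAC/GMC), the sketch below uses the rule of mixtures to show how effective ply moduli depend on constituent properties and fiber volume fraction. The thesis studies variation in the packing arrangement within the RUC, which this simple model cannot capture; the property values are generic illustrative numbers.

```python
# Rule-of-mixtures estimate of effective unidirectional-ply moduli from
# fiber/matrix properties and fiber volume fraction (simplified stand-in
# for a full micromechanics analysis).

def rule_of_mixtures(E_f, E_m, v_f):
    """Return (E1, E2): Voigt (longitudinal) and Reuss (transverse) estimates."""
    E1 = v_f * E_f + (1.0 - v_f) * E_m           # fibers and matrix in parallel
    E2 = 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)   # fibers and matrix in series
    return E1, E2

E_fiber, E_matrix = 230e9, 3.5e9     # Pa (typical carbon fiber / epoxy order)
for v_f in (0.50, 0.55, 0.60):       # illustrative range of fiber content
    E1, E2 = rule_of_mixtures(E_fiber, E_matrix, v_f)
    print(f"v_f={v_f:.2f}:  E1={E1/1e9:6.1f} GPa,  E2={E2/1e9:5.2f} GPa")
```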
Description

The focus of this investigation is on the optimum placement of a limited number of dampers, fewer than the number of blades, on a bladed disk to induce the smallest amplitude of blade response. The optimization process considers the presence of random mistuning, i.e., small involuntary variations in blade stiffness properties resulting, say, from manufacturing variability. Designed variations of these properties, known as intentional mistuning, are considered as an option to reduce blade response, and the pattern of two blade types (A and B blades) is then part of the optimization in addition to the location of dampers on the disk. First, this study focuses on the formulation and validation of dedicated algorithms for the selection of the damper locations and the intentional mistuning pattern. Failure of one or several of the dampers could lead to a sharp rise in blade response, and this issue is addressed by including, in the optimization, the possibility of damper failure to yield a fail-safe solution. The high efficiency and accuracy of the optimization algorithms are assessed in comparison with computationally very demanding exhaustive-search results. Second, the developed optimization algorithms are applied to nonlinear dampers (underplatform friction dampers), as well as to blade-blade dampers, both linear and nonlinear. Further, the optimization of blade-only and blade-blade linear dampers is extended to include uncertainty or variability in the damper properties induced by manufacturing or wear. It is found that the optimum achieved without considering such uncertainty is robust with respect to it. Finally, the potential benefits of using two different types of friction dampers differing in their masses (A and B types) on a bladed disk are considered. Both the A/B pattern and the damper masses are optimized to obtain the largest benefit compared to using identical dampers of optimized masses on every blade. Four situations are considered: tuned disks, disks with random mistuning of blade stiffness, and disks with random mistuning of both blade stiffness and damper normal forces, with and without damper variability induced by manufacturing and wear. In all cases, the benefit of intentional mistuning of friction dampers is small, on the order of a few percent.
Contributors: Murthy, Raghavendra Narasimha (Author) / Mignolet, Marc P (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Lentz, Jeff (Committee member) / Chattopadhyay, Aditi (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012
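
The exhaustive-search benchmark mentioned in the entry above can be sketched as a combinatorial enumeration of damper placements on an N-blade disk. The response measure used below (the largest circumferential gap between dampers) is a toy surrogate, not the bladed-disk response model used in the thesis.

```python
from itertools import combinations

# Toy exhaustive search: place a limited number of dampers on an N-blade disk
# and keep the placement that minimizes a surrogate response measure.

def largest_gap(placement, n_blades):
    """Largest circumferential gap (in blades) between consecutive dampers."""
    p = sorted(placement)
    gaps = [(p[(i + 1) % len(p)] - p[i]) % n_blades for i in range(len(p))]
    return max(gaps)

def exhaustive_search(n_blades, n_dampers):
    best = None
    for placement in combinations(range(n_blades), n_dampers):
        score = largest_gap(placement, n_blades)   # placeholder "amplitude"
        if best is None or score < best[0]:
            best = (score, placement)
    return best

score, placement = exhaustive_search(n_blades=24, n_dampers=6)
print("best placement:", placement, "largest gap:", score)
```

Even this toy case enumerates roughly 135,000 placements; with realistic response models the cost of the full enumeration is what motivates the dedicated selection algorithms described above.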
Description

Tesla turbo-machinery offers a robust, easily manufactured, extremely versatile prime mover with inherent capabilities that make it perhaps the best, if not the only, solution for certain niche applications. The goal of this thesis is not to optimize the performance of the Tesla turbine, but to compare its performance with various working fluids. Theoretical and experimental analyses of a turbine-generator assembly utilizing compressed air, saturated steam, and water as the working fluids were performed and are presented in this work. A brief background and explanation of the technology is provided along with potential applications. A theoretical thermodynamic analysis is outlined, resulting in turbine and rotor efficiencies, power outputs, and Reynolds numbers calculated for the turbine for various combinations of working fluids and inlet nozzles. The results indicate the turbine is capable of achieving a turbine efficiency of 31.17 ± 3.61% and an estimated rotor efficiency of 95 ± 9.32%. These efficiencies are promising considering the numerous losses still present in the current design. Calculation of the Reynolds number provided some capability to determine the flow behavior and how that behavior impacts the performance and efficiency of the Tesla turbine. It was determined that turbulence in the flow is essential to achieving high power outputs and high efficiency. Although the efficiency, after peaking, begins to taper off slightly as the flow becomes increasingly turbulent, the power output maintains a steady linear increase.
Contributors: Peshlakai, Aaron (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Wang, Liping (Committee member) / Arizona State University (Publisher)
Created: 2012
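
Two of the quantities reported in the entry above, a turbine efficiency and a Reynolds number, can be illustrated with back-of-the-envelope formulas: an isentropic efficiency from shaft power and ideal enthalpy drop, and a Reynolds number based on the inter-disk gap. The input values below are assumptions for illustration, not the thesis measurements.

```python
# Back-of-the-envelope turbine efficiency and gap Reynolds number.

def isentropic_efficiency(P_shaft, m_dot, cp, T_in, p_in, p_out, gamma):
    """Shaft power divided by the ideal (isentropic) enthalpy drop rate."""
    T_out_s = T_in * (p_out / p_in) ** ((gamma - 1.0) / gamma)
    ideal_power = m_dot * cp * (T_in - T_out_s)
    return P_shaft / ideal_power

def gap_reynolds_number(rho, velocity, gap, mu):
    """Reynolds number with the disk spacing as the characteristic length."""
    return rho * velocity * gap / mu

# Assumed compressed-air operating point (illustrative only).
eta = isentropic_efficiency(P_shaft=120.0, m_dot=0.01, cp=1005.0,
                            T_in=300.0, p_in=400e3, p_out=101e3, gamma=1.4)
Re = gap_reynolds_number(rho=1.8, velocity=150.0, gap=0.5e-3, mu=1.8e-5)
print(f"isentropic efficiency = {eta:.2%}, gap Reynolds number = {Re:,.0f}")
```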
Description

Tolerances on line profiles are used to control cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process by improving the method used to convert measured points on a part to a geometric entity that can be compared directly with tolerance specifications. The focus of this thesis is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line-profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line-profiles that are formed from line segments and circular arc-segments.
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
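
A minimal sketch of the core computation described above: least-squares fitting measured points with the pseudo-inverse of a rectangular matrix. Here the fitted element is a circular arc-segment written in algebraic form, which makes the problem linear; the measured points are synthetic, and the exact profile formulation in the thesis may differ.

```python
import numpy as np

# Fit a circular arc x^2 + y^2 + D x + E y + F = 0 (linear in D, E, F) to
# noisy measured points using the pseudo-inverse of the rectangular design
# matrix.  The points below are synthetic stand-ins for CMM measurements.
rng = np.random.default_rng(1)
theta = np.linspace(0.3, 1.2, 15)
x = 10.0 + 4.0 * np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = -2.0 + 4.0 * np.sin(theta) + rng.normal(0, 0.01, theta.size)

A = np.column_stack([x, y, np.ones_like(x)])      # rectangular design matrix
b = -(x**2 + y**2)
D, E, F = np.linalg.pinv(A) @ b                   # least squares via pseudo-inverse

xc, yc = -D / 2.0, -E / 2.0                       # recover center and radius
r = np.sqrt(xc**2 + yc**2 - F)
print(f"fitted center = ({xc:.3f}, {yc:.3f}), radius = {r:.3f}")
```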
Description

Shock loading is a complex phenomenon that can lead to failure mechanisms such as strain localization, void nucleation and growth, and eventually spall fracture. Studying the incipient stages of spall damage is of paramount importance to accurately determine the initiation sites in the material microstructure where damage will nucleate and grow, and to formulate continuum models that account for the variability of the damage process due to microstructural heterogeneity. The length scale of damage with respect to that of the surrounding microstructure has proven to be a key aspect in determining sites of failure initiation. Correlations have been found between the damage sites and the surrounding microstructure to determine the preferred sites of spall damage, since damage tends to localize at and around regions of intrinsic defects such as grain boundaries and triple points. However, a considerable amount of work still has to be done in this regard to determine the physics driving the damage at these intrinsically weak sites in the microstructure. The main focus of this research is to understand the physical mechanisms behind damage localization at these preferred sites. A crystal plasticity constitutive model is implemented with different damage criteria to study the effects of stress concentration and strain localization at the grain boundaries. A cohesive zone modeling technique is used to include the intrinsic strength of the grain boundaries in the simulations. The constitutive model is verified using single-element tests, calibrated using single-crystal impact experiments, and validated using bicrystal and multicrystal impact experiments. The results indicate that strain localization is the predominant driving force for damage initiation and evolution. The microstructural effects on these damage sites are studied to attribute the extent of damage to microstructural features such as grain orientation, misorientation, Taylor factor, and the grain boundary planes. The finite element simulations show good correlation with the experimental results and can be used as a preliminary step in developing accurate probabilistic models for damage nucleation.
Contributors: Krishnan, Kapil (Author) / Peralta, Pedro (Thesis advisor) / Mignolet, Marc (Committee member) / Sieradzki, Karl (Committee member) / Jiang, Hanqing (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created: 2013
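
The cohesive zone technique mentioned in the entry above represents grain-boundary strength with a traction-separation law. The sketch below implements a generic bilinear law; the bilinear shape and parameter values are illustrative assumptions, not the calibrated law from the thesis.

```python
import numpy as np

# Generic bilinear cohesive traction-separation law: linear elastic up to
# (delta_0, sigma_max), then linear softening to zero traction at delta_f
# (complete decohesion of the grain boundary).

def bilinear_traction(delta, sigma_max, delta_0, delta_f):
    delta = np.asarray(delta, dtype=float)
    rising = sigma_max * delta / delta_0
    softening = sigma_max * (delta_f - delta) / (delta_f - delta_0)
    t = np.where(delta <= delta_0, rising, softening)
    return np.clip(t, 0.0, None)      # no negative traction past full failure

# Illustrative parameters: 1.5 GPa peak traction, full separation at 2 um.
delta = np.linspace(0.0, 2.5e-6, 6)
t = bilinear_traction(delta, sigma_max=1.5e9, delta_0=0.2e-6, delta_f=2.0e-6)
for d, ti in zip(delta, t):
    print(f"opening {d*1e6:4.2f} um -> traction {ti/1e9:5.2f} GPa")
```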