Matching Items (713)

Description
The goal of this research project is to develop a DOF (degree of freedom) algebra for entity clusters to support tolerance specification, validation, and tolerance automation. This representation is required to capture the relations among geometric entities, metric constraints, and tolerance specifications. The project is part of an on-going effort to create a bi-level model of GD&T (Geometric Dimensioning and Tolerancing). This thesis presents the systematic derivation of the degrees of freedom of entity clusters corresponding to tolerance classes; the clusters can be datum reference frames (DRFs) or targets. A binary vector representation of degrees of freedom and operations for combining them are proposed, and an algebraic method is developed using this DOF representation. The ASME Y14.5.1 companion to the GD&T standard gives an exhaustive tabulation of active and invariant degrees of freedom (DOF) for datum reference frames (DRFs), and the algebra is validated by checking it against all cases in the Y14.5.1 tabulation. The algebra allows the derivation of general rules for tolerance specification and validation. A computer tool is implemented to support GD&T specification and validation; it outputs the geometric and tolerance information as a CTF (Constraint-Tolerance-Feature) file, which can be used for tolerance stack analysis.
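The binary DOF vectors and combining operations are only named in the abstract above; as a rough illustration, the sketch below encodes the six rigid-body DOF as a 0/1 vector and combines feature constraints with an element-wise OR. The encodings and function names are assumptions made for illustration, not the thesis's actual tables or algebra.

```python
# Minimal sketch of a binary DOF vector (Tx, Ty, Tz, Rx, Ry, Rz) and a
# combining operation for a datum reference frame (DRF).
# The encodings below are illustrative assumptions, not the thesis's tables.

DOF_LABELS = ("Tx", "Ty", "Tz", "Rx", "Ry", "Rz")

# 1 = DOF constrained by the feature, 0 = still free (invariant).
# Hypothetical single-feature constraint vectors for features at standard
# orientations; real values depend on feature type and orientation.
PLANE_XY = (0, 0, 1, 1, 1, 0)   # planar datum normal to Z
AXIS_Z   = (1, 1, 0, 1, 1, 0)   # cylindrical datum along Z
POINT    = (1, 1, 1, 0, 0, 0)   # spherical datum (point)

def combine(*features):
    """Combine feature constraint vectors into the DRF's constrained DOF.

    A DOF is constrained by the DRF if any feature in the stack constrains
    it, i.e. an element-wise OR of the binary vectors.
    """
    return tuple(int(any(f[i] for f in features)) for i in range(6))

def invariant(drf):
    """DOF left unconstrained (invariant) by the DRF."""
    return [lbl for lbl, bit in zip(DOF_LABELS, drf) if bit == 0]

if __name__ == "__main__":
    drf = combine(PLANE_XY, AXIS_Z)
    print("constrained:", dict(zip(DOF_LABELS, drf)))
    print("invariant  :", invariant(drf))   # rotation about Z remains free
```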
Contributors: Shen, Yadong (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Huebner, Kenneth (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
A vortex tube is a device of simple structure with no moving parts that can be used to separate a compressed gas into a hot stream and a cold stream. Many studies have been carried out to find the mechanisms of the energy separation in the vortex tube, and the recent rapid development of computational fluid dynamics provides a powerful tool to investigate the complex flow inside it. However, various issues remain in these numerical simulations, such as choosing the most suitable turbulence model, as well as a lack of systematic comparative analysis; the LES model is rarely used for vortex tube simulation in the existing literature, and the influence of parameters on the performance of the vortex tube has scarcely been studied. This study aims to determine the influence of various parameters on the performance of the vortex tube, the best geometric values for the vortex tube, and a realizable way to reach the required cold outflow rate of 40 kg/s. First, an original 3-D vortex tube simulation model is set up; by comparing experimental results reported in the literature with our simulation results, the most suitable model for simulating the vortex tube is identified. Second, simulations are performed to optimize parameters that deliver a set of desired outputs, such as cold stream pressure, temperature, and flow rate. The use of the cold air flow for petroleum engineering applications is also discussed.
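For the energy-separation bookkeeping mentioned above, a minimal sketch of the steady adiabatic energy balance for a vortex tube follows. The cold fraction, temperatures, and constant specific heat are assumed values used only to show the arithmetic; they are not results from this study.

```python
# Steady, adiabatic energy balance for a vortex tube with constant cp:
#   m_in * T_in = m_cold * T_cold + m_hot * T_hot
# All numbers are illustrative assumptions, not results from this study.

cp = 1005.0          # J/(kg*K), air, assumed constant
m_cold = 40.0        # kg/s, target cold stream flow rate from the study goal
cold_fraction = 0.6  # assumed m_cold / m_in
T_in = 300.0         # K, assumed inlet temperature
T_cold = 270.0       # K, assumed cold-stream temperature

m_in = m_cold / cold_fraction
m_hot = m_in - m_cold

# Hot-stream temperature implied by the energy balance
T_hot = (m_in * T_in - m_cold * T_cold) / m_hot

print(f"inlet flow  : {m_in:.1f} kg/s")
print(f"hot stream  : {m_hot:.1f} kg/s at {T_hot:.1f} K")
print(f"cooling duty: {m_cold * cp * (T_in - T_cold) / 1e6:.2f} MW")
```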
Contributors: Cang, Ruijin (Author) / Chen, Kangping (Thesis advisor) / Huang, Hueiping (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
This thesis concerns the role of geometric imperfections in assemblies in which the location of a target part depends on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to the geometric imperfections of the other parts when they are stacked up in parallel paths; in this section parts are considered to be rigid (non-deformable). By understanding how much of the variation from the supporting parts contributes to variations at the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model the tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit perfectly together and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by the design tolerances that control manufacturing variations. These manufacturing variations, if uncontrolled, may produce stresses high enough when the parts are assembled that premature failure occurs before the design life. The last part of the thesis relates the limits on the largest von Mises stress in one part to the functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
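The thesis uses the T-Map® model, which is not reproduced here. As a much simpler stand-in, the sketch below shows a one-dimensional rigid-body stack-up from two parallel supports to a target feature, with assumed positions and tolerance values, to illustrate how support variations map to limits of location at the target.

```python
# 1-D illustration (not the T-Map(R) model used in the thesis): a rigid target
# part located by two parallel supports; its displacement at a functional
# feature follows from rigid-body interpolation between the supports.
# All positions and tolerance values below are assumed for illustration.
import math

x1, x2 = 0.0, 500.0      # mm, support (bearing) locations
xt = 650.0               # mm, functional target feature (overhung past x2)
t1, t2 = 0.05, 0.05      # mm, allowed +/- variation at each support

r = (xt - x1) / (x2 - x1)          # interpolation factor for the target
c1, c2 = 1.0 - r, r                # sensitivities of target to each support

worst_case = abs(c1) * t1 + abs(c2) * t2      # arithmetic (worst-case) stack
statistical = math.hypot(c1 * t1, c2 * t2)    # root-sum-square stack

print(f"sensitivities    : {c1:+.2f}, {c2:+.2f}")
print(f"worst-case       : +/- {worst_case:.3f} mm at target")
print(f"RSS (statistical): +/- {statistical:.3f} mm at target")
```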
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P. (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
The signing of the No Child Left Behind Act in 2001 created a need for Title 1 principals to conceptualize and operationalize parent engagement. This study examines how three urban principals in Arizona implemented the mandates of the Act as they pertain to parent involvement. The purpose of this qualitative case study is to examine how principals conceptualize and operationalize parent involvement as they navigate barriers and laws particular to the state of Arizona, and to understand issues surrounding parent involvement in Title 1 schools in Arizona. The principals' beliefs and interview dialogue concerning parent engagement provided an understanding of how urban principals in Arizona implement the aspects of the No Child Left Behind Act that deal with parent involvement. The study concluded that parents have community cultural wealth that contributes to the success of the students of engaged parents, and that culturally responsive leadership assists principals with engaging parents in their schools. It also concludes that a gap exists between how parents and principals perceive and construct parent engagement and what is prescribed in the No Child Left Behind Act.
Contributors: Conley, Loraine (Author) / Brayboy, Bryan (Thesis advisor) / Mccarty, Teresa (Committee member) / Scott, Kimberly (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
An unrelenting need exists to improve literacy instruction in secondary schools in the United States. Reading scores, especially among minority and language-minority students as well as the economically disadvantaged, have not shown significant gains in recent years. The problem of low-level reading skills in the secondary grades is complicated to address, however, as many secondary teachers find themselves ill-equipped to deal with the challenges they face. Improving student achievement by integrating reading comprehension strategies into the freshman English curriculum was the ultimate goal of this innovation. A total of 15 freshman English language arts teachers and 30 freshman students participated in this 14-week action research study, which involved teaching explicit pre-, during-, and post-reading strategies during daily lessons at a large, urban high school in the Southwestern United States. Data were collected using a reading diagnostic test, focus group interviews with teachers, individual interviews with teachers and students, and teacher observations. Findings from the data suggest that professional development designed to infuse comprehension strategies through collaborative inquiry among English language arts teachers helped students perform better on reading diagnostic measures. Furthermore, the findings suggest that this method of professional development raised teachers' self-efficacy regarding literacy instruction, which, in turn, improved students' efficacy and performance as readers.
Contributors: Williams, Jeffrey (Author) / Roe, Mary (Thesis advisor) / Weber, Catherine (Committee member) / Allen, Althe (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Although high-performance, light-weight composites are increasingly used in applications ranging from aircraft and rotorcraft to weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking, and delamination. An important element in achieving reliable composite systems is a strong capability of assessing and inspecting physical damage of critical structural components, and installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in the research and development of sensor-integrated, reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage, which limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression, and the deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used; the effect of variation in this arrangement within the RUC has been studied, and the results indicate that it influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure, and the model data were verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study damage states such as fiber-matrix debonding in composite structures with surface-bonded piezoelectric sensors.
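MAC/GMC's generalized method of cells is far more detailed than anything shown here. As a first-order illustration of how constituent properties and fiber volume fraction feed the macro-scale effective properties discussed above, the sketch below uses the simple rule of mixtures with assumed fiber and matrix moduli; it is explicitly not the GMC method.

```python
# Effective unidirectional-lamina properties from constituent data using the
# rule of mixtures (axial) and inverse rule of mixtures (transverse).
# This is NOT the generalized method of cells used in MAC/GMC; it is only a
# first-order illustration. Constituent values below are assumed.

def lamina_estimates(Ef, Em, Vf):
    """Return (E1, E2) for fiber modulus Ef, matrix modulus Em, and fiber
    volume fraction Vf, assuming perfectly bonded, isotropic constituents."""
    Vm = 1.0 - Vf
    E1 = Ef * Vf + Em * Vm           # axial: rule of mixtures
    E2 = 1.0 / (Vf / Ef + Vm / Em)   # transverse: inverse rule of mixtures
    return E1, E2

if __name__ == "__main__":
    Ef, Em = 230e9, 3.5e9            # Pa, assumed carbon fiber / epoxy
    for Vf in (0.50, 0.55, 0.60):    # packing variation changes Vf
        E1, E2 = lamina_estimates(Ef, Em, Vf)
        print(f"Vf={Vf:.2f}  E1={E1/1e9:6.1f} GPa  E2={E2/1e9:5.2f} GPa")
```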
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
The focus of this investigation is on the optimum placement of a limited number of dampers, fewer than the number of blades, on a bladed disk to induce the smallest amplitude of blade response. The optimization process considers the presence of random mistuning, i.e., small involuntary variations in blade stiffness properties resulting, say, from manufacturing variability. Designed variations of these properties, known as intentional mistuning, are considered as an option to reduce blade response, and the pattern of two blade types (A and B blades) is then part of the optimization in addition to the location of dampers on the disk. First, this study focuses on the formulation and validation of dedicated algorithms for the selection of the damper locations and the intentional mistuning pattern. Failure of one or several of the dampers could lead to a sharp rise in blade response, and this issue is addressed by including, in the optimization, the possibility of damper failure to yield a fail-safe solution. The high efficiency and accuracy of the optimization algorithms are assessed in comparison with computationally very demanding exhaustive-search results. Second, the developed optimization algorithms are applied to nonlinear dampers (underplatform friction dampers), as well as to blade-blade dampers, both linear and nonlinear. Further, the optimization of blade-only and blade-blade linear dampers is extended to include uncertainty or variability in the damper properties induced by manufacturing or wear; it is found that the optimum achieved without considering such uncertainty is robust with respect to it. Finally, the potential benefits of using two types of friction dampers differing in their masses (A and B types) on a bladed disk are considered. Both the A/B pattern and the damper masses are optimized to obtain the largest benefit compared to using identical dampers of optimized masses on every blade. Four situations are considered: tuned disks; disks with random mistuning of blade stiffness; and disks with random mistuning of both blade stiffness and damper normal forces, with and without damper variability induced by manufacturing and wear. In all cases, the benefit of intentional mistuning of friction dampers is small, of the order of a few percent.
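The dedicated optimization algorithms of the study are not reproduced here. The sketch below only illustrates the exhaustive-search baseline mentioned above, with a deliberately crude surrogate response model and assumed mistuning statistics; every quantity in it is a placeholder, not data from the dissertation.

```python
# Toy exhaustive-search baseline (not the dedicated algorithms of the study):
# place k dampers on an N-blade disk to minimize the worst blade amplitude,
# averaged over random stiffness-mistuning samples. The response model below
# is a deliberately crude stand-in for a real bladed-disk analysis.
import itertools
import random

N_BLADES, N_DAMPERS, N_SAMPLES = 12, 4, 200
random.seed(0)

def max_amplitude(damper_set, mistuning):
    """Crude surrogate: a blade's damping benefit decays with distance (in
    blades) to the nearest damper; mistuning scales the undamped response."""
    worst = 0.0
    for i in range(N_BLADES):
        dist = min(min(abs(i - j), N_BLADES - abs(i - j)) for j in damper_set)
        zeta = 0.01 + 0.03 / (1.0 + dist)          # assumed damping model
        amp = (1.0 + mistuning[i]) / zeta
        worst = max(worst, amp)
    return worst

samples = [[random.gauss(0.0, 0.02) for _ in range(N_BLADES)]
           for _ in range(N_SAMPLES)]

best = min(
    itertools.combinations(range(N_BLADES), N_DAMPERS),
    key=lambda ds: sum(max_amplitude(ds, m) for m in samples) / N_SAMPLES,
)
print("best damper locations:", best)
```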
Contributors: Murthy, Raghavendra Narasimha (Author) / Mignolet, Marc P. (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Lentz, Jeff (Committee member) / Chattopadhyay, Aditi (Committee member) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Tesla turbo-machinery offers a robust, easily manufactured, extremely versatile prime mover with inherent capabilities that make it perhaps the best, if not the only, solution for certain niche applications. The goal of this thesis is not to optimize the performance of the Tesla turbine, but to compare its performance with various working fluids. Theoretical and experimental analyses of a turbine-generator assembly utilizing compressed air, saturated steam, and water as the working fluids were performed and are presented in this work. A brief background and explanation of the technology is provided along with potential applications. A theoretical thermodynamic analysis is outlined, resulting in turbine and rotor efficiencies, power outputs, and Reynolds numbers calculated for the turbine for various combinations of working fluids and inlet nozzles. The results indicate the turbine is capable of achieving a turbine efficiency of 31.17 ± 3.61% and an estimated rotor efficiency of 95 ± 9.32%. These efficiencies are promising considering the numerous losses still present in the current design. Calculation of the Reynolds number provided some capability to determine the flow behavior and how that behavior impacts the performance and efficiency of the Tesla turbine. It was determined that turbulence in the flow is essential to achieving high power outputs and high efficiency. Although the efficiency, after peaking, begins to taper off slightly as the flow becomes increasingly turbulent, the power output maintains a steady linear increase.
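As a rough illustration of the Reynolds-number and efficiency bookkeeping described above, the sketch below computes both quantities for an assumed compressed-air operating point. None of the numbers are the thesis's measurements, and the gap-based Reynolds-number definition is an assumption for illustration.

```python
# Back-of-envelope Reynolds number and isentropic turbine efficiency for a
# Tesla turbine running on compressed air. All operating values are assumed
# for illustration; they are not the measurements reported in the thesis.
import math

# Assumed air properties and operating point
rho = 1.2            # kg/m^3
mu = 1.8e-5          # Pa*s
gamma = 1.4
cp = 1005.0          # J/(kg*K)

m_dot = 0.01         # kg/s, mass flow through the rotor
gap = 0.5e-3         # m, spacing between adjacent disks
r_outer = 0.05       # m, disk outer radius
T_in, p_in = 300.0, 300e3    # K, Pa at nozzle inlet
p_out = 100e3                # Pa at exhaust
P_shaft = 150.0              # W, measured shaft power (assumed)

# Reynolds number based on the inter-disk gap and mean radial velocity
area = 2.0 * math.pi * r_outer * gap      # flow area entering one gap
v = m_dot / (rho * area)
Re = rho * v * gap / mu

# Isentropic (ideal) power available from the expansion, and efficiency
T_out_s = T_in * (p_out / p_in) ** ((gamma - 1.0) / gamma)
P_ideal = m_dot * cp * (T_in - T_out_s)
print(f"Re = {Re:.0f},  efficiency = {P_shaft / P_ideal:.1%}")
```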
Contributors: Peshlakai, Aaron (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Wang, Liping (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Tolerances on line profiles are used to control cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process that improves the conversion of measured points on a part into a geometric entity that can be compared directly with tolerance specifications. The focus of this work is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line-profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line-profiles that are formed from line- and circular-arc segments.
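A minimal sketch of the pseudo-inverse least-squares idea follows, applied to a single circular-arc segment with simulated measured points. The thesis's method for complete line-profiles built from line- and circular-arc segments is not reproduced here; the data and linearization below are assumptions for illustration.

```python
# Least-squares fit of measured CMM points to a circular-arc segment using the
# pseudo-inverse of a rectangular matrix. This is an assumed, single-segment
# example of the idea; the thesis handles full line-profiles built from line-
# and circular-arc segments.
import numpy as np

# Simulated measured points near a circle of radius 10 centered at (3, -2)
rng = np.random.default_rng(1)
theta = np.linspace(0.2, 1.8, 25)
x = 3.0 + 10.0 * np.cos(theta) + rng.normal(0.0, 0.01, theta.size)
y = -2.0 + 10.0 * np.sin(theta) + rng.normal(0.0, 0.01, theta.size)

# Linearized circle equation:  x^2 + y^2 = 2*a*x + 2*b*y + c,
# with center (a, b) and c = r^2 - a^2 - b^2.
A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])   # rectangular matrix
rhs = x**2 + y**2
a, b, c = np.linalg.pinv(A) @ rhs                          # pseudo-inverse solve
r = np.sqrt(c + a**2 + b**2)

residuals = np.hypot(x - a, y - b) - r     # radial deviation of each point
print(f"center = ({a:.3f}, {b:.3f}), radius = {r:.3f}")
print(f"max |deviation| = {np.abs(residuals).max():.4f}")
```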
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Shock loading is a complex phenomenon that can lead to failure mechanisms such as strain localization, void nucleation and growth, and eventually spall fracture. Studying the incipient stages of spall damage is of paramount importance to accurately determine the initiation sites in the material microstructure where damage will nucleate and grow, and to formulate continuum models that account for the variability of the damage process due to microstructural heterogeneity. The length scale of damage with respect to that of the surrounding microstructure has proven to be a key aspect in determining sites of failure initiation. Correlations have been found between the damage sites and the surrounding microstructure to determine the preferred sites of spall damage, since it tends to localize at and around regions of intrinsic defects such as grain boundaries and triple points. However, a considerable amount of work still has to be done to determine the physics driving the damage at these intrinsically weak sites in the microstructure. The main focus of this research is to understand the physical mechanisms behind damage localization at these preferred sites. A crystal plasticity constitutive model is implemented with different damage criteria to study the effects of stress concentration and strain localization at the grain boundaries, and a cohesive zone modeling technique is used to include the intrinsic strength of the grain boundaries in the simulations. The constitutive model is verified using single-element tests, calibrated using single-crystal impact experiments, and validated using bicrystal and multicrystal impact experiments. The results indicate that strain localization is the predominant driving force for damage initiation and evolution. The microstructural effects on these damage sites are studied to attribute the extent of damage to microstructural features such as grain orientation, misorientation, Taylor factor, and the grain boundary planes. The finite element simulations show good correlation with the experimental results and can be used as a preliminary step in developing accurate probabilistic models for damage nucleation.
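Grain orientation and misorientation are among the microstructural features cited above. As a small illustration, the sketch below computes the misorientation angle between two assumed grain orientations given as Bunge Euler angles, omitting the crystal-symmetry reduction that a real texture analysis would apply; the orientations themselves are placeholders.

```python
# Misorientation angle between two grain orientations, each built from a
# Bunge (Z-X-Z) Euler-angle triple. The orientations below are assumed, and
# the reduction over cubic crystal symmetry is omitted for brevity, so this is
# only an illustration of the quantity correlated with damage sites above.
import numpy as np

def rotation_from_euler(phi1, Phi, phi2):
    """Bunge ZXZ Euler angles (radians) -> orientation (rotation) matrix."""
    c1, s1 = np.cos(phi1), np.sin(phi1)
    c, s = np.cos(Phi), np.sin(Phi)
    c2, s2 = np.cos(phi2), np.sin(phi2)
    Rz1 = np.array([[c1, s1, 0], [-s1, c1, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    Rz2 = np.array([[c2, s2, 0], [-s2, c2, 0], [0, 0, 1]])
    return Rz2 @ Rx @ Rz1

def misorientation_angle(Ra, Rb):
    """Rotation angle of the misorientation Ra * Rb^T, in degrees."""
    dR = Ra @ Rb.T
    cos_theta = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

grain_a = rotation_from_euler(np.radians(10), np.radians(30), np.radians(45))
grain_b = rotation_from_euler(np.radians(80), np.radians(55), np.radians(5))
print(f"misorientation = {misorientation_angle(grain_a, grain_b):.1f} deg")
```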
Contributors: Krishnan, Kapil (Author) / Peralta, Pedro (Thesis advisor) / Mignolet, Marc (Committee member) / Sieradzki, Karl (Committee member) / Jiang, Hanqing (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created: 2013