Description
Dimensional metrology is the branch of science that determines length, angular, and geometric relationships within manufactured parts and compares them with required tolerances. The measurements can be made using either manual methods or sampled coordinate metrology (coordinate measuring machines, or CMMs). Manual measurement methods have long been practiced and are well accepted in industry, but they are too slow for present-day manufacturing. CMMs, on the other hand, are relatively fast, but their methods are not yet well established. The major problem that needs to be addressed is the type of feature fitting algorithm used for evaluating tolerances. In a CMM, applying different feature fitting algorithms to the same feature yields different values, and no standard specifies which feature fitting algorithm should be used for a given tolerance. Our research is focused on identifying the feature fitting algorithm best suited to each type of tolerance. Each algorithm is chosen to best represent the interpretation of geometric control as defined by the ASME Y14.5 standard and the manual methods used to measure that tolerance type. Using these algorithms, normative procedures for verifying tolerances with CMMs are proposed. The proposed normative procedures are implemented as software, and the procedures are then verified by comparing the software results with those of manual measurements.

To aid this research, a library of feature fitting algorithms is developed in parallel. The library consists of least-squares, Chebyshev, and one-sided fits applied to lines, planes, circles, and cylinders. The proposed normative procedures are useful for evaluating tolerances with CMMs: the evaluated results are in accordance with the standard, and the ambiguity in choosing among algorithms is eliminated. The software developed can be used in quality control for inspection purposes.
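As a minimal illustration of one fit in such a library (a sketch, not the thesis software; the function name and synthetic probe points are assumptions), a least-squares plane can be fit to sampled CMM probe points via the singular value decomposition:

```python
import numpy as np

def fit_plane_least_squares(points):
    """Least-squares plane fit: returns (centroid, unit normal).

    The plane minimizes the sum of squared orthogonal distances
    from the sampled points (e.g. CMM probe hits).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The normal is the right singular vector belonging to the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Synthetic probe points scattered near the plane z = 0.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(50, 2))
z = 0.001 * rng.standard_normal(50)          # small probing noise
pts = np.column_stack([xy, z])
c, n = fit_plane_least_squares(pts)
print(abs(n[2]))   # close to 1: recovered normal is nearly the z-axis
```

A Chebyshev (minimax) or one-sided fit would instead minimize the worst-case deviation, which is why the different fits yield different tolerance values for the same point set.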
Contributors: Vemulapalli, Prabath (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Takahashi, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Geology and its tangential studies, collectively known and referred to in this thesis as geosciences, have been paramount to the transformation and advancement of society, fundamentally changing the way we view, interact with, and live in the surrounding natural and built environment. It is important to recognize the value and importance of this interdisciplinary scientific field while reconciling its ties to imperial and colonizing extractive systems, which have led to harmful and invasive endeavors. This intersection among geosciences, (environmental) justice studies, and decolonization is intended to promote inclusive pedagogical models through just and equitable methodologies and frameworks, so as to prevent further injustices and to promote recognition and healing of old wounds. By utilizing decolonial frameworks and highlighting the voices of peoples from colonized and exploited landscapes, this annotated syllabus tackles the issues described above while proposing solutions involving place-based education and the recentering of land within geoscience pedagogical models.

Contributors: Reed, Cameron E (Author) / Richter, Jennifer (Thesis director) / Semken, Steven (Committee member) / School of Earth and Space Exploration (Contributor) / School of Sustainability (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This thesis uses an aircraft aerodynamic model and propulsion data, representing a configuration similar to the Airbus A320, to perform trade studies of the weight and configuration effects of “out-of-trim” flight during takeoff, cruise, initial approach, and balked landing. It is found that flying an aircraft slightly above the angle of attack or pitch angle required for trimmed, stabilized flight causes the aircraft to lose speed rapidly. This effect is most noticeable for lighter aircraft and when one engine is rendered inoperative. In the event of an engine failure, if the pilot does not pitch the nose of the aircraft down quickly, speed losses are significant and can potentially lead to stalling the aircraft. Even when the risk of stalling is small, the implications for climb performance, obstacle clearance, and acceleration distances can still become problematic if the aircraft is not flown properly. When the aircraft is slightly above the trimmed angle of attack, the response closely follows the classical phugoid, in which the aircraft trades speed and altitude in an oscillatory manner. However, when the pitch angle is slightly above the trimmed condition, the aircraft does not exhibit this phugoid pattern; instead it simply loses speed until it reaches a new stabilized trajectory, with no oscillation between speed and altitude. The appropriate pilot response therefore differs between the two events, which may cause confusion in the cockpit.
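The fixed-angle-of-attack case described above can be sketched with a simple point-mass longitudinal model; the numbers below are illustrative stand-ins, not the A320-like dataset used in the thesis. Holding the lift and drag coefficients constant (fixed angle of attack) and starting slightly off the trim speed excites the classical phugoid exchange of speed and altitude:

```python
import numpy as np

g, m, S, rho = 9.81, 60000.0, 122.6, 1.225    # illustrative airliner-class values
V_trim = 130.0                                 # trimmed airspeed, m/s
q_trim = 0.5 * rho * V_trim ** 2
CL = m * g / (q_trim * S)                      # lift coefficient that trims weight
CD = 0.035                                     # assumed constant drag coefficient
T = q_trim * S * CD                            # thrust that trims drag at V_trim

V, gamma, h = 125.0, 0.0, 3000.0               # start 5 m/s below the trim speed
dt, speeds = 0.1, []
for _ in range(3000):                          # 300 s of flight, forward Euler
    q = 0.5 * rho * V ** 2
    V += dt * ((T - q * S * CD) / m - g * np.sin(gamma))
    gamma += dt * ((q * S * CL / m - g * np.cos(gamma)) / V)
    h += dt * V * np.sin(gamma)
    speeds.append(V)

# Speed oscillates about the trim speed rather than diverging or settling.
print(min(speeds), max(speeds))
```

Holding pitch angle fixed instead of angle of attack removes this speed-altitude exchange, which is the distinction the thesis draws between the two off-trim cases.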
Contributors: Delisle, Mathew Robert (Author) / Takahashi, Timothy (Thesis advisor) / White, Daniel (Committee member) / Niemczyk, Mary (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This study identifies the influence that leading-edge shape has on the aerodynamic characteristics of a wing using surface, far-field, and near-field analysis. It examines whether a wake survey is an appropriate means of measuring profile drag and induced drag. The paper unveils the differences between sharp leading-edge and blunt leading-edge wings using pressure loops, chordwise pressure distributions, span load plots, and wake integral computations. The analysis was performed using Computational Fluid Dynamics (CFD), a vortex lattice potential flow code (VORLAX), and a few wind-tunnel runs to acquire data for comparison. This study found that sharp leading-edge wings have less leading-edge suction and higher drag than blunt leading-edge wings.

Blunt leading-edge wings have less drag because the surface normal vector over the front section of the airfoil develops forces that oppose skin friction. The shape of the leading edge, in conjunction with the effect of viscosity, slightly alters the span load: both the magnitude of the lift and its transverse distribution. Another goal of this study is to verify the veracity of wake survey theory; the two different leading-edge shapes reveal a shortcoming of McLean’s equation, which is applicable only to blunt leading-edge wings.
Contributors: Ou, Che Wei (Author) / Takahashi, Timothy (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In previous work, the effects of power extraction for onboard electrical equipment and flight control systems were studied to determine which turbine shaft (i.e., the high-pressure or the low-pressure shaft) is best suited for power extraction. This thesis looks into an alternative option: a three-spool design with a high-pressure turbine, a low-pressure turbine, and a turbine dedicated to driving the fan. One of the three turbines is designed as a vaneless counter-rotating turbine. The off-design performance of this new design is compared to that of the traditional two-spool design to determine whether the additional spool is a practical alternative to current designs for high shaft-horsepower extraction requirements. The analysis in this thesis shows that a three-spool engine with a vaneless counter-rotating stage has worse performance characteristics than traditional two-spool designs for UAV systems.
Contributors: Burgett, Luke Michael (Author) / Takahashi, Timothy (Thesis advisor) / Dahm, Werner (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
For as long as humans have been working, they have been looking for ways to get that work done better, faster, and more efficiently. Over the course of history, mankind has created innumerable spectacular inventions, all with the goal of making the economy and daily life more efficient. Today, innovations and technological advancements are happening at a pace never seen before, and technologies like automation and artificial intelligence are poised to once again fundamentally alter the way people live and work in society. Whether society is prepared or not, robots are coming to replace human labor, and they are coming fast. In many areas, artificial intelligence has already disrupted entire industries of the economy. As people continue to make advancements in artificial intelligence, more industries will be disrupted, more jobs will be lost, and entirely new industries and professions will be created in their wake. The future of the economy and society will be determined by how humans adapt to the rapid innovations taking place every single day. In this paper I examine the extent to which automation will take the place of human labor in the future, project the potential effect of automation on future unemployment, and consider what individuals and society will need to do to keep pace with rapidly advancing technology. I also look at the history of automation in the economy: for centuries humans have advanced technology to make their everyday work more productive and efficient, and for centuries this has forced humans to adapt to the new technology through training and education. The thesis additionally examines the ways in which the U.S. education system will have to adapt to meet the demands of the advancing economy, and how job retraining programs must be modernized to prepare workers for the changing economy.
Contributors: Cunningham, Reed P. (Author) / DeSerpa, Allan (Thesis director) / Haglin, Brett (Committee member) / School of International Letters and Cultures (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The purpose of this study was to observe the effectiveness of the inhibitor phenylalanyl arginine β-naphthylamide dihydrochloride and Tween 20 when combined with an antibiotic against Escherichia coli. As antibiotic resistance becomes more and more prevalent, it is necessary to think outside the box and do more than simply increase the dosage of currently prescribed antibiotics. This study attempted to combat two forms of antibiotic resistance. The first is the AcrAB efflux pump, which is able to pump antibiotics out of the cell. The second is the biofilms that E. coli can form. By using an inhibitor, the pump should be unable to rid the cell of an antibiotic; using Tween, in turn, allows biofilm formation to be disrupted or the biofilm to be dissolved. By combining these two chemicals with an antibiotic that the efflux pump is known to expel, low concentrations of each chemical should produce an effect on the bacteria equal to or greater than that of any one chemical at higher concentrations. To test this hypothesis, a 96-well plate BEC screen test was performed. A range of antibiotics was used at various concentrations, with varying concentrations of both Tween and the inhibitor, to find a starting point. Following this, erythromycin and ciprofloxacin were picked as the best candidates, and the optimum ranges of the antibiotic, Tween, and inhibitor were established. Finally, all three chemicals were combined to observe the effects they had together, as opposed to individually or in pairs. Several conclusions were drawn from the results. First, the inhibitor did in fact increase the effectiveness of the antibiotic, as less antibiotic was needed when the inhibitor was present. Second, Tween showed an ability to prevent recovery in the MBEC reading, indicating that it can disrupt or dissolve biofilms. However, Tween also noticeably decreased the effectiveness of the overall treatment. This negative interaction could not be compensated for by the inhibitor, and so the hypothesis was rejected: combining the three chemicals led to a less effective treatment method.
Contributors: Petrovich Flynn, Chandler James (Author) / Misra, Rajeev (Thesis director) / Bean, Heather (Committee member) / Perkins, Kim (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The purpose of this paper is to understand how companies find high-potential employees and whether they are leaving top talent behind in their approach. Eugene Burke stated in 2014 that 55% of employees labeled as high-potential will turn over and move companies. Burke (2014) also states that the average high-potential employee's tenure is five years. The Corporate Leadership Council says that, on average, 27% of a company's development budget is spent on its high-potential program (CEB 2017). For a midsize company, the high-potential development budget is almost a million dollars for only a handful of employees, only to see half of the investment walk out the door to another company. Furthermore, the Corporate Leadership Council reported that a study done in 2005 revealed that 50% of high-potential employees had significant problems in their jobs (Kotlyar and Karkowsky 2014). Are time and resources being given to the wrong employees while the right employees are overlooked? This paper examines how companies traditionally select high-potential employees and where companies may be omitting employees who would be better suited for the program. This paper proposes that how a company discovers its top talent will correlate with the number of turnovers or struggles that a high-potential employee has on the job. Future research directions and practical considerations are also presented.
Contributors: Harrison, Carrie (Author) / Mizzi, Philip (Thesis director) / Ruediger, Stefan (Committee member) / Department of Management and Entrepreneurship (Contributor) / School of Sustainability (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This study estimates the capitalization effect of golf courses in Maricopa County using the hedonic pricing method. It draws upon a dataset of 574,989 residential transactions from 2000 to 2006 to examine how the aesthetic, non-golf benefits of golf courses capitalize across a gradient of proximity measures. The measures of amenity value extend beyond home adjacency and include homes within a range of discrete walkability buffers around golf courses. The models also distinguish between public and private golf courses as a proxy for the level of golf course access perceived by non-golfers. Unobserved spatial characteristics of the neighborhoods around golf courses are controlled for by increasing the extent of spatial fixed effects from city, to census tract, and finally to 2,000-meter golf course ‘neighborhoods.’ The estimation results support two primary conclusions. First, golf course proximity is highly valued for adjacent homes and homes up to 50 meters away from a course, still evident but minimal between 50 and 150 meters, and insignificant at all other distance ranges. Second, private golf courses do not command higher proximity premia than public courses, with the exception of homes within 25 to 50 meters of a course, indicating that the non-golf benefits of courses capitalize similarly regardless of course type. The results of this study motivate further investigation into golf course features that signal access or add value to homes within the range of capitalization, particularly for near-adjacent homes between 50 and 150 meters previously thought not to capitalize.
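The buffer-dummy structure of such a hedonic regression can be sketched as follows; the data are synthetic and the coefficient values, buffer cut-offs, and variable names are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
sqft = rng.uniform(100.0, 400.0, n)                 # living area, synthetic
dist = rng.uniform(0.0, 500.0, n)                   # meters to nearest golf course
adjacent = (dist <= 50.0).astype(float)             # proximity buffer dummies
near = ((dist > 50.0) & (dist <= 150.0)).astype(float)

# Synthetic log sale prices with "true" proximity premia baked in.
log_price = (11.0 + 0.004 * sqft + 0.08 * adjacent + 0.03 * near
             + 0.05 * rng.standard_normal(n))

# Hedonic OLS: regress log price on attributes plus the buffer dummies.
X = np.column_stack([np.ones(n), sqft, adjacent, near])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(beta)   # the estimated premia appear in the last two coefficients
```

In the study itself, the attribute list is far richer and spatial fixed effects (city, census tract, course neighborhood) absorb unobserved neighborhood characteristics; this sketch only shows how discrete proximity buffers enter the regression.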
Contributors: Joiner, Emily (Author) / Abbott, Joshua (Thesis director) / Smith, Kerry (Committee member) / Economics Program in CLAS (Contributor) / School of Sustainability (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
An in-depth analysis of the effects of vortex generators on the boundary layer separation that occurs when an internal flow passes through a diffuser is presented. By understanding the effects vortex generators have on the boundary layer, they can be utilized to improve the performance and efficiency of diffusers and other internal flow applications. An experiment was constructed to acquire physical data that could assess the change in performance of the diffusers once vortex generators were applied. The experiment consisted of pushing air through rectangular diffusers with half angles of 10, 20, and 30 degrees. A velocity distribution model was created for each diffuser without vortex generators before modeling the velocity distribution with vortex generators applied. This allowed the two results to be directly compared and the improvements to be quantified, by using the velocity distribution model to find the partial mass flow rate through the outer portion of the diffuser's cross-sectional area. The analysis concluded that the vortex generators noticeably increased the performance of the diffusers. This was best seen in the 30-degree diffuser: initially, the diffuser experienced airflow velocities near zero toward the edges, so only 0.18% of the mass flow occurred in the outer one-fourth of the cross-sectional area. With the application of vortex generators, this percentage increased to 5.7%. The 20-degree diffuser improved from 2.5% to 7.9% of the total mass flow rate in the outer portion, and the 10-degree diffuser improved from 11.9% to 19.2%. These results demonstrate an increase in performance from the addition of vortex generators, while leaving open further investigation into the design and configuration of the vortex generators themselves.
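The outer-portion mass-flow comparison described above can be sketched numerically; the velocity profiles below are invented stand-ins for the measured distributions (a half-channel from centerline y = 0 to wall y = 1, constant density), not the experiment's data:

```python
import numpy as np

def outer_flow_fraction(y, u, outer_frac=0.25):
    """Fraction of mass flow carried by the outer band nearest the wall.

    Assumes a uniform grid and constant density, so mass flow per unit
    depth is proportional to a rectangle-rule sum of the velocity.
    """
    dy = y[1] - y[0]
    total = u.sum() * dy
    mask = y >= y[0] + (1.0 - outer_frac) * (y[-1] - y[0])
    return u[mask].sum() * dy / total

y = np.linspace(0.0, 1.0, 201)           # centerline (0) to wall (1)
u_separated = 1.0 - 0.95 * y ** 2        # near-stagnant flow at the wall
u_with_vgs = 1.0 - 0.60 * y ** 2         # fuller profile: energized near-wall flow

print(outer_flow_fraction(y, u_separated))
print(outer_flow_fraction(y, u_with_vgs))   # larger fraction, matching the trend
```

The fuller profile carries a larger share of the mass flow in the outer quarter of the section, which is the same metric the experiment uses to quantify the vortex generators' benefit.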
Contributors: Sanchez, Zachary Daniel (Author) / Takahashi, Timothy (Thesis director) / Herrmann, Marcus (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05