Matching Items (17)
156187-Thumbnail Image.png
Description
This thesis focuses on studying the interaction between floating objects and an air-water flow system driven by gravity. The system consists of an inclined channel in which a gravity-driven two-phase flow carries a series of floating solid objects downstream. Numerical simulation of such a system requires the solution of not only the basic Navier-Stokes equations but also the dynamic interaction between the solid bodies and the two-phase flow. In particular, this requires embedding a dynamic mesh within the two-phase flow. A computational fluid dynamics solver, ANSYS Fluent, is used to solve this problem. Although the individual components for these simulations are already available in the solver, few examples exist in which all are combined. A series of simulations is performed by varying the key parameters, including the density of the floating objects and the mass flow rate at the inlet. The motion of the floating objects in those simulations is analyzed to determine the stability of the coupled flow-solid system. The simulations are performed successfully over a broad range of parametric values. The numerical framework developed in this study can potentially be used in applications, especially in assisting the design of similar gravity-driven systems for transportation in manufacturing processes. In a small number of the simulations, two kinds of numerical instability are observed. One is characterized by a sudden vertical acceleration of the floating object due to a strong imbalance of the forces acting on the body, which occurs when the mass flow of water is weak. The other is characterized by a sudden vertical movement of the air-water interface, which occurs when two floating objects come too close together. These new types of numerical instability deserve future study and clarification. This study is performed only for a 2-D system. Extension of the numerical framework to a full 3-D setting is recommended as future work.
ContributorsMangavelli, Sai Chaitanya (Author) / Huang, Huei-Ping (Thesis advisor) / Kim, Jeonglae (Committee member) / Forzani, Erica (Committee member) / Arizona State University (Publisher)
Created2018
157292-Thumbnail Image.png
Description
Autonomic closure is a new general methodology for subgrid closures in large eddy simulations that circumvents the need to specify fixed closure models and instead allows a fully-adaptive self-optimizing closure. The closure is autonomic in the sense that the simulation itself determines the optimal relation at each point and time between any subgrid term and the variables in the simulation, through the solution of a local system identification problem. It is based on highly generalized representations of subgrid terms having degrees of freedom that are determined dynamically at each point and time in the simulation. This can be regarded as a very high-dimensional generalization of the dynamic approach used with some traditional prescribed closure models, or as a type of “data-driven” turbulence closure in which machine-learning methods are used with internal training data obtained at a test-filter scale at each point and time in the simulation to discover the local closure representation.

In this study, a priori tests were performed to develop accurate and efficient implementations of autonomic closure based on particular generalized representations and parameters associated with the local system identification of the turbulence state. These included the relative number of training points and bounding box size, which impact computational cost and generalizability of coefficients in the representation from the test scale to the LES scale. The focus was on studying impacts of these factors on the resulting accuracy and efficiency of autonomic closure for the subgrid stress. Particular attention was paid to the associated subgrid production field, including its structural features in which large forward and backward energy transfer are concentrated.

More than five orders of magnitude reduction in computational cost of autonomic closure was achieved in this study with essentially no loss of accuracy, primarily by using efficient frame-invariant forms for generalized representations that greatly reduce the number of degrees of freedom. The recommended form is a 28-coefficient representation that provides subgrid stress and production fields that are far more accurate in terms of structure and statistics than are traditional prescribed closure models.
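The local system identification at the heart of autonomic closure can be illustrated with a minimal least-squares sketch. The synthetic data, scalar-valued subgrid term, and variable names below are illustrative stand-ins for the actual tensor representations developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data at the test-filter scale: each row holds the
# values of N basis terms (e.g., products of resolved velocity gradients)
# at one training point; tau_test holds the corresponding subgrid term.
n_points, n_basis = 100, 28          # 28 mirrors the recommended representation
V = rng.standard_normal((n_points, n_basis))
h_true = rng.standard_normal(n_basis)
tau_test = V @ h_true                # synthetic, noise-free "measurements"

# Local system identification: find coefficients h that best relate the
# basis terms to the subgrid term in the least-squares sense.
h, *_ = np.linalg.lstsq(V, tau_test, rcond=None)

# The same coefficients are then applied to basis terms evaluated at the
# LES scale to obtain the subgrid term there.
V_les = rng.standard_normal(n_basis)
tau_les = V_les @ h
```

Because the number of training points exceeds the number of basis terms, the system is overdetermined and the least-squares solution is unique; the relative cost of the closure scales with both quantities, which is why their choice matters for efficiency.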
ContributorsKshitij, Abhinav (Author) / Dahm, Werner J.A. (Thesis advisor) / Herrmann, Marcus (Committee member) / Hamlington, Peter E (Committee member) / Peet, Yulia (Committee member) / Kim, Jeonglae (Committee member) / Arizona State University (Publisher)
Created2019
Description

Particle Image Velocimetry (PIV) has become a cornerstone of modern experimental fluid mechanics due to its unique ability to resolve the entire instantaneous two-dimensional velocity field of an experimental flow. However, this methodology has historically been omitted from undergraduate curricula due to the significant cost of research-grade PIV systems and safety considerations stemming from the high-power Nd:YAG lasers typically used in PIV systems. In the following undergraduate thesis, a low-cost model of a PIV system is designed for use in an undergraduate fluid mechanics lab. The proposed system consists of a Hele-Shaw water tunnel, a high-power LED light source, and a modern smartphone camera. Additionally, a standalone application was developed to perform the necessary image processing as well as Particle Streak Velocimetry (PSV) and PIV image analysis. Ultimately, the proposed system costs $229.33 and can replicate modern PIV techniques, albeit only for simple flow scenarios.
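The core analysis step of any PIV implementation is locating the cross-correlation peak between two interrogation windows. The sketch below uses synthetic frames and an FFT-based correlation; it is illustrative of the general technique, not the standalone application's actual implementation:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the pixel displacement of a particle pattern between two
    interrogation windows by locating the cross-correlation peak."""
    # Circular cross-correlation via FFT: corr[d] = sum_x a[x] * b[x + d]
    corr = np.fft.ifft2(np.conj(np.fft.fft2(win_a)) * np.fft.fft2(win_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices into signed shifts (the correlation is periodic)
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic test: a random particle image shifted by (3, -2) pixels.
rng = np.random.default_rng(1)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))
```

In a real PIV analysis this peak search is repeated over a grid of small interrogation windows, and sub-pixel interpolation of the peak is usually added; both refinements are omitted here for brevity.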

ContributorsZamora, Matthew Alan (Author) / Adrian, Ronald (Thesis director) / Kim, Jeonglae (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
189345-Thumbnail Image.png
Description
The current work aims to understand the influence of particles on scalar transport in particle-laden turbulent jets using point-particle direct numerical simulations (DNS). Such turbulence phenomena are observed in many applications, including aircraft and rocket engines (e.g., engines operating in dusty environments or close to the surface) and geophysical flows (e.g., sediment-laden rivers discharging nutrients into the oceans). This thesis systematically examines the fundamental interplay between (1) fluid turbulence, (2) inertial particles, and (3) scalar transport. The work considers a temporal jet at a Reynolds number of 5000 seeded with point particles and examines the influence of the Stokes number (St). Three Stokes numbers, St = 1, 7.5, and 20, were considered. The simulations were performed with the NGA solver, which solves the Navier-Stokes, advection-diffusion, and particle transport equations. Mean and turbulence statistics, along with the Reynolds stresses, are estimated for the fluid and particle phases throughout the domain. The observations do not show a significant influence of St on the mean flow evolution of the fluid, scalar, and particle phases. The scalar mixture-fraction variance and the turbulent kinetic energy (TKE) increase slightly for the St = 1 case compared to the particle-free and higher-St cases, indicating that an optimal St exists for which the scalar variation increases. This preliminary study establishes that the scalar variance is influenced by particles at the optimal particle St. Directions for future studies based on the current observations are presented.
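The Stokes number that parameterizes the particle phase is the ratio of the particle relaxation time to a characteristic fluid time scale. A minimal sketch follows; all numerical values are illustrative, not those of the study:

```python
def stokes_number(rho_p, d_p, mu, tau_fluid):
    """St = tau_p / tau_f, with the Stokes-drag relaxation time
    tau_p = rho_p * d_p**2 / (18 * mu)."""
    tau_p = rho_p * d_p**2 / (18.0 * mu)
    return tau_p / tau_fluid

# Example: water-like particles in air, with a hypothetical jet time scale.
rho_p = 1000.0      # particle density [kg/m^3]
d_p = 20e-6         # particle diameter [m]
mu = 1.8e-5         # air dynamic viscosity [Pa s]
tau_f = 1.0e-3      # characteristic fluid time scale [s] (assumed)
St = stokes_number(rho_p, d_p, mu, tau_f)   # ~ 1.2 for these values
```

St near unity marks the regime of strongest preferential concentration, which is consistent with the study's observation that the St = 1 case most affects the scalar variance.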
ContributorsPaturu, Venkata Sai Sushant (Author) / Pathikonda, Gokul (Thesis advisor) / Kasbaoui, Mohamed Houssem (Committee member) / Kim, Jeonglae (Committee member) / Prabhakaran, Prasanth (Committee member) / Arizona State University (Publisher)
Created2023
157724-Thumbnail Image.png
Description
Micro/meso-scale combustion has several advantages over conventional combustion in terms of scale, efficiency, enhanced heat and mass transfer, quick startup and shutdown, fuel utilization, and carbon footprint. This study analyzes the effect of temperature on the critical sooting equivalence ratio and precursor formation in a micro-flow reactor. The critical sooting equivalence ratio of a propane/air mixture at atmospheric pressure was investigated over wall temperatures of 750-1250°C using a micro-flow reactor with a controlled temperature profile, a diameter of 2.3 mm, equivalence ratios of 1-13, and inlet flow rates of 10 and 100 sccm. The effect of inert-gas dilution was studied by adding 90 sccm of nitrogen to 10 sccm of propane/air for a total flow rate of 100 sccm. The gas species were collected at the end of the reactor and analyzed with a gas chromatograph. Soot was identified by visually examining the reactor before and after combustion for traces of soot particles on its inner wall. At 1000-1250°C, carbon deposition/soot formation was observed inside the reactor at the critical sooting equivalence ratios. At 750-950°C, no soot formation was observed despite operating at much higher equivalence ratios, i.e., up to 100. Adding nitrogen increased the critical sooting equivalence ratio.

The wall temperature profiles were measured with a K-type thermocouple to quantify the difference between the wall temperature imposed by the resistive heater and the wall temperature during combustion inside the reactor. The temperature profiles were very similar for the 10 sccm case but markedly different for the other two cases at all temperatures.

These results indicate a trend that is not well known or understood for sooting flames, i.e., decreasing temperature decreases soot formation. The reactor's capability to examine the effect of temperature on the critical sooting equivalence ratio at different flow rates was successfully demonstrated.
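The equivalence ratio that defines the critical sooting condition compares the actual fuel/air ratio to the stoichiometric one. A minimal sketch for propane follows, with illustrative flow rates (not the study's operating points):

```python
# For propane, C3H8 + 5 O2 -> 3 CO2 + 4 H2O, so stoichiometric combustion
# needs 5 / 0.21 ~ 23.8 mol of air per mol of fuel.
O2_PER_FUEL = 5.0        # mol O2 per mol C3H8
O2_IN_AIR = 0.21         # mole fraction of O2 in air

def equivalence_ratio(q_fuel, q_air):
    """phi = (fuel/air)_actual / (fuel/air)_stoichiometric.
    Volumetric flow rates (e.g., sccm) stand in for molar rates,
    which is valid for ideal gases at the same conditions."""
    fa_actual = q_fuel / q_air
    fa_stoich = O2_IN_AIR / O2_PER_FUEL
    return fa_actual / fa_stoich

# Example: 1 sccm propane in 9 sccm air gives a rich mixture (phi > 1).
phi = equivalence_ratio(1.0, 9.0)
```

Values of phi well above 1, as in the study's range of 1-13 (and up to 100 at low temperature), correspond to strongly fuel-rich mixtures where soot precursors can form.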
ContributorsKhalid, Abdul Hannan Hannan (Author) / Milcarek, Ryan (Thesis advisor) / Dahm, Werner (Committee member) / Kim, Jeonglae (Committee member) / Arizona State University (Publisher)
Created2019
157738-Thumbnail Image.png
Description
Water is one of the most valuable natural resources but is extremely challenging to manage. According to earlier research in the field, many Water Distribution Systems (WDSs) around the world lose more than 40 percent of the clean water pumped into the system through leaks before the water ever reaches consumers. By reducing the amount of water leaked, distribution system managers can reduce the money, resources, and energy wasted on finding and repairing leaks and on producing and pumping water, increase system reliability, and more easily satisfy the present and future needs of all consumers. However, obtaining this information early and accurately can be complex and time-consuming. For large utilities like SRP, which move large volumes of water from various water bodies around the Phoenix area, it is even more crucial to locate and characterize leaks efficiently. Phoenix being a busy city, it is not practical to start digging everywhere whenever a loss in pressure is reported at the destination.

With this in mind, non-invasive geophysical methods deserve attention. There is considerable potential in this field of work, including environmental applications, since such methods can help in regions where water theft is widespread and conducted through leaks in the distribution system. Methods such as acoustic sensing and ground-penetrating radar (GPR) have shown good results, and the work done in this thesis helps establish the limits within which they can be used in the Phoenix area.

The concrete pipes used by SRP would not generate enough acoustic signal to be effectively picked up by a hydrophone at the opening, so GPR would be helpful in finding the initial location of a leak: the water around the leak wets the surrounding sand and therefore shows a clear contrast on the GPR. The frequency spectrum can then be examined around that point and compared with a location where no leak is known to be present.
ContributorsSrivastava, Siddhant (Author) / Lee, Taewoo (Thesis advisor) / Kwan, Beomjin (Committee member) / Kim, Jeonglae (Committee member) / Arizona State University (Publisher)
Created2019
158804-Thumbnail Image.png
Description
Autonomic closure is a recently-proposed subgrid closure methodology for large eddy simulation (LES) that replaces the prescribed subgrid models used in traditional LES closure with highly generalized representations of subgrid terms and solution of a local system identification problem that allows the simulation itself to determine the local relation between each subgrid term and the resolved variables at every point and time. The present study demonstrates, for the first time, practical LES based on fully dynamic implementation of autonomic closure for the subgrid stress and the subgrid scalar flux. It leverages the inherent computational efficiency of tensorally-correct generalized representations in terms of parametric quantities, and uses the fundamental representation theory of Smith (1971) to develop complete and minimal tensorally-correct representations for the subgrid stress and scalar flux. It then assesses the accuracy of these representations via a priori tests, and compares with the corresponding accuracy from nonparametric representations and from traditional prescribed subgrid models. It then assesses the computational stability of autonomic closure with these tensorally-correct parametric representations, via forward simulations with a high-order pseudo-spectral code, including the extent to which any added stabilization is needed to ensure computational stability, and compares with the added stabilization needed in traditional closure with prescribed subgrid models. Further, it conducts a posteriori tests based on forward simulations of turbulent conserved scalar mixing with the same pseudo-spectral code, in which velocity and scalar statistics from autonomic closure with these representations are compared with corresponding statistics from traditional closure using prescribed models, and with corresponding statistics of filtered fields from direct numerical simulation (DNS). 
These comparisons show substantially greater accuracy from autonomic closure than from traditional closure. This study demonstrates that fully dynamic autonomic closure is a practical approach for LES that requires accuracy even at the smallest resolved scales.
ContributorsStallcup, Eric Warren (Author) / Dahm, Werner J.A. (Thesis advisor) / Herrmann, Marcus (Committee member) / Calhoun, Ronald (Committee member) / Kim, Jeonglae (Committee member) / Kostelich, Eric J. (Committee member) / Arizona State University (Publisher)
Created2020
161412-Thumbnail Image.png
Description
The objective of this study is to estimate the variation in flight performance of a variable-sweep wing geometry on the reverse-engineered Boeing 2707-100 SST, compared against the traditional delta-wing approach used on supersonic airliners. The motivation lies in the fact that supersonic wing configurations do not work well at subsonic conditions, and subsonic wings are inefficient in supersonic flight. This likely means that flying long-haul subsonic routes with a supersonic wing geometry is inefficient compared to a conventional aircraft and, more importantly, requires high takeoff/landing speeds and long runways to bring the aircraft to a halt. One might partially get around this problem either by adding thrust (for example, with afterburners) or by using variable-geometry wings. To assess flight performance, the work in this report implements the latter solution. Aerodynamic performance parameters, such as the lift coefficient and the drag coefficient with its components, are tabulated for every test Mach number and altitude, along with propulsion performance parameters, such as thrust and thrust-specific fuel consumption at different engine power settings, flight Mach numbers, and altitudes, in a propulsion database file. Flight performance is then estimated using flight missions and an energy-maneuverability theory approach. The flight performance was studied at several sweep angles to estimate the best possible sweep orientation for a given mission requirement, and an optimal flight mission was developed for an aircraft with swing-wing capabilities.
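The energy-maneuverability theory approach rests on the specific excess power metric. A minimal sketch follows; all numbers are illustrative, not Boeing 2707-100 data:

```python
def specific_excess_power(thrust, drag, velocity, weight):
    """Ps = (T - D) * V / W, in m/s: the rate at which the aircraft can
    gain energy height (climb and/or accelerate) at this flight condition.
    thrust and drag in N, velocity in m/s, weight in N."""
    return (thrust - drag) * velocity / weight

# Example: comparing two hypothetical sweep settings at the same flight
# condition, where the higher sweep is assumed to reduce drag.
ps_low_sweep = specific_excess_power(thrust=200e3, drag=150e3,
                                     velocity=250.0, weight=1.5e6)
ps_high_sweep = specific_excess_power(thrust=200e3, drag=120e3,
                                      velocity=250.0, weight=1.5e6)
```

Sweeping Ps over the Mach-altitude envelope for each sweep angle, using the tabulated aerodynamic and propulsion data, is what allows the best sweep orientation to be selected for each mission segment.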
ContributorsChaudhari, Bhargav Naginbhai (Author) / Takahashi, Timothy T (Thesis advisor) / Dahm, Werner J (Committee member) / Kim, Jeonglae (Committee member) / Arizona State University (Publisher)
Created2021
161464-Thumbnail Image.png
Description
The Transonic Area Rule, developed by Richard T. Whitcomb in the early 1950s, revolutionized high-speed flight because its insight allowed engineers to reduce and/or delay the transonic drag rise. To this day, it is the rationale behind “coke-bottle” sculpturing (indenting the aircraft fuselage at the wing-fuselage junction) to alter the cross-sectional area development of the body. According to Whitcomb, this indentation creates a smoother cross-sectional area development of the body and consequently should reduce the number of shocks on the body, their intensity, and their shock-pattern complexity. In addition, modeling a geometry’s transonic drag rise could be simplified by creating a comparable body of revolution with the same cross-sectional area development as the original geometry. Thus, the Transonic Area Rule has been advertised as an aerodynamic multitool. This new work probes the underlying mechanics of the Transonic Area Rule and determines just how accurate it is in producing its advertised results. To accomplish this, several different wave-drag approximation methods were used to replicate and compare the results presented in Whitcomb’s famous 1952 report [16]. These methods include EDET (Empirical Drag Estimation Technique) [4], D2500 (the Harris Wave Drag program) [6], and CFD (Computational Fluid Dynamics) analysis through SU2 [5]. Overall drag-increment data were collected for comparison with Whitcomb’s data. More in-depth analysis was then done on the flow conditions around the geometries using CFD solution plots. The collected data argued against Whitcomb’s comparable-body-of-revolution claim, as no cases were found in which the comparable body and the original body yielded similar drag-rise characteristics. Moreover, shock structures and patterns were not simplified in two of the three cases observed and were instead complicated even further.

The only exception was the swept-wing, cylindrical-body case, in which all shocks were virtually eliminated at all observed Mach numbers. The data argued in favor of the reduced transonic drag rise claim, as the drag rise was indeed reduced for the three observed geometries, but only over a limited Mach number range.
ContributorsArmenta, Francisco Xavier (Author) / Takahashi, Timothy T (Thesis advisor) / Kim, Jeonglae (Committee member) / Rodi, Patrick (Committee member) / Arizona State University (Publisher)
Created2021
161953-Thumbnail Image.png
Description
Identifying and tracking the location of the fluid interface is a fundamental aspect of multiphase flows. The Volume of Fluid (VOF) and Level Set methods are widely used to track the interface accurately. Analyzing liquid structures such as sheets, ligaments, and droplets helps in understanding the flow physics and the fluid breakup mechanism, aids in predicting droplet formation, and improves atomization and spray combustion modeling. This thesis focuses on developing a new method to identify these liquid structures and devising a sphere model for droplet size prediction by combining concepts from linear algebra, rigid body dynamics, computational fluid mechanics, scientific computing, and visualization. The first part of the thesis presents a new approach to classify fluid structures based on their length scales along their principal axes. This approach provides smooth tracking of the structures' generation history instead of relying on high-speed video imaging of the experiment. A droplet is observed to have three comparable length scales, while a ligament has one and a sheet has two significantly larger length scales. The subsequent breakup of ligaments and droplets depends on the atomizer geometry, operating conditions, and fluid physical properties. While it is straightforward to apply DNS to estimate this breakup, doing so has proven computationally expensive. The second part of the thesis develops a sphere model that essentially reduces this computational cost. After identifying a liquid structure, the sphere model uses the level set data in the domain to quantify the structure with spheres. From the evolution of these spheres as they separate from each other, the subsequent droplet size distribution can be evaluated.
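The principal-axis classification can be sketched with a singular-value decomposition of a structure's point cloud: the singular values measure the extent along each principal axis. The threshold and synthetic point sets below are illustrative, and the thesis's actual method may differ in detail:

```python
import numpy as np

def classify_structure(points, ratio=3.0):
    """Classify one liquid structure from (N, 3) coordinates sampled
    inside it, by comparing length scales along its principal axes."""
    centered = points - points.mean(axis=0)
    # Singular values (sorted descending) are proportional to the RMS
    # extent of the point cloud along each principal axis.
    l1, l2, l3 = np.linalg.svd(centered, compute_uv=False)
    if l1 > ratio * l2:          # one dominant axis  -> ligament
        return "ligament"
    if l2 > ratio * l3:          # two dominant axes  -> sheet
        return "sheet"
    return "droplet"             # three comparable axes -> droplet

# Synthetic examples (illustrative point clouds, not simulation data):
rng = np.random.default_rng(2)
blob = rng.standard_normal((500, 3))                      # ~ spherical
rod = rng.standard_normal((500, 3)) * [10.0, 1.0, 1.0]    # elongated
sheet = rng.standard_normal((500, 3)) * [10.0, 10.0, 1.0] # flattened
```

In practice the point cloud would come from the cells (or interface points) belonging to one connected liquid region identified from the VOF or level set field.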
ContributorsKashetty, Sindhuja (Author) / Herrmann, Marcus (Thesis advisor) / Wells, Valana (Committee member) / Kim, Jeonglae (Committee member) / Arizona State University (Publisher)
Created2021