Matching Items (720)
Description
The evolution of single hairpin vortices and multiple interacting hairpin vortices is studied in direct numerical simulations of channel flow at Re-tau = 395. The purpose of this study is to observe the effects of increased Reynolds number and varying initial conditions on the growth of hairpins and the conditions under which single hairpins autogenerate hairpin packets. Hairpin vortices are believed to provide a unified picture of wall turbulence and to play an important role in the production of Reynolds shear stress, which is directly related to turbulent drag. The structures of the initial three-dimensional vortices are extracted from the two-point spatial correlation of the velocity field in a fully turbulent direct numerical simulation by linear stochastic estimation and embedded in a mean flow having the profile of the fully turbulent flow. The Reynolds number of the present simulation is more than twice that of the Re-tau = 180 flow from earlier literature, and the conditional events used to define the stochastically estimated single-vortex initial conditions include a number of new types of events, such as quasi-streamwise vorticity and Q4 events. The effects of parameters such as strength, asymmetry, and position are evaluated and compared with existing results in the literature. This study then attempts to answer questions concerning how vortex mergers produce larger-scale structures, a process that may contribute to the growth of length scale with increasing distance from the wall in turbulent wall flows. Multiple vortex interactions are studied in detail.
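As a rough illustration of the linear stochastic estimation step described above (not the thesis code; the event vector, correlations, and data here are synthetic), the conditional velocity associated with a prescribed event can be computed from two-point correlations as follows:

```python
# Minimal LSE sketch with synthetic data: estimate the velocity at a point x'
# conditioned on an event vector E at a reference point, using only the
# event-event and event-velocity two-point correlations.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 20000
E = rng.standard_normal((n_samples, 3))                  # event components at the reference point
A_true = np.array([[0.8, 0.1, 0.0],
                   [0.0, 0.5, 0.2],
                   [0.1, 0.0, 0.3]])
u = E @ A_true.T + 0.3 * rng.standard_normal((n_samples, 3))   # velocity fluctuations at x'

# Normal equations of LSE: <E_j E_k> L_ik = <E_j u_i>
R_EE = E.T @ E / n_samples            # event-event correlation tensor
R_Eu = E.T @ u / n_samples            # event-velocity two-point correlation

L = np.linalg.solve(R_EE, R_Eu).T     # estimation coefficients, u_hat = L @ E

E_event = np.array([0.0, -1.0, 1.0])  # hypothetical prescribed event (e.g., Q4-like)
print("conditional velocity estimate at x':", L @ E_event)
```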
Contributors: Parthasarathy, Praveen Kumar (Author) / Adrian, Ronald (Thesis advisor) / Huang, Huei-Ping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Buildings in the United States account for over 68 percent of electricity consumed, 39 percent of total energy use, and 38 percent of carbon dioxide emissions. By the year 2035, about 75% of the U.S. building sector will be either new or renovated. The energy efficiency requirements of current building codes would therefore have a significant impact on future energy use; hence, one of the most widely accepted solutions for slowing the growth rate of GHG emissions, and eventually reversing it, is stringent adoption of building energy codes. A large number of building energy codes exist, along with many studies documenting the energy savings possible through code compliance. However, most codes are difficult to comprehend and require an extensive understanding of the code, its compliance paths, and all mandatory and prescriptive requirements, as well as a strategy for converting these into energy model inputs. This paper provides a simplified solution for the entire process through an easy-to-use interface for code compliance and energy simulation: a spreadsheet-based tool called ECCO, the Energy Code COmpliance tool. The tool provides a platform for a more detailed analysis of building codes as applicable to each individual building in each climate zone. It also facilitates quick building energy simulation to determine the energy savings achieved through code compliance. This process is highly beneficial not only for code compliance but also for identifying parameters that can be improved for energy efficiency. Code compliance is simplified through a series of parametric runs that generate the minimally compliant baseline building and a 30%-beyond-code building. The tool is intended as an effective solution for architects and engineers performing initial-level analysis, as well as for jurisdictions as a front-end diagnostic check for code compliance.
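As a loose sketch of the parametric-baseline idea (purely illustrative Python, not the spreadsheet tool itself; the prescriptive values below are placeholders, not values from any actual energy code), a minimally compliant baseline can be generated by overriding the proposed design's envelope inputs with climate-zone prescriptive minima:

```python
# Illustrative only: build baseline energy-model inputs by overriding the proposed
# design with prescriptive code-minimum envelope values for a climate zone.
# The numbers below are placeholders, not requirements from any actual code.
PRESCRIPTIVE_MINIMA = {
    "2B": {"wall_r": 13.0, "roof_r": 25.0, "window_u": 0.65, "window_shgc": 0.25},
    "5A": {"wall_r": 20.0, "roof_r": 30.0, "window_u": 0.45, "window_shgc": 0.40},
}

def baseline_inputs(climate_zone: str, proposed: dict) -> dict:
    """Return the proposed model inputs with envelope values reset to code minima."""
    baseline = dict(proposed)
    baseline.update(PRESCRIPTIVE_MINIMA[climate_zone])
    return baseline

proposed = {"wall_r": 30.0, "roof_r": 40.0, "window_u": 0.30, "window_shgc": 0.22,
            "lighting_w_per_m2": 8.0}
print(baseline_inputs("5A", proposed))   # these inputs would feed the baseline simulation run
```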
Contributors: Goel, Supriya (Author) / Bryan, Harvey J. (Thesis advisor) / Reddy, T. Agami (Committee member) / Addison, Marlin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A method of determining nanoparticle temperature through fluorescence intensity levels is described. Intracellular processes are often tracked through the use of fluorescence tagging, and ideal temperatures for many of these processes are unknown. Through the use of fluorescence-based thermometry, cellular processes such as intracellular enzyme movement can be studied and their respective temperatures established simultaneously. Polystyrene and silica nanoparticles are synthesized with a variety of temperature-sensitive dyes such as BODIPY, rose Bengal, Rhodamine dyes 6G, 700, and 800, Nile Blue A, and Nile Red. Photographs are taken with a QImaging QM1 Questar EXi Retiga camera while particles are heated from 25 to 70 °C and excited at 532 nm with a Coherent DPSS-532 laser. Photographs are converted to intensity images in MATLAB and analyzed for fluorescence intensity, and plots are generated in MATLAB to describe each dye's intensity versus temperature. Regression curves are created to describe the change in fluorescence intensity over temperature. Dyes are compared as the nanoparticle core material is varied. Large particles are also created to match the camera's optical resolution capabilities, and it is established that intensity values increase proportionally with nanoparticle size. Nile Red yielded the closest-fit model, with R2 values greater than 0.99 for a second-order polynomial fit. By contrast, Rhodamine 6G yielded an R2 value of only 0.88 for a third-order polynomial fit, making it the least reliable dye for temperature measurements using the polynomial model. Of particular interest in this work is Nile Blue A, whose fluorescence-temperature curve yielded a much different shape from the other dyes. It is recommended that future work describe a broader range of dyes and nanoparticle sizes and use multiple excitation wavelengths to better quantify each dye's quantum efficiency. Further research into the effects of nanoparticle size on fluorescence intensity levels should be considered, as the particles used here greatly exceed 2 µm. In addition, Nile Blue A should be further investigated as to why its fluorescence-temperature curve did not take on a shape characteristic of a temperature-sensitive dye in these experiments.
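A minimal sketch of the calibration step described above, using synthetic numbers rather than the measured data: fit a second-order polynomial to intensity versus temperature, then invert the fit to recover temperature from a measured intensity.

```python
# Hypothetical dye response and noise; not the thesis measurements.
import numpy as np

rng = np.random.default_rng(1)

temperature_c = np.linspace(25.0, 70.0, 10)                            # heating range used above
intensity = 1000.0 - 8.0 * temperature_c - 0.05 * temperature_c**2     # assumed dye response
intensity += rng.normal(0.0, 2.0, intensity.size)

# Calibration: second-order polynomial fit of intensity versus temperature
coeffs = np.polyfit(temperature_c, intensity, deg=2)
fit = np.polyval(coeffs, temperature_c)
r_squared = 1.0 - np.sum((intensity - fit) ** 2) / np.sum((intensity - intensity.mean()) ** 2)
print(f"R2 of calibration fit: {r_squared:.4f}")

# Temperature readout: solve calibration(T) = measured intensity, keep the in-range root
measured = 520.0
roots = np.roots(coeffs - np.array([0.0, 0.0, measured]))
in_range = [r.real for r in roots if abs(r.imag) < 1e-9 and 25.0 <= r.real <= 70.0]
print("estimated temperature (C):", in_range)
```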
Contributors: Tomforde, Christine (Author) / Phelan, Patrick (Thesis advisor) / Dai, Lenore (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A full understanding of material behavior is important for the prediction of residual useful life of aerospace structures via computational modeling. In particular, the influence of rolling-induced anisotropy on fatigue properties has not been studied extensively, and it is likely to have a meaningful effect. In this work, the fatigue behavior of a wrought Al alloy (2024-T351) is studied using notched uniaxial samples with load axes along either the longitudinal or transverse direction, and center-notched biaxial samples (cruciforms) with a uniaxial stress state of equivalent amplitude about the bore. Local composition and crystallography were quantified before testing using Energy Dispersive Spectroscopy and Electron Backscattering Diffraction. Interrupted fatigue testing at stresses close to yielding was performed on the samples to nucleate and propagate short cracks, and nucleation sites were located and characterized using standard optical and Scanning Electron Microscopy. Results show that crack nucleation occurred at fractured particles in longitudinal dogbone and cruciform samples, while transverse samples nucleated cracks at both debonded and fractured particles. The change in crack nucleation mechanism is attributed to the dimensional change of particles with respect to the material axes caused by global anisotropy. Crack nucleation from debonding reduced life until matrix fracture because debonded particles are sharper and generate matrix cracks sooner than their fractured counterparts. Longitudinal samples experienced multisite crack initiation because of the reduced cross-sectional areas of particles parallel to the loading direction. Conversely, the favorable orientation of particles in transverse samples reduced instances of particle fracture, eliminating multisite cracking and leading to increased fatigue life. Cyclic tests of cruciform samples showed that crack growth favors the longitudinal and transverse directions, with few instances of crack growth 45 degrees (diagonal) to the rolling direction. The diagonal crack growth is attributed to stronger influences of local anisotropy on crack nucleation. It was observed that, the majority of the time, crack nucleation is governed by the mixed influences of global and local anisotropies. Measurements of crystal directions parallel to the load along the main crack paths revealed directions clustered near the {110} planes and high-index directions. This trend is attributed to environmental effects resulting from cyclic testing in air.
Contributors: Makaš, Admir (Author) / Peralta, Pedro D. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Sieradzki, Karl (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The objective of this work is to develop a Stop-Rotor Multimode UAV. This UAV is capable of vertical take-off and landing like a helicopter and can convert from helicopter mode to airplane mode in mid-flight. Thus, the UAV can hover like a helicopter while achieving the high mission range of an airplane. The stop-rotor concept means that in mid-flight the lift-generating helicopter rotor stops and its blades are rotated into airplane wings; thrust in airplane mode is then provided by a pusher propeller. The aircraft configuration presents unique challenges in flight dynamics, modeling, and control. This thesis presents a mathematical model along with the design and simulation of a hover controller. In addition, the performance in fixed-wing flight and the autopilot architecture of the UAV are discussed. Also presented are experimental "conversion" results in which the Stop-Rotor aircraft was dropped from a hot air balloon and performed a successful conversion from helicopter to airplane mode.
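As an illustrative aside (not the controller developed in the thesis; the vehicle mass, gains, and limits are hypothetical), the flavor of a hover-control simulation can be sketched with a simple altitude-hold PID loop on a point-mass vertical model:

```python
# Altitude-hold PID on a point-mass model; all parameters are illustrative.
m, g, dt = 5.0, 9.81, 0.01          # vehicle mass [kg], gravity [m/s^2], time step [s]
kp, ki, kd = 40.0, 5.0, 25.0        # PID gains (hypothetical)

z, z_dot, integral, z_ref = 0.0, 0.0, 0.0, 10.0   # start on the ground, hover at 10 m
for step in range(3000):
    error = z_ref - z
    integral += error * dt
    thrust = m * g + kp * error + ki * integral - kd * z_dot   # rotor thrust command
    thrust = max(0.0, min(thrust, 2.0 * m * g))                # actuator limits
    z_ddot = thrust / m - g                                    # vertical dynamics
    z_dot += z_ddot * dt
    z += z_dot * dt

print(f"altitude after {3000 * dt:.0f} s: {z:.2f} m")
```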
Contributors: Vargas-Clara, Alvaro (Author) / Redkar, Sangram (Thesis advisor) / Macia, Narciso (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Titanium dioxide (TiO2) nanomaterial use is becoming more prevalent, as is the likelihood of human exposure and environmental release. The goal of this thesis is to develop analytical techniques to quantify the level of TiO2 in complex matrices to support environmental, health, and safety research of TiO2 nanomaterials. A pharmacokinetic model showed that inhalation of TiO2 nanomaterials caused the highest amount to be absorbed and distributed throughout the body. Smaller nanomaterials (<5 nm) accumulated in the kidneys before clearance. Nanoparticles of 25 nm diameter accumulated in the liver and spleen and were cleared from the body more slowly than smaller nanomaterials. A digestion method using nitric acid, hydrofluoric acid, and hydrogen peroxide was found to digest organic materials and TiO2 with a recovery of >80%. The samples were measured by inductively coupled plasma-mass spectrometry (ICP-MS), and the method detection limit was 600 ng of Ti. An intratracheal instillation study of TiO2 nanomaterials in rats found anatase TiO2 nanoparticles in the caudal lung lobe of rats 1 day post-instillation at a concentration of 1.2 µg/mg dry tissue, the highest deposition rate of any TiO2 nanomaterial. For all TiO2 nanomaterial morphologies, the concentrations in the caudal lobes were significantly higher than those in the cranial lobes. In a study of TiO2 concentration in food products, white-colored foods or foods with a hard outer shell had higher concentrations of TiO2. Hostess Powdered Donettes were found to have the highest Ti mass per serving, with 200 mg Ti. As much as 3.8% of the total TiO2 mass was able to pass through a 0.45 µm filter, indicating that some of the TiO2 is likely nanosized. In a study of TiO2 concentrations in personal care products and paints, the concentration of TiO2 was as high as 117 µg/mg in Benjamin Moore white paint and 70 µg/mg in a Neutrogena sunscreen. Greater than 6% of the Ti in one sunscreen was able to pass through a 0.45 µm filter. The nanosized TiO2 in food and personal care products may release as much as 16 mg of nanosized TiO2 per individual per day to wastewater.
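The per-person release estimate above combines measured product concentrations, the mass of product used, and the fraction passing a 0.45 µm filter. A back-of-the-envelope sketch of that arithmetic (the product list and all numbers below are illustrative placeholders, not the thesis data) looks like:

```python
# Illustrative arithmetic only: estimate daily nano-TiO2 release per person from
# Ti concentration (ug Ti / mg product), mg of product used per day, and the
# fraction of Ti passing a 0.45 um filter (taken here as the nano-sized portion).
TI_TO_TIO2 = 79.87 / 47.87            # convert Ti mass to equivalent TiO2 mass

products = [
    # (name, ug Ti per mg product, mg product per day, fraction passing 0.45 um filter)
    ("food item",   0.10, 50_000.0, 0.038),
    ("sunscreen",  70.00,    500.0, 0.060),
    ("toothpaste",  5.00,  1_000.0, 0.040),
]

total_nano_tio2_mg = 0.0
for name, conc, mass_mg, nano_fraction in products:
    ti_ug = conc * mass_mg                                      # total Ti per day for this product
    nano_tio2_mg = ti_ug * nano_fraction * TI_TO_TIO2 / 1000.0  # nano-sized TiO2, in mg
    total_nano_tio2_mg += nano_tio2_mg
    print(f"{name:10s}: {ti_ug / 1000:7.1f} mg Ti/day, {nano_tio2_mg:5.2f} mg nano-TiO2/day")

print(f"estimated nano-TiO2 released to wastewater: {total_nano_tio2_mg:.1f} mg/person/day")
```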
Contributors: Weir, Alex Alan (Author) / Westerhoff, Paul K (Thesis advisor) / Hristovski, Kiril (Committee member) / Herckes, Pierre (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real time computational burden is decreased significantly and the number of possible observation modes can be increased. Using sensor measurements from real experiments, the overall sequential Bayesian estimation approach, with the adaptive capability of varying the state dynamics and observation modes, is demonstrated for tracking crack damage.
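To make the tracking idea concrete, the following is a stripped-down sketch (synthetic data, a single loading mode, and a simple Paris-type growth law, rather than the full interacting-multiple-model and adaptive-observation machinery described above) of sequential Bayesian crack-length estimation with a particle filter:

```python
# Simplified particle-filter tracker for crack length; all parameters are assumed.
import numpy as np

rng = np.random.default_rng(2)

def grow(a, c=1e-3, m=1.5):
    """One loading block of crack growth: Paris-type law plus process noise."""
    return a + c * a**m + 0.02 * a * rng.standard_normal(a.shape)

n_particles, n_steps, obs_sigma = 2000, 50, 0.05
particles = rng.uniform(0.5, 1.5, n_particles)      # prior over initial crack length [mm]
weights = np.full(n_particles, 1.0 / n_particles)

true_a = np.array([1.0])
for k in range(n_steps):
    true_a = grow(true_a)                            # simulated ground-truth crack length
    obs = true_a[0] + rng.normal(0.0, obs_sigma)     # noisy crack-length observation

    particles = grow(particles)                                        # predict
    weights *= np.exp(-0.5 * ((obs - particles) / obs_sigma) ** 2)     # Bayes update
    weights /= weights.sum()

    if 1.0 / np.sum(weights**2) < n_particles / 2:   # resample on low effective sample size
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print(f"true length {true_a[0]:.3f} mm, posterior mean {np.dot(weights, particles):.3f} mm")
```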
Contributors: Huff, Daniel W (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Kovvali, Narayan (Committee member) / Chakrabarti, Chaitali (Committee member) / Chattopadhyay, Aditi (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
As robots increasingly migrate out of factories and research laboratories and into our everyday lives, they must move and act in environments designed for humans. For this reason, the need for anthropomorphic movements is of utmost importance. The objective of this thesis is to solve the inverse kinematics problem of redundant robot arms in a way that results in anthropomorphic configurations. The swivel angle of the elbow was used as a human arm motion parameter for the robot arm to mimic. The swivel angle is defined as the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Using kinematic data recorded from human subjects during everyday tasks, the linear sensorimotor transformation model was validated and used to estimate the swivel angle, given the desired end-effector position. Defining the desired swivel angle resolves the kinematic redundancy of the robot arm. The proposed method was tested with an anthropomorphic redundant robot arm, and the computed motion profiles were compared to those of the human subjects. This thesis shows that the method computes anthropomorphic configurations for the robot arm, even if the robot arm has different link lengths than the human arm and starts its motion from random configurations.
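The swivel-angle definition above can be written down compactly. A minimal sketch (with assumed joint positions and a vertical reference direction, not the recorded human data) is:

```python
# Swivel angle: rotation of the shoulder-elbow-wrist plane about the shoulder-wrist
# axis, measured from a reference direction projected into the plane normal to that axis.
import numpy as np

def swivel_angle(shoulder, elbow, wrist, reference=np.array([0.0, 0.0, -1.0])):
    n = wrist - shoulder
    n = n / np.linalg.norm(n)                       # virtual axis from shoulder to wrist

    def project(v):                                 # component of v normal to the axis
        return v - np.dot(v, n) * n

    u = project(reference)                          # reference direction in the normal plane
    v = project(elbow - shoulder)                   # elbow direction in the normal plane
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)

    angle = np.arctan2(np.dot(np.cross(u, v), n), np.dot(u, v))   # signed angle about n
    return np.degrees(angle)

# Assumed joint positions (meters) purely for illustration
shoulder = np.array([0.00, 0.00, 0.00])
elbow    = np.array([0.10, -0.05, -0.25])
wrist    = np.array([0.35, 0.00, -0.30])
print(f"swivel angle: {swivel_angle(shoulder, elbow, wrist):.1f} deg")
```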
Contributors: Wang, Yuting (Author) / Artemiadis, Panagiotis (Thesis advisor) / Mignolet, Marc (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis research focuses on developing a single-cell gene expression analysis method for the marine diatom Thalassiosira pseudonana and constructing a chip-level tool to perform single-cell RT-qPCR analysis. This chip will serve as a conceptual foundation for future deployable ocean monitoring systems. T. pseudonana, a common surface-water microorganism, has been detected in the deep ocean, as confirmed by phylogenetic and microbial community functional studies. Six-fold copy number differences between 23S rRNA and 23S rDNA were observed by RT-qPCR, demonstrating moderate functional activity of the detected photosynthetic microbes in the deep ocean, including T. pseudonana. Because of its ubiquity, T. pseudonana is a good candidate for an early warning system for monitoring ocean environmental perturbation. This early warning system will depend on identifying outlier gene expression at the single-cell level. An early warning system based on single-cell analysis is expected to detect environmental perturbations earlier than population-level analysis, which can only observe changes after a whole community has reacted. Preliminary work using tube-based, two-step RT-qPCR revealed, for the first time, gene expression heterogeneity of T. pseudonana under different nutrient conditions. Heterogeneity was revealed by differing gene expression activity of individual cells under the same conditions. This single-cell analysis showed a skewed, lognormal distribution and helped to identify outlier cells. The results indicate that the geometric average is more representative of the whole population than the arithmetic average. This contrasts with population-level analysis, which is limited to arithmetic averages, and highlights the value of single-cell analysis. In order to develop a deployable sensor for the ocean, a chip-level device was constructed. The chip contains surface-adhering droplets, defined by hydrophilic patterning, that serve as real-time PCR reaction chambers when immersed in oil. The chip demonstrated sensitivity at the single-cell level for both DNA and RNA. The success rate of these chip-based reactions was around 85%. The sensitivity of the chip was equivalent to that of published microfluidic devices with complicated designs and protocols, but the production process of the chip was simple and the materials are all easily accessible in conventional environmental and/or biology laboratories. On-chip tests provided heterogeneity information about the whole population and were validated by comparison with conventional tube-based methods and by p-value analysis. The statistical power of the chip-based single-cell analyses was mainly between 65% and 90%, which is acceptable and can be further increased by higher-throughput devices. With this chip and these single-cell analysis approaches, a new paradigm for robust early warning systems for ocean environmental perturbation is possible.
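A small sketch of the statistical point about averages (simulated expression values, not the thesis measurements): for lognormally distributed single-cell data the geometric mean tracks the typical cell, while the arithmetic mean is pulled up by outlier cells, which can be flagged in log space.

```python
# Simulated per-cell expression with a lognormal body and a few high "outlier" cells.
import numpy as np

rng = np.random.default_rng(3)

expression = rng.lognormal(mean=2.0, sigma=0.6, size=200)
expression[:3] *= 20.0                                   # inject three outlier cells

arithmetic_mean = expression.mean()
geometric_mean = np.exp(np.log(expression).mean())
print(f"arithmetic mean: {arithmetic_mean:.1f}, geometric mean: {geometric_mean:.1f}")

# Outlier call: cells more than 3 standard deviations from the mean in log space
log_expr = np.log(expression)
z = (log_expr - log_expr.mean()) / log_expr.std()
print("outlier cell indices:", np.flatnonzero(np.abs(z) > 3.0))
```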
Contributors: Shi, Xu (Author) / Meldrum, Deirdre R. (Thesis advisor) / Zhang, Weiwen (Committee member) / Chao, Shih-hui (Committee member) / Westerhoff, Paul (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Electromyogram (EMG)-based control interfaces are increasingly used in robot teleoperation, prosthetic device control, and the control of robotic exoskeletons. Over the last two decades, researchers have developed a plethora of decoding functions to map myoelectric signals to robot motions. However, these require extensive training and validation data sets, and the parameters of the decoding function are specific to each subject. In this thesis we propose a new methodology that requires no training and is not user-specific. The main idea is to supplement the decoding function's error with the human ability to learn the inverse model of an arbitrary mapping function. We show that the subjects gradually learned the control strategy and that their learning rates improved. We also worked on identifying an optimized control scheme that would be even more effective and easier for the subjects to learn. The optimization takes into account that muscles act in synergies while performing a motion task. The low-dimensional representation of the neural activity was used to control a two-dimensional task. Results showed that with the reduced-dimensionality mapping the subjects learned to control the device at a slower pace, but they were able to reach and retain the same level of controllability. In summary, we built an EMG-based controller for robotic devices that works for any subject, without any training or decoding function, suggesting human-embedded controllers for robotic devices.
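A minimal sketch of a synergy-style dimensionality reduction (synthetic EMG envelopes and PCA here, as one possible low-dimensional mapping; not the exact pipeline of the thesis):

```python
# Map many EMG channels to a 2-D control signal through a low-dimensional projection.
import numpy as np

rng = np.random.default_rng(4)

n_samples, n_channels = 5000, 8
# Two underlying "synergy" activations drive all channels, plus measurement noise
activations = np.abs(rng.standard_normal((n_samples, 2)))
mixing = np.abs(rng.standard_normal((2, n_channels)))
emg_envelopes = activations @ mixing + 0.05 * rng.standard_normal((n_samples, n_channels))

# PCA via SVD on the mean-centered envelopes; keep the two dominant components
mean_envelope = emg_envelopes.mean(axis=0)
_, _, vt = np.linalg.svd(emg_envelopes - mean_envelope, full_matrices=False)
projection = vt[:2]                        # 2 x n_channels linear map

# At run time, each new EMG sample becomes a 2-D command (e.g., cursor velocity)
new_sample = emg_envelopes[0]
cursor_xy = projection @ (new_sample - mean_envelope)
print("2-D control command:", cursor_xy)
```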
Contributors: Antuvan, Chris Wilson (Author) / Artemiadis, Panagiotis (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013