Matching Items (616)

Description

Recently, the use of zinc oxide (ZnO) nanowires as an interphase in composite materials has been demonstrated to increase the interfacial shear strength between carbon fiber and an epoxy matrix. In this research work, the strong adhesion between ZnO and carbon fiber is investigated to elucidate the interactions at the interface that result in high interfacial strength. First, molecular dynamics (MD) simulations are performed to calculate the adhesive energy between bare carbon and ZnO. Since the carbon fiber surface has oxygen functional groups, these were modeled, and the MD simulations showed that ketones interact strongly with ZnO; this was not observed for hydroxyl or carboxylic acid groups. It was also found that the ability of the ketone molecules to change orientation facilitated their interactions with the ZnO surface. Experimentally, an atomic force microscope (AFM) was used to measure the adhesive energy between ZnO and carbon through a liftoff test employing a highly oriented pyrolytic graphite (HOPG) substrate and a ZnO-covered AFM tip. Oxygen functionalization of the HOPG surface was shown to increase the adhesive energy. Additionally, the surface of ZnO was modified to hold a negative charge, which also increased the adhesive energy. This increase in adhesion resulted from increased induction forces, given the relatively high polarizability of HOPG and the preservation of the charge on the ZnO surface. It was found that the additional negative charge can be preserved on the ZnO surface because carbon and ZnO form a Schottky contact, which presents an energy barrier. Other materials with the same ionic properties as ZnO but with higher polarizability also demonstrated good adhesion to carbon. This result substantiates that the induced interaction can be facilitated not only by the polarizability of carbon but by that of either material at the interface. The versatility to modify the magnitude of the induced interaction between carbon and an ionic material provides a new route to create interfaces with controlled interfacial strength.
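
The adhesive energy reported from slab-model MD simulations of this kind is normally expressed as a work of adhesion per unit contact area. The relation below is the standard textbook definition, shown only for orientation; it is not claimed to be the exact expression used in this thesis.

```latex
% Work of adhesion between a carbon slab and a ZnO slab (standard definition, illustrative only)
W_{\mathrm{ad}} = \frac{E_{\mathrm{C}} + E_{\mathrm{ZnO}} - E_{\mathrm{C/ZnO}}}{A}
```

Here E_C and E_ZnO are the total energies of the isolated carbon and ZnO slabs, E_C/ZnO is the energy of the combined interface, and A is the contact area.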
Contributors: Galan Vera, Magdian Ulises (Author) / Sodano, Henry A. (Thesis advisor) / Jiang, Hanqing (Committee member) / Solanki, Kiran (Committee member) / Oswald, Jay (Committee member) / Speyer, Gil (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Wind measurements are fundamental inputs for the evaluation of potential energy yield and performance of wind farms. Three-dimensional scanning coherent Doppler lidar (CDL) may provide a new basis for wind farm site selection, design, and control. In this research, CDL measurements obtained from multiple wind energy developments are analyzed and a novel wind farm control approach is modeled. The possibility of using lidar measurements to more fully characterize the wind field is discussed, specifically terrain effects, spatial variation of winds, power density, and the effect of shear at different layers within the rotor-swept area. Various vector retrieval methods have been applied to the lidar data, and results are presented on an elevated terrain-following surface at hub height. The vector retrieval estimates are compared with tower measurements after interpolation to the appropriate level. CDL data are used to estimate the spatial power density at hub height. Since CDL can measure winds at different vertical levels, an approach for estimating wind power density over the wind turbine rotor-swept area is explored. Sample optimized wind farm layouts using lidar data and global optimization algorithms, accounting for wake interaction effects, have been explored. An approach to evaluate spatial wind speed and direction estimates from a standard nested Coupled Ocean and Atmosphere Mesoscale Prediction System (COAMPS) model and CDL is presented. The magnitude of the spatial differences between observations and simulation for wind energy assessment is investigated. Diurnal effects and ramp events as estimated by CDL and COAMPS were inter-compared. Novel wind farm control based on incoming wind speed and direction input from CDLs is developed. Both yaw and pitch control using scanning CDL for efficient wind farm control are analyzed. The wind farm control optimizes power production and reduces loads on wind turbines for various lidar wind speed and direction inputs, accounting for wind farm wake losses and wind speed evolution. Several wind farm control configurations were developed for enhanced integrability into the electrical grid. Finally, the value proposition of CDL for a wind farm development, based on uncertainty reduction and return on investment, is analyzed.
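
To make the rotor-swept-area power density idea concrete, the sketch below averages the kinetic power flux 0.5·ρ·v³ over several vertical measurement levels, weighting each level by the width of the rotor disk at that height. The heights, speeds, and weighting scheme are illustrative assumptions, not the retrieval algorithm or data used in this work.

```python
import numpy as np

def rotor_power_density(heights_m, speeds_ms, hub_height_m, rotor_diameter_m, rho=1.225):
    """Approximate wind power density (W/m^2) over the rotor-swept area.

    Each measurement level is weighted by the chord width of the rotor disk
    at that height, so levels near hub height count more than levels near
    the blade tips. Inputs are hypothetical lidar-style level measurements.
    """
    heights = np.asarray(heights_m, dtype=float)
    speeds = np.asarray(speeds_ms, dtype=float)
    r = rotor_diameter_m / 2.0
    dz = heights - hub_height_m
    inside = np.abs(dz) < r                      # keep only levels inside the rotor disk
    chord = 2.0 * np.sqrt(r**2 - dz[inside]**2)  # disk width at each level
    flux = 0.5 * rho * speeds[inside]**3         # kinetic power flux per unit area
    return np.average(flux, weights=chord)

# Hypothetical example: five lidar range-gate levels across an 80 m rotor at 100 m hub height
print(rotor_power_density(
    heights_m=[60, 80, 100, 120, 140],
    speeds_ms=[7.2, 7.8, 8.3, 8.7, 9.0],
    hub_height_m=100, rotor_diameter_m=80))
```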
Contributors: Krishnamurthy, Raghavendra (Author) / Calhoun, Ronald J. (Thesis advisor) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Fraser, Matthew (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive, and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems; it provides blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression, and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is considered the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation and has lower computational complexity. Thus, bilinear interpolation is chosen for our system.
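
As a generic illustration of the backend chain described above (envelope detection followed by log compression), the sketch below applies SciPy's FFT-based Hilbert transform to a synthetic RF line; it stands in for the FIR implementation discussed in the thesis, and all signal parameters are arbitrary.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic RF scan line: a 5 MHz Gaussian-windowed pulse plus noise (arbitrary parameters)
fs = 40e6                                   # sampling rate, Hz
t = np.arange(0, 20e-6, 1 / fs)
rf = np.exp(-((t - 8e-6) / 1e-6) ** 2) * np.cos(2 * np.pi * 5e6 * t)
rf += 0.01 * np.random.randn(t.size)

# Envelope detection via the analytic signal (FFT-based Hilbert transform)
envelope = np.abs(hilbert(rf))

# Log compression to a fixed dynamic range for B-mode display
dynamic_range_db = 50.0
env_db = 20 * np.log10(envelope / envelope.max() + 1e-12)
b_mode = np.clip(env_db, -dynamic_range_db, 0.0) + dynamic_range_db  # 0 (dark) .. 50 dB (bright)

print(f"peak: {b_mode.max():.1f} dB, mean: {b_mode.mean():.1f} dB")
```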
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The ability to design high performance buildings has acquired great importance in recent years due to numerous federal, societal, and environmental initiatives. However, this endeavor is much more demanding in terms of designer expertise and time. It requires a whole new level of synergy between automated performance prediction and the human capability to perceive, evaluate, and ultimately select a suitable solution. While performance prediction can be highly automated through the use of computers, performance evaluation cannot, unless it is with respect to a single criterion. The need to address multi-criteria requirements makes it more valuable for a designer to know the "latitude" or "degrees of freedom" available in changing certain design variables while achieving preset criteria such as energy performance, life cycle cost, and environmental impacts. This requirement can be met by a decision support framework based on near-optimal "satisficing," as opposed to purely optimal, decision making techniques. Currently, such a comprehensive design framework is lacking, which is the basis for undertaking this research. The primary objective of this research is to facilitate a complementary relationship between designers and computers for Multi-Criterion Decision Making (MCDM) during high performance building design. It is based on the application of Monte Carlo approaches to create a database of solutions using deterministic whole building energy simulations, along with data mining methods to rank variable importance and reduce the multi-dimensionality of the problem. A novel interactive visualization approach is then proposed which uses regression-based models to create dynamic interplays of how varying these important variables affects the multiple criteria, while providing a visual range or band of variation of the different design parameters. The MCDM process has been incorporated into an alternative methodology for high performance building design referred to as the Visual Analytics based Decision Support Methodology (VADSM). VADSM is envisioned to be most useful during the conceptual and early design performance modeling stages by providing a set of potential solutions that can be analyzed further for final design selection. The proposed methodology can be used for new building design synthesis as well as for the evaluation of retrofits and operational deficiencies in existing buildings.
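
A minimal sketch of the Monte Carlo plus data-mining step described above: sample hypothetical design variables, evaluate a stand-in performance function (a real workflow would run a whole-building energy simulation), and rank variable importance with a random forest. The variable names, ranges, and response function are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000

# Hypothetical design variables sampled Monte Carlo style
X = np.column_stack([
    rng.uniform(0.2, 0.6, n),   # window-to-wall ratio
    rng.uniform(10, 40, n),     # wall insulation R-value
    rng.uniform(0.2, 0.8, n),   # glazing SHGC
    rng.uniform(0.5, 1.2, n),   # lighting power density (W/ft^2)
])

# Stand-in for a deterministic energy simulation (illustrative response only)
eui = 30 + 40 * X[:, 0] - 0.5 * X[:, 1] + 25 * X[:, 2] + 15 * X[:, 3] \
      + rng.normal(0, 1, n)

# Rank variable importance with a random forest
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, eui)
names = ["WWR", "wall R-value", "SHGC", "LPD"]
for name, imp in sorted(zip(names, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.2f}")
```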
Contributors: Dutta, Ranojoy (Author) / Reddy, T. Agami (Thesis advisor) / Runger, George C. (Committee member) / Addison, Marlin S. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Production from a high pressure gas well at a high production rate encounters the risk of operating near the choking condition for compressible flow in porous media. The unbounded gas pressure gradient near the point of choking, which is located near the wellbore, generates an effective tensile stress on the porous rock frame. This tensile stress almost always exceeds the tensile strength of the rock, causing tensile failure of the rock and leading to wellbore instability. In a porous rock, not all pores are choked at the same flow rate, and when just one pore is choked, the flow through the entire porous medium should be considered choked, as the gas pressure gradient at the point of choking becomes singular. This thesis investigates the choking condition for compressible gas flow in a single microscopic pore. Quasi-one-dimensional analysis and axisymmetric numerical simulations of compressible gas flow in a pore-scale varicose tube with a number of bumps are carried out, and the local Mach number and pressure along the tube are computed for flow near the choking condition. The effects of tube length, inlet-to-outlet pressure ratio, the number of bumps, and the amplitude of the bumps on the choking condition are obtained. These critical values provide guidance for avoiding the choking condition in practice.
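
For orientation, quasi-one-dimensional compressible flow through a converging-diverging (varicose) passage is usually described by the standard isentropic area-Mach relation below, with choking corresponding to M = 1 at the minimum area; this is the textbook relation, not necessarily the exact pore-scale model formulated in the thesis.

```latex
\frac{A}{A^{*}} = \frac{1}{M}\left[\frac{2}{\gamma+1}
\left(1+\frac{\gamma-1}{2}M^{2}\right)\right]^{\frac{\gamma+1}{2(\gamma-1)}},
\qquad
\frac{p}{p_{0}} = \left(1+\frac{\gamma-1}{2}M^{2}\right)^{-\frac{\gamma}{\gamma-1}}
```

Here A* is the throat (minimum) area, p_0 the stagnation pressure, and γ the ratio of specific heats.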
Contributors: Yuan, Jing (Author) / Chen, Kangping (Thesis advisor) / Wang, Liping (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The objective of this thesis was to compare various approaches for classifying "good" and "bad" parts via non-destructive resonance testing methods by collecting and analyzing experimental data in the frequency and time domains. A laser scanning vibrometer was employed to measure vibrations of the samples in order to determine spectral characteristics such as natural frequencies and amplitudes. Statistical pattern recognition tools such as the Hilbert-Huang transform, Fisher's discriminant, and neural networks were used to identify and classify unknown samples as defective or not. In this work, finite element analysis software packages (ANSYS 13.0 and NASTRAN NX8.0) were used to obtain estimates of resonance frequencies in "good" and "bad" samples. Furthermore, a system identification approach was used to generate Auto-Regressive Moving Average with exogenous component, Box-Jenkins, and Output Error models from experimental data that can be used for classification.
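
A toy sketch of the frequency-domain classification idea: extract the dominant resonance-peak frequency from each vibration spectrum and separate "good" from "bad" parts with a linear (Fisher-type) discriminant, here scikit-learn's LDA. The synthetic spectra, the assumed frequency shift in defective parts, and the labels are invented for illustration and do not reproduce the thesis experiments.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
fs, n_samp = 10_000, 4096
t = np.arange(n_samp) / fs
freqs = np.fft.rfftfreq(n_samp, 1 / fs)

def vibration_spectrum(f_resonance):
    """Magnitude spectrum of a synthetic decaying sinusoid at the part's resonance."""
    sig = np.exp(-20 * t) * np.sin(2 * np.pi * f_resonance * t)
    sig += 0.05 * rng.standard_normal(n_samp)
    return np.abs(np.fft.rfft(sig))

# Assume 'bad' parts show a small downward shift in natural frequency (hypothetical)
good = [vibration_spectrum(rng.normal(1500, 5)) for _ in range(40)]
bad = [vibration_spectrum(rng.normal(1450, 5)) for _ in range(40)]

# Feature: frequency of the dominant spectral peak
features = np.array([[freqs[np.argmax(s)]] for s in good + bad])
labels = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```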
Contributors: Jameel, Osama (Author) / Redkar, Sangram (Thesis advisor) / Arizona State University (Publisher)
Created: 2013
Description

According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations between EUIs and CBECS variables were identified. Other than floor area, some of the important variables were number of workers, location, number of PCs, and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which can handle only continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than Portfolio Manager. The broader impact of the new benchmarking methodology is that it allows for identifying important categorical variables and then incorporating them in a local, as opposed to global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in the practical implementation of benchmarking tools, which rely on query-based building and HVAC variable filters specified by the user.
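
A minimal sketch of the model-evaluation step mentioned above: fit a regression tree to placeholder building descriptors and score it with the coefficient of variation of the RMSE. The feature names and values are hypothetical, not CBECS records, and the tree settings are arbitrary.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 1500

# Placeholder building descriptors (not actual CBECS variables or values)
workers = rng.integers(5, 500, n)
pcs = (workers * rng.uniform(0.5, 1.5, n)).astype(int)
floor_area = rng.uniform(5_000, 200_000, n)   # ft^2
cooling_type = rng.integers(0, 4, n)          # encoded categorical variable
X = np.column_stack([workers, pcs, floor_area, cooling_type])

# Synthetic EUI response (kBtu/ft^2/yr) with noise, for illustration only
eui = 40 + 0.05 * workers + 0.03 * pcs + 5 * cooling_type + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, eui, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((tree.predict(X_te) - y_te) ** 2))
cv_rmse = rmse / y_te.mean()   # coefficient of variation of the RMSE
print(f"CV(RMSE) = {cv_rmse:.1%}")
```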
Contributors: Kaskhedikar, Apoorva Prakash (Author) / Reddy, T. Agami (Thesis advisor) / Bryan, Harvey (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Natural resource depletion and environmental degradation are the stark realities of the times we live in. As awareness about these issues increases globally, industries and businesses are becoming interested in understanding and minimizing the ecological footprints of their activities. Evaluating the environmental impacts of products and processes has become a key issue and the first step towards addressing and eventually curbing climate change. Additionally, companies are finding it beneficial and are interested in going beyond compliance, using pollution prevention strategies and environmental management systems to improve their environmental performance. Life-cycle Assessment (LCA) is an evaluative method to assess the environmental impacts associated with a product's life-cycle from cradle to grave (i.e., from raw material extraction through material processing, manufacturing, distribution, use, repair and maintenance, and finally disposal or recycling). This study focuses on evaluating building envelopes on the basis of their life-cycle analysis. To facilitate this analysis, a small-scale office building, the University Services Building (USB), with a built-up area of 148,101 ft2 situated on the ASU campus in Tempe, Arizona, was studied. The building's exterior envelope is the highlight of this study. The current exterior envelope is made of tilt-up concrete construction, a type of construction in which the concrete elements are constructed horizontally and, after they are cured, tilted up using cranes and braced until other structural elements are secured. This building envelope is compared to five other building envelope systems (concrete block, insulated concrete form, cast-in-place concrete, steel stud, and curtain wall constructions), evaluating them on the basis of least environmental impact. The research methodology involved developing energy models, simulating them, and generating the changes in energy consumption due to the above-mentioned envelope types. Energy consumption data, along with various other details such as building floor area, the areas of walls, columns, and beams, and their material types, were imported into the life-cycle assessment software ATHENA Impact Estimator for Buildings. Using this four-step LCA methodology, the results showed that the steel stud envelope performed the best and had the least environmental impact compared to the other envelope types. This research methodology can be applied to other building typologies.
Contributors: Ramachandran, Sriranjani (Author) / Bryan, Harvey (Thesis advisor) / Reddy, T. Agami (Committee member) / White, Philip (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Smoke entering a flight deck cabin has been an issue for commercial aircraft for many years. The issue for a flight crew is how to mitigate the smoke so that they can safely fly the aircraft. For this thesis, the feasibility of a Negative Pressure System that utilizes the difference between cabin altitude pressure and outside altitude pressure to remove smoke from a flight deck was studied. Existing procedures for flight crews call for a descent down to a safe level for depressurizing the aircraft before taking further action. This process takes crucial time and degrades the flight crew's situational awareness. It requires the flight crew's coordination and fast thinking to manually take control of the aircraft, which has become increasingly difficult due to advancements in aircraft automation. Unfortunately, this is the only accepted procedure used by flight crews. Other products merely displace the smoke, and only after the time it takes the flight crew to set up the smoke displacement unit, with no guarantee that the crew will be able to see or use all of the aircraft's controls. The Negative Pressure System would work automatically and would not only use components similar to those already found on the aircraft, but also work in conjunction with the smoke detection and pressurization systems, so smoke removal can begin without descending to a lower altitude. For this system to work correctly, many factors must be considered. The size of a flight deck varies from aircraft to aircraft; therefore, the system's ability to efficiently remove smoke is examined. For the system to be feasible, cost and weight must also be considered, as the added fuel consumption due to the weight of the system may be the limiting factor for installing such a system on commercial aircraft.
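
The available driving potential for such a system is the difference between cabin pressure and ambient static pressure at altitude. Using the standard-atmosphere (ISA) troposphere relation below, a cabin held at an 8,000 ft cabin altitude while the aircraft cruises at 35,000 ft sees a differential of roughly 7.5 psi (about 51 kPa); these are generic ISA numbers given for orientation, not figures from the thesis.

```latex
p(h) = p_{0}\left(1 - \frac{L\,h}{T_{0}}\right)^{\frac{gM}{RL}},
\qquad
\Delta p = p\!\left(h_{\text{cabin}}\right) - p\!\left(h_{\text{aircraft}}\right)
```

with p_0 = 101.325 kPa, T_0 = 288.15 K, L = 6.5 K/km, and gM/(RL) ≈ 5.256.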
Contributors: Davies, Russell (Author) / Rogers, Bradley (Thesis advisor) / Palmgren, Dale (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The friction condition is an important factor in controlling the compression process in metal forming. Friction calibration maps (FCM) are widely used to estimate friction factors between the workpiece and the die. However, in standard FEA the friction condition is defined by a friction coefficient (µ), while the FCM uses a constant shear friction factor (m) to describe the friction condition. The purpose of this research is to find a method to convert the m factor to a µ factor, so that FEA can be used to simulate ring tests with µ. The research is carried out with FEA and Design of Experiments (DOE). FEA is used to simulate the ring compression test. A 2D quarter model is adopted as the geometry model, and a bilinear material model is used in the nonlinear FEA. After the model is established, validation tests are conducted by examining the influence of Poisson's ratio on the ring compression test. It is shown that the established FEA model is valid, especially when Poisson's ratio is set close to 0.5 in the FEA. Material folding phenomena are present in this model, and µ factors are applied at all surfaces of the ring. It is also found that the reduction ratio of the ring and the slopes of the FCM can be used to describe the deformation of the ring specimen. With the baseline FEA model, formulas relating the deformation parameters, material mechanical properties, and µ factors are generated through statistical analysis of the simulation results of the ring compression test. A method to substitute the m factor with µ factors for a particular material, by selecting and applying the µ factor in time sequence, is developed based on these formulas. By converting the m factor into a µ factor, cold forging can be simulated.
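
For orientation, the two friction descriptions contrasted above are the constant shear model and the Coulomb model; equating the friction stresses gives only a local, pressure-dependent correspondence, which is why converting a single m into a usable µ is non-trivial. The relation below is the textbook comparison, not the conversion procedure developed in the thesis.

```latex
\tau = m\,k \;\;\text{(constant shear friction)},\qquad
\tau = \mu\,p \;\;\text{(Coulomb friction)}
\;\;\Longrightarrow\;\;
\mu_{\text{local}} = \frac{m\,k}{p}
```

where k is the shear yield strength of the workpiece material and p is the local normal contact pressure.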
Contributors: Kexiang (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013