Matching Items (94)
Description
Ultrasound imaging is one of the major medical imaging modalities. It is cheap, non-invasive and has low power consumption. Doppler processing is an important part of many ultrasound imaging systems: it provides blood velocity information and is built on top of B-mode systems. We investigate the performance of two velocity estimation schemes used in Doppler processing systems, namely directional velocity estimation (DVE) and conventional velocity estimation (CVE). We find that DVE provides better estimation performance and is the only functioning method when the beam-to-flow angle is large. Unfortunately, DVE is computationally expensive and also requires divisions and square-root operations that are hard to implement. We propose two approximation techniques to replace these computations. Simulation results on cyst images show that the proposed approximations do not affect the estimation performance. We also study backend processing, which includes envelope detection, log compression and scan conversion. Three different envelope detection methods are compared. Among them, the FIR-based Hilbert transform is the best choice when phase information is not needed, while quadrature demodulation is a better choice if phase information is necessary. Bilinear and Gaussian interpolation are considered for scan conversion. Through simulations of a cyst image, we show that bilinear interpolation provides contrast-to-noise ratio (CNR) performance comparable to Gaussian interpolation at lower computational complexity. Thus, bilinear interpolation is chosen for our system.
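A minimal sketch of the FIR-based Hilbert-transform envelope detector compared above; the tap count, sampling rate and synthetic RF echo are illustrative assumptions, not parameters from the thesis:

```python
import numpy as np
from scipy.signal import remez, lfilter

fs = 40e6                                   # RF sampling rate (assumed)
t = np.arange(2048) / fs
rf = np.cos(2 * np.pi * 5e6 * t) * np.exp(-((t - 25e-6) / 3e-6) ** 2)  # toy RF echo

n_taps = 63                                 # odd length -> integer group delay
h = remez(n_taps, [0.05, 0.45], [1.0], type="hilbert")  # FIR 90-degree phase shifter

delay = (n_taps - 1) // 2
q = lfilter(h, [1.0], rf)[delay:]           # quadrature component, delay-compensated
i = rf[:q.size]                             # in-phase component (the RF itself)
envelope = np.sqrt(i**2 + q**2)             # envelope, prior to log compression
```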
Contributors: Wei, Siyuan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Frakes, David (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Production from a high-pressure gas well at a high production rate risks operating near the choking condition for compressible flow in porous media. The unbounded gas pressure gradient near the point of choking, which is located near the wellbore, exerts an effective tensile stress on the porous rock frame. This tensile stress almost always exceeds the tensile strength of the rock, causing tensile failure and wellbore instability. In a porous rock, not all pores choke at the same flow rate; once a single pore is choked, flow through the entire porous medium must be considered choked, since the gas pressure gradient at the point of choking becomes singular. This thesis investigates the choking condition for compressible gas flow in a single microscopic pore. Quasi-one-dimensional analysis and axisymmetric numerical simulations of compressible gas flow in a pore-scale varicose tube with a number of bumps are carried out, and the local Mach number and pressure along the tube are computed for flow near the choking condition. The effects of tube length, inlet-to-outlet pressure ratio, the number of bumps and the amplitude of the bumps on the choking condition are obtained. These critical values provide guidance for avoiding the choking condition in practice.
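The quasi-one-dimensional analysis rests on the isentropic area-Mach relation, which makes the choking mechanism easy to see: the narrowest throat of the varicose tube reaches Mach 1 first, and the whole flow is then choked. A minimal sketch, with an assumed three-bump tube profile standing in for the thesis's geometry:

```python
import numpy as np
from scipy.optimize import brentq

gamma = 1.4

def area_ratio(M):
    """A/A* for isentropic quasi-1D flow at local Mach number M."""
    return (1.0 / M) * ((2.0 / (gamma + 1)) * (1 + 0.5 * (gamma - 1) * M**2)) \
        ** ((gamma + 1) / (2 * (gamma - 1)))

def subsonic_mach(a_over_astar):
    """Subsonic root of the area-Mach relation."""
    return brentq(lambda M: area_ratio(M) - a_over_astar, 1e-6, 1.0)

# Varicose tube: radius with sinusoidal bumps (3 bumps, 20% amplitude, assumed).
x = np.linspace(0.0, 1.0, 201)
r = 1.0 + 0.2 * np.cos(2 * np.pi * 3 * x)
A = np.pi * r**2
A_star = A.min()                        # throat area at incipient choking
M = np.array([subsonic_mach(a / A_star) for a in A])
print("max local Mach:", M.max())       # -> 1 at the narrowest throat
```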
Contributors: Yuan, Jing (Author) / Chen, Kangping (Thesis advisor) / Wang, Liping (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Developing new, non-traditional device models is gaining popularity as silicon-based electrical devices approach their scaling limits. Membrane systems, also called P systems, are a new class of biological computation models inspired by the way cells process chemical signals. Spiking Neural P systems (SNP systems), a particular kind of membrane system, are inspired by the way neurons in the brain interact using electrical spikes. Compared to traditional Boolean logic, SNP systems not only perform similar functions but also offer a more promising route to reliable computation. Two basic neuron types, Low Pass (LP) neurons and High Pass (HP) neurons, are introduced. These two basic types of neurons suffice to build an arbitrary SNP neuron, and since SNP systems have been proved Turing complete, the two basic neuron types are Turing complete as well. These two basic types of neurons are further used as the elements of general-purpose arithmetic circuits such as adders, subtractors and comparators. In this thesis, erroneous behaviors of neurons are discussed. Transmission error (spike loss) is proved to be equivalent to threshold error, which makes the threshold-error discussion more universal. To improve reliability, a new structure called a motif is proposed. Compared to a Triple Modular Redundancy (TMR) improvement, the motif design proves efficient and effective in both single-neuron and arithmetic-circuit analyses. DRAM-based CMOS circuits are used to implement the two basic neuron types, and their functionality is verified using SPICE simulations. The motif-improved adder and comparator, compared to conventional Boolean logic designs, are much more reliable, with lower leakage and smaller silicon area. This leads to the conclusion that SNP systems could provide a more promising solution for reliable computation than conventional Boolean logic.
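A minimal sketch of threshold-style spiking rules and of the spike-loss/threshold-error equivalence; the HP/LP firing conditions below are assumed readings of the two neuron types, not the thesis's exact definitions:

```python
def hp_fires(n_spikes: int, theta: int) -> bool:
    """High Pass neuron: fires when enough spikes arrive in a step (assumed rule)."""
    return n_spikes >= theta

def lp_fires(n_spikes: int, theta: int) -> bool:
    """Low Pass neuron: fires on activity below the threshold (assumed rule)."""
    return 0 < n_spikes < theta

# Spike loss <-> threshold error: dropping k input spikes is indistinguishable
# from raising the HP threshold by k, since (s - k) >= theta  iff  s >= theta + k.
k = 2
for s in range(8):
    assert hp_fires(max(s - k, 0), theta=3) == hp_fires(s, theta=3 + k)
```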
Contributors: An, Pei (Author) / Cao, Yu (Thesis advisor) / Barnaby, Hugh (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
For CFD validation, hypersonic flow fields are simulated and compared with experimental data specifically designed to recreate conditions encountered by hypersonic vehicles. Simulated flow fields over a cone-ogive with flare at Mach 7.2 are compared with experimental data from the NASA Ames Research Center 3.5" hypersonic wind tunnel. A parametric study of turbulence models concludes that the k-kl-omega transition and SST transition turbulence models correlate best with the measurements. Downstream of the flare's shock wave, good correlation is found for all boundary layer profiles, with slight discrepancies in the static temperature near the surface. Simulated flow fields over a blunt cone with flare above Mach 10 are compared with experimental data from the CUBRC LENS hypervelocity shock tunnel. The lack of vibrational non-equilibrium calculations causes discrepancies in heat flux near the leading edge. Temperature profiles, where non-equilibrium effects are dominant, are compared with the dissociation of molecules to show the effect of dissociation on static temperature. Following the validation studies is a parametric analysis of a hypersonic inlet from Mach 6 to 20. Compressor performance is investigated for numerous cowl leading-edge locations at speeds up to Mach 10. The variable-cowl study showed positive trends in compressor performance parameters over a range of Mach numbers, arising from maximizing the intake of compressed flow. An interesting phenomenon, caused by the change in shock wave formation at different Mach numbers, developed inside the cowl and degraded the total pressure recovery. The inlet is also investigated at different altitudes to study the effects of Reynolds number and, consequently, of turbulent viscous effects on compressor performance. Turbulent boundary layer separation was identified as the cause of the change in compressor performance parameters with Reynolds number; this effect would not be noticeable if laminar flow were assumed. Mach numbers up to 20 are investigated to study the effects of vibrational and chemical non-equilibrium on compressor performance, and dissociation is found to directly affect the trends in kinetic energy efficiency and compressor efficiency.
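A minimal sketch of the two inlet performance parameters tracked in the parametric analysis, using the standard adiabatic relation between kinetic energy efficiency and total pressure recovery; the sample pressures and flight Mach number are illustrative, not results from the study:

```python
gamma = 1.4

def total_pressure_recovery(pt_exit: float, pt_freestream: float) -> float:
    """Ratio of inlet-exit to freestream total pressure."""
    return pt_exit / pt_freestream

def kinetic_energy_efficiency(pi_recovery: float, mach_inf: float) -> float:
    """eta_KE for an adiabatic inlet of a calorically perfect gas:
    eta_KE = 1 - 2/((gamma-1) M^2) * (pi^(-(gamma-1)/gamma) - 1)."""
    return 1.0 - (2.0 / ((gamma - 1.0) * mach_inf**2)) * (
        pi_recovery ** (-(gamma - 1.0) / gamma) - 1.0)

pi_r = total_pressure_recovery(pt_exit=0.35e5, pt_freestream=1.0e5)  # assumed values
print(kinetic_energy_efficiency(pi_r, mach_inf=8.0))                 # ~0.97
```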
Contributors: Oliden, Daniel (Author) / Lee, Tae-Woo (Thesis advisor) / Peet, Yulia (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A new theoretical model was developed, using energy conservation methods, to determine the fully atomized cross-sectional Sauter mean diameter of pressure-swirl atomizers. A detailed boundary-layer assessment led to a new viscous dissipation model for droplets in the spray. Integral momentum methods were also used to determine the complete velocity history of the droplets and entrained gas in the spray. The model was extensively validated against experiment and was found to predict the correct droplet size with high accuracy over a wide range of operating conditions. Detailed analysis showed that the energy model tends to overestimate droplet diameters at very low injection velocities, Weber numbers and cone angles. A full parametric study was also performed to uncover some of the underlying behavior of pressure-swirl atomizers. It was found that at high injection velocities the kinetic energy in the spray is significantly larger than the surface tension energy, so efforts to improve atomization quality by changing the liquid's surface tension may not be the most productive. The parametric studies also showed how the Sauter mean diameter and entrained velocities vary with increasing ambient gas density. Overall, the present energy model has the potential to provide quick and reasonably accurate solutions over a wide range of operating conditions, enabling the user to determine how different injection parameters affect spray quality.
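A minimal sketch of the energy-balance idea: injected kinetic energy splits into droplet kinetic energy, surface energy (6σ/D32 per unit liquid volume) and viscous dissipation, and the balance can be inverted for the Sauter mean diameter. The dissipation fraction and velocities are illustrative assumptions, not the thesis's closure:

```python
rho_l = 1000.0   # liquid density, kg/m^3 (water, assumed)
sigma = 0.072    # surface tension, N/m (water, assumed)
U_inj = 30.0     # injection velocity, m/s (assumed)
U_drop = 24.0    # fully atomized droplet velocity, m/s (assumed)
phi = 0.30       # fraction of injected KE lost to viscous dissipation (assumed)

# Per unit mass: 0.5*U_inj^2 = 0.5*U_drop^2 + 6*sigma/(rho_l*D32) + phi*0.5*U_inj^2
surface_energy = 0.5 * U_inj**2 * (1.0 - phi) - 0.5 * U_drop**2   # J/kg to surface
D32 = 6.0 * sigma / (rho_l * surface_energy)
print(f"Sauter mean diameter ~ {D32 * 1e6:.1f} micron")           # ~16 micron here
```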
Contributors: Moradi, Ali (Author) / Lee, Taewoo (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
With increasing transistor counts and shrinking feature sizes, reducing power consumption has become a major design constraint. This has given rise to aggressive architectural changes for on-chip power management and to rapid development of energy-efficient hardware accelerators. Accordingly, the objective of this research is to help software developers leverage these hardware techniques and improve the energy efficiency of the system. To achieve this, I propose two solutions for the Linux kernel. First, optimal use of these architectural enhancements requires accurate modeling of processor power consumption. Though many models in the literature capture processor power consumption, models that capture it at the task level are lacking. Task-level energy models are a prerequisite for an operating system (OS) to perform real-time power management, since the OS time-multiplexes tasks to share hardware resources. I propose a detailed design methodology for constructing an architecture-agnostic task-level power model and incorporating it into a modern operating system to build an online task-level power profiler. The profiler is implemented inside the latest Linux kernel and validated on an Intel Sandy Bridge processor. It has a negligible overhead of less than 1% of hardware resource consumption, and its power predictions show less than 4% error on application benchmarks from SPEC and PARSEC. I also demonstrate the importance of the proposed profiler for emerging architectural techniques through use-case scenarios, including heterogeneous computing and fine-grained per-core DVFS. Second, alongside architectural enhancements in general-purpose processors, hardware accelerators such as the coarse-grained reconfigurable architecture (CGRA) are gaining popularity. Unlike vector processors, which rely on data parallelism, a CGRA provides greater flexibility and compiler-level control, making it more suitable for the present SoC environment. To provide a streamlined development environment for CGRAs, I propose a flexible framework in Linux for CGRA design space exploration. With accurate and flexible hardware models, fine-grained integration with an accurate architectural simulator, and Linux memory management and DMA support, a user can carry out extensive experiments on a CGRA in a full-system environment.
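A minimal sketch of the kind of task-level power model the profiler builds: measured power regressed on per-task performance-counter event rates, with an affine term absorbing static power. The counter choices and data are illustrative assumptions, not the thesis's model:

```python
import numpy as np

# Rows: observation windows; columns: per-task event rates read at each
# scheduler tick, e.g. [instructions/s, LLC misses/s, unhalted cycles/s] (assumed).
X = np.array([[2.1e9, 3.0e6, 2.5e9],
              [1.4e9, 9.0e6, 2.6e9],
              [0.6e9, 1.0e6, 1.1e9],
              [2.8e9, 5.0e6, 3.1e9]])
P = np.array([18.5, 21.0, 9.0, 24.5])       # measured package power, W (assumed)

A = np.hstack([X, np.ones((len(X), 1))])    # affine column = static/idle power
coef, *_ = np.linalg.lstsq(A, P, rcond=None)

def task_power(event_rates):
    """Predict a task's power share from its counter event rates."""
    return float(np.append(event_rates, 1.0) @ coef)

print(task_power([1.0e9, 2.0e6, 1.5e9]))
```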
Contributors: Desai, Digant Pareshkumar (Author) / Vrudhula, Sarma (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Gas turbines are widely used to generate power for cities. They operate all over the world under a wide variety of ambient conditions, and every turbine has an inlet air temperature at which it operates at peak capacity. To attain this temperature in the hotter months, various cooling methods are used, such as refrigeration inlet cooling systems, evaporative methods, and thermal energy storage systems. One of the most widely used is the evaporative system, because it is among the safest and easiest methods to employ. However, the behavior of water droplets within the turbine inlet has not been extensively studied or documented. Understanding how the droplets behave within the inlet is important so that water droplets above a critical diameter do not enter the compressor and damage the compressor blades. To this end, a FLUENT simulation was constructed to determine the behavior of the water droplets and whether any droplets remain at the exit of the inlet, along with their sizes. Several engineering drawings were obtained from SRP and studied to obtain the correct dimensions. The simulation was then set up using data from SRP and from Parker-Hannifin, the maker of the spray nozzles, and several sets of simulations were run to see how the water droplets behaved under various conditions. The results were then analyzed and quantified so that they could be easily understood. They showed that the possible damage to the compressor increases with increasing temperature at constant relative humidity. This is partly because maintaining a constant relative humidity at a higher temperature requires a higher water vapor mass fraction in the air, which slightly increases the evaporation time of the water droplets. This in turn leads to more droplets, at larger diameters, exiting the inlet.
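A minimal sketch of a d²-law evaporation estimate for the critical-diameter question: whether a droplet shrinks away before reaching the compressor depends on its initial diameter, the evaporation constant (which falls as humidity rises) and its residence time. All values below are illustrative assumptions, not results of the FLUENT runs:

```python
import math

def exit_diameter(d0_m: float, K: float, residence_s: float) -> float:
    """d^2-law: D(t)^2 = D0^2 - K*t; returns 0 if the droplet fully evaporates."""
    d2 = d0_m**2 - K * residence_s
    return math.sqrt(d2) if d2 > 0 else 0.0

K = 1.0e-9       # evaporation constant, m^2/s (assumed; smaller at high humidity)
t_res = 0.15     # droplet residence time in the inlet duct, s (assumed)
for d0_um in (5, 15, 30):
    d_exit = exit_diameter(d0_um * 1e-6, K, t_res)
    print(f"{d0_um:>3} um droplet exits at {d_exit * 1e6:.1f} um")
# -> small droplets evaporate fully; large ones survive near their initial size
```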
Contributors: Hargrave, Kevin (Author) / Lee, Taewoo (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The partitioning of available solar energy into different fluxes at the Earth's surface is important in determining various physical processes, such as turbulent transport, subsurface hydrology and land-atmosphere interactions. Direct measurements of these turbulent fluxes are made with eddy-covariance (EC) towers, but the distribution of EC towers is sparse owing to their relatively high cost and the practical difficulties of logistics and deployment. As a result, the data are temporally and spatially limited and are inadequate for large-scale research such as regional and global climate modeling. Besides field measurements, an alternative is to estimate turbulent fluxes from the intrinsic relations between surface energy budget components, largely through thermodynamic equilibrium. These relations, referred to as relative efficiencies, have been included in several models that estimate the magnitude of turbulent fluxes, such as latent and sensible heat, in the surface energy budget. In this study, three theoretical models, based respectively on a lumped heat transfer model, linear stability analysis and the maximum entropy principle, were investigated. Model predictions of relative efficiencies were compared with turbulent flux data over different land covers, viz. lake, grassland and suburban surfaces. Similar results were observed over the lake and suburban surfaces, but significant deviation was found over the vegetated surface, where the relative efficiency of outgoing longwave radiation deviates from the theoretical predictions by orders of magnitude. The results also show that the energy partitioning process is strongly influenced by surface water availability. The study provides insight into which properties determine the energy partitioning process over different land covers and offers suggestions for future models.
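A minimal sketch of the partitioning diagnostics that can be formed from EC tower fluxes; the thesis's relative efficiencies are model-specific quantities, so a flux's share of available energy is used here only as a stand-in, and the sample fluxes are illustrative:

```python
import numpy as np

# Half-hourly surface energy fluxes in W/m^2 (assumed, not tower data).
Rn = np.array([450.0, 500.0, 380.0])   # net radiation
H  = np.array([120.0,  90.0,  40.0])   # sensible heat flux
LE = np.array([ 60.0, 180.0, 200.0])   # latent heat flux

bowen = H / LE                          # Bowen ratio: H vs LE partitioning
ef = LE / (H + LE)                      # evaporative fraction of turbulent flux
share_LE = LE / Rn                      # LE's share of available energy
print(bowen, ef, share_LE)              # wetter surfaces -> lower Bowen ratio
```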
Contributors: Yang, Jiachuan (Author) / Wang, Zhihua (Thesis advisor) / Huang, Huei-Ping (Committee member) / Vivoni, Enrique (Committee member) / Mays, Larry (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Ten regional climate model (RCM) and atmosphere-ocean general circulation model pairings from the North American Regional Climate Change Assessment Program were used to estimate the shift of extreme precipitation due to climate change, using present-day and future-day climate scenarios. The RCMs simulate winter storms and one-day-duration events at the sub-regional level. Annual maximum series were derived for each model pairing and each modeling period, and for the annual and winter seasons. The reliability ensemble average (REA) method was used to rate each RCM's annual maximum series on how well it reproduces historical records and how closely it approximates the ensemble-average prediction, since no future records exist for validation. These series were used to determine (a) shifts in extreme precipitation frequencies and magnitudes, and (b) shifts in distribution parameters between the modeling periods. The REA method showed that the winter season had lower REA factors than the annual season. For the winter season, the pairing of the Hadley Regional Model 3 with the Geophysical Fluid Dynamics Laboratory atmosphere-land general circulation model had the lowest REA factors; in replicating present-day climate, however, the pairing of the Abdus Salam International Centre for Theoretical Physics' Regional Climate Model Version 3 with the Geophysical Fluid Dynamics Laboratory model was superior. Shifts of extreme precipitation in the 24-hour event were measured using the precipitation magnitude at each frequency in the annual maximum series, and the difference frequency curve in the generalized extreme value function parameters. The average trend across all RCM pairings implied no significant shift in the winter annual maximum series; however, the REA-selected models showed an increase in annual-season precipitation extremes of about 0.37 inches for the 100-year return period, and about 0.57 inches for the same return period in the winter season. Shifts of extreme precipitation were estimated using RCM predictions 70 years into the future. Although these models do not provide climate information for the intervening 70-year period, they do provide an assertion about the behavior of future climate. The shift in extreme precipitation may be significant in the frequency distribution function and will vary with the conditions of each model pairing. The proposed methodology addresses many of the uncertainties associated with current approaches to extreme precipitation.
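A minimal sketch of the annual-maximum-series step: fit a generalized extreme value (GEV) distribution and read off a return level. The synthetic series is an illustrative assumption, not NARCCAP output (scipy's shape parameter `c` is the negative of the usual GEV shape ξ):

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic 40-year annual maximum series of 24-hour precipitation, in inches.
rng = np.random.default_rng(0)
ams = genextreme.rvs(c=-0.1, loc=1.8, scale=0.6, size=40, random_state=rng)

c, loc, scale = genextreme.fit(ams)                     # fitted GEV parameters
p100 = genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)
print(f"100-year 24-hour precipitation ~ {p100:.2f} in")
```

Comparing `p100` (and the fitted parameters) between present-day and future-day series is what quantifies the shift in extremes.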
Contributors: Riaño, Alejandro (Author) / Mays, Larry W. (Thesis advisor) / Vivoni, Enrique (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Stream computing has emerged as an important model of computation for embedded system applications, particularly in the multimedia and network processing domains. In the recent past, several programming languages and embedded multi-core processors have been proposed for streaming applications. This thesis examines the execution and dynamic scheduling of stream programs on embedded multi-core processors. It addresses the problem in the context of a multitasking environment with a time-varying allocation of processing elements to a particular streaming application. As a solution, the thesis proposes a two-step approach in which the stream program is first compiled to gather key application information and to generate retargetable code. A lightweight dynamic scheduler forms the second step of the approach: it uses the static information and the available resources to assign or partition the application across the multi-core architecture. The objective of the dynamic scheduler is to maximize the throughput of the application, and it is sensitive to the resource constraints (processing elements, scratch-pad memory, DMA bandwidth) imposed by the target architecture. We evaluate the proposed approach by compiling and scheduling benchmark stream programs on a representative embedded multi-core processor, and we present experimental results that assess the quality of the generated solutions by comparison with existing techniques.
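A minimal sketch of the dynamic-scheduling idea: given compile-time per-actor load estimates, greedily partition the actors across however many processing elements are currently available, since the most loaded element bounds throughput. The actor names and loads are illustrative assumptions, not the thesis's benchmarks:

```python
from heapq import heappush, heappop

def partition(actor_loads: dict, n_pe: int) -> list:
    """Longest-processing-time-first mapping of stream actors onto n_pe PEs."""
    heap = [(0.0, pe, []) for pe in range(n_pe)]        # (total load, PE id, actors)
    for actor, load in sorted(actor_loads.items(), key=lambda kv: -kv[1]):
        total, pe, actors = heappop(heap)               # least loaded PE so far
        heappush(heap, (total + load, pe, actors + [actor]))
    return sorted(heap, key=lambda e: e[1])

loads = {"src": 2.0, "fir": 7.0, "fft": 9.0, "mix": 4.0, "sink": 1.0}  # assumed
for total, pe, actors in partition(loads, n_pe=2):      # PEs granted by the OS
    print(f"PE{pe}: {actors} (load {total})")
```

When the OS changes the allocation, rerunning `partition` with the new `n_pe` re-maps the same static load estimates onto the new set of processing elements.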
Contributors: Lee, Haeseung (Author) / Chatha, Karamvir (Thesis advisor) / Vrudhula, Sarma (Committee member) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013