150660-Thumbnail Image.png
Description
Semiconductor scaling technology has led to a sharp growth in transistor counts. This has resulted in an exponential increase in both power dissipation and heat flux (or power density) in modern microprocessors. These microprocessors are integrated as the major components in many modern embedded devices, which offer richer features and attain higher performance than ever before. Therefore, power and thermal management have become significant design considerations for modern embedded devices. Dynamic voltage/frequency scaling (DVFS) and dynamic power management (DPM) are two well-known hardware capabilities offered by modern embedded processors. However, power- or thermal-aware performance optimization has not been fully explored for mainstream embedded processors with discrete DVFS and DPM capabilities, and many key problems remain unanswered. What is the maximum performance that an embedded processor can achieve under a power or thermal constraint for a periodic application? Does there exist an efficient algorithm with a guaranteed quality bound for the power or thermal management problems? These questions are hard to answer because the discrete settings of DVFS and DPM increase the complexity of many power and thermal management problems, which are generally NP-hard. The dissertation presents a comprehensive study of these NP-hard power and thermal management problems for embedded processors with discrete DVFS and DPM capabilities. In the domain of power management, the dissertation addresses the power minimization problem for real-time schedules, the energy-constrained makespan minimization problem on homogeneous and heterogeneous chip multiprocessor (CMP) architectures, and the battery-aware energy management problem with a nonlinear battery discharging model. In the domain of thermal management, the work addresses several thermal-constrained performance maximization problems for periodic embedded applications.
All the addressed problems are proved to be NP-hard or strongly NP-hard in the study. The work then focuses on designing off-line optimal or polynomial-time approximation algorithms as solutions in the problem design space. Several of the addressed NP-hard problems are tackled by dynamic programming, yielding optimal solutions in pseudo-polynomial run time. Because the optimal algorithms are inefficient in the worst case, fully polynomial-time approximation algorithms are provided as more efficient solutions. Efficient heuristic algorithms are also presented for several of the addressed problems. The comprehensive study answers the key questions needed to fully explore the power and thermal management potential of embedded processors with discrete DVFS and DPM capabilities. The provided solutions enable theoretical analysis of the maximum performance for periodic embedded applications under power or thermal constraints.
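The dynamic-programming approach mentioned above can be illustrated with a toy version of the problem: pick one of a few discrete DVFS levels per job so that a shared deadline is met at minimum energy. This is a hedged sketch of the pseudo-polynomial DP idea only, not the dissertation's exact formulation; the job cycle counts, (frequency, power) levels, and integer time quantization are illustrative assumptions.

```python
# Hedged sketch: pseudo-polynomial dynamic program for choosing a discrete
# DVFS level per job so that all jobs finish within a shared deadline at
# minimum energy. Illustrative formulation, not the dissertation's exact
# problem; cycle counts and (frequency, power) levels are made up.
import math

def min_energy_schedule(cycles, levels, deadline):
    """cycles: per-job cycle counts; levels: (frequency, power) pairs;
    deadline: integer time budget. Returns the minimum total energy of a
    feasible assignment, or None if no assignment meets the deadline."""
    INF = float("inf")
    # dp[t] = min energy to run the jobs seen so far in exactly t time units
    dp = [0.0] + [INF] * deadline
    for c in cycles:
        ndp = [INF] * (deadline + 1)
        for freq, power in levels:
            t = math.ceil(c / freq)       # execution time at this level
            e = power * t                 # energy = power * time
            for total in range(t, deadline + 1):
                cand = dp[total - t] + e
                if cand < ndp[total]:
                    ndp[total] = cand
        dp = ndp
    best = min(dp)
    return best if best < INF else None
```

Running a job slowly (low frequency, low power) saves energy but may miss the deadline; the DP explores all discrete tradeoffs in O(n · |levels| · D) time, pseudo-polynomial in the deadline D, which is exactly why fully polynomial-time approximation algorithms become attractive in the worst case.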
Contributors: Zhang, Sushu (Author) / Chatha, Karam S. (Thesis advisor) / Cao, Yu (Committee member) / Konjevod, Goran (Committee member) / Vrudhula, Sarma (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
150550-Thumbnail Image.png
Description
Ultra-concealable multi-threat body armor used by law enforcement is a multi-purpose armor that protects against attacks from knives, spikes, and small-caliber rounds. The design of this type of armor involves fiber-resin composite materials that are flexible and light, are not unduly affected by environmental conditions, and perform as required. The National Institute of Justice (NIJ) characterizes this type of armor as low-level protection armor. NIJ also specifies the geometry of the knife and spike as well as the strike energy levels required for this level of protection. The biggest challenge is to design thin, lightweight, ultra-concealable armor that can be worn under street clothes. In this study, several fundamental tasks involved in the design of such armor are addressed. First, the roles of design of experiments and regression analysis in experimental testing and finite element analysis are presented. Second, off-the-shelf materials available from international material manufacturers are characterized via laboratory experiments. Third, the calibration process required for a constitutive model is explained through the use of experimental data and computer software. Various material models in LS-DYNA for use in the finite element model are discussed. Numerical results are generated via finite element simulations and are compared against experimental data, thus establishing the foundation for optimizing the design.
Contributors: Vokshi, Erblina (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2012
150433-Thumbnail Image.png
Description

The current method of measuring thermal conductivity requires flat plates. For most common civil engineering materials, creating or extracting such samples is difficult. A prototype thermal conductivity experiment had been developed at Arizona State University (ASU) to test cylindrical specimens but proved difficult for repeated testing. In this study, enhancements to both testing methods were made. Additionally, test results of cylindrical testing were correlated with the results from identical materials tested by the Guarded Hot-Plate method, which uses flat plate specimens. In validating the enhancements made to the Guarded Hot-Plate and Cylindrical Specimen methods, 23 tests were run on five different materials. The percent difference shown for the Guarded Hot-Plate method was less than 1%. This gives strong evidence that the enhanced Guarded Hot-Plate apparatus is now more accurate for measuring thermal conductivity. The correlation between the thermal conductivity values of the Guarded Hot-Plate method and those of the enhanced Cylindrical Specimen method was excellent. The conventional concrete mixture, which had much higher thermal conductivity values than the other mixtures, yielded a P-value of 0.600, which provided confidence in the performance of the enhanced Cylindrical Specimen Apparatus. Several recommendations were made for the future implementation of both test methods. The work in this study meets the research community's and industry's desire for a more streamlined and cost-effective means to determine the thermal conductivity of various civil engineering materials.
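At steady state, a guarded hot-plate measurement reduces to one-dimensional Fourier conduction, k = QL/(A·ΔT). A minimal sketch of the back-calculation (the numbers in the comments and test are illustrative, not the study's data):

```python
# Hedged sketch: steady-state guarded hot-plate reduction to Fourier's law,
# k = Q * L / (A * dT). Illustrative only; not the study's apparatus code.
def thermal_conductivity(heat_flow_w, thickness_m, area_m2, delta_t_k):
    """Back-calculate thermal conductivity k (W/m-K) from measured heat
    flow Q (W), specimen thickness L (m), metered plate area A (m^2),
    and the temperature difference dT (K) across the specimen."""
    if delta_t_k <= 0:
        raise ValueError("temperature difference must be positive")
    return heat_flow_w * thickness_m / (area_m2 * delta_t_k)
```

For example, 50 W driven through a 5 cm thick specimen over a 0.25 m² plate with a 10 K drop gives k = 1.0 W/m-K, in the range of a conventional concrete.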

Contributors: Morris, Derek (Author) / Kaloush, Kamil (Thesis advisor) / Mobasher, Barzin (Committee member) / Phelan, Patrick E. (Committee member) / Arizona State University (Publisher)
Created: 2011
151078-Thumbnail Image.png
Description
A unique feature, yet a challenge, in cognitive radio (CR) networks is the user hierarchy: secondary users (SUs) wishing to transmit data must defer in the presence of active primary users (PUs), whose priority in channel access is strictly higher. Under a common thread of characterizing and improving Quality of Service (QoS) for the SUs, this dissertation is progressively organized under two main thrusts: the first thrust focuses on SU throughput by exploiting the underlying properties of the PU spectrum to perform effective scheduling algorithms; the second thrust aims at another important QoS metric for the SUs, namely delay, subject to the impact of the PUs' activities, and proposes enhancement and control mechanisms. More specifically, in the first thrust, opportunistic spectrum scheduling for SUs is first considered by jointly exploiting the memory in PU occupancy and channel fading. In particular, the underexplored scenario where PU occupancy exhibits a long temporal memory is taken into consideration. By casting the problem as a partially observable Markov decision process, a set of multi-tier tradeoffs is quantified and illustrated. Next, a spectrum shaping framework is proposed by leveraging network coding as a "spectrum shaper" of the PU traffic. The shaping effect brings predictability to the primary spectrum, which the SUs exploit to carry out adaptive channel sensing by prioritizing channel access order, and hence significantly improve their throughput. On the other hand, such predictability can make wireless channels more susceptible to jamming attacks. As a result, caution must be taken in designing wireless systems to balance throughput and jamming resistance. The second thrust turns attention to an equally important performance metric, delay. Specifically, queueing delay analysis is conducted for SUs employing random access over the PU channels.
A fluid approximation is adopted and Poisson-driven stochastic differential equations are applied to characterize the moments of the SUs' steady-state queueing delay. Dynamic packet generation control mechanisms are then developed to meet given delay requirements for the SUs.
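The qualitative effect behind this delay analysis, heavier PU activity means longer SU queueing delay, can be mimicked with a toy discrete-time simulation in which SU packets arrive randomly and can only be served in slots where the PU channel happens to be idle, with mean delay recovered via Little's law. This Monte Carlo sketch is a stand-in for, not a reproduction of, the fluid/SDE analysis; the arrival and idle probabilities are made-up parameters.

```python
# Hedged toy model: SU packets arrive randomly; the head-of-line packet can
# depart only in a slot where the PU channel is idle. Mean delay follows
# from Little's law. Parameters are illustrative assumptions.
import random

def mean_queue_delay(arrival_prob, idle_prob, slots=50000, seed=1):
    """Average SU queueing delay, in slots, estimated by simulation."""
    rng = random.Random(seed)
    q = 0          # current queue length
    area = 0       # time-accumulated queue length
    served = 0     # number of departures
    for _ in range(slots):
        if rng.random() < arrival_prob:
            q += 1                       # Bernoulli packet arrival
        if q > 0 and rng.random() < idle_prob:
            q -= 1                       # service only when the PU is idle
            served += 1
        area += q
    if served == 0:
        return float("inf")
    return (area / slots) / (served / slots)   # Little's law: W = L / lambda
```

Lowering `idle_prob` (heavier PU activity) inflates the SU delay, which is precisely the effect the dynamic packet generation control mechanisms are designed to counteract.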
Contributors: Wang, Shanshan (Author) / Zhang, Junshan (Thesis advisor) / Xue, Guoliang (Committee member) / Hui, Joseph (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012
151063-Thumbnail Image.png
Description
Interference constitutes a major challenge for communication networks operating over a shared medium where availability is imperative. This dissertation studies the problem of designing and analyzing efficient medium access control (MAC) protocols that are robust against strong adversarial jamming. More specifically, four medium access protocols (JADE, ANTIJAM, COMAC, and SINRMAC), which aim to achieve high throughput despite jamming activities under a variety of network and adversary models, are presented. We also propose a self-stabilizing leader election protocol, SELECT, that can effectively elect a leader in the network in the presence of a strong adversary. Our protocols not only deal with internal interference without exact knowledge of the number of participants in the network, but are also robust to unintentional or intentional external interference, e.g., due to co-existing networks or jammers. We model the external interference by a powerful adaptive and/or reactive adversary which can jam a (1 − ε)-portion of the time steps, where 0 < ε ≤ 1 is an arbitrary constant. We allow the adversary to be adaptive and to have complete knowledge of the entire protocol history. Moreover, in case the adversary is also reactive, it uses carrier sensing to make informed decisions about when to disrupt communications. Among the proposed protocols, JADE, ANTIJAM and COMAC achieve Θ(1)-competitive throughput in the presence of the strong adversary, while SINRMAC is the first attempt to apply the SINR (Signal to Interference plus Noise Ratio) model in the design of robust medium access protocols. The derived principles are also useful for building applications on top of the MAC layer; as an exemplary study, SELECT addresses leader election, one of the most fundamental tasks in distributed computing.
Contributors: Zhang, Jin (Author) / Richa, Andréa W. (Thesis advisor) / Scheideler, Christian (Committee member) / Sen, Arunabha (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
151002-Thumbnail Image.png
Description
This study considered the impact of grid resolution on wind velocity simulated by the Weather Research and Forecasting (WRF) model. The simulated period spanned November 2009 through January 2010, during which multi-resolution nested domains were examined. A preliminary analysis utilizing the data assimilation tools of NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) identified the Pacific Northwest portion of the United States, specifically the border between California and Oregon, as the ideal location to examine in the simulation. The simulated multi-resolution nested domains in this region indicated an increase in apparent wind speed as the domain resolution was increased. These findings were confirmed by statistical analysis, which identified a positive bias for wind speed with respect to increased resolution, as well as correlation coefficients indicating a positive change in wind speed with increased resolution. An analysis of temperature change was performed to test the validity of the WRF simulation findings. The statistical analysis of temperature across the increased grid resolutions did not indicate any change in temperature; in fact, the correlation coefficients between the domains were in the 0.90 range, indicating the insensitivity of temperature to the increased resolutions. These results validate the findings of the WRF simulation: increased wind velocity can be observed at higher grid resolution. The study then considered the difference between the wind velocity observed over the entire domains and that observed solely over offshore locations. Wind velocity was observed to be significantly higher (an increase of 68.4%) at the offshore locations.
The findings of this study suggest that simulation tools should be utilized to examine domains at higher resolution in order to identify potential locations for wind farms. The results further suggest that the ideal locations for these potential wind farms will be offshore.
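The two statistics this comparison rests on, the mean bias of the fine-resolution winds relative to the coarse run and the Pearson correlation between the two series, are straightforward to compute. A small sketch with invented coarse- and fine-resolution wind samples (illustrative only, not the study's data):

```python
# Hedged sketch of the comparison statistics: mean bias (fine - coarse)
# and Pearson correlation between paired wind-speed series. Sample values
# used in testing are invented, not the study's model output.
import statistics

def bias_and_correlation(coarse, fine):
    """Return (mean bias fine - coarse, Pearson r) for paired series."""
    n = len(coarse)
    bias = sum(f - c for c, f in zip(coarse, fine)) / n
    mc, mf = statistics.mean(coarse), statistics.mean(fine)
    cov = sum((c - mc) * (f - mf) for c, f in zip(coarse, fine))
    var_c = sum((c - mc) ** 2 for c in coarse)
    var_f = sum((f - mf) ** 2 for f in fine)
    return bias, cov / (var_c * var_f) ** 0.5
```

A positive bias together with high r is the signature reported above: the finer grid shifts wind speeds upward while tracking the same variability, whereas for temperature the bias vanishes and only the high correlation remains.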
Contributors: Bouey, Michael (Author) / Huang, Huei-Ping (Thesis advisor) / Trimble, Steve (Committee member) / Ronald, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2012
151005-Thumbnail Image.png
Description
The project is mainly aimed at detecting gas flow rate in biosensor and medical health applications by means of an acoustic method using a whistle-based device. Considering the challenges involved in maintaining a particular flow rate and back pressure for detecting certain analytes in breath analysis, the proposed system, together with a cell phone, provides a suitable way to maintain the flow rate without any additional battery-driven device. To achieve this, a system-level approach is implemented that involves the development of a closed-end whistle placed inside a tightly fitted constant-back-pressure tube. Through experimentation, a pressure vs. flow rate curve is first obtained and used for the development of the particular whistle. Finally, by means of an FFT code on a cell phone, the flow rate vs. frequency characteristic curve is obtained. When a person respires through the device, a whistle sound is generated, which is captured by the cell phone microphone; an FFT analysis is performed to determine the frequency and hence the flow rate from the characteristic curve. This approach can detect flow rates as low as 1 L/min. The concept has been applied for the first time in this work to the development and optimization of a breath analyzer.
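The signal-processing core of the approach, picking the dominant whistle frequency from a microphone capture and mapping it to a flow rate, can be sketched as follows. The plain O(n²) DFT is for clarity only (a phone app would use an optimized FFT), and the linear calibration function is a hypothetical stand-in for the experimentally measured flow rate vs. frequency characteristic curve.

```python
# Hedged sketch of the analysis chain: dominant-frequency detection via a
# plain DFT, then a *hypothetical* linear calibration from frequency to
# flow rate standing in for the measured characteristic curve.
import cmath, math

def dominant_frequency(samples, sample_rate):
    """Frequency (Hz) of the strongest DFT bin, ignoring the DC bin."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n

def flow_from_frequency(freq_hz, slope=0.01, intercept=0.0):
    """Hypothetical linear calibration: flow (L/min) vs. frequency (Hz)."""
    return slope * freq_hz + intercept
```

With a 1 kHz sample rate and a 200-sample window, the frequency resolution is 5 Hz; a pure 150 Hz whistle tone lands exactly on bin 30 and is recovered exactly.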
Contributors: Ravichandran, Balaje Dhanram (Author) / Forzani, Erica (Thesis advisor) / Xian, Xiaojun (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2012
150726-Thumbnail Image.png
Description
Three micro-scale heat and mass transfer problems have been studied numerically: drug mass transfer in a cylindrical matrix system, the simulation of oxygen/drug diffusion in a three-dimensional capillary network, and reduced chemical kinetic modeling of gas turbine combustion for Jet Propellant-10. For the numerical analysis of drug mass transfer in the cylindrical matrix system, the governing equations are derived from the Krogh cylinder model, in which a capillary and its surrounding cylinder of tissue span the arterial-to-venous distance. The ADI (Alternating Direction Implicit) scheme and the Thomas algorithm are applied to solve the nonlinear partial differential equations (PDEs). This study shows that the important factors affecting the drug penetration depth into the tissue are the mass diffusivity and the consumption of the relevant species during the time allowed for diffusion into the brain tissue. Also, a computational fluid dynamics (CFD) model has been developed to simulate blood flow and oxygen/drug diffusion in a three-dimensional capillary network within the physiological range of a typical capillary. A three-dimensional geometry has been constructed to replicate the one studied by Secomb et al. (2000), and the computational framework features a non-Newtonian viscosity model for blood, an oxygen transport model including oxygen-hemoglobin dissociation and wall flux due to tissue absorption, as well as the ability to study the diffusion of drugs and other materials in the capillary streams.
Finally, a chemical kinetic mechanism for JP-10 has been compiled and validated for a wide range of combustion regimes, covering pressures of 1 atm to 40 atm and temperatures of 1,200 K - 1,700 K; JP-10 is being studied as a possible jet propellant for the Pulse Detonation Engine (PDE) and other high-speed flight applications such as hypersonic missiles. The comprehensive skeletal mechanism consists of 58 species and 315 reactions, including CPD and benzene formation via the theory of polycyclic aromatic hydrocarbons (PAH) and the soot formation process, and is validated against constant-volume combustor and premixed flame characteristics.
Contributors: Bae, Kang-Sik (Author) / Lee, Taewoo (Thesis advisor) / Huang, Huei-Ping (Committee member) / Calhoun, Ronald (Committee member) / Phelan, Patrick (Committee member) / Lopez, Juan (Committee member) / Arizona State University (Publisher)
Created: 2012
151207-Thumbnail Image.png
Description
This doctoral thesis investigates the predictability characteristics of floods and flash floods by coupling high-resolution precipitation products to a distributed hydrologic model. The research hypotheses are tested at multiple watersheds in the Colorado Front Range (CFR) undergoing warm-season precipitation. Rainfall error structures are expected to propagate into the hydrologic simulations, with uncertainties added by model parameters and initial conditions. Specifically, the following science questions are addressed: (1) What is the utility of Quantitative Precipitation Estimates (QPE) for high-resolution hydrologic forecasts in mountain watersheds of the CFR? (2) How does the rainfall-reflectivity relation determine the magnitude of errors when radar observations are used for flood forecasts? (3) What are the spatiotemporal limits of flood forecasting in mountain basins when radar nowcasts are fed into a distributed hydrological model? The methodology consists of QPE evaluations at the site (i.e., rain gauge location), basin-average and regional scales, and Quantitative Precipitation Forecast (QPF) assessment through regional grid-to-grid verification techniques and ensemble basin-averaged time series. The corresponding hydrologic responses, which include outlet discharges, distributed runoff maps, and streamflow time series at internal channel locations, are used in light of observed and/or reference data to diagnose the suitability of fusing precipitation forecasts into a distributed model operating at multiple catchments. Results reveal that radar and multisensor QPEs lead to improved hydrologic performance compared to simulations driven with rain gauge data only. In addition, hydrologic performances attained by satellite products preserve the fundamental properties of basin responses, including a simple scaling relation between the relative spatial variability of runoff and its magnitude.
Overall, the spatial variations contained in gridded QPEs add value for warm-season flood forecasting in mountain basins with sparse data, even if those products contain some biases. These results are encouraging and open new avenues for forecasting in regions with limited access and sparse observations. Regional comparisons of different reflectivity-rainfall (Z-R) relations during three summer seasons illustrated significant rainfall variability across the region. Consistently, the hydrologic errors introduced by the distinct Z-R relations are significant and proportional (in log-log space) to the errors in precipitation estimation and to streamflow magnitude. The use of operational Z-R relations without prior calibration may lead to wrong estimation of precipitation and runoff magnitude and to increased flood forecasting errors. This suggests that site-specific Z-R relations, established prior to forecasting, are desirable in complex terrain regions. Nowcasting experiments show the limits of flood forecasting and its dependence on lead time and basin scale. Across the majority of the basins, flood forecasting skill decays with lead time, but the functional relation depends on the interactions between watershed properties and rainfall characteristics. Both precipitation and flood forecasting skills are noticeably reduced for lead times greater than 30 minutes. The scale dependence of hydrologic forecasting errors demonstrates reduced predictability at intermediate-size basins, the typical scale of convective storm systems. Overall, the fusion of high-resolution radar nowcasts and the convenient parallel capabilities of the distributed hydrologic model provides an efficient framework for generating accurate real-time flood forecasts suitable for operational environments.
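A Z-R relation has the power-law form Z = aR^b, so radar rain rate follows by converting dBZ to linear reflectivity and inverting. The sketch below uses the classic Marshall-Palmer coefficients (a = 200, b = 1.6) as a stand-in; the study's point is precisely that such off-the-shelf coefficients can misestimate rainfall when not calibrated to the site.

```python
# Hedged sketch: invert a power-law Z-R relation, Z = a * R**b, to retrieve
# rain rate from radar reflectivity. Defaults are the classic
# Marshall-Palmer coefficients, used here only as an illustrative stand-in.
def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from reflectivity in dBZ via Z = a * R**b."""
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> Z in mm^6 / m^3
    return (z_linear / a) ** (1.0 / b)
```

Swapping in a different coefficient pair, e.g. one tuned for convective storms, changes the retrieved rain rate for the same radar echo, and those differences propagate log-linearly into the simulated streamflow, as described above.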
Contributors: Moreno Ramirez, Hernan (Author) / Vivoni, Enrique R. (Thesis advisor) / Ruddell, Benjamin L. (Committee member) / Gochis, David J. (Committee member) / Mays, Larry W. (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2012
151212-Thumbnail Image.png
Description
This study performs numerical modeling of the climate of semi-arid regions by running a high-resolution atmospheric model constrained by large-scale climatic boundary conditions, a practice commonly called climate downscaling. The investigations focus especially on precipitation and temperature, quantities that are critical to life in semi-arid regions. Using the Weather Research and Forecasting (WRF) model, a non-hydrostatic geophysical fluid dynamical model with a full suite of physical parameterizations, a series of numerical sensitivity experiments is conducted to test how the intensity and spatial/temporal distribution of precipitation change with grid resolution, time step size, and the resolution of lower boundary topography and surface characteristics. Two regions, Arizona in the U.S. and the Aral Sea region in Central Asia, are chosen as test-beds for the numerical experiments: the former for its complex terrain and the latter for the dramatic man-made changes in its lower boundary conditions (the shrinkage of the Aral Sea). Sensitivity tests show that the parameterization schemes for rainfall are not resolution-independent, thus a refinement of resolution is no guarantee of a better result. The simulations at all resolutions do, however, capture the inter-annual variability of rainfall over Arizona, and temperature is simulated more accurately with refinement in resolution. Results show that both seasonal mean rainfall and the frequency of extreme rainfall events increase with resolution. For the Aral Sea, sensitivity tests indicate that while the shrinkage of the Aral Sea has a dramatic impact on precipitation over the confines of the (former) Aral Sea itself, its effect on precipitation over greater Central Asia is not necessarily greater than the inter-annual variability induced by the lateral boundary conditions in the model and large-scale warming in the region.
The numerical simulations in the study are cross-validated against observations to assess the realism of the regional climate model. The findings of this sensitivity study are useful for water resource management in semi-arid regions. The high spatio-temporal resolution gridded data can be used as input for hydrological models in regions such as Arizona with complex terrain and sparse observations. Results from the simulations of the Aral Sea region are expected to contribute to ecosystem management for Central Asia.
Contributors: Sharma, Ashish (Author) / Huang, Huei-Ping (Thesis advisor) / Adrian, Ronald (Committee member) / Herrmann, Marcus (Committee member) / Phelan, Patrick E. (Committee member) / Vivoni, Enrique (Committee member) / Arizona State University (Publisher)
Created: 2012