Matching Items (256)
Description
This study applies the probabilistic nature of material properties (Kevlar® 49) to an existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and implements probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented, along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT), for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research implements probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions; however, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability analysis along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research begins with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation in which sensitivity analysis is performed to remove the highly reliable constraints from the RBDO, thereby reducing the computational time and function evaluations. Finally, the reliability analysis concepts and RBDO are implemented in finite element 2D truss problems and a planar beam problem, and the results are presented and discussed.
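The Monte Carlo step can be illustrated with a minimal sketch (not the thesis's LS-DYNA model): sample a strength distribution fitted to test data, sample the demand, and count limit-state violations. All distribution parameters below are invented placeholders.

```python
# Minimal Monte Carlo sketch: estimate probability of failure when the
# material strength is random (hypothetical lognormal fit to test data).
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Hypothetical parameters; real values would come from experimental data
# analysis of Kevlar 49 coupon tests.
strength = rng.lognormal(mean=np.log(3.0), sigma=0.10, size=n_samples)  # GPa
demand = rng.normal(loc=2.4, scale=0.15, size=n_samples)                # GPa

# Limit state g = strength - demand; failure when g < 0.
p_f = np.mean(strength - demand < 0.0)
print(f"Estimated probability of failure: {p_f:.4f}")
print(f"Estimated reliability: {1 - p_f:.4f}")
```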
ContributorsDeivanayagam, Arumugam (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created2012
Description
The ease of use of mobile devices and tablets by students has generated a lot of interest in the area of engineering education. By using mobile technologies in signal analysis and applied mathematics, undergraduate-level courses can broaden the scope and effectiveness of technical education in classrooms. Current mobile devices have abundant memory and powerful processors, in addition to providing interactive interfaces, and can therefore support the implementation of non-trivial signal processing algorithms. Several existing visual programming environments, such as Java Digital Signal Processing (J-DSP), are built on the platform-independent infrastructure of Java applets, which enables students to perform signal processing exercises over the Internet. However, some mobile devices do not support Java applets, and mobile simulation environments rely heavily on establishing robust Internet connections with a remote server where the processing is performed. The interactive Java Digital Signal Processing tool (iJDSP) has been developed as a graphical mobile app for iOS devices (iPads, iPhones, and iPod touches). In contrast to existing mobile applications, iJDSP executes simulations directly on the mobile device and is a completely stand-alone application. In addition to a substantial set of signal processing algorithms, iJDSP has a highly interactive graphical interface in which block diagrams can be constructed using a simple drag-and-drop procedure. Functions such as visualization of the convolution operation and an interface to wireless sensors have been developed. The convolution module animates the continuous and discrete convolution operations, including time shift and integration, so that users can observe and learn intuitively. The current set of DSP functions in the application enables students to perform simulation exercises on continuous and discrete convolution, the z-transform, filter design, and the Fast Fourier Transform (FFT). The interface to wireless sensors allows users to import data from wireless sensor networks and apply the rich suite of functions in iJDSP for data processing, enabling operations such as localization, activity detection, and data fusion. The exercises and the iJDSP application were evaluated by senior-level students at Arizona State University (ASU), and the results of those assessments are analyzed and reported in this thesis.
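As a flavor of what the convolution module animates, the following sketch implements the discrete convolution sum y[n] = sum_k x[k]·h[n-k] directly and checks it against NumPy's built-in (iJDSP itself is an iOS app; this Python code is purely illustrative).

```python
# Sketch of discrete convolution: slide a flipped, shifted copy of h over x
# and accumulate the products at each shift.
import numpy as np

def discrete_convolution(x, h):
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]   # time-shifted product, summed
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, 0.5])          # simple moving-average impulse response
print(discrete_convolution(x, h))
print(np.convolve(x, h))          # should match
```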
ContributorsHu, Shuang (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Kostas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2012
Description
Cognitive Radios (CR) are designed to dynamically reconfigure their transmission and/or reception parameters to utilize bandwidth efficiently. In a rapidly fluctuating radio environment, spectrum management becomes crucial for cognitive radios. In a Cognitive Radio Ad Hoc Network (CRAHN) setting, the sensing and transmission times of the cognitive radio play an especially important role because of the decentralized nature of the network; they have a direct impact on throughput. Because of the tradeoff between throughput and sensing time, finding optimal values for the sensing and transmission times is difficult. In this thesis, a method is proposed to improve the throughput of a CRAHN by dynamically changing the sensing and transmission times. To simulate the CRAHN setting, ns-2, the network simulator, is used with a CRAHN extension module that implements the required Primary User (PU), Secondary User (SU), and other CR functionalities for a realistic CRAHN scenario. First, this work presents a detailed analysis of various CR parameters, their interactions, and their individual contributions to throughput, to understand how they affect transmissions in the network. Based on the results of this analysis, changes to the system model in the CRAHN extension are proposed. Instantaneous throughput of the network is introduced into the new model, which helps determine how the parameters should adapt based on the current throughput. Along with instantaneous throughput, checks for interference with the PUs and for their transmission power are performed before modifying these CR parameters. Simulation results demonstrate that the throughput of the CRAHN with adaptive sensing and transmission times is significantly higher than with non-adaptive parameters.
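The sensing-time/throughput tradeoff can be caricatured with a toy model (not the ns-2 CRAHN model used in the thesis): longer sensing lowers the false-alarm rate of an energy detector but leaves less of the frame for transmission. The frame length, SNR, sampling factor, and detector approximation below are all assumptions.

```python
# Toy tradeoff: normalized SU throughput as a function of sensing time.
import math

frame = 100.0   # ms per frame (assumed)
snr = 0.05      # PU SNR at the SU detector (assumed)
fs = 6.0        # detector samples per ms (assumed)

def throughput(t_sense):
    # More samples -> lower false-alarm probability for an energy detector
    # (Q-function approximation; purely illustrative).
    n = fs * t_sense
    p_fa = 0.5 * math.erfc(math.sqrt(n) * snr / math.sqrt(2.0))
    # Whatever is left of the frame is available for SU transmission.
    return (frame - t_sense) / frame * (1.0 - p_fa)

best_rate, best_t = max((throughput(t), t) for t in range(1, int(frame)))
print(f"Best sensing time ~{best_t} ms -> normalized throughput {best_rate:.3f}")
```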
ContributorsBapat, Namrata Arun (Author) / Syrotiuk, Violet R. (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created2012
Description
Concrete design has recently seen a shift in focus from prescriptive specifications to performance-based specifications, with increasing demands for sustainable products. Fiber-reinforced composites (FRC) provide unique properties to a material that is very weak under tensile loads: the addition of fibers to a concrete mix provides additional ductility and reduces the propagation of cracks in the concrete structure. It is the fibers that bridge the crack and dissipate the incurred strain energy through a fiber-pullout mechanism. The addition of fibers plays an important role in tunnel lining systems and in reducing shrinkage cracking in high-performance concretes. The quantity of interest in most design situations is the load at which cracking first takes place. Typically, the post-crack response exhibits either an increase in load-bearing capacity as deflection continues (strain hardening) or a decrease (strain softening). A strain-softening or strain-hardening response is used to model the behavior of different types of fiber-reinforced concrete and to simulate the experimental flexural response. Closed-form equations for the moment-curvature response of rectangular beams under four- and three-point loading, in conjunction with crack localization rules, are utilized. As a result, a stress distribution that accounts for a shifting neutral axis can be simulated, which provides a more accurate representation of the residual strength of fiber cement composites. The typical residual strength parameters used by the standards organizations ASTM, JCI, and RILEM are shown to be incorrect in their linear elastic assumption of FRC behavior. Finite element models were implemented to study the effects and simulate the load-deflection response of fiber-reinforced shotcrete round determinate panels (RDPs) tested in accordance with ASTM C1550. The back-calculated material properties from the flexural tests were used as the basis for the FEM material models, and further FEM beam models provided additional comparisons of the residual strengths of early-age samples. A correlation between the RDP and flexural beam tests was generated based on a relationship between normalized toughness and the newly generated crack surfaces. A set of design equations is proposed that uses a residual strength correction factor generated by the model to produce the design moment for a specified concrete slab geometry.
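For reference, here is a hedged sketch of the elastic third-point-loading stress formula the standards use (and that this work critiques), plus a trapezoidal toughness computation over a synthetic load-deflection record; the span, section dimensions, and load data are placeholders, not test results.

```python
# Elastic flexural stress and toughness from a load-deflection record.
import numpy as np

def flexural_stress(load_n, span, width, depth):
    """Elastic flexural stress (MPa) for third-point loading: f = P*L/(b*d^2)."""
    return load_n * span / (width * depth**2)

# Synthetic softening load-deflection record (N vs. mm) -- placeholder data.
deflection = np.linspace(0.0, 3.0, 61)
load = np.maximum(20e3 - 5e3 * deflection, 2e3)

# Toughness = area under the load-deflection curve (trapezoidal rule).
toughness = float(np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(deflection)))

print(f"Peak flexural stress: {flexural_stress(load.max(), 450.0, 150.0, 150.0):.2f} MPa")
print(f"Toughness: {toughness / 1000:.1f} J")
```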
ContributorsBarsby, Christopher (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Semiconductor scaling technology has led to a sharp growth in transistor counts, resulting in an exponential increase in both power dissipation and heat flux (or power density) in modern microprocessors. These microprocessors are integrated as the major components in many modern embedded devices, which offer richer features and attain higher performance than ever before. Power and thermal management have therefore become significant design considerations for modern embedded devices. Dynamic voltage/frequency scaling (DVFS) and dynamic power management (DPM) are two well-known hardware capabilities offered by modern embedded processors. However, power- and thermal-aware performance optimization is not fully explored for mainstream embedded processors with discrete DVFS and DPM capabilities, and many key questions remain open. What is the maximum performance that an embedded processor can achieve under a power or thermal constraint for a periodic application? Do efficient algorithms with guaranteed quality bounds exist for these power and thermal management problems? These questions are hard to answer because the discrete settings of DVFS and DPM increase the complexity of many power and thermal management problems, which are generally NP-hard. This dissertation presents a comprehensive study of these NP-hard power and thermal management problems for embedded processors with discrete DVFS and DPM capabilities. In the domain of power management, the dissertation addresses the power minimization problem for real-time schedules, the energy-constrained makespan minimization problem on homogeneous and heterogeneous chip multiprocessor (CMP) architectures, and the battery-aware energy management problem with a nonlinear battery discharging model. In the domain of thermal management, the work addresses several thermal-constrained performance maximization problems for periodic embedded applications. All the addressed problems are proved to be NP-hard or strongly NP-hard. The work then focuses on designing off-line optimal or polynomial-time approximation algorithms spanning the problem design space. Several of the NP-hard problems are tackled by dynamic programming, yielding optimal solutions in pseudo-polynomial run time. Because the optimal algorithms are not efficient in the worst case, fully polynomial-time approximation algorithms are provided as more efficient solutions, and efficient heuristic algorithms are also presented for several of the addressed problems. The comprehensive study answers the key questions needed to fully explore the power and thermal management potential of embedded processors with discrete DVFS and DPM capabilities, and the provided solutions enable theoretical analysis of the maximum performance of periodic embedded applications under power or thermal constraints.
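The pseudo-polynomial dynamic-programming flavor can be sketched for one representative simplification: choosing a discrete frequency level per task to minimize energy under a deadline. The task workloads, frequency levels, powers, and time step below are invented, and this is far simpler than the problems actually studied.

```python
# Hedged DP sketch: per-task discrete DVFS level selection under a deadline.
import math

tasks = [2.0, 3.0, 1.5]                          # workloads in giga-cycles (invented)
levels = [(1.0, 0.4), (1.5, 1.0), (2.0, 2.2)]    # (GHz, Watts) per level (invented)
deadline = 5.0                                   # seconds
step = 0.1                                       # time discretization: the source
slots = int(deadline / step)                     # of pseudo-polynomial run time
INF = math.inf

dp = [INF] * (slots + 1)     # dp[t] = min energy finishing within t time slots
dp[0] = 0.0

for cycles in tasks:
    ndp = [INF] * (slots + 1)
    for t in range(slots + 1):
        if dp[t] == INF:
            continue
        for freq, power in levels:
            dur = cycles / freq                   # seconds at this level
            nt = t + math.ceil(dur / step)        # conservative rounding up
            if nt <= slots:
                ndp[nt] = min(ndp[nt], dp[t] + power * dur)
    dp = ndp

best = min(dp)
print(f"Minimum energy meeting the deadline: {best:.2f} J" if best < INF else "Infeasible")
```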
ContributorsZhang, Sushu (Author) / Chatha, Karam S (Thesis advisor) / Cao, Yu (Committee member) / Konjevod, Goran (Committee member) / Vrudhula, Sarma (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created2012
Description
Photovoltaics (PV) is an important and rapidly growing area of research. With the advent of the power system monitoring and communication technology collectively known as the "smart grid," an opportunity exists to apply signal processing techniques to the monitoring and control of PV arrays. In this work, a monitoring system that provides real-time measurements of each PV module's voltage and current is considered. A fault detection algorithm, formulated as a clustering problem and addressed using the robust minimum covariance determinant (MCD) estimator, is described, and its performance on simulated instances of arc and ground faults is evaluated. The algorithm is found to perform well on many types of faults commonly occurring in PV arrays. Among the several types of detection algorithms considered, only the MCD shows high performance on both types of faults.
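A minimal sketch of MCD-based outlier flagging on per-module (voltage, current) data, using scikit-learn's MinCovDet: fit a robust covariance to the measurements and flag points with large robust Mahalanobis distances. The data, fault signatures, and threshold are synthetic stand-ins, not the thesis's simulations.

```python
# Robust MCD fit, then flag outliers by squared robust Mahalanobis distance.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
normal = rng.multivariate_normal([30.0, 8.0], [[0.4, 0.05], [0.05, 0.1]], size=200)
faulted = np.array([[30.0, 2.0], [12.0, 8.5]])   # e.g. shaded / shorted modules
X = np.vstack([normal, faulted])                 # (voltage V, current A) rows

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                  # squared robust distances
threshold = np.quantile(d2[:-2], 0.995)  # illustrative cutoff from the bulk
print("Flagged indices:", np.where(d2 > threshold)[0])
```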
ContributorsBraun, Henry (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2012
Description
With the increased usage of green energy, the number of photovoltaic arrays used in power generation is growing rapidly. Many arrays are located at remote sites, where faults that occur within the array often go unnoticed and unattended for long periods of time, and technicians sent to rectify the faults must spend a large amount of time locating them manually. Automated monitoring systems are needed to gather information about the performance of the array and to detect faults. Such systems must monitor the DC side of the array in addition to the AC side in order to identify non-catastrophic faults. This thesis focuses on two requirements for DC-side monitoring in an automated PV array monitoring system. The first part of the thesis quantifies the advantages that higher-resolution PV array data provide for fault detection. Monitoring data can be gathered for the array as a whole or from additional points within the array, such as individual modules and the ends of strings. Fault detection rates and false-positive rates are compared for array-level, string-level, and module-level PV data, using Monte Carlo simulations of PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) for visualizing the PV array with module-level monitoring information. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations display information about the array as a whole or about individual modules, and locate faults within the array.
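The resolution comparison can be illustrated with a toy Monte Carlo (not the thesis's Simulink/MATLAB models): simulate one faulted module and test whether it stands out above sensor noise at module, string, and array granularity. The array size, power drop, noise level, and 3-sigma thresholds are all assumptions.

```python
# Toy Monte Carlo: detection rate of a single-module fault at three granularities.
import numpy as np

rng = np.random.default_rng(1)
n_mod, n_str = 10, 8      # modules per string, strings per array (assumed)
drop, noise = 0.5, 0.03   # fractional power loss of faulted module; sensor noise
trials = 10_000
hits = {"module": 0, "string": 0, "array": 0}

for _ in range(trials):
    power = np.ones((n_str, n_mod))
    power[0, 0] -= drop                              # single faulted module
    meas = power * (1 + noise * rng.standard_normal(power.shape))
    # 3-sigma thresholds below the healthy expectation at each granularity
    if meas[0, 0] < 1 - 3 * noise:
        hits["module"] += 1
    if meas[0].sum() < n_mod - 3 * noise * np.sqrt(n_mod):
        hits["string"] += 1
    if meas.sum() < n_mod * n_str - 3 * noise * np.sqrt(n_mod * n_str):
        hits["array"] += 1

for level, h in hits.items():
    print(f"{level:>6}-level detection rate: {h / trials:.2%}")
```

In this toy setup the fault is nearly always visible at module level, usually visible at string level, and mostly lost in aggregate noise at array level, which is the qualitative point of the comparison.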
ContributorsKrishnan, Venkatachalam (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Ayyanar, Raja (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2012
Description
Ultra-concealable multi-threat body armor used by law enforcement is a multi-purpose armor that protects against attacks from knives, spikes, and small-caliber rounds. The design of this type of armor involves fiber-resin composite materials that are flexible and light, are not unduly affected by environmental conditions, and perform as required. The National Institute of Justice (NIJ) characterizes this type of armor as low-level protection armor; NIJ also specifies the geometry of the knife and spike, as well as the strike energy levels required for this level of protection. The biggest challenge is to design thin, lightweight, ultra-concealable armor that can be worn under street clothes. In this study, several fundamental tasks involved in the design of such armor are addressed. First, the roles of design of experiments and regression analysis in experimental testing and finite element analysis are presented. Second, off-the-shelf materials available from international material manufacturers are characterized via laboratory experiments. Third, the calibration process required for a constitutive model is explained through the use of experimental data and computer software, and various material models in LS-DYNA for use in the finite element model are discussed. Numerical results generated via finite element simulations are compared against experimental data, establishing the foundation for optimizing the design.
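As a hedged sketch of the design-of-experiments/regression step, the code below fits a first-order response-surface model to a small factorial design with ordinary least squares. The factors, coded levels, and responses are invented placeholders, not measurements from this study.

```python
# Fit strength ~ b0 + b1*factor1 + b2*factor2 from a 2^2 factorial design.
import numpy as np

# Coded factor levels from a 2^2 factorial with a center point (invented).
X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
y = np.array([18.2, 24.1, 20.5, 27.9, 22.4])     # e.g. energy absorbed (J)

X = np.column_stack([np.ones(len(y)), X_raw])    # add intercept column
coef, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(f"b0={coef[0]:.2f}, b1={coef[1]:.2f}, b2={coef[2]:.2f}")
```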
ContributorsVokshi, Erblina (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created2012
Description
The current method of measuring thermal conductivity requires flat plates, and for most common civil engineering materials, creating or extracting such samples is difficult. A prototype thermal conductivity experiment had been developed at Arizona State University (ASU) to test cylindrical specimens, but it proved difficult to use for repeated testing. In this study, enhancements to both testing methods were made. Additionally, test results from cylindrical testing were correlated with results from identical materials tested by the Guarded Hot-Plate method, which uses flat-plate specimens. In validating the enhancements made to the Guarded Hot-Plate and Cylindrical Specimen methods, 23 tests were run on five different materials. The percent difference shown for the Guarded Hot-Plate method was less than 1%, strong evidence that the enhanced Guarded Hot-Plate apparatus is now more accurate for measuring thermal conductivity. The correlation between the thermal conductivity values of the Guarded Hot-Plate method and those of the enhanced Cylindrical Specimen method was excellent. The conventional concrete mixture, which had much higher thermal conductivity values than the other mixtures, yielded a P-value of 0.600, which provided confidence in the performance of the enhanced Cylindrical Specimen apparatus. Several recommendations were made for the future implementation of both test methods. The work in this study fulfills the research community's and industry's desire for a more streamlined, cost-effective means of determining the thermal conductivity of various civil engineering materials.
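The guarded hot-plate reduction follows one-dimensional Fourier's law, k = QL/(A·ΔT). Below is a sketch with placeholder specimen numbers (not measurements from this study), including the percent-difference check used to compare the two methods.

```python
# Guarded hot-plate data reduction and percent-difference comparison.
q = 15.0          # W, steady-state heat flow through the metered area (assumed)
area = 0.09       # m^2, metered area, e.g. 0.3 m x 0.3 m (assumed)
thickness = 0.05  # m, specimen thickness (assumed)
dT = 10.0         # K, hot-face minus cold-face temperature (assumed)

k_plate = q * thickness / (area * dT)   # W/(m*K), Fourier's law
k_cylinder = 0.84                       # hypothetical companion cylindrical result

pct_diff = abs(k_plate - k_cylinder) / ((k_plate + k_cylinder) / 2) * 100
print(f"k (plate) = {k_plate:.3f} W/m-K, percent difference = {pct_diff:.1f}%")
```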

ContributorsMorris, Derek (Author) / Kaloush, Kamil (Thesis advisor) / Mobasher, Barzin (Committee member) / Phelan, Patrick E (Committee member) / Arizona State University (Publisher)
Created2011
Description
A unique feature, yet a challenge, in cognitive radio (CR) networks is the user hierarchy: secondary users (SUs) wishing to transmit data must defer in the presence of active primary users (PUs), whose priority for channel access is strictly higher. Under a common thread of characterizing and improving Quality of Service (QoS) for the SUs, this dissertation is organized into two main thrusts: the first thrust focuses on SU throughput, exploiting the underlying properties of the PU spectrum to design effective scheduling algorithms; the second thrust addresses another important QoS metric for the SUs, namely delay, subject to the impact of the PUs' activities, and proposes enhancement and control mechanisms. More specifically, in the first thrust, opportunistic spectrum scheduling for SUs is considered by jointly exploiting the memory in the PUs' occupancy and channel fading. In particular, the underexplored scenario where PU occupancy exhibits long temporal memory is taken into consideration. By casting the problem as a partially observable Markov decision process, a set of multi-tier tradeoffs is quantified and illustrated. Next, a spectrum shaping framework is proposed that leverages network coding as a spectrum shaper on the PUs' traffic. This shaping effect makes the primary spectrum predictable, which the SUs exploit to carry out adaptive channel sensing by prioritizing the channel access order, significantly improving their throughput. On the other hand, such predictability can make wireless channels more susceptible to jamming attacks; as a result, caution must be taken in designing wireless systems to balance throughput against jamming resistance. The second thrust turns attention to delay performance. Specifically, queueing delay analysis is conducted for SUs employing random access over the PU channels. A fluid approximation is taken, and Poisson-driven stochastic differential equations are applied to characterize the moments of the SUs' steady-state queueing delay. Dynamic packet generation control mechanisms are then developed to meet given delay requirements for the SUs.
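The queueing setting can be caricatured with a toy discrete-time simulation (the dissertation's analysis uses fluid approximations and stochastic differential equations, not this): an SU queue is served only in slots where the PU is idle and the SU's random access succeeds, and mean delay follows from Little's law. All rates are illustrative.

```python
# Toy discrete-time SU queue under PU occupancy and random access.
import numpy as np

rng = np.random.default_rng(7)
slots = 200_000
arrival_p = 0.2     # SU packet arrival probability per slot (assumed)
pu_busy_p = 0.4     # PU occupies the channel with this probability (assumed)
access_p = 0.8      # SU random-access success probability when PU is idle (assumed)

q = 0
area = 0            # accumulated queue length, for Little's law
for _ in range(slots):
    q += rng.random() < arrival_p                     # arrival
    if q > 0 and rng.random() > pu_busy_p and rng.random() < access_p:
        q -= 1                                        # successful departure
    area += q

mean_q = area / slots
print(f"Mean queue length: {mean_q:.2f}")
print(f"Mean delay (Little's law): {mean_q / arrival_p:.2f} slots")
```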
ContributorsWang, Shanshan (Author) / Zhang, Junshan (Thesis advisor) / Xue, Guoliang (Committee member) / Hui, Joseph (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created2012