Matching Items (184)
Description
Data centers connect a large number of servers requiring IO and switches with low power and delay. Virtualization of IO and networking is crucial for these servers, which run virtual processes for computing, storage, and applications. We propose using the PCI Express (PCIe) protocol and a new PCIe switch fabric for IO and switch virtualization. The switch fabric requires little data buffering, allowing up to 512 physical 10 Gb/s PCIe 2.0 lanes to be connected via a switch fabric. The switch is scalable, with adapters running multiple adaptation protocols, such as Ethernet over PCIe, PCIe over Internet, or FibreChannel over Ethernet. Such adaptation protocols allow integration of the IO often required for disjoint datacenter applications such as storage and networking. The novel switch fabric, based on space-time carrier sensing, facilitates high-bandwidth, low-power, low-delay multi-protocol switching. To achieve Terabit switching, both time (high transmission speed) and space (multi-stage interconnection network) technologies are required. In this paper, we present the design of a Clos-network multistage crossbar switch fabric of up to 256 lanes for PCIe systems. The switch core consists of 48 16x16 crossbar sub-switches. We also propose a new output contention resolution algorithm utilizing an out-of-band Request-To-Send (RTS) / Clear-To-Send (CTS) protocol before sending PCIe packets through the switch fabric. Preliminary power and delay estimates are provided.
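The RTS/CTS output contention resolution described above can be illustrated with a minimal sketch: each input port sends an RTS naming its desired output, and each output replies with a single CTS. This is a hypothetical arbiter for illustration only, not the thesis's actual hardware design; the random tie-break is an assumption.

```python
import random

def arbitrate(requests, seed=None):
    """One round of RTS/CTS arbitration.

    requests: dict {input_port: output_port} -- each entry is an RTS.
    Returns {input_port: output_port} grants -- one CTS per output.
    Losing inputs receive no CTS and would retry in a later round.
    """
    rng = random.Random(seed)
    by_output = {}
    for inp, out in requests.items():
        by_output.setdefault(out, []).append(inp)
    grants = {}
    for out, contenders in by_output.items():
        winner = rng.choice(contenders)  # arbiter answers exactly one RTS with a CTS
        grants[winner] = out
    return grants
```

Only after a CTS arrives would an input port launch its PCIe packets into the fabric, which is what keeps buffering inside the switch core small.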
ContributorsLuo, Haojun (Author) / Hui, Joseph (Thesis advisor) / Song, Hongjiang (Committee member) / Reisslein, Martin (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created2013
Description
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and with deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions. However, even though the structures in a deterministic optimization problem are cost effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints.
This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability methods, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
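The core of a simulation-based reliability analysis like the one above can be sketched in a few lines: for a limit state g = R - S with normally distributed resistance R and load S, Monte Carlo sampling estimates the probability of failure P[g < 0], which can be cross-checked against the exact normal-theory result. This is a generic textbook sketch, not the thesis's Kevlar model; the distributions and parameters are assumptions.

```python
import random, math

def mc_failure_probability(n, mu_r, sd_r, mu_s, sd_s, seed=0):
    """Monte Carlo estimate of P[g < 0] for limit state g = R - S,
    with independent normal resistance R and load S."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0)
    return fails / n

def analytic_pf(mu_r, sd_r, mu_s, sd_s):
    """Exact result for two normals: beta = (mu_r - mu_s)/sqrt(sd_r^2 + sd_s^2),
    pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    return 0.5 * math.erfc(beta / math.sqrt(2))
```

In RBDO the reliability index beta computed this way becomes a constraint (beta >= beta_target) alongside the usual performance constraints.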
ContributorsDeivanayagam, Arumugam (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created2012
Description
A new type of Ethernet switch based on the PCI Express switching fabric is presented. The switch leverages the PCI Express peer-to-peer communication protocol to implement high-performance Ethernet packet switching. The advantages and challenges of using PCI Express as the switching fabric are addressed. PCI Express is a high-speed, short-distance communication protocol largely used in motherboard-level interconnects. The total bandwidth of a PCI Express 3.0 link can reach as high as 256 gigabits per second (Gb/s) for 16 lanes. Concerns for PCI Express such as buffer speed, address mapping, Quality of Service, and power consumption need to be considered. An overview of the proposed Ethernet switch architecture is presented. The switch consists of a PCI Express switching fabric and multiple adaptor cards. The thesis reviews the peer-to-peer (P2P) communication protocol used in the switching fabric and discusses the packet routing procedure of the P2P protocol in detail. The Ethernet switch utilizes a portion of the Quality of Service provided by PCI Express to ensure guaranteed transmission. The thesis presents a method of adapting Ethernet packets onto PCI Express transaction layer packets. The adaptor card is divided into two parts: the receive path and the transmit path. Commercial off-the-shelf Media Access Control (MAC) and PCI Express endpoint cores are used in the adaptor. The output address lookup logic block is responsible for converting Ethernet MAC addresses to PCI Express port addresses. Methods of providing Quality of Service in the adaptor card, including classification, flow control, and error detection in cooperation with the PCI Express switch, are discussed. The adaptor logic is implemented in the Verilog hardware description language, and functional simulation is conducted in ModelSim.
The simulation results show that Ethernet packets are correctly converted to the corresponding PCI Express transaction layer packets based on their destination MAC addresses, and that the transaction layer packets are then converted back to Ethernet packets. The functionally verified adaptor-card logic is ready for implementation on a real FPGA development board.
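The output address lookup described above boils down to a learned table mapping destination MAC addresses to PCIe port addresses. A minimal behavioral sketch (the thesis implements this in Verilog; the flood-on-miss policy here is an assumption borrowed from conventional Ethernet switching):

```python
class AddressLookup:
    """Output address lookup: maps destination Ethernet MAC addresses
    to PCI Express port addresses."""

    def __init__(self):
        self.table = {}

    def learn(self, mac, pcie_port):
        # Associate a source MAC seen on a port with that PCIe port address.
        self.table[mac] = pcie_port

    def lookup(self, mac):
        # None signals an unknown destination, which would be flooded.
        return self.table.get(mac)
```

In hardware this table would typically be a CAM or hash table in the adaptor's receive path, consulted before the Ethernet frame is wrapped into a transaction layer packet addressed to the target port.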
ContributorsChen, Caiyi (Author) / Hui, Joseph (Thesis advisor) / Reisslein, Martin (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created2012
Description
Concrete design has recently seen a shift in focus from prescriptive specifications to performance-based specifications, with increasing demands for sustainable products. Fiber reinforced composites (FRC) provide unique properties to a material that is very weak under tensile loads. The addition of fibers to a concrete mix provides additional ductility and reduces the propagation of cracks in the concrete structure. It is the fibers that bridge the crack and dissipate the incurred strain energy in the form of a fiber-pullout mechanism. The addition of fibers plays an important role in tunnel lining systems and in reducing shrinkage cracking in high-performance concretes. The interest in most design situations is the load at which cracking first takes place. Typically the post-crack response will exhibit either a load-bearing increase or a load-bearing decrease as deflection continues; these behaviors are referred to as strain hardening and strain softening, respectively. A strain-softening or strain-hardening response is used to model the behavior of different types of fiber reinforced concrete and to simulate the experimental flexural response. Closed-form equations for the moment-curvature response of rectangular beams under four- and three-point loading, in conjunction with crack localization rules, are utilized. As a result, a stress distribution that considers a shifting neutral axis can be simulated, which provides a more accurate representation of the residual strength of fiber cement composites. The typical residual strength parameters used by the standards organizations ASTM, JCI, and RILEM are shown to be incorrect in their linear elastic assumption of FRC behavior. Finite element models were implemented to study the effects and simulate the load-deflection response of fiber reinforced shotcrete round determinate panels (RDPs) tested in accordance with ASTM C1550.
The back-calculated material properties from the flexural tests were used as the basis for the FEM material models. FEM beam models were also developed to provide additional comparisons of residual strengths for early-age samples. A correlation between the RDP and flexural beam tests was generated based on a relationship between toughness normalized with respect to the newly generated crack surfaces. A set of design equations is proposed that uses a residual strength correction factor generated by the model to produce the design moment for a specified concrete slab geometry.
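The toughness-based correlation above rests on two standard quantities: the area under the measured load-deflection curve, and a JCI-style equivalent flexural strength derived from it. A minimal numerical sketch (the formula follows the common JCI-SF4 form; the specific numbers and variable names are illustrative assumptions, not values from the thesis):

```python
def toughness(deflections, loads):
    """Area under the load-deflection curve (trapezoidal rule),
    i.e. the energy absorbed up to the final deflection."""
    return sum((loads[i] + loads[i + 1]) / 2 * (deflections[i + 1] - deflections[i])
               for i in range(len(loads) - 1))

def equivalent_flexural_strength(T, span, delta, b, h):
    """JCI-style equivalent flexural strength from toughness T measured
    up to deflection delta, for a beam of given span, width b, depth h:
    f_eq = T * span / (delta * b * h^2)."""
    return T * span / (delta * b * h ** 2)
```

Normalizing such toughness values by the crack surface created in each test geometry is what allows the round panel and beam results to be placed on a common scale.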
ContributorsBarsby, Christopher (Author) / Mobasher, Barzin (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Ultra-concealable multi-threat body armor used by law enforcement is a multi-purpose armor that protects against attacks from knives, spikes, and small-caliber rounds. The design of this type of armor involves fiber-resin composite materials that are flexible and light, are not unduly affected by environmental conditions, and perform as required. The National Institute of Justice (NIJ) characterizes this type of armor as low-level protection armor. NIJ also specifies the geometry of the knife and spike as well as the strike energy levels required for this level of protection. The biggest challenge is to design thin, lightweight, ultra-concealable armor that can be worn under street clothes. In this study, several fundamental tasks involved in the design of such armor are addressed. First, the roles of design of experiments and regression analysis in experimental testing and finite element analysis are presented. Second, off-the-shelf materials available from international material manufacturers are characterized via laboratory experiments. Third, the calibration process required for a constitutive model is explained through the use of experimental data and computer software. Various material models in LS-DYNA for use in the finite element model are discussed. Numerical results are generated via finite element simulations and are compared against experimental data, thus establishing the foundation for optimizing the design.
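The regression-analysis step mentioned above reduces, in its simplest form, to fitting a response surface to designed-experiment data; the one-factor linear case can be sketched in a few lines. This is a generic ordinary-least-squares sketch, not the thesis's actual model or data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, as used when regressing
    a measured response (e.g. penetration depth) on a design factor
    (e.g. areal density). Returns (intercept a, slope b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b
```

With more factors, the same idea extends to multiple regression over a factorial design, which is where design-of-experiments planning pays off.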
ContributorsVokshi, Erblina (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created2012
Description

The current method of measuring thermal conductivity requires flat plates, and for most common civil engineering materials, creating or extracting such samples is difficult. A prototype thermal conductivity experiment had been developed at Arizona State University (ASU) to test cylindrical specimens but proved difficult for repeated testing. In this study, enhancements to both testing methods were made. Additionally, test results from cylindrical testing were correlated with results from identical materials tested by the Guarded Hot-Plate method, which uses flat plate specimens. In validating the enhancements made to the Guarded Hot-Plate and Cylindrical Specimen methods, 23 tests were run on five different materials. The percent difference shown for the Guarded Hot-Plate method was less than 1%, giving strong evidence that the enhanced Guarded Hot-Plate apparatus is now more accurate for measuring thermal conductivity. The correlation between the thermal conductivity values of the Guarded Hot-Plate method and those of the enhanced Cylindrical Specimen method was excellent. The conventional concrete mixture, which had much higher thermal conductivity values than the other mixtures, yielded a P-value of 0.600, which provided confidence in the performance of the enhanced Cylindrical Specimen apparatus. Several recommendations were made for the future implementation of both test methods. The work in this study fulfills the research community's and industry's desire for a more streamlined and cost-effective means to determine the thermal conductivity of various civil engineering materials.
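Both apparatus types above ultimately reduce their measurements through Fourier's law of steady-state conduction. A minimal sketch of that data-reduction step (the formula is standard; the example numbers are illustrative, not measurements from the study):

```python
def thermal_conductivity(q, thickness, area, dT):
    """Steady-state thermal conductivity from Fourier's law,
    k = q * L / (A * dT), as reduced from a guarded hot-plate test:
    q  -- heat flow through the specimen [W]
    thickness -- specimen thickness L [m]
    area      -- metered heat-transfer area A [m^2]
    dT        -- temperature difference across the specimen [K]
    Returns k in W/(m*K)."""
    return q * thickness / (area * dT)
```

For a cylindrical specimen the same balance uses the radial log-mean geometry instead of a flat area, which is why correlating the two methods on identical materials is a meaningful validation.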

ContributorsMorris, Derek (Author) / Kaloush, Kamil (Thesis advisor) / Mobasher, Barzin (Committee member) / Phelan, Patrick E (Committee member) / Arizona State University (Publisher)
Created2011
Description
Video deinterlacing is a key technique in digital video processing, particularly with the widespread usage of LCD and plasma TVs. This thesis proposes a novel spatio-temporal, non-linear video deinterlacing technique that adaptively chooses between the results from one-dimensional control grid interpolation (1DCGI), vertical temporal filtering (VTF), and temporal line averaging (LA). The proposed method performs better than several popular benchmark methods in terms of both visual quality and peak signal-to-noise ratio (PSNR). The algorithm performs better than existing approaches such as edge-based line averaging (ELA) and spatio-temporal edge-based median filtering (STELA) on fine moving edges and semi-static regions of videos, which are recognized as particularly challenging deinterlacing cases. The proposed approach also performs better than the state-of-the-art content adaptive vertical temporal filtering (CAVTF) approach. Along with the main approach, several spin-off approaches are also proposed, each with its own characteristics.
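The simplest of the candidate interpolators above, line averaging, reconstructs each missing line of a field as the mean of its vertical neighbors; the adaptive scheme then picks between this and the other interpolators per pixel. A minimal sketch of the line-averaging building block only (the selection logic itself is the thesis's contribution and is not reproduced here):

```python
def line_average(field):
    """Vertical line averaging on one interlaced field.

    field: list of rows carrying the lines present in this field.
    Returns a full frame in which each missing line is interpolated
    as the element-wise mean of the lines above and below it.
    """
    frame = []
    for i, row in enumerate(field):
        frame.append(row)
        if i + 1 < len(field):
            interp = [(a + b) / 2 for a, b in zip(row, field[i + 1])]
            frame.append(interp)
    return frame
```

Purely vertical averaging blurs fine moving edges, which is exactly why an adaptive method that can fall back on edge-aware or temporal interpolators outperforms it in the challenging cases named above.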
ContributorsVenkatesan, Ragav (Author) / Frakes, David H (Thesis advisor) / Li, Baoxin (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created2012
Description
With internet traffic being bursty in nature, Dynamic Bandwidth Allocation (DBA) algorithms have always been very important for any broadband access network to utilize the available bandwidth efficiently. It is no different for Passive Optical Networks (PONs), which are access networks based on fiber optics in the physical layer of the TCP/IP stack or OSI model, which in turn increases the bandwidth available to the upper layers. The work in this thesis covers a general description of basic DBA schemes and the mathematical derivations that have been established in research. We introduce a novel survey topology that classifies DBA schemes based on their functionality; this novel perspective on classification will be useful in determining which scheme best suits a consumer's needs. We classify DBA schemes as Direct, Intelligent, or Predictive based on their computation method, and we are able to qualitatively describe their delay and throughput bounds. We also describe a recently developed DBA scheme, Multi-Thread Polling (MTP), used in long-reach PONs (LRPONs), discuss its different viewpoints and issues, and consequently introduce a novel technique, Parallel Polling, that overcomes most of the issues faced in MTP and promises better delay performance for LRPONs.
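A Direct DBA scheme in the classification above computes grants straight from the ONUs' queue reports. The classic limited-service discipline is the simplest example: grant each ONU what it reported, capped at a per-cycle maximum so one bursty ONU cannot starve the others. A minimal sketch (limited service is a well-known baseline from the DBA literature, not the Parallel Polling technique proposed in the thesis):

```python
def limited_dba(reports, max_grant):
    """Limited-service dynamic bandwidth allocation at the OLT.

    reports:   list of queue occupancies reported by the ONUs [bytes]
    max_grant: per-cycle cap on any single ONU's grant [bytes]
    Returns the list of grants, one per ONU.
    """
    return [min(r, max_grant) for r in reports]
```

Intelligent and Predictive schemes replace this direct mapping with state-dependent or forecast-based grant sizing, trading computation for lower queueing delay.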
ContributorsMercian, Anu (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created2012
Description
While developing autonomous intelligent robots has been the goal of many research programs, a more practical application involving intelligent robots is the formation of teams consisting of both humans and robots. An example of such an application is search and rescue operations, where robots commanded by humans are sent to environments too dangerous for humans. For such human-robot interaction, natural language is considered a good communication medium, as it allows humans with little training in the robot's internal language to command and interact with the robot. However, any natural-language communication from the human needs to be translated into a formal language that the robot can understand. Similarly, before the robot can communicate (in natural language) with the human, it needs to formulate its communiqué in some formal language, which is then translated into natural language. In this paper, I develop a high-level language for communication between humans and robots and demonstrate various aspects of it through a robotics simulation. The language constructs borrow ideas from action execution languages and are grounded with respect to simulated human-robot interaction transcripts.
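The natural-language-to-formal-language translation step described above can be caricatured with a toy pattern rule that rewrites an English command as a formal action term. The grammar, command form, and `goto(...)` term here are entirely hypothetical illustrations, not constructs from the thesis's language:

```python
def to_formal(command):
    """Toy translation of a natural-language command into a formal
    action term, for commands of the (assumed) form
    '<verb> to the <place>', e.g. 'Go to the charging station.'"""
    words = command.lower().rstrip('.').split()
    if words[0] == "go" and words[1:3] == ["to", "the"]:
        return "goto({})".format("_".join(words[3:]))
    raise ValueError("unrecognized command")
```

A real system replaces this single rule with a grammar over the high-level language's constructs, and grounds the resulting terms against the robot's action execution layer.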
ContributorsLumpkin, Barry Thomas (Author) / Baral, Chitta (Thesis advisor) / Lee, Joohyung (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2012
Description
Insertion and deletion errors represent an important category of channel impairments. Despite their importance and much work over the years, channels with such impairments are far from fully understood, as they have proved difficult to analyze. In this dissertation, a promising coding scheme is investigated over independent and identically distributed (i.i.d.) insertion/deletion channels: the interleaved concatenation of an outer low-density parity-check (LDPC) code with error-correction capabilities and an inner marker code for synchronization purposes. Marker code structures which offer the highest achievable rates are found when standard bit-level synchronization is performed. Then, to exploit the correlations in the likelihoods corresponding to different transmitted bits, a novel symbol-level synchronization algorithm that works on groups of consecutive bits is introduced. Extrinsic information transfer (EXIT) charts are also utilized to analyze the convergence behavior of the receiver and to design LDPC codes with degree distributions matched to these channels. The next focus is on segmented deletion channels. It is first shown that such channels are information stable, and hence their channel capacity exists. Several upper and lower bounds are then introduced in an attempt to understand the behavior of the channel capacity. The asymptotic behavior of the channel capacity is also quantified when the average bit deletion rate is small. Further, maximum a posteriori (MAP) based synchronization algorithms are developed, and specific LDPC codes are designed to match the channel characteristics. Finally, in addition to binary substitution errors, coding schemes and the corresponding detection algorithms are studied for several other models with synchronization errors, including inter-symbol interference (ISI) channels, channels with multiple transmit/receive elements, and multi-user communication systems.
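The transmitter side of the inner marker code above is simple to sketch: a known marker pattern is inserted after every block of data bits, giving the receiver periodic anchors against which it can resynchronize after insertions or deletions. This is the generic marker-code idea only; the block length and marker pattern below are arbitrary illustrations, not the rate-optimized structures found in the dissertation:

```python
def insert_markers(bits, block, marker):
    """Insert a known marker pattern after every `block` data bits.

    The resulting inner code has rate block / (block + len(marker)):
    longer blocks cost less rate but give the synchronizer weaker
    anchoring against insertion/deletion drift.
    """
    out = []
    for i in range(0, len(bits), block):
        out.extend(bits[i:i + block])
        out.extend(marker)
    return out
```

The receiver runs a forward-backward (MAP) synchronization pass over this structure to produce bit likelihoods, which the outer LDPC decoder then consumes; the dissertation's symbol-level variant processes groups of consecutive bits jointly to capture the correlations in those likelihoods.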
ContributorsWang, Feng (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Reisslein, Martin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created2012