This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description

Ultra-concealable multi-threat body armor used by law enforcement is a multi-purpose armor that protects against attacks from knives, spikes, and small-caliber rounds. The design of this type of armor involves fiber-resin composite materials that are flexible, light, not unduly affected by environmental conditions, and perform as required. The National Institute of Justice (NIJ) characterizes this type of armor as low-level protection armor. NIJ also specifies the geometry of the knife and spike as well as the strike energy levels required for this level of protection. The biggest challenge is to design thin, lightweight, ultra-concealable armor that can be worn under street clothes. In this study, several fundamental tasks involved in the design of such armor are addressed. First, the roles of design of experiments and regression analysis in experimental testing and finite element analysis are presented. Second, off-the-shelf materials available from international material manufacturers are characterized via laboratory experiments. Third, the calibration process required for a constitutive model is explained through the use of experimental data and computer software. Various material models in LS-DYNA for use in the finite element model are discussed. Numerical results are generated via finite element simulations and are compared against experimental data, thus establishing the foundation for optimizing the design.
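As a minimal sketch of the regression-analysis step described above, the code below fits a simple response model to test data; the data points, the quadratic model form, and all variable names are hypothetical illustrations, not taken from the thesis.

```python
import numpy as np

# Hypothetical tensile-test observations: strain (in/in) vs. stress (psi).
strain = np.array([0.001, 0.002, 0.004, 0.006, 0.008])
stress = np.array([150.0, 290.0, 540.0, 730.0, 860.0])

# Fit a quadratic response model by ordinary least squares:
#   stress ~ b0 + b1*strain + b2*strain^2
X = np.column_stack([np.ones_like(strain), strain, strain**2])
beta, *_ = np.linalg.lstsq(X, stress, rcond=None)

# R^2 as a quick goodness-of-fit check before using the fitted model
# to help calibrate a constitutive model for finite element simulations.
pred = X @ beta
r2 = 1.0 - np.sum((stress - pred) ** 2) / np.sum((stress - stress.mean()) ** 2)
print(f"coefficients = {beta}, R^2 = {r2:.4f}")
```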
ContributorsVokshi, Erblina (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created2012
Description

Residue number systems (RNS) have gained significant importance in the field of high-speed digital signal processing due to their carry-free nature and the speed-up provided by parallelism. The critical aspects in the application of RNS are the selection of the moduli set and the design of the conversion units. Several RNS moduli sets have been proposed for the implementation of digital filters; however, some are unbalanced and some do not provide the required dynamic range. This thesis addresses the drawbacks of existing RNS moduli sets and proposes a new moduli set for efficient implementation of FIR filters. An efficient VLSI implementation model has been derived for the design of a reverse converter from RNS to the conventional two's complement representation. This model facilitates the realization of a reverse converter with better performance and less hardware complexity than the reverse converter designs of the existing balanced 4-moduli sets. Experimental results comparing multiply-and-accumulate units implemented using the proposed four-moduli set against the state-of-the-art balanced four-moduli sets show large improvements in area (46%) and power (43%) reduction for various dynamic ranges. RNS FIR filters using the proposed moduli set and an existing balanced 4-moduli set are implemented in RTL and compared for chip area and power, showing 20% improvements. This thesis also presents a threshold logic implementation of the reverse converter.
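To make the carry-free property and the cost of reverse conversion concrete, here is a minimal software sketch of RNS arithmetic. The 3-moduli set is illustrative only (the abstract does not specify the proposed 4-moduli set), and the reverse conversion shown is the plain Chinese Remainder Theorem rather than the derived VLSI model.

```python
from math import prod

# Illustrative pairwise-coprime moduli; dynamic range M = 7*15*16 = 1680.
MODULI = (7, 15, 16)

def to_rns(x, moduli=MODULI):
    """Forward conversion: each residue channel is independent,
    so arithmetic is carry-free across channels."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Reverse conversion via the Chinese Remainder Theorem -- the
    costly step that a dedicated reverse converter accelerates."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): modular inverse
    return x % M

# A carry-free multiply-accumulate, as in an RNS FIR filter tap:
a, b, acc = 37, 22, 100
res = tuple((ra * rb + rc) % m for ra, rb, rc, m
            in zip(to_rns(a), to_rns(b), to_rns(acc), MODULI))
assert from_rns(res) == (a * b + acc) % prod(MODULI)
```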
ContributorsChalivendra, Gayathri (Author) / Vrudhula, Sarma (Thesis advisor) / Shrivastava, Aviral (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created2011
Description

Increased priority on minimizing the environmental impacts of conventional construction materials has in recent years motivated increased use of waste materials or by-products such as fly ash and blast furnace slag, with a view to reducing or eliminating the manufacture/consumption of ordinary portland cement (OPC), which accounts for approximately 5-7% of global carbon dioxide emissions. The current study explores, for the first time, the possibility of carbonating waste metallic iron powder to develop carbon-negative sustainable binder systems for concrete. The fundamental premise of this work is that metallic iron will react with aqueous CO2 under controlled conditions to form complex iron carbonates which have binding capabilities. The compressive and flexural strengths of the chosen iron-based binder systems increase with carbonation duration, and specimens carbonated for 4 days exhibit mechanical properties comparable to those of companion ordinary portland cement systems. The optimal mixture proportion and carbonation regime for this non-conventional sustainable binder are established based on a study of the carbonation efficiency of a series of mixtures using thermogravimetric analysis. The pore- and micro-structural features of this novel binding material are also evaluated. The fracture response of the binder is evaluated using the strain energy release rate and measurement of the fracture process zone using digital image correlation (DIC). The iron-based binder system exhibits significantly higher strain energy release rates than the OPC systems in both the unreinforced and glass fiber reinforced states. The iron-based binder also exhibits a larger fracture process zone area due to its ability to undergo inelastic deformation, facilitated by unreacted metallic iron particle inclusions in the microstructure that help with crack bridging/deflection. The intrinsic nano-mechanical properties of the carbonate reaction product are explored using a statistical nanoindentation technique coupled with a stochastic deconvolution algorithm. The effect of exposure to high temperature (up to 800°C) is also studied; the iron-based binder shows significantly higher residual flexural strength after exposure to high temperatures. The results of this comprehensive study establish the viability of this binder type for concrete as an environment-friendly and economical alternative to OPC.
ContributorsDas, Sumanta (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, S.D. (Committee member) / Mobasher, Barzin (Committee member) / Marzke, Robert (Committee member) / Chawla, Nikhilesh (Committee member) / Stone, David (Committee member) / Arizona State University (Publisher)
Created2015
Description

Dynamic software update (DSU) enables a program to be updated while it is running, minimizing the loss due to program downtime during updates. DSU is usually done in three steps: suspending the execution of the old program, mapping the execution state from the old program to the new one, and resuming execution of the new program with the mapped state. The semantic correctness of DSU depends largely on the state mapping, which today is mostly composed manually by developers. However, manual construction does not necessarily produce a sound and dependable state mapping. This dissertation presents a methodology to assist developers by automating the construction of a partial state mapping with a guarantee of correctness.
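A minimal sketch may help fix ideas about what a state mapping is; the server-state types, field names, and default value below are hypothetical illustrations, not SitBack's actual representation.

```python
from dataclasses import dataclass

# Hypothetical old and new versions of a server's state:
# the update adds one field.
@dataclass
class OldServerState:
    request_count: int

@dataclass
class NewServerState:
    request_count: int
    error_count: int  # field introduced by the update

def map_state(old: OldServerState) -> NewServerState:
    """Step 2 of DSU: carry preserved fields forward and initialize
    new ones. Semantic correctness requires the mapped state to be
    one the new program could plausibly have reached on its own."""
    return NewServerState(request_count=old.request_count, error_count=0)

# Steps 1 and 3 (suspend the old program, resume the new one with the
# mapped state) are runtime-system concerns; only the mapping is shown.
resumed = map_state(OldServerState(request_count=42))
```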

This dissertation includes a detailed study of DSU correctness and automatic state mapping for server programs with an established user base. First, the dissertation presents a formal treatment of DSU correctness and the state mapping problem. It then argues that, for programs with an established user base, dynamic updates must be backward compatible. The dissertation next presents a general definition of backward compatibility that specifies the allowed changes in program interaction between an old version and a new version, and identifies patterns of code evolution that result in backward compatible behavior. It then gives formal definitions of these patterns together with proofs that any changes to programs following these patterns result in a backward compatible update. To show the applicability of the results, the dissertation presents SitBack, a program analysis tool that takes an old and a new version of a program as input and computes a partial state mapping under the assumption that the new version is backward compatible with the old version.

SitBack does not handle all kinds of changes, and it reports the incomplete parts of a state mapping to the user. The dissertation presents a detailed evaluation of SitBack which shows that the methodology of automatic state mapping is promising in dealing with real-world program updates. For example, SitBack produces state mappings for 17-75% of the changed functions. Furthermore, SitBack generates automatic state mappings that lead to successful DSU. In conclusion, the study presented in this dissertation assists developers in developing state mappings for DSU by automating the construction of state mappings with a correctness guarantee, which ultimately helps the adoption of DSU.
ContributorsShen, Jun (Author) / Bazzi, Rida A (Thesis advisor) / Fainekos, Georgios (Committee member) / Neamtiu, Iulian (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2015
Description

Most embedded applications are constructed with multiple threads to handle concurrent events. For optimization and debugging of these programs, dynamic program analysis is widely used to collect execution information while the program is running. Unfortunately, the non-deterministic behavior of multithreaded embedded software makes dynamic analysis difficult. In addition, the instrumentation overhead of gathering execution information may change the execution of a program and lead to distorted analysis results, i.e., the probe effect. This thesis presents a framework that tackles the non-determinism and probe effect incurred in dynamic analysis of embedded software. The thesis consists largely of three parts. First, it discusses a deterministic replay framework that provides reproducible execution: once a program execution is recorded, software instrumentation can be safely applied during replay without probe effect. Second, it discusses the probe effect and proposes a simulation-based analysis to detect execution changes of a program caused by instrumentation overhead; the simulation-based analysis examines whether the recording instrumentation changes the original program execution. Lastly, the thesis discusses data race detection algorithms that help remove data races for correctness of the replay and the simulation-based analysis. The focus is to make the detection efficient for C/C++ programs and to increase scalability of the detection on multi-core machines.
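As an illustration of the last part, the sketch below shows the core of happens-before data race detection using vector clocks; it is a generic textbook check, not the thesis's algorithms, which target efficient and scalable detection for C/C++ programs.

```python
# Each event carries a vector clock: one logical-clock entry per thread.

def happens_before(vc_a, vc_b):
    """A happens-before B iff VC(A) <= VC(B) pointwise and they differ."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def is_race(access_a, access_b):
    """Two accesses to the same location race if at least one is a
    write and neither is ordered before the other."""
    (vc_a, is_write_a), (vc_b, is_write_b) = access_a, access_b
    ordered = happens_before(vc_a, vc_b) or happens_before(vc_b, vc_a)
    return (is_write_a or is_write_b) and not ordered

# Thread 0 writes at VC (1, 0); thread 1 reads at VC (0, 1).
# Neither is ordered before the other, so this is a data race.
print(is_race(((1, 0), True), ((0, 1), False)))  # True
```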
ContributorsSong, Young Wn (Author) / Lee, Yann-Hang (Thesis advisor) / Shrivastava, Aviral (Committee member) / Fainekos, Georgios (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created2015
Description

Cyber-Physical Systems (CPS) are used in many safety-critical applications. Due to their important role in virtually every aspect of human life, it is crucial to make sure that a CPS works properly before its deployment. However, formal verification of CPS is a computationally hard problem; therefore, lightweight verification methods such as testing and monitoring are considered in industry. Formally representing CPS requirements is a challenging task, and checking system outputs against requirements is a computationally complex problem. In this dissertation, these problems in the verification of CPS are addressed. The first method provides a formal requirement analysis framework which can find logical issues in the requirements and help engineers correct them. A method is also provided to detect tests which vacuously satisfy a requirement because of the requirement's structure; this method is used to improve the test generation framework for CPS. Finally, two runtime verification algorithms are developed for off-line/on-line monitoring with respect to real-time requirements. These monitoring algorithms are computationally efficient, and they can be used in practical applications for monitoring CPS with low runtime overhead.
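To illustrate what off-line monitoring of a real-time requirement involves, here is a minimal sketch for a bounded-response property over a timestamped trace; the trace format and function are assumptions for illustration, not the dissertation's algorithms.

```python
def monitor_bounded_response(trace, T):
    """Off-line monitor for: 'every request is followed by a grant
    within T time units'. trace is a time-ordered list of
    (timestamp, event) pairs; returns the violating request times."""
    violations = []
    for t_req, event in trace:
        if event != "request":
            continue
        granted = any(ev == "grant" and t_req <= t <= t_req + T
                      for t, ev in trace)
        if not granted:
            violations.append(t_req)
    return violations

trace = [(0.0, "request"), (3.0, "grant"), (10.0, "request")]
print(monitor_bounded_response(trace, T=5.0))  # [10.0]: unanswered request
```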
ContributorsDokhanchi, Adel (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2017
Description

Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. As in software engineering practice, incremental model design is required for complex system design; as a result, models at early increments are significantly simpler than the real systems. While experimenting (verification or validation) on models at early increments is computationally less demanding, the results of these experiments are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation.

A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoC rely on multiple models developed using various modeling frameworks, so it is useful to develop frameworks that can formalize the relationships among these models. Fine-grain models are derived from their coarse-grain counterparts. Moreover, validation and verification capabilities at various design stages, enabled through disciplined model conversion, are very beneficial.

In this research, Multiresolution Modeling (MRM) is used for system-level design of NoC. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism is proposed which supports model checking. Hierarchical models of Network-on-Chip components may be created at different resolutions, while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers which can be applied seamlessly for model checking and simulation purposes.
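For readers unfamiliar with the formalism, a minimal DEVS-style atomic model is sketched below (state, time advance, external and internal transitions, output); it follows the classic formalism in spirit and is not the DEVS-Suite API or the proposed variant.

```python
INFINITY = float("inf")

class Buffer:
    """A one-place buffer: passive until a job arrives, then busy for
    a fixed service time before emitting the job and going passive."""

    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.phase, self.job = "passive", None

    def time_advance(self):                  # ta: time to next internal event
        return self.service_time if self.phase == "busy" else INFINITY

    def external_transition(self, elapsed, job):   # delta_ext: input arrives
        if self.phase == "passive":
            self.phase, self.job = "busy", job

    def output(self):                        # lambda: emitted before delta_int
        return self.job

    def internal_transition(self):           # delta_int: scheduled change
        self.phase, self.job = "passive", None
```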

The Multiresolution Modeling and the verification and validation capabilities of this framework complement one another. MRM manages the scale and complexity of models, which in turn can reduce V&V time and effort; conversely, V&V helps ensure the correctness of models at multiple resolutions. This framework is realized by extending the DEVS-Suite simulator, and its applicability is demonstrated for exemplar NoC models.
ContributorsGholami, Soroosh (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Fainekos, Georgios (Committee member) / Ogras, Umit Y. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created2017
Description

Concrete is relatively brittle, and its tensile strength is typically only about one-tenth of its compressive strength. Regular concrete therefore normally uses steel reinforcement bars to increase tensile strength. It is becoming increasingly popular to use randomly distributed fibers as reinforcement, and polymeric fibers are one such kind. In the case of polymeric fibers, due to hydrophobicity and the lack of any chemical bond between the fiber and matrix, the weak interface zone limits the ability of the fibers to effectively carry the load on the matrix phase. Depending on the fiber's surface asperity, shape, chemical nature, and mechanical bond, the characteristics of the load transfer between matrix and fiber can be altered so that the final composite is improved. These modifications can be carried out by means of thermal treatment, mechanical surface modification, or chemical changes. The objective of this study is to measure and document the effect of gamma-ray irradiation on the mechanical properties of macro polymeric fibers: to determine the mechanical properties of macro-synthetic fibers and to develop guidelines for treatment and characterization that allow for potential positive changes due to exposure to irradiation. Fibers were exposed to various levels of ionizing radiation, and their tensile, interface, and in-mortar performance are documented. Uniaxial tensile tests were performed on irradiated fibers to study fiber strength and failure patterns. SEM tests were carried out to study the surface characteristics and the effect of different radiation doses on the polymeric fiber. The interaction of the irradiated fiber with the cement composite was studied by a series of quasi-static pullout tests for a specific embedded length. As a final task, flexural tests were carried out for different irradiated fibers to sum up the investigation. An average increase of 13% in the stiffness of the fiber was observed for 5 kGy of radiation. Flexural tests showed an average increase of 181% in the Req3 value and 102% in the toughness of the sample for a 5 kGy dose.
ContributorsTiwari, Sanchay Sushil (Author) / Mobasher, Barzin (Thesis advisor) / Neithalath, Narayanan (Thesis advisor) / Dharmarajan, Subramaniam (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created2018
Description

Caches have long been used to reduce memory access latency. However, the increased complexity of cache coherence brings significant challenges in processor design as the number of cores increases. While making caches scalable is still an important research problem, some researchers are exploring the possibility of a more power-efficient SRAM called scratchpad memory (SPM). SPMs consume significantly less area and are more energy-efficient per access than caches, and therefore make the design of on-chip memories much simpler. Unlike caches, which fetch data from memory automatically, an SPM requires explicit instructions for data transfers. SPM-only architectures are thus named software managed manycore (SMM) architectures, since their data movements rely on software. SMM processors have been widely used in different areas, such as embedded computing, network processing, and even high performance computing. While SMM processors provide a low-power platform, the hardware alone does not guarantee power efficiency if applications on such processors deliver low performance; efficient software techniques are therefore required. A large body of management techniques for SMM architectures are compiler-directed, as inserting data movement operations by hand forces programmers to trace the flow of data, which can be error-prone and sometimes difficult if not impossible. This thesis develops compiler-directed techniques to efficiently manage data transfers for embedded applications on SMMs. The techniques analyze programs to find the proper program points and insert data movement instructions accordingly. The techniques manage the code, stack, and heap data of applications, and reduce execution time by 14%, 52%, and 80%, respectively, compared to their predecessors on typical embedded applications. On top of managing local data, a technique is also developed for shared data in SMM architectures; experimental results show it achieves more than 2X speedup over the previous technique on average.
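To show the kind of explicit data movement an SMM compiler must insert, here is a sketch of tiled processing through a scratchpad; the DMA helper names and the SPM size are hypothetical, and a real implementation would issue hardware DMA commands rather than list copies.

```python
SPM_WORDS = 256  # assumed scratchpad capacity, in words

def dma_get(spm, mem, offset, n):
    spm[:n] = mem[offset:offset + n]        # main memory -> SPM

def dma_put(spm, mem, offset, n):
    mem[offset:offset + n] = spm[:n]        # SPM -> main memory

def scale_array(data, factor):
    """Process one SPM-sized tile at a time. A cache would fetch these
    tiles automatically; on an SPM, the compiler picks the program
    points and inserts the transfers explicitly."""
    spm = [0] * SPM_WORDS
    for off in range(0, len(data), SPM_WORDS):
        n = min(SPM_WORDS, len(data) - off)
        dma_get(spm, data, off, n)          # compiler-inserted copy-in
        for i in range(n):
            spm[i] *= factor                # compute out of the fast SPM
        dma_put(spm, data, off, n)          # compiler-inserted copy-out

data = list(range(1000))
scale_array(data, 3)
assert data[10] == 30
```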
ContributorsCai, Jian (Author) / Shrivastava, Aviral (Thesis advisor) / Wu, Carole (Committee member) / Ren, Fengbo (Committee member) / Dasgupta, Partha (Committee member) / Arizona State University (Publisher)
Created2017
Description

This research summarizes the validation testing completed for the material model MAT213, currently implemented in the LS-DYNA finite element program. Testing was carried out using a carbon fiber composite material, T800-F3900. Stacked-ply tension and compression tests were performed for open-hole and full coupons. Comparisons of experimental and simulation results showed good agreement between the two for metrics including stress-strain response and displacements. Strains and displacements in the direction of loading were better predicted by the simulations than those in the transverse direction.

Double cantilever beam and end notched flexure tests were performed experimentally and through simulations to determine the delamination properties of the material at the interlaminar layers. Experimental results gave the mode I critical energy release rate as ranging from 2.18 to 3.26 psi-in and the mode II critical energy release rate as 10.50 psi-in, both for the pre-cracked condition. Simulations were performed to calibrate the other cohesive zone parameters required for modeling.
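For reference, one standard beam-theory data-reduction formula for the mode I critical energy release rate from a double cantilever beam test is shown below; the abstract does not state which reduction method was used, so this is illustrative only.

```latex
% Simple beam theory reduction for the DCB test:
%   P      : applied load at onset of delamination growth
%   \delta : load-point opening displacement
%   b      : specimen width
%   a      : delamination length
G_{Ic} = \frac{3 \, P \, \delta}{2 \, b \, a}
```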

Samples of tested T800/F3900 coupons were processed and examined with scanning electron microscopy to determine and understand the underlying structure of the material. Tested coupons revealed damage and failure occurring at the micro scale for the composite material.
ContributorsHolt, Nathan T (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Hoover, Christian (Committee member) / Arizona State University (Publisher)
Created2018