Matching Items (202)
Description

Current information on successful leadership and management practices is contradictory and inconsistent, which makes it difficult to distinguish successful business practices from unsuccessful ones. The purpose of this study is to identify a simple process that quickly and logically identifies consistent and inconsistent leadership and management criteria. The hypothesis proposed is that Information Measurement Theory (IMT), along with the Kashiwagi Solution Model (KSM), is a methodology that can differentiate between accurate and inaccurate principles. The initial part of the study, a review of authors in these areas, shows how conflicting the information is; it also served to establish an initial baseline of recommended practices aligned with IMT. The one author who excels in comparison to the rest fits the "Initial Baseline Matrix from Deming," which constitutes the first model. The second model, the "Full Extended KSM-Matrix," is composed of all the left-side (LS) characteristics found among all authors and IMT. Both models were tested for accuracy. The second part of the study evaluated individuals' perception of these principles. Two groups were evaluated: one with prior training and knowledge of IMT, and one without any knowledge of IMT. The survey results showed more confusion in the group without knowledge of IMT, and improved consistency and less variation in the group with knowledge of IMT. The third part of the study, an analysis of case studies of success and failure, identified the contributing principles and categorized them into LS/type "A" characteristics and right-side (RS)/type "C" characteristics by applying the KSM. The results validated the initial proposal and led to the conclusion that practices that fall on the LS of the KSM lead to success, while practices that fall on the RS lead to failure.
The comparison and testing of both models indicated dominant support for the IMT concepts as contributors to success, while the KSM model showed a higher accuracy of prediction.
Contributors: Reynolds, Harry (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Semiconductor scaling technology has led to a sharp growth in transistor counts. This has resulted in an exponential increase in both power dissipation and heat flux (or power density) in modern microprocessors. These microprocessors are integrated as the major components in many modern embedded devices, which offer richer features and attain higher performance than ever before. Therefore, power and thermal management have become significant design considerations for modern embedded devices. Dynamic voltage/frequency scaling (DVFS) and dynamic power management (DPM) are two well-known hardware capabilities offered by modern embedded processors. However, power- and thermal-aware performance optimization has not been fully explored for mainstream embedded processors with discrete DVFS and DPM capabilities. Many key problems have not yet been answered. What is the maximum performance that an embedded processor can achieve under a power or thermal constraint for a periodic application? Does there exist an efficient algorithm for the power or thermal management problems with a guaranteed quality bound? These questions are hard to answer because the discrete settings of DVFS and DPM increase the complexity of many power and thermal management problems, which are generally NP-hard. This dissertation presents a comprehensive study of these NP-hard power and thermal management problems for embedded processors with discrete DVFS and DPM capabilities. In the domain of power management, the dissertation addresses the power minimization problem for real-time schedules, the energy-constrained makespan minimization problem on homogeneous and heterogeneous chip multiprocessor (CMP) architectures, and the battery-aware energy management problem with a nonlinear battery discharging model. In the domain of thermal management, the work addresses several thermal-constrained performance maximization problems for periodic embedded applications.
All the addressed problems are proved to be NP-hard or strongly NP-hard in the study. The work then focuses on the design of off-line optimal or polynomial-time approximation algorithms as solutions across the problem design space. Several of the addressed NP-hard problems are tackled by dynamic programming, yielding optimal solutions with pseudo-polynomial run-time complexity. Because the optimal algorithms are not efficient in the worst case, fully polynomial-time approximation algorithms are provided as more efficient solutions. Efficient heuristic algorithms are also presented as solutions to several of the addressed problems. The comprehensive study answers the key questions needed to fully explore the power and thermal management potential of embedded processors with discrete DVFS and DPM capabilities. The provided solutions enable theoretical analysis of the maximum performance for periodic embedded applications under power or thermal constraints.
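To illustrate the flavor of the pseudo-polynomial dynamic programs mentioned above (this is a minimal sketch, not the dissertation's actual formulation), the following assigns each task one discrete DVFS level so that a common deadline is met with minimum energy; the task sizes, frequency levels, and power numbers are invented:

```python
import math

def min_energy_schedule(tasks, levels, deadline):
    """tasks: list of cycle counts; levels: list of (frequency, power) pairs;
    deadline: integer time budget. Returns minimum total energy, or None
    if no assignment of discrete levels meets the deadline."""
    INF = float("inf")
    # dp[t] = minimum energy to finish the tasks processed so far in exactly t time units
    dp = [INF] * (deadline + 1)
    dp[0] = 0.0
    for cycles in tasks:
        nxt = [INF] * (deadline + 1)
        for t in range(deadline + 1):
            if dp[t] == INF:
                continue
            for freq, power in levels:
                dur = math.ceil(cycles / freq)   # discrete level => fixed duration
                if t + dur <= deadline:
                    nxt[t + dur] = min(nxt[t + dur], dp[t] + power * dur)
        dp = nxt
    best = min(dp)
    return None if best == INF else best

# Two tasks, a fast/power-hungry level and a slow/frugal one (invented numbers):
# with a loose deadline both tasks run slow; tightening the deadline forces the
# fast level and raises energy; an infeasible deadline returns None.
print(min_energy_schedule([100, 200], [(100, 4.0), (50, 1.0)], 8))
```

The table is indexed by discretized time, so the running time is O(n · deadline · |levels|), pseudo-polynomial in the deadline, mirroring why the exact algorithms are efficient only in a weak sense and motivate the approximation schemes the abstract describes.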
Contributors: Zhang, Sushu (Author) / Chatha, Karam S (Thesis advisor) / Cao, Yu (Committee member) / Konjevod, Goran (Committee member) / Vrudhula, Sarma (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Ultra-concealable multi-threat body armor used by law enforcement is a multi-purpose armor that protects against attacks from knives, spikes, and small-caliber rounds. The design of this type of armor involves fiber-resin composite materials that are flexible and light, are not unduly affected by environmental conditions, and perform as required. The National Institute of Justice (NIJ) characterizes this type of armor as low-level protection armor. NIJ also specifies the geometry of the knife and spike as well as the strike energy levels required for this level of protection. The biggest challenge is to design thin, lightweight, ultra-concealable armor that can be worn under street clothes. In this study, several fundamental tasks involved in the design of such armor are addressed. First, the roles of design of experiments and regression analysis in experimental testing and finite element analysis are presented. Second, off-the-shelf materials available from international material manufacturers are characterized via laboratory experiments. Third, the calibration process required for a constitutive model is explained through the use of experimental data and computer software. Various material models in LS-DYNA for use in the finite element model are discussed. Numerical results are generated via finite element simulations and are compared against experimental data, thus establishing the foundation for optimizing the design.
Contributors: Vokshi, Erblina (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description


The current method of measuring thermal conductivity requires flat plates. For most common civil engineering materials, creating or extracting such samples is difficult. A prototype thermal conductivity experiment had been developed at Arizona State University (ASU) to test cylindrical specimens but proved difficult for repeated testing. In this study, enhancements to both testing methods were made. Additionally, test results of cylindrical testing were correlated with the results from identical materials tested by the Guarded Hot-Plate method, which uses flat plate specimens. In validating the enhancements made to the Guarded Hot-Plate and Cylindrical Specimen methods, 23 tests were run on five different materials. The percent difference shown for the Guarded Hot-Plate method was less than 1%. This gives strong evidence that the enhanced Guarded Hot-Plate apparatus is now more accurate for measuring thermal conductivity. The correlation between the thermal conductivity values of the Guarded Hot-Plate method and those of the enhanced Cylindrical Specimen method was excellent. The conventional concrete mixture, owing to much higher thermal conductivity values than the other mixtures, yielded a P-value of 0.600, which provided confidence in the performance of the enhanced Cylindrical Specimen apparatus. Several recommendations were made for the future implementation of both test methods. The work in this study fulfills the research community's and industry's desire for a more streamlined and cost-effective means to determine the thermal conductivity of various civil engineering materials.
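For orientation, both apparatus geometries reduce to textbook steady-state Fourier conduction; the sketch below shows the two data-reduction formulas (the actual reduction procedure is not given in the abstract, and the function names and example numbers are illustrative):

```python
import math

def k_flat_plate(q_watts, thickness_m, area_m2, delta_t_k):
    """Guarded Hot-Plate geometry: one-dimensional conduction through a slab,
    k = Q * L / (A * dT)."""
    return q_watts * thickness_m / (area_m2 * delta_t_k)

def k_cylinder(q_watts, r_inner_m, r_outer_m, length_m, delta_t_k):
    """Cylindrical specimen: radial steady-state conduction through a hollow
    cylinder wall, k = Q * ln(r_outer / r_inner) / (2 * pi * L * dT)."""
    return q_watts * math.log(r_outer_m / r_inner_m) / (
        2 * math.pi * length_m * delta_t_k)

# Invented example: 10 W through a 5 cm slab of 0.1 m^2 area with a 25 K drop.
print(k_flat_plate(10.0, 0.05, 0.1, 25.0))  # W/(m.K)
```

Correlating the two methods, as the study does, amounts to checking that these two reductions yield the same k for the same material despite the different geometries.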

Contributors: Morris, Derek (Author) / Kaloush, Kamil (Thesis advisor) / Mobasher, Barzin (Committee member) / Phelan, Patrick E (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A unique feature, yet a challenge, in cognitive radio (CR) networks is the user hierarchy: secondary users (SUs) wishing for data transmission must defer in the presence of active primary users (PUs), whose priority for channel access is strictly higher. Under a common thread of characterizing and improving Quality of Service (QoS) for the SUs, this dissertation is progressively organized under two main thrusts: the first thrust focuses on SU throughput by exploiting the underlying properties of the PU spectrum to perform effective scheduling; the second thrust aims at another important QoS metric for the SUs, namely delay, subject to the impact of PU activities, and proposes enhancement and control mechanisms. More specifically, in the first thrust, opportunistic spectrum scheduling for SUs is first considered by jointly exploiting the memory in PU occupancy and channel fading. In particular, the underexplored scenario where PU occupancy exhibits a long temporal memory is taken into consideration. By casting the problem as a partially observable Markov decision process, a set of multi-tier tradeoffs is quantified and illustrated. Next, a spectrum shaping framework is proposed by leveraging network coding as a spectrum shaper on the PU's traffic. The shaping effect brings predictability to the primary spectrum, which the SUs utilize to carry out adaptive channel sensing by prioritizing channel access order, and hence significantly improve their throughput. On the other hand, such predictability can make wireless channels more susceptible to jamming attacks. As a result, caution must be taken in designing wireless systems to balance throughput and jamming resistance. The second thrust turns attention to an equally important performance metric, delay. Specifically, queueing delay analysis is conducted for SUs employing random access over the PU channels.
A fluid approximation is taken, and Poisson-driven stochastic differential equations are applied to characterize the moments of the SUs' steady-state queueing delay. Dynamic packet generation control mechanisms are then developed to meet given delay requirements for the SUs.
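The qualitative behavior being analyzed can be illustrated with a toy slotted-time simulation (a crude stand-in for the fluid/SDE machinery, with invented parameters): SU packets queue until a random-access attempt lands on a slot the PU leaves idle, so heavier PU occupancy lengthens the SU's queueing delay.

```python
import random

def mean_queueing_delay(lam, pu_busy, attempt, slots=200_000, seed=1):
    """Toy slotted random-access model. Each slot: an SU packet arrives with
    probability lam; if the queue is nonempty, the head-of-line packet departs
    when the SU attempts (probability `attempt`) and the PU happens to be idle
    (probability 1 - pu_busy). Returns the mean per-packet delay in slots."""
    rng = random.Random(seed)
    queue = []      # arrival slot of each waiting packet (FIFO)
    delays = []
    for t in range(slots):
        if rng.random() < lam:
            queue.append(t)
        if queue and rng.random() < attempt and rng.random() >= pu_busy:
            delays.append(t - queue.pop(0) + 1)  # departs at end of slot
    return sum(delays) / len(delays)

# Same SU load, busier PU => longer mean delay (invented rates).
print(mean_queueing_delay(0.1, 0.5, 0.8), mean_queueing_delay(0.1, 0.7, 0.8))
```

The effective service rate here is attempt × (1 − pu_busy) per slot, which is exactly the coupling between PU activity and SU delay that the dissertation's control mechanisms regulate by throttling packet generation.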
Contributors: Wang, Shanshan (Author) / Zhang, Junshan (Thesis advisor) / Xue, Guoliang (Committee member) / Hui, Joseph (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Interference constitutes a major challenge for communication networks operating over a shared medium where availability is imperative. This dissertation studies the problem of designing and analyzing efficient medium access protocols that are robust against strong adversarial jamming. More specifically, four medium access (MAC) protocols (JADE, ANTIJAM, COMAC, and SINRMAC), which aim to achieve high throughput despite jamming activities under a variety of network and adversary models, are presented. We also propose a self-stabilizing leader election protocol, SELECT, that can effectively elect a leader in the network in the presence of a strong adversary. Our protocols can not only deal with internal interference without exact knowledge of the number of participants in the network, but are also robust to unintentional or intentional external interference, e.g., due to co-existing networks or jammers. We model the external interference by a powerful adaptive and/or reactive adversary that can jam a (1 − ε)-portion of the time steps, where 0 < ε ≤ 1 is an arbitrary constant. We allow the adversary to be adaptive and to have complete knowledge of the entire protocol history. Moreover, in case the adversary is also reactive, it uses carrier sensing to make informed decisions about when to disrupt communications. Among the proposed protocols, JADE, ANTIJAM, and COMAC achieve Θ(1)-competitive throughput in the presence of the strong adversary, while SINRMAC is the first attempt to apply the SINR (Signal-to-Interference-plus-Noise Ratio) model to the design of robust medium access protocols. The derived principles are also useful for building applications on top of the MAC layer; to this end, we present SELECT as an exemplary study of leader election, one of the most fundamental tasks in distributed computing.
Contributors: Zhang, Jin (Author) / Richa, Andréa W. (Thesis advisor) / Scheideler, Christian (Committee member) / Sen, Arunabha (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This dissertation examines an analytical methodology that applies predictive maintenance to industrial facilities equipment to exceed world-class availability standards, with greater understanding of the impacts of organizational participation. The research for this study was performed at one of the world's largest semiconductor facilities, with the intent of understanding one possible cause of a noticeable behavior in technical work routines. Semiconductor manufacturing disruption poses significant potential revenue loss, on a scale easily quantified in millions of dollars per hour. These instances are commonly referred to as "interruption to production" (ITP). ITP is a standardized metric used across Company ABC's worldwide factory network to track the frequency of occurrence and duration of manufacturing downtime. ITP, the key quantifiable indicator in this dissertation, is the primary analytical measurement used to identify the effectiveness of maintenance personnel's work routines as they relate to unscheduled downtime of facilities systems. This dissertation examines the process used to effect change in an industrial facilities organization and the associated reactions of individual organizational members. To give the reader background on the methodology for testing, measuring, and ultimately assessing the benefits and risks of integrating a predictive equipment failure methodology, this dissertation examines analytical findings associated with the statement of purpose as it pertains to ITP reduction. However, the focus is the exploration of behavioral findings within the organization and the development of an improved industry standard for predictive ITP reduction process implementation, specifically findings associated with organizational participation and learning development trends within the work group.
Contributors: McDonald, Douglas Kirk (Author) / Sullivan, Kenneth (Thesis advisor) / Badger, William (Committee member) / Verdini, William (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Calcium hydroxide carbonation processes were studied to investigate the potential for abiotic soil improvement. Different mixtures of common soil constituents such as sand, clay, and granite were mixed with a calcium hydroxide slurry and carbonated at approximately 860 psi. While the carbonation was successful and calcite formation was strong on sample exteriors, a 4 mm passivating boundary-layer effect was observed, impeding the carbonation process at the center. XRD analysis was used to characterize the extent of carbonation, indicating extremely poor carbonation, and therefore poor CO2 penetration, inside the visible boundary. The depth of the passivating layer was found to be independent of both time and choice of aggregate. Less than adequate strength developed in the carbonated trials owing to the formation of small, weakly connected crystals, as shown by SEM analysis. Additional research, especially in situ thermogravimetric analysis, would be useful to determine the cause of the poor carbonation performance. This technology has great potential to substitute for certain Portland cement applications if these issues can be addressed.
Contributors: Hermens, Stephen Edward (Author) / Bearat, Hamdallah (Thesis director) / Dai, Lenore (Committee member) / Mobasher, Barzin (Committee member) / Barrett, The Honors College (Contributor) / Chemical Engineering Program (Contributor)
Created: 2015-05
Description

The Information Measurement Theory (IMT) is a revolutionary thinking paradigm. Its principles allow an individual to accurately perceive reality and simplify the complexities of life. To understand IMT, individuals start by first recognizing that everything must follow natural law and cause and effect, that there is no randomness, and that everyone changes at a certain rate. They then move on to understanding that individuals are described by certain characteristics that can be used to predict their future behavior. And finally, they discover that they must learn to understand, accept, and improve themselves while understanding and accepting others. The author, who has spent a considerable amount of time studying and utilizing IMT, believes that IMT can be used within the field of psychology. The extraordinary results that IMT has produced in the construction industry can potentially be produced in a similar fashion within the psychology field. One of the most important principles of IMT teaches that control or influence over others does not exist. This principle alone differentiates IMT from the traditional model of psychology, which is dedicated to changing an individual (through influence). Five case studies will be presented in which individuals have used the principles of IMT to overcome severe issues such as substance abuse and depression. Each case study is unique and exhibits a remarkable change within each individual.
Contributors: Malladi, Basavanth (Author) / Kashiwagi, Dean (Thesis director) / Sullivan, Kenneth (Committee member) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description

As green buildings become more popular, the challenge for structural engineers is to move beyond simply green to develop sustainable, high-performing buildings that are more than just environmentally friendly. For decades, Portland cement-based products have been the most commonly used construction materials in the world; as a result, cement production is a significant source of global carbon dioxide (CO2) emissions and of environmental impacts at all stages of the process. In recent years, the increasing cost of energy and resource supplies, and concerns related to greenhouse gas emissions and environmental impacts, have ignited more interest in utilizing waste and by-product materials as the primary ingredient to replace ordinary Portland cement in concrete systems. The environmental benefits of cement replacement are enormous, including the diversion of non-recycled waste from landfills to useful applications, the reduction in non-renewable energy consumption for cement production, and the corresponding reduction in greenhouse gas emissions. In the vast available body of literature, concretes consisting of activated fly ash or slag as the binder have been shown to have high compressive strengths and resistance to fire and chemical attack. This research focuses on utilizing fly ash, a by-product of coal-fired power plants, along with different alkaline solutions to form a final product with properties comparable or superior to those of ordinary Portland cement concrete. Fly ash mortars using different concentrations of sodium hydroxide and waterglass were dry- and moist-cured at different temperatures prior to being subjected to uniaxial compressive loading. Since moist curing continuously supplies water for the hydration process of activated fly ash mortars while preventing thermal shrinkage and cracking, those samples were more durable and demonstrated noticeably higher compressive strength.
The influence of the concentration of the activating agent (4 or 8 M sodium hydroxide solution), at an activator-to-binder ratio of 0.40, on the compressive strengths of concretes containing Class F fly ash as the sole binder is analyzed. Furthermore, liquid sodium silicate (waterglass) with silica moduli of 1.0 and 2.0, at activator-to-binder ratios of 0.04 and 0.07, was also studied to understand its contribution to the strength development of the activated fly ash concrete. Statistical analysis of the compressive strength results shows that the available alkali concentration has a larger influence on the compressive strengths of activated fly ash concretes than the curing parameters (elevated temperature, condition, and duration).
Contributors: Banh, Kingsten Chi (Author) / Neithalath, Narayanan (Thesis director) / Rajan, Subramaniam (Committee member) / Mobasher, Barzin (Committee member) / Civil, Environmental and Sustainable Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2013-05