Matching Items (266)
Description

The construction industry faces important performance problems such as low productivity, poor quality of work, and work-related accidents and injuries. Creating a high reliability work system that is simultaneously highly productive and exceptionally safe has become a challenge for construction practitioners and scholars. The main goal of this dissertation was to create an understanding of high reliability construction work systems based on lessons from the production practices of high performance work crews. High performance work crews are defined as work crews that consistently reach and maintain a high level of productivity and an exceptional safety record while delivering high-quality work. This study was conceptualized on findings from High Reliability Organizations, with a primary focus on lean construction, human factors, safety, and error management. Toward the research objective, this dissertation answered two major questions. First, it explored the task factors and project attributes that shape and increase workers' task demands and consequently affect workers' safety, production, and quality performance. Second, it explored and investigated the production practices of construction field supervisors (foremen) to understand how successful supervisors regulate task and project demands to create a highly reliable work process. Employing a case study methodology, this study explored and analyzed the work practices of six work crews and crew supervisors in different trades, including concrete, masonry, and hot asphalt roofing construction. The case studies included one exceptional and one average-performing crew from each trade. Four major factors were considered in the selection of exceptional crew supervisors: (1) safety performance, (2) production performance, (3) quality performance, and (4) the level of difficulty of the projects they supervised. The data collection was carried out in three phases: (1) interviews with field supervisors to understand their production practices, (2) surveys and interviews with workers to understand their perceptions and to identify the major sources of task demands, and (3) several close field observations. Each trade's specific findings, including task demands, project attributes, and production practices used by crew supervisors, are presented in a separate chapter. Finally, the production practices that converged to create high reliability work systems are summarized and presented in nine major categories.
Contributors: Memarian, Babak (Author) / Bashford, Howard (Thesis advisor) / Boren, Rebecca (Committee member) / Wiezel, Avi (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The Smart Grid initiative describes the collaborative effort to modernize the U.S. electric power infrastructure. Modernization efforts incorporate digital data and information technology to effectuate control, enhance reliability, encourage small customer-sited distributed generation (DG), and better utilize assets. The Smart Grid environment is envisioned to include distributed generation, flexible and controllable loads, and bidirectional communications using smart meters and other technologies. Sensory technology may be utilized as a tool that enhances operation, including operation of the distribution system. Addressing this point, a distribution system state estimation algorithm is developed in this dissertation. The state estimation algorithm developed here utilizes distribution system modeling techniques to calculate a vector of state variables for a given set of measurements. Measurements include active and reactive power flows, voltage and current magnitudes, and phasor voltages with magnitude and angle information. The state estimator is envisioned as a tool embedded in distribution substation computers as part of distribution management systems (DMS); the estimator acts as a supervisory layer for a number of applications including distribution automation (DA), energy management, control, and switching. The distribution system state estimator is developed in full three-phase detail, and the effect of mutual coupling and single-phase laterals and loads on the solution is calculated. The network model comprises a full three-phase admittance matrix and a subset of equations that relates measurements to system states. Network equations and variables are represented in rectangular form; thus, a linear calculation procedure may be employed. When initialized to the vector of measured quantities and approximated non-metered load values, the calculation procedure is non-iterative. This dissertation presents background information used to develop the state estimation algorithm, considerations for distribution system modeling, and the formulation of the state estimator. Estimator performance for various power system test beds is investigated. Sample applications of the estimator to Smart Grid systems are presented. Applications include monitoring, enabling demand response (DR), voltage unbalance mitigation, and enhancing voltage control. Illustrations of these applications are shown. Also, examples of enhanced reliability and restoration using a sensory-based automation infrastructure are shown.
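As a rough illustration of the linear, non-iterative estimation procedure described above, the sketch below solves a measurement model of the form z = Hx + e by weighted least squares. The measurement matrix, measurement values, and variances are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

# Minimal sketch of a linear state estimator: measurements z relate to the
# state vector x (rectangular voltage components) through z = H x + e.
# With a linear model, the weighted least-squares solution is closed-form.
H = np.array([[1.0, 0.0,  0.0,  0.0],   # illustrative measurement matrix
              [0.0, 1.0,  0.0,  0.0],
              [1.0, 0.0, -1.0,  0.0],
              [0.0, 1.0,  0.0, -1.0],
              [0.0, 0.0,  1.0,  0.0]])
z = np.array([1.02, 0.01, 0.03, 0.02, 0.99])                   # illustrative measurements
W = np.diag(1.0 / np.array([1e-4, 1e-4, 4e-4, 4e-4, 1e-4]))    # weights = 1 / variance

# Normal equations: (H^T W H) x = H^T W z, solved in one step (non-iterative)
G = H.T @ W @ H
x_hat = np.linalg.solve(G, H.T @ W @ z)
print("Estimated state (rectangular components):", x_hat)
```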
Contributors: Haughton, Daniel Andrew (Author) / Heydt, Gerald T (Thesis advisor) / Vittal, Vijay (Committee member) / Ayyanar, Raja (Committee member) / Hedman, Kory W (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

The aim of this study was to investigate the microstructural sensitivity of the statistical distribution and diffusion kurtosis (DKI) models of non-monoexponential signal attenuation in the brain using diffusion-weighted MRI (DWI). We first developed a simulation of 2-D water diffusion inside simulated tissue consisting of semi-permeable cells of variable size. We simulated a DWI acquisition using a pulsed gradient spin echo (PGSE) pulse sequence, and fitted the models to the simulated DWI signals using b-values up to 2500 s/mm². For comparison, we calculated the apparent diffusion coefficient (ADC) of the monoexponential model (b-value = 1000 s/mm²). In separate experiments, we varied the cell size (5, 10, and 15 μm), cell volume fraction (0.50, 0.65, and 0.80), and membrane permeability (0.001, 0.01, and 0.1 mm/s) to study how the fitted parameters tracked simulated microstructural changes. The ADC was sensitive to all the simulated microstructural changes except the decrease in membrane permeability. The σ_stat of the statistical distribution model increased exclusively with a decrease in cell volume fraction. The K_app of the DKI model increased exclusively with decreased cell size and decreased with increasing membrane permeability. These results suggest that the non-monoexponential models have different, specific microstructural sensitivities, and a combination of the models may give insight into the microstructural underpinning of tissue pathology. Faster PROPELLER DWI acquisitions, such as Turboprop and X-prop, remain subject to phase errors inherent to a gradient echo readout, which ultimately limits the applied turbo factor and thus scan time reductions. This study introduces a new phase correction to Turboprop, called Turboprop+. This technique employs calibration blades, which generate 2-D phase error maps and are rotated in accordance with the data blades, to correct phase errors arising from off-resonance and system imperfections. The results demonstrate that, with a small increase in scan time for collecting calibration blades, Turboprop+ showed superior immunity to off-resonance-related artifacts compared to standard Turboprop and the recently proposed X-prop at a high turbo factor (turbo factor = 7). Thus, low specific absorption rate (SAR) and short scan time can be achieved in Turboprop+ using a high turbo factor, while off-resonance-related artifacts are minimized.
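For readers unfamiliar with the DKI model mentioned above, the sketch below fits the standard kurtosis signal equation S(b) = S0·exp(−b·D + (1/6)·b²·D²·K) to synthetic data over the same b-value range. The signal values, noise level, and parameter values are synthetic and chosen only to illustrate how D_app and K_app are extracted.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting the diffusion kurtosis (DKI) signal model to DWI data:
# S(b) = S0 * exp(-b*D + (1/6) * b^2 * D^2 * K). All values are synthetic.
def dki_signal(b, S0, D, K):
    return S0 * np.exp(-b * D + (1.0 / 6.0) * (b ** 2) * (D ** 2) * K)

b_values = np.array([0, 500, 1000, 1500, 2000, 2500])   # s/mm^2
D_true, K_true = 1.0e-3, 0.8                             # mm^2/s, dimensionless
signal = dki_signal(b_values, 1.0, D_true, K_true)
signal += np.random.default_rng(0).normal(0, 0.005, signal.shape)  # measurement noise

popt, _ = curve_fit(dki_signal, b_values, signal, p0=[1.0, 1.0e-3, 1.0])
S0_fit, D_fit, K_fit = popt
print(f"Fitted D_app = {D_fit:.2e} mm^2/s, K_app = {K_fit:.2f}")
```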
Contributors: Lee, Chu-Yu (Author) / Debbins, Josef P (Thesis advisor) / Bennett, Kevin M (Thesis advisor) / Karam, Lina (Committee member) / Pipe, James G (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Within the vast area of study in Organizational Change lies the industrial application of Change Management, which includes understanding both resisters and facilitators of organizational change. This dissertation presents an approach to gauging levels of change as they relate to both external and internal organizational factors. The approach is tested by introducing the same change initiative model, which attempts to improve transparency and accountability, across six different organizations and measuring the varying results of change. The change model itself consists of an interdisciplinary approach that emphasizes education in advanced organizational measurement techniques as a fundamental driver of converging change. The observations are documented in real-time case studies of the six organizations as they progressed through the change process. This research also introduces a scaled metric for determining preliminary levels of change and endeavors to test both internal and external, or environmental, factors of change. A key contribution of the work is the comparative analysis of observed and surveyed data, in which a grounded theory analysis is used to help answer the question of what the factors of change in organizations are. This work is foundational in real-time observational studies and holds promise for future contributions that would further elaborate on the phenomenon of prescribed organizational change.
Contributors: Stone, Brian (Author) / Sullivan, Kenneth T. (Thesis advisor) / Verdini, William (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This thesis examines how high-density polyethylene (HDPE) pipe installed by horizontal directional drilling (HDD) and by traditional open-trench (OT) construction techniques behaves differently in saturated soil conditions typical of river crossings. Design fundamentals for depth of cover are analogous between HDD and OT; however, how the product pipe is situated in the soil medium is vastly different. This distinction in pipe bedding can produce significant differences in the post-installation phase. The research was inspired by several incidents involving plastic pipe installed beneath rivers by HDD in which the pipeline penetrated the overburden soil and floated to the surface after installation. It was hypothesized that pipes installed by HDD have a larger effective volume due to the presence of low-permeability, bentonite-based drilling fluids in the annular space upon completion of the installation. This increased effective volume increases the buoyant force on the pipe compared to the same product diameter installed by OT methods, especially where the pipe is installed below the groundwater table. To simulate these conditions, a full-scale experiment was constructed to model the behavior of buried pipelines submerged in saturated silty soils. A full factorial design was developed to analyze scenarios with pipe diameters of 50, 75, and 100 mm installed at varying depths in a silty soil simulating an alluvial deposit. Contrary to the experimental hypothesis, pipes installed by OT required a greater depth of cover to prevent pipe flotation than similarly sized pipe installed by HDD. The results suggested that pipes installed by HDD are better suited to survive changing depths of cover. In addition, finite element method (FEM) modeling was conducted to understand soil stress patterns in the soil overburden post-installation. Maximum soil stresses in the overburden were compared between post-installation OT and HDD scenarios to understand the pattern of total soil stress induced by the two construction methods. The results of the analysis showed that OT installation methods induced a greater total soil stress than HDD installation methods; the annular space in HDD resulted in less soil stress in the overburden. Furthermore, the diameter of the HDD annular space influenced the soil stress in the overburden, while the density of the drilling fluids did not greatly affect soil stress variations. Thus, the diameter of the annular space could impact soil stress patterns in HDD installations post-construction. With these findings, engineers and designers can plan, design, and construct more efficient river-crossing projects.
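The sketch below illustrates the simple per-metre uplift balance behind the flotation hypothesis discussed above: buoyancy on the effective pipe volume versus the pipe weight plus the submerged weight of the soil prism over the pipe. It is only the textbook check that motivated the hypothesis (the experiments reported above found this simple picture insufficient), and all unit weights, pipe weights, and diameters are assumed values, not data from the thesis.

```python
import math

# Simplified flotation check for a buried pipe below the water table (per metre):
# uplift on the effective pipe volume vs. pipe weight plus buoyant weight of the
# soil prism above the pipe. All values below are illustrative assumptions.
GAMMA_WATER = 9.81e3          # N/m^3
GAMMA_SOIL_BUOYANT = 8.0e3    # N/m^3, submerged unit weight of silty soil (assumed)

def flotation_ok(effective_od_m, pipe_weight_n_per_m, cover_m):
    """Return True if cover prevents flotation under this simplified balance."""
    uplift = GAMMA_WATER * math.pi * effective_od_m ** 2 / 4.0
    soil_resistance = GAMMA_SOIL_BUOYANT * effective_od_m * cover_m
    return pipe_weight_n_per_m + soil_resistance >= uplift

# 100 mm pipe: OT uses the pipe OD; for HDD the effective OD includes the annulus.
print("OT,  0.3 m cover:", flotation_ok(0.100, 15.0, 0.3))
print("HDD, 0.3 m cover:", flotation_ok(0.150, 15.0, 0.3))
```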
Contributors: Cho, Chin-sŏng (Author) / Ariaratnam, Samuel (Thesis advisor) / Lueke, Jason (Thesis advisor) / Arizona State University (Publisher)
Created: 2012
Description

The constant scaling of supply voltages in state-of-the-art CMOS processes has led to severe limitations for many analog circuit applications. Some CMOS processes have addressed this issue by adding high-voltage MOSFETs to the process. Although this can be a completely viable solution, it usually requires changes to the process flow or additional steps, which in turn increase fabrication costs. Si-MESFETs (silicon metal-semiconductor field-effect transistors) from Arizona State University (ASU), on the other hand, have an inherent high-voltage capability and can be added to any silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) CMOS process at no added cost. This has been demonstrated at five different commercial foundries on technologies ranging from 0.5 to 0.15 μm. Another critical issue facing CMOS processes on insulated substrates is the scaling of the thin silicon channel. Consequently, the future direction of SOI/SOS CMOS transistors may trend away from partially depleted (PD) transistors and towards fully depleted (FD) devices. FD-CMOS is already being implemented in multiple applications due to its very low power capability. Since the FD-CMOS market only figures to grow, it is appropriate that MESFETs also be developed for these processes. The first part of this thesis will focus on the device aspects of both PD- and FD-MESFETs, including their layout structure, DC and RF characteristics, and breakdown voltage. The second half will then shift the focus towards implementing both types of MESFETs in an analog circuit application. Aside from their high breakdown capability, MESFETs also feature depletion-mode operation, easily adjusted but well-controlled threshold voltages, and fT values up to 45 GHz. These unique characteristics allow certain designs that were previously difficult to implement or prohibitively expensive using conventional technologies to now be achieved. One such application is the low-dropout regulator (LDO). By utilizing an n-channel MESFET as the pass transistor, an LDO featuring very low dropout voltage, fast transient response, and stable operation can be achieved without an external capacitor. With the focus of this thesis being MESFET-based LDOs, the device discussion will be mostly tailored towards optimally designing MESFETs for this particular application.
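As a back-of-envelope illustration of the "very low dropout voltage" point above: near dropout, a pass transistor behaves roughly as a resistor, so V_dropout ≈ I_load × R_on, and a depletion-mode n-channel device conducts with V_GS ≤ 0, so its gate can be driven without a charge pump. The on-resistance and load currents in the sketch below are assumed values for illustration, not measured MESFET data.

```python
# Back-of-envelope dropout estimate: V_dropout ~= I_load * R_on near dropout.
# R_ON is an assumed on-resistance for an n-channel depletion-mode pass device;
# the load currents are illustrative only.
R_ON = 0.5                          # ohms (assumed)
LOAD_CURRENTS = [0.01, 0.05, 0.1, 0.2]   # amperes

for i_load in LOAD_CURRENTS:
    v_dropout = i_load * R_ON
    print(f"I_load = {i_load * 1e3:5.1f} mA -> V_dropout ~= {v_dropout * 1e3:5.1f} mV")
```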
Contributors: Lepkowski, William (Author) / Thornton, Trevor (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Goryll, Michael (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

The rheological properties at liquid-liquid interfaces are important in many industrial processes such as manufacturing foods, pharmaceuticals, cosmetics, and petroleum products. This dissertation focuses on the study of linear viscoelastic properties at liquid-liquid interfaces by tracking the thermal motion of particles confined at the interfaces. The technique of interfacial microrheology is first developed using one- and two-particle tracking. In one-particle interfacial microrheology, the rheological response at the interface is measured from the motion of individual particles. One-particle interfacial microrheology at polydimethylsiloxane (PDMS) oil-water interfaces depends strongly on the surface chemistry of the tracer particles. In contrast, by tracking the correlated motion of particle pairs, two-particle interfacial microrheology significantly minimizes the effects of tracer particle surface chemistry and particle size. Two-particle interfacial microrheology is further applied to study the linear viscoelastic properties of immiscible polymer-polymer interfaces. The interfacial loss and storage moduli at PDMS-polyethylene glycol (PEG) interfaces are measured over a wide frequency range. The zero-shear interfacial viscosity, estimated from the Cross model, falls between the bulk viscosities of the two individual polymers. Surprisingly, the interfacial relaxation time is observed to be an order of magnitude larger than that of the bulk PDMS polymers. To explore the fundamental basis of interfacial nanorheology, molecular dynamics (MD) simulations are employed to investigate nanoparticle dynamics. The diffusion of single nanoparticles in pure water and low-viscosity PDMS oils is reasonably consistent with the prediction of the Stokes-Einstein equation. To demonstrate the potential of nanorheology based on the motion of nanoparticles, the shear moduli and viscosities of the bulk phases and interfaces are calculated from single-nanoparticle tracking. Finally, the competing influences of nanoparticles and surfactants on other interfacial properties, such as interfacial thickness and interfacial tension, are also studied by MD simulations.
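The sketch below illustrates the basic single-particle tracking step referred to above: compute the mean squared displacement (MSD) of a 2-D trajectory, extract a diffusion coefficient from MSD = 4Dt, and back out a viscosity from the 3-D Stokes-Einstein relation D = kT/(6πηa). The trajectory is synthetic Brownian motion, the tracer radius and frame interval are assumed, and the bulk Stokes-Einstein form is only an approximation for particles at an interface.

```python
import numpy as np

# One-particle tracking sketch: MSD of a synthetic 2-D trajectory, diffusion
# coefficient from MSD = 4*D*t, viscosity from D = kT / (6*pi*eta*a).
kB, T = 1.380649e-23, 298.15      # J/K, K
radius = 0.5e-6                    # m, tracer radius (assumed)
dt = 0.01                          # s, frame interval (assumed)

rng = np.random.default_rng(1)
D_true = kB * T / (6 * np.pi * 1.0e-3 * radius)       # water-like viscosity assumed
steps = rng.normal(0, np.sqrt(2 * D_true * dt), (10000, 2))
traj = np.cumsum(steps, axis=0)                        # synthetic Brownian path

lags = np.arange(1, 50)
msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags])
D_est = np.polyfit(lags * dt, msd, 1)[0] / 4.0         # slope / 4 for 2-D diffusion
eta_est = kB * T / (6 * np.pi * D_est * radius)
print(f"D = {D_est:.3e} m^2/s, eta = {eta_est * 1e3:.2f} mPa*s")
```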
Contributors: Song, Yanmei (Author) / Dai, Lenore L (Thesis advisor) / Jiang, Hanqing (Committee member) / Lin, Jerry Y S (Committee member) / Raupp, Gregory B (Committee member) / Sierks, Michael R (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Low frequency oscillations (LFOs) are recognized as one of the most challenging problems in electric grids as they limit power transfer capability and can result in system instability. In recent years, the deployment of phasor measurement units (PMUs) has increased the accessibility of time-synchronized wide-area measurements, which has, in turn, enabled the effective detection and control of the oscillatory modes of the power system. This work assesses the stability improvements that can be achieved through the coordinated wide-area control of power system stabilizers (PSSs), static VAr compensators (SVCs), and supplementary damping controllers (SDCs) of high-voltage DC (HVDC) lines for damping electromechanical oscillations in a modern power system. The improved damping is achieved by designing different types of coordinated wide-area damping controllers (CWADCs) that employ partial state feedback. The first design methodology uses a linear matrix inequality (LMI)-based mixed H2/H∞ control that is robust across multiple operating scenarios. To counteract the negative impact of communication failure or missing PMU measurements on the designed control, a scheme to identify an alternate set of feedback signals is proposed. Additionally, the impact of delays on the performance of the control design is investigated. The second approach is motivated by the increasing popularity of artificial intelligence (AI) in enhancing the performance of interconnected power systems. Two different wide-area coordinated control schemes are developed using deep neural networks (DNNs) and deep reinforcement learning (DRL), while accounting for the uncertainties present in the power system. The DNN-CWADC learns to make control decisions using supervised learning, with a training dataset consisting of polytopic controllers designed with the help of LMI-based mixed H2/H∞ optimization. The DRL-CWADC learns to adapt to the system uncertainties based on its continuous interaction with the power system environment by employing an advanced version of the state-of-the-art deep deterministic policy gradient (DDPG) algorithm, referred to as bounded exploratory control-based DDPG (BEC-DDPG). The studies performed on a 29-machine, 127-bus equivalent model of the Western Electricity Coordinating Council (WECC) system, embedded with different types of damping controls, have demonstrated the effectiveness and robustness of the proposed CWADCs.
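As context for the damping objective above, the sketch below shows the standard screening step for LFO modes: given an eigenvalue λ = σ + jω of a linearized system model, the oscillation frequency is ω/(2π) and the damping ratio is ζ = −σ/√(σ² + ω²); modes in roughly the 0.1-2 Hz band with low ζ are the usual targets for wide-area damping control. The eigenvalues and the 5% threshold below are illustrative assumptions, not results from this work.

```python
import numpy as np

# LFO mode screening: frequency and damping ratio from eigenvalues of a
# linearized power system model. The eigenvalues below are illustrative only.
modes = np.array([-0.15 + 4.3j, -0.05 + 2.1j, -1.20 + 8.0j, -0.02 + 3.5j])

for lam in modes:
    freq = lam.imag / (2 * np.pi)          # Hz
    zeta = -lam.real / abs(lam)            # damping ratio
    flag = "poorly damped" if (0.1 <= freq <= 2.0 and zeta < 0.05) else "ok"
    print(f"f = {freq:4.2f} Hz, zeta = {zeta * 100:5.2f} % -> {flag}")
```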
Contributors: Gupta, Pooja (Author) / Pal, Anamitra (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Zhang, Junshan (Committee member) / Hedman, Mojdeh (Committee member) / Wu, Meng (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Ensuring reliable operation of large power systems subjected to multiple outages is a challenging task because of the combinatorial nature of the problem. Traditional methods of steady-state security assessment in power systems involve contingency analysis based on AC or DC power flows. However, power flow-based contingency analysis is not fast enough to evaluate all contingencies for real-time operations. Therefore, real-time contingency analysis (RTCA) only evaluates a subset of the contingencies (called the contingency list) and hence might miss critical contingencies that lead to cascading failures. This dissertation proposes a new graph-theoretic approach, called the feasibility test (FT) algorithm, for analyzing whether a contingency will create a saturated or overloaded cut-set in a meshed power network; a cut-set denotes a set of lines which, if tripped, separates the network into two disjoint islands. A novel feature of the proposed approach is that it lowers the solution time significantly, making the approach viable for an exhaustive real-time evaluation of the system. Detecting saturated cut-sets in the power system is important because they represent the vulnerable bottlenecks in the network. The robustness of the FT algorithm is demonstrated on a 17,000+ bus model of the Western Interconnection (WI). Following the detection of post-contingency cut-set saturation, a two-component methodology is proposed to enhance the reliability of large power systems during a series of outages. The first component combines the proposed FT algorithm with RTCA to create an integrated corrective action (iCA), whose goal is to secure the power system against post-contingency cut-set saturation as well as critical branch overloads. The second component employs only the results of the FT to create a relaxed corrective action (rCA) that quickly secures the system against saturated cut-sets. The first component is more comprehensive than the second, but the latter is computationally more efficient. The effectiveness of the two components is evaluated based on the number of cascade-triggering contingencies alleviated and the computation time. Analysis of different case studies on the IEEE 118-bus and 2000-bus synthetic Texas systems indicates that the proposed two-component methodology enhances the scope and speed of power system security assessment during multiple outages.
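The sketch below is a much simplified illustration of the idea of cut-set saturation, not the dissertation's FT algorithm: after removing a contingency's branches, a max-flow / min-cut computation checks whether the remaining network can still carry a required transfer between two areas; if the maximum flow falls below the requirement, some cut-set is saturated. The toy network, line capacities, and transfer requirement are invented for illustration.

```python
import networkx as nx

# Toy illustration of post-contingency transfer feasibility via max-flow.
# Each transmission line is modeled as a pair of directed edges with equal
# capacity (MW); all numbers are illustrative only.
lines = [("A", "B", 100), ("A", "C", 80), ("B", "D", 60), ("C", "D", 90), ("B", "C", 40)]
G = nx.DiGraph()
for u, v, cap in lines:
    G.add_edge(u, v, capacity=cap)
    G.add_edge(v, u, capacity=cap)

REQUIRED_TRANSFER = 140   # MW needed from area A to area D (assumed)

def transfer_feasible(graph, outaged_lines):
    """Check whether the required A->D transfer survives the given outages."""
    g = graph.copy()
    for u, v in outaged_lines:
        g.remove_edges_from([(u, v), (v, u)])
    flow_value, _ = nx.maximum_flow(g, "A", "D")
    return flow_value >= REQUIRED_TRANSFER, flow_value

print(transfer_feasible(G, []))              # base case: feasible
print(transfer_feasible(G, [("A", "C")]))    # single-line outage saturates a cut-set
```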
Contributors: Sen Biswas, Reetam (Author) / Pal, Anamitra (Thesis advisor) / Vittal, Vijay (Committee member) / Undrill, John (Committee member) / Wu, Meng (Committee member) / Zhang, Yingchen (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Construction project teams expend substantial effort to develop scope definition during the front end planning phase of building projects but oftentimes neglect to sufficiently plan for the complexities of tribal building projects. A needs assessment conducted by the author, comprising interviews with practitioners familiar with construction on tribal lands, revealed the need for a front end planning (FEP) process to assess scope definition of capital projects on tribal lands. This dissertation summarizes the motivations and efforts to develop a front end planning tool for tribal building projects: the Project Definition Rating Index (PDRI) for Tribal Building Projects. The author convened a research team to review, analyze, and adapt an existing building-projects-focused FEP tool, the PDRI – Building Projects, and other resources to develop a set of 67 specific elements relevant to the planning of tribal building projects. The author supported the facilitation of seven workshops in which 20 industry professionals evaluated the element descriptions and provided element prioritization data that was statistically analyzed to develop a preliminary weighted score sheet corresponding to the element descriptions. Given that the author was only able to collect complete data from 11 projects, definitively determining element weights was not possible. Therefore, the author leveraged a Delphi study to test the PDRI – Tribal Building Projects. Delphi study results indicate that the PDRI – Tribal Building Projects element descriptions fully address the scope of tribal building projects, and 75 percent of panelists agreed they would use this tool on their next tribal project. The author also explored the PDRI – Tribal Building Projects tool through the lens of the Diné (Navajo) philosophy of Sa'ąh Naagháí Bik'eh Hózhóón (SNBH) and the guiding principles of Nitsáhákees (thinking), Nahat'á (planning), Iiná (living), and Sihasin (assurance/reflection). The results of the author's research provide several contributions to the American Indian Studies, front end planning, and tribal building projects bodies of knowledge: (1) defining unique features of tribal projects, (2) explicitly documenting the synergies between Western and Diné ways of planning, and (3) creating a tool to assist in planning capital projects on tribal lands in the American Southwest in support of improved cost performance.
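For readers unfamiliar with how a PDRI-style weighted score sheet works, the sketch below shows the general scoring mechanic: each element is rated on a definition level and contributes the weight assigned to that level, and a lower total score indicates better-defined scope. The element names, levels, and weights are invented for illustration and are not the actual 67 elements or weights of the PDRI – Tribal Building Projects.

```python
# Toy sketch of a PDRI-style weighted score sheet. Each element is rated on a
# definition level (1 = fully defined ... 5 = not defined) and contributes the
# weight associated with that level; lower totals mean better-defined scope.
# Element names and weights below are invented for illustration only.
element_weights = {
    "Site assessment":         {1: 2, 2: 6, 3: 12, 4: 20, 5: 30},
    "Tribal approvals":        {1: 3, 2: 8, 3: 15, 4: 25, 5: 38},
    "Cultural resources plan": {1: 2, 2: 5, 3: 10, 4: 18, 5: 27},
}

ratings = {"Site assessment": 2, "Tribal approvals": 4, "Cultural resources plan": 1}

score = sum(element_weights[name][level] for name, level in ratings.items())
print(f"PDRI-style score for the rated elements: {score}")
```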
Contributors: Arviso, Brianne (Author) / Parrish, Kristen (Thesis advisor) / Gibson, George E. (Committee member) / Hale, Michelle (Committee member) / Arizona State University (Publisher)
Created: 2022