Matching Items (577)
Description
The study of soft magnetic materials has been growing in popularity in recent years. Driving this interest are new applications for traditional electrical power-management components, such as inductors and transformers, which must be scaled down to the micro and nano scale while their operating frequencies scale up to the gigahertz range and beyond. The exceptional magnetic properties of these materials make them highly effective in small-component applications, but their ability to provide highly effective shielding has not been so thoroughly considered. Most shielding is done with traditional metals, such as aluminum, because of their relatively low cost and the ease with which they can be shaped to meet size and dimensional requirements.

This research project focuses on analyzing the variation in shielding effectiveness and electromagnetic field effects of a thin film of Cobalt Zirconium Tantalum Boron (CZTB) in the band of frequencies most likely to require innovative solutions to long-standing problems of noise and interference. The measurements include near H-field attenuation and field effects, far-field shielding, and backscatter. Minor variations in the thickness and layering of the sputter-deposited film can significantly change the electromagnetic signature of devices that radiate energy through the material.
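Attenuation figures of this kind are conventionally expressed in decibels as the ratio of the field measured without the shield to the field measured behind it. As a minimal sketch of that conversion (the function name and sample values here are hypothetical, not taken from the thesis):

```python
import math

def attenuation_db(field_unshielded: float, field_shielded: float) -> float:
    """Shielding effectiveness in dB from two field-magnitude measurements.

    Field quantities (E or H) use the 20*log10 ratio; power quantities
    would use 10*log10 instead.
    """
    return 20.0 * math.log10(field_unshielded / field_shielded)

# Hypothetical example: the H-field magnitude drops from 1.0 A/m to 0.05 A/m
# behind the film, i.e. roughly 26 dB of attenuation.
print(f"SE = {attenuation_db(1.0, 0.05):.1f} dB")
```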

The material properties presented in this research are H-field attenuation, H-field flux orientation, far-field approximation, E-field vector directivity, H-field vector directivity, and backscatter magnitude. The results are presented, analyzed, and explained using characterization techniques. Future work includes the effect of sputter-deposition orientation, application to devices, and applicability in mitigating specific noise signals beyond the 5G band.
Contributors: Miller, Phillip Carl (Author) / Yu, Hongbin (Thesis advisor) / Aberle, James T. (Committee member) / Blain Christen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Although machine learning supports the development of computer vision applications by shortening the development cycle, finding a general learning algorithm that solves a wide range of applications is still bounded by the "no free lunch" theorem. The search for the right algorithm to solve a specific problem is driven by the problem itself, the availability of data, and many other requirements.

Automated visual inspection (AVI) systems represent a major part of these challenging computer vision applications. They are attracting growing interest in the manufacturing industry as a way to detect defective products and keep them from reaching customers. The process of defect detection and classification in semiconductor units is challenging due to the acceptable variations that the manufacturing process introduces. Further variations are typically introduced by the optical inspection systems themselves, through changes in lighting conditions and misalignment of the imaged units, which makes defect detection more challenging still.

In this thesis, a BagStack classification framework is proposed, which makes use of stacking and bagging concepts to handle both variance and bias errors. The classifier is designed to handle the data-imbalance and overfitting problems by adaptively transforming the multi-class classification problem into multiple binary classification problems, applying a bagging approach to train a set of base learners for each specific problem, adaptively specifying the number of base learners assigned to each problem, adaptively specifying the number of samples to use from each class, applying a novel data-imbalance-aware cross-validation technique to generate the meta-data while taking into account the data imbalance problem at the meta-data level, and, finally, using a multi-response random forest regression classifier as a meta-classifier. The BagStack classifier makes use of multiple features to solve the defect classification problem. In order to detect defects, a locally adaptive statistical background modeling approach is proposed. The proposed BagStack classifier outperforms state-of-the-art image classification techniques on our dataset in terms of overall classification accuracy and average per-class classification accuracy. The proposed detection method achieves high performance on the considered dataset in terms of recall and precision.
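The stacking-plus-bagging structure described above can be sketched with scikit-learn primitives. The following skeleton is only illustrative and makes simplifying assumptions the thesis does not (a plain one-vs-rest decomposition, a fixed number of base learners per problem, and a single holdout split instead of the imbalance-aware cross-validation used to generate the meta-data):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def fit_bagstack_sketch(X, y, n_estimators=25):
    """One bagged binary learner per class (one-vs-rest), then a
    multi-response random-forest regressor as the meta-classifier."""
    X_base, X_meta, y_base, y_meta = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)
    classes = np.unique(y)
    base_learners = []
    for c in classes:
        clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_estimators)
        clf.fit(X_base, (y_base == c).astype(int))  # binary problem for class c
        base_learners.append(clf)
    # Meta-features: each base learner's probability that the sample is "its" class.
    Z = np.column_stack([clf.predict_proba(X_meta)[:, 1] for clf in base_learners])
    meta = RandomForestRegressor().fit(
        Z, np.eye(len(classes))[np.searchsorted(classes, y_meta)])
    return classes, base_learners, meta

def predict_bagstack_sketch(model, X):
    classes, base_learners, meta = model
    Z = np.column_stack([clf.predict_proba(X)[:, 1] for clf in base_learners])
    return classes[np.argmax(meta.predict(Z), axis=1)]
```

In the thesis, the number of base learners and the per-class sample counts are chosen adaptively, and the meta-data are produced by an imbalance-aware cross-validation pass; the holdout split above is only a placeholder for that machinery.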
Contributors: Haddad, Bashar Muneer (Author) / Karam, Lina (Thesis advisor) / Li, Baoxin (Committee member) / He, Jingrui (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The first task faced by many teams endeavoring to solve complex scientific problems is to seek funding for their research venture. Often, this necessitates forming new, geographically dispersed teams of researchers from multiple disciplines. While the team science and organizational management fields have studied project teams extensively, nascent teams are underrepresented in the literature. Nonetheless, understanding proposal team dynamics is important because if left unaddressed, obstacles may persist beyond the funding decision and undermine the possibility of team successes adjunctive to funding. Participant observation of more than 100 multi-investigator proposal teams and semi-structured interviews with six leaders of multidisciplinary proposal teams identified investigator motivations for collaboration, obstacles to collaboration, and indicators of proposal team success. The motivations ranged from technical interests in the research question to a desire to have impact beyond oneself. The obstacles included inconsistent or non-existent communication protocols, unclear processes for producing and reviewing documents, ad hoc file and citation management systems, short and stressful time horizons, ambiguous decision-making procedures, and uncertainty in establishing a shared vision. While funding outcome was the most objective indicator of a proposal team’s success, other success indicators emerged, including whether the needs of the team member(s) had been met and the willingness of team members to continue collaborating. This multi-dimensional definition of success makes it possible for teams to simultaneously be considered successes and failures. As a framework to analyze and overcome obstacles, this work turned to the United States military’s command and control (C2) approach, which relies on specifying the following elements to increase an organization’s agility: patterns of interaction, distribution of information, and allocation of decision rights. To address disciplinary differences and varied motivations for collaboration, this work added a fourth element: shared meaning-making. The broader impact of this work is that by implementing a C2 framework to uncover and address obstacles, the proposal experience—from team creation, to idea generation, to document creation, to final submittal—becomes more rewarding for faculty, leading to greater job satisfaction. This in turn will change how university research enterprises create, organize, and share knowledge to solve complex problems in the post-industrial information age.
Contributors: Passantino, Laurel (Author) / Seager, Thomas P (Thesis advisor) / Cantwell, Elizabeth R (Committee member) / Johnston, Erik (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In the structural engineering industry, the design of structures typically follows a prescriptive approach in which engineers conform to a series of code requirements that stipulate the design process. Prescriptive design is tested, reliable, and understood by practically every structural engineer in the industry; however, in recent history a new method of design has started to gain traction among certain groups of engineers. Performance-based design is a reversal of the prescriptive approach in that it allows engineers to set performance goals and work to prove that their proposed designs meet the criteria they have established. To many, it is an opportunity for growth in the structural design industry. Currently, performance-based design is most commonly utilized in regions where seismic activity plays an important role in the design process. Due to its flexible nature, performance-based design has proven extremely useful when applied to unique structures such as high-rises, stadiums, and other community-centric designs. With the focus placed on performance objectives rather than current code prescriptions, engineers utilizing performance-based design are better positioned to implement new materials, design processes, and construction methods, and can design their structures more efficiently for a specific site. Despite these many cited benefits, performance-based design is still considered an uncommon practice in the broad view of structural design. To ensure that structural engineers have the proper tools to practice performance-based design in instances where they see fit, a coordinated effort will be required of the engineers themselves, the firms that employ them, the professional societies to which they belong, and the educators who are preparing the next generation. Performance-based design holds the opportunity to elevate the role of the structural engineer to that of an informed member of the community, whose structures not only perform according to design prescriptions but also according to the needs of owners, engineers, and society.
Contributors: Maurer, Cole (Author) / Hjelmstad, Keith (Thesis advisor) / Chatziefstratiou, Efthalia (Committee member) / Dusenberry, Donald (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Low frequency oscillations (LFOs) are recognized as one of the most challenging problems in electric grids as they limit power transfer capability and can result in system instability. In recent years, the deployment of phasor measurement units (PMUs) has increased the accessibility of time-synchronized wide-area measurements, which has, in turn, enabled the effective detection and control of the oscillatory modes of the power system. This work assesses the stability improvements that can be achieved through the coordinated wide-area control of power system stabilizers (PSSs), static VAr compensators (SVCs), and supplementary damping controllers (SDCs) of high voltage DC (HVDC) lines, for damping electromechanical oscillations in a modern power system. The improved damping is achieved by designing different types of coordinated wide-area damping controllers (CWADCs) that employ partial state-feedback. The first design methodology uses a linear matrix inequality (LMI)-based mixed H2/H∞ control that is robust across multiple operating scenarios. To counteract the negative impact of communication failure or missing PMU measurements on the designed control, a scheme to identify an alternate set of feedback signals is proposed. Additionally, the impact of delays on the performance of the control design is investigated. The second approach is motivated by the increasing popularity of artificial intelligence (AI) in enhancing the performance of interconnected power systems. Two different wide-area coordinated control schemes are developed using deep neural networks (DNNs) and deep reinforcement learning (DRL), while accounting for the uncertainties present in the power system. The DNN-CWADC learns to make control decisions using supervised learning, with a training dataset consisting of polytopic controllers designed with the help of LMI-based mixed H2/H∞ optimization. The DRL-CWADC learns to adapt to the system uncertainties based on its continuous interaction with the power system environment by employing an advanced version of the state-of-the-art deep deterministic policy gradient (DDPG) algorithm referred to as bounded exploratory control-based DDPG (BEC-DDPG). Studies performed on a 29-machine, 127-bus equivalent model of the Western Electricity Coordinating Council (WECC) system, embedded with different types of damping controls, have demonstrated the effectiveness and robustness of the proposed CWADCs.
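As a rough illustration of the supervised half of this idea, a DNN-CWADC-style policy could be fitted by regressing from wide-area measurements to the outputs of pre-designed controllers. This is a generic sketch, not the dissertation's architecture; the dimensions, data, and hyperparameters are all hypothetical placeholders:

```python
import torch
import torch.nn as nn

m, k = 12, 3  # hypothetical: m PMU feedback signals in, k damping signals out

policy = nn.Sequential(
    nn.Linear(m, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, k),
)

# X: wide-area measurements; U: the control signals that the LMI-based mixed
# H2/H-infinity (polytopic) controllers produced for those measurements.
X = torch.randn(4096, m)  # placeholder training inputs
U = torch.randn(4096, k)  # placeholder controller outputs

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(policy(X), U)  # imitate the pre-designed controllers
    loss.backward()
    opt.step()
```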
Contributors: Gupta, Pooja (Author) / Pal, Anamitra (Thesis advisor) / Vittal, Vijay (Thesis advisor) / Zhang, Junshan (Committee member) / Hedman, Mojdeh (Committee member) / Wu, Meng (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Ensuring reliable operation of large power systems subjected to multiple outages is a challenging task because of the combinatorial nature of the problem. Traditional methods of steady-state security assessment in power systems involve contingency analysis based on AC or DC power flows. However, power flow based contingency analysis is not fast enough to evaluate all contingencies for real-time operations. Therefore, real-time contingency analysis (RTCA) only evaluates a subset of the contingencies (called the contingency list), and hence might miss critical contingencies that lead to cascading failures. This dissertation proposes a new graph-theoretic approach, called the feasibility test (FT) algorithm, for analyzing whether a contingency will create a saturated or overloaded cut-set in a meshed power network; a cut-set denotes a set of lines which, if tripped, separates the network into two disjoint islands. A novel feature of the proposed approach is that it lowers the solution time significantly, making the approach viable for an exhaustive real-time evaluation of the system. Detecting saturated cut-sets in the power system is important because they represent the vulnerable bottlenecks in the network. The robustness of the FT algorithm is demonstrated on a 17,000+ bus model of the Western Interconnection (WI). Following the detection of post-contingency cut-set saturation, a two-component methodology is proposed to enhance the reliability of large power systems during a series of outages. The first component combines the proposed FT algorithm with RTCA to create an integrated corrective action (iCA), whose goal is to secure the power system against post-contingency cut-set saturation as well as critical branch overloads. The second component employs only the results of the FT to create a relaxed corrective action (rCA) that quickly secures the system against saturated cut-sets. The first component is more comprehensive than the second, but the latter is computationally more efficient. The effectiveness of the two components is evaluated based upon the number of cascade-triggering contingencies alleviated and the computation time. Analysis of different case studies on the IEEE 118-bus and 2000-bus synthetic Texas systems indicates that the proposed two-component methodology enhances the scope and speed of power system security assessment during multiple outages.
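The cut-set saturation idea can be illustrated with a standard max-flow/min-cut computation: after removing a contingency's branches, if the minimum cut between two regions cannot carry the required transfer, the contingency has created a saturated cut-set. The sketch below is not the dissertation's FT algorithm (which is a much faster graph-theoretic test designed for exhaustive real-time screening); the network, limits, and bus names are hypothetical:

```python
import networkx as nx

def creates_saturated_cutset(branches, outage, source, sink, required_mw):
    """Post-contingency min-cut check between a source and sink bus."""
    G = nx.DiGraph()
    for u, v, limit_mw in branches:
        if (u, v) in outage or (v, u) in outage:
            continue  # branch tripped by the contingency
        G.add_edge(u, v, capacity=limit_mw)  # model lines as bidirectional
        G.add_edge(v, u, capacity=limit_mw)
    cut_value, _ = nx.minimum_cut(G, source, sink)
    return cut_value <= required_mw

# Hypothetical 4-bus network: (from, to, thermal limit in MW)
branches = [("A", "B", 100), ("A", "C", 80), ("B", "D", 120), ("C", "D", 60)]
# Losing line A-B leaves at most 60 MW deliverable from A to D,
# so a 90 MW transfer saturates the remaining cut-set.
print(creates_saturated_cutset(branches, {("A", "B")}, "A", "D", 90))  # True
```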
Contributors: Sen Biswas, Reetam (Author) / Pal, Anamitra (Thesis advisor) / Vittal, Vijay (Committee member) / Undrill, John (Committee member) / Wu, Meng (Committee member) / Zhang, Yingchen (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In human-autonomy teams (HATs), the human needs to interact with one or more autonomous agents, and this new type of interaction differs from existing human-to-human interaction. Next Generation Combat Vehicles (NGCVs), which are envisioned for the U.S. military, are associated with the concept of HAT. As NGCVs are in an early stage of development, it is necessary to develop training methods and measures for team effectiveness. The way team members communicate and the complexity of the task are factors affecting team efficiency. This study analyzes the impact of two interaction strategies and task complexity on team situation awareness among 22 different teams. Teams were randomly assigned different interaction conditions and went through two missions to finish their assigned tasks. Results indicate that teams using the procedural interaction strategy had better team situation awareness according to the Coordinated Awareness of the Situation by Teams (CAST) scores on the artillery calls. However, no difference between the strategies was found on CAST scores for perturbations, map accuracy, or Situation Awareness Global Assessment Technique (SAGAT) scores, and no effect of task complexity on team situation awareness was found. Implications and suggestions for future work are discussed.
Contributors: Kim, Jimin (Author) / Gutzwiller, Robert (Thesis advisor) / Cooke, Nancy (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Two fatigue life prediction methods using the energy-based approach are proposed. A number of approaches have been developed over the past five decades. This study reviews some common models and discusses which model is most suitable for each condition, whether the model is designed to handle uniaxial, multiaxial, or biaxial loading paths in fatigue prediction. In addition, different loading cases, such as variable loading and constant loading, are also discussed. Most existing models are suitable for only one or two conditions in fatigue prediction, whereas the proposed new energy-based approach not only can deal with different loading paths but is also applicable to various loading cases. The first energy-based model uses the linear cumulative rule to handle random loading cases; it is developed by combining Miner's rule with the rainflow-counting algorithm. For the second energy-based method, I propose an alternative approach that avoids the rainflow-counting algorithm by directly using the time-integration concept. In this study, first, the equivalent energy concept, which transforms three-dimensional loading into an equivalent loading, is discussed. Second, a new damage propagation method modified by fatigue crack growth is introduced to deal with cycle-based fatigue prediction. Third, the time-based concept is implemented to determine the fatigue damage under every cycle in the random loading case. The formulation is explained in detail. Through this new model, the fatigue life can be calculated properly in different loading cases. In addition, the proposed model is verified with experimental datasets from several published studies, including both uniaxial and multiaxial loading paths under constant and random loading. Finally, a discussion and conclusions based on the results are included. Additional loading cases, such as spectra including both elastic and plastic regions, will be explored in future research.
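The linear cumulative rule underlying the first method is Miner's rule: each load level contributes damage equal to its applied cycle count divided by the cycles-to-failure at that level, and failure is predicted when the total reaches one. A minimal sketch in terms of stress amplitudes (the thesis applies the rule to energy quantities, and the S-N constants below are purely illustrative):

```python
def cycles_to_failure(stress_amplitude, A=1e12, b=-3.0):
    """Basquin-type S-N curve, N = A * S**b (constants are illustrative)."""
    return A * stress_amplitude ** b

def miner_damage(blocks):
    """blocks: iterable of (stress_amplitude, applied_cycles) pairs, e.g.
    as produced by rainflow counting of a random load history."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

# Hypothetical load spectrum: (amplitude in MPa, applied cycles)
spectrum = [(200.0, 1e4), (150.0, 5e4), (100.0, 2e5)]
print(f"Accumulated damage D = {miner_damage(spectrum):.3f}")  # failure when D >= 1
```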
Contributors: Tien, Shih-Chuan (Author) / Liu, Yongming (Thesis advisor) / Nian, Qiong (Committee member) / Jiao, Yang (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The Inverted Pendulum on a Cart is a classical control theory problem that helps in understanding the importance of feedback control systems for a coupled plant. In this study, a custom-built pendulum is coupled with a linearly actuated cart and a control system is designed to demonstrate the stability of the pendulum. The three major objectives of this control system are to swing up the pendulum, balance the pendulum in the inverted position (i.e., 180°), and maintain the position of the cart. The input to this system is the translational force applied to the cart through the rotation of the tires. The main objective of this thesis is to design a control system that balances the pendulum while maintaining the position of the cart, and to implement it in a robot. The pendulum is made free-rotating with the help of ball bearings, and the angle of the pendulum is measured using an Inertial Measurement Unit (IMU) sensor. The cart is actuated by two Direct Current (DC) motors, and the position of the cart is measured using encoders that generate pulse signals based on wheel rotation. The control is implemented in a cascade format, where an inner-loop controller stabilizes and balances the pendulum in the inverted position and an outer-loop controller controls the position of the cart. Both the inner-loop and outer-loop controllers follow the Proportional-Integral-Derivative (PID) control scheme, with some modifications for the inner loop. The system is first modeled mathematically using the Newton-Euler first-principles method, and based on this model a controller is designed for specific closed-loop parameters. All of this is implemented on hardware with the help of an Arduino Due microcontroller, which serves as the main processing unit for the system.
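The cascade structure can be sketched as two discrete PID loops, with the outer (cart-position) loop producing the angle reference tracked by the inner (pendulum-balance) loop. This is an illustrative skeleton with hypothetical gains and loop rate, not the thesis's tuned controller:

```python
class PID:
    """Discrete PID controller with a fixed sample time."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.005  # hypothetical 200 Hz control loop
outer = PID(kp=0.5, ki=0.0, kd=0.2, dt=dt)    # cart position -> angle reference
inner = PID(kp=40.0, ki=1.0, kd=2.0, dt=dt)   # angle error -> motor force

def control_step(cart_position, pendulum_angle, cart_setpoint=0.0):
    """One cascade iteration: the outer loop nudges the angle reference (near
    zero in the inverted frame) so the cart drifts back to its setpoint; the
    inner loop balances the pendulum at that reference."""
    angle_ref = outer.update(cart_setpoint - cart_position)
    return inner.update(angle_ref - pendulum_angle)
```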
Contributors: Namasivayam, Vignesh (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Shafique, Md. Ashfaque Bin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Collision-free path planning is a major challenge in managing fleets of unmanned aerial vehicles (UAVs), especially in uncertain environments. This work considers the design of UAV routing policies using multi-agent reinforcement learning and proposes a Multi-resolution, Multi-agent, Mean-field reinforcement learning algorithm, named 3M-RL, for flight planning, where multiple vehicles need to avoid collisions with each other while moving towards their destinations. In this system, each UAV makes decisions based on local observations and does not communicate with other UAVs. The algorithm trains a routing policy using an Actor-Critic neural network with multi-resolution observations, including detailed local information and aggregated global information based on mean fields. The algorithm tackles the curse-of-dimensionality problem in multi-agent reinforcement learning and provides a scalable solution. The proposed algorithm is tested in different complex scenarios in both 2D and 3D space, and the simulation results show that 3M-RL produces good routing policies. As a complement, dynamic data communication between UAVs and a control center is also studied, where the control center needs to monitor the safety state of each UAV in real time and the transition of risk levels is modeled as a Markov process. Given limited communication bandwidth, it is impossible for the control center to communicate with all UAVs at the same time. A dynamic learning problem with limited communication bandwidth is therefore also discussed, where the objective is to minimize the total information entropy in real-time risk-level tracking. The simulations also demonstrate that the algorithm outperforms baseline policies such as Round-Robin.
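The bandwidth-limited tracking problem in the second half of the abstract admits a simple entropy-greedy baseline: propagate each UAV's risk-level belief through the Markov transition matrix every step, then spend the single communication slot on the UAV whose belief is currently most uncertain (in contrast to a Round-Robin schedule, which ignores uncertainty). The sketch below is such a baseline, not the paper's algorithm; the transition matrix and fleet size are hypothetical:

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],   # hypothetical 3-level risk transition matrix
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

n_uavs, n_levels = 5, 3
beliefs = np.full((n_uavs, n_levels), 1.0 / n_levels)  # start uninformed
true_state = np.zeros(n_uavs, dtype=int)
rng = np.random.default_rng(0)

for t in range(50):
    # Risk levels evolve as independent Markov chains; beliefs are propagated too.
    for i in range(n_uavs):
        true_state[i] = rng.choice(n_levels, p=P[true_state[i]])
    beliefs = beliefs @ P
    # One slot of bandwidth: poll the UAV we are most uncertain about.
    polled = int(np.argmax([entropy(b) for b in beliefs]))
    beliefs[polled] = np.eye(n_levels)[true_state[polled]]  # observation collapses belief
    print(t, polled, round(sum(entropy(b) for b in beliefs), 3))
```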
Contributors: Wang, Weichang (Author) / Ying, Lei (Thesis advisor) / Liu, Yongming (Thesis advisor) / Zhang, Junshan (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2021