Matching Items (2,227)
Description
A key contribution of human factors engineering is the concept of workload: a construct that represents the relationship between an operator’s cognitive resources, the demands of their task, and performance. Understanding workload can lead to improvements in safety and performance for people working in critical environments, particularly within action teams. Recently, there has been interest in considering how the workload of a team as a whole may differ from that of an individual, prompting investigation into team workload as a distinct team-level construct. In empirical research, team-level workload is often treated as the sum or average of individual team members' workloads. However, the intrinsic characteristics of action teams, such as interdependence and heterogeneity, challenge this assumption, and traditional methods of measuring team workload may be unsuitable. This dissertation delves into this issue with a review of empirical work in action teams, pinpointing several gaps. Next, the development of a testbed is described; the testbed is then used to address two pressing gaps concerning the impact of interdependence and how team communications relate to team workload states and performance. An experiment was conducted with forty 3-person teams collaborating in an action team task. Results of this experiment suggest that the traditional way of measuring workload in action teams, via subjective questionnaires averaged at the team level, has major shortcomings, particularly when demands are elevated and action teams are highly interdependent. The results also suggest that several communication measures are associated with increases in demands, laying the groundwork for team-level communication-based measures of team workload. These results are synthesized with findings from the literature to provide a way forward for conceptualizing and measuring team workload in action teams.
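As a simple illustration of the measurement concern raised above, consider two teams whose averaged subjective workload scores are identical; a minimal sketch (the NASA-TLX-style scores are invented, not the dissertation's data):

```python
# Sketch: a team-level average can mask very different member-level
# workload profiles. Scores are invented, on a 0-100 TLX-like scale.
from statistics import mean, stdev

team_a = [50, 50, 50]   # demands spread evenly across members
team_b = [90, 40, 20]   # one member is heavily overloaded

for name, scores in (("A", team_a), ("B", team_b)):
    print(f"Team {name}: mean = {mean(scores):.0f}, spread = {stdev(scores):.1f}")
# Both means are 50, yet in a highly interdependent task Team B's
# overloaded member may bottleneck the whole team's performance.
```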
Contributors: Johnson, Craig Jonathon (Author) / Cooke, Nancy J (Thesis advisor) / Gutzwiller, Robert S (Committee member) / Holder, Eric (Committee member) / Amazeen, Polemnia G (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Air conditioning is a significant energy consumer in buildings, especially in humid regions where a substantial portion of energy is used to remove moisture rather than cool the air. Traditional dehumidification methods, which cool air to its dew point to condense water vapor, are energy intensive: the air is unnecessarily overcooled, only to be reheated to the desired temperature. This research introduces thermoresponsive materials as efficient desiccants to reduce the energy demand for dehumidification. A system using lower critical solution temperature (LCST) type ionic liquids (ILs) as dehumidifiers is presented. Through the Flory-Huggins theory of mixtures, interactions between ionic liquids and water are analyzed. LCST ionic liquids demonstrate superior performance, with a coefficient of performance (COP) four times higher than non-thermoresponsive desiccants under similar conditions. The efficacy of ionic liquids as dehumidifiers is assessed based on properties such as the LCST temperature and the enthalpic interaction parameter. The research also delves into thermoresponsive solid desiccants, particularly polymers, using the Vrentas-Vrentas model, which offers a more accurate depiction of their behavior than the Flory-Huggins theory by accounting for elastic energy stored in the polymers. Moisture absorption in thin-film polymers is studied under diverse conditions, producing absorption isotherms for various temperatures and humidities. Using temperature-dependent interaction parameters, the behavior of the widely used thermoresponsive polymer (TRP) PNIPAAm and of hypothetical TRPs is investigated. The parameters from the model serve as input to a finite element analysis of a thermoresponsive dehumidifier. This model demonstrates the complete absorption-desorption cycle under varied conditions such as polymer absorption temperature, relative humidity, and air speed. Results indicate that a TRP with enhanced absorption capacity and an LCST of 50℃ achieves a peak moisture removal efficiency (MRE) of 0.9 at 75% relative humidity, comparable to other existing thermoresponsive dehumidification systems, while TRPs with even greater absorption capacity can reach an MRE as high as 3.6. This system also uniquely recovers water in liquid form.
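For reference, the Flory-Huggins free energy of mixing that the liquid-desiccant analysis builds on has the standard form below; the linear temperature dependence of χ shown here is one common way to capture LCST behavior, not necessarily the dissertation's exact parameterization:

```latex
% Flory-Huggins free energy of mixing per lattice site: \phi_i are volume
% fractions, N_i degrees of polymerization (N = 1 for small molecules),
% and \chi the interaction parameter. A \chi that grows with temperature
% (B > 0, an assumed form) makes mixing unfavorable on heating -- LCST.
\[
  \frac{\Delta G_{\mathrm{mix}}}{k_{B}T}
    = \frac{\phi_{1}}{N_{1}}\ln\phi_{1}
    + \frac{\phi_{2}}{N_{2}}\ln\phi_{2}
    + \chi(T)\,\phi_{1}\phi_{2},
  \qquad
  \chi(T) \approx A + B\,T .
\]
```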
Contributors: Rana, Ashish (Author) / Wang, Robert RW (Thesis advisor) / Green, Matthew MG (Committee member) / Milcarek, Ryan RM (Committee member) / Wang, Liping LW (Committee member) / Phelan, Patrick PP (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Advancements in three-dimensional (3D) additive manufacturing techniques have opened up new possibilities for healthcare systems and the medical industry, allowing for the realization of concepts that were once confined to theoretical discussions. Among these groundbreaking research endeavors is the development of intricate magnetic structures that can be actuated through non-invasive methods, including electromagnetic and magnetic actuation. Magnetic actuation, in particular, offers the advantage of untethered operation. In this study, a photopolymerizable resin infused with Fe3O4 nanoparticles is employed in the printing process using the micro-continuous liquid interface production technique. The objective is to optimize the manufacturing process to produce microstructures featuring smooth surfaces, reduced surface porosity, and enhanced flexibility and magnetic actuation. Various intricate structures are fabricated to validate the printing process's capabilities. Furthermore, the flexibility of these 3D-printed structures is assessed in the presence of an external magnetic field using a custom-built bending test setup, allowing for a comprehensive characterization of these components. This research serves as a foundation for the future design and development of micro-robots using the micro-continuous liquid interface production technique.
Contributors: Jha, Ujjawal (Author) / Chen, Xiangfan (Thesis advisor) / Li, Xiangjia (Committee member) / Jin, Kailong (Committee member) / Nian, Qiong (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Instruction tuning of language models has demonstrated the ability to enhance model generalization to unseen tasks via in-context learning using a few examples. However, typical supervised learning still requires a plethora of training data for downstream or “Held-In” tasks. In real-world situations, there is often a scarcity of data available for finetuning, falling somewhere between few-shot inference and fully supervised finetuning. In this work, I demonstrate the sample efficiency of instruction tuned models over various tasks by estimating the minimal training data required by downstream or “Held-In” tasks to perform transfer learning and match the performance of state-of-the-art (SOTA) supervised models. I conduct experiments on 119 tasks from Super Natural Instructions (SuperNI) in both the single-task learning / expert modelling (STL) and multi-task learning (MTL) settings. My findings reveal that, in the STL setting, instruction tuned models equipped with 25% of the downstream train data surpass SOTA performance on the downstream tasks. In the MTL setting, an instruction tuned model trained on only 6% of downstream training data achieves SOTA, while using 100% of the training data yields a 3.69-point improvement (ROUGE-L 74.68) over the previous SOTA. I conduct an analysis of T5 vs. Tk-Instruct by developing several baselines to demonstrate that instruction tuning aids both sample efficiency and transfer learning. Additionally, I observe a consistent ~4% performance increase in both settings when pre-finetuning is performed with instructions. Finally, I conduct a categorical study and find that, contrary to previous results, tasks in the question rewriting and title generation categories suffer from instruction tuning.
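Since the MTL result above is reported in ROUGE-L, a minimal sketch of that metric may help; this version uses naive whitespace tokenization and plain F1 weighting, whereas published SuperNI scores come from standard ROUGE implementations:

```python
# ROUGE-L sketch: F-measure over the longest common subsequence (LCS)
# of candidate and reference tokens. Whitespace tokenization and equal
# precision/recall weighting are simplifying assumptions.
def lcs_length(a, b):
    """Dynamic-programming LCS length over two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate: str, reference: str) -> float:
    c, r = candidate.split(), reference.split()
    if not c or not r:
        return 0.0
    lcs = lcs_length(c, r)
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(rouge_l("the cat sat on the mat", "the cat is on the mat"))  # ~0.833
```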
Contributors: Gupta, Himanshu (Author) / Baral, Chitta Dr (Thesis advisor) / Mitra, Arindam Dr (Committee member) / Gopalan, Nakul Dr (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The evolution of technology, including the proliferation of the Internet of Things (IoT), advanced sensors, intelligent systems, and more, has paved the way for smart homes. These homes usher in a new era of automation with interconnected devices that offer expanded services, but they also introduce challenges in data security and device management. Current smart home technologies are susceptible to security violations, leaving users vulnerable to data compromise, privacy invasions, and physical risks. These systems often fall short in implementing stringent data security safeguards, and the user control process is complex. In this thesis, an approach is presented to improve smart home security by integrating private blockchain technology with situational-awareness access control. Blockchain technology ensures transparency and immutability in data transactions: transparency enables meticulous tracking of data access, modifications, and policy changes, while immutability strengthens data integrity, deterring and preventing unauthorized alterations. While the designed solution leverages these specific blockchain features, it consciously does not employ blockchain's decentralization, owing to the limited computational resources of IoT devices and the focused requirement for centralized management within a smart home context. Additionally, situational awareness facilitates the dynamic adaptation of access policies. The strategies in this thesis surpass existing solutions, providing fine-grained access control, reliable transaction data storage, data ownership, auditability, transparency, adaptive access policies, and immutability. This approach is thoroughly evaluated against existing smart home security improvement solutions.
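A minimal sketch of the hash chaining that gives a private blockchain its tamper evidence appears below; the block layout and the smart-home events are illustrative assumptions, not the thesis's implementation:

```python
# Hash-chained access log sketch: each block commits to its predecessor's
# hash, so any retroactive edit invalidates every later block. Field names
# and events are illustrative; access-policy logic is omitted.
import hashlib
import json
import time

def block_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, event: dict) -> None:
    body = {"index": len(chain), "time": time.time(), "event": event,
            "prev_hash": chain[-1]["hash"] if chain else "0" * 64}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False                      # block contents were altered
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # chain linkage was broken
    return True

chain = []
append_block(chain, {"device": "front_door_lock", "subject": "resident_1",
                     "action": "unlock", "decision": "allow"})
append_block(chain, {"device": "camera_2", "subject": "guest_3",
                     "action": "read_stream", "decision": "deny"})
print(verify(chain))                          # True
chain[0]["event"]["decision"] = "deny"        # attempt to rewrite history
print(verify(chain))                          # False
```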
Contributors: Lin, Zhicheng (Author) / Yau, Stephen S. (Thesis advisor) / Baek, Jaejong (Committee member) / Ghayekhloo, Samira (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The advancement of cloud technology has impacted society positively in a number of ways, but it has also led to an increase in threats that target private information available on cloud systems. Intrusion prevention systems play a crucial role in protecting cloud systems from such threats. In this thesis, an intrusion prevention approach to detect and prevent such threats in real time is proposed. This approach is designed for network-based intrusion prevention systems and leverages supervised machine learning, with Extreme Gradient Boosting (XGBoost) and Long Short-Term Memory (LSTM) algorithms, to analyze the flow of each packet sent to a cloud system through the network. The innovations of this thesis include developing a custom LSTM architecture, using this architecture to train an LSTM model to identify attacks, and using TCP reset functionality to prevent attacks on cloud systems. The aim of this thesis is to provide a framework for an intrusion prevention system. Based on simulations and experimental results with the NF-UQ-NIDS-v2 dataset, the proposed system is accurate, fast, and scalable, with a low rate of false positives, making it suitable for real-world applications.
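A minimal sketch of an LSTM-based flow classifier of the kind described is given below; the layer sizes, feature count, and two-class head are placeholder assumptions, and neither the thesis's custom architecture nor the XGBoost stage is reproduced:

```python
# Sketch: LSTM over sequences of per-flow feature vectors, classifying
# each sequence as benign or attack. All sizes are placeholders; a real
# deployment would trigger a TCP reset when an attack is predicted.
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    def __init__(self, n_features: int = 39, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)        # benign vs. attack

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                   # x: (batch, timesteps, features)
        return self.head(out[:, -1, :])         # classify from the last timestep

model = FlowLSTM()
flows = torch.randn(8, 20, 39)                  # 8 sequences of 20 flow records
verdict = model(flows).argmax(dim=1)            # 1 -> attack: send TCP RST
print(verdict.shape)                            # torch.Size([8])
```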
Contributors: Gianchandani, Siddharth (Author) / Yau, Stephen (Thesis advisor) / Zhao, Ming (Committee member) / Lee, Kookjin (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Ethylene is one of the most widely used organic compounds worldwide, with ever-increasing demand. Almost all industries currently producing ethylene globally use steam cracking, which, though highly selective and cost-effective, is energy intensive and has a high carbon footprint. This study analyzes micro-scale partial oxidation of propane as a novel approach to ethylene generation that is simpler, consumes less energy, operates at lower temperatures, and minimizes CO2 emissions. The experimental study endeavors to maximize ethylene production by investigating the effects of variables such as temperature, flow rate, equivalence ratio, and reactor diameter. The micro-scale partial oxidation of propane is studied inside quartz tube reactors of 1 mm and 3 mm diameter at temperatures of 800 to 900 °C, flow rates of 10 to 100 sccm, and equivalence ratios of 1 to 6. The study reveals that ethylene yield depends strongly on all of these factors; however, the factors are not completely independent of each other. Certain factor-level combinations produce ethylene yields as high as 10%, but propane-to-ethylene conversion efficiency is approximately constant across most conditions. Low CO2 concentrations are also recorded for most factor-level combinations, indicating the potential to achieve lower CO2 yields than conventional approaches. The investigation indicates promise for application in the field of ethylene generation.
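For reference, the equivalence ratio that characterizes these fuel-rich mixtures can be computed as below; the sketch assumes pure O2 as the oxidizer (for air, the O2 flow is the air flow times its O2 fraction), and the flow values are illustrative:

```python
# Equivalence ratio sketch for propane partial oxidation. Complete
# combustion is C3H8 + 5 O2 -> 3 CO2 + 4 H2O, so the stoichiometric
# fuel:O2 molar ratio is 1:5; phi > 1 marks fuel-rich operation.
STOICH_FUEL_TO_O2 = 1.0 / 5.0  # propane

def equivalence_ratio(fuel_sccm: float, o2_sccm: float) -> float:
    """phi = (fuel/O2)_actual / (fuel/O2)_stoichiometric."""
    return (fuel_sccm / o2_sccm) / STOICH_FUEL_TO_O2

# Illustrative rich mixture: 40 sccm propane with 50 sccm O2 gives phi = 4.
print(equivalence_ratio(40, 50))  # 4.0
```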
Contributors: Mahalkar, Pawan Mukund (Author) / Milcarek, Ryan (Thesis advisor) / Kwon, Beomjin (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Pan-tilt traffic cameras (PTTC) are a vital component of traffic management systems for monitoring and surveillance. In a real-world scenario, if a vehicle is in pursuit of another vehicle or an accident has occurred at an intersection causing traffic stoppages, accurate and reliable data from PTTC are necessary to quickly localize the cars on a map for adept emergency response, especially as more and more traffic systems are automated using machine learning. However, the position (orientation) of a PTTC with respect to its environment is often unknown, as most units lack inertial measurement units or encoders. Current state-of-the-art systems (1) demand high-performance compute and use carbon-footprint-heavy deep neural networks (DNNs), (2) are only applicable to scenarios with appropriate lane markings or roundabouts, and (3) require complex mathematical computations to determine focal length and optical center before the pose can be estimated. A compute-light approach, "TIPANGLE", is presented in this work. The approach uses Siamese neural networks (SNNs) built on simple mathematical functions, namely Euclidean distance and contrastive loss, to achieve this objective. The effectiveness of the approach is established through a thorough comparison with alternative approaches and by executing the approach on an embedded system, a Raspberry Pi 3.
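A minimal sketch of the contrastive loss over Euclidean distance that trains an SNN is shown below (the Hadsell-style formulation; margin, batch size, and embedding width are placeholders):

```python
# Contrastive loss sketch: pull matching pairs together (label = 1) and
# push non-matching pairs (label = 0) at least `margin` apart in
# Euclidean distance. Sizes and the margin value are placeholders.
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, label, margin: float = 1.0):
    d = F.pairwise_distance(emb_a, emb_b)            # Euclidean distance
    pos = label * d.pow(2)                           # matching pairs: shrink d
    neg = (1 - label) * F.relu(margin - d).pow(2)    # others: enforce margin
    return (pos + neg).mean()

# Embeddings from the two twin branches of the SNN (random stand-ins here).
a, b = torch.randn(16, 128), torch.randn(16, 128)
y = torch.randint(0, 2, (16,)).float()
print(contrastive_loss(a, b, y))
```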
Contributors: Jagadeesha, Shreehari (Author) / Shrivastava, Aviral (Thesis advisor) / Gopalan, Nakul (Committee member) / Arora, Aman (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This dissertation is an examination of collective systems of computationally limited agents that require coordination to achieve complex ensemble behaviors or goals. The design of coordination strategies can be framed as multiagent optimization problems, which are addressed in this work from both theoretical and practical perspectives. The primary foci of this study are models where computation is distributed over the agents themselves, which are assumed to possess onboard computational capabilities. There exist many assumption variants for distributed models, including fairness and concurrency properties. In general, there is a fundamental trade-off whereby weakening model assumptions increases the applicability of proposed solutions, while also increasing the difficulty of proving theoretical guarantees. This dissertation aims to produce a deeper understanding of this trade-off with respect to multiagent optimization and scalability in distributed settings. This study considers four multiagent optimization problems. The model assumptions begin with fully centralized computation for the all-or-nothing multicommodity flow problem, then progress to synchronous distributed models through examination of the unmapped multivehicle routing problem and the distributed target localization problem. The final model is again distributed but assumes an unfair asynchronous adversary in the context of the energy distribution problem for programmable matter. For these problems, a variety of algorithms are presented, each of which is grounded in a theoretical foundation that permits formal guarantees regarding correctness, running time, and other critical properties. These guarantees are then validated with in silico simulations and (in some cases) physical experiments, demonstrating empirically that they may carry over to the real world. Hence, this dissertation bridges a portion of the predictability-practicality gap with respect to multiagent optimization problems.
Contributors: Weber, Jamison Wayne (Author) / Richa, Andréa W (Thesis advisor) / Bertsekas, Dimitri P (Committee member) / Murphey, Todd D (Committee member) / Jiang, Zilin (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Because the internet is still in its infancy, there is no consensus on the policy approaches that various countries have taken. These policies range from strict government control to liberal access to the internet, which makes protecting individuals' private data difficult. There are too many loopholes, and policies on how to protect data take many forms. Effort is required from individuals, governments, and private entities alike, using mixed theoretical methods, for people to protect themselves properly online.
Contributors: Peralta, Christina A (Author) / Scheall, Scott (Thesis advisor) / Hollinger, Keith (Thesis advisor) / Alozie, Nicholas (Committee member) / Arizona State University (Publisher)
Created: 2023