Matching Items (58)
Description

In an effort to address the lack of literature on on-campus active travel, this study aims to investigate the following primary questions:
• What are the modes that students use to travel on campus?
• What are the motivations that underlie the mode choice of students on campus?
My first stage of research involved a series of qualitative investigations. I held one-on-one virtual interviews with students in which I asked them about the mode they use and why they feel that their chosen mode works best for them. These interviews served two functions. First, they provided insight into the various motivations underlying student mode choice. Second, they indicated which explanatory variables should be included in a model of mode choice on campus.
The first half of the research project informed a quantitative survey, released via the Honors Digest to attract student respondents, that gathered data on travel behavior as well as relevant explanatory variables.
My analysis involved developing a logit model to predict student mode choice on campus and presenting the model estimation in conjunction with a discussion of student travel motivations based on the qualitative interviews. I use this information to make a recommendation on how campus infrastructure could be modified to better support the needs of the student population.
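For readers unfamiliar with the model class named above, a minimal multinomial logit sketch (an illustrative specification, not necessarily the exact model estimated in this thesis) gives the probability that student n chooses mode i from a utility that is linear in the explanatory variables:

% Illustrative multinomial logit; x_{ni} collects the explanatory variables for
% student n and mode i, and beta is the vector of estimated coefficients.
P_n(i) = \frac{\exp(V_{ni})}{\sum_{j \in \text{modes}} \exp(V_{nj})},
\qquad V_{ni} = \beta^\top x_{ni}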

Contributors: Mirtich, Laura Christine (Author) / Salon, Deborah (Thesis director) / Fang, Kevin (Committee member) / School of Public Affairs (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Spaceflight and spaceflight analogue culture enhance the virulence and pathogenesis-related stress resistance of the foodborne pathogen Salmonella enterica serovar Typhimurium (S. Typhimurium). This is an alarming finding, as it suggests that astronauts may have an increased risk of infection during spaceflight. This risk is further exacerbated as multiple studies indicate that spaceflight negatively impacts aspects of the immune system. In order to ensure astronaut safety during long-term missions, it is important to study the phenotypic effects of the microgravity environment on a range of medically important microbial pathogens that might be encountered by the crew. This ground-based study uses the NASA-engineered Rotating Wall Vessel (RWV) bioreactor as a spaceflight analogue culture system to grow bacteria under low fluid shear forces relevant to those encountered in microgravity and, interestingly, in the intestinal tract during infection. The culture environment in the RWV is commonly referred to as low shear modeled microgravity (LSMMG). In this study, we characterized the stationary phase stress response of the enteric pathogen Salmonella enterica serovar Enteritidis (S. Enteritidis) to LSMMG culture. We showed that LSMMG enhanced the resistance of stationary phase cultures of S. Enteritidis to acid and thermal stressors, which differed from the LSMMG stationary phase response of the closely related pathovar, S. Typhimurium. Interestingly, LSMMG increased the ability of both S. Enteritidis and S. Typhimurium to adhere to, invade into, and survive within an in vitro 3-D intestinal co-culture model containing immune cells. Our results indicate that LSMMG regulates pathogenesis-related characteristics of S. Enteritidis in ways that may present an increased health risk to astronauts during spaceflight missions.
Contributors: Koroli, Sara (Author) / Nickerson, Cheryl (Thesis director) / Barrila, Jennifer (Committee member) / Ott, C. Mark (Committee member) / School of Life Sciences (Contributor) / School of Molecular Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Many researchers aspire to create robotic systems that assist humans in common office tasks, especially by taking over delivery and messaging tasks. For meaningful interactions to take place, a mobile robot must be able to identify the humans it interacts with and communicate successfully with them. It must also be able to navigate the office environment successfully. While mobile robots are well suited to navigating and interacting with elements of a deterministic office environment, attempting to interact with human beings in an office environment remains a challenge due to the limits on cost-efficient compute power onboard the robot. In this work, I propose the use of remote cloud services to offload intensive interaction tasks. I detail the interactions required in an office environment and discuss the challenges faced when implementing a human-robot interaction platform in a stochastic office environment. I also experiment with cloud services for facial recognition, speech recognition, and environment navigation and discuss my results. As part of my thesis, I have implemented a human-robot interaction system on a mobile robot utilizing cloud APIs, enabling it to navigate the office environment, identify humans within the environment, and communicate with those humans.
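As a hedged illustration of the offloading pattern described above (the endpoint, response schema, and field names here are hypothetical, not the cloud APIs used in the thesis), the robot can send a captured camera frame to a remote face-recognition service and act on the returned identity:

import requests

# Hypothetical cloud endpoint; the actual service used in the thesis is not specified here.
FACE_API_URL = "https://example-cloud.invalid/v1/face/identify"

def identify_person(image_path: str, api_key: str) -> str | None:
    """Offload face recognition to a remote service and return the matched name, if any."""
    with open(image_path, "rb") as f:
        response = requests.post(
            FACE_API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=10,  # fail fast so the robot can fall back to onboard behavior
        )
    response.raise_for_status()
    # Assumed response format: {"matches": [{"name": ..., "confidence": ...}, ...]}
    matches = response.json().get("matches", [])
    return matches[0]["name"] if matches else None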
Created: 2017-05
Description
Traditional Reinforcement Learning (RL) learns policies with respect to the reward available from the environment, but learning in a complex domain sometimes requires wisdom that comes from a wide range of experience. In behavior-based robotics, it is observed that a complex behavior can be described by a combination of simpler behaviors. It is tempting to apply a similar idea here, combining simpler behaviors in a meaningful way to tailor a more complex one. Such an approach would enable faster learning and modular design of behaviors. Complex behaviors can in turn be combined with other behaviors to create even more advanced behaviors, resulting in a rich set of possibilities. As in RL, the combined behavior can keep evolving by interacting with the environment. The requirement of this method is to specify a reasonable set of simple behaviors. In this research, I present an algorithm that combines behaviors such that the resulting behavior retains characteristics of each individual behavior. This approach is inspired by behavior-based robotics, such as the subsumption architecture and motor schema-based design. The combination algorithm outputs n weights to combine behaviors linearly. The weights are state dependent and change dynamically at every step in an episode. This idea is tested on discrete and continuous environments such as OpenAI Gym's "Lunar Lander" and "Bipedal Walker". Results are compared with related approaches such as multi-objective RL, hierarchical RL, transfer learning, and basic RL. Combining behaviors in this way is a novel form of learning that helps the agent achieve the required characteristics. Because a combination is learned for each state, the agent learns faster and more efficiently than comparable approaches, and it clearly demonstrates characteristics of multiple behaviors while adapting to the environment. Future directions are also suggested as possible extensions to this research.
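A minimal sketch of the state-dependent linear combination described above (names, normalization, and the stand-in weight network are illustrative assumptions, not the thesis implementation): a learned function maps the state to n weights, which mix the actions proposed by n fixed base behaviors.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def combined_action(state, behaviors, weight_fn):
    """Linearly combine the actions of simple behaviors using state-dependent weights.

    behaviors: list of callables, each mapping a state to an action vector.
    weight_fn: callable mapping a state to n unnormalized scores (e.g., a learned network).
    """
    weights = softmax(weight_fn(state))            # n weights, one per behavior, summing to 1
    actions = np.stack([b(state) for b in behaviors])
    return weights @ actions                       # weighted sum of behavior actions

# Illustrative usage with two hand-coded behaviors and a stand-in for a trained weight network.
hover = lambda s: np.array([0.0, 1.0])
drift_left = lambda s: np.array([-1.0, 0.0])
weight_net = lambda s: np.array([0.7, 0.3])
print(combined_action(np.zeros(8), [hover, drift_left], weight_net))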
Contributors: Vora, Kevin Jatin (Author) / Zhang, Yu (Thesis advisor) / Yang, Yezhou (Committee member) / Praharaj, Sarbeswar (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The retinotopic map, the mapping between visual inputs on the retina and neuronal activation in the brain's visual areas, is one of the central topics in visual neuroscience. For human observers, the map is typically obtained by analyzing functional magnetic resonance imaging (fMRI) signals of cortical responses to visual stimuli moving slowly across the retina. Biological evidence shows that retinotopic mapping is topology-preserving (topological) within each visual region, i.e., neighboring relationships on the retina are preserved after processing in the brain. Unfortunately, due to the limited spatial resolution and signal-to-noise ratio of fMRI, state-of-the-art retinotopic maps are not topological. The goal of this work was to model the topology-preserving condition mathematically, correct non-topological retinotopic maps with numerical methods, and thereby improve the quality of retinotopic maps. Imposing the topological condition benefits several applications: topological retinotopic maps offer better insight into human retinotopic organization, including more accurate quantification of the cortical magnification factor, more precise descriptions of retinotopic maps, and potentially better examination methods in the ophthalmology clinic.
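One common way to formalize the topology-preserving condition discussed above (an illustrative formulation; the dissertation may use a different but related criterion) is to require the map to be orientation-preserving, i.e., to have a positive Jacobian determinant everywhere in each visual region:

% Let f(x, y) = (u(x, y), v(x, y)) map cortical surface coordinates to visual-field coordinates.
% The map is topology-preserving on a region if its Jacobian determinant stays positive there:
\det J_f(x, y) \;=\;
\frac{\partial u}{\partial x}\frac{\partial v}{\partial y}
- \frac{\partial u}{\partial y}\frac{\partial v}{\partial x} \;>\; 0
\quad \text{for all } (x, y) \text{ in the region.}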
Contributors: Tu, Yanshuai (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Crook, Sharon (Committee member) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Multi-arm collaboration means controlling multiple robotic arms so that they work together on the same task. During the collaboration, the agent is required to avoid all possible collisions between the parts of the robotic arms. Thus, incentivizing collaboration and preventing collisions are the two principles the agent follows during training. More and more applications, both in industry and in daily life, require at least two arms rather than a single arm. A dual-arm robot can satisfy a much wider range of tasks, such as folding clothes at home, grilling a hamburger, or picking and placing products in a warehouse. The applications in this thesis all involve object pushing; the focus is on training the agent to push an object away as far as possible. Reinforcement Learning (RL), a type of Machine Learning (ML), is used to train the agent to generate optimal actions. Deep Deterministic Policy Gradient (DDPG) and Hindsight Experience Replay (HER) are the two RL methods used in this thesis.
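A minimal sketch of the Hindsight Experience Replay idea mentioned above (illustrative; the transition field names and the "final" relabeling strategy are assumptions, not the thesis code): failed pushing episodes are stored a second time with the goal replaced by a goal the arms actually achieved, so sparse rewards become informative.

def her_relabel(episode, compute_reward, strategy="final"):
    """Return extra transitions whose goals are replaced by achieved goals.

    episode: list of dicts with keys 'obs', 'action', 'next_obs', 'achieved_goal', 'goal'.
    compute_reward: callable (achieved_goal, goal) -> reward, e.g. 0 if close enough else -1.
    """
    relabeled = []
    final_goal = episode[-1]["achieved_goal"]   # where the object actually ended up
    for t in episode:
        new_goal = final_goal if strategy == "final" else t["achieved_goal"]
        relabeled.append({
            "obs": t["obs"],
            "action": t["action"],
            "next_obs": t["next_obs"],
            "goal": new_goal,
            "reward": compute_reward(t["achieved_goal"], new_goal),
        })
    return relabeled

# Both the original and the relabeled transitions would then be added to the DDPG replay buffer.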
Contributors: Lin, Steve (Author) / Ben Amor, Hani (Thesis advisor) / Redkar, Sangram (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Millimeter wave (mmWave) and massive multiple-input multiple-output (MIMO) systems are intrinsic components of 5G and beyond. These systems rely on beamforming codebooks for both initial access and data transmission. Current beam codebooks, however, are not optimized for the given deployment, which can sometimes incur noticeable performance loss. To address these problems, in this dissertation, three novel machine learning (ML) based frameworks for site-specific analog beam codebook design are proposed. In the first framework, two special neural network-based architectures are designed for learning environment- and hardware-aware beam codebooks through supervised and self-supervised learning, respectively. To avoid explicitly estimating the channels, in the second framework, a deep reinforcement learning-based architecture is developed. The proposed solution significantly relaxes the system requirements and is particularly interesting in scenarios where channel acquisition is challenging. Building upon it, in the third framework, a sample-efficient online reinforcement learning-based beam codebook design algorithm is developed that learns how to shape the beam patterns to null the interfering directions without requiring any coordination with the interferers. In the last part of the dissertation, the proposed beamforming framework is further extended to tackle the beam focusing problem in near-field wideband systems. Specifically, the developed solution can achieve beam focusing without knowing the user position and can account for unknown and non-uniform array geometry. All the frameworks are numerically evaluated, and the simulation results highlight their potential for learning site-specific codebooks that adapt to the deployment. Furthermore, a hardware proof-of-concept prototype based on mmWave phased arrays is built and used to evaluate the developed online beam learning solutions in realistic scenarios. The learned beam patterns, measured in an anechoic chamber, show the performance gains of the developed framework. All of this highlights a promising ML-based beam/codebook optimization direction for practical and hardware-constrained mmWave and terahertz systems.
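As a hedged illustration of what an analog beam codebook is (a classical DFT codebook baseline, not the learned site-specific codebooks developed in the dissertation), each codeword is a vector of unit-modulus phase shifts, and the transmitter picks the codeword with the highest beamforming gain for the current channel:

import numpy as np

def dft_codebook(num_antennas: int, num_beams: int) -> np.ndarray:
    """Classical DFT analog codebook: unit-modulus phase shifts only (constant-modulus constraint)."""
    angles = np.arange(num_beams) / num_beams            # normalized spatial frequencies
    n = np.arange(num_antennas)[:, None]
    return np.exp(2j * np.pi * n * angles) / np.sqrt(num_antennas)   # shape (antennas, beams)

def best_beam(codebook: np.ndarray, channel: np.ndarray) -> int:
    """Pick the codeword maximizing the beamforming gain |w^H h|^2 for channel vector h."""
    gains = np.abs(codebook.conj().T @ channel) ** 2
    return int(np.argmax(gains))

# Illustrative usage with a random narrowband channel for a 32-antenna array.
rng = np.random.default_rng(0)
h = (rng.standard_normal(32) + 1j * rng.standard_normal(32)) / np.sqrt(2)
W = dft_codebook(num_antennas=32, num_beams=64)
print("best beam index:", best_beam(W, h))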
Contributors: Zhang, Yu (Author) / Alkhateeb, Ahmed (Thesis advisor) / Tepedelenlioglu, Cihan (Committee member) / Bliss, Daniel (Committee member) / Dasarathy, Gautam (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In this thesis work, a novel learning approach to solving the problem of controlling a quadcopter (drone) swarm is explored. To deal with large swarm sizes, swarm control is often achieved in a distributed fashion by combining different behaviors such that each behavior implements some desired swarm characteristic, such as avoiding obstacles or staying close to neighbors. One common approach in distributed swarm control uses potential fields. A limitation of this approach is that the potential fields often depend statically on a set of control parameters that are manually specified a priori. This work introduces Dynamic Potential Fields for flexible swarm control. These potential fields are modulated by a set of dynamic control parameters (DCPs) that can change under different environment situations. Since the focus is only on these DCPs, the learning problem is simplified and becomes feasible for practical use. The approach uses soft actor critic (SAC), where the actor only determines how to modify the DCPs in the current situation, resulting in more flexible swarm control. The results show that the DCP approach allows the drones to better traverse environments with obstacles compared to several state-of-the-art swarm control methods with a fixed set of control parameters. The approach also obtains a higher safety score, a metric commonly used to assess swarm behavior. A comparison with a basic reinforcement learning approach demonstrates faster convergence. Finally, an ablation study is conducted to validate the design of this approach.
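A minimal sketch of a potential-field controller with dynamic control parameters as described above (the gain names and force laws are illustrative assumptions, not the thesis implementation): the SAC actor would output the gains for the current situation, which then scale the standard cohesion and obstacle-repulsion terms.

import numpy as np

def potential_field_velocity(pos, neighbors, obstacles, k_cohesion, k_repulsion, influence=2.0):
    """Velocity command for one drone from attractive (cohesion) and repulsive (obstacle) fields.

    k_cohesion, k_repulsion: dynamic control parameters; in the DCP approach these would be
    produced by the learned SAC actor rather than being fixed a priori.
    """
    v = np.zeros(2)
    if len(neighbors) > 0:
        centroid = np.mean(neighbors, axis=0)
        v += k_cohesion * (centroid - pos)                                  # stay close to neighbors
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff) + 1e-9
        if d < influence:
            v += k_repulsion * (1.0 / d - 1.0 / influence) * diff / d**2    # push away from obstacle
    return v

# Illustrative usage with gains that a trained actor might output in a cluttered area.
cmd = potential_field_velocity(
    pos=np.array([0.0, 0.0]),
    neighbors=[np.array([1.0, 0.5]), np.array([0.5, -1.0])],
    obstacles=[np.array([0.3, 0.0])],
    k_cohesion=0.8,
    k_repulsion=1.5,
)
print(cmd)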
Contributors: Ferraro, Calvin Shores (Author) / Zhang, Yu (Thesis advisor) / Ben Amor, Hani (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Recent breakthroughs in Artificial Intelligence (AI) have brought the dream of developing and deploying complex AI systems that can potentially transform everyday life closer to reality than ever before. However, the growing realization that there might soon be people from all walks of life using and working with these systems has also spurred a lot of interest in ensuring that AI systems can efficiently and effectively work and collaborate with their intended users. Chief among the efforts in this direction has been the pursuit of imbuing these agents with the ability to provide intuitive and useful explanations regarding their decisions and actions to end-users. In this dissertation, I will describe various works that I have done in the area of explaining sequential decision-making problems. Furthermore, I will frame the discussion of my work within a broader framework for understanding and analyzing explainable AI (XAI). My works herein tackle many of the core challenges related to explaining automated decisions to users, including (1) techniques to address asymmetry in knowledge between the user and the system, (2) techniques to address asymmetry in inferential capabilities, and (3) techniques to address vocabulary mismatch. The dissertation will also describe the works I have done in generating interpretable behavior and policy summarization. I will conclude this dissertation by using the framework of human-aware explanation as a lens to analyze and understand the current landscape of explainable planning.
Contributors: Sreedharan, Sarath (Author) / Kambhampati, Subbarao (Thesis advisor) / Kim, Been (Committee member) / Smith, David E (Committee member) / Srivastava, Siddharth (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
As intelligent agents become pervasive in our lives, they are expected not only to achieve tasks alone but also to engage in tasks with humans in the loop. In such cases, the human naturally forms an understanding of the agent, which affects his perception of the agent's behavior. However, such an understanding inevitably deviates from the ground truth due to reasons such as the human's lack of understanding of the domain or misunderstanding of the agent's capabilities. Such differences result in a mismatch between the human's expectation of the agent's behavior and the agent's optimal behavior, thereby biasing the human's assessment of the agent's performance. In this dissertation, I focus on cases where these differences are due to a biased belief about domain dynamics. I especially investigate the impact of such a biased belief on the agent's decision-making process in two different problem settings from a learning perspective. In the first setting, the agent is tasked to accomplish a task alone but must infer the human's objectives from the human's feedback on the agent's behavior in the environment. In such a case, the human's biased feedback could mislead the agent into learning a reward function that results in a sub-optimal and, potentially, undesired policy. In the second setting, the agent must accomplish a task with a human observer. Given that the agent's optimal behavior may not match the human's expectation due to the biased belief, the agent's optimal behavior may be viewed as inexplicable, leading to degraded performance and loss of trust. Consequently, this dissertation proposes approaches that (1) endow the agent with the ability to be aware of the human's biased belief while inferring the human's objectives, (2) neutralize the impact of the model differences in a reinforcement learning framework, and (3) behave explicably by reconciling the human's expectation and optimality during decision-making.
Contributors: Gong, Ze (Author) / Zhang, Yu (Thesis advisor) / Amor, Hani Ben (Committee member) / Kambhampati, Subbarao (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2022