Matching Items (26)
Filtering by
- All Subjects: Computer vision
- All Subjects: Computer Engineering
- Creators: Yang, Yezhou
Description
Topological methods for data analysis present opportunities for enforcing certain invariances of broad interest in computer vision, including view-point invariance in activity analysis, articulation invariance in shape analysis, and measurement invariance in non-linear dynamical modeling. The increasing success of these methods is attributed to the complementary information that topology provides, as well as to the availability of tools for computing topological summaries such as persistence diagrams. However, persistence diagrams are multi-sets of points, and hence it is not straightforward to fuse them with the features used by contemporary machine learning tools such as deep networks. This thesis presents theoretically well-grounded approaches for developing novel perturbation-robust topological representations, with the long-term goal of making them amenable to fusion with contemporary learning architectures. The proposed representation lives on a Grassmann manifold and hence can be used efficiently in machine learning pipelines.
The efficacy of the proposed descriptor is explored in three applications: view-invariant activity analysis, 3D shape analysis, and non-linear dynamical modeling. Favorable results are obtained both in high-level recognition performance and in reduced time complexity when compared to other baseline methods.
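For readers unfamiliar with the pipeline, the sketch below illustrates one way (an assumption for illustration, not the thesis's exact construction) to go from a point cloud to a Grassmann-manifold representation: compute persistence diagrams with the ripser package, rasterize each diagram into a fixed-length feature vector, and take an orthonormal basis of the features of several perturbed copies of the data, so the summary is a subspace, i.e., a point on a Grassmann manifold.

```python
import numpy as np
from ripser import ripser

def persistence_image_vec(points, grid=8, sigma=0.05):
    # Rasterize the H1 diagram in (birth, persistence) coordinates onto a
    # fixed grid -- a crude persistence-image-style featurization (assumption).
    dgm = ripser(points)['dgms'][1]                  # H1 (birth, death) pairs
    xs = np.linspace(0, 1, grid)
    gx, gy = np.meshgrid(xs, xs)
    img = np.zeros((grid, grid))
    for b, d in dgm:
        img += (d - b) * np.exp(-((gx - b)**2 + (gy - (d - b))**2) / (2 * sigma**2))
    return img.ravel()

def grassmann_point(points, n_perturb=5, k=2, eps=0.01, seed=0):
    # Featurize several randomly perturbed copies of the data and return an
    # orthonormal basis of their span: a k-dim subspace, i.e., a Grassmann point.
    rng = np.random.default_rng(seed)
    F = np.stack([persistence_image_vec(points + eps * rng.normal(size=points.shape))
                  for _ in range(n_perturb)], axis=1)
    U, _, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :k]

X = np.random.default_rng(0).normal(size=(120, 3))   # toy point cloud
print(grassmann_point(X).shape)                      # (64, 2)
```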
Contributors
Thopalli, Kowshik (Author) / Turaga, Pavan Kumar (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created
2017
Description
Students learn in various ways: visualization, auditory, memorizing, or making analogies. Traditional lecturing in engineering courses and the learning styles of engineering students are inharmonious, putting students at a disadvantage based on their learning style (Felder & Silverman, 1988). My study analyzes the traditional approach to learning coding skills, which is unnatural to engineering students with no previous exposure, and examines whether visual learning enhances introductory computer science education. Visual and text-based learning are evaluated to determine how students learn introductory coding skills and the associated problem-solving skills. The study was conducted to observe how the two types of learning aid students in learning to problem-solve, as well as how much knowledge can be obtained in a short period of time. Scratch was the application used for visual learning, and Repl.it was used for text-based learning. Two exams were made to measure the progress made by each student, covering initialization, variable reassignment, output, if statements, if-else statements, nested if statements, logical operators, arrays/lists, while loops, type casting, functions, object orientation, and sorting. Analysis of the data collected in the study allows us to observe whether the traditional method of teaching programming or block-based programming is more beneficial, and for which introductory computer science topics.
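As a concrete point of comparison, a short text-based snippet of the kind the Repl.it group worked with can touch several exam topics at once; the example below is illustrative only and not taken from the study's exams.

```python
# A small text-based (Python) example spanning several exam topics:
# initialization, reassignment, conditionals, a loop, a list, and a function.
def grade(score):
    if score >= 90:          # if statement
        return "A"
    elif score >= 80:        # else-if branch
        return "B"
    else:                    # else branch
        return "C"

scores = [95, 82, 70]        # array/list
total = 0                    # initialization
for s in scores:             # loop over the list
    total = total + s        # variable reassignment
print("average:", total / len(scores), "->", grade(total / len(scores)))
```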
Contributors
Vidaure, Destiny Vanessa (Author) / Meuth, Ryan (Thesis director) / Yang, Yezhou (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created
2018-05
Description
This work presents a communication paradigm, using a context-aware mixed reality approach, for instructing human workers as they collaborate with robots. The main objective of this approach is to utilize the physical work environment as a canvas to communicate task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on tracked objects and the workspace. Simultaneous tracking and projection onto objects enables the system to provide just-in-time instructions for carrying out a procedural task. Additionally, the system can inform and warn humans about the intentions of the robot and the safety of the workspace. It was hypothesized that using this system for executing a human-robot collaborative task would improve the overall performance of the team and provide a positive experience to the human partner. To test this hypothesis, an experiment involving human subjects was conducted and the performance (both objective and subjective) of the presented system was compared with a conventional method based on printed instructions. It was found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
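The projection-mapping step can be sketched with a planar homography (a minimal OpenCV sketch with hypothetical coordinates; a real system derives these from tracking and projector-camera calibration, which this thesis does not detail here):

```python
# Warp a visual-cue image so it lands on a tracked object when projected.
import numpy as np
import cv2

cue = np.zeros((200, 200, 3), np.uint8)
cv2.putText(cue, "PICK", (30, 110), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)

# Corners of the cue image and the tracked object's corners in the projector
# frame (hypothetical values; real ones come from tracking + calibration).
src = np.float32([[0, 0], [200, 0], [200, 200], [0, 200]])
dst = np.float32([[480, 120], [760, 150], [740, 420], [460, 390]])

H = cv2.getPerspectiveTransform(src, dst)          # homography: cue -> projector
frame = np.zeros((768, 1024, 3), np.uint8)         # projector framebuffer
warped = cv2.warpPerspective(cue, H, (1024, 768))  # overlay cue on the object
frame = cv2.add(frame, warped)
cv2.imwrite("projector_frame.png", frame)
```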
Contributors
Kalpagam Ganesan, Ramsundar (Author) / Ben Amor, Hani (Thesis advisor) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created
2017
Description
Light field imaging is limited by the computational demands of high sampling in both the spatial and angular dimensions. Single-shot light field cameras sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing incoming rays onto a 2D sensor array. While this resolution can be recovered using compressive sensing, such iterative solutions are slow at processing a light field. We present a deep learning approach using a new two-branch network architecture, consisting jointly of an autoencoder and a 4D CNN, to recover a high-resolution 4D light field from a single coded 2D image. This network decreases reconstruction time significantly while achieving average PSNR values of 26-32 dB on a variety of light fields; in particular, reconstruction time is decreased from 35 minutes to 6.7 minutes compared to the dictionary method at equivalent visual quality. These reconstructions are performed at small sampling/compression ratios as low as 8%, allowing for cheaper coded light field cameras. We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured with a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera. The combination of compressive light field capture with deep learning opens the potential for real-time light field video acquisition systems in the future.
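As a rough illustration of the two-branch idea (an assumption-laden PyTorch sketch, not the thesis's architecture), the snippet below decodes a coded 2D measurement into a rough 4D light field with a fully connected autoencoder branch and refines it with a convolutional branch; PyTorch has no native 4D convolution, so the angular views are folded into a volumetric dimension here.

```python
import torch
import torch.nn as nn

A, H, W = 25, 32, 32                      # 5x5 angular views, spatial patch

class TwoBranchLF(nn.Module):
    def __init__(self):
        super().__init__()
        self.autoencoder = nn.Sequential( # branch 1: coded image -> rough LF
            nn.Flatten(),
            nn.Linear(H * W, 1024), nn.ReLU(),
            nn.Linear(1024, A * H * W),
        )
        self.refine = nn.Sequential(      # branch 2: conv refinement over views
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, coded):             # coded: (B, 1, H, W)
        rough = self.autoencoder(coded).view(-1, 1, A, H, W)
        return rough + self.refine(rough) # residual refinement

lf = TwoBranchLF()(torch.randn(2, 1, H, W))
print(lf.shape)                           # torch.Size([2, 1, 25, 32, 32])
```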
Contributors
Gupta, Mayank (Author) / Turaga, Pavan (Thesis advisor) / Yang, Yezhou (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created
2017
Description
Visual navigation is a useful and important task for a variety of applications. As the prevalence of robots increases, there is an increasing need for energy-efficient navigation methods as well. Many aspects of efficient visual navigation algorithms have been implemented in the literature, but there is a lack of work evaluating the efficiency of the image sensors themselves. In this thesis, two methods are evaluated: adaptive image sensor quantization for traditional camera pipelines, and new event-based sensors for low-power computer vision. The first contribution is an evaluation of varying levels of linear and logarithmic sensor quantization on the task of visual simultaneous localization and mapping (SLAM). This unconventional method can provide efficiency benefits, with a trade-off between task accuracy and energy efficiency. The second contribution is a new sensor quantization method, gradient-based quantization, introduced to improve task accuracy: it lowers the bit level only in parts of the image that are less likely to matter to the SLAM algorithm, since lower bit levels yield better energy efficiency but worse task accuracy. The third contribution is an evaluation of the efficiency and accuracy of event-based camera intensity representations for the task of optical flow; results from a learning-based optical flow method are provided for each of five different reconstruction methods, along with ablation studies. Lastly, the challenges of an event feature-based SLAM system are presented, with results demonstrating the necessity of high-quality, high-resolution event data. The work in this thesis provides studies useful for examining the trade-offs of an efficient visual navigation system with traditional and event vision sensors, and its results suggest multiple directions for future work.
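The quantization schemes can be sketched in a few lines (an illustrative NumPy sketch with assumed tile size and threshold, not the thesis's pipeline): quantize an image linearly or logarithmically to b bits, and allocate fewer bits to low-gradient tiles in the gradient-based variant.

```python
import numpy as np

def quantize(img, bits, log=False):
    # img in [0,1] -> 2**bits levels, linearly or logarithmically spaced.
    x = np.log1p(img * 255) / np.log(256) if log else img
    levels = 2 ** bits - 1
    q = np.round(x * levels) / levels
    return np.expm1(q * np.log(256)) / 255 if log else q

def gradient_based_quantize(img, hi=8, lo=4, tile=16, thresh=0.05):
    # Keep high bit depth only where the local gradient magnitude is large.
    out = np.empty_like(img)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            bits = hi if mag[i:i+tile, j:j+tile].mean() > thresh else lo
            out[i:i+tile, j:j+tile] = quantize(img[i:i+tile, j:j+tile], bits)
    return out

img = np.random.default_rng(0).random((64, 64))      # toy image
print(np.abs(img - gradient_based_quantize(img)).mean())
```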
Contributors
Christie, Olivia Catherine (Author) / Jayasuriya, Suren (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created
2022
Description
Deep neural network-based methods have been proven to achieve outstanding performance on object detection and classification tasks. Deep neural networks follow the "deeper model with deeper confidence" belief to gain higher recognition accuracy. However, reducing these networks' computational costs remains a challenge, which impedes their deployment on embedded devices. For instance, the intersection management of Connected Autonomous Vehicles (CAVs) requires running computationally intensive object recognition algorithms on low-power traffic cameras. This dissertation studies a dynamic hardware and software approach to address this issue, since characteristics of real-world applications can facilitate dynamic adjustment and reduce computation. Specifically, the dissertation starts with a dynamic hardware approach that adjusts itself based on the difficulty of the input and extracts deeper features only if needed. Next, an adaptive learning mechanism is studied that uses features extracted from previous inputs to improve system performance. Finally, a system (ARGOS) is proposed and evaluated that can run on embedded systems while maintaining the desired accuracy. ARGOS adopts shallow features at inference time but can switch to deep features when the system requires higher accuracy. To improve performance, ARGOS distills temporal knowledge from the deep features into the shallow system, and it reduces computation further by focusing on regions of interest. Response time and mean average precision are adopted as the metrics for evaluating the proposed ARGOS system.
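The shallow/deep switching idea can be sketched as follows (hypothetical models and threshold; not the ARGOS implementation, which additionally distills temporal knowledge from the deep features into the shallow path): serve predictions from a cheap shallow network and invoke the deep network only when shallow confidence is low.

```python
import torch
import torch.nn as nn

shallow = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
deep = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 32 * 32, 10),
)

def predict(x, threshold=0.8):
    # Run the cheap path first; escalate only on low confidence (batch of 1).
    probs = torch.softmax(shallow(x), dim=1)
    conf, label = probs.max(dim=1)
    if conf.item() >= threshold:           # confident: stay on the cheap path
        return label.item(), "shallow"
    probs = torch.softmax(deep(x), dim=1)  # otherwise pay for deep features
    return probs.argmax(dim=1).item(), "deep"

print(predict(torch.randn(1, 3, 32, 32)))
```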
Contributors
Farhadi, Mohammad (Author) / Yang, Yezhou (Thesis advisor) / Vrudhula, Sarma (Committee member) / Wu, Carole-Jean (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created
2022
Description
Recent advances in autonomous vehicle (AV) technologies have ensured that autonomous driving will soon be present in real-world traffic. Despite the potential of AVs, many studies have shown that traffic accidents in hybrid traffic environments (where both AVs and human-driven vehicles (HVs) are present) are inevitable because of the unpredictability of human-driven vehicles. Given that eliminating accidents is impossible, an achievable design goal is to ensure that AVs will not be blamed for any accident in which they are involved. This work proposes BlaFT, a Blame-Free motion planning algorithm in hybrid Traffic. BlaFT is designed to be compatible with HVs and other AVs, and will not be blamed for accidents in a structured road environment. The work proves that no accidents will happen if all AVs use the BlaFT motion planner, and that in hybrid traffic an AV using BlaFT remains blame-free even if it is involved in a collision. The work instantiated scores of BlaFT and HV vehicles in an urban roadscape loop in the Simulation of Urban MObility (SUMO), ran the simulation for several hours, and observed that traffic becomes safer as the percentage of BlaFT vehicles increases. Adding BlaFT vehicles to HVs also increases the efficiency of traffic as a whole by up to 34%.
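The flavor of a blame-style feasibility check can be sketched with a safe-following-gap rule (an assumed RSS-like rule for illustration, not BlaFT's actual algorithm): a candidate plan is kept only if the ego vehicle always preserves a gap large enough that any rear-end collision cannot be attributed to it.

```python
def safe_gap(v_ego, v_lead, react=1.0, a_brake=6.0):
    # Minimum gap (m) so the ego can stop behind the lead under worst-case braking.
    d_ego = v_ego * react + v_ego**2 / (2 * a_brake)
    d_lead = v_lead**2 / (2 * a_brake)
    return max(d_ego - d_lead, 0.0)

def blame_free(plan):
    # plan: list of (gap_m, v_ego, v_lead) states along the candidate trajectory.
    return all(gap >= safe_gap(v_ego, v_lead) for gap, v_ego, v_lead in plan)

plan_a = [(30.0, 15.0, 15.0), (25.0, 15.0, 14.0)]   # keeps safe gaps: accepted
plan_b = [(8.0, 20.0, 10.0)]                        # tailgating: rejected
print(blame_free(plan_a), blame_free(plan_b))       # True False
```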
Contributors
Park, Sanggu (Author) / Shrivastava, Aviral (Thesis advisor) / Wang, Ruoyu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created
2022
Description
It is not merely an aggregation of static entities that a video clip carries, but also a variety of interactions and relations among these entities. Challenges remain for a video captioning system to generate natural language descriptions that focus on the prominent interest and align with latent aspects beyond direct observation. This work presents a Commonsense knowledge Anchored Video cAptioNing (dubbed CAVAN) approach. CAVAN exploits inferential commonsense knowledge to assist the training of a video captioning model with a novel paradigm for sentence-level semantic alignment. Specifically, commonsense knowledge is queried from the generic knowledge atlas ATOMIC to complement each training caption, forming a commonsense-caption entailment corpus. A BERT-based language entailment model trained on this corpus then serves as a commonsense discriminator for the training of the video captioning model, penalizing it for generating semantically misaligned captions. With extensive empirical evaluations on the MSR-VTT, V2C and VATEX datasets, CAVAN consistently improves the quality of generations and shows a higher keyword hit rate. Experimental results with ablations validate the effectiveness of CAVAN and reveal that the use of commonsense knowledge contributes to video caption generation.
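The discriminator idea can be sketched with an off-the-shelf NLI model (roberta-large-mnli here as a stand-in assumption, with an assumed scoring scheme; not CAVAN's trained discriminator): score whether a generated caption is entailed by commonsense text for the clip, and turn low entailment into a training penalty.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"                  # off-the-shelf NLI model (assumption)
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name).eval()

def entailment_penalty(commonsense, caption):
    # Penalty in [0,1]: high when the caption is NOT entailed by the commonsense.
    batch = tok(commonsense, caption, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**batch).logits.softmax(dim=-1)[0]
    entail = probs[nli.config.label2id["ENTAILMENT"]].item()
    return 1.0 - entail

print(entailment_penalty(
    "PersonX wants to celebrate; as a result PersonX feels happy.",
    "A man cheerfully blows out the candles on a cake."))
```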
Contributors
Shao, Huiliang (Author) / Yang, Yezhou (Thesis advisor) / Jayasuriya, Suren (Committee member) / Xiao, Chaowei (Committee member) / Arizona State University (Publisher)
Created
2022
Description
Visual navigation is a multi-disciplinary field across computer vision, machine learning and robotics, and it is of great significance in both research and industrial applications. An intelligent agent with visual navigation ability is capable of actively exploring environments, distinguishing and localizing a requested target, and approaching the target following acquired strategies. Despite a variety of advances in mobile robotics, equipping an autonomous agent with the above-mentioned abilities remains a challenging and complex task; a solution, however, is likely to accelerate the deployment of assistive robots.
Reinforcement learning trains an autonomous robot by rewarding desired behaviors, helping it obtain an action policy that maximizes reward while interacting with the environment. Through trial and error, an agent learns sophisticated and skillful strategies for handling complex tasks in the environment. Inspired by how humans navigate, reasoning about accessible spaces and the geometry of the environment from a first-person view, identifying the destination, and then moving toward it, this work develops a model that maps from pixels to actions while inherently estimating the target as well as a free-space map. The model has three major constituents: (i) a cognitive mapper that produces a topological free-space map from first-person-view images, (ii) a target recognition network that locates the desired object, and (iii) a deep reinforcement learning action policy network. Further, a planner model with a cascade architecture based on multi-scale semantic top-down occupancy map input is proposed.
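The three-module design can be sketched schematically (an illustrative PyTorch sketch with assumed shapes, not the thesis's networks): a mapper predicts free-space logits from the first-person view, a recognizer scores the target, and a policy head maps the shared features to discrete navigation actions.

```python
import torch
import torch.nn as nn

class NavModel(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(      # shared first-person-view features
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.mapper = nn.Conv2d(32, 1, 1)      # free-space logits (map proxy)
        self.recognizer = nn.Linear(32, 1)     # target present/absent score
        self.policy = nn.Linear(32, n_actions) # action logits

    def forward(self, rgb):                    # rgb: (B, 3, 64, 64)
        f = self.encoder(rgb)                  # (B, 32, 16, 16)
        pooled = f.mean(dim=(2, 3))            # global feature
        return self.mapper(f), self.recognizer(pooled), self.policy(pooled)

m, t, a = NavModel()(torch.randn(1, 3, 64, 64))
print(m.shape, t.shape, a.shape)
```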
Contributors
Zheng, Shibin (Author) / Yang, Yezhou (Thesis advisor) / Zhang, Wenlong (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created
2019
Description
In a multi-robot system, locating a team robot is an important issue. If robots can infer the locations of team robots from information gained through passive action recognition, without explicit communication, various advantages (e.g., improved security for military purposes) can be obtained. Specifically, when team robots follow the same motion rule based on information about adjacent robots, associations can be found between robot actions. If these associations can be analyzed, they provide clues to the state of a remote robot, making it possible to infer the positions of robots outside of sensor range.
In this thesis, a multi-robot system is constructed using a combination of Thymio II robotic platforms and Raspberry Pi controllers. Robots moving in chain formation act on motion rules based on information obtained through passive action recognition. To find associations between robots, a regression model is created using a Deep Neural Network (DNN) with Long Short-Term Memory (LSTM) layers, a state-of-the-art technique. The input to the regression model is divided into historical data, the consecutive positions of the robot, and observed data, the information about the observed robot; the historical data is sequence data analyzed through the LSTM layer.
The accuracy of a regression model designed using a DNN can vary with the quantity and quality of its input, so three input conditions are assumed for comparison: first, the amount of observed data differs; second, the type of observed data differs; and third, the history length differs. Comparative models are constructed for each case, and prediction accuracy is compared to analyze the effect of the input data on the regression model. This exploration validates that these deep learning methods can reduce the communication demands in coordinated motion of multi-robot systems.
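The two-input regression design can be sketched as follows (a minimal PyTorch sketch with assumed input sizes, not the thesis's exact network): an LSTM encodes the observed robot's position history, a dense layer encodes the current observation, and the fused features regress the remote robot's (x, y) position.

```python
import torch
import torch.nn as nn

class RemotePoseRegressor(nn.Module):
    def __init__(self, hist_dim=2, obs_dim=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(hist_dim, hidden, batch_first=True)  # history branch
        self.obs = nn.Linear(obs_dim, hidden)                    # observation branch
        self.head = nn.Linear(2 * hidden, 2)                     # -> (x, y)

    def forward(self, history, observed):
        # history: (B, T, 2) past positions; observed: (B, obs_dim) current info
        _, (h, _) = self.lstm(history)
        fused = torch.cat([h[-1], torch.relu(self.obs(observed))], dim=1)
        return self.head(fused)

model = RemotePoseRegressor()
xy = model(torch.randn(8, 20, 2), torch.randn(8, 4))
print(xy.shape)                                                  # torch.Size([8, 2])
```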
Contributors
Kang, Sehyeok (Author) / Pavlic, Theodore P (Thesis advisor) / Richa, Andréa W. (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created
2020