Matching Items (84)
Filtering by
- Genre: Masters Thesis
- Creators: Yang, Yezhou
Description
With the rise in social media usage and rapid communication, the proliferation of misinformation and fake news has become a pressing concern. The detection of multimodal fake news requires careful consideration of both image and textual semantics with proper alignment of the embedding space. Automated fake news detection has gained significant attention in recent years. Existing research has focused on either capturing cross-modal inconsistency information or leveraging the complementary information within image-text pairs. However, the potential of powerful cross-modal contrastive learning methods and effective modality mixing remains an open question. This thesis proposes a novel two-leg single-tower architecture equipped with self-attention mechanisms and a custom contrastive loss to efficiently aggregate multimodal features. Furthermore, pretraining and fine-tuning are employed on the custom transformer model to classify fake news on the popular Twitter multimodal fake news dataset. The experimental results demonstrate the efficacy and robustness of the proposed approach, offering promising advancements in multimodal fake news detection research.
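To make the cross-modal alignment idea concrete, the sketch below shows a generic symmetric image-text contrastive loss (InfoNCE-style) in PyTorch. The function name, temperature value, and embedding sizes are illustrative assumptions; the thesis's custom loss and two-leg single-tower architecture are not reproduced here.

```python
# Minimal sketch of a symmetric image-text contrastive loss (InfoNCE-style),
# illustrating the kind of objective used to align image and text embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """img_emb, txt_emb: (batch, dim) embeddings of matched image-text pairs."""
    img = F.normalize(img_emb, dim=-1)           # unit-normalize each modality
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature         # pairwise cosine similarities
    targets = torch.arange(img.size(0))          # matched pairs lie on the diagonal
    # Symmetric cross-entropy: image-to-text and text-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Example usage with random embeddings
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```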
Contributors: Lakhanpal, Sanyam (Author) / Lee, Kookjin (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
For a system of autonomous vehicles operating together in a traffic scene, 3D understanding of the participants in the field of view or surroundings is essential for assessing the safety of everyone involved. This problem can be decomposed into online pose and shape estimation, which has been a core research area of computer vision for over a decade. This work supports and improves the joint estimation of vehicle pose and shape from monocular cameras. The objective of jointly estimating vehicle pose and shape online is enabled by an offline reconstruction pipeline. In the offline reconstruction step, an approach to obtain vehicle 3D shapes with labeled keypoints is formulated.

This work proposes a multi-view reconstruction pipeline using images and masks that creates approximate vehicle shapes to be used as shape priors. A 3D model-fitting optimization approach is then developed to refine the shape prior using high-quality computer-aided design (CAD) models of vehicles. A dataset of such 3D vehicles with 20 annotated keypoints is prepared and called the AvaCAR dataset. The AvaCAR dataset can be used to estimate vehicle shape and pose without collecting the significant amounts of data needed to adequately train a neural network. The online reconstruction can use this synthetic dataset to generate novel viewpoints and simultaneously train a neural network for pose and shape estimation. Most methods in the current literature that use deep neural networks trained to estimate object pose from a single image are inherently biased toward the viewpoints of the training images. This approach addresses that limitation by supplying the online estimation with a shape prior that can generate novel views to account for viewpoint bias. The dataset is provided with ground-truth extrinsic parameters and compact vector-based shape representations, which, along with the multi-view data, can be used to efficiently train neural networks for vehicle pose and shape estimation. The vehicles in this library are evaluated with standard metrics to ensure they are capable of aiding online estimation and model-based tracking.
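As a rough illustration of the keypoint-based model fitting described above, the following sketch aligns a reconstructed shape prior's keypoints to a reference CAD model's keypoints with a similarity (Procrustes/Kabsch) fit. The function name and the 20-point toy data are assumptions for illustration; the thesis's full multi-view reconstruction and optimization pipeline is not shown.

```python
# Similarity alignment between reconstructed keypoints and CAD-model keypoints.
import numpy as np

def fit_similarity(src: np.ndarray, dst: np.ndarray):
    """Find scale s, rotation R, translation t minimizing ||s*R@src_i + t - dst_i||."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)          # 3x3 cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))                 # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S * [1.0, 1.0, d]).sum() / (src_c ** 2).sum() # optimal isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Example: align 20 keypoints of a toy shape prior to their CAD counterparts
cad_kp = np.random.rand(20, 3)
recon_kp = 1.1 * cad_kp + 0.05                         # synthetic misaligned prior
s, R, t = fit_similarity(recon_kp, cad_kp)
aligned = (s * (R @ recon_kp.T)).T + t
```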
Contributors: Dutta, Prabal Bijoy (Author) / Yang, Yezhou (Thesis advisor) / Berman, Spring (Committee member) / Lu, Duo (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Recent advances in autonomous vehicle (AV) technologies have ensured that autonomous driving will soon be present in real-world traffic. Despite the potential of AVs, many studies have shown that traffic accidents in hybrid traffic environments, where both AVs and human-driven vehicles (HVs) are present, are inevitable because of the unpredictability of human-driven vehicles. Given that eliminating accidents is impossible, an achievable goal is to design AVs so that they will not be blamed for any accident in which they are involved. This work proposes BlaFT, a Blame-Free motion planning algorithm in hybrid Traffic. BlaFT is designed to be compatible with HVs and other AVs, and will not be blamed for accidents in a structured road environment. The work proves that no accidents will happen if all AVs use the BlaFT motion planner, and that in hybrid traffic an AV using BlaFT remains blame-free even if it is involved in a collision. Scores of BlaFT and HV vehicles were instantiated in an urban road loop in the 'Simulation of Urban MObility' (SUMO) simulator and run for several hours; as the percentage of BlaFT vehicles increases, the traffic becomes safer. Adding BlaFT vehicles to HVs also increases the efficiency of traffic as a whole by up to 34%.
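For context, the sketch below shows how a mixed-traffic scenario might be driven from Python through SUMO's TraCI interface while logging simple safety statistics. The configuration file name urban_loop.sumocfg is hypothetical, the collision-count query assumes a recent SUMO release, and BlaFT's planner itself is not represented.

```python
# Hedged sketch: stepping a SUMO simulation via TraCI and counting collision events.
import traci

traci.start(["sumo", "-c", "urban_loop.sumocfg"])   # "sumo-gui" can be used for visualization
step, collisions = 0, 0
while traci.simulation.getMinExpectedNumber() > 0 and step < 3600:
    traci.simulationStep()
    # Colliding-vehicle query is assumed available in the installed SUMO release.
    collisions += traci.simulation.getCollidingVehiclesNumber()
    step += 1
traci.close()
print(f"simulated {step} steps, observed {collisions} collision events")
```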
Contributors: Park, Sanggu (Author) / Shrivastava, Aviral (Thesis advisor) / Wang, Ruoyu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Computer vision is becoming an essential component of embedded system applications such as smartphones, wearables, autonomous systems and internet-of-things (IoT) devices. These applications are generally deployed into environments with limited energy, memory bandwidth and computational resources. This trend is driving the development of energy-efficient image processing solutions from sensing to computation. In this thesis, different alternatives are explored to implement energy-efficient computer vision systems. First, I present a field programmable gate array (FPGA) implementation of an adaptive subsampling algorithm for region-of-interest (ROI)-based object tracking. By implementing the computationally intensive sections of this algorithm on an FPGA, I aim to offload computing resources from energy-inefficient graphics processing units (GPUs) and/or general-purpose central processing units (CPUs). I also present a working system executing this algorithm with near real-time latency on a standalone embedded device. Secondly, I present a neural network-based pipeline to improve the performance of event-based cameras in non-ideal optical conditions. Event-based cameras or dynamic vision sensors (DVS) are bio-inspired sensors that measure logarithmic per-pixel brightness changes in a scene. Their advantages include high dynamic range, low latency and ultra-low power when compared to standard frame-based cameras. Several tasks have been proposed to take advantage of these novel sensors, but they rely on perfectly calibrated, in-focus optical lenses. In this work I propose a method to reconstruct events captured with an out-of-focus event camera so they can be fed into an intensity reconstruction task. The network is trained with a dataset generated by simulating defocus blur in sequences from object tracking datasets such as LaSOT and OTB100. I also test the generalization performance of this network on scenes captured with a DAVIS event-based sensor equipped with an out-of-focus lens.
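To illustrate the data-generation step mentioned above, the sketch below applies a simple disk-kernel defocus blur to video frames, a common first-order defocus model. The kernel radius and the synthetic frame are assumptions; the thesis's exact blur simulation and reconstruction network are not reproduced.

```python
# Simulating defocus blur on frames with a normalized disk point-spread function.
import cv2
import numpy as np

def defocus_blur(frame: np.ndarray, radius: int = 5) -> np.ndarray:
    """Convolve a frame with a uniform disk kernel of the given radius (pixels)."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (xx ** 2 + yy ** 2 <= radius ** 2).astype(np.float32)
    kernel /= kernel.sum()
    return cv2.filter2D(frame, -1, kernel)

# Example: blur a synthetic frame (replace with LaSOT/OTB100 frames in practice)
frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
blurred = defocus_blur(frame, radius=7)
```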
Contributors: Torres Muro, Victor Isaac (Author) / Jayasuriya, Suren (Thesis advisor) / Spanias, Andreas (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Realistic lighting is important to improve immersion and make mixed reality applications seem more plausible. To properly blend AR objects into the real scene, it is important to study the lighting of the environment. The existing illumination frameworks provided by Google's ARCore (Google's Augmented Reality Software Development Kit) and Apple's ARKit (Apple's Augmented Reality Software Development Kit) are computationally expensive and have very slow refresh rates, which make them unsuitable for dynamic environments and low-end mobile devices. Recently, other illumination estimation frameworks such as GLEAM and Xihe have aimed at providing better illumination with faster refresh rates. GLEAM is an illumination estimation framework that understands the real scene by collecting pixel data from a reflective spherical light probe. GLEAM uses this data to form environment cubemaps, which are later mapped onto a reflection probe to generate illumination for AR objects.
From a single viewpoint, only one half of the light probe can be observed at a time, which does not give complete information about the environment. This leads to the idea of multi-viewpoint estimation for better performance. This thesis analyzes the multi-viewpoint capabilities of AR illumination frameworks that use physical light probes to understand the environment. The current work adds networking to GLEAM using TCP and UDP protocols. This thesis also documents how processor load is shared across the networked devices and how that benefits the performance of GLEAM on mobile devices. Some enhancements using multi-threading have also been made to the existing GLEAM model to improve its performance.
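As a sketch of the networking plumbing a multi-viewpoint setup needs, the example below sends and receives light-probe samples over UDP with Python sockets. The port number, JSON message format, and function names are made up for illustration and are not GLEAM's actual protocol.

```python
# Hedged sketch: shipping per-viewpoint light-probe samples between devices over UDP.
import json
import socket

PROBE_PORT = 9099  # hypothetical port

def send_probe_sample(host: str, sample: dict) -> None:
    """Fire-and-forget UDP datagram carrying one viewpoint's probe observation."""
    payload = json.dumps(sample).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, PROBE_PORT))

def receive_probe_samples():
    """Blocking receive loop on the aggregating device; yields (sender, sample)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", PROBE_PORT))
        while True:
            data, addr = sock.recvfrom(65535)
            yield addr, json.loads(data.decode("utf-8"))
```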
Contributors: Gurram, Sahithi (Author) / LiKamWa, Robert (Thesis advisor) / Jayasuriya, Suren (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Generative models for images, speech, and video have been actively developed over the last decades, and recent deep generative models can now synthesize multimedia content that is difficult to distinguish from authentic content. Such capabilities raise concerns such as malicious impersonation, intellectual property (IP) theft and copyright infringement.

One way to address these threats is to embed attributable watermarks in synthesized content so that users can identify the user-end models from which the content was generated. This work investigates a solution for model attribution, i.e., the classification of synthetic content by its source model via watermarks embedded in the content. Existing studies have shown the feasibility of model attribution in the image domain, along with the tradeoff between attribution accuracy and generation quality under various adversarial attacks, but not in the speech domain.

This work discusses the feasibility of model attribution in this different domain and algorithmic improvements for generating user-end speech models that empirically achieve high attribution accuracy while maintaining high generation quality. Lastly, several experiments are conducted to show the tradeoff between attributability and generation quality under a variety of attacks on generated speech signals that attempt to remove the watermarks.
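To convey the idea of watermark-based attribution, the sketch below embeds a key-dependent spread-spectrum perturbation into an audio signal and attributes it by correlating against candidate keys. This is a generic baseline with illustrative strength and signal length, not the attribution scheme developed in the thesis.

```python
# Spread-spectrum-style watermark embedding and attribution by correlation.
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    rng = np.random.default_rng(key)                     # each user-end model gets a key
    carrier = rng.choice([-1.0, 1.0], size=audio.shape)  # pseudo-random +/-1 carrier
    return audio + strength * carrier

def attribute(audio: np.ndarray, candidate_keys) -> int:
    """Return the key whose carrier correlates most strongly with the signal."""
    scores = {}
    for key in candidate_keys:
        rng = np.random.default_rng(key)
        carrier = rng.choice([-1.0, 1.0], size=audio.shape)
        scores[key] = float(np.dot(audio, carrier)) / len(audio)
    return max(scores, key=scores.get)

clean = np.random.randn(16000) * 0.1            # stand-in for one second of speech
marked = embed_watermark(clean, key=42)
print(attribute(marked, candidate_keys=[7, 13, 42, 99]))  # expected: 42
```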
Contributors: Cho, Yongbaek (Author) / Yang, Yezhou (Thesis advisor) / Ren, Yi (Committee member) / Trieu, Ni (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Machine learning (ML) and deep learning (DL) have become an intrinsic part of multiple fields. The ability to solve complex problems makes machine learning a panacea. In the last few years, there has been an explosion of data generation, which has greatly improved machine learning models, but this comes at the cost of high computation, which invariably increases the power usage and cost of the hardware. In this thesis we explore applications of ML techniques, applied to two completely different fields: arts, media and theater, and urban climate research, using low-cost and low-powered edge devices. The multi-modal chatbot uses different machine learning techniques, natural language processing (NLP) and computer vision (CV), to understand the user's inputs and accordingly perform in the play and interact with the audience. This system is also equipped with other interactive hardware setups such as movable LED systems; together they provide an experiential theatrical play tailored to each user. I will discuss how I used edge devices to achieve this AI system, which has created a new genre of theatrical play. I will then discuss MaRTiny, an AI-based bio-meteorological system that calculates mean radiant temperature (MRT), an important parameter for urban climate research. It is also equipped with a vision system that performs different machine learning tasks such as pedestrian and shade detection. The entire system costs around $200, which can potentially replace the existing setup worth $20,000. I will further discuss how I overcame the inaccuracies in MRT values caused by the system, using machine learning methods. These projects, although belonging to two very different fields, are implemented using edge devices and use similar ML techniques. In this thesis I will detail the different techniques shared between these two projects and how they can be used in several other applications using edge devices.
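As a toy illustration of the MRT correction idea, the sketch below fits a simple regression that maps a low-cost sensor's biased MRT estimates (plus auxiliary weather features) to reference values. All data, features, and coefficients here are synthetic assumptions, not MaRTiny's actual correction model.

```python
# Hedged sketch: regressing biased low-cost MRT readings toward reference values.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
air_temp = rng.uniform(25, 45, 500)                                # degrees C, synthetic
solar_rad = rng.uniform(0, 1000, 500)                              # W/m^2, synthetic
raw_mrt = air_temp + 0.02 * solar_rad + rng.normal(0, 1.5, 500)    # biased low-cost estimate
ref_mrt = air_temp + 0.028 * solar_rad                             # synthetic reference values

X = np.column_stack([raw_mrt, air_temp, solar_rad])
model = LinearRegression().fit(X, ref_mrt)
corrected = model.predict(X)
print(f"RMSE before: {np.sqrt(np.mean((raw_mrt - ref_mrt) ** 2)):.2f} C, "
      f"after: {np.sqrt(np.mean((corrected - ref_mrt) ** 2)):.2f} C")
```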
Contributors: Kulkarni, Karthik Kashinath (Author) / Jayasuriya, Suren (Thesis advisor) / Middel, Ariane (Thesis advisor) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Floating trash objects are commonly seen on water bodies such as lakes, canals and rivers. With the increase of plastic goods and human activity near water bodies, these trash objects can pile up and cause great harm to the surrounding environment. Using human workers to clear out this trash is a hazardous and time-consuming task. Employing autonomous robots for these tasks is a better approach, since it is more efficient and faster than using humans. However, for a robot to clean up trash, a good detection algorithm is required. Real-time object detection on water surfaces is challenging due to the nature of the environment and the volatility of the water surface. In addition, running an object detection algorithm on a robot's on-board processor limits the amount of CPU the algorithm can utilize. In this thesis, a computationally low-cost object detection approach for robust detection of trash objects, running on the on-board processor of a multirotor, is presented. To account for specular reflections on the water surface, a polarization filter is used and a specularity removal algorithm is integrated into the approach. The challenges faced during testing and the means taken to eliminate them are also discussed. The algorithm was compared with two other object detectors using four different metrics. Testing was carried out using videos of five different objects collected under different illumination conditions over a lake using a multirotor. The results indicate that the proposed algorithm is well suited to real-time deployment, since it had the highest processing speed of 21 FPS, the lowest CPU consumption of 37.5%, and considerably high precision and recall in detecting the objects.
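To illustrate one simple way specular highlights can be suppressed in software, the sketch below thresholds bright, low-saturation pixels and inpaints them with OpenCV. The thresholds and the file name lake_frame.png are illustrative; the thesis's polarization-filter-plus-algorithm pipeline is not reproduced.

```python
# Simple specular-highlight suppression: mask bright, nearly gray pixels and inpaint.
import cv2
import numpy as np

def suppress_specular(bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat, val = hsv[..., 1], hsv[..., 2]
    mask = ((val > 220) & (sat < 40)).astype(np.uint8) * 255  # bright, low-saturation pixels
    return cv2.inpaint(bgr, mask, 3, cv2.INPAINT_TELEA)

frame = cv2.imread("lake_frame.png")   # hypothetical frame from the multirotor camera
if frame is not None:
    cleaned = suppress_specular(frame)
```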
Contributors: Syed, Danish Faraaz (Author) / Zhang, Wenlong (Thesis advisor) / Yang, Yezhou (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Visual navigation is a useful and important task for a variety of applications. As the prevalence of robots increases, there is an increasing need for energy-efficient navigation methods as well. Many aspects of efficient visual navigation algorithms have been implemented in the literature, but there is a lack of work on evaluating the efficiency of the image sensors themselves. In this thesis, two approaches are evaluated: adaptive image sensor quantization for traditional camera pipelines, and new event-based sensors for low-power computer vision. The first contribution in this thesis is an evaluation of varying levels of linear and logarithmic sensor quantization on the task of visual simultaneous localization and mapping (SLAM). This unconventional method can provide efficiency benefits with a tradeoff between task accuracy and energy efficiency. The second contribution is a new sensor quantization method, gradient-based quantization, introduced to improve task accuracy. This method lowers the bit level only in parts of the image that are less likely to matter to the SLAM algorithm, since lower bit levels give better energy efficiency but worse task accuracy. The third contribution is an evaluation of the efficiency and accuracy of event-based camera intensity representations for the task of optical flow. Results from a learning-based optical flow method are provided for each of five different reconstruction methods, along with ablation studies. Lastly, the challenges of an event feature-based SLAM system are presented, with results demonstrating the necessity for high-quality and high-resolution event data. The work in this thesis provides studies useful for examining tradeoffs in an efficient visual navigation system with traditional and event vision sensors. The results of this thesis also provide multiple directions for future work.
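The sketch below conveys the intuition behind gradient-based quantization: keep full bit depth where image gradients are strong and re-quantize flat regions to fewer bits. The Sobel-based saliency measure, threshold, and bit level are assumptions for illustration, not the thesis's exact scheme.

```python
# Keep 8-bit values where gradients are strong; re-quantize flat regions to fewer bits.
import cv2
import numpy as np

def gradient_based_quantize(gray: np.ndarray, low_bits: int = 4, thresh: float = 30.0) -> np.ndarray:
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad_mag = cv2.magnitude(gx, gy)
    step = 256 // (2 ** low_bits)                  # quantization step for flat regions
    coarse = (gray // step) * step + step // 2     # low-bit representation of the frame
    return np.where(grad_mag > thresh, gray, coarse.astype(gray.dtype))

gray = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in for a camera frame
quantized = gradient_based_quantize(gray)
```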
Contributors: Christie, Olivia Catherine (Author) / Jayasuriya, Suren (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
3D perception poses a significant challenge in Intelligent Transportation Systems (ITS) due to occlusion and limited field of view. The necessity for real-time processing and alignment with existing traffic infrastructure compounds these limitations. To counter these issues, this work introduces a novel multi-camera Bird's-Eye View (BEV) occupancy detection framework. This approach leverages multi-camera setups to overcome occlusion and field-of-view limitations while employing BEV occupancy to simplify the 3D perception task, ensuring critical information is retained. A novel dataset for BEV occupancy detection, encompassing diverse scenes and varying camera configurations, was created using the CARLA simulator. Subsequent extensive evaluation of various multi-view occupancy detection models showcased the critical roles of scene diversity and occupancy grid resolution in enhancing model performance. A structured framework that complements the generated data is proposed for data collection in the real world. The trained model is validated against real-world conditions to ensure its practical applicability, demonstrating the influence of robust dataset design in refining ITS perception systems. This contributes to significant advancements in traffic management, safety, and operational efficiency.
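As a small illustration of the BEV occupancy representation, the sketch below rasterizes fused object positions from a shared ground-plane frame into an occupancy grid. The grid extent and resolution are illustrative, and the multi-camera detection model and CARLA data-generation code from the thesis are not shown.

```python
# Rasterize fused object positions into a bird's-eye-view occupancy grid.
import numpy as np

def to_bev_grid(points_xy: np.ndarray, extent: float = 50.0, resolution: float = 0.5) -> np.ndarray:
    """points_xy: (N, 2) object positions in meters, in a shared ground-plane frame."""
    size = int(2 * extent / resolution)
    grid = np.zeros((size, size), dtype=np.uint8)
    cols = ((points_xy[:, 0] + extent) / resolution).astype(int)
    rows = ((points_xy[:, 1] + extent) / resolution).astype(int)
    valid = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    grid[rows[valid], cols[valid]] = 1             # mark occupied cells
    return grid

# Example: three vehicles fused from different camera views
occupancy = to_bev_grid(np.array([[3.2, -1.0], [12.5, 4.4], [-20.0, 8.1]]))
print(occupancy.sum(), "occupied cells")
```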
Contributors: Vaghela, Arpitsinh Rohitkumar (Author) / Yang, Yezhou (Thesis advisor) / Lu, Duo (Committee member) / Chakravarthi, Bharatesh (Committee member) / Wei, Hua (Committee member) / Arizona State University (Publisher)
Created: 2024