A Proactive Systematic Approach to Enhance and Preserve Users’ Tech Applications Data Privacy Awareness and Control in Smart Cities

Description

The reality of smart cities is here and now. The issues of data privacy in tech applications are apparent in smart cities. Privacy, an issue raised by many and addressed by few, remains critical to smart cities’ success. It is the shared responsibility of smart cities, tech application makers, and users to embark on the journey toward solutions. Privacy is an individual problem for which smart cities need to provide a collective solution. This research focuses on understanding users’ data privacy preferences: what information they consider private and what they need to protect. It identifies data security loopholes, data privacy roadblocks, and common opportunities for change in order to implement a proactive, privacy-driven tech solution that addresses and resolves tech-induced data privacy concerns among citizens. This dissertation aims to address the issue of data privacy in tech applications using established methodologies. Through this research, a data privacy survey on tech applications was conducted; the results reveal users’ desire to become part of the solution by becoming aware of and taking control of their data privacy while using tech applications. Accordingly, this dissertation gives an overview of data privacy issues in tech, discusses the available data privacy foundations, elaborates on the steps needed to create a robust remedy that enables users’ awareness and control, and proposes two privacy applications: one as a data privacy awareness solution and the other as a representation of the privacy control framework to address data privacy concerns in smart cities.
Date Created
2022

Why Pop? A System to Explain How Deep Learning Models Classify Music

Description

The impact of Artificial Intelligence (AI) on daily life has increased significantly. AI is taking big strides into critical areas of life such as healthcare, but also into areas such as entertainment and leisure. Deep neural networks have been pivotal in making these advancements possible, but a well-known problem with deep neural networks is the lack of explanations for the choices they make. To combat this, several methods have been explored in the research community. One example is assigning rankings to individual features according to how influential they are in the decision-making process. In contrast, a newer class of methods focuses on Concept Activation Vectors (CAVs), which extract higher-level concepts from the trained model to capture more information as a mixture of several features rather than just one. The goal of this thesis is to employ concepts in a novel domain: explaining how a deep learning model uses computer vision to classify music into different genres. Owing to advances in deep learning for computer vision classification tasks, it is now standard practice to convert an audio clip into corresponding spectrograms and use those spectrograms as image inputs to the deep learning model. Thus, a pre-trained model can classify the spectrogram images (representing songs) into musical genres. The proposed explanation system, called “Why Pop?”, tries to answer certain questions about the classification process, such as which parts of the spectrogram influence the model the most, what concepts were extracted, and how they differ across classes. These explanations help the user gain insight into the model’s learnings, biases, and decision-making process.
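The audio-to-spectrogram step described in this abstract can be sketched with a minimal, NumPy-only short-time Fourier transform; the frame length, hop size, and log scaling below are illustrative assumptions, not the thesis’s actual pipeline:

```python
import numpy as np

def spectrogram(signal, frame_len=512, hop=256):
    """Compute a log-magnitude spectrogram as an image-like 2D array."""
    # Slice the signal into overlapping frames
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    # Hann window reduces spectral leakage; rfft keeps positive frequencies
    window = np.hanning(frame_len)
    spec = np.abs(np.fft.rfft(frames * window, axis=1))
    # Log scaling roughly mimics perceived loudness
    return np.log1p(spec).T  # shape: (freq_bins, time_frames)

# A 1-second 440 Hz tone at a 16 kHz sampling rate
sr = 16000
t = np.arange(sr) / sr
img = spectrogram(np.sin(2 * np.pi * 440 * t))
print(img.shape)  # (257, 61)
```

A 2D array like `img` can then be saved or stacked as an image and fed to a standard image classifier, which is the sense in which a pre-trained vision model can classify songs.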
Date Created
2022

3D Printed Indirect Ophthalmoscopy Smart Device Adapter

Description

Ophthalmoscopes are integral to diagnosing various eye conditions; however, they often come at a hefty cost and are not generally portable, limiting access. With the increased prevalence of smart devices and improvements in their imaging capabilities, these devices have the potential to benefit areas where specialized imaging infrastructure is not well established. Smart device cameras alone cannot replace an ophthalmoscope; however, with the addition of lenses and optics, it becomes possible to take diagnostic-quality images. The goal is to design a modular system that acts as an adapter to a smart device, enabling any user to take retinal and corneal images with little to no previous experience. The device should be cost-effective, reliable, and easy to use. It is not meant to replace conventional funduscopes but to serve in areas where current units fail, such as non-optimal settings, low-resource areas, or areas that currently receive suboptimal care due to geographic or socioeconomic barriers. The introduction of screening programs run by non-specialized medical personnel, with devices that can capture and transmit quality eye images, minimizes the long-term complications of degenerative eye conditions.
Date Created
2022

Recognizing Compositional Actions in Videos with Temporal Ordering

Description

In some scenarios, true temporal ordering is required to identify the actions occurring in a video. Recently, a new synthetic dataset named CATER was introduced, containing 3D objects such as spheres, cones, and cylinders that undergo simple movements such as slide and pick-and-place. The task defined in the dataset is to identify compositional actions with temporal ordering. In this thesis, a rule-based system and a window-based technique are proposed to identify individual (atomic) actions and multiple actions with temporal ordering (composite) on the CATER dataset. The rule-based system is a heuristic algorithm that evaluates the magnitude and direction of object movement across frames to determine the temporal windows of atomic actions, and it uses these windows to predict the composite actions in the videos. The performance of the rule-based system is validated using the frame-level object coordinates provided in the dataset, and it outperforms the baseline models on the CATER dataset. A window-based training technique is proposed for identifying composite actions in the videos. A pre-trained deep neural network (the I3D model) is used as the base network for action recognition. During inference, non-overlapping windows are passed through the I3D network to obtain atomic action predictions, which are then passed through the rule-based system to determine the composite actions. The approach outperforms the state-of-the-art composite action recognition models by 13.37% (mAP 66.47% vs. mAP 53.1%).
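The heuristic of thresholding per-frame displacement magnitudes and merging consecutive labels into atomic-action temporal windows could be sketched as follows; the function name, threshold value, and the "static"/"moving" labels are hypothetical stand-ins for the thesis's actual rules:

```python
import numpy as np

def atomic_action_windows(coords, still_thresh=0.01):
    """Segment a trajectory of per-frame (x, y) coordinates into temporal
    windows labeled by movement state, based on displacement magnitude."""
    # Displacement magnitude between consecutive frames
    disp = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    labels = ["moving" if d > still_thresh else "static" for d in disp]
    # Merge runs of identical labels into (label, start, end) windows
    windows, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[i - 1]:
            windows.append((labels[i - 1], start, i))
            start = i
    windows.append((labels[-1], start, len(labels)))
    return windows

# Toy trajectory: the object rests, slides along x, then rests again
coords = np.array([[0, 0], [0, 0], [0, 0], [1, 0], [2, 0], [2, 0], [2, 0]])
print(atomic_action_windows(coords))
# [('static', 0, 2), ('moving', 2, 4), ('static', 4, 6)]
```

A second pass over such windows (checking movement direction, object identity, and window order) would then yield the composite, temporally ordered action predictions described above.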
Date Created
2022

Haptic Feedback for Women’s Health

Description


The intent of this project was to design, build, and test a female-intended vibrator that incorporates elements of haptic feedback, biomimicry, and/or microrobotics. Device development was based on human-centered user design elements and the study of physiological arousal, as sexuality and sexual functioning are part of a human’s overall assessment of health and well-being. The thesis sought to fill the gap that prevents data collection of a female’s entire sexual response, from initial arousal to final orgasm.

Date Created
2022-05

Effect of Image Captioning with Description on the Working Memory

Description

Working memory plays an important role in human activities across academic, professional, and social settings. Working memory is defined as the memory extensively involved in goal-directed behaviors in which information must be retained and manipulated to ensure successful task execution. The aim of this research is to understand the effect of image captioning with image description on an individual's working memory. A study was conducted with eight neutral images depicting situations relatable to daily life, such that each image could have a positive or negative description associated with the outcome of the situation in the image. The study consisted of three rounds: the first and second rounds involved two parts each, and the third round consisted of one part. Each image was captioned a total of five times across the entire study. The findings highlight that only 25% of participants were able to recall, after a span of 9-15 days, the captions they had written for an image; when comparing recall rates across rounds, 50% of participants were able to recall in the present round the image caption from the previous round; and of the positive and negative descriptions associated with an image, 65% of participants recalled the former rather than the latter. The conclusions drawn from the study are that participants tend to retain information for longer periods than the expected duration of working memory, which may be because they were able to relate the images to their everyday life situations, and that, given a situation with positive and negative information, the human brain is inclined toward positive information over negative information.
Date Created
2021

Development of a Soft Robotic Exosuit for Knee Flexion Assistance

Description

The knee joint has essential functions in supporting body weight and maintaining normal walking. Neurological diseases like stroke and musculoskeletal disorders like osteoarthritis can affect the function of the knee. Besides physical therapy, robot-assisted therapy using wearable exoskeletons and exosuits has shown potential as an efficient therapy that helps patients restore their limbs’ functions. Exoskeletons and exosuits are being developed either for human performance augmentation or for medical purposes like rehabilitation. Although research on exoskeletons started well before that on exosuits, research and development on exosuits has recently grown rapidly, as exosuits have advantages that exoskeletons lack. The objective of this research is to develop a soft exosuit for knee flexion assistance and validate its ability to reduce the EMG activity of the knee flexor muscles. The exosuit has been developed with a novel soft fabric actuator and novel 3D-printed adjustable braces that attach the actuator in alignment with the knee. An analytical torque model has been derived and validated experimentally to characterize and predict the torque output of the actuator. In addition, the actuator’s deflation and inflation times have been experimentally characterized, a controller has been implemented, and the exosuit has been tested on a healthy human subject. The analytical torque model predicted the torque output in the flexion angle range from 0° to 60° more precisely than analytical models in the literature. Deviations beyond 60° may be due to factors such as fabric extensibility and the actuator’s bending behavior. Human testing showed that, for the subject tested, the exosuit performed best when the controller was tuned to inflate at 31.9% of the gait cycle.
At this inflation timing, the biceps femoris, semitendinosus, and vastus lateralis muscles showed average electromyography (EMG) reductions of 32.02%, 23.05%, and 2.85%, respectively. Finally, it is concluded that the developed exosuit may assist knee flexion in more diverse healthy human subjects and may potentially be used in the future for human performance augmentation and the rehabilitation of people with disabilities.
Date Created
2021

Vibro-Thermal Haptic Display for Socio-Emotional Communication Through Pattern Generations

Description

Touch plays a vital role in maintaining human relationships through social and emotional communication. This research proposes a multi-modal haptic display capable of generating vibrotactile and thermal haptic signals individually and simultaneously. The main objective in creating this device is to explore the importance of touch in social communication, which is absent in traditional communication modes like a phone call or a video call. By studying how humans interpret haptically generated messages, this research aims to create a new communication channel for humans. This novel device is worn on the user's forearm and has a broad scope of applications such as navigation, social interactions, notifications, health care, and education. The research methods include testing patterns in the vibro-thermal modality while noting their realizability and accuracy. Different patterns can be controlled and generated through an Android application connected to the proposed device via Bluetooth. Experimental results indicate that the patterns SINGLE TAP and HOLD/SQUEEZE were easily identifiable and more relatable to social interactions. In contrast, other patterns like UP-DOWN, DOWN-UP, LEFT-RIGHT, RIGHT-LEFT, LEFT-DIAGONAL, and RIGHT-DIAGONAL were less identifiable and less relatable to social interactions. Finally, design modifications are required if complex social patterns are to be displayed on the forearm.
Date Created
2021

Exploring the Impact of a Haptic Glove on Immersion

Description


In this experiment, a haptic glove with vibratory motors on the fingertips was tested against the standard HTC Vive controller to see whether the additional vibrations provided by the glove increased immersion in common gaming scenarios where haptic feedback is provided. Specifically, two scenarios were developed: an explosion scene containing a small and a large explosion, and a box interaction scene that allowed participants to touch a box virtually with their hand. At the start of this project, it was hypothesized that the haptic glove would have a significant positive impact in at least one of these scenarios. Nine participants took part in the study, and immersion was measured through a post-experiment questionnaire. Statistical analysis of the results showed that the haptic glove had a significant impact on immersion in the box interaction scene, but not in the explosion scene. In the end, I conclude that since this haptic glove does not significantly increase immersion across all scenarios when compared to the standard Vive controller, it should not be used as a replacement in its current state.

Date Created
2021-05

Haptic Vision: Augmenting Non-visual Travel Tools, Techniques, and Methods by Increasing Spatial Knowledge Through Dynamic Haptic Interactions

Description

Access to real-time situational information, including the relative position and motion of surrounding objects, is critical for safe and independent travel. Object or obstacle (OO) detection at a distance is primarily a task of the visual system due to the high-resolution information the eyes are able to receive from afar. As a sensory organ, the eyes have an unparalleled ability to adjust to varying degrees of light, color, and distance. Therefore, for a non-visual traveler, someone who is blind or has low vision, visual information is unattainable if it is positioned beyond the reach of the preferred mobility device or outside the path of travel. Although the area of assistive technology, in terms of electronic travel aids (ETAs), has received considerable attention over the last two decades, the field has surprisingly seen little work focused on augmenting, rather than replacing, current non-visual travel techniques, methods, and tools. Consequently, this work describes the design of an intuitive tactile language and a series of wearable tactile interfaces (the Haptic Chair, HaptWrap, and HapBack) to deliver real-time spatiotemporal data. The overall intuitiveness of the haptic mappings conveyed through the tactile interfaces is evaluated using a combination of absolute identification accuracy on a series of patterns and subjective feedback through post-experiment surveys. Two types of spatiotemporal representations are considered: static patterns representing object location at a single time instance, and dynamic patterns, added in the HaptWrap, which represent object movement over a time interval. Results support the viability of multi-dimensional haptics applied to the body to yield an intuitive understanding of dynamic interactions occurring around the navigator during travel.
Lastly, it is important to point out that the guiding principle of this work centered on providing the navigator with spatial knowledge otherwise unattainable through current mobility techniques, methods, and tools, thus giving the navigator the information necessary to make informed navigation decisions independently, at a distance.
Date Created
2020