One approach to support such personalization is self-experimentation using single-case designs. ‘Hack Your Health’ is a tool that guides individuals through an 18-day self-experiment to test whether an intervention of their choosing (e.g., meditation, gratitude journaling) improves their psychological well-being (e.g., stress, happiness), fits into their routine, and is enjoyable.
The purpose of this work was to conduct a formative evaluation of Hack Your Health, examining user burden and adherence and evaluating its usefulness in supporting decision-making about a health intervention. A mixed-methods approach was used, and two versions of the tool were tested across two waves of participants (Wave 1, N=20; Wave 2, N=8). Participants completed their self-experiments and provided feedback via follow-up surveys (n=26) and interviews (n=20).
Findings indicated that the tool had high usability and low burden overall. The average survey completion rate was 91%, and compliance with the protocol was 72%. Overall, participants found the experience useful for testing whether their chosen intervention helped them. However, there were discrepancies between participants’ intuitions about intervention effects and the results of the analyses, and participants often relied on intuition and lived experience over the results when making decisions. This suggests that the usefulness of Hack Your Health in its current form may lie in the structure, accountability, and means for self-reflection it provides rather than in the specific experimental design and results. Additionally, the study uncovered situations where performing interventions within a rigorous, restrictive experimental setup may not be appropriate (e.g., when the goal is to assess how enjoyable an intervention is). Plausible design implications include longer experiment and phase durations; accounting for non-compliance, missingness, and proximal/acute effects; and strategies that complement quantitative data with participants’ lived experiences to better support decision-making. Future work should explore ways to balance scientific rigor with participants’ needs for such decision-making.
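To make the analysis step concrete, the sketch below shows one simple way results from such a self-experiment could be computed. The rating scale, the baseline/intervention (AB) phase split, and the effect-size choice are illustrative assumptions, not the tool's documented analysis.

```python
# Minimal sketch of a single-case (AB) phase comparison, assuming daily
# 1-5 well-being ratings over an 18-day experiment split into a baseline
# phase and an intervention phase. Hack Your Health's actual analysis is
# not specified in the abstract; this is purely illustrative.
from statistics import mean, stdev

baseline = [2, 3, 2, 3, 3, 2, 3, 2, 3]      # days 1-9: no intervention
intervention = [3, 4, 3, 4, 4, 3, 4, 4, 3]  # days 10-18: e.g., daily meditation

# Effect size as a standardized mean difference against baseline variability.
effect = (mean(intervention) - mean(baseline)) / stdev(baseline)
print(f"Baseline mean: {mean(baseline):.2f}")
print(f"Intervention mean: {mean(intervention):.2f}")
print(f"Standardized mean difference: {effect:.2f}")
```

A summary like this makes the intervention's apparent effect explicit, which is exactly where the abstract notes participants' intuitions sometimes diverged from the computed results.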
Every communication system has a receiver and a transmitter, whether it is wired or wireless. The future of wireless communication involves a massive number of transmitters and receivers, which raises the question: can computer vision help wireless communication? To satisfy high data-rate requirements, large numbers of antennas are needed, and devices that employ large antenna arrays often carry other sensors such as RGB cameras, depth cameras, or LiDAR sensors. These vision sensors can help overcome non-trivial wireless communication challenges, such as beam blockage prediction and handover prediction. This is further motivated by recent advances in deep learning and computer vision, which can extract high-level semantics from complex visual scenes, and by the increasing interest in leveraging machine/deep learning tools for wireless communication problems. [1]

The research focused on technologies such as 3D cameras, object detection and tracking using computer vision, and compression techniques. The main objective of using computer vision was to make millimeter-wave communication more robust and to collect more data for the machine learning algorithms. Pre-built lossless and lossy compression tools, such as FFmpeg, were used. An algorithm was developed that uses 3D cameras and machine learning models such as YOLOv3 to track moving objects with servo motors on low-powered computers like the Raspberry Pi or the Jetson Nano; in other words, the receiver could track a highly mobile transmitter in one dimension using a 3D camera. Furthermore, the transmitter was mounted on a DJI M600 Pro drone, and machine learning and object tracking were used to track the highly mobile drone. To build the machine learning model and object tracker, collecting data such as depth, RGB images, and position coordinates was the first and most important step. GPS coordinates from the DJI M600 were also pulled and successfully plotted on Google Earth, which proved very useful during drone-based data collection and for future applications of drone position estimation using machine learning.

Initially, images were captured from the transmitter camera every second, and each frame was converted to a text file containing hexadecimal values. Each text file was then transmitted from the transmitter to the receiver, where a Python script converted the hexadecimal data back to a JPG, giving the effect of real-time video transmission. Towards the end of the research, however, industry-standard real-time video was streamed using pre-built FFmpeg modules, GNU Radio, and a Universal Software Radio Peripheral (USRP). The transmitter camera was a Raspberry Pi camera. More details are discussed later in this research report.
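As a concrete illustration of the frame-to-hex pipeline described above, the following minimal Python sketch encodes a JPG frame as hexadecimal text on the transmitter side and decodes it back to a JPG on the receiver side. The file names are placeholders, and the actual over-the-air link (GNU Radio/USRP) is not shown.

```python
# Illustrative sketch of the frame-to-hex pipeline: dump a JPG's bytes as
# hexadecimal text (transmitter), then rebuild the JPG from that text
# (receiver). File names are placeholders; transmission is out of scope.

def encode_frame(jpg_path: str, txt_path: str) -> None:
    """Transmitter side: write a JPG's bytes as hexadecimal text."""
    with open(jpg_path, "rb") as f:
        hex_text = f.read().hex()
    with open(txt_path, "w") as f:
        f.write(hex_text)

def decode_frame(txt_path: str, jpg_path: str) -> None:
    """Receiver side: rebuild the JPG from the hexadecimal text."""
    with open(txt_path, "r") as f:
        data = bytes.fromhex(f.read().strip())
    with open(jpg_path, "wb") as f:
        f.write(data)

if __name__ == "__main__":
    encode_frame("frame.jpg", "frame.txt")     # runs on the transmitter
    decode_frame("frame.txt", "frame_rx.jpg")  # runs on the receiver
```

Hex encoding doubles the payload size relative to the raw JPG bytes, which helps explain why the project later moved to the FFmpeg/GNU Radio/USRP streaming pipeline.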
This paper reports research towards detecting Parkinson's disease (PD) and the effects of medication using machine learning and finger-tapping data collected on mobile devices. The primary objective of this research is to prototype a PD classification model and a medication classification model that predict, respectively, an individual's disease status and the medication intake time relative to performing the finger-tapping activity.
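To make this concrete, here is a minimal sketch of the kind of pipeline such a model implies: deriving simple timing features from timestamped taps and fitting a binary PD-versus-control classifier. The features, the model choice (a random forest), and the toy data are assumptions for illustration, not the paper's actual method.

```python
# Minimal sketch: summarize each tapping session by inter-tap-interval
# statistics, then fit a binary classifier. Feature set, model, and data
# are illustrative assumptions, not the paper's documented approach.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tap_features(tap_times: np.ndarray) -> np.ndarray:
    """Summarize one session by inter-tap-interval (ITI) statistics."""
    iti = np.diff(tap_times)  # seconds between consecutive taps
    return np.array([iti.mean(), iti.std(), iti.max() - iti.min()])

# Toy data: each session is a series of tap timestamps (seconds); higher
# interval variability is used as a stand-in for PD-related irregularity.
rng = np.random.default_rng(0)
scales = [0.02] * 20 + [0.08] * 20
sessions = [np.cumsum(np.abs(rng.normal(0.25, s, 50))) for s in scales]
labels = np.array([0] * 20 + [1] * 20)  # 0 = control, 1 = PD (toy labels)

X = np.vstack([tap_features(t) for t in sessions])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))  # predicted disease status for the first sessions
```

The medication classification model described above would follow the same shape, with labels encoding intake time relative to the tapping activity instead of disease status.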