Matching Items (19)
150020-Thumbnail Image.png
Description
Dietary self-monitoring has been shown to be a predictor of weight loss success and is a prevalent part of behavioral weight control programs. As more weight loss applications have become available on smartphones, this feasibility study investigated whether the use of a smartphone application or a smartphone memo feature would improve dietary self-monitoring over the traditional paper-and-pencil method. The study also examined whether the difference in methods would affect weight loss. Forty-seven adults (BMI 25 to 40 kg/m2) completed an 8-week study focused on tracking the difference in adherence to a self-monitoring protocol and subsequent weight loss. Participants owning iPhones (n=17) used the 'Lose It' application (AP) for diet and exercise tracking and were compared to smartphone participants who recorded dietary intake using a memo (ME) feature (n=15) on their phone and participants using the traditional paper-and-pencil (PA) method (n=15). There was no significant difference in completion rates between groups, with an overall completion rate of 85.5%. Overall mean adherence to self-monitoring for the 8-week period was better in the AP group than the PA group (p = .024). No significant difference was found between the AP group and the ME group (p = .148), or the ME group and the PA group (p = .457). Weight loss over the 8-week study was significant for all groups (p = .028), with no significant difference in weight loss between groups. The number of days recorded, regardless of group assignment, showed a weak correlation with weight loss success (p = .068). Smartphone owners seeking to lose weight should be encouraged by the potential success associated with dietary tracking using a smartphone app as opposed to the traditional paper-and-pencil method.
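The abstract above reports group-wise adherence comparisons and a correlation between days recorded and weight loss, but does not name the tests used. A minimal Python sketch of that kind of analysis follows, using made-up adherence and weight-change values and assuming pairwise t-tests and a Pearson correlation for illustration only.

```python
# Hypothetical illustration of the kinds of comparisons described above.
# The adherence values and effect sizes are made up; the abstract does not
# specify which statistical tests were used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
adherence = {
    "AP": rng.uniform(0.5, 1.0, 17),   # app group, fraction of days recorded
    "ME": rng.uniform(0.4, 0.9, 15),   # memo group
    "PA": rng.uniform(0.3, 0.8, 15),   # paper-and-pencil group
}

# Pairwise comparisons of mean adherence between groups
for a, b in [("AP", "PA"), ("AP", "ME"), ("ME", "PA")]:
    t, p = stats.ttest_ind(adherence[a], adherence[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}")

# Correlation between days recorded and weight change (made-up values)
days_recorded = rng.integers(10, 56, 47)
weight_change = -0.05 * days_recorded + rng.normal(0, 1.5, 47)
r, p = stats.pearsonr(days_recorded, weight_change)
print(f"days recorded vs weight change: r = {r:.2f}, p = {p:.3f}")
```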
Contributors: Cunningham, Barbara (Author) / Wharton, Christopher (Christopher Mack), 1977- (Thesis advisor) / Johnston, Carol (Committee member) / Hall, Richard (Committee member) / Arizona State University (Publisher)
Created: 2012
150415-Thumbnail Image.png
Description
ABSTRACT This study evaluated the LoseIt smartphone app by Fit Now Inc. for nutritional quality among users during an 8-week behavioral modification weight loss protocol. All participants owned smartphones and were cluster randomized to either a control group using paper-and-pencil record keeping, a memo group using a memo function on their smartphones, or the LoseIt app group, which was composed of the participants who owned iPhones. Thirty-one participants completed the study protocol: 10 in the LoseIt app group, 10 in the memo group, and 11 in the paper-and-pencil group. Food records were analyzed using Food Processor by ESHA, and nutritional quality was scored using the Healthy Eating Index-2005 (HEI-2005). Scores were compared using One-Way ANOVA, with no significant differences in any category across groups. Non-parametric statistics were then used to compare the combined memo and paper-and-pencil groups with the LoseIt app group, as the memo and paper-and-pencil groups received live counseling at biweekly intervals and the LoseIt group did not. No significant difference was found in HEI scores across all categories; however, a trend toward higher total HEI scores was noted among the memo and paper-and-pencil participants (p = 0.091). In conclusion, no significant difference was detected between users of the smartphone app LoseIt and the memo and paper-and-pencil groups. More research is needed to determine the impact of in-person counseling versus the user feedback provided by the LoseIt smartphone app.
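As a hedged illustration of the comparisons described above, the sketch below runs a One-Way ANOVA across three groups of made-up HEI-2005 total scores and then a non-parametric comparison of the combined counseled groups against the app group; the abstract does not specify which non-parametric test was used, so Mann-Whitney U is assumed here.

```python
# Hedged sketch of the statistical comparisons described above, with made-up
# HEI-2005 total scores; group sizes match the abstract, the scores do not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hei_app = rng.normal(55, 10, 10)     # LoseIt app group (n=10)
hei_memo = rng.normal(60, 10, 10)    # memo group (n=10)
hei_paper = rng.normal(61, 10, 11)   # paper-and-pencil group (n=11)

# One-Way ANOVA across the three groups
f, p = stats.f_oneway(hei_app, hei_memo, hei_paper)
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")

# Non-parametric comparison: combined counseled groups vs. the app group
# (Mann-Whitney U is one common choice; the abstract does not name the test.)
counseled = np.concatenate([hei_memo, hei_paper])
u, p = stats.mannwhitneyu(counseled, hei_app, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```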
Contributors: Cowan, David Kevin (Author) / Johnston, Carol (Thesis advisor) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Mayol-Kreiser, Sandra (Committee member) / Arizona State University (Publisher)
Created: 2011
152443-Thumbnail Image.png
Description
Dietary counseling from a registered dietitian has been shown in previous studies to aid in weight loss for those receiving counseling. With the increasing use of smartphone diet/weight loss applications (apps), this study sought to investigate whether an iPhone diet app providing feedback from a registered dietitian improved weight loss and biomarkers of health. Twenty-four healthy adults who owned iPhones (BMI > 24 kg/m2) completed this trial. Participants were randomly assigned to one of three app groups: the MyDietitian app with daily feedback from a registered dietitian (n=7), the MyDietitian app without feedback (n=7), and the MyPlate feedback control app (n=10). Participants used their respective diet apps daily for 8 weeks while their weight loss, adherence to self-monitoring, blood biomarkers of health, and physical activity were monitored. All of the groups had a significant reduction in waist and hip circumference (p<0.001), a reduction in A1c (p=0.002), an increase in HDL cholesterol levels (p=0.012), and a reduction in calories consumed (p=0.022) over the duration of the trial. Adherence to diet monitoring via the apps did not differ between groups during the study. Body weight did not change during the study for any group. However, when the participants were divided into low-adherence (<50% of days) or high-adherence (>50% of days) groups, irrespective of study group, the high-adherence group had a significant reduction in weight when compared to the low-adherence group (p=0.046). These data suggest that diet apps may be useful tools for self-monitoring and even weight loss, but that the value appears to lie in the self-monitoring process and not the app specifically.
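A brief sketch of the adherence split described above follows, with made-up logging records; the 50%-of-days threshold comes from the abstract, while the independent t-test is an assumption, not necessarily the analysis behind p = 0.046.

```python
# Hedged sketch of splitting participants into high- and low-adherence groups
# by percentage of study days logged, then comparing weight change.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_days = 56                                    # 8-week study period
days_logged = rng.integers(5, n_days + 1, 24)  # made-up per-participant logging
weight_change = -0.04 * days_logged + rng.normal(0, 1.0, 24)

# Split at 50% of study days, irrespective of app group
high = weight_change[days_logged / n_days > 0.5]
low = weight_change[days_logged / n_days <= 0.5]

t, p = stats.ttest_ind(high, low)
print(f"high-adherence mean change {high.mean():.2f} kg, "
      f"low-adherence mean change {low.mean():.2f} kg, p = {p:.3f}")
```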
Contributors: Thompson-Felty, Claudia (Author) / Johnston, Carol (Thesis advisor) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Levinson, Simin (Committee member) / Arizona State University (Publisher)
Created: 2014
150876-Thumbnail Image.png
Description
The purpose of this study was to gather qualitative data on different and novel methods used to self-monitor diet and exercise during a weight loss study. Participants who used either a traditional paper-and-pencil method or a smartphone weight loss app for diet and exercise tracking were recruited for focus groups. Focus group discussions centered on the liked and disliked aspects of recording, perceived behavior changes, and suggestions for improved self-monitoring. Focus groups were organized based on the method of self-monitoring. The app group tracked calorie intake and expenditure via the "Lose It" app on their smartphones. The paper-and-pencil group recorded exercise and food intake in a journal and self-regulated their diet based on recommended servings from each food group (or exchange lists). Focus group sessions were audio-recorded, transcribed, and coded by the researcher and an independent coder. Results indicated that app participants liked the convenience, affordability, and user-friendly features, but wanted more nutrition advice. App participants liked self-managing their diet, not restricting certain foods or food groups, and allowing for indulgences by balancing calories and exercise. They also desired an accurate estimation of energy expenditure from an app based on individual characteristics (i.e., gender and age). Participants who recorded on paper liked the journal's size, which provided a visual layout of food entries, but desired a technology-enhanced method with auto-calculation of calorie intake and expenditure. They also suggested that increased accountability and opportunities for social support would enhance self-monitoring. Overall, an ideal technology-assisted self-monitoring app or program would be free and include auto-calculation of calorie intake, a gender- and age-specific estimation of calories expended, easy entry of foods from a large database, the ability to enter whole recipes, nutrition information and recommendations, and availability via phone, tablet, or computer (based on personal preference).
Contributors: Sterner, Danielle (Author) / Wharton, Christopher (Christopher Mack), 1977- (Thesis advisor) / Johnston, Carol (Committee member) / Hall, Richard (Committee member) / Arizona State University (Publisher)
Created: 2012
168821-Thumbnail Image.png
Description
It is not merely an aggregation of static entities that a video clip carries, but also a variety of interactions and relations among these entities. Challenges remain for a video captioning system to generate natural language descriptions that focus on the prominent interest and align with latent aspects beyond direct observation. This work presents a Commonsense knowledge Anchored Video cAptioNing (dubbed CAVAN) approach. CAVAN exploits inferential commonsense knowledge to assist the training of a video captioning model with a novel paradigm for sentence-level semantic alignment. Specifically, commonsense knowledge is queried from the generic knowledge atlas ATOMIC to complement each training caption, forming a commonsense-caption entailment corpus. A BERT-based language entailment model trained on this corpus then serves as a commonsense discriminator for the training of the video captioning model, penalizing the model for generating semantically misaligned captions. With extensive empirical evaluations on the MSR-VTT, V2C and VATEX datasets, CAVAN consistently improves the quality of generations and shows a higher keyword hit rate. Experimental results with ablations validate the effectiveness of CAVAN and reveal that the use of commonsense knowledge contributes to video caption generation.
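The sketch below illustrates the kind of sentence-level alignment penalty described above: a BERT-based entailment scorer judges whether a generated caption is entailed by a commonsense statement, and the caption model would be penalized otherwise. The model here is an untrained placeholder rather than CAVAN's discriminator (which is trained on the commonsense-caption entailment corpus), and the example inputs are hypothetical.

```python
# Hedged sketch of an entailment-based alignment penalty; not CAVAN itself.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
entailment_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # label 1 = "entailed" (assumption)
)

def alignment_penalty(commonsense_fact: str, generated_caption: str) -> torch.Tensor:
    """Penalize captions the discriminator judges as not entailed by commonsense."""
    inputs = tokenizer(commonsense_fact, generated_caption,
                       return_tensors="pt", truncation=True)
    logits = entailment_model(**inputs).logits
    p_entailed = torch.softmax(logits, dim=-1)[0, 1]
    return 1.0 - p_entailed  # higher penalty for misaligned captions

penalty = alignment_penalty(
    "A person kicking a ball intends to score a goal.",  # ATOMIC-style inference
    "a man kicks a soccer ball toward the net",
)
print(float(penalty))
```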
Contributors: Shao, Huiliang (Author) / Yang, Yezhou (Thesis advisor) / Jayasuriya, Suren (Committee member) / Xiao, Chaowei (Committee member) / Arizona State University (Publisher)
Created: 2022
171818-Thumbnail Image.png
Description
Recent advances in autonomous vehicle (AV) technologies have ensured that autonomous driving will soon be present in real-world traffic. Despite the potential of AVs, many studies have shown that traffic accidents in hybrid traffic environments (where both AVs and human-driven vehicles (HVs) are present) are inevitable because of the unpredictability of human-driven vehicles. Given that eliminating accidents is impossible, an achievable goal is to design AVs so that they will not be blamed for any accident in which they are involved. This work proposes BlaFT – a Blame-Free motion planning algorithm in hybrid Traffic. BlaFT is designed to be compatible with HVs and other AVs, and will not be blamed for accidents in a structured road environment. The work proves that no accidents will happen if all AVs use the BlaFT motion planner, and that in hybrid traffic an AV using BlaFT will be blame-free even if it is involved in a collision. The work instantiated scores of BlaFT and HV vehicles in an urban roadscape loop in the 'Simulation of Urban MObility' (SUMO), ran the simulation for several hours, and observed that as the percentage of BlaFT vehicles increases, the traffic becomes safer. Adding BlaFT vehicles to HVs also increases the efficiency of traffic as a whole by up to 34%.
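For readers unfamiliar with the simulation setup, the sketch below shows the skeleton of a SUMO/TraCI experiment of the kind described above: a scenario is run for a fixed horizon while collision events are counted. The scenario file name is hypothetical and the BlaFT planner itself is not shown.

```python
# Hedged sketch of a SUMO/TraCI run that counts collision events.
import traci

SUMO_CMD = ["sumo", "-c", "urban_loop.sumocfg"]  # hypothetical scenario file

traci.start(SUMO_CMD)
collisions = 0
try:
    for _ in range(36_000):  # e.g. several simulated hours at 1 step/second
        traci.simulationStep()
        # A BlaFT-controlled vehicle would compute and apply its blame-free
        # trajectory here, e.g. via traci.vehicle.setSpeed / moveToXY.
        collisions += traci.simulation.getCollidingVehiclesNumber()
finally:
    traci.close()

print(f"collision events observed: {collisions}")
```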
Contributors: Park, Sanggu (Author) / Shrivastava, Aviral (Thesis advisor) / Wang, Ruoyu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
171862-Thumbnail Image.png
Description
Deep neural networks have been shown to be vulnerable to adversarial attacks. Typical attack strategies alter authentic data subtly so as to obtain adversarial samples that resemble the original but otherwise would cause a network's misbehavior, such as a high misclassification rate. Various attack approaches have been reported, with some showing state-of-the-art performance in attacking certain networks. Meanwhile, many defense mechanisms have been proposed in the literature, some of which are quite effective for guarding against typical attacks. Yet most of these attacks fail when the targeted network modifies its architecture or uses another set of parameters, and vice versa. Moreover, the emergence of more advanced deep neural networks, such as generative adversarial networks (GANs), has made the situation more complicated, and the game between attack and defense continues. This dissertation aims at exploring the vulnerability of deep neural networks by investigating the mechanisms behind the success or failure of existing attack and defense approaches. To this end, several deep learning-based approaches are proposed to study the problem from different perspectives. First, I developed an adversarial attack approach by exploring the unlearned region of a typical deep neural network, which is often over-parameterized. Second, I proposed an end-to-end learning framework to analyze the images generated by different GAN models. Third, I developed a defense mechanism that can secure a deep neural network against adversarial attacks with a defense layer consisting of a set of orthogonal kernels. Substantial experiments are conducted to unveil the potential factors that contribute to attack/defense effectiveness. The dissertation concludes with a discussion of possible future work toward achieving a robust deep neural network.
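As context for the "subtle perturbation" attacks described in the opening sentences, a minimal sketch of the classic fast gradient sign method (FGSM) follows; it is not the attack or defense proposed in this dissertation, and the model, input, and label are placeholders.

```python
# Minimal FGSM sketch: perturb an input along the sign of the loss gradient.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()       # placeholder classifier
x = torch.rand(1, 3, 224, 224, requires_grad=True)
y = torch.tensor([0])                              # assumed true label

loss = F.cross_entropy(model(x), y)
loss.backward()

epsilon = 8 / 255                                  # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean prediction:", model(x).argmax(1).item())
    print("adversarial prediction:", model(x_adv).argmax(1).item())
```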
Contributors: Ding, Yuzhen (Author) / Li, Baoxin (Thesis advisor) / Davulcu, Hasan (Committee member) / Venkateswara, Hemanth Kumar Demakethepalli (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022
161945-Thumbnail Image.png
Description
Statistical Shape Modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are mainly two viewpoints from which to understand shapes. On one hand, the outer surface of the shape can be taken as a two-dimensional embedding in space. On the other hand, the outer surface along with its enclosed internal volume can be taken as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging intrinsic features on the tangent plane. But a two-dimensional model may fail to fully represent the realistic properties of shapes with both intrinsic and extrinsic properties. In this thesis, several Stochastic Partial Differential Equations (SPDEs) are thoroughly investigated, and several methods originating from these SPDEs are proposed to address both two-dimensional and three-dimensional shape analysis. The unique physical meanings of these SPDEs inspired the features, shape descriptors, metrics, and kernels developed in this series of works. Initially, the generation of high-dimensional shape data, here tetrahedral meshes, is introduced. The cerebral cortex is taken as the study target, and an automatic pipeline for generating the gray matter tetrahedral mesh is introduced. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the Finite Element Method (FEM). Two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Considering that high-dimensional shape models usually contain massive redundancy, and that many applications demand effective landmarks, Gaussian process landmarking on tetrahedral meshes is further studied. A SIWKS-based metric space is used to define a geometry-aware Gaussian process. The study of the periodic potential diffusion process further inspired the idea of a new kernel called the geometry-aware convolutional kernel. A series of Bayesian learning methods are then introduced to tackle the problems of shape retrieval and classification. Experiments for each component are demonstrated. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, it is clear that classical SPDEs play an important role in discovering new features, metrics, shape descriptors and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
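The heat-equation-based descriptor mentioned above is closely related to the heat kernel signature, which can be computed from Laplacian eigenpairs as HKS(x, t) = Σᵢ exp(−λᵢ t) φᵢ(x)². The sketch below uses a tiny graph Laplacian as a stand-in for the FEM-discretized LBO on a tetrahedral mesh, so it illustrates the formula rather than the thesis pipeline.

```python
# Hedged sketch of a heat-kernel-signature-style descriptor from eigenpairs.
import numpy as np

# Adjacency of a tiny toy "mesh" graph (placeholder connectivity)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian

eigvals, eigvecs = np.linalg.eigh(L)    # lambda_i, phi_i

def heat_kernel_signature(t: float) -> np.ndarray:
    """Per-vertex descriptor at diffusion time t."""
    return (np.exp(-eigvals * t) * eigvecs**2).sum(axis=1)

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f}:", np.round(heat_kernel_signature(t), 3))
```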
Contributors: Fan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
161997-Thumbnail Image.png
Description
Many real-world engineering problems require simulations to evaluate the design objectives and constraints. Often, due to the complexity of the system model, simulations can be prohibitive in terms of computation time. One approach to overcoming this issue is to construct a surrogate model, which approximates the original model. The focus of this work is on data-driven surrogate models, in which empirical approximations of the output are performed given the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs have achieved excellent accuracy and are widely used, they pose their own challenges. This work addresses two common challenges: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability. The high demand in the inference phase of deep NNs on cloud servers and edge devices calls for the design of low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then use architectural-level techniques to boost the performance. The proposed design is synthesized and placed and routed as an application-specific integrated circuit (ASIC). The results demonstrate that this accelerator is 1.2X more energy-efficient and 3.6X more area-efficient than the baseline LSTM. In the second part of this work, a robust framework is developed based on an alternative data-driven surrogate model, referred to as polynomial chaos expansion (PCE), for addressing UQ. In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients. Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning or changing the order of the model. The framework is evaluated on several real-world applications from different domains and is extended to classification tasks as well.
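The sketch below illustrates the PCE idea described above in its simplest one-dimensional form: fit probabilists' Hermite polynomial coefficients by least squares for a standard-normal input, then read the surrogate's mean and variance directly off the coefficients (for He_k, E[He_k(Z)^2] = k!). The target function, sample size, and polynomial degree are placeholders, not the applications studied in this work.

```python
# Hedged 1-D polynomial chaos expansion sketch with Hermite polynomials.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)
expensive_model = lambda z: np.sin(z) + 0.1 * z**2   # placeholder "simulator"

z_train = rng.standard_normal(200)                   # samples of the input
y_train = expensive_model(z_train)

degree = 6
V = hermevander(z_train, degree)                     # He_0(z) ... He_6(z)
coeffs, *_ = np.linalg.lstsq(V, y_train, rcond=None)

surrogate = lambda z: hermevander(z, degree) @ coeffs

# Post-processing the coefficients: mean and variance of the surrogate output
mean = coeffs[0]
variance = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"surrogate mean ~ {mean:.3f}, variance ~ {variance:.3f}")
print("pointwise error at z=0.5:",
      abs(surrogate(np.array([0.5]))[0] - expensive_model(0.5)))
```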
Contributors: Azari, Elham (Author) / Vrudhula, Sarma (Thesis advisor) / Fainekos, Georgios (Committee member) / Ren, Fengbo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
168714-Thumbnail Image.png
Description
Deep neural network-based methods have been shown to achieve outstanding performance on object detection and classification tasks. Deep neural networks follow the "deeper model with deeper confidence" belief to gain higher recognition accuracy. However, reducing these networks' computational costs remains a challenge, which impedes their deployment on embedded devices. For instance, the intersection management of Connected Autonomous Vehicles (CAVs) requires running computationally intensive object recognition algorithms on low-power traffic cameras. This dissertation studies the effect of a dynamic hardware and software approach to address this issue. Characteristics of real-world applications can facilitate this dynamic adjustment and reduce the computation. Specifically, this dissertation starts with a dynamic hardware approach that adjusts itself based on the difficulty of the input and extracts deeper features if needed. Next, an adaptive learning mechanism is studied that uses features extracted from previous inputs to improve system performance. Finally, a system (ARGOS) is proposed and evaluated that can run on embedded systems while maintaining the desired accuracy. This system adopts shallow features at inference time but can switch to deep features if a higher accuracy is desired. To improve performance, ARGOS distills temporal knowledge from deep features to the shallow system. Moreover, ARGOS reduces the computation further by focusing on regions of interest. Response time and mean average precision are adopted to evaluate the proposed ARGOS system.
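The shallow-versus-deep switching idea described above can be illustrated with a generic confidence-gated early exit, sketched below; this is not the ARGOS system itself, and the layer sizes and confidence threshold are placeholders.

```python
# Hedged sketch of "shallow first, deep only when needed" inference.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.8):
        super().__init__()
        self.shallow = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.early_head = nn.Linear(16, num_classes)   # cheap shallow classifier
        self.deep = nn.Sequential(nn.Linear(16, 64), nn.ReLU())
        self.deep_head = nn.Linear(64, num_classes)    # more accurate, more compute
        self.threshold = threshold

    def forward(self, x):
        feats = self.shallow(x)
        early_logits = self.early_head(feats)
        confidence = torch.softmax(early_logits, dim=1).max(dim=1).values
        if bool((confidence >= self.threshold).all()):
            return early_logits                        # shallow features suffice
        return self.deep_head(self.deep(feats))        # fall back to deep features

model = EarlyExitNet().eval()
with torch.no_grad():
    print(model(torch.rand(1, 3, 32, 32)).shape)
```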
Contributors: Farhadi, Mohammad (Author) / Yang, Yezhou (Thesis advisor) / Vrudhula, Sarma (Committee member) / Wu, Carole-Jean (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022