Description
Dietary self-monitoring has been shown to be a predictor of weight loss success and is a prevalent part of behavioral weight control programs. As more weight loss applications have become available on smartphones, this feasibility study investigated whether the use of a smartphone application or a smartphone memo feature would improve dietary self-monitoring over the traditional paper-and-pencil method. The study also examined whether the difference in methods would affect weight loss. Forty-seven adults (BMI 25 to 40 kg/m2) completed an 8-week study focused on tracking the difference in adherence to a self-monitoring protocol and subsequent weight loss. Participants owning iPhones (n=17) used the 'Lose It' application (AP) for diet and exercise tracking and were compared to smartphone participants who recorded dietary intake using a memo (ME) feature (n=15) on their phones and to participants using the traditional paper-and-pencil (PA) method (n=15). There was no significant difference in completion rates between groups, with an overall completion rate of 85.5%. The overall mean adherence to self-monitoring for the 8-week period was better in the AP group than in the PA group (p = .024). No significant difference was found between the AP and ME groups (p = .148) or between the ME and PA groups (p = .457). Weight loss over the 8-week study was significant for all groups (p = .028). There was no significant difference in weight loss between groups. The number of days recorded, regardless of group assignment, showed a weak correlation with weight loss success (p = .068). Smartphone owners seeking to lose weight should be encouraged by the potential success associated with dietary tracking using a smartphone app as opposed to the traditional paper-and-pencil method.
ContributorsCunningham, Barbara (Author) / Wharton, Christopher (Christopher Mack), 1977- (Thesis advisor) / Johnston, Carol (Committee member) / Hall, Richard (Committee member) / Arizona State University (Publisher)
Created2012
Description
Nut consumption, specifically almond consumption, has been shown to help maintain weight and influence disease risk factors in adult populations. Few studies have examined the effect of a small dose of almonds on energy intake and body weight. The objective of this study was to determine the influence of pre-meal almond consumption on energy intake and weight in overweight and obese adults. The study included 21 overweight or obese participants who were considered healthy or had a controlled disease state. In this 8-week parallel-arm study, participants were randomized to consume an isocaloric amount of almonds (a 1-oz serving) or a two cheese stick (2 oz) serving 30 minutes before the dinner meal, 5 times per week. Anthropometric measurements, including weight, waist circumference, and body fat percentage, were recorded at baseline and at weeks 1, 4, and 8. Energy intake was self-reported for two consecutive days at weeks 1, 4, and 8 using the ASA24 automated dietary program. Energy intake after 8 weeks of almond consumption was not significantly different from that of the control group (p=0.965). In addition, body weight was not significantly reduced after 8 weeks of the almond intervention (p=0.562). Other parameters measured in this 8-week trial did not differ between the intervention and control groups. These data are underpowered and therefore inconclusive regarding the effects that 1 oz of almonds in the diet, 5 days per week, has on energy intake and body weight.
ContributorsMcBride, Lindsey (Author) / Johnston, Carol (Thesis advisor) / Swan, Pamela (Committee member) / Mayol-Kreiser, Sandra (Committee member) / Arizona State University (Publisher)
Created2011
Description
ABSTRACT Epidemiological studies have suggested a link between nut consumption and weight. Regular nut consumption as a method of weight loss has shown minimal results at 2-3 servings of nut products per day. This 8-week study sought to investigate the effect of more modest nut consumption (1 oz/day, 5 days/week) on dietary compensation in healthy overweight individuals. Overweight and obese participants (n = 28) were recruited from the local community and randomly assigned to either the almond (NUT) or control (CON) group in this randomized, parallel-arm study. Subjects were instructed to eat their respective foods 30 minutes before the dinner meal. Twenty-four-hour diet recalls were completed pre-trial and at study weeks 1, 4, and 8. Self-reported satiety data were collected at study weeks 1, 4, and 8. Attrition was unexpectedly high, with only 13 participants completing 24-hour dietary recall data through study week 8. High attrition limited statistical analyses. Results suggested a lack of effect for time or interaction for satiety data (within groups p = 0.997, between groups p = 0.367). Homogeneity of inter-correlations could not be tested for 24-hour recall data, as there were fewer than 2 nonsingular cell covariance matrices. In conclusion, this study was unable to prove or disprove the effectiveness of almonds in inducing dietary compensation.
ContributorsJahns, Marshall (Author) / Johnston, Carol (Thesis advisor) / Hall, Richard (Committee member) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Arizona State University (Publisher)
Created2011
Description
Demands in file size and transfer rates for consumer-oriented products have escalated in recent times, primarily due to the emergence of high definition video content. Factor in the consumer desire for convenience, and wireless service becomes the most desired approach for inter-connectivity. Consumers expect wireless service to emulate wired service with little to virtually no difference in quality of service (QoS). The background section of this document examines the QoS requirements for wireless connectivity of high definition video applications. I then proceed to look at proposed solutions at the physical (PHY) and media access control (MAC) layers, as well as cross-layer schemes. These schemes are subsequently evaluated in terms of usefulness in a multi-gigabit, 60 GHz wireless multimedia system targeting the average consumer. It is determined that a substantial gap exists in the published literature pertinent to this application. Specifically, little or no work has been found that shows how an adaptive PHY-MAC cross-layer solution providing real-time compensation for varying channel conditions might actually be implemented. Further, no work has been found that shows results of such a model. This research proposes, develops, and implements in Matlab code an alternate cross-layer solution that provides acceptable QoS for multimedia applications. Simulations using actual high definition video sequences are used to test the proposed solution. Results based on the average PSNR metric show that a quasi-adaptive algorithm provides greater than 7 dB of improvement over a non-adaptive approach, while a fully-adaptive algorithm provides over 18 dB of improvement. The fully adaptive implementation has been conclusively shown to be superior to non-adaptive techniques and sufficiently superior even to quasi-adaptive algorithms.
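The average PSNR figures cited in this abstract follow the standard per-frame definition; a minimal sketch is given below (the function name and the assumption of 8-bit pixels are illustrative, not taken from the thesis, whose evaluation is implemented in Matlab):

```python
import math

def psnr(original, distorted, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences.

    PSNR = 10 * log10(max_val^2 / MSE), where MSE is the mean squared error
    between the reference frame and the decoded frame.
    """
    mse = sum((o - d) ** 2 for o, d in zip(original, distorted)) / len(original)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A sequence-level score is then typically the mean of the per-frame values, which is how "greater than 7 dB of improvement" style comparisons are made.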
ContributorsBosco, Bruce (Author) / Reisslein, Martin (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2011
Description
Underwater acoustic communications face significant challenges unprecedented in terrestrial radio communications, including long multipath delay spreads, strong Doppler effects, and stringent bandwidth requirements. Recently, multi-carrier communications based on orthogonal frequency division multiplexing (OFDM) have seen significant growth in underwater acoustic (UWA) communications, thanks to their well-known robustness against severely time-dispersive channels. However, the performance of OFDM systems over UWA channels deteriorates significantly due to severe intercarrier interference (ICI) resulting from rapid time variations of the channel. With the motivation of developing enabling techniques for OFDM over UWA channels, the major contributions of this thesis include (1) two effective frequency-domain equalizers that provide general means to counteract the ICI; (2) a family of multiple-resampling receiver designs dealing with distortions caused by user- and/or path-specific Doppler scaling effects; (3) the proposal of orthogonal frequency division multiple access (OFDMA) as an effective multiple access scheme for UWA communications; and (4) a capacity evaluation of single-resampling versus multiple-resampling receiver designs. All of the proposed receiver designs have been verified both through simulations and through emulations based on data collected in real-life UWA communications experiments. In particular, the frequency-domain equalizers are shown to be effective with significantly reduced pilot overhead and offer robustness against Doppler and timing estimation errors. The multiple-resampling designs, where each branch is tasked with the Doppler distortion of different paths and/or users, overcome the disadvantages of the commonly-used single-resampling receivers and yield significant performance gains. Multiple-resampling receivers are also demonstrated to be necessary for UWA OFDMA systems. The unique design effectively mitigates interuser interference (IUI), opening up the possibility of exploiting advanced user subcarrier assignment schemes. Finally, the benefits of the multiple-resampling receivers are further demonstrated through channel capacity evaluation results.
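To illustrate the resampling idea behind these receiver branches: a Doppler scaling factor a compresses or dilates the received waveform in time, and each branch re-reads the signal at instants rescaled for its path or user. The sketch below undoes a uniform time-scaling by linear interpolation; the function name and the interpolation choice are illustrative assumptions, and the thesis's actual receivers operate on passband acoustic waveforms with per-path scaling estimates:

```python
def resample(signal, scale):
    """Re-read `signal` at instants t = k * scale using linear interpolation.

    With scale = 1 / (1 + a), this compensates a Doppler dilation by factor
    (1 + a). Instants past the end of the record clamp to the last sample.
    """
    out = []
    for k in range(len(signal)):
        t = k * scale
        i = int(t)
        if i >= len(signal) - 1:
            out.append(signal[-1])
        else:
            frac = t - i
            out.append((1 - frac) * signal[i] + frac * signal[i + 1])
    return out
```

A multiple-resampling front end would run one such branch per dominant Doppler scale before OFDM demodulation, which is what allows user- and path-specific distortions to be handled separately.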
ContributorsTu, Kai (Author) / Duman, Tolga M. (Thesis advisor) / Zhang, Junshan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2011
Description
Fiber-Wireless (FiWi) networks are a future network configuration that uses optical fiber as the backbone transmission medium and wireless access for the end user. Our study focuses on Dynamic Bandwidth Allocation (DBA) algorithms for EPON upstream transmission. DBA, if designed properly, can dramatically improve packet transmission delay and overall bandwidth utilization. With new DBA components emerging in research, a comprehensive study of DBA is conducted in this thesis, adding Double Phase Polling coupled with a novel Limited with Share credits Excess distribution method. By conducting a series of simulations of DBAs using different components, we found that grant sizing has the strongest impact on average packet delay, and grant scheduling also has a significant impact on average packet delay; grant scheduling has the strongest impact on the stability limit, or maximum achievable channel utilization, whereas grant sizing has only a modest impact on the stability limit. The SPD grant scheduling policy in the Double Phase Polling scheduling framework, coupled with Limited with Share credits Excess distribution grant sizing, produced both the lowest average packet delay and the highest stability limit.
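A limited grant-sizing policy that redistributes unused credits can be sketched as follows. This is a simplified, hypothetical rendering of the "Limited with Share credits Excess distribution" idea: each ONU is granted at most a fixed limit per cycle, and bandwidth left unclaimed by lightly loaded ONUs is pooled and shared among the ONUs whose requests were capped. Function and variable names are ours, and the thesis's actual credit accounting may differ:

```python
def limited_with_excess(requests, limit):
    """Grant each ONU min(request, limit); share leftover credits equally
    among capped ONUs, never granting an ONU more than it requested."""
    grants = [min(r, limit) for r in requests]
    excess = sum(limit - r for r in requests if r < limit)  # pooled credits
    needy = [i for i, r in enumerate(requests) if r > limit]
    while needy and excess > 0:
        share = excess / len(needy)
        excess = 0
        still_needy = []
        for i in needy:
            extra = min(share, requests[i] - grants[i])
            grants[i] += extra
            excess += share - extra  # credit an ONU could not use returns to pool
            if grants[i] < requests[i]:
                still_needy.append(i)
        needy = still_needy
    return grants
```

For example, with requests of 50, 200, and 300 bytes and a 100-byte limit, the 50 bytes unused by the first ONU are split between the two capped ONUs, so the total granted equals the cycle budget of 300 bytes.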
ContributorsZhao, Du (Author) / Reisslein, Martin (Thesis advisor) / McGarry, Michael (Committee member) / Fowler, John (Committee member) / Arizona State University (Publisher)
Created2011
Description
There are many wireless communication and networking applications that require high transmission rates and reliability with only limited resources in terms of bandwidth, power, hardware complexity, etc. Real-time video streaming, gaming, and social networking are a few such examples. Over the years many problems have been addressed toward the goal of enabling such applications; however, significant challenges still remain, particularly in the context of multi-user communications. With the motivation of addressing some of these challenges, the main focus of this dissertation is the design and analysis of capacity-approaching coding schemes for several (wireless) multi-user communication scenarios. Specifically, three main themes are studied: superposition coding over broadcast channels, practical coding for binary-input binary-output broadcast channels, and signalling schemes for two-way relay channels. As the first contribution, we propose an analytical tool that allows for reliable comparison of different practical codes and decoding strategies over degraded broadcast channels, even at very low error rates for which simulations are impractical. The second contribution deals with binary-input binary-output degraded broadcast channels, for which an optimal encoding scheme that achieves the capacity boundary is found, and a practical coding scheme is given by the concatenation of an outer low-density parity-check code and an inner (non-linear) mapper that induces the desired distribution of "ones" in a codeword. The third contribution considers two-way relay channels, where the information exchange between two nodes takes place in two transmission phases using a coding scheme called physical-layer network coding. At the relay, a near-optimal decoding strategy is derived using a list decoding algorithm, and an approximation is obtained by a joint decoding approach. For the latter scheme, an analytical approximation of the word error rate based on a union bounding technique is computed under the assumption that linear codes are employed at the two nodes exchanging data. Further, when the wireless channel is frequency selective, two decoding strategies at the relay are developed: a near-optimal decoding scheme implemented using list decoding, and a reduced-complexity detection/decoding scheme utilizing a linear minimum mean squared error based detector followed by a network-coded sequence decoder.
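The two-phase exchange in physical-layer network coding can be illustrated at the bit level: both nodes transmit in phase one, the relay decodes and broadcasts a single combined message in phase two, and each node cancels its own contribution to recover the other's data. This is a deliberately idealized sketch that ignores the channel, modulation, and decoding issues that are the actual subject of the dissertation:

```python
def relay_combine(msg_a, msg_b):
    """Phase two: the relay broadcasts the bitwise XOR of the two decoded
    messages instead of forwarding each message separately."""
    return bytes(a ^ b for a, b in zip(msg_a, msg_b))

def node_recover(own_msg, relay_broadcast):
    """Each node XORs the broadcast with its own message; since
    a ^ (a ^ b) = b, this yields the other node's message."""
    return bytes(a ^ b for a, b in zip(own_msg, relay_broadcast))
```

The bandwidth advantage is that the exchange completes in two phases rather than the four a conventional store-and-forward relay would need.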
ContributorsBhat, Uttam (Author) / Duman, Tolga M. (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Li, Baoxin (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created2011
Description
ABSTRACT This study evaluated the LoseIt smartphone app by Fit Now Inc. for nutritional quality among users during an 8-week behavioral modification weight loss protocol. All participants owned smartphones and were cluster randomized to either a control group using paper-and-pencil record keeping, a memo group using a memo function on their smartphones, or the LoseIt app group, which was composed of the participants who owned iPhones. Thirty-one participants completed the study protocol: 10 from the LoseIt app group, 10 from the memo group, and 11 from the paper-and-pencil group. Food records were analyzed using Food Processor by ESHA, and nutritional quality was scored using the Healthy Eating Index-2005 (HEI-2005). Scores were compared using one-way ANOVA, with no significant changes in any category across all groups. Non-parametric statistics were then used to determine changes between the combined memo and paper-and-pencil groups and the LoseIt app group, as the memo and paper-and-pencil groups received live counseling at biweekly intervals and the LoseIt group did not. No significant difference was found in HEI scores across all categories; however, a trend was noted for total HEI score, with higher scores among the memo and paper-and-pencil group participants (p=0.091). In conclusion, no significant difference was detected between users of the smartphone app LoseIt and the memo and paper-and-pencil groups. More research is needed to determine the impact of in-person counseling versus the user feedback provided with the LoseIt smartphone app.
ContributorsCowan, David Kevin (Author) / Johnston, Carol (Thesis advisor) / Wharton, Christopher (Christopher Mack), 1977- (Committee member) / Mayol-Kreiser, Sandra (Committee member) / Arizona State University (Publisher)
Created2011
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization, as well as their applications in medical imaging and source localization using sensor array processing. Due to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties in solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via a sum of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated into a sparse waveform via an over-complete basis, and the $\ell_p$-norm properties in detecting sparsity are studied. An algorithm is proposed for minimizing a non-convex problem. According to the results of numerical experiments, the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
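The "simple functions with closed-form solutions" that make augmented Lagrangian splitting fast typically include the scalar $\ell_1$ proximal subproblem, whose minimizer is the well-known soft-thresholding operator. The abstract does not spell out its subproblems, so the sketch below is illustrative only:

```python
def soft_threshold(x, tau):
    """Closed-form minimizer of  tau * |z| + 0.5 * (z - x)**2  over z.

    Shrinks x toward zero by tau and clips to exactly zero inside
    [-tau, tau]; applied elementwise, it solves the l1 step of each
    splitting iteration in closed form, with no inner solver needed.
    """
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0
```

It is this closed form, applied elementwise to a large vector each iteration, that lets such methods avoid the per-iteration cost that makes interior point methods impractical at CS problem sizes.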
ContributorsShen, Wei (Author) / Mittlemann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created2011
Description
With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customers' requirements, two major problems need to be solved. The first is the need for a scalable representation of the input video. The recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, also known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second is that wireless transmission media typically introduce errors in the bit stream due to noise, congestion, and fading on the channel. Protection against these channel impairments can be realized by the use of forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In a scalable bit stream, some parts of the bit stream are more important than others. In the unequal error protection scheme, parity bytes are assigned to the video packets based on their importance; in the equal error protection scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264 SVC single-layer video streams for long video sequences of different genres are considered in this study, serving as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams; a framework to obtain an H.264 SVC compatible bit stream is modeled in this study. It is concluded that assigning parity bytes based on the distribution of data for different types of frames provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream.
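An importance-weighted split of a fixed parity-byte budget, as in an unequal error protection scheme, might be sketched like this. The weights, names, and largest-remainder tie-breaking below are illustrative assumptions, not the thesis's exact assignment rule:

```python
def allocate_parity(budget, weights):
    """Split `budget` parity bytes across packet classes in proportion to
    importance `weights` (e.g. I-, P-, B-frame packets), handing out the
    integer-truncation remainder to the most important classes first."""
    total = sum(weights)
    alloc = [int(budget * w / total) for w in weights]
    remainder = budget - sum(alloc)
    for i in sorted(range(len(weights)), key=lambda i: -weights[i])[:remainder]:
        alloc[i] += 1
    return alloc
```

Under equal error protection the same budget would instead be spread in proportion to packet lengths alone, which is the comparison the study quantifies.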
ContributorsSundararaman, Hari (Author) / Reisslein, Martin (Thesis advisor) / Seeling, Patrick (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2011