This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 

Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on the tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches that I propose, as well as an external evaluation in comparison to the current state-of-the-art method. (An illustrative sketch of the reputation-propagation step follows this record.)
ContributorsRavikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2013
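
A minimal sketch of the reputation-propagation idea described in the abstract above. This is not the thesis's RAProp algorithm: the agreement matrix, initial reputation scores, damping factor, and update rule below are assumptions chosen only to show the general pattern of spreading reputation over a content-agreement graph.

```python
import numpy as np

def propagate_reputation(reputation, agreement, alpha=0.85, iters=20):
    """Illustrative sketch: spread initial reputation scores over an
    agreement (content-similarity) graph. `agreement[i, j]` is the
    similarity between tweets i and j. Generic propagation, not the
    exact RAProp update."""
    # Row-normalize the agreement matrix so each tweet distributes its
    # score proportionally to how strongly other tweets agree with it.
    row_sums = agreement.sum(axis=1, keepdims=True)
    W = np.divide(agreement, row_sums, out=np.zeros_like(agreement),
                  where=row_sums != 0)
    scores = reputation.copy()
    for _ in range(iters):
        # Blend each tweet's own reputation with reputation flowing in
        # from tweets whose content agrees with it.
        scores = (1 - alpha) * reputation + alpha * (W.T @ scores)
    return scores

# Toy example: three tweets, where tweets 0 and 1 agree strongly.
reputation = np.array([0.9, 0.2, 0.5])          # placeholder scores
agreement = np.array([[0.0, 0.8, 0.1],
                      [0.8, 0.0, 0.1],
                      [0.1, 0.1, 0.0]])
print(propagate_reputation(reputation, agreement))
```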
Description
Continuous Delivery, one of the youngest and most popular members of the agile model family, has recently become a popular concept and method in the software development industry. Unlike traditional software development methods, in which requirements and solutions must be fixed before development starts, it promotes adaptive planning, evolutionary development and delivery, and encourages rapid and flexible response to change. However, several problems prevent Continuous Delivery from being introduced into the education world. Taking these barriers into consideration, we propose a new Cloud-based Continuous Delivery Software Development System. This system is designed to support the whole software development life cycle according to Continuous Delivery concepts, in a virtualized environment on the Vlab platform.
ContributorsDeng, Yuli (Author) / Huang, Dijiang (Thesis advisor) / Davulcu, Hasan (Committee member) / Chen, Yinong (Committee member) / Arizona State University (Publisher)
Created2013
Description
Background: Evidence about the purported hypoglycemic and hypolipidemic effects of nopales (prickly pear cactus pads) is limited. Objective: To evaluate the efficacy of nopales for improving cardiometabolic risk factors and oxidative stress, compared to control, in adults with hypercholesterolemia. Design: In a randomized crossover trial, participants were assigned to a 2-wk intervention with 2 cups/day of nopales or cucumbers (control), with a 2 to 3-wk washout period. The study included 16 adults (5 male; 46±14 y; BMI = 31.4±5.7 kg/m²) with moderate hypercholesterolemia (low density lipoprotein cholesterol [LDL-c] = 137±21 mg/dL) but otherwise healthy. Main outcomes measured included: dietary intake (energy, macronutrients, and micronutrients), cardiometabolic risk markers (total cholesterol, LDL-c, high density lipoprotein cholesterol [HDL-c], triglycerides, cholesterol distribution in LDL and HDL subfractions, glucose, insulin, homeostasis model assessment, and C-reactive protein), and oxidative stress markers (vitamin C, total antioxidant capacity, oxidized LDL, and LDL susceptibility to oxidation). Effects of treatment, time, or interactions were assessed using repeated measures ANOVA. Results: There was no significant treatment-by-time effect for any dietary composition data, lipid profile, cardiometabolic outcomes, or oxidative stress markers. A significant time effect was observed for energy, which decreased in both treatments (cucumber, -8.3%; nopales, -10.1%; pTime=0.026), mostly due to lower mono- and polyunsaturated fatty acid intake (pTime=0.023 and pTime=0.003, respectively). Both treatments significantly increased triglyceride concentrations (cucumber, 14.8%; nopales, 15.2%; pTime=0.020). Despite the lack of significant treatment-by-time effects, considerable individual response variability was observed for all outcomes. After the cucumber and nopales phases, a decrease in LDL-c was observed in 44% and 63% of the participants, respectively. On average, LDL-c decreased by 2.0 mg/dL (-1.4%) after the cucumber phase and by 3.9 mg/dL (-2.9%) after the nopales phase (pTime=0.176). Pro-atherogenic changes in HDL subfractions were observed in both interventions over time, with a decrease in the proportion of HDL-c in large HDL (cucumber, -5.1%; nopales, -5.9%; pTime=0.021) and an increase in the proportion in small HDL (cucumber, 4.1%; nopales, 7.9%; pTime=0.002). Conclusions: These data do not support the purported benefits of nopales at doses of 2 cups/day for 2 wk on markers of lipoprotein profile, cardiometabolic risk, and oxidative stress in hypercholesterolemic adults. (An illustrative sketch of the repeated measures analysis follows this record.)
ContributorsPereira Pignotti, Giselle Adriana (Author) / Vega-Lopez, Sonia (Thesis advisor) / Gaesser, Glenn (Committee member) / Keller, Colleen (Committee member) / Shaibi, Gabriel (Committee member) / Sweazea, Karen (Committee member) / Arizona State University (Publisher)
Created2013
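
A rough sketch of the treatment-by-time repeated measures ANOVA named in the abstract above, using statsmodels. The column names, the two-timepoint layout, and the toy LDL-c values are assumptions for illustration only and do not reproduce the study's dataset or measurement schedule.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: each subject measured under both
# treatments (nopales, cucumber) at two time points (pre, post).
data = pd.DataFrame({
    "subject":   [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "treatment": ["nopales", "nopales", "cucumber", "cucumber"] * 3,
    "time":      ["pre", "post"] * 6,
    "ldl":       [140, 136, 138, 137, 150, 147, 149, 148, 132, 129, 133, 131],
})

# Two-way repeated measures ANOVA: main effects of treatment and time,
# plus the treatment-by-time interaction reported in the abstract.
res = AnovaRM(data, depvar="ldl", subject="subject",
              within=["treatment", "time"]).fit()
print(res)
```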
Description
This dissertation presents the Temporal Event Query Language (TEQL), a new language for querying event streams. Event Stream Processing enables online querying of streams of events to extract relevant data in a timely manner. TEQL enables querying of interval-based event streams using temporal database operators. Temporal databases and temporal query languages have been a subject of research for more than 30 years and are a natural fit for expressing queries that involve a temporal dimension. However, operators developed in this context cannot be directly applied to event streams. The research extends a preexisting relational framework for event stream processing to support temporal queries. The language features and formal semantic extensions needed to support temporal queries within the relational framework are identified. The extended framework supports continuous, step-wise evaluation of temporal queries. The incremental evaluation of TEQL operators is formalized to avoid re-computation of previous results. The research includes the development of a prototype that supports the integrated event and temporal query processing framework, with support for incremental evaluation and materialization of intermediate results. TEQL enables reporting temporal data in the output, direct specification of conditions over timestamps, and specification of temporal relational operators. Through the integration of temporal database operators with event languages, a new class of temporal queries is made possible for querying event streams. New features include semantic aggregation, extraction of temporal patterns using set operators, and a more accurate specification of event co-occurrence. (An illustrative sketch of an interval-based temporal operator follows this record.)
ContributorsShiva, Foruhar Ali (Author) / Urban, Susan D (Thesis advisor) / Chen, Yi (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created2012
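
To make the interval-based flavor of such queries concrete, here is a small sketch of one temporal operation, an overlap join between two event streams. The Event class, stream contents, and names are hypothetical; this shows the kind of condition over timestamps a temporal query language can express directly, not TEQL's actual syntax or its incremental evaluation machinery.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """An interval-based event with explicit start/end timestamps."""
    name: str
    start: int
    end: int

def overlap_join(left, right):
    """Pair events whose validity intervals overlap -- a temporal join
    condition expressed directly over the interval endpoints."""
    return [(a, b) for a in left for b in right
            if a.start <= b.end and b.start <= a.end]

# Hypothetical streams: machine alarms and maintenance windows.
alarms = [Event("overheat", 10, 20), Event("vibration", 30, 35)]
windows = [Event("maintenance", 15, 40)]

for alarm, window in overlap_join(alarms, windows):
    print(f"{alarm.name} co-occurred with {window.name}")
```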
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations. (An illustrative sketch of the worst-case path-cost computation follows this record.)
ContributorsBuell, Kevin, Ph.D (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created2012
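
The worst-case monetary cost analysis described above amounts to finding the most expensive path through a control-flow graph. The sketch below computes that bound over a small acyclic CFG with per-node dollar costs and checks it against a threshold; the graph, costs, and threshold are invented for illustration, and a real WCET-style analysis must additionally bound loops.

```python
from functools import lru_cache

# Hypothetical acyclic control-flow graph: node -> successors, with a
# monetary cost (e.g. dollars of cloud resources consumed) per node.
cfg = {"entry": ["a", "b"], "a": ["exit"], "b": ["c"], "c": ["exit"], "exit": []}
cost = {"entry": 0.01, "a": 0.05, "b": 0.02, "c": 0.20, "exit": 0.0}

@lru_cache(maxsize=None)
def worst_case_cost(node):
    """Maximum total cost of any path from `node` to the exit node."""
    if not cfg[node]:
        return cost[node]
    return cost[node] + max(worst_case_cost(s) for s in cfg[node])

threshold = 0.20
wc = worst_case_cost("entry")
print(f"worst-case cost {wc:.2f}, exceeds threshold: {wc > threshold}")
```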
Description
A semiconductor supply chain modeling and simulation platform using Linear Program (LP) optimization and parallel Discrete Event System Specification (DEVS) process models has been developed in a joint effort by ASU and Intel Corporation. A Knowledge Interchange Broker (KIBDEVS/LP) was developed to broker information synchronously between the DEVS and LP models. Recently, a single-echelon heuristic Inventory Strategy Module (ISM) was added to correct for forecast bias in customer demand data using different smoothing techniques. The optimization model could then use information provided by the forecast model to make better decisions for the process model. The composition of the ISM with the LP and DEVS models resulted in the first realization of what is now called the Optimization Simulation Forecast (OSF) platform. It could handle a single-echelon supply chain system consisting of single hubs and single products. In this thesis, this single-echelon simulation platform is extended to handle multiple echelons with multiple inventory elements handling multiple products. The main task in developing the multi-echelon OSF platform was to extend the KIBDEVS/LP so that ISM interactions with the LP and DEVS models could also be supported. To achieve this, a new, scalable XML schema for the KIB has been developed. The XML schema has also resulted in strengthening the KIB execution engine design. A sequential scheme controls the executions of the DEVS-Suite simulator, the CPLEX optimizer, and the ISM engine. To use the ISM for multiple echelons, it is extended to compute forecast customer demands and safety stocks over multiple hubs and products. Basic examples for semiconductor manufacturing spanning single and two-echelon supply chain systems have been developed and analyzed. Experiments using perfect data were conducted to show the correctness of the OSF platform design and implementation. Simple, but realistic, experiments have also been conducted. They highlight the kinds of supply chain dynamics that can be evaluated using discrete event process simulation, linear programming optimization, and heuristic forecasting models. (An illustrative sketch of demand smoothing and safety-stock calculation follows this record.)
ContributorsSmith, James Melkon (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Davulcu, Hasan (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2012
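
A small sketch of the two roles the ISM plays in the platform described above: smoothing noisy customer demand into a forecast and computing a safety stock. The smoothing constant, demand series, and z-value are assumptions; the thesis's ISM uses its own smoothing techniques and extends them across multiple hubs and products.

```python
import statistics

def exponential_smoothing(demand, alpha=0.3):
    """One common smoothing technique for noisy or biased demand data:
    each new forecast blends the latest observation with the previous
    forecast."""
    forecast = demand[0]
    for d in demand[1:]:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast

def safety_stock(demand, z=1.65):
    """Textbook safety stock: z times the standard deviation of demand
    (z = 1.65 corresponds to roughly a 95% service level)."""
    return z * statistics.stdev(demand)

# Hypothetical weekly demand at one hub for one product.
demand = [100, 120, 90, 130, 110, 105]
print("forecast:", round(exponential_smoothing(demand), 1))
print("safety stock:", round(safety_stock(demand), 1))
```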
Description
Process migration is a heavily studied research area and has a number of applications in distributed systems. Process migration means transferring a process running on one machine to another machine such that it resumes execution from the point at which it was suspended. The conventional approach to implementing process migration is to move the entire state of the process (including hardware context, virtual memory, files, etc.) from one machine to another. Copying all of this state information is costly. This thesis proposes and demonstrates a new approach to migrating a process between two cores of the Intel Single-Chip Cloud Computer (SCC), an experimental 48-core processor, with each core running a separate instance of the operating system. In this method, the amount of process state to be transferred from one core's memory to another is reduced by making use of special registers called lookup tables (LUTs) present on each core of the SCC. This new approach is therefore faster than the conventional method. (A toy sketch of LUT-based remapping follows this record.)
ContributorsJain, Vaibhav (Author) / Dasgupta, Partha (Thesis advisor) / Shrivastava, Aviral (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2013
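
The core idea above, remapping address-translation entries instead of copying a process's memory, can be sketched with a toy model. The class, segment names, and region values below are invented for illustration; the real SCC LUTs are hardware registers, and an actual migration moves considerably more state (register context, kernel bookkeeping) than shown here.

```python
# Toy model: each core has a lookup table (LUT) that maps a process's
# memory segments to regions of shared physical memory. To "migrate" a
# process, copy only the LUT entries (plus a small register context),
# not the memory contents themselves.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.lut = {}          # segment name -> physical region descriptor

def migrate(pid, src, dst, process_segments):
    """Remap the process's segments on the destination core by copying
    LUT entries; the pages themselves stay put in shared memory."""
    for seg in process_segments[pid]:
        dst.lut[seg] = src.lut[seg]   # point dst at the same physical region
        del src.lut[seg]              # release the mapping on the source core
    print(f"process {pid} now runs on core {dst.core_id} "
          f"with {len(process_segments[pid])} segments remapped")

core0, core1 = Core(0), Core(1)
core0.lut = {"code": 0x1000, "heap": 0x8000, "stack": 0xF000}
migrate(pid=42, src=core0, dst=core1,
        process_segments={42: ["code", "heap", "stack"]})
```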
Description
The wide adoption and continued advancement of information and communications technologies (ICT) have made it easier than ever for individuals and groups to stay connected over long distances. These advances have contributed to dramatically changing the dynamics of the modern-day workplace, to the point where it is now commonplace to see large, distributed multidisciplinary teams working together on a daily basis. However, in this environment, motivating, understanding, and valuing the diverse contributions of individual workers in collaborative enterprises becomes challenging. To address these issues, this thesis presents the goals, design, and implementation of Taskville, a distributed workplace game played by teams on large, public displays. Taskville uses a city-building metaphor to represent the completion of individual and group tasks within an organization. Promising results from two usability studies and two longitudinal studies at a multidisciplinary school demonstrate that Taskville supports personal reflection and improves team awareness through an engaging workplace activity.
ContributorsNikkila, Shawn (Author) / Sundaram, Hari (Thesis advisor) / Byrne, Daragh (Committee member) / Davulcu, Hasan (Committee member) / Olson, Loren (Committee member) / Arizona State University (Publisher)
Created2013
Description
Sustaining a fall can be hazardous for those with low bone mass. Interventions exist to reduce fall risk, but may not retain long-term interest. "Exergaming" has become popular among older adults as a therapy, but no research has been done on its preventative ability in non-clinical populations. The purpose was to determine the impact of 12 weeks of interactive play with the Wii Fit® on balance, muscular fitness, and bone health in peri-menopausal women. METHODS: 24 peri-menopausal women were randomized into study groups. Balance was assessed using the Berg/FICSIT-4 and a force plate. Muscular strength was measured using an isokinetic dynamometer at 60°/180°/240°/sec, and endurance was assessed using 50 repetitions at 240°/sec. Bone health was tracked using dual-energy x-ray absorptiometry (DXA) of the hip/lumbar spine and quantitative ultrasound (QUS) of the heel. Serum osteocalcin was assessed by enzyme immunoassay. Physical activity was quantified using the Women's Health Initiative Physical Activity Questionnaire, and dietary patterns were measured using the Nurses' Health Food Frequency Questionnaire. All measures were repeated at weeks 6 and 12, except for the DXA, which was completed pre-post. RESULTS: There were no significant differences in diet and PA between groups. Wii Fit® training did not improve scores on the Berg/FICSIT-4, but improved center of pressure on the force plate for Tandem Step, Eyes Closed (p-values: 0.001-0.051). There were no significant improvements in muscular fitness at any of the angular velocities. DXA BMD of the left femoral neck improved in the intervention group (+1.15%) and decreased in the control group (-1.13%), but no other sites had significant changes. Osteocalcin indicated no differences in bone turnover between groups at baseline, but the intervention group showed increased bone turnover between weeks 6 and 12. CONCLUSIONS: Findings indicate that Wii Fit® training may improve balance by preserving center of pressure. QUS, DXA, and osteocalcin data confirm that those in the intervention group were experiencing more bone turnover and bone formation than the control group. In summary, twelve weeks of strength/balance training with the Wii Fit® shows promise as a preventative intervention to reduce fall and fracture risk in non-clinical, middle-aged women who are at risk.
ContributorsWherry, Sarah Jo (Author) / Swan, Pamela D (Thesis advisor) / Adams, Marc (Committee member) / Der Ananian, Cheryl (Committee member) / Sweazea, Karen (Committee member) / Vaughan, Linda (Committee member) / Arizona State University (Publisher)
Created2014
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty data tuple. This is not possible in many cases, where the most a cleaning system can do is to generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say, the "most likely" candidate) per tuple. Such an approach can lead to loss of information. For example, consider a situation where there are three equally likely clean candidates for a dirty tuple: keeping only the "most likely" one discards two equally plausible alternatives. An appealing alternative that avoids such an information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. For example, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how would it affect the precision of query processing? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe that has the capability of producing multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database, and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), where the most likely clean candidate for each tuple is chosen and the rest of the alternatives are discarded. (An illustrative sketch of the tuple-disjoint representation follows this record.)
ContributorsRihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2013
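
A minimal sketch of the tuple-disjoint representation described above: each dirty tuple becomes a set of mutually exclusive clean alternatives with probabilities, and a simple selection query returns, per tuple, the probability mass of alternatives that match. The relation, attribute names, and probabilities are invented for illustration; BayesWipe and Mystiq handle this far more generally.

```python
from collections import defaultdict

# Each dirty tuple is kept as a list of (clean_candidate, probability)
# alternatives; alternatives of the same tuple are mutually exclusive.
probabilistic_db = {
    "t1": [({"city": "Tempe"},   0.6), ({"city": "Tampa"},  0.4)],
    "t2": [({"city": "Phoenix"}, 0.9), ({"city": "Peoria"}, 0.1)],
}

def select_prob(db, attr, value):
    """Probability that each tuple satisfies attr = value: sum the
    probabilities of that tuple's alternatives that match."""
    result = defaultdict(float)
    for tid, alternatives in db.items():
        for candidate, p in alternatives:
            if candidate.get(attr) == value:
                result[tid] += p
    return dict(result)

# Query: which tuples refer to Tempe, and with what probability?
print(select_prob(probabilistic_db, "city", "Tempe"))   # {'t1': 0.6}
```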