Matching Items (4)
Description
Concern regarding the quality of traffic data exists among engineers and planners tasked with obtaining and using the data for various transportation applications. While data quality issues are often understood by analysts doing the hands-on work, the quality characteristics of the data are rarely communicated effectively beyond the analyst. This research is an exercise in measuring and reporting data quality. The assessment was conducted to support the performance measurement program at the Maricopa Association of Governments in Phoenix, Arizona, and investigates the traffic data from 228 continuous monitoring freeway sensors in the metropolitan region. Results of the assessment provide an example of describing the quality of the traffic data with each of six data quality measures suggested in the literature: accuracy, completeness, validity, timeliness, coverage, and accessibility. An important contribution is made in the use of data quality visualization tools. These visualization tools are used to evaluate the validity of the traffic data beyond the pass/fail criteria commonly applied. More significantly, they serve to build an intuitive understanding of the underlying characteristics of the data considered valid. Based on the experience gained in this assessment, it is recommended that data quality visualization tools be developed and used in the processing and quality control of traffic data, and that these tools, along with other information on the quality control effort, be stored as metadata with the processed data.
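To make two of the six measures named above concrete, here is a minimal sketch of completeness and validity checks on traffic sensor records. The field names and range thresholds are illustrative assumptions, not taken from the MAG dataset.

```python
# Hypothetical checks for two of the six data quality measures:
# completeness and validity. Field names and thresholds are
# assumptions for illustration only.
def completeness(records, expected_count):
    """Share of expected sensor observations actually received."""
    return len(records) / expected_count

def validity(records, max_speed_mph=100, max_volume_vph=2600):
    """Share of records passing simple range (pass/fail) checks."""
    valid = [r for r in records
             if 0 <= r["speed"] <= max_speed_mph
             and 0 <= r["volume"] <= max_volume_vph]
    return len(valid) / len(records)

obs = [
    {"speed": 62, "volume": 1800},
    {"speed": 55, "volume": 2100},
    {"speed": 140, "volume": 1900},  # speed out of range -> invalid
]
print(completeness(obs, expected_count=4))  # 0.75
print(validity(obs))
```

Visualization tools of the kind the thesis recommends would go beyond these pass/fail ratios, e.g. by plotting the distribution of speeds so an analyst develops a feel for what "valid" data looks like.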
Contributors: Samuelson, Jothan P. (Author) / Pendyala, Ram M. (Thesis advisor) / Ahn, Soyoung (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The concept of Linked Data is gaining widespread popularity and importance. Linked Data is a method of publishing structured data on the web and interlinking it across sources. Its emergence has made it possible to make sense of the huge amounts of data scattered across the web and to link multiple heterogeneous sources. This raises the challenge of maintaining the quality of Linked Data, i.e., ensuring that outdated data is removed and new data is included. The focus of this thesis is devising strategies to effectively integrate data from multiple sources, publish it as Linked Data, and maintain its quality. The domain used in the study is online education. With so many courses offered as Massive Open Online Courses (MOOCs), it is becoming increasingly difficult for an end user to gauge which course best fits their needs.

Users are spoilt for choice. It would help them greatly to have a single place where they can visually compare the offerings of various MOOC providers for the course they are interested in. Previous work in this area was done through the MOOCLink project, which integrated data from Coursera, edX, and Udacity and generated Linked Data in the form of Resource Description Framework (RDF) triples.
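As a rough illustration of what such RDF triples look like, the sketch below serializes a single hypothetical course record to N-Triples. The course URI and predicate names are invented for illustration and are not the MOOCLink vocabulary.

```python
# Illustrative RDF triples for one MOOC course, serialized to
# N-Triples. URIs and predicates are hypothetical examples.
COURSE = "http://example.org/course/ml-101"

triples = [
    (COURSE, "http://purl.org/dc/terms/title", '"Intro to Machine Learning"'),
    (COURSE, "http://example.org/schema/provider", '"Coursera"'),
    (COURSE, "http://example.org/schema/weeks", '"6"'),
]

def to_ntriples(triples):
    """Render (subject, predicate, object) tuples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        # Quoted strings are literals; everything else is a URI.
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

In practice an RDF library would handle serialization and datatypes; this hand-rolled version only shows the triple structure.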

The research objective of this thesis is to determine a methodology by which the quality of the data available through the MOOCLink application is maintained, since new courses are constantly being added and old courses removed by data providers. This thesis presents the integration of data from various MOOC providers and algorithms for incrementally updating the Linked Data to maintain its quality, comparing them against a naïve approach, so that users are kept engaged with up-to-date data. A master threshold value, determined through experiments and analysis, quantifies when one algorithm outperforms the other in terms of time efficiency. An evaluation of the tool shows the effectiveness of the algorithms presented in this thesis.
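The contrast between a naïve rebuild and an incremental update can be sketched as follows. This is a simplified illustration of the general idea, keyed by course id, not the thesis's actual algorithms.

```python
# Sketch: naive full rebuild vs. incremental update of a course
# dataset from a provider feed. Structure is illustrative only.
def naive_update(_old, fresh):
    """Discard everything and rebuild from the provider feed."""
    return dict(fresh)

def incremental_update(old, fresh):
    """Apply only additions, removals, and changes."""
    updated = dict(old)
    for cid in set(old) - set(fresh):   # courses removed by provider
        del updated[cid]
    for cid, course in fresh.items():   # new or changed courses
        if old.get(cid) != course:
            updated[cid] = course
    return updated

old = {"c1": "Intro ML v1", "c2": "Databases"}
fresh = {"c1": "Intro ML v2", "c3": "Linked Data"}
# Both approaches reach the same state; the incremental one touches
# only the entries that changed, which is where time savings arise.
assert incremental_update(old, fresh) == naive_update(old, fresh)
```

A threshold of the kind the thesis describes would decide, based on the fraction of changed entries, when the incremental path stops being cheaper than a rebuild.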
Contributors: Dhekne, Chinmay (Author) / Bansal, Srividya (Thesis advisor) / Bansal, Ajay (Committee member) / Sohoni, Sohum (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Over the past decade, mobile health (mHealth) applications have seen greater acceptance due to their potential to remotely monitor patients and increase patient engagement, particularly for chronic diseases. Sickle Cell Disease (SCD) is an inherited chronic disorder of red blood cells that requires careful pain management. A significant number of mHealth applications have been developed to help clinicians collect and monitor information about SCD patients. Surveys are the most common way for patients to self-report their condition, but they are non-engaging and suffer from poor compliance. The quality of data gathered from survey instruments delivered through technology can be questioned, as patients may be motivated to complete a task but not to do it well. A compromise in the quality and quantity of collected patient data hinders clinicians' efforts to monitor patients' health on a regular basis and derive effective treatment measures. This research study has two goals. The first is to monitor user compliance and data quality in mHealth apps that deliver long and repetitive surveys. The second is to identify possible motivational interventions to improve compliance and data quality. As a form of intervention, I introduce intrinsic and extrinsic motivational factors within the application and test them on a small target population. I validate the impact of these motivational factors through a comparative analysis of the test results to determine improvements in user performance. This study is relevant because it helps analyze user behavior in long and repetitive self-reporting tasks and derive measures to improve user performance. The results will assist software engineers working with doctors in designing and developing self-reporting mHealth applications that collect better quality data and achieve greater user compliance.
Contributors: Rallabhandi, Pooja (Author) / Gary, Kevin A. (Thesis advisor) / Gaffar, Ashraf (Committee member) / Bansal, Srividya (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
This thesis investigates the use of MS Power BI in the case company's heterogeneous computing environment. The empirical evidence was collected through the authors' own observations and exposure to the modeling of dashboards, supported by external findings from interviews, published articles, and academic journals, and by conversations with leading experts at the 'Dynamic Talks Seattle/Redmond: Big Data Analytics' conference in Washington. Power BI modeling is effective for advancing the development of statistical thinking and data retrieval skills, finding trends and patterns in data representations, and making predictions. Computer-based data modeling gave meaning to mathematical results and supported examining their implications with simple charts to improve perception. Querying and other add-ins that require extra effort in other BI software are simplified in Power BI, making data modeling an easier undertaking for report builders. Using computer-based qualitative data analysis software, this paper details the opportunities and challenges of data modeling with dashboards. Simple linear regression is used for the case study only.
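For reference, the simple linear regression mentioned at the end of the abstract reduces to a closed-form least-squares fit. The sketch below implements it from scratch; the data points are invented for illustration and are not the company's data.

```python
# Least-squares simple linear regression, computed directly from
# the closed-form formulas. Data points are illustrative only.
def linregress(xs, ys):
    """Return (slope, intercept) of the best-fit line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [2.1, 4.0, 6.2, 7.9]
slope, intercept = linregress(xs, ys)
print(round(slope, 2), round(intercept, 2))  # 1.96 0.15
```

A dashboard tool like Power BI computes the same fit behind its trend-line feature; writing it out makes the underlying math visible.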
Contributors: Kusen, Alexandra Jeshua (Co-author) / Briones, Jared (Co-author) / Fugleberg, Aaron (Co-author) / Lin, Amy (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05