Matching Items (49)

Description

Due to the popularity of the movie industry, a film's opening weekend box-office performance is of great interest not only to movie studios but to the general public as well. In hopes of maximizing a film's opening weekend revenue, movie studios invest heavily in pre-release advertisement. The most visible advertisement is the movie trailer, which, in no more than two minutes and thirty seconds, serves as many people's first introduction to a film. The question, however, is how we can be confident that a trailer will succeed in its promotional task and bring in the audience a studio expects. In this thesis, we use machine learning classification techniques to determine the effectiveness of a movie trailer in the promotion of its namesake. We accomplish this by creating a predictive model that automatically analyzes the audio and visual characteristics of a movie trailer to determine whether a film's opening will be successful, defined as earning at least 35% of the film's production budget during its first U.S. box-office weekend. Our predictive model performed reasonably well, achieving an accuracy of 68.09% in a binary classification. Accuracy increased to 78.62% when genre was included in the predictive model.
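
As a hedged illustration of the labeling scheme this abstract describes, the sketch below builds the binary "successful opening" target (first-weekend gross of at least 35% of budget) and fits a stock classifier. The data, column names, and choice of classifier are hypothetical stand-ins, not the thesis's actual features or model.

```python
# Hypothetical sketch: an opening is "successful" if first-weekend U.S. gross
# reaches 35% of the production budget. Feature columns are invented stand-ins
# for the audio/visual trailer characteristics the thesis analyzes.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

films = pd.DataFrame({
    "opening_gross":    [60e6, 5e6, 30e6, 12e6],
    "budget":           [100e6, 40e6, 50e6, 20e6],
    "trailer_cut_rate": [1.8, 0.9, 1.4, 1.1],   # hypothetical visual feature
    "trailer_loudness": [0.7, 0.4, 0.6, 0.5],   # hypothetical audio feature
})
films["success"] = (films["opening_gross"] >= 0.35 * films["budget"]).astype(int)

X = films[["trailer_cut_rate", "trailer_loudness"]]
clf = RandomForestClassifier(random_state=0).fit(X, films["success"])
print(clf.predict(X))   # in practice, evaluate on held-out films

```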
Contributors: Williams, Terrance D'Mitri (Author) / Pon-Barry, Heather (Thesis director) / Zafarani, Reza (Committee member) / Maciejewski, Ross (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-05
Description

Traditionally, visualization is one of the most important and commonly used methods of generating insight into large-scale data. Particularly for spatiotemporal data, the translation of such data into a visual form allows users to quickly see patterns, explore summaries, and relate domain knowledge about underlying geographical phenomena that would not be apparent in tabular form. However, several critical challenges arise when visualizing and exploring these large spatiotemporal datasets. While the underlying geographical component of the data lends itself well to univariate visualization in the form of traditional cartographic representations (e.g., choropleth, isopleth, and dasymetric maps), as the data becomes multivariate, cartographic representations become more complex. To simplify the visual representations, analytical methods such as clustering and feature extraction are often applied as part of the classification phase. The automatic classification can then be rendered onto a map; however, one common issue in data classification is that items near a classification boundary are often mislabeled.

This thesis explores methods to augment the automated spatial classification by utilizing interactive machine learning as part of the cluster creation step. First, this thesis explores the design space for spatiotemporal analysis through the development of a comprehensive data wrangling and exploratory data analysis platform. Second, this system is augmented with a novel method for evaluating the visual impact of edge cases for multivariate geographic projections. Finally, system features and functionality are demonstrated through a series of case studies, with key features including similarity analysis, multivariate clustering, and novel visual support for cluster comparison.
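
To make the boundary-mislabeling issue concrete, here is a minimal sketch, on synthetic data, that clusters multivariate items and flags those sitting nearly equidistant between their two closest cluster centers. The names, data, and threshold are illustrative, not the system's own.

```python
# Illustrative sketch: after automatic clustering, flag items whose distances
# to their two nearest cluster centers are nearly equal, i.e., items near a
# classification boundary that are prone to mislabeling.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.random((200, 3))                      # multivariate attributes per item
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

d = km.transform(X)                           # distance to each cluster center
d.sort(axis=1)                                # ascending per row
margin = d[:, 1] - d[:, 0]                    # small margin = near a boundary
edge_cases = np.argwhere(margin < 0.05).ravel()
print(edge_cases)                             # candidates for visual inspection

```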
Contributors: Zhang, Yifan (Author) / Maciejewski, Ross (Thesis advisor) / Mack, Elizabeth (Committee member) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

The Global Change Assessment Model (GCAM) is an integrated assessment tool for exploring consequences of and responses to global change. However, the current iteration of GCAM relies on NetCDF file outputs, which need to be exported for visualization and analysis purposes. Such a requirement limits the uptake of this modeling platform by analysts who may wish to explore future scenarios. This work has focused on a web-based geovisual analytics interface for GCAM. Challenges of this work include enabling both domain experts and model experts to functionally explore the model. Furthermore, scenario analysis has been widely applied in climate science to understand the impact of climate change on the future human environment. The inter-comparison of scenario analyses remains a big challenge in both the climate science and visualization communities. In close collaboration with the Global Change Assessment Model team, I developed the first visual analytics interface for GCAM, with a series of interactive functions to help users understand the simulated impact of climate change on sectors of the global economy while allowing them to explore the inter-comparison of scenario analyses with GCAM models. This tool implements a hierarchical clustering approach to allow inter-comparison and similarity analysis among multiple scenarios over space, time, and multiple attributes through a set of coordinated multiple views. After working with this tool, the scientists from the GCAM team agree that the geovisual analytics tool can facilitate scenario exploration and enable scientific insight into scenario comparison. To demonstrate my work, I present two case studies. The first explores the potential impact that China's South-to-North Water Diversion Project in the Yangtze River basin will have on projected water demands. The second demonstrates how spatial variation and scale affect the similarity analysis of climate scenarios at world, continental, and country levels.
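
A minimal sketch of the hierarchical clustering idea, assuming each scenario is flattened into a vector of attributes over space and time; this is illustrative only, not the tool's implementation.

```python
# Illustrative sketch (not the GCAM tool's code): each row is one scenario,
# each column an attribute (e.g., water demand per region/time slice); the
# linkage tree supports similarity comparison among scenarios.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
scenarios = rng.random((6, 12))          # 6 hypothetical scenarios x 12 attributes

Z = linkage(scenarios, method="ward")    # agglomerative clustering tree
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 scenario groups
print(labels)                            # which scenarios behave similarly

```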
Contributors: Chang, Zheng (Author) / Maciejewski, Ross (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / White, Dave (Committee member) / Luo, Wei (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

The proper quantification and visualization of uncertainty requires a high level of domain knowledge. Despite this, few studies have collected and compared the roles, experiences, and opinions of scientists in different types of uncertainty analysis. I address this gap by conducting two types of studies: 1) a domain characterization study with general questions for experts from various fields, based on a recent literature review in ensemble analysis and visualization; and 2) long-term interviews with domain experts focusing on specific problems and challenges in uncertainty analysis. From the domain characterization, I identified the most common metrics applied for uncertainty quantification and discussed the current visualization applications of these methods. Based on the interviews with domain experts, I characterized the background and intents of the experts when performing uncertainty analysis. This enabled me to characterize domain needs that are currently underrepresented or unsupported in the literature. Finally, I developed a new framework for visualizing uncertainty in climate ensembles.
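
As a small illustration, two of the most common uncertainty quantification metrics for ensembles, the per-location mean and standard deviation across members, can be computed directly; the synthetic ensemble below is purely hypothetical.

```python
# Minimal sketch of common ensemble uncertainty metrics: per-cell mean and
# standard deviation across ensemble members. The 20-member, 4x4-grid
# ensemble here is synthetic and purely illustrative.
import numpy as np

members = np.random.default_rng(3).normal(15.0, 2.0, size=(20, 4, 4))
mean   = members.mean(axis=0)   # central tendency per grid cell
spread = members.std(axis=0)    # uncertainty (ensemble spread) per grid cell
print(spread.round(2))

```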
Contributors: Liang, Xing (Author) / Maciejewski, Ross (Thesis advisor) / Mascaro, Giuseppe (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Testing and verification of cyber-physical systems (CPS) is a challenging problem. The challenge arises as a result of the complex interactions between the components of these systems: the digital control and the physical environment. Furthermore, the software complexity that governs the high-level control logic in these systems is increasing day by day. As a result, in recent years, both the academic community and industry have invested heavily in developing tools and methodologies for the development of safety-critical systems. One scalable approach to testing and verification of these systems is guided system simulation using stochastic optimization techniques, where the goal of the stochastic optimizer is to find system behavior that does not meet the intended specifications.
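
A minimal sketch of this falsification idea, with plain random search standing in for a more sophisticated stochastic optimizer: `simulate` and `robustness` are hypothetical placeholders for a CPS model and a specification's robustness metric, where a negative value signals a violation.

```python
# Falsification via stochastic optimization, sketched with random search.
# All names are illustrative; real tools use richer input parameterizations
# and optimizers, and formal robustness semantics for the specification.
import random

def simulate(u):                 # hypothetical system: returns a trace
    return [u * t for t in range(10)]

def robustness(trace):           # hypothetical spec: trace must stay below 5.0
    return min(5.0 - x for x in trace)

best_u, best_rob = None, float("inf")
for _ in range(1000):
    u = random.uniform(-1.0, 1.0)           # sample a candidate input
    rob = robustness(simulate(u))
    if rob < best_rob:                      # keep the least-robust behavior
        best_u, best_rob = u, rob
    if best_rob < 0:                        # falsifying input found
        break
print(best_u, best_rob)

```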

In this dissertation, three methods that facilitate the testing and verification process for CPS are presented:

1. A graphical formalism and tool that enable the elicitation of formal requirements. To evaluate the performance of the tool, a usability study is conducted.

2. A parameter mining method to infer, analyze, and visually represent falsifying ranges for parametrized system specifications.

3. A notion of conformance between a CPS model and its implementation, along with a testing framework.

The methods are evaluated on high-fidelity case studies from industry.
Contributors: Hoxha, Bardh (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Maciejewski, Ross (Committee member) / Ben Amor, Heni (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

A major challenge in health-related policy and program evaluation research is attributing underlying causal relationships where complicated processes may exist in natural or quasi-experimental settings. Spatial interaction and heterogeneity between units at individual or group levels can violate both components of the stable unit treatment value assumption (SUTVA) that are core to the counterfactual framework, making treatment effects difficult to assess. New approaches are needed in health studies to develop spatially dynamic causal modeling methods that both derive insights sensitive to spatial differences and dependencies from the data and rely on the more robust, dynamic technical infrastructure needed for decision-making. To address this gap, with a focus on causal applications theoretically, methodologically, and technologically, I (1) develop a theoretical spatial framework (within single-level panel econometric methodology) that extends existing theories and methods of causal inference, which tend to ignore spatial dynamics; (2) demonstrate how this spatial framework can be applied in empirical research; and (3) implement a new spatial infrastructure framework that integrates and manages the required data for health systems evaluation.

The new spatially explicit counterfactual framework considers how spatial effects impact treatment choice, treatment variation, and treatment effects. To illustrate this new methodological framework, I first replicate a classic quasi-experimental study that evaluates the effect of drinking-age policy on mortality in the United States from 1970 to 1984, and further extend it with a spatial perspective. In another example, I evaluate food access dynamics in Chicago from 2007 to 2014 by implementing advanced spatial analytics that better account for the complex patterns of food access, and a quasi-experimental research design to distill the impact of the Great Recession on the foodscape. Inference interpretation is sensitive to both research design framing and the underlying processes that drive geographically distributed relationships. Finally, I advance a new Spatial Data Science Infrastructure to integrate and manage data in dynamic, open environments for public health systems research and decision-making. I demonstrate an infrastructure prototype in a final case study, developed in collaboration with health department officials and community organizations.
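
For intuition, a non-spatial baseline for such a quasi-experimental panel study might look like the sketch below: a policy dummy with state and year fixed effects. The data and column names are invented, and the thesis's spatially explicit extension is not reproduced here.

```python
# Hedged sketch of a difference-in-differences-style panel setup: regress
# mortality on a drinking-age policy dummy plus state and year fixed effects.
# All values are fabricated for illustration.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.DataFrame({
    "state":   ["AZ", "AZ", "CA", "CA", "TX", "TX"] * 2,
    "year":    [1975] * 6 + [1980] * 6,
    "legal18": [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1],  # policy indicator
    "mortality": [9.1, 8.7, 7.2, 7.0, 9.5, 9.8, 8.2, 7.9, 7.1, 6.8, 9.9, 10.1],
})
model = smf.ols("mortality ~ legal18 + C(state) + C(year)", data=panel).fit()
print(model.params["legal18"])   # naive (non-spatial) treatment-effect estimate

```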
Contributors: Kolak, Marynia Aniela (Author) / Anselin, Luc (Thesis advisor) / Rey, Sergio (Committee member) / Koschinsky, Julia (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Exabytes of data are created online every day, and nowhere is this deluge more apparent than on social media. Naturally, finding ways to leverage this unprecedented source of human information is an active area of research. Social media platforms have become laboratories for conducting experiments about people at scales thought unimaginable only a few years ago.

Researchers and practitioners use social media to extract actionable patterns, such as where aid should be distributed in a crisis. However, the validity of these patterns relies on having a representative dataset. As this dissertation shows, the data collected from social media is seldom representative of the activity of the site itself, and even less so of human activity. This means that the results of many studies are limited by the quality of the data they collect.

The finding that social media data is biased motivates the main challenge addressed by this thesis. I introduce three sets of methodologies to correct for bias. First, I design methods to deal with data collection bias: a methodology that finds bias within a social media stream by comparing the collected data with other sources, and a crawling strategy that minimizes the amount of bias appearing in the resulting dataset. Second, I introduce a methodology to identify bots and shills within a social media dataset. This directly addresses the concern that the users of a social media site are not representative. Applying these methodologies allows the population under study on a social media site to better match that of the real world. Finally, the dissertation discusses perceptual biases, explains how they affect analysis, and introduces computational approaches to mitigate them.
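
As a hedged illustration of the stream-versus-reference comparison, the sketch below measures how far a collected stream's topic distribution drifts from a fuller reference sample using KL divergence. The counts are fabricated, and the divergence measure is one plausible choice, not necessarily the dissertation's exact measure.

```python
# Illustrative bias check: compare a filtered/sampled stream's topic
# distribution against a broader reference sample. Counts are made up.
import numpy as np
from scipy.stats import entropy

topics = ["politics", "sports", "music", "tech"]
stream_counts    = np.array([400, 120,  80, 100])   # collected stream
reference_counts = np.array([250, 250, 250, 250])   # broader reference sample

p = stream_counts / stream_counts.sum()
q = reference_counts / reference_counts.sum()
print(entropy(p, q))   # KL(p || q): 0 means no detectable sampling bias

```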

The results of the dissertation allow for the discovery and removal of different levels of bias within a social media dataset. This has important implications for social media mining, namely that the behavioral patterns and insights extracted from social media will be more representative of the populations under study.
Contributors: Morstatter, Fred (Author) / Liu, Huan (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Maciejewski, Ross (Committee member) / Carley, Kathleen M. (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Predictive analytics embraces an extensive range of techniques, from statistical modeling to machine learning to data mining, and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline under the assumption that a human in the loop can aid the analysis by integrating domain knowledge that might not be broadly captured by the system. Primary uses of visualization in the predictive analytics pipeline have focused on data cleaning, exploratory analysis, and diagnostics. More recently, numerous visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end users to understand and engage with the modeling process. However, despite the numerous and promising applications of visual analytics to predictive analytics tasks, work to assess the effectiveness of predictive visual analytics is lacking.

This thesis studies the current methodologies in predictive visual analytics. It first defines the scope of predictive analytics and presents a predictive visual analytics (PVA) pipeline. Following the proposed pipeline, a predictive visual analytics framework is developed to explore under what circumstances a human-in-the-loop prediction process is most effective. This framework combines sentiment analysis, feature selection mechanisms, similarity comparisons, and model cross-validation through a variety of interactive visualizations to support analysts in model building and prediction. To test the proposed framework, an instantiation for movie box-office prediction is developed and evaluated. Results from small-scale user studies are presented and discussed, and a generalized user study is carried out to assess the role of predictive visual analytics in a movie box-office prediction scenario.
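
A minimal sketch, with invented features, of the kind of model-building loop the framework supports: automatic feature selection plus cross-validation for a regression-style box-office predictor. The interactive visual components are not shown.

```python
# Illustrative model-building loop: select features, then cross-validate a
# simple predictor. Features (e.g., sentiment score, budget, screens) and
# data are synthetic stand-ins.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.random((60, 5))            # 60 films x 5 hypothetical features
y = X @ np.array([3.0, 0.5, 2.0, 0.0, 0.0]) + rng.normal(0, 0.1, 60)

pipe = make_pipeline(SelectKBest(f_regression, k=3), LinearRegression())
scores = cross_val_score(pipe, X, y, cv=5)   # analysts compare models by score
print(scores.mean())

```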
Contributors: Lu, Yafeng (Author) / Maciejewski, Ross (Thesis advisor) / Cooke, Nancy J. (Committee member) / Liu, Huan (Committee member) / He, Jingrui (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

The connections between different entities define different kinds of networks, and many such networked phenomena are influenced by their underlying geographical relationships. By integrating network and geospatial analysis, the goal is to extract information about interaction topologies and their relationships to related geographical constructs. In recent decades, much work has been done analyzing the dynamics of spatial networks; however, many challenges remain in this field. First, the development of social media and transportation technologies has greatly reshaped the topologies of communication between different geographical regions. Second, the distance metrics used in spatial analysis should also be enriched with the underlying network information to develop accurate models.

Visual analytics provides methods for data exploration, pattern recognition, and knowledge discovery. However, despite the long history of geovisualization and network visual analytics, little work has been done to develop visual analytics tools that focus specifically on geographically networked phenomena. This thesis develops a variety of visualization methods to present data values and geospatial network relationships, enabling users to interactively explore the data. Users can investigate the connections in both virtual networks and geospatial networks, and the underlying geographical context can be used to improve knowledge discovery. The focus of this thesis is on social media analysis and geographical hotspot optimization. A framework is proposed for social network analysis to unveil the links between social media interactions and their underlying networked geospatial phenomena. This is combined with a novel hotspot approach that improves hotspot identification and boundary detection using the networks extracted from urban infrastructure. Several real-world problems have been analyzed using the proposed visual analytics frameworks. The studies and experiments show that visual analytics methods can help analysts explore such data from multiple perspectives and aid the knowledge discovery process.
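
For a rough sense of hotspot identification, the sketch below z-scores event counts per grid cell against the global mean and flags outliers. It is a simplified, planar stand-in for the network-aware approach the thesis proposes, on synthetic data.

```python
# Simplified hotspot detection: flag grid cells whose event counts stand
# well above the regional baseline. Data are synthetic; the thesis's
# network-constrained method is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
counts = rng.poisson(5, size=(10, 10)).astype(float)  # events per grid cell
counts[4:6, 4:6] += 20                                # inject a hotspot

z = (counts - counts.mean()) / counts.std()
hotspots = np.argwhere(z > 2)        # cells well above the baseline
print(hotspots)

```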
Contributors: Wang, Feng (Author) / Maciejewski, Ross (Thesis advisor) / Davulcu, Hasan (Committee member) / Grubesic, Anthony (Committee member) / Shakarian, Paulo (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Due to its difficult nature, organic chemistry is receiving much research attention across the nation aimed at developing more efficient and effective means of teaching it. As part of that effort, Dr. Ian Gould at ASU is developing an online organic chemistry educational website that provides help to students, adapts to their responses, and collects data about their performance. This thesis project addresses the design and implementation of an input parser for organic chemistry reagent questions, to appear on his website. After students used the form to answer questions throughout the Spring 2013 semester in Dr. Gould's organic chemistry class, the data gathered from their usage was analyzed and feedback was collected. The feedback obtained from students was positive, and suggested that the input parser accomplished the educational goals it sought to meet.
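
A small illustrative sketch, not the site's actual parser, of how a student's reagent answer such as "1. BH3; 2. H2O2, NaOH" might be tokenized into ordered reagent steps for comparison against an answer key:

```python
# Hypothetical reagent-input tokenizer: split a numbered answer into steps,
# then split each step into individual reagents.
import re

def parse_reagents(answer: str) -> list[list[str]]:
    steps = []
    for step in re.split(r"\s*\d+\.\s*", answer):   # split on "1.", "2.", ...
        if step.strip():
            reagents = [r.strip() for r in re.split(r"[;,]", step) if r.strip()]
            steps.append(reagents)
    return steps

print(parse_reagents("1. BH3; 2. H2O2, NaOH"))
# [['BH3'], ['H2O2', 'NaOH']]

```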
Contributors: Beerman, Eric Christopher (Author) / Gould, Ian (Thesis director) / Wilkerson, Kelly (Committee member) / Mosca, Vince (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05