Matching Items (22)
Description
With the rapid development of mobile sensing technologies such as GPS, RFID, and smartphone sensors, capturing position data in the form of trajectories has become easy. Moving-object trajectory analysis is a growing area of interest owing to its applications in domains such as marketing, security, and traffic monitoring and management. To better understand movement behaviors from raw mobility data, this doctoral work provides analytic models for analyzing trajectory data. As a first contribution, a model is developed to detect changes in trajectories over time. If the taxis moving in a city are viewed as sensors that provide real-time information about traffic in the city, a change in these trajectories over time can reveal that the road network has changed. To detect changes, trajectories are modeled with a Hidden Markov Model (HMM). A modified training algorithm for parameter estimation in the HMM, called m-BaumWelch, is used to develop likelihood estimates under assumed changes, which are in turn used to detect changes in trajectory data over time. Data from vehicles are used to test the change detection method. Secondly, sequential pattern mining is used to develop a model for detecting changes in frequent patterns occurring in trajectory data. The aim is to answer two questions: Are the frequent patterns still frequent in the new data? If they are, has the time-interval distribution within the pattern changed? Two approaches are considered for change detection: a frequency-based approach and a distribution-based approach. The methods are illustrated with vehicle trajectory data. Finally, a model is developed for clustering and outlier detection in semantic trajectories. A challenge in clustering semantic trajectories is that both numeric and categorical attributes are present; another is that trajectories can be of different lengths and can have missing values. A tree-based ensemble is used to address these problems, and the approach is extended to outlier detection in semantic trajectories.
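As a rough illustration of the likelihood-based change test described above, the sketch below fits one HMM to a baseline window and another to a recent window and compares their per-sample log-likelihoods on the recent data. It uses standard Baum-Welch training from the hmmlearn package rather than the thesis's m-BaumWelch variant, and the data, model size, and threshold are all illustrative.

```python
# Sketch: detect a change in trajectory data by comparing HMM likelihoods.
# Standard Baum-Welch (hmmlearn) stands in for the thesis's m-BaumWelch variant;
# the model size, threshold, and synthetic data are illustrative only.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
baseline = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))  # old trajectory steps (dx, dy)
recent = rng.normal(loc=[1.5, 0.5], scale=0.5, size=(200, 2))    # new window, shifted behavior

# Reference HMM fitted on the baseline window.
ref = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
ref.fit(baseline)

# Alternative HMM fitted on the recent window (the "assumed change" model).
alt = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
alt.fit(recent)

# Average per-sample log-likelihood of the recent data under each model.
ll_ref = ref.score(recent) / len(recent)
ll_alt = alt.score(recent) / len(recent)

THRESHOLD = 0.5  # illustrative; in practice calibrated on a period with no known change
if ll_alt - ll_ref > THRESHOLD:
    print("Change detected: recent trajectories fit a new model much better.")
else:
    print("No significant change detected.")
```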
Contributors: Kondaveeti, Anirudh (Author) / Runger, George C. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Critical infrastructures in healthcare, power systems, and web services incorporate cyber-physical systems (CPSes), where software-controlled computing systems interact with the physical environment through actuation and monitoring. Ensuring software safety in CPSes, to avoid hazards to property and human life as a result of uncontrolled interactions, is essential and challenging. The principal hurdle in this regard is the characterization of the context-driven interactions between software and the physical environment (cyber-physical interactions), which introduce multi-dimensional dynamics in space and time, complex non-linearities, and non-trivial aggregation of interactions in the case of networked operations. Traditionally, CPS software is tested for safety either through experimental trials, which can be expensive, incomprehensive, and hazardous, or through static analysis of code, which ignores cyber-physical interactions. This thesis considers model-based engineering, a paradigm widely used in different disciplines of engineering, for safety verification of CPS software and contributes to three fundamental phases: a) modeling, building abstractions or models that characterize cyber-physical interactions in a mathematical framework; b) analysis, reasoning about safety based on properties of the model; and c) synthesis, implementing models on standard testbeds for performing preliminary experimental trials. In this regard, CPS modeling techniques are proposed that can accurately capture context-driven, spatio-temporal, aggregate cyber-physical interactions. Different levels of abstraction are considered, which result in high-level architectural models or more detailed formal behavioral models of CPSes. The outcomes include a well-defined architectural specification framework called CPS-DAS and a novel spatio-temporal formal model called Spatio-Temporal Hybrid Automata (STHA) for CPSes. Model analysis techniques are proposed for the CPS models, which can simulate the effects of dynamic context changes on non-linear spatio-temporal cyber-physical interactions and characterize aggregate effects. The outcomes include tractable algorithms for simulation analysis and for theoretically proving safety properties of CPS software. Lastly, a software synthesis technique is proposed that can automatically convert high-level architectural models of CPSes in the healthcare domain into implementations in high-level programming languages. The outcome is a tool called Health-Dev that can synthesize software implementations of CPS models in healthcare for experimental verification of safety properties.
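To give a loose flavor of the kind of behavioral model involved, the sketch below simulates a plain two-mode hybrid automaton with Euler integration and checks a safety bound along the trajectory. It is not the spatio-temporal STHA formalism; the dynamics, guards, and safety limit are invented for illustration.

```python
# Sketch: a two-mode hybrid automaton (e.g., a controller that is ON or OFF),
# simulated with Euler integration and checked against a safety bound.
# Dynamics, guards, and the bound are illustrative, not taken from the thesis.

def flow(mode: str, x: float) -> float:
    """Continuous dynamics dx/dt in each discrete mode."""
    return 2.0 - 0.1 * x if mode == "ON" else -0.1 * x

def guard(mode: str, x: float) -> str:
    """Discrete transitions: switch modes when thresholds are crossed."""
    if mode == "ON" and x >= 8.0:
        return "OFF"
    if mode == "OFF" and x <= 2.0:
        return "ON"
    return mode

def simulate(x0=0.0, mode="ON", dt=0.01, horizon=50.0, safety_limit=10.0):
    x, t = x0, 0.0
    while t < horizon:
        x += flow(mode, x) * dt   # continuous evolution
        mode = guard(mode, x)     # discrete jump if a guard is enabled
        if x > safety_limit:      # safety property: x must stay below the limit
            return False, t
        t += dt
    return True, t

safe, t = simulate()
print("safe for the whole horizon" if safe else f"safety violated at t={t:.2f}")
```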
Contributors: Banerjee, Ayan (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Poovendran, Radha (Committee member) / Fainekos, Georgios (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Machine learning models are increasingly being deployed in real-world applications where their predictions are used to make critical decisions in a variety of domains. The proliferation of such models has led to a burgeoning need to ensure the reliability and safety of these models, given the potential negative consequences of model vulnerabilities. The complexity of machine learning models, along with the extensive data sets they analyze, can result in unpredictable and unintended outcomes. Model vulnerabilities may manifest due to errors in data input, algorithm design, or model deployment, which can have significant implications for both individuals and society. To prevent such negative outcomes, it is imperative to identify model vulnerabilities at an early stage in the development process. This will aid in guaranteeing the integrity, dependability, and safety of the models, thus mitigating potential risks and enabling the full potential of these technologies to be realized. However, enumerating vulnerabilities can be challenging due to the complexity of the real-world environment. Visual analytics, situated at the intersection of human-computer interaction, computer graphics, and artificial intelligence, offers a promising approach for achieving high interpretability of complex black-box models, thus reducing the cost of obtaining insights into potential vulnerabilities of models. This research is devoted to designing novel visual analytics methods to support the identification and analysis of model vulnerabilities. Specifically, generalizable visual analytics frameworks are instantiated to explore vulnerabilities in machine learning models concerning security (adversarial attacks and data perturbation) and fairness (algorithmic bias). In the end, a visual analytics approach is proposed to enable domain experts to explain and diagnose the model improvement of addressing identified vulnerabilities of machine learning models in a human-in-the-loop fashion. The proposed methods hold the potential to enhance the security and fairness of machine learning models deployed in critical real-world applications.
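As a toy example of the security vulnerabilities such frameworks help surface, the sketch below applies an FGSM-style input perturbation to a hand-rolled logistic-regression model and reports the accuracy drop. It is not one of the proposed visual analytics frameworks; the data and step size are synthetic.

```python
# Sketch: probe a model's vulnerability to small input perturbations (FGSM-style)
# on a tiny logistic-regression model. Synthetic data; illustrative epsilon.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train the logistic model with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def accuracy(X_):
    return np.mean((1.0 / (1.0 + np.exp(-(X_ @ w + b))) > 0.5) == (y > 0.5))

# Perturb each input in the direction that increases its loss (sign of the gradient).
eps = 0.3
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(x) for logistic loss
X_adv = X + eps * np.sign(grad_x)

print(f"clean accuracy:     {accuracy(X):.2f}")
print(f"perturbed accuracy: {accuracy(X_adv):.2f}")
```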
Contributors: Xie, Tiankai (Author) / Maciejewski, Ross (Thesis advisor) / Liu, Huan (Committee member) / Bryan, Chris (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In recent years, the rise in social media usage, both vertically in terms of the number of users per platform and horizontally in terms of the number of platforms per user, has led to a data explosion.

User-generated social media content provides an excellent opportunity to mine data of interest and to build resourceful applications. The rise in the number of healthcare-related social media platforms and the volume of healthcare knowledge available online in the last decade has resulted in increased social media usage for personal healthcare. In the United States, nearly ninety percent of adults in the age group 50-75 have used social media to seek and share health information. Motivated by this growth in social media usage, this thesis focuses on healthcare-related applications, studies various challenges posed by social media data, and addresses them through novel and effective machine learning algorithms.

The major challenges for effectively and efficiently mining social media data to build functional applications include: (1) Data reliability and acceptance: most social media data (especially in the context of healthcare-related social media) is not regulated, and little has been studied about the benefits of healthcare-specific social media; (2) Data heterogeneity: social media data is generated by users with both demographic and geographic diversity; (3) Model transparency and trustworthiness: most existing machine learning models for addressing heterogeneity are black-box models, and few provide explanations for their predictions that would allow users to trust them.

In response to these challenges, three main research directions have been investigated in this thesis: (1) Analyzing social media influence on healthcare: to study the real-world impact of social media as a source to offer or seek support for patients with chronic health conditions; (2) Learning from task heterogeneity: to propose various models and algorithms that are adaptable to new social media platforms and robust to dynamic social media data, specifically for modeling user behaviors, identifying similar actors across platforms, and adapting black-box models to a specific learning scenario; (3) Explaining heterogeneous models: to interpret predictive models in the presence of task heterogeneity. In this thesis, novel algorithms with theoretical analysis from various aspects (e.g., time complexity, convergence properties) have been proposed. The effectiveness and efficiency of the proposed algorithms are demonstrated through comparisons with state-of-the-art methods and relevant case studies.
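As a small, hypothetical illustration of one ingredient of direction (2), identifying similar actors across platforms, the sketch below matches users by cosine similarity over toy feature vectors; the thesis's actual models are far richer than this.

```python
# Sketch: match users across two platforms by cosine similarity of feature vectors.
# User names and features (e.g., topic proportions of their posts) are invented.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

platform_a = {"alice": np.array([0.8, 0.1, 0.1]), "bob": np.array([0.2, 0.7, 0.1])}
platform_b = {"user42": np.array([0.75, 0.15, 0.1]), "user99": np.array([0.1, 0.2, 0.7])}

for name_a, vec_a in platform_a.items():
    match, vec_b = max(platform_b.items(), key=lambda kv: cosine(vec_a, kv[1]))
    print(f"{name_a} best matches {match} (similarity {cosine(vec_a, vec_b):.2f})")
```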
Contributors: Nelakurthi, Arun Reddy (Author) / He, Jingrui (Thesis advisor) / Cook, Curtiss B (Committee member) / Maciejewski, Ross (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In the artificial intelligence literature, three forms of reasoning are commonly employed to understand agent behavior: inductive, deductive, and abductive. More recently, data-driven approaches leveraging ideas such as machine learning, data mining, and social network analysis have gained popularity. While data-driven variants of the aforementioned forms of reasoning have been applied separately, there is little work on how data-driven approaches across all three forms relate and lend themselves to practical applications. Given an agent's behavior and its percept sequence, how can one identify a specific outcome such as the likeliest explanation? To address real-world problems, it is vital to understand the different types of reasoning that can lead to better data-driven inference.

This dissertation has laid the groundwork for studying these relationships and applying them to three real-world problems. In criminal modeling, inductive and deductive reasoning are applied to the early prediction of violent criminal gang members; to address this problem, features derived from the co-arrestee social network, as well as geographical and temporal features, are leveraged. Then, a data-driven variant of geospatial abductive inference is studied in the missing person problem to locate a missing person. Finally, inductive and abductive reasoning are studied for identifying the pathogenic accounts of a cascade in social networks.
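To convey the flavor of geospatial abductive inference as best-explanation selection, the sketch below scores candidate locations by how well they explain a set of observation points and picks the highest-scoring one. The coordinates, decay constant, and scoring rule are invented and are not the dissertation's method.

```python
# Sketch: geospatial abduction as best-explanation selection -- score candidate
# locations against observed points (e.g., reported sightings) and pick the best.
# All coordinates and the decay constant are illustrative.
import math

observations = [(2.0, 3.0), (2.5, 2.5), (3.0, 3.5)]            # observed points
candidates = [(0.0, 0.0), (2.5, 3.0), (5.0, 5.0), (2.0, 4.0)]  # feasible locations

def explanation_score(candidate, obs, decay=1.0):
    """Likelihood-style score: observations close to the candidate count more."""
    return sum(math.exp(-decay * math.dist(candidate, o)) for o in obs)

best = max(candidates, key=lambda c: explanation_score(c, observations))
print("most plausible location:", best)
```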
Contributors: Shaabani, Elham (Author) / Shakarian, Paulo (Thesis advisor) / Davulcu, Hasan (Committee member) / Maciejewski, Ross (Committee member) / Decker, Scott (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
An old proverb claims that “two heads are better than one”. Crowdsourcing research and practice have taken this to heart, attempting to show that thousands of heads can be even better. This is not limited to leveraging a crowd’s knowledge, but also their creativity—the ability to generate something not only useful, but also novel. In practice, there are initiatives such as Free and Open Source Software communities developing innovative software. In research, the field of crowdsourced creativity, which attempts to design scalable support mechanisms, is blooming. However, both contexts still present many opportunities for advancement.

In this dissertation, I seek to advance both the knowledge of limitations in current technologies used in practice as well as the mechanisms that can be used for large-scale support. The overall research question I explore is: “How can we support large-scale creative collaboration in distributed online communities?” I first advance existing support techniques by evaluating the impact of active support in brainstorming performance. Furthermore, I leverage existing theoretical models of individual idea generation as well as recommender system techniques to design CrowdMuse, a novel adaptive large-scale idea generation system. CrowdMuse models users in order to adapt itself to each individual. I evaluate the system’s efficacy through two large-scale studies. I also advance knowledge of current large-scale practices by examining common communication channels under the lens of Creativity Support Tools, yielding a list of creativity bottlenecks brought about by the affordances of these channels. Finally, I connect both ends of this dissertation by deploying CrowdMuse in an Open Source online community for two weeks. I evaluate their usage of the system as well as its perceived benefits and issues compared to traditional communication tools.
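As a very rough sketch of the recommender-style adaptation idea, the example below scores a pool of candidate inspirations against a user's own ideas with TF-IDF cosine similarity and suggests the moderately similar ones. It is not CrowdMuse's actual model; the ideas and similarity bounds are invented.

```python
# Sketch: pick inspirations related to, but not identical to, a brainstormer's own
# ideas, using TF-IDF cosine similarity. Toy data; not CrowdMuse itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_ideas = ["a foldable bike helmet", "a helmet with built-in lights"]
candidate_pool = [
    "reflective stickers for backpacks",
    "a helmet that doubles as a bike lock",
    "a recipe-sharing app for students",
]

vec = TfidfVectorizer().fit(user_ideas + candidate_pool)
sims = cosine_similarity(vec.transform(candidate_pool), vec.transform(user_ideas)).max(axis=1)

# Suggest candidates that are moderately similar: related enough to be relevant,
# different enough to spark new directions (bounds are illustrative).
for idea, s in zip(candidate_pool, sims):
    if 0.1 < s < 0.9:
        print(f"suggest: {idea!r} (similarity {s:.2f})")
```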

This dissertation makes the following contributions to the field of large-scale creativity: 1) the design and evaluation of a first-of-its-kind adaptive brainstorming system; 2) the evaluation of the effects of active inspirations compared to simple idea exposure; 3) the development and application of a set of creativity support design heuristics to uncover creativity bottlenecks; and 4) an exploration of large-scale brainstorming systems’ usefulness to online communities.
Contributors: da Silva Girotto, Victor Augusto (Author) / Walker, Erin A (Thesis advisor) / Burleson, Winslow (Thesis advisor) / Maciejewski, Ross (Committee member) / Hsiao, Sharon (Committee member) / Bigham, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Vectorization is an important process in the fields of graphics and image processing. In computer-aided design (CAD), drawings are scanned, vectorized, and written as CAD files in a process called paper-to-CAD conversion or drawing conversion. In geographic information systems (GIS), satellite or aerial images are vectorized to create maps. In graphic design and photography, raster graphics can be vectorized for easier use and resizing, and vector art is popular as online content. Vectorization takes raster images, point clouds, or a series of scattered data samples in space and outputs graphic elements of various types, including points, lines, curves, polygons, parametric curves, and surface patches. The vectorized representations consist of a different set of components and elements from that of the inputs; this change of representation is the key difference between vectorization and practices such as smoothing and filtering. Compared to the inputs, the vector outputs provide a higher order of control and attributes such as smoothness. Their curvatures or gradients at given points are scale invariant, and they are more robust data sources for downstream applications and analysis. This dissertation explores and broadens the scope of vectorization in various contexts. I propose a novel vectorization algorithm for raster images along with several new applications of the vectorization mechanism in processing and analyzing both 2D and 3D data sets. The main components of the research are: using vectorization to generate 3D models from 2D floor plans; a novel raster image vectorization method and its applications in computer vision, image processing, and animation; and vectorization for visualization and information extraction in 3D laser scan data. I also apply vectorization analysis to human body scans and rock surface scans to show insights otherwise difficult to obtain.
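As a small, generic example of the change of representation that vectorization performs, the sketch below reduces densely sampled boundary points to a compact polyline with classic Douglas-Peucker simplification; this is a textbook method, not the dissertation's algorithm.

```python
# Sketch: turn densely sampled boundary points into a compact polyline with
# Douglas-Peucker simplification -- a classic (not the thesis's) vectorization step.
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.dist(a, b)

def douglas_peucker(points, tolerance):
    """Recursively keep only points that deviate more than `tolerance`."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right

samples = [(x / 10.0, math.sin(x / 10.0)) for x in range(0, 63)]  # dense samples
polyline = douglas_peucker(samples, tolerance=0.05)
print(f"{len(samples)} samples reduced to {len(polyline)} vertices")
```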
Contributors: Yin, Xuetao (Author) / Razdan, Anshuman (Thesis advisor) / Wonka, Peter (Committee member) / Femiani, John (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Crises or large-scale emergencies such as earthquakes and hurricanes cause massive damage to lives and property. Crisis response is an essential task to mitigate the impact of a crisis. An effective response to a crisis necessitates information gathering and analysis. Traditionally, this process has been restricted to the information collected by first responders on the ground in the affected region or by official agencies such as local governments involved in the response. However, the ubiquity of mobile devices has empowered people to publish information during a crisis through social media, such as damage reports from a hurricane. Social media has thus emerged as an important channel of information which can be leveraged to improve crisis response. Twitter is a popular medium that has been employed in recent crises. However, it presents new challenges: the data is noisy and uncurated, and it has high volume and high velocity. In this work, I study four key problems in the use of social media for crisis response: effective monitoring and analysis of high-volume crisis tweets, detecting crisis events automatically in streaming data, identifying users who can be followed to effectively monitor a crisis, and finally understanding user behavior during a crisis to detect tweets originating inside crisis regions. To address these problems, I propose two systems that assist disaster responders or analysts in collaboratively collecting crisis-related tweets and analyzing them using visual analytics to identify interesting regions, topics, and users involved in disaster response. I present a novel approach to detecting crisis events automatically in noisy, high-volume Twitter streams. I also investigate and introduce novel methods to tackle information overload through the identification of information leaders in information diffusion, who can be followed for efficient crisis monitoring, and the identification of messages originating from crisis regions using user behavior analysis.
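For a flavor of automatic event detection in a stream, the sketch below flags bursts in a keyword's per-minute tweet counts with a sliding-window z-score test; the counts and threshold are toy values, and the thesis's detection approach is more involved than this.

```python
# Sketch: flag a burst when the latest per-minute count of a keyword is several
# standard deviations above a sliding baseline window. Toy counts and threshold.
from collections import deque
import statistics

def detect_bursts(counts, window=10, z_threshold=3.0):
    baseline = deque(maxlen=window)
    bursts = []
    for t, c in enumerate(counts):
        if len(baseline) == window:
            mean = statistics.mean(baseline)
            stdev = statistics.pstdev(baseline) or 1.0  # guard against zero spread
            if (c - mean) / stdev > z_threshold:
                bursts.append(t)
        baseline.append(c)
    return bursts

# Per-minute counts of the keyword "earthquake": quiet, then a sudden spike.
counts = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 40, 55, 60, 20, 6, 4]
print("burst detected at minutes:", detect_bursts(counts))
```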
Contributors: Kumar, Shamanth (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Maciejewski, Ross (Committee member) / Agarwal, Nitin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This thesis focuses on generating and exploring design variations for architectural and urban layouts. I propose to study this general problem in three selected contexts.

First, I introduce a framework to generate many variations of a facade design that look similar to a given facade layout. Starting from an input image, the facade is hierarchically segmented and labeled with a collection of manual and automatic tools. The user can then model constraints that should be maintained in any variation of the input facade design. Subsequently, facade variations are generated for different facade sizes, where multiple variations can be produced for a certain size.

Second, I propose a method for a user to understand and systematically explore good building layouts. Starting from a discrete set of good layouts, I analytically characterize the local shape space of good layouts around each initial layout, compactly encode these spaces, and link them to support transitions across the different local spaces. I represent such transitions in the form of a portal graph. The user can then use the portal graph, along with the family of local shape spaces, to globally and locally explore the space of good building layouts.

Finally, I propose an algorithm to computationally design street networks that balance competing requirements such as quick travel time and reduced through traffic in residential neighborhoods. The user simply provides high-level functional specifications for a target neighborhood, while my algorithm best satisfies the specification by solving for both connectivity and geometric layout of the network.
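As a toy illustration of evaluating the travel-time side of such a trade-off, the sketch below scores two candidate street networks by their average shortest-path travel time using networkx; the grid, edge weights, and removed street are purely illustrative.

```python
# Sketch: score a candidate street network by average shortest-path travel time.
# A real design loop would also penalize through traffic on residential streets;
# the grid, edge weights, and the removed street below are illustrative only.
import networkx as nx

def travel_time_score(G):
    return nx.average_shortest_path_length(G, weight="time")

# Candidate A: a 5x5 grid of streets, one minute per block.
A = nx.grid_2d_graph(5, 5)
nx.set_edge_attributes(A, 1.0, "time")

# Candidate B: the same grid with one interior street removed (e.g., to calm a block).
B = A.copy()
B.remove_edge((2, 2), (2, 3))

print(f"candidate A avg travel time: {travel_time_score(A):.2f}")
print(f"candidate B avg travel time: {travel_time_score(B):.2f}")
```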
Contributors: Bao, Fan (Author) / Wonka, Peter (Thesis advisor) / Maciejewski, Ross (Committee member) / Razdan, Anshuman (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Quad-dominant (QD) meshes, i.e., three-dimensional, 2-manifold polygonal meshes comprising mostly four-sided faces (i.e., quads), are a popular choice for many applications such as polygonal shape modeling, computer animation, base meshes for spline and subdivision surfaces, simulation, and architectural design. This thesis investigates the topic of connectivity control, i.e., exploring different choices of mesh connectivity to represent the same 3D shape or surface. One key concept of QD mesh connectivity is the distinction between regular and irregular elements: a vertex with valence 4 is regular; otherwise, it is irregular. Similarly, a face with four sides is regular; otherwise, it is irregular. For QD meshes, the placement of irregular elements is especially important since it largely determines the achievable geometric quality of the final mesh.

Traditionally, the research on QD meshes focuses on the automatic generation of pure quadrilateral or QD meshes from a given surface. Explicit control of the placement of irregular elements can only be achieved indirectly. To fill this gap, in this thesis, we make the following contributions. First, we formulate the theoretical background about the fundamental combinatorial properties of irregular elements in QD meshes. Second, we develop algorithms for the explicit control of irregular elements and the exhaustive enumeration of QD mesh connectivities. Finally, we demonstrate the importance of connectivity control for QD meshes in a wide range of applications.
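As a tiny concrete example of the regular/irregular distinction defined above, the sketch below counts irregular vertices (valence other than 4) and irregular faces (non-quads) in a mesh given as a face list; the cube is a toy input, unrelated to the thesis's enumeration machinery.

```python
# Sketch: count irregular elements of a quad-dominant mesh given as a face list.
# A vertex is irregular if its valence != 4; a face is irregular if it is not a quad.
# The cube below is a toy closed quad mesh (every vertex has valence 3).
from collections import defaultdict

cube_faces = [
    [0, 1, 2, 3], [4, 5, 6, 7],   # bottom, top
    [0, 1, 5, 4], [1, 2, 6, 5],   # sides
    [2, 3, 7, 6], [3, 0, 4, 7],
]

def irregular_elements(faces):
    edges = set()
    for face in faces:
        for i, v in enumerate(face):
            edges.add(frozenset((v, face[(i + 1) % len(face)])))
    valence = defaultdict(int)
    for e in edges:
        for v in e:
            valence[v] += 1   # number of distinct edges incident to each vertex
    irregular_vertices = [v for v, k in valence.items() if k != 4]
    irregular_faces = [f for f in faces if len(f) != 4]
    return irregular_vertices, irregular_faces

verts, faces = irregular_elements(cube_faces)
print(f"{len(verts)} irregular vertices, {len(faces)} irregular faces")
```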
Contributors: Peng, Chi-Han (Author) / Wonka, Peter (Thesis advisor) / Maciejewski, Ross (Committee member) / Farin, Gerald (Committee member) / Razdan, Anshuman (Committee member) / Arizona State University (Publisher)
Created: 2014