Description
Crises or large-scale emergencies such as earthquakes and hurricanes cause massive damage to lives and property. Crisis response is an essential task to mitigate the impact of a crisis. An effective response to a crisis necessitates information gathering and analysis. Traditionally, this process has been restricted to the information collected by first responders on the ground in the affected region or by official agencies such as local governments involved in the response. However, the ubiquity of mobile devices has empowered people to publish information during a crisis through social media, such as damage reports from a hurricane. Social media has thus emerged as an important channel of information which can be leveraged to improve crisis response. Twitter is a popular medium which has been employed in recent crises. However, it presents new challenges: the data is noisy and uncurated, and it has high volume and high velocity. In this work, I study four key problems in the use of social media for crisis response: effective monitoring and analysis of high-volume crisis tweets, detecting crisis events automatically in streaming data, identifying users who can be followed to effectively monitor a crisis, and finally understanding user behavior during a crisis to detect tweets originating inside crisis regions. To address these problems I propose two systems which assist disaster responders or analysts in collaboratively collecting crisis-related tweets and analyzing them using visual analytics to identify interesting regions, topics, and users involved in disaster response. I present a novel approach to detecting crisis events automatically in noisy, high-volume Twitter streams. I also investigate and introduce novel methods to tackle information overload through the identification of information leaders in information diffusion, who can be followed for efficient crisis monitoring, and the identification of messages originating from crisis regions using user behavior analysis.
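As a hedged illustration of the information-leader idea above (the abstract does not specify the dissertation's actual method), one simple baseline is to rank accounts in a crisis retweet graph so that widely amplified users surface as candidates to follow; the use of networkx and PageRank here is my own assumption:

```python
# Sketch: surface candidate "information leaders" from retweet pairs.
# Edges point retweeter -> original author, so PageRank mass flows
# toward accounts whose posts are widely amplified during the crisis.
import networkx as nx

def rank_information_leaders(retweets, top_k=10):
    """retweets: iterable of (retweeter, original_author) pairs."""
    graph = nx.DiGraph()
    graph.add_edges_from(retweets)
    scores = nx.pagerank(graph, alpha=0.85)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

leaders = rank_information_leaders([("u1", "u2"), ("u3", "u2"), ("u2", "u4")])
```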
Contributors: Kumar, Shamanth (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Maciejewski, Ross (Committee member) / Agarwal, Nitin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Social networking services have emerged as an important platform for large-scale information sharing and communication. With the growing popularity of social media, spamming has become rampant on these platforms. Complex network interactions and evolving content present great challenges for social spammer detection. Unlike some existing, well-studied platforms, newly emerged social media data has distinct characteristics that present new challenges for social spammer detection. First, texts in social media are short and potentially linked with each other via user connections. Second, it is observed that abundant contextual information may play an important role in distinguishing social spammers from normal users. Third, not only the content information but also the social connections in social media evolve very fast. Fourth, it is easy to amass vast quantities of unlabeled data in social media, but costly to obtain labels, which are essential for many supervised algorithms. To tackle the challenges raised by social media data, I focus on developing effective and efficient machine learning algorithms for social spammer detection.

I provide a novel and systematic study of social spammer detection in this dissertation. By analyzing the properties of social network and content information, I propose a unified framework for social spammer detection that collectively uses the two types of information in social media. Motivated by psychological findings in the physical world, I investigate whether sentiment analysis can help spammer detection in online social media. In particular, I conduct an exploratory study to analyze the sentiment differences between spammers and normal users, and present a novel method to incorporate sentiment information into the social spammer detection framework. Given the rapidly evolving nature of social media, I propose a novel framework to efficiently reflect the effect of newly emerging social spammers. To tackle the lack of labeled data in social media, I study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from social media for labeling. Motivated by publicly available label information from other media platforms, I propose to make use of knowledge learned across media to help spammer detection on social media.
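A minimal sketch of what collectively using content and network information can look like, under my own assumptions (a TF-IDF text classifier whose scores are smoothed over the follower graph); this illustrates the idea, not the dissertation's actual framework:

```python
# Sketch: combine a content-only spam classifier with score propagation
# over the social graph, so connected accounts influence each other.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def spam_scores(texts, labels, adjacency, alpha=0.5, iters=10):
    """adjacency: n x n follower matrix; labels: 1 = spammer, 0 = normal."""
    X = TfidfVectorizer().fit_transform(texts)
    content = LogisticRegression(max_iter=1000).fit(X, labels)
    base = content.predict_proba(X)[:, 1]                # content-only scores
    A = adjacency / np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    scores = base.copy()
    for _ in range(iters):                               # smooth over the graph
        scores = (1 - alpha) * base + alpha * A @ scores
    return scores
```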
Contributors: Hu, Xia, Ph.D. (Author) / Liu, Huan (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Ye, Jieping (Committee member) / Faloutsos, Christos (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The aging process due to Bias Temperature Instability (both NBTI and PBTI) and Channel Hot Carrier (CHC) is a key limiting factor of circuit lifetime in CMOS design. Threshold voltage shift due to BTI is a strong function of stress voltage and temperature, complicating stress and recovery prediction. This poses a unique challenge for long-term aging prediction across a wide range of stress patterns. Traditional approaches usually resort to an average stress waveform to simplify the lifetime prediction. They are efficient, but fail to capture circuit operation, especially under dynamic voltage scaling (DVS) or in analog/mixed-signal designs where the stress waveform is much more random. This work presents a suite of modeling solutions for BTI that enable aging simulation under all possible stress conditions. Key features of this work are compact models, based on Reaction-Diffusion theory, that predict BTI aging when the stress voltage is varying. Results for both the reaction-diffusion (RD) and trapping-detrapping (TD) mechanisms are presented to cover the underlying physics. Silicon validation of these models is performed at the 28nm, 45nm, and 65nm technology nodes, at both the device and circuit levels. Efficient simulation leveraging the BTI models under DVS and random input waveforms is applied to representative digital and analog circuits such as ring oscillators and an LNA. Both physical mechanisms are combined into a unified model which improves prediction accuracy at the 45nm and 65nm nodes. A critical failure condition is also illustrated based on NBTI and PBTI at 28nm, and a comprehensive picture of duty cycle shift is shown: DC stress under clock-gating schemes results in a monotonic shift in duty cycle, while AC stress causes the duty cycle to converge close to 50%. The proposed work provides a general and comprehensive solution to BTI aging analysis under random stress patterns.
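As a hedged illustration of what a Reaction-Diffusion compact model under a varying stress waveform involves, the toy sketch below steps the widely cited RD power law (delta-Vth proportional to t^n, with n near 1/6) through an arbitrary voltage sequence by converting accumulated damage back into an equivalent stress time; the coefficients and voltage-acceleration form are illustrative assumptions, not the thesis's calibrated models, and recovery is ignored:

```python
# Sketch: long-term BTI aging under a time-varying stress waveform using
# the RD power law dVth = A(V) * t**n. Each step maps the damage so far
# to an equivalent stress time, so non-average waveforms can be handled.
def bti_aging(stress_v, dt, A0=1e-3, gamma=3.0, n=1.0 / 6.0):
    """stress_v: per-step stress voltages; returns the delta-Vth trace."""
    dvth, trace = 0.0, []
    for v in stress_v:
        if v > 0:
            A = A0 * v ** gamma                 # assumed voltage acceleration
            t_eq = (dvth / A) ** (1.0 / n)      # equivalent stress time so far
            dvth = A * (t_eq + dt) ** n         # advance along the power law
        # recovery during v == 0 phases is ignored in this toy model
        trace.append(dvth)
    return trace

trace = bti_aging([1.0, 1.0, 0.0, 1.2, 1.2], dt=1.0)
```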

Channel hot carrier (CHC) is another dominant degradation mechanism affecting analog and mixed-signal (AMS) circuits, as transistors operate continuously in saturation. A new model is proposed to account for e-e scattering in advanced technology nodes due to the high gate electric field. The model is validated with 28nm and 65nm thick-oxide data for different stress voltages, and demonstrates a shift in the worst-case CHC condition from Vgs = 0.5Vds to Vgs = Vds. A novel iteration-based aging simulation framework for AMS designs is proposed which eliminates the limitations of conventional reliability tools. This approach helps identify a unique positive-feedback mechanism termed bias runaway: a rapid increase of the bias voltage in AMS circuits which occurs when the feedback between the bias current and the effect of channel hot carrier degradation turns positive. CHC degradation is a gradual process, but under specific circumstances the degradation rate can be dramatically accelerated. Such a catastrophic phenomenon is highly sensitive to the initial operating condition, as well as transistor gate length. Based on 65nm silicon data, this work investigates the critical condition that triggers bias runaway and the impact of gate length tuning. New compact models as well as a simulation methodology for circuit diagnosis are developed, and design solutions and trade-offs to avoid bias runaway are proposed, which is vitally important to reliable AMS designs.
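The iteration-based framework alternates between solving the bias point and updating the bias-dependent damage; the toy loop below shows how such a positive-feedback loop can diverge, with stand-in device equations of my own, not the thesis models:

```python
# Sketch: iterative aging loop exhibiting bias runaway. CHC damage shifts
# Vth, the bias network reacts, and a larger bias current accelerates the
# damage -- once the feedback turns positive, the bias voltage diverges.
def bias_trajectory(vth=0.5, vbias=1.0, k_chc=0.02, steps=200, limit=5.0):
    history = []
    for _ in range(steps):
        i_bias = max(vbias - vth, 0.0) ** 2     # toy square-law bias current
        vth -= k_chc * i_bias                   # toy CHC shift per iteration
        vbias = 1.25 - 0.5 * vth                # toy bias-network response
        history.append(vbias)
        if vbias > limit:                       # runaway detected
            break
    return history
```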
Contributors: Sutaria, Ketul (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Chakrabarti, Chaitali (Committee member) / Yu, Shimeng (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A myriad of social media services have emerged in recent years, allowing people to communicate and express themselves conveniently and easily. The pervasive use of social media generates massive data at an unprecedented rate, making it increasingly difficult for online users to find relevant information; in other words, it exacerbates the information overload problem. Meanwhile, users in social media can be both passive content consumers and active content producers, so the quality of user-generated content can vary dramatically from excellent to abusive or spam, which results in a problem of information credibility. Trust, which provides evidence about whom users can trust to share information with and from whom users can accept information without additional verification, plays a crucial role in helping online users collect relevant and reliable information. It has proven to be an effective way to mitigate information overload and credibility problems, and has attracted increasing attention.

As the conceptual counterpart of trust, distrust could be as important as trust, and its value has been widely recognized by the social sciences in the physical world. However, little attention has been paid to distrust in social media. Social media differs from the physical world: (1) its data is passively observed, large-scale, incomplete, noisy, and embedded with rich heterogeneous sources; and (2) distrust is generally unavailable in social media. These unique properties present novel challenges for computing distrust in social media: (1) passively observed social media data does not provide the information social scientists normally use to understand distrust, so how can I understand distrust in social media? (2) distrust is usually invisible in social media, so how can I make invisible distrust visible by leveraging the unique properties of social media data? and (3) little is known about distrust and its role in social media applications, so how can distrust make a difference in social media applications?

The chief objective of this dissertation is to develop solutions to these challenges via innovative research and novel methods. In particular, computational tasks are designed to understand distrust; an innovative task, predicting distrust, is proposed with novel frameworks to make invisible distrust visible; and principled approaches are developed to apply distrust in social media applications. Since distrust is a special type of negative link, I demonstrate the generalization of the properties and algorithms of distrust to negative links, i.e., generalizing findings of distrust, which greatly expands the boundaries of distrust research and largely broadens its applications in social media.
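One common way to frame distrust (negative-link) prediction, offered here only as a hedged sketch and not as the dissertation's framework, is low-rank factorization of the observed signed relation matrix, where strongly negative reconstructed entries flag likely distrust:

```python
# Sketch: predict missing distrust links in a signed network with
# low-rank matrix factorization over observed +1 (trust) / -1 (distrust)
# entries; strongly negative reconstructed scores suggest distrust.
import numpy as np

def predict_signed_links(S, observed, rank=8, lr=0.05, reg=0.1, epochs=200):
    """S: n x n signed matrix; observed: list of (i, j) known entries."""
    n = S.shape[0]
    rng = np.random.default_rng(0)
    U = rng.normal(0.0, 0.1, (n, rank))
    V = rng.normal(0.0, 0.1, (n, rank))
    for _ in range(epochs):
        for i, j in observed:                  # SGD on observed entries only
            err = S[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U @ V.T                             # entries far below 0: distrust
```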
Contributors: Tang, Jiliang (Author) / Liu, Huan (Thesis advisor) / Xue, Guoliang (Committee member) / Ye, Jieping (Committee member) / Aggarwal, Charu (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The US Senate is the venue of political debates where federal bills are formed and voted on. Senators show their support for or opposition to bills with their votes, and this information makes it possible to extract the polarity of the senators. Similarly, the blogosphere plays an increasingly important role as a forum for public debate, where authors display sentiment toward issues, organizations, or people using natural language.

In this research, given a mixed set of senators/blogs debating a set of political issues from opposing camps, I use signed bipartite graphs to model debates, and I propose an algorithm for partitioning both the opinion holders (senators or blogs) and the issues (bills or topics) comprising the debate into binary opposing camps. Simultaneously, my algorithm places the entities on a univariate scale. Using this scale, a researcher can identify moderate and extreme senators/blogs within each camp, and polarizing versus unifying issues. Through performance evaluations I show that my proposed algorithm provides an effective solution to the problem and performs much better than existing baseline algorithms adapted to this new problem. In my experiments, I used real data from the political blogosphere and US Congress records, as well as synthetic data obtained by varying the polarization and degree distribution of the graph's vertices, to show the robustness of my algorithm.
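One plausible way to realize joint camp partitioning and univariate scaling, sketched here under my own assumptions rather than as the thesis algorithm, is spectral: the leading singular vectors of the signed senator-by-bill vote matrix position both kinds of entities on a single axis:

```python
# Sketch: spectral camp partitioning and scaling of a signed bipartite
# vote graph. The sign of each coordinate gives the camp (the global
# sign is an arbitrary labeling); the magnitude separates moderates
# (near zero) from extremes.
import numpy as np

def partition_and_scale(votes):
    """votes: senators x bills matrix, +1 = yea, -1 = nay, 0 = absent."""
    U, s, Vt = np.linalg.svd(votes, full_matrices=False)
    senators = U[:, 0] * np.sqrt(s[0])   # senator positions on one axis
    bills = Vt[0] * np.sqrt(s[0])        # bill positions on the same axis
    return np.sign(senators), senators, np.sign(bills), bills
```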

I also applied my algorithm to all terms of the US Senate to date for longitudinal analysis, and developed a web-based interactive user interface, www.PartisanScale.com, to visualize the analysis.

US politics is most often polarized along the left/right alignment of the entities. However, certain issues do not reflect the polarization due to political parties, but instead show a split correlating with the demographics of the senators, or simply receive consensus. I propose a hierarchical clustering algorithm that identifies groups of bills sharing the same polarization characteristics, and I developed a web-based interactive user interface, www.ControversyAnalysis.com, to visualize the clusters while providing a synopsis through distribution charts, word clouds, and heat maps.
Contributors: Gokalp, Sedat (Author) / Davulcu, Hasan (Thesis advisor) / Sen, Arunabha (Committee member) / Liu, Huan (Committee member) / Woodward, Mark (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Several state-of-the-art monitoring and control systems, such as DC motor controllers, power line monitoring and protection systems, instrumentation systems, and battery monitors, require direct digitization of high voltage input signals. Analog-to-Digital Converters (ADCs) that digitize high voltage signals require high linearity and low voltage-coefficient capacitors. A built-in self-calibration and digital-trim algorithm is proposed that corrects static mismatches in the Capacitive Digital-to-Analog Converter (CDAC) used in Successive Approximation Register ADCs (SAR ADCs). The algorithm uses a dynamic error correction (DEC) capacitor to cancel the static errors occurring in each capacitor of the array as a first step upon power-up, eliminating the need for an extra calibration DAC. Self-trimming is then performed digitally during normal ADC operation. The algorithm is implemented on a 14-bit, high-voltage-input-range SAR ADC with integrated dynamic error correction capacitors. The IC is fabricated in a 0.6-um high-voltage-compliant CMOS process and accepts up to a 24Vpp differential input signal. The proposed approach achieves a 73.32 dB Signal-to-Noise-and-Distortion Ratio (SNDR), an improvement of 12.03 dB after self-calibration, at a 400 kS/s sampling rate, consuming 90 mW from a +/-15V supply. The calibration circuitry occupies 28% of the capacitor DAC area and consumes less than 15 mW during operation. Measurement results show that the algorithm reduces INL from as high as 7 LSBs down to 1 LSB, and that it works even in the presence of larger mismatches exceeding 260 LSBs. Similarly, it reduces DNL errors from 10 LSBs down to 1 LSB. The ADC occupies an active area of 9.76 mm2.
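A behavioral sketch of the calibrate-once, trim-digitally idea described above (the mismatch values below are made up for illustration; the real design measures errors with the on-chip DEC capacitor):

```python
# Sketch: each CDAC bit's actual weight is measured once at power-up
# against its ideal binary weight, and the stored error codes are
# subtracted from raw codes during normal conversions.
IDEAL = [2 ** b for b in range(13, -1, -1)]        # ideal 14-bit weights
ERRORS = [0.003, -0.002] + [0.0] * 12              # hypothetical MSB mismatch
ACTUAL = [w * (1 + e) for w, e in zip(IDEAL, ERRORS)]
TRIM = [a - i for a, i in zip(ACTUAL, IDEAL)]      # stored at power-up

def corrected_code(bits):
    """bits: 14 SAR decisions, MSB first; returns digitally trimmed code."""
    raw = sum(b * a for b, a in zip(bits, ACTUAL))       # what the CDAC gave
    return raw - sum(b * t for b, t in zip(bits, TRIM))  # subtract errors
```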
Contributors: Thirunakkarasu, Shankar (Author) / Bakkaloglu, Bertan (Thesis advisor) / Garrity, Douglas (Committee member) / Kozicki, Michael (Committee member) / Kitchen, Jennifer (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Recent efforts in data cleaning have focused mostly on problems like data deduplication, record matching, and data standardization; few focus on fixing incorrect attribute values in tuples. Correcting values in tuples is typically performed via a minimum-cost repair of tuples that violate static constraints like conditional functional dependencies (CFDs), which have to be provided by domain experts or learned from a clean sample of the database. In this thesis, I provide a method for correcting individual attribute values in a structured database using a Bayesian generative model and a statistical error model learned directly from the noisy database, thus avoiding the need for a domain expert or master data. I also show how to efficiently perform consistent query answering over a dirty database using this model, in case write permissions to the database are unavailable. A Map-Reduce architecture to perform this computation in a distributed manner is also shown. I evaluate these methods over both synthetic and real data.
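In sketch form, the scoring idea is Bayesian: each candidate correction c is ranked by a prior P(c | rest of tuple) from a generative model learned on the data, times an error-model likelihood P(observed | c). The string-similarity likelihood and names below are my illustrative assumptions, not the thesis's learned models:

```python
# Sketch: pick the correction maximizing P(candidate | other attributes)
# * P(observed value | candidate), with a string-similarity stand-in for
# the learned error model.
from difflib import SequenceMatcher

def best_correction(observed, candidates, prior):
    """prior: dict mapping candidate value -> P(c | rest of tuple)."""
    def error_likelihood(obs, cand):
        return SequenceMatcher(None, obs, cand).ratio()  # similarity proxy
    return max(candidates,
               key=lambda c: prior.get(c, 1e-9) * error_likelihood(observed, c))

fix = best_correction("Bostn", ["Boston", "Austin"],
                      {"Boston": 0.7, "Austin": 0.3})    # -> "Boston"
```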
Contributors: De, Sushovan (Author) / Kambhampati, Subbarao (Thesis advisor) / Chen, Yi (Committee member) / Candan, K. Selcuk (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As robotic technology and its various uses grow steadily more complex and ubiquitous, humans are coming into increasing contact with robotic agents. A large portion of such contact is cooperative interaction, where both humans and robots are required to work on the same application towards achieving common goals. These application scenarios are characterized by a need to leverage the strengths of each agent as part of a unified team to reach those common goals. To ensure that the robotic agent is truly a contributing team-member, it must exhibit some degree of autonomy in achieving goals that have been delegated to it. Indeed, a significant portion of the utility of such human-robot teams derives from the delegation of goals to the robot, and autonomy on the part of the robot in achieving those goals. In order to be considered truly autonomous, the robot must be able to make its own plans to achieve the goals assigned to it, with only minimal direction and assistance from the human.

Automated planning provides the solution to this problem; indeed, one of the main motivations underpinning the beginnings of the field of automated planning was to provide planning support for Shakey the robot with the STRIPS system. For a long time, however, automated planners suffered from scalability issues that precluded their application to real-world, real-time robotic systems. Recent decades have seen a gradual resolution of those issues, and fast planning systems are now the norm rather than the exception. However, some of these advances in speedup and scalability have been achieved by ignoring or abstracting away challenges that real-world integrated robotic systems must confront.

In this work, the problem of planning for human-robot teaming is introduced. The central idea, the use of automated planning systems as mediators in such human-robot teaming scenarios, is presented along with the main challenges, inspired by real-world scenarios, that must be addressed to make such planning seamless: (i) goals which can be specified or changed at execution time, after the planning process has completed; (ii) worlds and scenarios where the state changes dynamically while a previous plan is executing; (iii) models that are incomplete and can be changed during execution; and (iv) information about the human agent's plan and intentions that can be used for coordination. These challenges are compounded by the fact that the human-robot team must execute in an open world, rife with dynamic events and other agents, and in a manner that encourages the exchange of information between the human and the robot. As an answer to these challenges, implemented solutions and a fielded prototype that combines all of those solutions into one planning system are discussed. Results from running this prototype in real-world scenarios are presented, and extensions to some of the solutions are offered as appropriate.
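Challenges (i)-(iii) all imply a planner that re-plans while execution continues; the loop below is a schematic sketch of such a mediation cycle, with the planner/executor/monitor functions as hypothetical placeholders rather than the prototype's actual interfaces:

```python
# Sketch: a replan-on-change mediation loop. New goals, world events, or
# model updates arriving mid-execution trigger replanning while the
# robot keeps acting on the current plan.
def mediation_loop(plan_fn, step_fn, poll_fn, state, goals):
    """plan_fn(state, goals) -> plan; step_fn executes one plan action;
    poll_fn() -> None or (new_state, new_goals). All are placeholders."""
    plan = plan_fn(state, goals)
    while goals and plan:
        change = poll_fn()                     # goal / state / model change?
        if change is not None:
            state, goals = change
            plan = plan_fn(state, goals)       # replan without stopping
        else:
            state, goals, plan = step_fn(state, goals, plan)
    return state
```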
Contributors: Talamadupula, Kartik (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Liu, Huan (Committee member) / Scheutz, Matthias (Committee member) / Smith, David E. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Writing instruction poses both cognitive and affective challenges, particularly for adolescents. American teens not only fall short of national writing standards, but also tend to lack motivation for school writing, claiming it is too challenging and that they have nothing interesting to write about. Yet teens enthusiastically immerse themselves in informal writing via text messaging, email, and social media, regularly sharing their thoughts and experiences with a real audience. While these activities are, in fact, writing, research indicates that teens instead view them as simply "communication" or "being social." Accordingly, the aim of this work was to infuse formal classroom writing with the naturally engaging elements of informal social media writing, to positively impact writing quality and the motivation to write. The result is Sparkfolio, an online prewriting tool that (a) addresses affective challenges by allowing students to choose personally relevant topics using their own social media data, and (b) provides cognitive support with a planner that helps develop and organize ideas in preparation for writing a first draft. The tool was evaluated in a study involving 46 eleventh-grade English students writing three personal narratives each, with three experimental conditions: (a) planning with Sparkfolio using self-authored social media posts; (b) planning with Sparkfolio using only posts authored by one's friends; and (c) a control group that did not use Sparkfolio. The dependent variables were the changes in writing motivation and writing quality from before to after the intervention: a scaled pre/posttest measured writing motivation, and the first and third narratives served as writing-quality pre/posttests. A usability scale, logged Sparkfolio data, and qualitative measures were also analyzed. Results indicated that participants who used Sparkfolio had statistically significantly higher gains in writing quality than the control group, validating Sparkfolio as effective. Additionally, while not statistically significant, results suggested that planning with self-authored data provided greater writing-quality and motivational benefits than data authored by others. This work provides initial empirical evidence that (securely) leveraging students' own social media data holds potential for fostering meaningful personalized learning.
Contributors: Sadauskas, John (Author) / Atkinson, Robert K. (Thesis advisor) / Savenye, Wilhelmina (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Rapid urban expansion has greatly extended the physical boundary of our living areas, along with a large number of POIs (points of interest) being developed. A POI is a specific location (e.g., hotel, restaurant, theater, mall) that a user may find useful or interesting. When exploring the city and neighborhood, the increasing number of POIs can enrich people's daily lives, providing them with more choices of life experience than before, while at the same time bringing a "curse of choices" that makes it difficult for a user to efficiently reach a satisfying decision on "where to go." Personalized POI recommendation is a task proposed with the purpose of helping users filter out uninteresting POIs and reduce decision-making time, and it can also benefit virtual marketing.

Developing POI recommender systems requires observation of human mobility w.r.t. real-world POIs, which is infeasible with traditional mobile data. However, the recent development of location-based social networks (LBSNs) provides such observation. Typical location-based social networking sites allow users to "check in" at POIs with smartphones, leave tips and share that experience with their online friends. The increasing number of LBSN users has generated large amounts of LBSN data, providing an unprecedented opportunity to study human mobility for personalized POI recommendation in spatial, temporal, social, and content aspects.

Different from recommender systems in other categories, e.g., movie recommendation on Netflix, friend recommendation on dating websites, or item recommendation on online shopping sites, personalized POI recommendation on LBSNs has unique challenges due to the stochastic property of human mobility and the mobile-behavior indications provided by the LBSN information layout. The strong correlations between geographical POI information and other LBSN information result in three major human mobility properties, i.e., geo-social correlations, geo-temporal patterns, and geo-content indications, which are neither observed in other recommender systems nor exploited in current POI recommendation. In this dissertation, we investigate these properties on LBSNs and propose personalized POI recommendation models accordingly. The performance evaluated on real-world LBSN datasets validates the power of these properties in capturing user mobility and demonstrates the ability of our models for personalized POI recommendation.
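As a toy illustration of combining the geo-social signals named above (an assumed scoring form, not the dissertation's models), a candidate POI can be scored by friends' check-ins, the user's own history, and distance decay from home:

```python
# Sketch: score a candidate POI from geo-social, history, and distance
# signals, then recommend the highest-scoring unvisited POIs.
import math

def poi_score(user, poi, checkins, friends, home, loc, w=(1.0, 0.5, 1.0)):
    """checkins: (user, poi) -> count; home: user -> (x, y); loc: poi -> (x, y)."""
    social = sum(checkins.get((f, poi), 0) for f in friends.get(user, []))
    history = checkins.get((user, poi), 0)
    dist = math.dist(home[user], loc[poi])        # geographical decay
    return w[0] * social + w[1] * history - w[2] * math.log1p(dist)
```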
Contributors: Gao, Huiji (Author) / Liu, Huan (Thesis advisor) / Xue, Guoliang (Committee member) / Ye, Jieping (Committee member) / Caverlee, James (Committee member) / Arizona State University (Publisher)
Created: 2014