Matching Items (10)

Description

The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
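
To make the propagation idea in the abstract above concrete, here is a minimal sketch of reputation propagation over an agreement graph, assuming tweets are nodes, edge weights are content-similarity scores, and initial reputations come from the ecosystem graph; the function name, damping factor, and iteration count are illustrative assumptions, not the thesis's actual RAProp implementation.

    # Hypothetical sketch of propagating reputation over an agreement graph.
    # reputation: dict mapping tweet_id -> initial ecosystem-based reputation score.
    # agreement_edges: dict mapping tweet_id -> list of (neighbor_id, similarity) pairs,
    # where similarity is a content-similarity weight in [0, 1].
    # The damping factor and fixed iteration count are illustrative assumptions.
    def propagate_reputation(reputation, agreement_edges, damping=0.85, iterations=20):
        scores = dict(reputation)
        for _ in range(iterations):
            updated = {}
            for tweet, base in reputation.items():
                neighbors = agreement_edges.get(tweet, [])
                total_sim = sum(sim for _, sim in neighbors)
                if total_sim == 0:
                    # No agreeing tweets: keep the tweet's own reputation.
                    updated[tweet] = base
                    continue
                # Blend the tweet's own reputation with that of tweets agreeing with it.
                agreed = sum(scores.get(n, 0.0) * sim for n, sim in neighbors) / total_sim
                updated[tweet] = (1 - damping) * base + damping * agreed
            scores = updated
        return scores

In this sketch, a tweet that agrees with few other tweets gains little reputation from its neighbors, which loosely mirrors the abstract's claim that the agreement graph helps counter tweet spam.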
Description

Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency, and safety. In order for the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction should be to understand how human dyads have historically been effective within a joint-task setting. This will ensure that all goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Humans' interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen undergraduate- and graduate-level dyads were recruited from Arizona State University. Participants were assigned to either a surprise condition or a baseline condition. Participants individually took two surveys in order to obtain an accurate understanding of their dispositional and individual levels of trust. The findings showed that participants' levels of interpersonal trust were average. Surprisingly, participants in the surprise condition afterwards showed moderate to high levels of dyad trust. This effect showed that participants became more reliant on their partners when interrupted by a surprising event. Future studies will apply this knowledge to human-robot interaction in order to mimic the seamless team interaction shown in historically effective human dyads.
Contributors: Shaw, Alexandra Luann (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Social scientists from many disciplines have examined trust, including trust between those with different religious affiliations, emotional antecedents of trust, and physiological correlates of trust. However, little is known about how all of these factors intersect to shape trust behaviors. The current study aimed to examine physiological responses while individuals engaged in a trust game with a religious in-group or out-group member. Participants were randomly assigned to one of four conditions in which they were presented with the target's profile before playing the game. In each of the conditions, the target was described as either Catholic or Muslim and as someone who engaged in either costly signaling or anti-costly signaling behavior. In addition to assessing the amount of money invested as a behavioral measure of trust, physiological responses, specifically cardiac interbeat interval (IBI) and respiratory sinus arrhythmia (RSA), were measured. I hypothesized that when playing the trust game with a Catholic target as opposed to a Muslim target, Christian participants would (1) report being more similar to the target, (2) trust the target more, (3) invest more money in the target, (4) have a more positive outlook on the amount invested, and (5) show greater cardiorespiratory down-regulation, reflected by increases in IBI and RSA. Findings revealed that Christian participants reported greater similarity and showed a non-significant trend toward reporting a more positive outlook on (greater confidence in/satisfaction with) their investment decision when playing a Catholic versus a Muslim target. Additionally, Christian participants who played an anti-costly signaling Catholic target showed greater cardiorespiratory down-regulation (increases from baseline for IBI, reflecting slower heart rate, and increases in RSA) than Christian participants who played an anti-costly signaling Muslim target. Results from this study echo previous findings suggesting that perceived similarity may facilitate trust. Findings are also consistent with previous research suggesting that religious in-group or out-group membership may not be as influential in shaping trust decisions if the trustee is costly signaling; for anti-signaling, however, cardiorespiratory down-regulation to a religious in-group member may be apparent. These physiological signals may provide interoceptive information about a peer's trustworthiness.
Contributors: Thibault, Stephanie A (Author) / Roberts, Nicole A. (Thesis advisor) / Burleson, Mary (Committee member) / Hall, Deborah (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Reading partners' actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as self-serving and correspondence biases, lead people to misinterpret their partners' actions and falsely assign blame after an unexpected event. These biases thus further influence people's trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically with humans. However, these improvements may interfere with people's ability to accurately calibrate trust in machines and their capabilities, which requires an understanding of attribution biases' effects on human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and people's assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and report the appropriate level of information about external conditions.
Contributors: Hsiung, Chi-Ping (M.S.) (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

With the growth of autonomous vehicles' prevalence, it is important to understand the relationship between autonomous vehicles and the other drivers around them. More specifically, how does one's knowledge about autonomous vehicles (AV) affect positive and negative affect towards driving in their presence? Furthermore, how does trust of autonomous vehicles correlate with those emotions? These questions were addressed by conducting a survey to measure participants' positive affect, negative affect, and trust when driving in the presence of autonomous vehicles. Participants were issued a pretest measuring existing knowledge of autonomous vehicles, followed by measures of affect and trust. After completing this pretest portion of the study, participants were given information about how autonomous vehicles work, and were then presented with a posttest identical to the pretest. The educational intervention had no effect on positive or negative affect, though there was a positive relationship between positive affect and trust and a negative relationship between negative affect and trust. These findings will be used to inform future research on trust and autonomous vehicles using a test bed developed at Arizona State University. This test bed allows researchers to examine the behavior of multiple participants at the same time and to include autonomous vehicles in studies.
Contributors: Martin, Sterling (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Human-robot teams (HRTs) have seen more frequent use over the past few years, specifically in the context of Search and Rescue (SAR) environments. Trust is an important factor in the success of HRTs. Both trust and reliance must be appropriately calibrated for the human operator to work faultlessly with a robot teammate. In highly complex and time-restrictive environments, such as a search and rescue mission following a disaster, uncertainty information may be given by the robot in the form of confidence to help properly calibrate trust and reliance. This study seeks to examine the impact that confidence information may have on trust and how it may help calibrate reliance in complex HRTs. Trust and reliance data were gathered using a simulated SAR task environment for participants who then received confidence information from the robot for one of two missions. Results from this study indicated that trust was higher when participants received confidence information from the robot; however, no clear relationship between confidence and reliance was found. The findings from this study can be used to further improve human-robot teaming in search and rescue tasks.
Contributors: Wolff, Alexandra (Author) / Cooke, Nancy J (Thesis advisor) / Chiou, Erin (Committee member) / Gray, Rob (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Food-sharing is central to the human experience, involving biological and sociocultural functions. In small-scale societies, sharing food reduces variance in daily food consumption, allowing effective risk management and creating networks of interdependence. It was hypothesized that trust and interdependence would be fostered between people who shared food. In a sample of 221 participants (51% female, mean age = 19.31), sharing food was found to decrease trust and interdependence in a Trust Game with $3.00 and a Dictator Game with chocolates. Participants trusted the least and gave the fewest chocolates when sharing food. Contrary to lay beliefs about sharing food, breaking bread with strangers may hinder rather than foster trust and giving in situations where competition over limited resources is salient, or under one-shot scenarios where people are unlikely to see each other again in the future.
Contributors: Guevara Beltran, Diego Guevara (Author) / Aktipis, Athena C (Thesis advisor) / Kenrick, Douglas T. (Committee member) / Varnum, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

The current study aims to explore factors affecting trust in human-drone collaboration. A gap currently exists in research surrounding civilian drone use and the role of trust in human-drone interaction and collaboration. Specifically, existing research lacks an explanation of the relationship between drone pilot experience, trust, and trust-related behaviors, as well as other factors. Using two dimensions of trust in human-automation teams, purpose and performance, the effects of experience on drone design and trust are studied to explore factors that may contribute to such a model. An online survey was conducted to examine civilian drone operators' experience, familiarity, expertise, and trust in commercially available drones. It was predicted that factors of prior experience (familiarity, self-reported expertise) would have a significant effect on trust in drones. The choice to use or exclude the drone propellers in a search-and-identify scenario, paired with the pilots' experience with drones, would further confirm the relevance of the trust dimensions of purpose versus performance in the human-drone relationship. If the pilot has a positive sense of purpose and benevolence with the drone, the pilot trusts that the drone has a positive intent towards them and the task. If the pilot has trust in the performance of the drone, they ascertain that the drone has the skill to do the task. The researcher found no significant differences between mean trust scores across levels of familiarity, but did find some interaction between self-reported expertise, familiarity, and trust. Future research should further explore more concrete measures of situational participant factors, such as self-confidence and expertise, to understand their role in civilian pilots' trust in their drones.
Contributors: Niichel, Madeline Kathleen (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Today, the United States consumer vehicle market consists of about 276 million legally registered units, a prime candidate for service skulduggery (BTS, 2019). It raised some concerns when research conducted by the author revealed that about half of United States survey participants state they feel uneasy about approaching either a mechanic they know or one who is new to them. Additionally, with only 10% of participants from the same survey fully trusting mechanics, this raises the question: why are so many drivers of consumer vehicles wary about bringing their cars in for service or repair? Furthermore, the author determined that trust within the automotive repair industry is a worldwide issue, and countries with scarce resources have additional struggles of their own. The success of repair centers in countries closer to the equator depends heavily on the mechanic's knowledge and access to repair resources. The author found that this is partially due to the rapid acceleration of the car market without a proper backbone for the automotive repair industry. Ultimately, this has resulted in repair shops with untrained mechanics who perform poor-quality labor at inflated rates (Izogo, 2015). The author focuses on this global industry through the example of the Maasai Automotive Education Center (MAEC), a proposed facility and school located in Talek, Kenya. MAEC is designed to bring automotive customer and repair resources to a rural community that needs them the most to save its land, culture, and people. The author uses various recently conducted global studies, news articles and videos, and personal research to determine the crucial steps and considerations the MAEC development team needs to ensure project sustainability and success. This study's conclusion lists 11 essential attributes recommended for the MAEC repair facility for ethical and high-quality operation.
Contributors: Miller, Miles (Author) / Henderson, Mark (Thesis advisor) / Martin, Thomas (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

The prevalence of autonomous technology is advancing at a rapid rate and is becoming more sophisticated. As this technology becomes more advanced, humans and autonomy may work together as teammates in various settings. A crucial component of teaming is trust, but to date, researchers are limited in assessing trust calibration dynamically in human-autonomy teams. Traditional methods of measuring trust (e.g., Likert scale questionnaires) capture trust after the fact or at a specific time. However, trust fluctuates, and determining what causes this might give machine designers insight into how machines can be improved upon so that operators' trust towards the machines is more properly calibrated. This thesis aimed to assess the validity of an interaction-based metric of trust: anticipatory pushing of information. Anticipatory pushing of information refers to teammate A anticipating the needs of teammate B and pushing that information to teammate B. It was hypothesized that there would be a positive relationship between the frequency of anticipatory pushing and self-reported trust scores. To test this hypothesis, text chat data and self-reported trust scores from a previously conducted study were analyzed across two session types (routine and degraded). Findings indicate that the anticipatory pushing of information and the self-reported trust scores between the human-human pairs were higher in the degraded sessions than in the routine sessions. In degraded sessions, the anticipatory pushing of information between the human-human pairs was associated with human-human trust.
Contributors: Bhatti, Shawaiz (Author) / Cooke, Nancy (Thesis advisor) / Chiou, Erin K (Committee member) / Gutzwiller, Robert (Committee member) / Arizona State University (Publisher)
Created: 2021
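
As a rough illustration of how an interaction-based metric like anticipatory pushing might be quantified, the sketch below counts pushed-information messages per pair and relates the counts to self-reported trust; the message flag, data layout, and use of Pearson's r are assumptions for illustration, not the coding scheme or analysis actually used in the thesis.

    # Hypothetical sketch: relate per-pair anticipatory-push counts to trust scores.
    # Each chat message is assumed to carry a boolean "anticipatory_push" flag marking
    # information sent before the teammate requested it (an assumption, not the
    # thesis's actual coding scheme).
    from statistics import correlation  # Pearson's r; available in Python 3.10+

    def pushing_frequency(messages):
        """Count anticipatory pushes in one pair's chat log."""
        return sum(1 for m in messages if m.get("anticipatory_push"))

    def push_trust_relationship(pairs):
        """pairs: list of (chat_messages, trust_score) tuples, one per human-human pair."""
        counts = [pushing_frequency(msgs) for msgs, _ in pairs]
        trust = [score for _, score in pairs]
        return correlation(counts, trust)

Running push_trust_relationship separately on routine and degraded sessions would give one way to compare how strongly pushing frequency tracks trust in each session type.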