Matching Items (684)
Description
Many web search improvements have been developed since the advent of the modern search engine, but one underrepresented area is the application of specific customizations to search results for educational web sites. To address this issue and improve the relevance of search results in automated learning environments, this work integrates context-aware search principles with preference-based re-ranking and query modifications. The research investigates context-sensitive, preference-based re-ranking of results, which takes user input about preferred content, and combines it with query modifications that automatically search for a variety of modified terms derived from the given query, integrating these results into the overall re-ranking for the context. The result of this work is a novel web search algorithm that could be applied to any online learning environment attempting to collect relevant resources for learning about a given topic. The algorithm has been evaluated through user studies comparing traditional search results against the context-aware results returned by the algorithm for a given topic. These studies explore how this integration of methods can improve the relevance of search results compared with other modern search engines.
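The abstract above describes blending a base engine's ranking with the user's stated content preferences. A minimal sketch of such preference-based re-ranking follows; the function name, blending weight, and all data are invented for illustration and do not reproduce the thesis's actual algorithm.

```python
def rerank(results, preferences, alpha=0.7):
    """Blend base-engine rank with a user topic preference.

    results: list of (title, topic) in base-engine rank order.
    preferences: dict mapping topic -> preference score in [0, 1].
    alpha: weight given to the preference signal (illustrative choice).
    """
    scored = []
    n = len(results)
    for i, (title, topic) in enumerate(results):
        rank_score = (n - i) / n                  # higher for earlier results
        pref_score = preferences.get(topic, 0.0)  # user's stated preference
        scored.append((alpha * pref_score + (1 - alpha) * rank_score, title))
    scored.sort(reverse=True)
    return [title for _, title in scored]

results = [("Intro to Loops", "general"),
           ("Recursion Tutorial", "recursion"),
           ("Sorting Basics", "general")]
prefs = {"recursion": 1.0}
print(rerank(results, prefs))  # → ['Recursion Tutorial', 'Intro to Loops', 'Sorting Basics']
```

With no stated preferences the base ordering is preserved; a strong preference pulls a matching result to the top without discarding the engine's ranking entirely.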
Contributors: Van Egmond, Eric (Author) / Burleson, Winslow (Thesis advisor) / Syrotiuk, Violet (Thesis advisor) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Corporations invest considerable resources to create, preserve, and analyze their data; yet while organizations are interested in protecting against unauthorized data transfer, no comprehensive metric exists to discriminate what data are at risk of leaking.

This thesis motivates the need for a quantitative leakage risk metric, and provides a risk assessment system, called Whispers, for computing it. Using unsupervised machine learning techniques, Whispers uncovers themes in an organization's document corpus, including previously unknown or unclassified data. Then, by correlating documents with their authors, Whispers can identify which data are easier to contain, and conversely which are at risk.

Using the Enron email database, Whispers constructs a social network segmented by topic themes. This graph uncovers communication channels within the organization. Using this social network, Whispers determines the risk of each topic by measuring the rate at which simulated leaks are not detected. For the Enron set, Whispers identified 18 separate topic themes between January 1999 and December 2000. The highest risk data emanated from the legal department with a leakage risk as high as 60%.
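As a rough illustration of the simulated-leak measurement described above, the sketch below estimates a topic's risk as the fraction of random leaks that a membership-based detector misses. The detector, the user set, and every parameter are invented for illustration; this is not Whispers' actual model.

```python
import random

def leakage_risk(topic_members, all_users, trials=10_000, seed=42):
    """Fraction of simulated leaks that a membership-based detector misses.

    In this toy model, a leak from a topic member to a receiver outside
    the topic's known communication channel goes undetected.
    """
    rng = random.Random(seed)
    members = set(topic_members)
    senders = list(members)
    undetected = 0
    for _ in range(trials):
        sender = rng.choice(senders)       # a leaking insider
        receiver = rng.choice(all_users)   # leak goes to a random user
        if receiver not in members:        # outside the channel: missed
            undetected += 1
    return undetected / trials

users = [f"u{i}" for i in range(10)]
print(leakage_risk(users[:4], users))  # near 0.6: 6 of 10 users sit outside the channel
```

The estimate converges to the share of users outside the topic's channel, which is the intuition behind scoring broadly exposed topics as higher risk.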
Contributors: Wright, Jeremy (Author) / Syrotiuk, Violet (Thesis advisor) / Davulcu, Hasan (Committee member) / Yau, Stephen (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis proposes a novel approach to establishing a trust model in a social network scenario based on users' emails. Email is one of the most important social connections today. By analyzing email exchange activities among users, a social network trust model can be established to judge the trust rate between any two users. The trust checking process is divided into two steps: local checking and remote checking. Local checking directly contacts the email server to calculate the trust rate based on the user's own email communication history. Remote checking is a distributed computing process that gets help from the user's social network friends to build the trust rate together. The email-based trust model is built upon a cloud computing framework called MobiCloud. Inside MobiCloud, each user occupies a virtual machine which can directly communicate with others. Based on this feature, the distributed trust model is implemented as a combination of local analysis and remote analysis in the cloud. Experiment results show that the trust evaluation model can give an accurate trust rate even in a small-scale social network without many social connections. With this trust model, security in both social network services and email communication can be improved.
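The two-step trust check described above could be sketched as follows. The scoring functions, weights, and averaging scheme are illustrative assumptions, not MobiCloud's actual implementation.

```python
def local_trust(sent, received):
    """Local checking: a trust rate in [0, 1] from the user's own email
    history, scoring balanced two-way traffic highest."""
    total = sent + received
    if total == 0:
        return 0.0
    return 2 * min(sent, received) / total

def combined_trust(sent, received, friend_rates, w_local=0.6):
    """Remote checking: blend local evidence with the average trust rate
    reported by the user's social network friends."""
    local = local_trust(sent, received)
    if not friend_rates:
        return local
    remote = sum(friend_rates) / len(friend_rates)
    return w_local * local + (1 - w_local) * remote

# 30 emails sent, 20 received, two friends reporting rates 0.9 and 0.7.
print(combined_trust(30, 20, [0.9, 0.7]))
```

Splitting the computation this way mirrors the thesis's design: the local score needs only the user's own mail server, while the remote score is assembled from friends' independently computed rates.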
Contributors: Zhong, Yunji (Author) / Huang, Dijiang (Thesis advisor) / Dasgupta, Partha (Committee member) / Syrotiuk, Violet (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Exhaustive testing is generally infeasible except in the smallest of systems. Research has shown that testing the interactions among fewer (up to 6) components is generally sufficient while retaining the capability to detect up to 99% of defects. This leads to a substantial decrease in the number of tests. Covering arrays are combinatorial objects that guarantee that every interaction is tested at least once.

In the absence of direct constructions, forming small covering arrays is generally an expensive computational task. Algorithms to generate covering arrays have been extensively studied, yet no single algorithm provides the smallest solution. More recently, research has been directed towards a new technique called post-optimization. These algorithms take an existing covering array and attempt to reduce its size.

This thesis presents a new idea for post-optimization by representing covering arrays as graphs. Some properties of these graphs are established and the results are contrasted with existing post-optimization algorithms. The idea is then generalized to close variants of covering arrays with surprising results which in some cases reduce the size by 30%. Applications of the method to generation and test prioritization are studied and some interesting results are reported.
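The defining property above, that every interaction is tested at least once, can be checked directly. The sketch below verifies strength-t coverage for a small binary covering array; it illustrates the definition only and is not the thesis's post-optimization algorithm.

```python
from itertools import combinations

def is_covering_array(rows, t, levels):
    """Check that every t-way interaction of symbol values appears in
    at least one row."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < levels ** t:   # some value combination never occurs
            return False
    return True

# A classic strength-2 covering array: 5 rows suffice for 4 binary
# factors, versus 2**4 = 16 rows for exhaustive testing.
ca = [
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]
print(is_covering_array(ca, t=2, levels=2))  # → True
```

Dropping any row breaks coverage for some pair of columns, which is exactly the tension post-optimization works against: removing rows while keeping this check true.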
Contributors: Karia, Rushang Vinod (Author) / Colbourn, Charles J (Thesis advisor) / Syrotiuk, Violet (Committee member) / Richa, Andréa W. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Medium access control (MAC) is a fundamental problem in wireless networks. In ad-hoc wireless networks especially, many of the performance and scaling issues these networks face can be attributed to their use of the core IEEE 802.11 MAC protocol: the distributed coordination function (DCF).

Smoothed Airtime Linear Tuning (SALT) is a new contention window tuning algorithm proposed to address some of the deficiencies of DCF in 802.11 ad-hoc networks. SALT works alongside a new user-level and optimized implementation of REACT, a distributed resource allocation protocol, to ensure that each node secures the amount of airtime allocated to it by REACT. The algorithm accomplishes this by tuning the contention window size parameter that is part of the 802.11 backoff process.

SALT converges more tightly on airtime allocations than a contention window tuning algorithm from previous work; this increases fairness in transmission opportunities and reduces jitter more than either 802.11 DCF or the other tuning algorithm. REACT and SALT were also extended to the multi-hop flow scenario with the introduction of a new airtime reservation algorithm. With a reservation in place, multi-hop TCP throughput actually increased when running SALT and REACT as compared to 802.11 DCF, and the combination of protocols still maintained its fairness and jitter advantages. All experiments were performed on a wireless testbed, not in simulation.
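A tuning loop of the kind the abstract describes might be sketched as below: each update compares a node's measured airtime share with its REACT allocation and adjusts the 802.11 contention window with smoothing across updates. The linear update rule, gain, smoothing factor, and bounds are assumptions for illustration, not SALT's published design.

```python
def tune_cw(cw, measured, allocated, gain=2.0, beta=0.5,
            cw_min=16, cw_max=1024):
    """One smoothed, linear contention-window update.

    measured, allocated: airtime shares in [0, 1].
    A larger contention window makes the node transmit less often.
    """
    error = measured - allocated           # > 0: node is over its allocation
    target = cw * (1.0 + gain * error)     # grow CW to give airtime back
    smoothed = beta * cw + (1 - beta) * target  # smooth successive updates
    return int(min(cw_max, max(cw_min, smoothed)))

print(tune_cw(64, measured=0.40, allocated=0.25))  # over allocation: CW grows
print(tune_cw(64, measured=0.10, allocated=0.25))  # under allocation: CW shrinks
```

The smoothing term is what keeps successive window sizes from oscillating, which is the intuition behind the jitter reduction the abstract reports.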
Contributors: Mellott, Matthew (Author) / Syrotiuk, Violet (Thesis advisor) / Colbourn, Charles (Committee member) / Tinnirello, Ilenia (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This project uses Kenneth Burke’s theory of dramatism and the pentad to analyze popular narrative films about human sex trafficking. It seeks to understand the relationship between a film’s dominant philosophy (as highlighted by utilizing Burke’s pentad), its inherently suggested solutions to trafficking, and the effect that the film has on viewers’ perception of trafficking. Twenty narrative feature films about sex trafficking, such as the 2008 film Taken, were analyzed for this study. Three of Burke’s five philosophies emerged from the analysis: idealism, mysticism, and materialism. Films aligned with idealism were found to implicitly blame women for their own trafficking. Films aligned with mysticism were found to rally audiences around violence and racism rather than women’s freedom. Films aligned with materialism were found to be the most empathetic toward trafficked women. The paper concludes that films about sex trafficking have a high potential to harm women who have exited trafficking, and asserts that the most valuable films about trafficking are those that are not simply based on a true story but are created by trafficking survivors themselves, such as the 2016 film Apartment 407.
Contributors: Hamby, Hannah Mary (Co-author) / Raum, Brionna (Co-author) / Edson, Belle (Thesis director) / Zanin, Alaina (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Hugh Downs School of Human Communication (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
This document is a proposal for a research project, submitted as an Honors Thesis to Barrett, The Honors College at Arizona State University. The proposal summarizes previous findings and literature about women survivors of domestic violence who are suffering from post-traumatic stress disorder, and outlines the design and measures of the study. At this time, the study has not been completed; it may be completed at a future time.
Contributors: Kunst, Jessica (Author) / Hernandez Ruiz, Eugenia (Thesis director) / Belgrave, Melita (Committee member) / School of Music (Contributor) / Dean, W.P. Carey School of Business (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
As an example of "big data," we consider a repository of Arctic sea ice concentration data collected from satellites over the years 1979-2005. The data is represented by a graph, where vertices correspond to measurement points, and an edge is inserted between two vertices if the Pearson correlation coefficient between them exceeds a threshold. We investigate new questions about the structure of the graph related to betweenness, closeness centrality, vertex degrees, and characteristic path length. We also investigate whether an offset of weeks and years in graph generation results in a cosine similarity value that differs significantly from expected values. Finally, we relate the computational results to trends in Arctic ice.
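The graph construction described above can be sketched directly: compute the Pearson correlation coefficient between each pair of measurement series and insert an edge when it exceeds a threshold. The data and threshold value below are illustrative, not the Arctic dataset.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_graph(series, threshold=0.9):
    """Edge list over vertex indices: one vertex per measurement point,
    an edge when the correlation exceeds the threshold."""
    edges = []
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            if pearson(series[i], series[j]) > threshold:
                edges.append((i, j))
    return edges

series = [
    [1.0, 2.0, 3.0, 4.0],   # vertex 0
    [2.1, 3.9, 6.2, 8.0],   # vertex 1: strongly correlated with vertex 0
    [4.0, 1.0, 3.0, 2.0],   # vertex 2: uncorrelated with the others
]
print(correlation_graph(series))  # → [(0, 1)]
```

Once the edge list exists, the structural measures the abstract mentions (betweenness, closeness centrality, vertex degrees, characteristic path length) are computed on this graph.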
Contributors: Dougherty, Ryan Edward (Author) / Syrotiuk, Violet (Thesis director) / Colbourn, Charles (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2015-05
Description
While there are many characteristics that make up a woman, femininity is one that is difficult to define because it is a communication and expression practice defined by culture. This research explores historical accounts of femininity in the 1950s as seen through the exemplar of the white, middle-class "happy homemaker" or "happy housewife." The 1950s is important to study in light of changing gender and social dynamics due to the transition from World War II to a period of prosperity. By using primary sources from the 1950s and secondary historical analyses, this research takes the form of a sociological accounting of 1950s' femininity and the lessons that can be applied today. Four cultural forces led to homemakers having an unspoken identity crisis because they defined themselves in terms of relationships with others and struggled to uphold a certain level of femininity. The forces are: the feminine mystique, patriotism, cultural normalcy, and unnecessary choices. These forces caused women to have unhealthy home relationships in their marriages and motherhood while persistently performing acts to prove their self-worth, such as housework and consumption. It is important not to look back at the 1950s as an idyllic time without also considering the social and cultural practices that fostered a feminine conformity in women. Today, changes can be made to allow women to express femininity in modern ways by adapting to reality instead of to outdated values. For example, changes in maternity leave policies allow women to be mothers and still be in the workforce. Additionally, women should find fulfillment in themselves by establishing a strong personal identity and confidence in their womanhood before identifying through other people or through society.
Created: 2018-12
Description
The Centers for Disease Control and Prevention in the United States announced that there has been roughly a 50% increase in the prevalence of food allergies between 1997 and 2011. A food allergy is a medical condition in which exposure to a certain food triggers a harmful immune response in the body, known as an allergic reaction. These reactions range from mild to fatal, and they are caused mainly by the top 8 major food allergens: dairy, eggs, peanuts, tree nuts, wheat, soy, fish, and shellfish. Food allergies mainly affect children under the age of 3; some children outgrow their sensitivity over time, and most people develop allergies at a young age rather than later in life. The rise in prevalence is becoming a frightening problem around the world, and emerging theories attempt to ascribe a cause. Three well-known hypotheses are discussed: the Hygiene Hypothesis, the Dual-Allergen Exposure Hypothesis, and the Vitamin-D Deficiency Hypothesis. Beyond that, this report proposes that a new hypothesis be studied, the Food Systems Hypothesis, which theorizes that the rise of food allergies is caused by changes in the food itself, particularly the pesticides used to cultivate it.
Contributors: Cromer, Kelly (Author) / Lee, Rebecca (Thesis director) / MacFadyen, Joshua (Committee member) / Sanford School of Social and Family Dynamics (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12