Matching Items (125)
Description
The price-based marketplace has dominated the construction industry. The majority of owners use price-based practices of management (expectation and decision making, control, direction, and inspection). The price-based, management-and-control paradigm has not worked. Clients have been moving toward the best value environment (hiring contractors who know what they are doing, who preplan, and who manage and minimize risk and deviation). Owners are trying to move from client direction and control to hiring an expert and allowing that expert to do the quality control and risk management. This change of environments shifts the contractor's paradigm from a reactive to a proactive position, from a bureaucratic, non-accountable to an accountable position, from a relationship-based, non-measuring to a measuring entity, and to a contractor who manages and minimizes the risk that it does not control. Years of price-based practices have caused poor quality and low performance in the construction industry. This research identifies what a best value contractor or vendor is, what factors make up a best value vendor, and a methodology for transforming a vendor into a best value vendor. It uses deductive logic and a case study to confirm the logic and the proposed methodology.
Contributors: Pauli, Michele (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As the information available to lay users through autonomous data sources continues to increase, mediators become important to ensure that the wealth of information available is tapped effectively. A key challenge that these information mediators need to handle is the varying levels of incompleteness in the underlying databases in terms of missing attribute values. Existing approaches such as Query Processing over Incomplete Autonomous Databases (QPIAD) aim to mine and use Approximate Functional Dependencies (AFDs) to predict and retrieve relevant incomplete tuples. These approaches make independence assumptions about missing values, which critically hobbles their performance when tuples contain missing values for multiple correlated attributes. In this thesis, I present a principled probabilistic alternative that views an incomplete tuple as defining a distribution over the complete tuples that it stands for. I learn this distribution in terms of Bayes networks. My approach involves mining ("learning") Bayes networks from a sample of the database, and using them to do both imputation (predicting a missing value) and query rewriting (retrieving relevant results with incompleteness on the query-constrained attributes, when the data sources are autonomous). I present empirical studies to demonstrate that (i) at higher levels of incompleteness, when multiple attribute values are missing, Bayes networks provide significantly higher classification accuracy, and (ii) the relevant possible answers retrieved by queries reformulated using Bayes networks provide higher precision and recall than AFDs while keeping query processing costs manageable.
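A minimal sketch of the core idea, using a hypothetical two-attribute relation (the relation, values, and names are illustrative assumptions, not the thesis's code or data): an incomplete tuple is treated as a distribution over completions, learned from a database sample, rather than as a single imputed value.

```python
# A hypothetical relation with correlated attributes: make -> model.
from collections import Counter, defaultdict

sample = [  # complete tuples drawn from a sample of the database
    ("Honda", "Accord"), ("Honda", "Civic"), ("Honda", "Accord"),
    ("Toyota", "Camry"), ("Toyota", "Corolla"), ("Toyota", "Camry"),
]

# "Learn" a two-node Bayes network make -> model as conditional counts.
cond = defaultdict(Counter)
for make, model in sample:
    cond[make][model] += 1

def completions(make):
    """Distribution over possible models given the observed make."""
    counts = cond[make]
    total = sum(counts.values())
    return {model: count / total for model, count in counts.items()}

# An incomplete tuple ("Honda", NULL) defines a distribution over completions:
print(completions("Honda"))  # {'Accord': 0.666..., 'Civic': 0.333...}
```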
Contributors: Raghunathan, Rohit (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Source selection is one of the foremost challenges in searching the deep web. For a user query, source selection involves selecting a subset of deep-web sources expected to provide relevant answers to the query. Existing source selection models employ query-similarity based local measures for assessing source quality. These local measures are necessary but not sufficient, as they are agnostic to source trustworthiness and result importance, which, given the autonomous and uncurated nature of the deep web, have become indispensable for deep-web search. SourceRank provides a global measure for assessing source quality based on source trustworthiness and result importance. SourceRank's effectiveness has been evaluated in single-topic deep-web environments. The goal of this thesis is to extend SourceRank to a multi-topic deep-web environment. Topic-sensitive SourceRank is introduced as an effective way of extending SourceRank to a deep-web environment containing a set of representative topics. In topic-sensitive SourceRank, multiple SourceRank vectors are created, each biased toward a representative topic. At query time, using the topic of the query keywords, a query-topic-sensitive composite SourceRank vector is computed as a linear combination of these pre-computed biased SourceRank vectors. Extensive experiments on more than a thousand sources in multiple domains show 18-85% improvements in result quality over Google Product Search and other existing methods.
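A minimal sketch of the query-time combination step described above, with made-up sources, topics, and scores (none of these names come from the thesis): the composite vector is a linear combination of pre-computed topic-biased SourceRank vectors, weighted by the query's topic distribution.

```python
import numpy as np

sources = ["source_a", "source_b", "source_c"]   # hypothetical deep-web sources
topic_sourcerank = {                             # one pre-computed biased vector per topic
    "movies": np.array([0.6, 0.3, 0.1]),
    "books":  np.array([0.2, 0.5, 0.3]),
}

def composite_sourcerank(topic_weights):
    """Combine biased vectors, weighted by the query's topic distribution."""
    combined = sum(w * topic_sourcerank[t] for t, w in topic_weights.items())
    return dict(zip(sources, combined))

# A query classified as 70% "movies" and 30% "books":
print(composite_sourcerank({"movies": 0.7, "books": 0.3}))
```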
Contributors: Jha, Manishkumar (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
TaxiWorld is a Matlab simulation of a city with a fleet of taxis operating within it, with the goal of transporting passengers to their destinations. The size of the city, the number of available taxis, and the frequency and general locations of fare appearances can all be set on a scenario-by-scenario basis. The taxis must attempt to service the fares as quickly as possible, by picking each one up and carrying it to its drop-off location. The TaxiWorld scenario is formally modeled using both Decentralized Partially-Observable Markov Decision Processes (Dec-POMDPs) and Multi-agent Markov Decision Processes (MMDPs). The purpose of developing formal models is to learn how to build and use formal Markov models that can be given to planners to solve for optimal policies in problem domains. However, finding optimal solutions for Dec-POMDPs is NEXP-complete, so an empirical algorithm was also developed as an improvement to the method already in use on the simulator, and the methods were compared in identical scenarios to determine which is more effective. The empirical method is, of course, not optimal; rather, it simply attempts to account for some of the most important factors in order to achieve an acceptable level of effectiveness while retaining a reasonable level of computational complexity for online solving.
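For illustration only, a greedy nearest-taxi heuristic of the kind an online, non-optimal policy might use; the grid coordinates, names, and assignment rule below are assumptions, and this is not the empirical algorithm developed in the thesis.

```python
def manhattan(a, b):
    """Grid distance between two (x, y) positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def assign_fares(taxis, fares):
    """Greedily send the closest free taxi to each waiting fare."""
    assignments = {}
    free = dict(taxis)                              # taxi id -> (x, y)
    for fare_id, pickup in sorted(fares.items()):
        if not free:
            break
        best = min(free, key=lambda t: manhattan(free[t], pickup))
        assignments[fare_id] = best
        del free[best]
    return assignments

taxis = {"t1": (0, 0), "t2": (5, 5)}
fares = {"f1": (1, 1), "f2": (6, 4)}
print(assign_fares(taxis, fares))                   # {'f1': 't1', 'f2': 't2'}
```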
Contributors: White, Christopher (Author) / Kambhampati, Subbarao (Thesis advisor) / Gupta, Sandeep (Committee member) / Varsamopoulos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Facility managers have an important job in today's competitive business world, caring for the backbone of the corporation's capital. Maintaining assets and the associated support efforts cause facility managers to fight an uphill battle to prove the worth of their organizations. This thesis discusses the important and flexible use of measurement and leadership reports and the benefits of justifying the work required to maintain or upgrade a facility. The task is streamlined by assigning accountability to subject matter experts. The facility manager must trust in the ability of his or her workforce to get the job done. However, with accountability comes increased risk. Even though accountability may not eliminate total control or put an end to reactionary actions, facility managers can develop key leadership-based reports to reassign accountability and measure subject matter experts while simultaneously reducing the reactionary actions that lead to increased cost. Identifying risks that are not controlled and reassigning them to subject matter experts is imperative for effective facility management leadership; it allows facility managers to create an accurate and solid facility management plan, supports the organization's succession plan, and allows the organization to focus on key competencies.
Contributors: Tellefsen, Thor (Author) / Sullivan, Kenneth (Thesis advisor) / Kashiwagi, Dean (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In this dissertation I develop a deep theory of temporal planning well-suited to analyzing, understanding, and improving state-of-the-art implementations (as of 2012). At face value the work is strictly theoretical; nonetheless its impact is entirely real and practical. The easiest portion of that impact to highlight concerns the notable improvements to the format of the temporal fragment of the International Planning Competitions (IPCs). In particular, the theory I expound here is the primary cause of, and justification for, the altered (i) selection of benchmark problems and (ii) notion of "winning temporal planner." For higher-level motivation: robotics, web service composition, industrial manufacturing, business process management, cybersecurity, space exploration, deep ocean exploration, and logistics all benefit from applying domain-independent automated planning techniques. Naturally, actually carrying out such case studies has much to offer; for example, we may extract the lesson that reasoning carefully about deadlines is rather crucial to planning in practice. More generally, effectively automating specifically temporal planning is well motivated by applications. Entirely abstractly, the aim is to improve the theory of automated temporal planning by distilling from its practice. My thesis is that the key feature of computational interest is concurrency. In support, I demonstrate, by way of compilation methods, worst-case counting arguments, and analysis of algorithmic properties such as completeness, that the more immediately pressing computational obstacles (facing would-be temporal generalizations of classical planning systems) can be dealt with in a theoretically efficient manner. More accurately, then, the technical contribution here is to demonstrate that the computationally significant obstacle to automated temporal planning that remains is just concurrency.
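As a concrete illustration of one standard compilation idea in temporal planning (a common construction in the literature, not code or notation from the dissertation), a durative action can be split into instantaneous start and end snap actions linked by an auxiliary fluent; the action names and fluents below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SnapAction:
    name: str
    preconditions: set = field(default_factory=set)
    effects: set = field(default_factory=set)

def compile_durative(name, duration, cond_start, eff_start, cond_end, eff_end):
    """Split a durative action into start/end snap actions linked by a token fluent."""
    token = f"executing-{name}"                     # auxiliary fluent linking the halves
    start = SnapAction(f"{name}-start", set(cond_start), set(eff_start) | {token})
    end = SnapAction(f"{name}-end", set(cond_end) | {token},
                     set(eff_end) | {f"not-{token}"})
    return start, end, duration                     # scheduler must keep them `duration` apart

start, end, dur = compile_durative(
    "load-truck", 4.0,
    cond_start={"package-at-depot"}, eff_start={"loading"},
    cond_end={"loading"}, eff_end={"package-in-truck"})
print(start, end, dur)
```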
Contributors: Cushing, William Albemarle (Author) / Kambhampati, Subbarao (Thesis advisor) / Weld, Daniel S. (Committee member) / Smith, David E. (Committee member) / Baral, Chitta (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In most social networking websites, users are allowed to perform interactive activities. One of the fundamental features that these sites provide is connecting with users of their kind. On one hand, this activity makes online connections visible and tangible; on the other hand, it makes exploring our connections and expanding our social networks easier. The aggregation of people who share common interests forms social groups, which are fundamental parts of our social lives. Social behavioral analysis at a group level is an active research area and attracts much interest from industry. Challenges of my work mainly arise from the scale and complexity of user-generated behavioral data. The multiple types of interactions, the highly dynamic nature of social networking, and volatile user behavior suggest that these data are complex and big in general. Effective and efficient approaches are required to analyze and interpret such data. My work provides effective channels to help connect the like-minded and, furthermore, to understand user behavior at a group level. The contributions of this dissertation are threefold: (1) proposing a novel representation of collective tagging knowledge via tag networks; (2) proposing the new information spreader identification problem in egocentric social networks; (3) defining group profiling as a systematic approach to understanding social groups. In sum, the research proposes novel concepts and approaches for connecting the like-minded, enables the understanding of user groups, and exposes interesting research opportunities.
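A minimal sketch of the tag-network idea with hypothetical tagging data (the resources, tags, and weighting rule are assumptions, not the dissertation's construction): tags that co-occur on the same resource are linked, with edge weights counting co-occurrences.

```python
from collections import Counter
from itertools import combinations

tagged_resources = [                  # tags applied to three hypothetical resources
    {"python", "data-mining", "tutorial"},
    {"python", "machine-learning"},
    {"data-mining", "machine-learning", "python"},
]

edges = Counter()
for tags in tagged_resources:
    for a, b in combinations(sorted(tags), 2):
        edges[(a, b)] += 1            # edge weight = number of co-occurrences

for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```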
Contributors: Wang, Xufei (Author) / Liu, Huan (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Sundaram, Hari (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limits on the size of tweets, it is hard to extract measures for ranking from a tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on the tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method, RAProp, on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches that I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
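An illustrative sketch of propagating reputation over an agreement graph, with made-up scores and similarities; the random-walk-style update shown is a generic smoothing rule offered only to make the idea concrete, not RAProp's exact formulation.

```python
import numpy as np

agreement = np.array([                # symmetric agreement weights among three tweets
    [0.0, 0.8, 0.1],
    [0.8, 0.0, 0.3],
    [0.1, 0.3, 0.0],
])
initial = np.array([0.5, 0.4, 0.1])   # feature-based reputation from the three-layer graph

transition = agreement / agreement.sum(axis=1, keepdims=True)   # row-normalize weights
alpha = 0.85                          # weight on propagated vs. initial reputation
scores = initial.copy()
for _ in range(50):                   # iterate the smoothing update to a fixed point
    scores = alpha * transition.T @ scores + (1 - alpha) * initial
print(scores)                         # tweets agreeing with reputable tweets gain score
```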
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Over the past couple of decades, quality has been an area of increased focus. Multiple models and approaches have been proposed to measure quality in the construction industry. This paper focuses on determining the quality of one type of roofing system used in the construction industry, the sprayed polyurethane foam (SPF) roof. Thirty-seven urethane-coated SPF roofs installed in 2005/2006 were visually inspected three times, at the 4-year, 6-year, and 7-year marks, to measure the percentage of blisters and repairs. A repair criterion was established after the 6-year mark based on the data, and roofs meeting it were reported to contractors as vulnerable roofs. Furthermore, the relationship between roof quality and four possible time-of-installation factors, i.e., contractor, demographics, season, and difficulty (number of penetrations and size of the roof in square feet), was determined. Demographics and difficulty did not affect the quality of the roofs, whereas the contractor and the season in which the roof was installed did.
Contributors: Gajjar, Dhaval (Author) / Kashiwagi, Dean (Thesis advisor) / Sullivan, Kenneth (Committee member) / Badger, William (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty data tuple. This is not possible in many cases, where the most a cleaning system can do is generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say, the "most likely" candidate) per tuple. Such an approach can lead to loss of information; for example, consider a situation where there are three equally likely clean candidates for a dirty tuple. An appealing alternative that avoids such information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. For example, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how does it affect the precision of query processing? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe, which can produce multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database, and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), in which the most likely clean candidate for each tuple is chosen and the rest of the alternatives are discarded.
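A minimal sketch of the tuple-disjoint probabilistic representation with a hypothetical relation (not BayesWipe or Mystiq code): each dirty tuple keeps several clean candidates with probabilities, and a selection query returns, per tuple, the probability that its clean version qualifies.

```python
dirty_db = {   # tuple id -> list of (clean candidate, probability); alternatives are disjoint
    "t1": [({"make": "Honda", "model": "Civic"}, 0.7),
           ({"make": "Honda", "model": "Accord"}, 0.3)],
    "t2": [({"make": "Toyota", "model": "Camry"}, 1.0)],
}

def select(db, predicate):
    """Probability, per tuple, that its clean version satisfies the predicate."""
    return {tid: sum(p for candidate, p in alternatives if predicate(candidate))
            for tid, alternatives in db.items()}

# Query: which tuples are (probably) Civics?
print(select(dirty_db, lambda row: row["model"] == "Civic"))   # {'t1': 0.7, 't2': 0}
```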
Contributors: Rihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013