Matching Items (1,138)
Description
A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues that limit its application to large-scale decision problems, specifically: (1) the curse of dimensionality, that is, a large number of pairwise comparisons must be elicited from a decision maker (DM), and (2) inconsistent and (3) imprecise preferences may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions that addresses these limitations in three phases. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets; this makes it possible to derive the global weights of the elements of the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and is therefore subject to biases and judgment errors. The second phase proposes a trade-off decomposition methodology that splits a PCM into an optimally identified number of subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A Non-Linear Programming model is then developed that calculates PCM element weights which simultaneously maximize satisfaction of the DM's preferences and minimize inconsistency. Comparison experiments are conducted using datasets collected from the literature to validate the proposed methodology.
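The priority derivation and consistency check that this framework builds on can be illustrated with a minimal sketch of the standard AHP eigenvector method and Saaty's consistency ratio; this is background illustration rather than the dissertation's BIP decomposition, and the 3x3 judgment matrix below is hypothetical.

```python
# Illustrative sketch (not from the dissertation): deriving priority weights
# from a small Pairwise Comparison Matrix and checking its consistency,
# using the principal-eigenvector method common in AHP.
import numpy as np

def pcm_weights_and_cr(pcm):
    """Return priority weights and Saaty's consistency ratio for a PCM."""
    pcm = np.asarray(pcm, dtype=float)
    n = pcm.shape[0]
    eigvals, eigvecs = np.linalg.eig(pcm)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                             # normalized priority weights
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}   # Saaty's random indices
    cr = ci / ri.get(n, 1.49)                   # consistency ratio
    return w, cr

# Three criteria compared on Saaty's 1-9 scale (hypothetical judgments).
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = pcm_weights_and_cr(A)
print(weights, cr)   # CR below ~0.10 is conventionally considered consistent
```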
Contributors: Jalao, Eugene Rex Lazaro (Author) / Shunk, Dan L. (Thesis advisor) / Wu, Teresa (Thesis advisor) / Askin, Ronald G. (Committee member) / Goul, Kenneth M (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
In most social networking websites, users are allowed to perform interactive activities. One of the fundamental features these sites provide is connecting with like-minded users. On one hand, this activity makes online connections visible and tangible; on the other hand, it makes exploring our connections and expanding our social networks easier. The aggregation of people who share common interests forms social groups, which are fundamental parts of our social lives. Social behavioral analysis at the group level is an active research area and attracts considerable interest from industry. The challenges of my work arise mainly from the scale and complexity of user-generated behavioral data. The multiple types of interactions, the highly dynamic nature of social networking, and volatile user behavior make these data complex and large. Effective and efficient approaches are required to analyze and interpret such data. My work provides effective channels to help connect the like-minded and, furthermore, to understand user behavior at a group level. The contributions of this dissertation are threefold: (1) proposing a novel representation of collective tagging knowledge via tag networks; (2) proposing the new information-spreader identification problem in egocentric social networks; and (3) defining group profiling as a systematic approach to understanding social groups. In sum, the research proposes novel concepts and approaches for connecting the like-minded, enables the understanding of user groups, and exposes interesting research opportunities.
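As a rough illustration of what a tag network can look like, the sketch below builds a co-occurrence graph in which tags are nodes and edge weights count how often two tags are applied to the same item. This is only one plausible construction, not necessarily the representation developed in the dissertation, and the tagged items are hypothetical.

```python
# A minimal sketch, assuming a co-occurrence reading of "tag networks":
# tags become nodes and an edge weight counts how often two tags are
# applied to the same item.
from collections import Counter
from itertools import combinations

items = {
    "post1": {"python", "machine-learning", "tutorial"},
    "post2": {"python", "data-mining"},
    "post3": {"machine-learning", "data-mining", "tutorial"},
}

edge_weights = Counter()
for tags in items.values():
    for a, b in combinations(sorted(tags), 2):
        edge_weights[(a, b)] += 1          # co-occurrence strengthens the tie

# Each weighted edge is one link in the tag network.
for (a, b), w in edge_weights.most_common():
    print(f"{a} -- {b}: {w}")
```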
Contributors: Wang, Xufei (Author) / Liu, Huan (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Sundaram, Hari (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
A semiconductor supply chain modeling and simulation platform using Linear Program (LP) optimization and parallel Discrete Event System Specification (DEVS) process models has been developed in a joint effort by ASU and Intel Corporation. A Knowledge Interchange Broker (KIBDEVS/LP) was developed to broker information synchronously between the DEVS and LP models. Recently, a single-echelon heuristic Inventory Strategy Module (ISM) was added to correct for forecast bias in customer demand data using different smoothing techniques. The optimization model could then use information provided by the forecast model to make better decisions for the process model. The composition of the ISM with the LP and DEVS models resulted in the first realization of what is now called the Optimization Simulation Forecast (OSF) platform. It could handle a single-echelon supply chain system consisting of single hubs and single products. In this thesis, this single-echelon simulation platform is extended to handle multiple echelons with multiple inventory elements handling multiple products. The main task for the multi-echelon OSF platform was to extend the KIBDEVS/LP such that ISM interactions with the LP and DEVS models could also be supported. To achieve this, a new, scalable XML schema for the KIB has been developed. The XML schema has also strengthened the design of the KIB execution engine. A sequential scheme controls the executions of the DEVS-Suite simulator, the CPLEX optimizer, and the ISM engine. To use the ISM for multiple echelons, it is extended to compute forecast customer demand and safety stocks over multiple hubs and products. Basic examples for semiconductor manufacturing spanning single- and two-echelon supply chain systems have been developed and analyzed. Experiments using perfect data were conducted to show the correctness of the OSF platform design and implementation. Simple but realistic experiments have also been conducted. They highlight the kinds of supply chain dynamics that can be evaluated using discrete-event process simulation, linear programming optimization, and heuristic forecasting models.
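The demand-smoothing idea behind the ISM can be pictured with the minimal sketch below, which applies simple exponential smoothing to a demand series; the smoothing constant and the demand numbers are hypothetical, and the actual ISM combines several such techniques with bias correction.

```python
# Illustrative sketch only: simple exponential smoothing of customer demand,
# one of the kinds of smoothing techniques an inventory/forecast module may
# use. The alpha value and demand series below are hypothetical, not taken
# from the ASU/Intel platform.
def exponential_smoothing(demand, alpha=0.3):
    """Return one-step-ahead smoothed forecasts for a demand series."""
    forecasts = [demand[0]]                    # seed with the first observation
    for d in demand[1:]:
        forecasts.append(alpha * d + (1 - alpha) * forecasts[-1])
    return forecasts

weekly_demand = [120, 135, 128, 150, 160, 155]   # hypothetical units per week
print(exponential_smoothing(weekly_demand))
```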
Contributors: Smith, James Melkon (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Davulcu, Hasan (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
In this dissertation I develop a deep theory of temporal planning well-suited to analyzing, understanding, and improving state-of-the-art implementations (as of 2012). At face value the work is strictly theoretical; nonetheless its impact is entirely real and practical. The easiest portion of that impact to highlight concerns the notable improvements to the format of the temporal fragment of the International Planning Competitions (IPCs). In particular, the theory I expound upon here is the primary cause of--and justification for--the altered (i) selection of benchmark problems and (ii) notion of "winning temporal planner". For higher-level motivation: robotics, web service composition, industrial manufacturing, business process management, cybersecurity, space exploration, deep ocean exploration, and logistics all benefit from applying domain-independent automated planning techniques. Naturally, actually carrying out such case studies has much to offer. For example, we may extract the lesson that reasoning carefully about deadlines is rather crucial to planning in practice. More generally, effectively automating specifically temporal planning is well motivated by applications. Entirely abstractly, the aim is to improve the theory of automated temporal planning by distilling from its practice. My thesis is that the key feature of computational interest is concurrency. In support, I demonstrate by way of compilation methods, worst-case counting arguments, and analysis of algorithmic properties such as completeness that the more immediately pressing computational obstacles (facing would-be temporal generalizations of classical planning systems) can be dealt with in a theoretically efficient manner. So, more accurately, the technical contribution here is to demonstrate that the computationally significant obstacle to automated temporal planning that remains is just concurrency.
Contributors: Cushing, William Albemarle (Author) / Kambhampati, Subbarao (Thesis advisor) / Weld, Daniel S. (Committee member) / Smith, David E. (Committee member) / Baral, Chitta (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
An understanding of diet habits is crucial to implementing proper management strategies for wildlife. Diet analysis, however, remains a challenge for ruminant species. Microhistological analysis, the method most often employed in herbivore diet studies, is tedious and time consuming. In addition, it requires considerable training and an extensive reference plant collection. The development of DNA barcoding (species identification using a standardized DNA sequence) and the availability of recent DNA sequencing techniques offer new possibilities in diet analysis for ungulates. Using fecal material collected from controlled feeding trials on pygmy goats (Capra hircus), novel DNA barcoding technology based on the P6 loop of the chloroplast trnL (UAA) intron was compared with the traditional microhistological technique. At its current stage of technological development, this study demonstrated that DNA barcoding did not enhance the ability to detect plant species in herbivore diets. A higher mean species composition was reported with microhistological analysis (79%) as compared to DNA barcoding (50%). Microhistological analysis consistently reported a higher species presence by forage class. For positive species identification, microhistology yielded an average of 89% correct detection of species in control diets, while DNA barcoding yielded 50%. It was hypothesized that a number of factors, including variation in chloroplast content among feed species and the degradation of DNA by rumen bacteria, influenced the ability to detect plant species in herbivore diets. It was concluded that while DNA barcoding opens up new possibilities in the study of plant-herbivore interactions, further studies are needed to standardize DNA barcoding techniques in this context.
Contributors: Murphree, Julie Joan (Author) / Miller, William H. (Thesis advisor) / Steele, Kelly (Committee member) / Salywon, Andrew (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under spatially uncorrelated faults are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient network design. It has been shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and the region-based largest component size (RBLCS), have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
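To make the region-based fault model concrete, the sketch below evaluates a small geometric network under a circular fault region: every node inside a disk of a given radius fails, and the size of the largest surviving component (in the spirit of RBLCS) and whether the survivors remain connected are reported. The topology, node coordinates, and radius are hypothetical, and the dissertation's actual metrics and optimization formulations are not reproduced.

```python
# A minimal sketch, assuming a geometric network with known node coordinates:
# a region-based fault removes every node inside a disk of radius r; we report
# the largest surviving component size and whether the survivors stay connected.
import math
import networkx as nx

positions = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (1, 1), 4: (2, 1), 5: (3, 1)}
G = nx.Graph([(0, 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5)])

def region_fault_impact(G, positions, center, radius):
    failed = {v for v, (x, y) in positions.items()
              if math.dist((x, y), center) <= radius}
    H = G.copy()
    H.remove_nodes_from(failed)
    if H.number_of_nodes() == 0:
        return failed, 0, False
    largest = max(len(c) for c in nx.connected_components(H))
    return failed, largest, nx.is_connected(H)

# Worst case over candidate region centers placed at node locations.
for center in positions.values():
    failed, largest, connected = region_fault_impact(G, positions, center, 1.0)
    print(center, sorted(failed), largest, connected)
```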
Contributors: Banerjee, Sujogya (Author) / Sen, Arunabha (Thesis advisor) / Xue, Guoliang (Committee member) / Richa, Andrea (Committee member) / Hurlbert, Glenn (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Objective: Vinegar consumption studies have demonstrated possible therapeutic effects in reducing HbA1c and postprandial glycemia. The purpose of the study was to closely examine the effects of a commercial vinegar drink on daily fluctuations in fasting glucose concentrations and postprandial glycemia, and on HbA1c, in individuals at risk for Type 2 Diabetes Mellitus (T2D). Design: Thirteen women and one man (21-62 y; mean, 46.0±3.9 y) participated in this 12-week parallel-arm trial. Participants were recruited from a campus community and were healthy and non-diabetic by self-report. Participants were not prescribed oral hypoglycemic medications or insulin; other medications were allowed if use had been stable for > 3 months. Subjects were randomized to one of two groups: VIN (8 ounces of vinegar drink providing 1.5 g acetic acid) or CON (1 vinegar pill providing 0.04 g acetic acid). Treatments were taken twice daily immediately prior to the lunch and dinner meals. Venous blood samples were drawn at trial weeks 0 and 12 to measure insulin, fasting glucose, and HbA1c. Subjects recorded fasting glucose and 2-h postprandial glucose concentrations daily using a glucometer. Results: The VIN group showed significant reductions in fasting capillary blood glucose concentrations (p=0.05) that were immediate and sustained throughout the duration of the study. The VIN group had reductions in 2-h postprandial glucose (mean change of −7.6±6.8 mg/dL over the 12-week trial), but this value was not significantly different from that of the CON group (mean change of 3.3±5.3 mg/dL over the 12-week trial, p=0.232). HbA1c did not change significantly (p=0.702), but the reduction in HbA1c in the VIN group, −0.14±0.1%, may have physiological relevance. Conclusions: Significant reductions in HbA1c were not observed after daily consumption of a vinegar drink containing 1.5 g acetic acid in non-diabetic individuals. However, the vinegar drink did significantly reduce fasting capillary blood glucose concentrations in these individuals as compared to a vinegar pill containing 0.04 g acetic acid. These results support a therapeutic effect for vinegar in the prevention of T2D and its progression, specifically in high-risk populations.
Contributors: Quagliano, Samantha (Author) / Johnston, Carol (Thesis advisor) / Appel, Christy (Committee member) / Dixon, Kathleen (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Objective: The purpose of this randomized parallel-arm trial was to demonstrate the effects of daily fish oil supplementation (600 mg per day for eight weeks) on body composition and body mass in young healthy women, aged 18-38, at a large southwestern university. Design: Twenty-six non-obese (mean BMI 23.7±0.6 kg/m2), healthy women (18-38 y; mean, 23.5±1.1 y) from a southwestern Arizona university campus community completed the study. Subjects were healthy non-smokers consuming less than 3.5 oz of fish per week according to self-report. Participants were randomized to one of two groups: FISH (600 mg omega-3 fatty acids provided in one gel capsule per day) or CON (1000 mg coconut oil placebo provided in one gel capsule per day). Body weight, BMI, and percent body fat were measured using a stadiometer and bioelectrical impedance scale at the screening visit and at intervention weeks 1, 4, and 8. Twenty-four-hour dietary recalls were also performed at weeks 1 and 8. Results: Eight weeks of omega-3 fatty acid supplementation did not significantly alter body weight (p=0.830), BMI (p=1.00), or body fat percentage (p=0.600) as compared to placebo. Although not statistically significant, the 24-hour dietary recalls performed at the beginning and end of the intervention revealed a trend towards increased caloric intake in the FISH group and decreased caloric intake in the CON group throughout the course of the study (p=0.069). If maintained, this difference in caloric intake could have physiological relevance. Conclusions: Omega-3 fatty acids do not significantly alter body weight or body composition in healthy young females. These findings do not refute the current recommendations for Americans to consume at least 8 oz of omega-3-rich seafood per week, supplying 250 mg EPA and DHA per day. More research is needed to investigate the potential for omega-3 fatty acids to modulate daily caloric intake.
Contributors: Teran, Bianca (Author) / Johnston, Carol (Thesis advisor) / Johnson, Melinda (Committee member) / Ohri-Vachaspati, Punam (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
With the increase in computing power and the availability of data, there has never been a greater need to understand data and make decisions from it. Traditional statistical techniques may not be adequate to handle the size of today's data or the complexity of the information hidden within it. Thus, knowledge discovery by machine learning techniques is necessary if we want to better understand information from data. In this dissertation, we explore the topics of asymmetric loss and asymmetric data in machine learning and propose new algorithms as solutions to some of the problems in these topics. We also study variable selection for matched data sets and propose a solution when there is non-linearity in the matched data. The research is divided into three parts. The first part addresses the problem of asymmetric loss. A proposed asymmetric support vector machine (aSVM) is used to predict specific classes with high accuracy; the aSVM was shown to produce higher precision than a regular SVM. The second part addresses asymmetric data sets in which variables are predictive for only a subset of the classes. An Asymmetric Random Forest (ARF) was proposed to detect these kinds of variables. The third part explores variable selection for matched data sets. A Matched Random Forest (MRF) was proposed to find variables that are able to distinguish case from control without the restrictions that exist in linear models. The MRF detects variables that distinguish case from control even in the presence of interactions and qualitative variables.
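The flavor of the asymmetric-loss idea can be sketched with off-the-shelf tools. The example below is not the dissertation's aSVM; it simply shows how weighting one class's errors more heavily in a standard SVM makes positive predictions more conservative and tends to raise precision on the class of interest. The data are synthetic and the class weights are arbitrary.

```python
# Illustrative sketch only: asymmetric misclassification cost via unequal
# class weights in a standard SVM, so errors on class-0 samples cost more
# and predictions of the positive class (label 1) become more conservative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import precision_score

X, y = make_classification(n_samples=600, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

symmetric = SVC(kernel="rbf").fit(X_tr, y_tr)
asymmetric = SVC(kernel="rbf", class_weight={0: 5.0, 1: 1.0}).fit(X_tr, y_tr)

# Fewer false positives generally means higher precision for class 1.
print("symmetric precision:",
      precision_score(y_te, symmetric.predict(X_te), zero_division=0))
print("asymmetric precision:",
      precision_score(y_te, asymmetric.predict(X_te), zero_division=0))
```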
Contributors: Koh, Derek (Author) / Runger, George C. (Thesis advisor) / Wu, Tong (Committee member) / Pan, Rong (Committee member) / Cesta, John (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Data mining is increasing in importance for solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as relevant consumption information stored in different formats and insufficient data about project attributes with which to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once the preprocessing of the data is completed, data mining techniques such as clustering are applied to find projects that involve resources with similar skill sets and that have similar complexity and size. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Project characteristics that generate this diversity in headcounts and skill sets are then identified. These characteristics are not currently contained in the database and are elicited from the managers of historical projects. This represents an opportunity to improve the usefulness of the data collection system in the future. The ultimate goal is to match product technical features with the resource requirements of past projects as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross-validation on the training data, as past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast resource demand for several future projects.
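The two modeling steps described above (grouping past projects into resource utilization templates, then fitting a cross-validated linear regression to forecast demand) can be sketched with standard scikit-learn components as below; the feature columns, cluster count, and synthetic data are hypothetical stand-ins for the proprietary project attributes used in the study.

```python
# A minimal sketch of clustering past projects and forecasting headcount with
# cross-validated linear regression. All features and data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical columns: project size, complexity score, number of technical features.
X = rng.uniform(1, 10, size=(30, 3))
headcount = X @ np.array([2.0, 1.5, 0.8]) + rng.normal(0, 1, 30)  # synthetic target

# Step 1: group past projects into "resource utilization templates".
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: forecast headcount with linear regression, cross-validated because
# the number of past projects is small.
scores = cross_val_score(LinearRegression(), X, headcount, cv=5, scoring="r2")
print("cluster labels:", clusters)
print("cross-validated R^2:", scores.mean())
```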
Contributors: Bhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013