Matching Items (120)
Description

A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues limiting its application to large-scale decision problems, specifically: (1) the curse of dimensionality, that is, a large number of pairwise comparisons need to be elicited from a decision maker (DM), and (2) inconsistent and (3) imprecise preferences may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions to address these limitations in three phases. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets; this is done to derive the global weights of the elements from the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and hence is subject to biases and judgment errors. The second phase therefore proposes a trade-off PCM decomposition methodology that decomposes a PCM into an optimally identified number of subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A nonlinear programming model is then developed that calculates PCM element weights which simultaneously maximize the preferences of the DM and minimize the inconsistency. Comparison experiments are conducted using datasets collected from the literature to validate the proposed methodology.
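For context, the sketch below (hypothetical matrix values, standard AHP machinery rather than the BIP decomposition proposed here) derives priority weights from a small reciprocal PCM via the principal eigenvector and computes Saaty's consistency ratio, the quantity the decomposition is meant to keep small.

import numpy as np

def ahp_weights(pcm):
    # Priority weights and consistency ratio for a reciprocal PCM (n = 3..9).
    pcm = np.asarray(pcm, dtype=float)
    n = pcm.shape[0]
    eigvals, eigvecs = np.linalg.eig(pcm)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                         # normalized priorities
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    return weights, ci / ri                          # CR < 0.1 is conventionally acceptable

# Hypothetical 3x3 PCM: criterion 1 moderately preferred over 2, strongly over 3.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))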
Contributors: Jalao, Eugene Rex Lazaro (Author) / Shunk, Dan L. (Thesis advisor) / Wu, Teresa (Thesis advisor) / Askin, Ronald G. (Committee member) / Goul, Kenneth M (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Cyber-physical systems (CPS) are emerging as the underpinning technology for major industries in the 21st century. This dissertation is focused on two fundamental issues in cyber-physical systems: network interdependence and information dynamics. It consists of the following two main thrusts. The first thrust is targeted at understanding the impact of network interdependence. It is shown that a cyber-physical system built upon multiple interdependent networks is more vulnerable to attacks, since node failures in one network may result in failures in the other network, causing a cascade of failures that could potentially lead to the collapse of the entire infrastructure. There is thus a need to develop a new network science for modeling and quantifying cascading failures in multiple interdependent networks, and to develop network management algorithms that improve network robustness and ensure overall network reliability against cascading failures. To enhance system robustness, a "regular" allocation strategy is proposed that yields better resistance against cascading failures than all possible existing strategies. Furthermore, in view of the load redistribution feature in many physical infrastructure networks, e.g., power grids, a CPS model is developed in which the threshold model and the giant connected component model are used to capture node failures in the physical infrastructure network and the cyber network, respectively. The second thrust is centered around information dynamics in the CPS. One speculation is that the interconnections over multiple networks can facilitate information diffusion, since information propagation in one network can trigger further spread in the other network. With this insight, a theoretical framework is developed to analyze information epidemics across multiple interconnecting networks. It is shown that the conjoining among networks can dramatically speed up message diffusion. Along a different avenue, many cyber-physical systems rely on wireless networks, which offer platforms for information exchange. To optimize the QoS of wireless networks, there is a need to develop high-throughput and low-complexity scheduling algorithms to control link dynamics. To that end, distributed link scheduling algorithms are explored for multi-hop MIMO networks, and two CSMA algorithms are devised under the continuous-time model and the discrete-time model, respectively.
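As a rough illustration of the cascading-failure mechanism described above (a toy simulation, not the models developed in this dissertation), the Python sketch below couples two random networks one-to-one and lets a node survive only if it stays in the giant connected component of its own network and its partner node also survives.

import networkx as nx
import random

def giant_component(g):
    if g.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(g), key=len)

def cascade(n=1000, avg_deg=4.0, frac_attacked=0.3, seed=0):
    random.seed(seed)
    a = nx.erdos_renyi_graph(n, avg_deg / (n - 1), seed=seed)
    b = nx.erdos_renyi_graph(n, avg_deg / (n - 1), seed=seed + 1)
    alive = set(range(n)) - set(random.sample(range(n), int(frac_attacked * n)))
    while True:
        ga = giant_component(a.subgraph(alive))
        gb = giant_component(b.subgraph(alive))
        survivors = alive & ga & gb        # must be functional in both networks
        if survivors == alive:
            return len(survivors) / n      # fraction surviving the cascade
        alive = survivors

print(cascade())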
Contributors: Qian, Dajun (Author) / Zhang, Junshan (Thesis advisor) / Ying, Lei (Committee member) / Zhang, Yanchao (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has become one of the primary techniques for biomarker discovery. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is by deploying stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful test for decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments in the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and ML estimation is evaluated with Monte Carlo simulations, which show promising results. An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from the current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular diseases and is shown to perform better than a linear discriminant analysis based classifier.
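To make the LRT idea concrete (a textbook sketch with a hypothetical peak shape and noise level, not the signal and noise models developed in this dissertation): the Neyman-Pearson test for a known peak template in white Gaussian noise reduces to a matched filter compared against a threshold set by the false-alarm rate, and the ML amplitude estimate is the same correlation normalized by the template energy.

import numpy as np
from scipy.stats import norm

def lrt_detect(y, template, sigma, p_fa=1e-3):
    # H1: y = A*template + noise vs. H0: y = noise, noise ~ N(0, sigma^2 I).
    s = np.asarray(template, dtype=float)
    t = float(np.dot(y, s))                              # matched-filter statistic
    threshold = sigma * np.linalg.norm(s) * norm.ppf(1 - p_fa)
    a_ml = t / np.dot(s, s)                              # ML estimate of the peak amplitude
    return t > threshold, a_ml

# Hypothetical Gaussian-shaped peak of amplitude 3 buried in unit-variance noise.
rng = np.random.default_rng(0)
x = np.arange(100)
template = np.exp(-0.5 * ((x - 50) / 3.0) ** 2)
y = 3.0 * template + rng.normal(0.0, 1.0, size=100)
print(lrt_detect(y, template, sigma=1.0))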
Contributors: Buddi, Sai (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Nelson, Randall (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system or full-system level. Two issues that are considered in this work are: 1. Information about design ideas is incomplete, informal and sketchy; 2. Designers often work at multiple levels, and different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis looks at the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: 1. the typical types of information available after an ideation session; 2. the typical types of technical evaluations done in early stages; 3. how to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and each entry is expressed as some combination of a sketch, text and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created and users were given instructions about incorporating them. To support quick analysis, differential equations are transformed to algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies show how the tool works.
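As a small illustration of the interval-arithmetic approach to low-fidelity feasibility checks (hypothetical values and a deliberately minimal Interval class, not the thesis tool itself), chaining imprecise concept parameters through an algebraic physical effect such as Ohm's law yields bounds on the achievable behavior.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Concept sketch: current of 0.5-2 A through a resistance of 3-5 ohms.
current = Interval(0.5, 2.0)
resistance = Interval(3.0, 5.0)
print(current * resistance)   # voltage bounds: [1.5, 10.0]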
Contributors: Khorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Network traffic analysis by means of Quality of Service (QoS) has been a popular research and development area for a long time. It has become even more relevant recently due to the ever-increasing use of the Internet and other public and private communication networks. Fast and precise QoS analysis is a vital task in mission-critical communication networks (MCCNs), where providing a certain level of QoS is essential for national security, safety or economic vitality. In this thesis, the details of all aspects of a comprehensive computational framework for QoS analysis in MCCNs are provided. There are three main QoS analysis tasks in MCCNs: QoS measurement, QoS visualization and QoS prediction. Definitions of these tasks are provided, and for each of them complete solutions are suggested, either by referring to an existing work or by providing novel methods.

A scalable and accurate passive one-way QoS measurement algorithm is proposed. It is shown that accurate QoS measurements are possible using network flow data.

Requirements of a good QoS visualization platform are listed. Implementations of the capabilities of a complete visualization platform are presented.

The steps of the QoS prediction task in MCCNs are defined. The details of feature selection, class balancing through sampling, and assessing classification algorithms for this task are outlined. Moreover, a novel tree-based logistic regression method for knowledge discovery is introduced. The developed prediction framework is capable of making very accurate packet-level QoS predictions and giving network administrators valuable insights.
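A minimal sketch of such a prediction pipeline (generic scikit-learn components and synthetic data; the tree-based logistic regression of the thesis is not reproduced here) combines feature selection, majority-class undersampling and a logistic regression classifier:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flow-level features with rare QoS violations (class 1).
X, y = make_classification(n_samples=5000, n_features=30, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance classes by undersampling the majority class.
rng = np.random.default_rng(0)
minority = np.where(y_tr == 1)[0]
majority = rng.choice(np.where(y_tr == 0)[0], size=len(minority), replace=False)
idx = np.concatenate([minority, majority])

selector = SelectKBest(f_classif, k=10).fit(X_tr[idx], y_tr[idx])
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr[idx]), y_tr[idx])
print(classification_report(y_te, clf.predict(selector.transform(X_te))))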
Contributors: Senturk, Muhammet Burhan (Author) / Li, Jing (Thesis advisor) / Baydogan, Mustafa G (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

This thesis presents a meta-analysis of lead-free solder reliability. Qualitative analyses of the failure modes of lead-free solder under different stress tests, including drop, bend, thermal and vibration tests, are discussed. The main cause of failure of lead-free solder is fatigue cracking, and the speed of propagation of the initial crack can differ across test conditions and solder materials. A quantitative analysis of the fatigue behavior of SAC lead-free solder under a thermal preconditioning process is conducted. This thesis presents a method for predicting the failure life of solder alloys by building a Weibull regression model. The failure life of solder on a circuit board is assumed to be Weibull distributed; different materials and test conditions affect the distribution by changing the shape and scale parameters of the Weibull distribution. The method models the regression of these parameters on the test conditions as predictors, based on Bayesian inference concepts. In building the regression models, prior distributions are generated according to previous studies, and Markov Chain Monte Carlo (MCMC) is used in the WinBUGS environment.
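The sketch below illustrates the general idea with a plain random-walk Metropolis sampler in Python rather than WinBUGS, using synthetic failure times and a hypothetical model in which the log of the Weibull scale is a linear function of a single test-condition covariate; it is not the thesis's model specification.

import numpy as np

rng = np.random.default_rng(0)

def weibull_loglik(t, shape, scale):
    z = t / scale
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)

def log_post(theta, t, x):
    b0, b1, log_shape = theta
    shape = np.exp(log_shape)
    scale = np.exp(b0 + b1 * x)                          # log-scale regression on the covariate
    prior = -0.5 * np.sum(np.asarray(theta) ** 2) / 100.0  # weak normal priors
    return weibull_loglik(t, shape, scale) + prior

# Synthetic failure times: the higher-stress condition (x = 1) shortens life.
x = np.repeat([0.0, 1.0], 50)
t = rng.weibull(2.0, size=100) * np.exp(5.0 - 1.0 * x)

theta = np.array([4.0, 0.0, 0.0])
samples = []
for _ in range(20000):                                   # random-walk Metropolis
    prop = theta + rng.normal(0.0, 0.05, size=3)
    if np.log(rng.uniform()) < log_post(prop, t, x) - log_post(theta, t, x):
        theta = prop
    samples.append(theta)
print(np.mean(samples[10000:], axis=0))                  # posterior means after burn-in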
Contributors: Xu, Xinyue (Author) / Pan, Rong (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The main objective of this research is to develop an approach to PV module lifetime prediction. In doing so, the aim is to move from empirical generalizations to a formal predictive science based on data-driven case studies of crystalline silicon PV systems. The evaluation of PV systems aged 5 to 30 years results in a systematic predictive capability that is absent today. The warranty period provided by manufacturers typically ranges from 20 to 25 years for crystalline silicon modules. The end of lifetime (for example, the time to degrade by 20% from rated power) of PV modules is usually calculated using a simple linear extrapolation based on the annual field degradation rate (say, a 0.8% drop in power output per year). It has been 26 years since systematic studies on solar PV module lifetime prediction were undertaken as part of the 11-year flat-plate solar array (FSA) project of the Jet Propulsion Laboratory (JPL) funded by DOE. Since then, PV modules have gone through significant changes in construction materials and design, making most of the field data obsolete, though the effect of field stressors on the old designs and materials is still valuable to understand. Efforts have been made to adapt some of the techniques developed to the current technologies, but they are too often limited in scope and too reliant on empirical generalizations of previous results. Some systematic approaches have been proposed based on accelerated testing, but few or no experimental studies have followed. Consequently, the industry does not exactly know today how to test modules for a 20-30 year lifetime.
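For reference, the linear-extrapolation practice mentioned above (the simple baseline this research aims to improve on) amounts to dividing the allowed power loss by the annual degradation rate; at the example rate of 0.8% per year, a 20% loss corresponds to 25 years.

def years_to_end_of_life(annual_degradation=0.008, power_loss_at_eol=0.20):
    # Linear extrapolation: constant fractional power loss per year.
    return power_loss_at_eol / annual_degradation

print(years_to_end_of_life())   # 0.20 / 0.008 = 25 years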

This research study focuses on the behavior of crystalline silicon PV module technology in the dry and hot climatic condition of Tempe/Phoenix, Arizona. A three-phase approach was developed: (1) a quantitative failure modes, effects, and criticality analysis (FMECA) was developed for prioritizing failure modes or mechanisms in a given environment; (2) a time-series approach was used to model the environmental stress variables involved and prioritize their effect on the power output drop; and (3) a procedure for developing a prediction model was proposed for the climate-specific condition based on accelerated degradation testing.
Contributors: Kuitche, Joseph Mathurin (Author) / Pan, Rong (Thesis advisor) / Tamizhmani, Govindasamy (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Since Duffin and Schaeffer's introduction of frames in 1952, the concept of a frame has received much attention in the mathematical community and has inspired several generalizations. The focus of this thesis is on the concept of an operator-valued frame (OVF) and a more general concept called herein an operator-valued frame associated with a measure space (MS-OVF), which is sometimes called a continuous g-frame. The first of two main topics explored in this thesis is the relationship between MS-OVFs and objects prominent in quantum information theory called positive operator-valued measures (POVMs). It has been observed that every MS-OVF gives rise to a POVM with invertible total variation in a natural way. The first main result of this thesis is a characterization of which POVMs arise in this way, a result obtained by extending certain existing Radon-Nikodym theorems for POVMs. The second main topic investigated in this thesis is the role of the theory of unitary representations of a Lie group G in the construction of OVFs for the L^2-space of a relatively compact subset of G. For G=R, Duffin and Schaeffer have given general conditions that ensure a sequence of (one-dimensional) representations of G, restricted to (-1/2,1/2), forms a frame for L^{2}(-1/2,1/2), and similar conditions exist for G=R^n. The second main result of this thesis expresses conditions related to Duffin and Schaeffer's for two more particular Lie groups: the Euclidean motion group on R^2 and the (2n+1)-dimensional Heisenberg group. This proceeds in two steps. First, for a Lie group admitting a uniform lattice and an appropriate relatively compact subset E of G, the Selberg Trace Formula is used to obtain a Parseval OVF for L^{2}(E) that is expressed in terms of irreducible representations of G. Second, for the two particular Lie groups an appropriate set E is found, and it is shown that for each of these groups, with suitably parametrized unitary duals, the Parseval OVF remains an OVF when perturbations are made to the parameters of the included representations.
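For readers unfamiliar with the terminology, the standard definitions the abstract builds on (general background, not notation specific to this thesis) can be stated as follows. A family \{f_j\}_{j \in J} in a Hilbert space H is a frame if there exist constants 0 < A \le B < \infty such that
\[
  A\,\|x\|^{2} \;\le\; \sum_{j \in J} |\langle x, f_j \rangle|^{2} \;\le\; B\,\|x\|^{2}
  \qquad \text{for all } x \in H.
\]
An operator-valued frame replaces the scalar coefficients \langle x, f_j \rangle with operators A_j \colon H \to K and requires
\[
  A\,\|x\|^{2} \;\le\; \sum_{j \in J} \|A_j x\|^{2} \;\le\; B\,\|x\|^{2},
\]
and the frame is Parseval when A = B = 1.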
Contributors: Robinson, Benjamin (Author) / Cochran, Douglas (Thesis advisor) / Moran, William (Thesis advisor) / Boggess, Albert (Committee member) / Milner, Fabio (Committee member) / Spielberg, John (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Data imbalance and data noise often coexist in real-world datasets. Data imbalance affects the learning classifier by degrading its recognition power on the minority class, while data noise affects the learning classifier by providing inaccurate information and thus misleading it. Because of these differences, data imbalance and data noise have been treated separately in the data mining field. Yet such an approach ignores their mutual effects and as a result may lead to new problems. A desirable solution is to tackle these two issues jointly. Noting the complementary nature of generative and discriminative models, this research proposes a unified model-fusion based framework to handle imbalanced classification with noisy datasets.

The phase I study focuses on the imbalanced classification problem. A generative classifier, the Gaussian Mixture Model (GMM), is studied; it can learn the distribution of the imbalanced data to improve the discrimination power on the imbalanced classes. By fusing this knowledge into a cost-sensitive SVM (cSVM), a CSG method is proposed. Experimental results show the effectiveness of CSG in dealing with imbalanced classification problems.
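One plausible way to picture such a generative/discriminative fusion (a scikit-learn sketch on synthetic data, not the exact CSG formulation of this dissertation) is to append the minority-class GMM log-likelihood as a feature and train a cost-sensitive SVM on the augmented data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Generative component: model the minority-class distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == 1])

def augment(X):
    return np.column_stack([X, gmm.score_samples(X)])   # add GMM log-likelihood feature

# Discriminative component: SVM with class-dependent misclassification cost.
svm = SVC(class_weight={0: 1, 1: 9}).fit(augment(X_tr), y_tr)
print(f1_score(y_te, svm.predict(augment(X_te))))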

The phase II study expands the research scope to include noisy data in the imbalanced classification problem. A model-fusion based framework, K Nearest Gaussian (KNG), is proposed. KNG employs a generative modeling method, GMM, to model the training data as Gaussian mixtures and form adjustable confidence regions which are less sensitive to data imbalance and noise. Motivated by the K-nearest neighbor algorithm, the neighboring Gaussians are used to classify the testing instances. Experimental results show the KNG method greatly outperforms traditional classification methods in dealing with imbalanced classification problems on noisy datasets.

The phase III study addresses feature selection and parameter tuning for the KNG algorithm. To further improve its performance, a Particle Swarm Optimization based method (PSO-KNG) is proposed. PSO-KNG encodes model parameters and data features in the same particle vector and can thus search for the best feature and parameter combination jointly. The experimental results show that PSO can greatly improve the performance of KNG, with better accuracy and much lower computational cost.
Contributors: He, Miao (Author) / Wu, Teresa (Thesis advisor) / Li, Jing (Committee member) / Silva, Alvin (Committee member) / Borror, Connie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The conceptual design stage plays a critical role in product development. However, few systematic methods and tools exist to support conceptual design. The long-term aim of this project is to develop a tool for facilitating holistic ideation for conceptual design. This research is a continuation of past efforts in the ASU Design Automation Lab. In past research, an interactive software testbed (Holistic Ideation Tool, version 1) was developed to explore logical ideation methods. Ideation states were identified and ideation strategies were developed to overcome common ideation blocks. The next version (version 2) of the holistic ideation tool added the Cascading Evolutionary Morphological Charts (CEMC) framework and intuitive ideation strategies (reframing, restructuring, random connection, and forced connection).

Despite these remarkable contributions, there are shortcomings in the previous versions (versions 1 and 2) of the holistic ideation tool. First, new ideation methods need to be added to the tool. Second, the organizational framework provided by the previous versions needs to be improved, and a holistic approach needs to be devised instead of separate logical or intuitive approaches. Therefore, the main objective of this thesis is to make these improvements and to resolve the technical issues involved in their implementation.

Towards this objective, a new web-based holistic ideation tool (version 3) has been created. The new tool adds and integrates Knowledge Bases of Mechanisms and Components Off-The-Shelf (COTS) into the logical ideation methods. Additionally, an improved CEMC framework has been devised for organizing ideas efficiently. Furthermore, the usability of the tool has been improved by designing and implementing a new, more user-friendly graphical user interface (GUI). It is hoped that these new features will give designers a platform to not only generate creative ideas but also effectively organize and store them during the conceptual design stage. Placed on the web for public use, the testbed has the potential to support research on the ideation process by collecting large amounts of data from designers.
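To illustrate the morphological-chart idea at the heart of the CEMC framework (hypothetical sub-functions and solutions, not data from the tool), each row of a chart lists candidate solutions for one sub-function, and a design concept is one selection per row; the sketch below enumerates all combinations.

from itertools import product

chart = {
    "convert energy": ["electric motor", "IC engine", "hand crank"],
    "transmit motion": ["gear train", "belt drive", "linkage"],
    "control speed": ["PWM controller", "throttle", "governor"],
}

# One candidate concept = one solution chosen per sub-function (row).
concepts = [dict(zip(chart, combo)) for combo in product(*chart.values())]
print(len(concepts))      # 3 * 3 * 3 = 27 candidate concepts
print(concepts[0])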
Contributors: Narsale, Sumit Sunil (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2014