Matching Items (21)
Description

Finding the optimal solution to a problem with an enormous search space can be challenging. Unless a combinatorial construction technique is found that also guarantees the optimality of the resulting solution, this could be an infeasible task. If such a technique is unavailable, different heuristic methods are generally used to improve the upper bound on the size of the optimal solution. This dissertation presents an alternative method which can be used to improve a solution to a problem rather than construct a solution from scratch. Necessity analysis, which is the key to this approach, is the process of analyzing the necessity of each element in a solution. The post-optimization algorithm presented here utilizes the result of the necessity analysis to improve the quality of the solution by eliminating unnecessary objects from the solution. While this technique could potentially be applied to different domains, this dissertation focuses on k-restriction problems, where a solution to the problem can be represented as an array. A scalable post-optimization algorithm for covering arrays is described, which starts from a valid solution and performs necessity analysis to iteratively improve the quality of the solution. It is shown that not only can this technique improve upon the previous best known results, but it can also be added as a refinement step to any construction technique, and in most cases further improvements are expected. The post-optimization algorithm is then modified to accommodate every k-restriction problem, and this generic algorithm can be used as a starting point to create a reasonably sized solution for any such problem. This generic algorithm is then further refined for hash family problems by adding a conflict graph analysis to the necessity analysis phase. By recoloring the conflict graphs, a new degree of flexibility is explored, which can further improve the quality of the solution.
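As a simplified illustration of the necessity analysis described above (a sketch for the strength-2 covering-array case only, not the dissertation's scalable algorithm; all function names are made up), a row of the array is unnecessary when every pair interaction it covers is also covered by some other row, so it can be dropped without invalidating the solution:

```python
from itertools import combinations

def covered_pairs(row, cols):
    # all (column-pair, symbol-pair) interactions a single row covers
    return {((i, j), (row[i], row[j])) for i, j in combinations(cols, 2)}

def post_optimize(array):
    """Necessity analysis for a strength-2 covering array: repeatedly
    remove any row whose every interaction is covered by another row."""
    cols = range(len(array[0]))
    changed = True
    while changed:
        changed = False
        for r, row in enumerate(array):
            others = set()
            for s, other in enumerate(array):
                if s != r:
                    others |= covered_pairs(other, cols)
            if covered_pairs(row, cols) <= others:  # row is unnecessary
                array.pop(r)
                changed = True
                break
    return array
```

For example, appending a duplicate row to a minimal covering array makes one copy unnecessary, and the sketch removes it while preserving full pair coverage.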
Contributors: Nayeri, Peyman (Author) / Colbourn, Charles (Thesis advisor) / Konjevod, Goran (Thesis advisor) / Sen, Arunabha (Committee member) / Stanzione Jr, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Multi-task learning (MTL) aims to improve the generalization performance (of the resulting classifiers) by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across multiple tasks and thus facilitates individual task learning. It is particularly desirable to share the domain knowledge (among the tasks) when there are a number of related tasks but only limited training data is available for each task. Modeling the relationship of multiple tasks is critical to the generalization performance of the MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed to deal with different scenarios and settings; they are respectively formulated as mathematical optimization problems of minimizing the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solution efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches to different real-world applications: (1) Automated annotation of the Drosophila gene expression pattern images; (2) Categorization of the Yahoo web pages. The experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
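The shared low-dimensional feature-space assumption can be caricatured in a few lines (a rough sketch with made-up names and a fixed-rank projection, not the dissertation's regularized formulations or their optimization algorithms): fit each task independently, then project the stacked per-task weight matrix onto a low-rank subspace shared by all tasks.

```python
import numpy as np

def mtl_shared_subspace(Xs, ys, rank=1, lam=0.1):
    """Per-task ridge regression followed by a low-rank projection of the
    stacked weight matrix, exposing a feature subspace shared across tasks."""
    W = np.column_stack([
        np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        for X, y in zip(Xs, ys)
    ])                                   # one weight column per task
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s[rank:] = 0.0                       # keep only the shared directions
    return U @ np.diag(s) @ Vt           # rank-`rank` weight matrix
```

When the tasks' true weight vectors lie in a common one-dimensional subspace, the rank-1 projection recovers each task's weights from limited data while coupling the tasks through the shared direction.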
Contributors: Chen, Jianhui (Author) / Ye, Jieping (Thesis advisor) / Kumar, Sudhir (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The energy consumed by public drinking water and wastewater utilities can represent 30-40% of a municipality's energy bill, and the largest share of it is used to operate motors for pumping. As a result, the engineering and control communities developed variable speed pumps (VSPs), which allow flow in the network to be regulated continuously instead of with traditional binary ON/OFF pumps. Potentially, VSPs save up to 90% of annual energy cost compared to binary pumps. The control problem has been tackled in the literature as “Pump Scheduling Optimization” (PSO), with a main focus on cost minimization. Nonetheless, the engineering literature is mostly concerned with understanding the “healthy working conditions” (e.g., leakages, breakages) of a water infrastructure rather than its costs. This is critical because a network operated under stress may satisfy the demand at present but will likely hinder network functionality in the future.

This research addresses the problem of analyzing working conditions of large water systems by means of a detailed hydraulic simulation model (e.g., EPANet) to gain insights into feasibility with respect to pressure, tank level, etc. This work presents a new framework, Feasible Set Approximation – Probabilistic Branch and Bound (FSA-PBnB), for the definition and determination of feasible solutions in terms of pump regulation. We propose the concept of feasibility distance, measured as the distance of the current solution from the feasibility frontier, to estimate the distribution of feasibility values across the solution space. Based on this estimate, we prune the infeasible regions and maintain the feasible regions to identify the desired feasible solutions. We test the proposed algorithm on both theoretical and real water networks. The results demonstrate that FSA-PBnB can identify the feasibility profile efficiently. Additionally, the feasibility distance lets us assess the quality of a sub-region in terms of feasibility.
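A toy rendition of the prune-and-maintain idea (a one-dimensional sketch under strong simplifications and with invented names, not FSA-PBnB itself, which works against a hydraulic simulator and carries probabilistic guarantees): sample each subregion, measure each sample's distance from feasibility, keep subregions whose samples reach the feasible set, and prune those where no sample ever does.

```python
import random

def feasibility_distance(x, constraints):
    # distance from feasibility: 0 iff every constraint g(x) <= 0 holds
    return max(max(g(x), 0.0) for g in constraints)

def fsa_pbnb_sketch(lo, hi, constraints, depth=4, samples=20, seed=0):
    """Recursively bisect [lo, hi]; keep subregions whose sampled
    feasibility distance reaches zero, prune those where it never does."""
    rng = random.Random(seed)
    regions, kept = [(lo, hi, 0)], []
    while regions:
        a, b, d = regions.pop()
        dists = [feasibility_distance(rng.uniform(a, b), constraints)
                 for _ in range(samples)]
        if max(dists) == 0.0:
            kept.append((a, b))        # every sample feasible: maintain
        elif d >= depth:
            if min(dists) == 0.0:
                kept.append((a, b))    # mixed at finest level: keep
            # otherwise pruned: no feasible sample observed
        else:
            m = (a + b) / 2.0
            regions += [(a, m, d + 1), (m, b, d + 1)]
    return kept
```

With a single constraint whose feasible band is [0.4, 0.6] on [0, 1], the kept subregions concentrate around the band while the rest of the interval is pruned.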

The present work provides a basic feasibility determination framework for low-dimensional problems. When FSA-PBnB is extended to large-scale constrained optimization problems, a more intelligent sampling method may be developed to further reduce the computational effort.
Contributors: Tsai, Yi-An (Author) / Pedrielli, Giulia (Thesis advisor) / Mirchandani, Pitu (Committee member) / Mascaro, Giuseppe (Committee member) / Zabinsky, Zelda (Committee member) / Candelieri, Antonio (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

This thesis presents a family of adaptive curvature methods for gradient-based stochastic optimization. In particular, a general algorithmic framework is introduced along with a practical implementation that yields an efficient, adaptive curvature gradient descent algorithm. To this end, a theoretical and practical link between curvature matrix estimation and shrinkage methods for covariance matrices is established. The use of shrinkage improves the estimation accuracy of the curvature matrix when data samples are scarce. This thesis also introduces several insights that result in data- and computation-efficient update equations. Empirical results suggest that the proposed method compares favorably with existing second-order techniques based on the Fisher information or Gauss-Newton matrices and with adaptive stochastic gradient descent methods on both supervised and reinforcement learning tasks.
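The covariance-shrinkage idea can be sketched as follows (an illustrative fragment with hypothetical names, a fixed shrinkage intensity, and a Fisher-style second-moment curvature estimate; the thesis's actual estimator and update equations differ): blend the empirical curvature matrix with a scaled-identity target, then precondition the mean gradient with the result.

```python
import numpy as np

def shrunk_curvature(grads, alpha=0.3):
    """Empirical second moment of per-sample gradients (a Fisher-style
    curvature estimate) shrunk toward a scaled identity target."""
    G = np.asarray(grads)                 # shape: (n_samples, dim)
    S = G.T @ G / len(G)                  # empirical curvature matrix
    mu = np.trace(S) / S.shape[0]         # isotropic shrinkage target
    return (1 - alpha) * S + alpha * mu * np.eye(S.shape[0])

def curvature_step(grads, lr=1.0, alpha=0.3, damping=1e-8):
    """Natural-gradient-style step: solve C d = -lr * mean gradient."""
    C = shrunk_curvature(grads, alpha) + damping * np.eye(len(grads[0]))
    return -lr * np.linalg.solve(C, np.mean(grads, axis=0))
```

Shrinkage keeps the estimate well-conditioned when the number of gradient samples is small relative to the dimension, which is exactly the scarce-data regime the thesis targets.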
Contributors: Barron, Trevor (Author) / Ben Amor, Heni (Thesis advisor) / He, Jingrui (Committee member) / Levihn, Martin (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Tall building developments are spreading across the globe at an ever-increasing rate (www.ctbuh.org). In 1982, the number of ‘tall buildings’ in North America was merely 1,701; this number rose to 26,053 in 2006. The global number of buildings 200 m or more in height has risen from 286 to 602 in the last decade alone. This dissertation concentrates on design optimization of such increasingly modular structures by implementing the AISC 2010 design requirements. Along with a discussion and classification of lateral load resisting systems, a few design optimization cases are also studied. The design optimization results of full-scale three-dimensional buildings subject to multiple design criteria, including stress, serviceability, and dynamic response, are discussed. The tool used for optimization is GS-USA Frame3D© (henceforth referred to as Frame3D). The types of analyses verified against a strong baseline of Abaqus 6.11-1 are stress analysis, modal analysis, and buckling analysis.

The provisions in AISC 2010 allow us to bypass the limit state of flexural buckling in compression checks with a satisfactory buckling analysis. This grants us relief from the long and tedious effective length factor computations. Besides all the AISC design checks, an empirical equation to check beams under high shear and flexure is also enforced.

In this study, we present the details of a tool that can be useful in design optimization - finite element modeling, together with the translation of AISC 2010 design code requirements into components of the FE and design optimization models. A comparative study of designs based on AISC 2010 and on fixed allowable stresses (regardless of the shape of the cross section) is also carried out.
Contributors: Unde, Yogesh (Author) / Rajan, Subramaniam D. (Thesis advisor) / Neithalath, Narayanan (Committee member) / Mobasher, Barzin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Large-scale integration of wind generation introduces planning and operational difficulties due to the intermittent and highly variable nature of wind. In particular, the generation from non-hydro renewable resources is inherently variable and often difficult to predict. Integrating significant amounts of renewable generation thus presents a challenge to power system operators, requiring additional flexibility, which may incur a decrease in conventional generation capacity.

This research investigates algorithms that employ emerging computational advances in system operation policies to improve the flexibility of the electricity industry. The focus of this study is on flexible operation policies for renewable generation, particularly wind generation. Specifically, distributional forecasts of wind farm generation are used to dispatch a “discounted” amount of the wind generation, leaving a margin that can be called upon as reserve if needed. This study presents systematic mathematical formulations that allow the operator to incorporate this flexibility into the operation optimization model, increasing the benefits in the energy and reserve scheduling procedure. Incorporating this formulation into the dispatch optimization problem provides the operator with the ability to use forecasted probability distributions as well as off-line generated policies to choose proper approaches for operating the system in real time. Methods to generate such policies are discussed and a forecast-based approach for developing wind margin policies is presented. The impacts of incorporating such policies in electricity market models are also investigated.
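The “discounted” dispatch idea admits a small sketch (a simplification with hypothetical names; the dissertation's policies are embedded in full energy and reserve scheduling models): given samples from the distributional forecast, dispatch an amount the wind farm will meet or exceed with a chosen probability, and treat the expected surplus as the reserve margin.

```python
def wind_dispatch_policy(forecast_samples, confidence=0.9):
    """Dispatch the largest amount the farm delivers with the given
    probability (an empirical quantile); the expected surplus above the
    dispatch level acts as a reserve margin."""
    xs = sorted(forecast_samples)
    # index of the (1 - confidence) empirical quantile
    idx = round((1 - confidence) * len(xs))
    dispatch = xs[idx]
    reserve = sum(xs) / len(xs) - dispatch
    return dispatch, max(reserve, 0.0)
```

Raising the confidence level discounts the dispatch more heavily, trading energy sold now for a larger margin available as reserve.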
Contributors: Hedayati Mehdiabadi, Mojgan (Author) / Zhang, Junshan (Thesis advisor) / Hedman, Kory (Thesis advisor) / Heydt, Gerald (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Carbon Capture and Storage (CCS) is a climate stabilization strategy that prevents CO2 emissions from entering the atmosphere. Despite its benefits, impactful CCS projects require large investments in infrastructure, which could deter governments from implementing this strategy. In this sense, the development of innovative tools to support large-scale cost-efficient CCS deployment decisions is critical for climate change mitigation. This thesis proposes an improved mathematical formulation for the scalable infrastructure model for CCS (SimCCS), whose main objective is to design a minimum-cost pipe network to capture, transport, and store a target amount of CO2. Model decisions include source, reservoir, and pipe selection, as well as the CO2 amounts to capture, store, and transport. By studying the SimCCS optimal solution and the underlying network topology, new valid inequalities (VIs) are proposed to strengthen the existing mathematical formulation. These constraints seek to improve the quality of the linear relaxation solutions in the branch-and-bound algorithm used to solve SimCCS. Each VI is explained with an intuitive description, its mathematical structure, and examples of the resulting improvements. Further, all VIs are validated by assessing the impact of their elimination from the new formulation. The validated new formulation solves the 72-node Alberta problem up to 7 times faster than the original model. The upgraded model reduces the computation time required to solve SimCCS in 72% of randomly generated test instances, solving SimCCS up to 200 times faster. These formulations can be tested and then applied to enhance variants of SimCCS and general fixed-charge network flow problems. Finally, experience from testing a Benders decomposition approach for SimCCS is discussed and promising directions for efficient solution methods are outlined.
Contributors: Lobo, Loy Joseph (Author) / Sefair, Jorge A (Thesis advisor) / Escobedo, Adolfo (Committee member) / Kuby, Michael (Committee member) / Middleton, Richard (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

In this dissertation, I propose potential techniques to improve the quality-of-service (QoS) of real-time applications in cognitive radio (CR) systems. Unlike best-effort applications, real-time applications, such as audio and video, have QoS requirements that need to be met. There are two different frameworks used to study QoS in the literature, namely, the average-delay and the hard-deadline frameworks. In the former, the scheduling algorithm has to guarantee that the packet's average delay is below a prespecified threshold, while the latter imposes a hard deadline on each packet in the system. In this dissertation, I present joint power allocation and scheduling algorithms for each framework and show their applications in CR systems, which are known to have strict power limitations so as to protect the licensed users from interference.

A common aspect of the two frameworks is the packet service time. Thus, the effect of multiple channels on the service time is studied first. The problem is formulated as an optimal stopping rule problem where it is required to decide at which channel the secondary user (SU) should stop sensing and begin transmission. I provide a closed-form expression for this optimal stopping rule and for the optimal SU transmission power.
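The flavor of such a stopping rule can be illustrated with a textbook special case (assuming i.i.d. Uniform(0,1) channel qualities and ignoring sensing cost and transmission power, unlike the dissertation's model; names are illustrative): compute by backward induction the expected value of continuing to sense, and stop at the first channel whose observed quality exceeds that continuation value.

```python
def stopping_thresholds(n_channels):
    """Backward induction for sequential channel selection with
    Uniform(0,1) qualities: stop at channel k iff its observed quality
    exceeds the expected value of continuing to sense."""
    v = 0.5                      # value of being forced onto the last channel
    thresholds = [0.0]           # last channel: always transmit
    for _ in range(n_channels - 1):
        thresholds.append(v)     # stop iff quality > continuation value
        v = (1 + v * v) / 2      # E[max(U, v)] for U ~ Uniform(0,1)
    return thresholds[::-1]      # thresholds in sensing order

def transmit_channel(qualities, thresholds):
    for k, (q, t) in enumerate(zip(qualities, thresholds)):
        if q > t:
            return k             # stop sensing here and transmit
    return len(qualities) - 1
```

Note the thresholds decrease as fewer channels remain: with little left to gain from continuing, the SU accepts lower-quality channels.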

The average-delay framework is then considered in a single-channel CR system with a base station (BS) that schedules the SUs to minimize the average delay while protecting the primary users (PUs) from harmful interference. One of the contributions of the proposed algorithm is its suitability for heterogeneous-channel systems, where users with statistically low channel quality would otherwise suffer worse delay performance. The proposed algorithm guarantees the prespecified delay performance to each SU without violating the PU's interference constraint.

Finally, in the hard-deadline framework, I propose three algorithms that maximize the system's throughput while guaranteeing that the required percentage of packets is transmitted by their deadlines. The proposed algorithms work in heterogeneous systems where the BS is serving different types of users having real-time (RT) and non-real-time (NRT) data. I show that two of the proposed algorithms have low complexity: the power policies of both the RT and NRT users are given in closed-form expressions, and the scheduler itself is of low complexity.
Contributors: Ewaisha, Ahmed Emad (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Ying, Lei (Committee member) / Bliss, Daniel (Committee member) / Kosut, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Vegetative filter strips (VFS) are an effective methodology for stormwater management, particularly for large urban parking lots. An optimization model for the design of vegetative filter strips that minimizes the amount of land required for stormwater management using the VFS is developed in this study. The resulting optimization model is based upon the kinematic wave equation for overland sheet flow, along with equations defining the cumulative infiltration and infiltration rate.

In addition to the stormwater management function, vegetative filter strips are effective mechanisms for the control of sediment flow and soil erosion from agricultural and urban lands. Erosion is a major problem in areas subject to high runoff or steep slopes across the globe. In order to achieve economy in the design of grass filter strips as a mechanism for sediment control and stormwater management, an optimization model is required that minimizes the land requirements for the VFS. The optimization model presented in this study includes an intricate system of equations combining the equations defining sheet flow on the paved and grassed areas with the equations defining sediment transport over the vegetative filter strip, using a non-linear programming optimization model. A sensitivity analysis over parameters such as soil type and rainfall characteristics was performed to validate the model.
Contributors: Khatavkar, Puneet N (Author) / Mays, Larry W. (Thesis advisor) / Fox, Peter (Committee member) / Wang, Zhihua (Committee member) / Mascaro, Giuseppe (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade-off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems - in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) - whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. Existing optimization algorithms and software are designed only for desktop computers or small cluster computers - machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This, in fact, is why we seek parallel algorithms for setting up and solving large SDPs on large clusters and/or supercomputers.

We propose parallel algorithms for stability analysis of two classes of systems: 1) linear systems with a large number of uncertain parameters; 2) nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs that possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure to map the computation, memory, and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds, and potentially thousands, of processors, and to analyze systems with 100+ dimensional state spaces. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control, such as H-infinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.
Contributors: Kamyar, Reza (Author) / Peet, Matthew (Thesis advisor) / Berman, Spring (Committee member) / Rivera, Daniel (Committee member) / Artemiadis, Panagiotis (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2016