Matching Items (16)
Description
A new method of adaptive mesh generation for the computation of fluid flows is investigated. The method utilizes gradients of the flow solution to adapt the size and stretching of elements or volumes in the computational mesh as is commonly done in the conventional Hessian approach. However, in the new method, higher-order gradients are used in place of the Hessian. The method is applied to the finite element solution of the incompressible Navier-Stokes equations on model problems. Results indicate that a significant efficiency benefit is realized.
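To make the sizing idea concrete, here is a rough 1-D sketch (not the dissertation's method or code) contrasting a conventional Hessian-type size indicator with a higher-order-derivative analogue; the test profile, exponents, and function names are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D sketch of gradient-driven mesh sizing (not the dissertation's
# method): a conventional Hessian-type indicator sizes elements by the second
# derivative of the solution, while a higher-order variant uses the third
# derivative. Exponents follow standard interpolation-error equidistribution.

def hessian_sizes(x, u, eps=1e-12):
    """Target element size h ~ |u''|^(-1/2)."""
    d2u = np.gradient(np.gradient(u, x), x)
    return 1.0 / np.sqrt(np.abs(d2u) + eps)

def higher_order_sizes(x, u, eps=1e-12):
    """Higher-order analogue: h ~ |u'''|^(-1/3)."""
    d3u = np.gradient(np.gradient(np.gradient(u, x), x), x)
    return 1.0 / np.cbrt(np.abs(d3u) + eps)

# A boundary-layer-like profile: both indicators call for small elements near
# the steep gradient at x = 0.2.
x = np.linspace(0.0, 1.0, 401)
u = np.tanh(50.0 * (x - 0.2))
print("min Hessian-based size     :", hessian_sizes(x, u).min())
print("min higher-order-based size:", higher_order_sizes(x, u).min())
```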
Contributors: Shortridge, Randall (Author) / Chen, Kang Ping (Thesis advisor) / Herrmann, Marcus (Thesis advisor) / Wells, Valana (Committee member) / Huang, Huei-Ping (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The rapid escalation of technology and the widespread emergence of modern technological equipment have resulted in the generation of enormous amounts of digital data (in the form of images, videos and text). This has expanded the possibility of solving real-world problems using computational learning frameworks. However, while gathering a large amount of data is cheap and easy, annotating it with class labels is an expensive process in terms of time, labor and human expertise. This has paved the way for research in the field of active learning. Such algorithms automatically select the salient and exemplar instances from large quantities of unlabeled data and are effective in reducing human labeling effort when inducing classification models. To exploit the possible presence of multiple labeling agents, there have been attempts toward a batch mode form of active learning, where a batch of data instances is selected simultaneously for manual annotation. This dissertation is aimed at the development of novel batch mode active learning algorithms to reduce manual effort in training classification models in real-world multimedia pattern recognition applications. Four major contributions are proposed in this work: (i) a framework for dynamic batch mode active learning, where the batch size and the specific data instances to be queried are selected adaptively through a single formulation, based on the complexity of the data stream in question; (ii) a batch mode active learning strategy for fuzzy label classification problems, where there is inherent imprecision and vagueness in the class label definitions; (iii) batch mode active learning algorithms based on convex relaxations of an NP-hard integer quadratic programming (IQP) problem, with guaranteed bounds on the solution quality; and (iv) an active matrix completion algorithm and its application to several variants of the active learning problem (transductive active learning, multi-label active learning, active feature acquisition and active learning for regression). These contributions are validated on face recognition and facial expression recognition problems (which are commonly encountered in real-world applications like robotics, security and assistive technology for the blind and the visually impaired) and also on collaborative filtering applications like movie recommendation.
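As an informal illustration of batch selection, the sketch below combines an uncertainty score with a greedy diversity term; it is a generic baseline under assumed data and parameters, not the dissertation's dynamic, fuzzy-label, convex-relaxation, or matrix-completion formulations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import pairwise_distances

def select_batch(X_labeled, y_labeled, X_pool, batch_size=5, beta=1.0):
    """Greedy uncertainty-plus-diversity batch selection (a generic baseline)."""
    clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_pool)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)  # uncertainty score
    chosen = []
    for _ in range(batch_size):
        if chosen:
            # penalize candidates close to already-chosen points (diversity)
            d = pairwise_distances(X_pool, X_pool[chosen]).min(axis=1)
        else:
            d = np.ones(len(X_pool))
        score = entropy + beta * d
        score[chosen] = -np.inf                               # never re-select
        chosen.append(int(np.argmax(score)))
    return chosen

# Toy labeled set and unlabeled pool (synthetic, for illustration only).
rng = np.random.default_rng(0)
X_l = rng.normal(size=(20, 4))
y_l = np.tile([0, 1], 10)
X_p = rng.normal(size=(200, 4))
print(select_batch(X_l, y_l, X_p, batch_size=5))
```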
Contributors: Chakraborty, Shayok (Author) / Panchanathan, Sethuraman (Thesis advisor) / Balasubramanian, Vineeth N. (Committee member) / Li, Baoxin (Committee member) / Mittelmann, Hans (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open and closed loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, a gain scheduled Glover McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.
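For reference, the baseline droop law mentioned above can be sketched in a few lines; the gains and operating values here are illustrative assumptions rather than the thesis hardware settings, and the Glover McFarlane designs themselves are not reproduced.

```python
import numpy as np

def droop_setpoints(P, Q, f_nom=60.0, V_nom=1.0, m_p=0.0005, m_q=0.05):
    """Constant-gain droop law used as the baseline in grid-forming inverter
    control: frequency and voltage setpoints fall linearly with measured
    active and reactive power (illustrative gains, not the thesis values)."""
    f_ref = f_nom - m_p * P          # P-f droop
    V_ref = V_nom - m_q * Q          # Q-V droop
    return f_ref, V_ref

# Sweep the inverter loading and inspect the droop characteristic.
P = np.linspace(0.0, 1000.0, 5)      # W (illustrative)
Q = np.linspace(0.0, 10.0, 5)        # var (illustrative)
for p, q in zip(P, Q):
    print(droop_setpoints(p, q))
```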
Contributors: Steenis, Joel (Author) / Ayyanar, Raja (Thesis advisor) / Mittelmann, Hans (Committee member) / Tsakalis, Konstantinos (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work aims to address the design optimization of bio-inspired locomotive devices in collective swimming by developing a computational methodology which combines surrogate-based optimization with high fidelity fluid-structure interaction (FSI) simulations of thunniform swimmers. Three main phases highlight the contribution and novelty of the current work. The first phase includes the development and benchmarking of a constrained surrogate-based optimization algorithm which is appropriate to the current design problem. Additionally, new FSI techniques, such as a volume-conservation scheme, have been developed to enhance the accuracy and speed of the simulations. The second phase involves an investigation of the optimized hydrodynamics of a solitary accelerating self-propelled thunniform swimmer during start-up. The third phase extends the analysis to include the optimized hydrodynamics of accelerating swimmers in phalanx schools. Future work includes extending the analysis to the optimized hydrodynamics of steady-state and accelerating swimmers in a diamond-shaped school. The results of the first phase indicate that the proposed optimization algorithm maintains competitive performance when compared to other gradient-based and gradient-free methods in dealing with expensive simulation-based black-box optimization problems with constraints. In addition, the proposed optimization algorithm is capable of ensuring strictly feasible candidates during the optimization procedure, which is a desirable property in applied engineering problems where design variables must remain feasible for simulations or experiments not to fail. The results of the second phase indicate that the optimized kinematic gait of a solitary accelerating swimmer generates the reverse Karman vortex street associated with high propulsive efficiency. Moreover, in solitary swimming, the efficiency of sub-optimum modes is found to increase with both the tail amplitude and the effective flapping length of the swimmer, and a new scaling law is proposed to capture these trends. Results of the third phase indicate that the optimal midline kinematics in accelerating phalanx schools resemble those of accelerating solitary swimmers. The optimal separation distance in a phalanx school is shown to be around 2L (where L is the swimmer's total length). Furthermore, separation distance is shown to have a stronger effect, ceteris paribus, on the propulsion efficiency of a school when compared to phase synchronization.
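A minimal sketch of a surrogate-based optimization loop, with an inexpensive stand-in objective and constraint in place of the FSI simulations; the objective, constraint, bounds, and RBF settings are hypothetical, and the loop omits the feasibility-preserving machinery described above.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Surrogate-based optimization sketch: fit a cheap RBF model to the sampled
# objective, minimize it subject to an inequality constraint, then evaluate the
# "expensive" model once at the suggested point and refit.
def expensive_objective(x):            # pretend this is one costly FSI run
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

def feasibility(x):                    # feasible when g(x) >= 0
    return 1.0 - (x[0] + x[1])

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(8, 2))                 # initial design of experiments
y = np.array([expensive_objective(x) for x in X])

for _ in range(6):
    surrogate = RBFInterpolator(X, y, smoothing=1e-6)  # cheap model of the objective
    res = minimize(lambda x: float(surrogate(x[None, :])[0]),
                   x0=X[np.argmin(y)],
                   bounds=[(0.0, 1.0), (0.0, 1.0)],
                   constraints=[{"type": "ineq", "fun": feasibility}])
    X = np.vstack([X, res.x])                          # one true-model evaluation
    y = np.append(y, expensive_objective(res.x))

print("best sampled point:", X[np.argmin(y)], "objective:", y.min())
```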
Contributors: Abouhussein, Ahmed (Author) / Peet, Yulia (Thesis advisor) / Adrian, Ronald (Committee member) / Kim, Jeonglae (Committee member) / Kasbaoui, Mohamed (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The emerging multimodal mobility as a service (MaaS) and connected and automated mobility (CAM) are expected to improve the individual travel experience and overall transportation system performance in various aspects, such as convenience, safety, and reliability. There have been extensive efforts in the literature devoted to enhancing existing methodologies and tools, and developing new ones, to investigate the impacts and potential of CAM systems. Due to the hierarchical nature of CAM systems and the associated, intrinsically correlated human factors and physical infrastructures at various resolutions, simply combining components from different levels into a single model may be practically infeasible and computationally prohibitive at the operation and decision stages. One of the greatest challenges in existing studies is to construct a theoretically sound and computationally efficient architecture such that CAM system modeling can be performed in an inherently consistent cross-resolution manner. This research aims to contribute to the modeling of CAM systems on layered transportation networks, with a special focus on the following three aspects: (1) a layered CAM system architecture with tight network and modeling consistency, in which different levels of tasks can be efficiently performed at dedicated layers; (2) cross-resolution traffic state estimation in CAM systems using heterogeneous observations; and (3) integrated city logistics operation optimization in CAM for improving system performance.
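As a toy illustration of estimating traffic state from heterogeneous observations, the sketch below recovers nonnegative path flows from observed link counts with nonnegative least squares; the network, counts, and variable names are invented for the example and do not come from the dissertation.

```python
import numpy as np
from scipy.optimize import nnls

# Toy path-flow estimation: given a link-path incidence matrix and observed
# link counts (one kind of observation a cross-resolution estimator might fuse),
# recover nonnegative path flows by NNLS.
A = np.array([[1, 0, 1],     # link 1 is used by paths 1 and 3
              [1, 1, 0],     # link 2 is used by paths 1 and 2
              [0, 1, 1]],    # link 3 is used by paths 2 and 3
             dtype=float)
link_counts = np.array([700.0, 900.0, 600.0])   # vehicles/hour observed on links

path_flows, residual = nnls(A, link_counts)
print("estimated path flows:", path_flows, "residual:", residual)
```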
Contributors: Lu, Jiawei (Author) / Zhou, Xuesong (Thesis advisor) / Pendyala, Ram (Committee member) / Xue, Guoliang (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This dissertation develops a second order accurate approximation to the magnetic resonance (MR) signal model used in the PARSE (Parameter Assessment by Retrieval from Single Encoding) method to recover information about the reciprocal of the spin-spin relaxation time function (R2*) and frequency offset function (w) in addition to the typical steady-state transverse magnetization (M) from single-shot magnetic resonance imaging (MRI) scans. Sparse regularization on an approximation to the edge map is used to solve the associated inverse problem. Several studies are carried out for both one- and two-dimensional test problems, including comparisons to the first order approximation method, as well as the first order approximation method with joint sparsity across multiple time windows enforced. The second order accurate model provides increased accuracy while reducing the amount of data required to reconstruct an image when compared to piecewise constant in time models. A key component of the proposed technique is the use of fast transforms for the forward evaluation. It is determined that the second order model is capable of providing accurate single-shot MRI reconstructions, but requires an adequate coverage of k-space to do so. Alternative data sampling schemes are investigated in an attempt to improve reconstruction with single-shot data, as current trajectories do not provide ideal k-space coverage for the proposed method.
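The sketch below shows the sparsity-regularized recovery mechanism on a 1-D toy problem (ISTA with soft thresholding on undersampled Fourier data); the dissertation instead penalizes an approximation to the edge map within the second-order PARSE model, so treat this only as an illustration of the regularization step.

```python
import numpy as np

# l1-regularized recovery of a sparse signal from undersampled Fourier data via
# ISTA (toy 1-D data; signal, mask, and weights are illustrative assumptions).
rng = np.random.default_rng(2)
n = 128
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(0.0, 1.0, 8)  # sparse signal

mask = rng.random(n) < 0.4                    # keep ~40% of k-space samples
y = np.fft.fft(x_true)[mask]                  # undersampled Fourier data

def A(x):                                     # masked forward DFT
    return np.fft.fft(x)[mask]

def At(z):                                    # adjoint of the masked DFT
    full = np.zeros(n, dtype=complex)
    full[mask] = z
    return np.fft.ifft(full).real * n

lam, step = 0.5, 1.0 / n                      # step <= 1/||A^H A|| since ||A^H A|| <= n
x = np.zeros(n)
for _ in range(300):                          # ISTA: gradient step + soft threshold
    x = x - step * At(A(x) - y)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```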
Contributors: Jesse, Aaron Mitchel (Author) / Platte, Rodrigo (Thesis advisor) / Gelb, Anne (Committee member) / Kostelich, Eric (Committee member) / Mittelmann, Hans (Committee member) / Moustaoui, Mohamed (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In many fields one needs to build predictive models for a set of related machine learning tasks, such as information retrieval, computer vision and biomedical informatics. Traditionally these tasks are treated independently and the inference is done separately for each task, which ignores important connections among the tasks. Multi-task learning aims at simultaneously building models for all tasks in order to improve the generalization performance, leveraging the inherent relatedness of these tasks. In this thesis, I first propose a clustered multi-task learning (CMTL) formulation, which simultaneously learns task models and performs task clustering. I provide theoretical analysis to establish the equivalence between the CMTL formulation and alternating structure optimization, which learns a shared low-dimensional hypothesis space for different tasks. I then present two real-world biomedical informatics applications which can benefit from multi-task learning. In the first application, I study the disease progression problem and present multi-task learning formulations for disease progression. In these formulations, the prediction at each time point is a regression task, and multiple tasks at different time points are learned simultaneously, leveraging the temporal smoothness among the tasks. The proposed formulations have been tested extensively on predicting the progression of Alzheimer's disease, and experimental results demonstrate the effectiveness of the proposed models. In the second application, I present a novel data-driven framework for densifying electronic medical records (EMR) to overcome the sparsity problem in predictive modeling using EMR. The densification of each patient's record is a learning task, and the proposed algorithm densifies all patients' records simultaneously. As such, the densification of one patient's record leverages useful information from other patients.
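A small sketch of multi-task regression with a temporal-smoothness penalty, in the spirit of the disease-progression formulation; the data, penalty weights, and plain gradient-descent solver are assumptions for illustration, not the thesis algorithms.

```python
import numpy as np

# One regression task per time point; adjacent task weights are encouraged to
# stay close (temporal smoothness). Solved here by plain gradient descent on a
# synthetic problem.
rng = np.random.default_rng(3)
n, d, T = 100, 10, 5                                    # samples, features, time points
X = rng.normal(size=(n, d))
W_true = np.cumsum(rng.normal(scale=0.3, size=(T, d)), axis=0)   # smooth in time
Y = X @ W_true.T + 0.1 * rng.normal(size=(n, T))

lam_ridge, lam_smooth, lr = 0.1, 1.0, 0.1
W = np.zeros((T, d))
for _ in range(500):
    grad = (W @ X.T - Y.T) @ X / n + lam_ridge * W      # squared loss + ridge
    diff = np.diff(W, axis=0)                           # w_{t+1} - w_t
    grad_smooth = np.zeros_like(W)
    grad_smooth[:-1] -= diff                            # gradient of the fused penalty
    grad_smooth[1:] += diff
    W -= lr * (grad + lam_smooth * grad_smooth)

print("weight recovery error:", np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```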
Contributors: Zhou, Jiayu (Author) / Ye, Jieping (Thesis advisor) / Mittelmann, Hans (Committee member) / Li, Baoxin (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The Kuramoto model is an archetypal model for studying synchronization in groups of nonidentical oscillators, where oscillators are imbued with their own frequency and coupled with other oscillators through a network of interactions. As the coupling strength increases, there is a bifurcation to complete synchronization where all oscillators move with the same frequency and show a collective rhythm. Kuramoto-like dynamics are considered a relevant model for instabilities of the AC power grid, which operates in synchrony under standard conditions but exhibits, in a state of failure, segmentation of the grid into desynchronized clusters.

In this dissertation the minimum coupling strength required to ensure total frequency synchronization in a Kuramoto system, called the critical coupling, is investigated. For coupling strengths below the critical coupling, clusters of oscillators form where oscillators within a cluster are on average oscillating with the same long-term frequency. A unified order-parameter-based approach is developed to create approximations of the critical coupling. Some of the new approximations provide strict lower bounds for the critical coupling. In addition, these approximations allow for predictions of the partially synchronized clusters that emerge in the bifurcation from the synchronized state.

Merging the order parameter approach with graph theoretical concepts leads to a characterization of this bifurcation as a weighted graph partitioning problem on arbitrary networks, which then leads to an optimization problem that can efficiently estimate the partially synchronized clusters. Numerical experiments on random Kuramoto systems show the high accuracy of these methods. An interpretation of the methods in the context of power systems is provided.
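For orientation, a minimal all-to-all Kuramoto simulation that sweeps the coupling strength and reports the order parameter; the parameters are illustrative and the brute-force sweep is not the order-parameter-based approximation developed in the dissertation.

```python
import numpy as np

# All-to-all Kuramoto model: integrate the phases with forward Euler, compute
# the order parameter r, and sweep the coupling strength K to see the onset of
# synchronization numerically.
def kuramoto_order_parameter(K, omega, t_end=50.0, dt=0.01, rng=None):
    n = len(omega)
    theta = (rng or np.random.default_rng(0)).uniform(0, 2 * np.pi, n)
    for _ in range(int(t_end / dt)):
        z = np.exp(1j * theta).mean()                 # complex order parameter
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(4)
omega = rng.normal(0.0, 1.0, 200)                     # natural frequencies
for K in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(f"K = {K:.1f}, r = {kuramoto_order_parameter(K, omega, rng=rng):.3f}")
```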
Contributors: Gilg, Brady (Author) / Armbruster, Dieter (Thesis advisor) / Mittelmann, Hans (Committee member) / Scaglione, Anna (Committee member) / Strogatz, Steven (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Global optimization (programming) has been attracting the attention of researchers for almost a century. Because linear programming (LP) and mixed integer linear programming (MILP) were well studied early on, MILP methods and software tools have improved greatly in efficiency in the past few years. They are now fast and robust even for problems with millions of variables. It is therefore desirable to use MILP software to solve mixed integer nonlinear programming (MINLP) problems. For an MINLP problem to be solved by an MILP solver, its nonlinear functions must be transformed into linear ones. The most common method for this transformation is piecewise linear approximation (PLA). This dissertation will summarize the types of optimization and the most important tools and methods, and will discuss the PLA tool in depth. PLA will be done using nonuniform partitioning of the domain of the variables involved in the function being approximated. Partial PLA models, which approximate only parts of a complicated optimization problem, will also be introduced. Computational experiments will be carried out, and the results will show that nonuniform partitioning and partial PLA can be beneficial.
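A small numerical sketch of why nonuniform partitioning helps PLA: breakpoints placed by a simple curvature heuristic versus uniform breakpoints for an assumed test function; this is not the dissertation's partitioning scheme or its MILP formulation.

```python
import numpy as np

# Piecewise linear approximation (PLA) with uniform versus nonuniform
# breakpoints for a nonlinear function. Nonuniform breakpoints are placed by
# equidistributing |f''|^(1/2), a standard curvature heuristic.
f = lambda x: np.exp(x) * np.sin(3 * x)
a, b, n_break = 0.0, 2.0, 9

x_dense = np.linspace(a, b, 2001)
uniform = np.linspace(a, b, n_break)

curv = np.abs(np.gradient(np.gradient(f(x_dense), x_dense), x_dense)) ** 0.5
cdf = np.cumsum(curv)
cdf /= cdf[-1]
nonuniform = np.interp(np.linspace(0, 1, n_break), cdf, x_dense)

def pla_error(breaks):
    approx = np.interp(x_dense, breaks, f(breaks))   # linear interpolation = PLA
    return np.max(np.abs(approx - f(x_dense)))

print("uniform PLA max error   :", pla_error(uniform))
print("nonuniform PLA max error:", pla_error(nonuniform))
```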
Contributors: Alkhalifa, Loay (Author) / Mittelmann, Hans (Thesis advisor) / Armbruster, Hans (Committee member) / Escobedo, Adolfo (Committee member) / Renaut, Rosemary (Committee member) / Sefair, Jorge (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
For the last 50 years, oscillator modeling in ranging systems has received considerable attention. Many components in a navigation system, such as the master oscillator driving the receiver system as well as the master oscillator in the transmitting system, contribute significantly to timing errors. Algorithms in the navigation processor must be able to predict and compensate for such errors to achieve a specified accuracy. While much work has been done on the fundamentals of these problems, the thinking on these problems has not progressed. On the hardware end, the designers of local oscillators focus on synthesized frequency and loop noise bandwidth. This does nothing to mitigate or reduce in-band frequency stability degradation. Similarly, there are no systematic methods to accommodate phase and frequency anomalies such as clock jumps. Phase locked loops are fundamentally control systems, and while control theory has seen significant advancement over the last 30 years, the design of timekeeping sources has not advanced beyond classical control. On the software end, single- or two-state oscillator models are typically embedded in a Kalman filter to alleviate time errors between the transmitter and receiver clocks. Such models are appropriate for short-term time accuracy, but insufficient for long-term time accuracy. Additionally, flicker frequency noise may be present in oscillators, and it presents mathematical modeling complications.

This work proposes novel H∞ control methods to address the shortcomings in the standard design of timekeeping phase locked loops. Such methods allow the designer to address frequency stability degradation as well as high phase/frequency dynamics. Additionally, finite-dimensional approximants of flicker frequency noise that are more representative of the truth system than the traditional Gauss-Markov approach are derived. Last, to maintain timing accuracy in a wide variety of operating environments, novel banks of adaptive extended Kalman filters are used to address both stochastic and dynamic uncertainty.
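As background for the two-state oscillator model in a Kalman filter mentioned above, here is a minimal sketch of such a filter; the noise intensities and measurement model are illustrative assumptions, not values from this work.

```python
import numpy as np

# Two-state clock model (time offset and frequency offset) in a standard
# Kalman filter, the kind of model the abstract describes as adequate only for
# short-term accuracy.
dt = 1.0                                   # s, update interval
F = np.array([[1.0, dt], [0.0, 1.0]])      # time offset integrates frequency offset
q1, q2 = 1e-19, 1e-21                      # white FM / random-walk FM intensities (assumed)
Q = np.array([[q1 * dt + q2 * dt**3 / 3, q2 * dt**2 / 2],
              [q2 * dt**2 / 2,            q2 * dt]])
H = np.array([[1.0, 0.0]])                 # only the time offset is observed
R = np.array([[1e-16]])                    # measurement noise variance (s^2)

x = np.zeros(2)                            # [time offset (s), frequency offset (s/s)]
P = np.diag([1e-12, 1e-16])

rng = np.random.default_rng(5)
true_x = np.array([5e-8, 1e-10])
for _ in range(100):
    true_x = F @ true_x                                  # truth propagation
    z = H @ true_x + rng.normal(0.0, np.sqrt(R[0, 0]))   # noisy time-offset measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated clock state:", x, "truth:", true_x)
```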
Contributors: Echols, Justin A (Author) / Bliss, Daniel W (Thesis advisor) / Tsakalis, Konstantinos S (Committee member) / Berman, Spring (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2020