Matching Items (6)
Description

Increasing interest in individualized treatment strategies for prevention and treatment of health disorders has created a new application domain for dynamic modeling and control. Standard population-level clinical trials, while useful, are not the most suitable vehicle for understanding the dynamic relationship between dosage changes and patient response. A secondary analysis of intensive longitudinal data from a naltrexone intervention for fibromyalgia examined in this dissertation shows the promise of system identification and control. This includes data-centric identification methods such as Model-on-Demand, which are attractive techniques for estimating nonlinear dynamical systems from noisy data. These methods rely on generating a local function approximation using a database of regressors at the current operating point, with this process repeated at every new operating condition. This dissertation examines generating input signals for data-centric system identification by developing a novel framework of geometric distribution of regressors and time-indexed output points, in the finite-dimensional space, to generate sufficient support for the estimator. The input signals are generated while imposing “patient-friendly” constraints on the design as a means to operationalize single-subject clinical trials. These optimization-based problem formulations are examined for linear time-invariant systems and block-structured Hammerstein systems, and the results are contrasted with alternative designs based on Weyl's criterion. Numerical solution of the resulting nonconvex optimization problems is proposed through semidefinite programming approaches for polynomial optimization and nonlinear programming methods. It is shown that useful bounds on the objective function can be calculated through relaxation procedures, and that the data-centric formulations are amenable to sparse polynomial optimization. In addition, input design problems are formulated for achieving a desired output and a specified input spectrum. Numerical examples illustrate the benefits of the input signal design formulations, including an example of a hypothetical clinical trial using the drug gabapentin. In the final part of the dissertation, the mixed logical dynamical framework for hybrid model predictive control is extended to incorporate a switching time strategy, where decisions are made at some integer multiple of the sample time, and manipulation of only one input at a given sample time among multiple inputs. These considerations are important for clinical use of the algorithm.
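To illustrate the general flavor of data-centric estimation, the following is a minimal sketch of locally weighted regression at a query point, not the dissertation's Model-on-Demand implementation; the nearest-neighbor bandwidth, tricube weights, and regressor choice are assumptions made for illustration.

```python
# Illustrative sketch: at each query point, fit a locally weighted linear
# model to the k nearest regressors stored in a database.
import numpy as np

def mod_predict(X_db, y_db, x_query, k=20):
    """Predict y at x_query from a regressor database (X_db, y_db)."""
    d = np.linalg.norm(X_db - x_query, axis=1)        # distances to query
    idx = np.argsort(d)[:k]                           # k nearest neighbors
    h = d[idx].max() + 1e-12                          # local bandwidth
    w = (1.0 - (d[idx] / h) ** 3) ** 3                # tricube weights
    Xl = np.hstack([np.ones((k, 1)), X_db[idx]])      # local affine model
    W = np.diag(np.sqrt(w))                           # sqrt weights for WLS
    theta, *_ = np.linalg.lstsq(W @ Xl, W @ y_db[idx], rcond=None)
    return np.r_[1.0, x_query] @ theta                # local prediction

# Example: one-step-ahead prediction with regressor phi = [y(t-1), u(t-1)]
rng = np.random.default_rng(0)
Phi = rng.uniform(-1, 1, size=(500, 2))
y = 0.8 * Phi[:, 0] + np.tanh(Phi[:, 1]) + 0.01 * rng.standard_normal(500)
print(mod_predict(Phi, y, np.array([0.2, -0.5])))
```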
Contributors: Deśapāṇḍe, Sunīla (Author) / Rivera, Daniel E. (Thesis advisor) / Peet, Matthew M. (Committee member) / Si, Jennie (Committee member) / Tsakalis, Konstantinos S. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Control engineering offers a systematic and efficient approach to optimizing the effectiveness of individually tailored treatment and prevention policies, also known as adaptive or “just-in-time” behavioral interventions. These types of interventions represent promising strategies for addressing many significant public health concerns. This dissertation explores the development of decision algorithms for adaptive sequential behavioral interventions using dynamical systems modeling, control engineering principles and formal optimization methods. A novel gestational weight gain (GWG) intervention involving multiple intervention components and featuring a pre-defined, clinically relevant set of sequence rules serves as an excellent example of a sequential behavioral intervention; it is examined in detail in this research.

 

A comprehensive dynamical systems model for the GWG behavioral intervention is developed, which demonstrates how to integrate a mechanistic energy balance model with dynamical formulations of behavioral models such as the Theory of Planned Behavior and self-regulation. The self-regulation component is further refined with advanced controller formulations. These model-based controller approaches give the user significant flexibility in describing a participant's self-regulatory behavior through the tuning of adjustable controller parameters. The dynamic simulation model demonstrates proof of concept for how self-regulation and adaptive interventions influence GWG, how intra-individual and inter-individual variability play a critical role in determining intervention outcomes, and how candidate decision rules can be evaluated.
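As a rough illustration of the mechanistic energy balance component, the sketch below uses a first-order energy balance with assumed parameter values; it is not the dissertation's validated GWG model.

```python
# Minimal sketch (assumed parameter values): daily weight change is
# proportional to the gap between energy intake EI and expenditure EE.
import numpy as np

RHO = 7700.0      # assumed energy density of tissue change, kcal per kg
K_EE = 30.0       # assumed maintenance expenditure, kcal per kg per day

def simulate_weight(w0, intake_kcal_per_day, days):
    """Euler-integrate dW/dt = (EI - K_EE * W) / RHO over `days`."""
    w = np.empty(days + 1)
    w[0] = w0
    for t in range(days):
        ee = K_EE * w[t]                      # expenditure scales with weight
        w[t + 1] = w[t] + (intake_kcal_per_day[t] - ee) / RHO
    return w

# Example: a 70 kg participant with a 300 kcal/day surplus over 30 weeks
days = 210
intake = np.full(days, K_EE * 70.0 + 300.0)
print(simulate_weight(70.0, intake, days)[-1])   # final weight in kg
```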

 

Furthermore, a novel intervention decision paradigm using a hybrid model predictive control framework is developed to generate sequential decision policies in closed loop. Clinical considerations are systematically taken into account through a user-specified dosage sequence table corresponding to the sequence rules, constraints enforcing the adjustment of only one input at a time, and a switching time strategy accounting for the difference in frequency between intervention decision points and sampling intervals. Simulation studies illustrate the potential usefulness of the intervention framework.
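The clinical constraints described above can be sketched with a toy one-step look-ahead rule; the dosage table, dose-response prediction, and selection logic here are hypothetical, whereas the actual work handles them within a hybrid MPC formulation.

```python
# Illustrative sketch: dosages come from a pre-defined sequence table, and
# only one intervention component may change per decision point (adjacent
# table rows differ in a single component by construction).
import numpy as np

DOSAGE_TABLE = np.array([          # hypothetical [component1, component2] levels
    [0, 0], [1, 0], [1, 1], [2, 1], [2, 2],
])

def next_dosage(current_row, predict_outcome, target):
    """Pick the current row or an adjacent one whose predicted outcome
    is closest to the target."""
    candidates = [current_row]
    for step in (-1, 1):
        j = current_row + step
        if 0 <= j < len(DOSAGE_TABLE):
            candidates.append(j)
    errors = [abs(predict_outcome(DOSAGE_TABLE[j]) - target) for j in candidates]
    return candidates[int(np.argmin(errors))]

# Example with a hypothetical linear dose-response prediction (weekly GWG, kg)
predict = lambda d: 0.35 + 0.05 * d[0] + 0.08 * d[1]
print(next_dosage(1, predict, target=0.30))
```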

The final part of the dissertation presents a model scheduling strategy that relies on gain scheduling to address nonlinearities in the model, and introduces a cascade filter design for a dual-rate control system to address scenarios with variable sampling rates. These extensions are important for addressing real-life scenarios in the GWG intervention.
Contributors: Dong, Yuwen (Author) / Rivera, Daniel E. (Thesis advisor) / Dai, Lenore (Committee member) / Forzani, Erica (Committee member) / Rege, Kaushal (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Contemporary methods for dynamic security assessment (DSA) mainly rely on time domain simulations to explore the influence of large disturbances in a power system. These methods are computationally intensive, especially when the system operating point changes continually. The trajectory sensitivity method, when implemented and utilized as a complement to the existing DSA time domain simulation routine, can provide valuable insights into the system variation in response to system parameter changes.

The implementation of the trajectory sensitivity analysis is based on an open source power system analysis toolbox called PSAT. Eight categories of sensitivity elements have been implemented and tested. The accuracy assessment of the implementation demonstrates the validity of both the theory and the implementation.

The computational burden introduced by the additional sensitivity equations is relieved by two innovative methods: one employs a cluster to perform the sensitivity calculations in parallel; the other develops a modified very dishonest Newton method in conjunction with the latest sparse matrix processing technology.

The relation between the linear approximation accuracy and the perturbation size is also studied numerically. It is found that there is a fixed connection between the linear approximation accuracy and the perturbation size, so this finding can serve as a general guide for evaluating the accuracy of the linear approximation.

The applicability of the trajectory sensitivity approach to a large realistic network has been demonstrated in detail. This research applies the trajectory sensitivity analysis method to the Western Electricity Coordinating Council (WECC) system. Several typical power system dynamic security problems have been addressed, including the transient angle stability problem, the voltage stability problem considering load modeling uncertainty, and the transient stability constrained interface real power flow limit calculation. In addition, a method based on the trajectory sensitivity approach and model predictive control has been developed for determining an under-frequency load shedding strategy for real-time stability assessment. These applications demonstrate the efficacy and accuracy of the trajectory sensitivity method in handling these traditional power system stability problems.
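The core idea of trajectory sensitivity and the linear approximation studied above can be sketched on a toy single-machine system; here central finite differences stand in for the sensitivity equations integrated in the actual PSAT-based implementation, and all parameter values are assumptions.

```python
# Illustrative sketch (toy one-machine swing equation, not the WECC model):
# approximate the trajectory sensitivity dx/dp and form the linear prediction
#   x(t; p0 + dp) ~ x(t; p0) + S(t) * dp.
import numpy as np

def simulate(p, x0=np.array([0.3, 0.0]), dt=0.01, steps=500):
    """Euler simulation of M*dw/dt = Pm - p*sin(delta) - D*w."""
    M, D, Pm = 0.1, 0.05, 0.8            # assumed machine parameters
    x = np.empty((steps + 1, 2))
    x[0] = x0
    for k in range(steps):
        delta, w = x[k]
        x[k + 1] = [delta + dt * w,
                    w + dt * (Pm - p * np.sin(delta) - D * w) / M]
    return x

p0, dp = 1.0, 0.05
S = (simulate(p0 + 1e-4) - simulate(p0 - 1e-4)) / 2e-4   # sensitivity dx/dp
x_linear = simulate(p0) + S * dp                          # linear prediction
x_exact = simulate(p0 + dp)
print(np.abs(x_linear - x_exact).max())                   # approximation error
```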
Contributors: Hou, Guanji (Author) / Vittal, Vijay (Thesis advisor) / Heydt, Gerald (Committee member) / Tylavsky, Daniel (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

One necessary condition for the two-pass risk premium estimator to be consistent and asymptotically normal is that the beta matrix in a proposed linear asset pricing model has full column rank. I first investigate the asymptotic properties of the risk premium estimators and the related t-test and Wald test statistics when the full rank condition fails. I show that the beta risks of useless factors, or of multiple proxy factors for a true factor, are priced more often than they should be at the nominal size in asset pricing models that omit some true factors, while under the null hypothesis that the risk premiums of the true factors are equal to zero, the beta risks of the true factors are priced less often than the nominal size. The simulation results are consistent with the theoretical findings. Hence, factor selection in a proposed factor model should not be based solely on estimated risk premiums. In response to this problem, I propose an alternative estimation of the underlying factor structure. Specifically, I propose to use the linear combination of factors weighted by the eigenvectors of the inner product of the estimated beta matrix. I further propose a new method to estimate the rank of the beta matrix in a factor model. For this method, the idiosyncratic components of asset returns are allowed to be correlated both over different cross-sectional units and over different time periods. The estimator I propose is easy to use because it is computed from the eigenvalues of the inner product of an estimated beta matrix. Simulation results show that the proposed method works well even in small samples. The analysis of US individual stock returns suggests that there are six common risk factors among the thirteen factor candidates used. The analysis of portfolio returns reveals that the estimated number of common factors changes depending on how the portfolios are constructed; the number of risk sources found from the analysis of portfolio returns is generally smaller than the number found in individual stock returns.
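A minimal sketch of the eigenvalue-based rank idea on simulated data follows; the data-generating process and the simple threshold rule are illustrative assumptions, not the estimator's formal criterion.

```python
# Illustrative sketch: estimate betas asset-by-asset with time-series OLS,
# then inspect the eigenvalues of the inner product B'B to gauge the rank
# of the beta matrix.
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 600, 50, 4                       # periods, assets, candidate factors
true_rank = 2
F = rng.standard_normal((T, K))            # candidate factors
B_true = rng.standard_normal((N, true_rank)) @ rng.standard_normal((true_rank, K))
R = F @ B_true.T + 0.5 * rng.standard_normal((T, N))   # asset returns

# First pass: time-series OLS of each asset's returns on the factors
X = np.hstack([np.ones((T, 1)), F])
B_hat = np.linalg.lstsq(X, R, rcond=None)[0][1:].T      # N x K estimated betas

eigvals = np.linalg.eigvalsh(B_hat.T @ B_hat / N)       # inner product B'B / N
print(np.sort(eigvals)[::-1])                            # large values ~ rank
print("estimated rank:", int((eigvals > 0.1 * eigvals.max()).sum()))
```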
Contributors: Wang, Na (Author) / Ahn, Seung C. (Thesis advisor) / Kallberg, Jarl G. (Committee member) / Liu, Crocker H. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A systematic top-down approach to minimize risk and maximize the profits of an investment over a given period of time is proposed. Macroeconomic factors such as Gross Domestic Product (GDP), Consumer Price Index (CPI), Outstanding Consumer Credit, Industrial Production Index, Money Supply (MS), Unemployment Rate, and the Ten-Year Treasury are used to predict/estimate asset (sector ETF) returns. Fundamental ratios of individual stocks are used to predict the stock returns. An a priori known cash-flow sequence is assumed available for investment. Given the importance of sector performance to stock performance, sector-based Exchange Traded Funds (ETFs) for the S&P and Dow Jones are considered for wealth allocation. Mean-variance optimization with risk and return constraints is used to distribute the wealth in individual sectors among the selected stocks. The results presented should be viewed as providing an outer control/decision loop generating sector target allocations that ultimately drive an inner control/decision loop focusing on stock selection. Receding horizon control (RHC) ideas are exploited to pose and solve two relevant constrained optimization problems. First, the classic problem of wealth maximization subject to risk constraints (as measured by a metric on the covariance matrices) is considered. Special consideration is given to an optimization problem that attempts to minimize the peak risk over the prediction horizon while trying to track a wealth objective. It is concluded that this approach may be particularly beneficial during downturns, appreciably limiting the downside while providing most of the upside during upturns. Investment in stocks during upturns and in sector ETFs during downturns is found to be profitable.
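The mean-variance allocation step can be sketched as follows, using synthetic inputs and an equality-constrained formulation only; the thesis's formulation adds risk/return inequality constraints and a receding-horizon update, and short positions are allowed in this sketch.

```python
# Minimal sketch: minimize portfolio variance subject to a budget constraint
# and a target return, solved through the KKT system of the quadratic program.
import numpy as np

def mean_variance_weights(mu, Sigma, target_return):
    """Solve min w'Sigma w  s.t.  1'w = 1,  mu'w = target_return."""
    n = len(mu)
    A = np.vstack([np.ones(n), mu])                  # equality constraints
    KKT = np.block([[2 * Sigma, A.T],
                    [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [1.0, target_return]])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                                   # portfolio weights

mu = np.array([0.08, 0.12, 0.10, 0.07])              # hypothetical sector returns
Sigma = np.diag([0.04, 0.09, 0.06, 0.03])            # hypothetical covariance
w = mean_variance_weights(mu, Sigma, target_return=0.10)
print(w, w @ mu, w @ Sigma @ w)                      # weights, return, variance
```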
Contributors: Chitturi, Divakar (Author) / Rodriguez, Armando (Thesis advisor) / Tsakalis, Konstantinos S. (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

Over the past few decades, demand has increased for ground robot applications such as warehouse management, surveillance, mapping, and infrastructure inspection. This steady increase in demand has led to a significant rise in research on nonholonomic differential drive vehicles (DDVs). Although extensive work has been done on developing control laws for trajectory tracking, point stabilization, formation control, and related tasks, there are still problems and critical questions regarding the design, modeling, and control of DDVs that need to be adequately addressed. In this thesis, three different dynamical models, formed by varying the input/output parameters of the DDV model, are considered. These models are analyzed to understand their stability, bandwidth, input-output coupling, and control design properties. Furthermore, a systematic approach is presented to show the impact of design parameters such as mass, inertia, wheel radius, and center of gravity location on the dynamic and inner-loop (speed) control design properties. Subsequently, extensive simulation and hardware trade studies have been conducted to quantify the impact of design parameters and modeling variations on the performance of outer-loop cruise and position control (along a curve). In addition, detailed guidelines are provided for when a multi-input multi-output (MIMO) control strategy is advisable over a single-input single-output (SISO) strategy, and for when a less stable plant is preferable to a more stable one in order to accommodate performance specifications. Additionally, a multi-robot trajectory tracking implementation based on a receding horizon optimization approach is presented. In most of the optimization-based trajectory tracking approaches found in the literature, only the constraints imposed by the kinematic model are incorporated into the problem. This thesis elaborates on the fundamental problem associated with these methods and presents a systematic approach to understand and quantify when kinematic-model-based constraints are sufficient and when dynamic-model-based constraints are necessary to obtain good tracking properties. Detailed instructions are given for designing and building the DDV based on performance specifications, and an open-source platform capable of handling high-speed multi-robot research is developed in C++.
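For reference, the kinematic layer of a DDV on which such tracking constraints are imposed can be sketched as below; this is the standard unicycle/differential-drive kinematic model with assumed geometry, not the thesis's dynamic models.

```python
# Minimal sketch: map left/right wheel speeds to body motion and
# Euler-integrate the pose of a differential drive vehicle.
import numpy as np

R_WHEEL = 0.05    # assumed wheel radius, m
L_AXLE = 0.20     # assumed wheel separation, m

def step_pose(pose, omega_l, omega_r, dt=0.02):
    """Advance pose = [x, y, theta] one step under wheel speeds (rad/s)."""
    v = R_WHEEL * (omega_r + omega_l) / 2.0          # forward speed
    w = R_WHEEL * (omega_r - omega_l) / L_AXLE       # yaw rate
    x, y, th = pose
    return np.array([x + dt * v * np.cos(th),
                     y + dt * v * np.sin(th),
                     th + dt * w])

# Example: constant wheel speeds trace an arc
pose = np.zeros(3)
for _ in range(250):
    pose = step_pose(pose, omega_l=8.0, omega_r=10.0)
print(pose)
```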
Contributors: Manne, Sai Sravan (Author) / Rodriguez, Armando A. (Thesis advisor) / Si, Jennie (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2021