Matching Items (4)
Description
The problem of systematically designing a control system remains a subject of intense research. In this thesis, a powerful control system design environment for Linear Time-Invariant (LTI) Multiple-Input Multiple-Output (MIMO) plants is presented. The environment has been designed to address a broad set of closed-loop metrics and constraints; e.g. weighted H-infinity closed-loop performance subject to closed-loop frequency- and/or time-domain constraints (e.g. peak frequency response, peak overshoot, peak controls, etc.). The general problem considered - a generalized weighted mixed-sensitivity problem subject to constraints - permits designers to directly address and trade off multivariable properties at distinct loop-breaking points; e.g. at the plant outputs and at the plant inputs. As such, the environment is particularly powerful for (poorly conditioned) multivariable plants. The Youla parameterization is used to parameterize the set of all stabilizing LTI proper controllers, which convexifies the general problem being addressed. Several bases are used to turn the resulting infinite-dimensional problem into a finite-dimensional one, for which many efficient convex optimization algorithms exist. A simple cutting-plane algorithm is used within the environment. Academic and physical examples are presented to illustrate the utility of the environment.
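As a rough illustration of the cutting-plane step mentioned in the abstract, the sketch below implements Kelley's classical cutting-plane method for a convex objective over a box. The objective, gradient, and bounds are illustrative stand-ins, not the thesis's mixed-sensitivity formulation (which would optimize over Youla-parameter basis coefficients).

```python
import numpy as np
from scipy.optimize import linprog

def kelley_cutting_plane(f, grad, x0, bounds, tol=1e-6, max_iter=200):
    """Kelley's cutting-plane method for minimizing a convex f over a box.

    Decision variables are (x, t). Each iteration adds the supporting cut
        t >= f(x_k) + grad(x_k) . (x - x_k)
    and re-solves the LP  min t  over all cuts collected so far; t gives a
    lower bound and the best f(x_k) an upper bound on the optimum.
    """
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    cuts_A, cuts_b = [], []
    c = np.r_[np.zeros(n), 1.0]                # LP objective: minimize t
    lp_bounds = list(bounds) + [(None, None)]  # t is free
    best_x, best_f = x, np.inf
    for _ in range(max_iter):
        fx, gx = f(x), np.asarray(grad(x), dtype=float)
        if fx < best_f:
            best_x, best_f = x, fx
        # cut in A @ (x, t) <= b form:  gx.x - t <= gx.x_k - f(x_k)
        cuts_A.append(np.r_[gx, -1.0])
        cuts_b.append(gx @ x - fx)
        res = linprog(c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                      bounds=lp_bounds)
        x, t = res.x[:n], res.x[n]
        if best_f - t < tol:                   # bound gap closed
            break
    return best_x

# Illustrative use on a 1-D convex function (minimum of x^2 + 1 at x = 0)
x_star = kelley_cutting_plane(lambda x: x[0] ** 2 + 1,
                              lambda x: np.array([2 * x[0]]),
                              x0=[3.0], bounds=[(-2.0, 3.0)])
```

In the thesis's setting the cuts would come from subgradients of the weighted closed-loop objective with respect to the basis coefficients; the LP machinery is the same.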
Contributors: Puttannaiah, Karan (Author) / Rodriguez, Armando A (Thesis advisor) / Tsakalis, Konstantinos S (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
One necessary condition for the two-pass risk premium estimator to be consistent and asymptotically normal is that the beta matrix in a proposed linear asset pricing model has full column rank. I first investigate the asymptotic properties of the risk premium estimators and the related t-test and Wald test statistics when the full-rank condition fails. I show that in asset pricing models omitting some true factors, the beta risk of useless factors or of multiple proxy factors for a true factor is priced more often than it should be at the nominal size. In contrast, under the null hypothesis that the risk premiums of the true factors equal zero, the beta risk of the true factors is priced less often than the nominal size. The simulation results are consistent with these theoretical findings. Hence, the selection of factors for a proposed factor model should not be made solely on the basis of their estimated risk premiums. In response to this problem, I propose an alternative estimation of the underlying factor structure. Specifically, I propose to use linear combinations of factors weighted by the eigenvectors of the inner product of the estimated beta matrix. I further propose a new method to estimate the rank of the beta matrix in a factor model. For this method, the idiosyncratic components of asset returns are allowed to be correlated both over different cross-sectional units and over different time periods. The estimator I propose is easy to use because it is computed from the eigenvalues of the inner product of an estimated beta matrix. Simulation results show that the proposed method works well even in small samples. The analysis of US individual stock returns suggests that there are six common risk factors among the thirteen factor candidates used. The analysis of portfolio returns reveals that the estimated number of common factors changes depending on how the portfolios are constructed; the number of risk sources found in portfolio returns is generally smaller than the number found in individual stock returns.
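A minimal sketch of the eigenvalue-based rank idea on simulated data: estimate betas by first-pass time-series OLS, then count the large eigenvalues of the normalized inner product of the estimated beta matrix. The data-generating process and the crude threshold rule here are illustrative assumptions, not the thesis's formal estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, k_true = 500, 100, 3                 # periods, assets, true factors
F = rng.standard_normal((T, k_true))       # true factors
B = rng.standard_normal((N, k_true))       # true factor loadings (betas)
R = F @ B.T + 0.5 * rng.standard_normal((T, N))   # asset returns

# Candidate factor set: the 3 true factors plus 2 useless ones
G = np.column_stack([F, rng.standard_normal((T, 2))])

# First pass: time-series OLS of each asset's returns on the candidates
X = np.column_stack([np.ones(T), G])
Bhat = np.linalg.lstsq(X, R, rcond=None)[0][1:].T     # N x 5 estimated betas

# Eigenvalues of the normalized inner product of the estimated beta matrix;
# the count of "large" eigenvalues estimates the rank (the factor count)
eig = np.sort(np.linalg.eigvalsh(Bhat.T @ Bhat / N))[::-1]
rank = int(np.sum(eig > 0.1 * eig[0]))    # crude cutoff, for illustration
```

Loadings on the two useless candidates are pure estimation noise, so the corresponding eigenvalues are orders of magnitude smaller than the three associated with the true factors.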
Contributors: Wang, Na (Author) / Ahn, Seung C. (Thesis advisor) / Kallberg, Jarl G. (Committee member) / Liu, Crocker H. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A systematic top-down approach to minimize risk and maximize the profits of an investment over a given period of time is proposed. Macroeconomic factors such as Gross Domestic Product (GDP), Consumer Price Index (CPI), Outstanding Consumer Credit, Industrial Production Index, Money Supply (MS), Unemployment Rate, and the Ten-Year Treasury are used to predict/estimate asset (sector ETF) returns. Fundamental ratios of individual stocks are used to predict the stock returns. An a priori known cash-flow sequence is assumed available for investment. Given the importance of sector performance on stock performance, sector-based Exchange Traded Funds (ETFs) for the S&P and Dow Jones are considered and wealth is allocated. Mean-variance optimization with risk and return constraints is used to distribute the wealth in individual sectors among the selected stocks. The results presented should be viewed as providing an outer control/decision loop generating sector target allocations that will ultimately drive an inner control/decision loop focusing on stock selection. Receding horizon control (RHC) ideas are exploited to pose and solve two relevant constrained optimization problems. First, the classic problem of wealth maximization subject to risk constraints (as measured by a metric on the covariance matrices) is considered. Special consideration is given to an optimization problem that attempts to minimize the peak risk over the prediction horizon while trying to track a wealth objective. It is concluded that this approach may be particularly beneficial during downturns, appreciably limiting downside while providing most of the upside during upturns. Investment in stocks during upturns and in sector ETFs during downturns is profitable.
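The mean-variance allocation step can be sketched as a textbook Markowitz problem with two equality constraints (target return and full investment), solved via its KKT system. This is an illustrative single-period step, not the thesis's receding-horizon formulation with peak-risk constraints, and the numbers below are made up.

```python
import numpy as np

def mean_variance_weights(mu, Sigma, r_target):
    """Minimum-variance weights hitting a target return (shorting allowed).

    Solves  min w' Sigma w  s.t.  w' mu = r_target,  w' 1 = 1
    by solving the KKT linear system in (w, lambda, gamma).
    """
    n = len(mu)
    ones = np.ones(n)
    K = np.zeros((n + 2, n + 2))
    K[:n, :n] = 2 * Sigma                  # gradient of the quadratic
    K[:n, n], K[n, :n] = mu, mu            # target-return constraint
    K[:n, n + 1], K[n + 1, :n] = ones, ones  # full-investment constraint
    rhs = np.concatenate([np.zeros(n), [r_target, 1.0]])
    return np.linalg.solve(K, rhs)[:n]

# Made-up expected returns and covariance for three sectors
mu = np.array([0.08, 0.12, 0.05])
Sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.090, 0.004],
                  [0.002, 0.004, 0.010]])
w = mean_variance_weights(mu, Sigma, r_target=0.10)
```

In the RHC setting, a problem of this shape would be re-solved at each step over the prediction horizon as new return estimates arrive.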
Contributors: Chitturi, Divakar (Author) / Rodriguez, Armando (Thesis advisor) / Tsakalis, Konstantinos S (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
For the last 50 years, oscillator modeling in ranging systems has received considerable attention. Many components in a navigation system, such as the master oscillator driving the receiver system as well as the master oscillator in the transmitting system, contribute significantly to timing errors. Algorithms in the navigation processor must be able to predict and compensate for such errors to achieve a specified accuracy. While much work has been done on the fundamentals of these problems, the thinking on them has not progressed. On the hardware end, the designers of local oscillators focus on synthesized frequency and loop noise bandwidth, which does nothing to mitigate or reduce in-band frequency stability degradation. Similarly, there are no systematic methods to accommodate phase and frequency anomalies such as clock jumps. Phase-locked loops are fundamentally control systems, and while control theory has advanced significantly over the last 30 years, the design of timekeeping sources has not advanced beyond classical control. On the software end, single- or two-state oscillator models are typically embedded in a Kalman filter to alleviate time errors between the transmitter and receiver clocks. Such models are appropriate for short-term time accuracy but insufficient for long-term time accuracy. Additionally, flicker frequency noise may be present in oscillators, and it presents mathematical modeling complications. This work proposes novel H∞ control methods to address the shortcomings in the standard design of timekeeping phase-locked loops. Such methods allow the designer to address frequency stability degradation as well as high phase/frequency dynamics. Additionally, finite-dimensional approximants of flicker frequency noise that are more representative of the truth system than the traditional Gauss-Markov approach are derived. Lastly, to maintain timing accuracy in a wide variety of operating environments, novel banks of adaptive extended Kalman filters are used to address both stochastic and dynamic uncertainty.
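One common finite-dimensional approximant of flicker (1/f) noise sums first-order Gauss-Markov (Lorentzian) spectra with log-spaced corner frequencies and drive noise proportional to each corner frequency; in-band, the sum tracks a 1/f slope. The sketch below is a standard illustration of that idea, not the specific approximants derived in the thesis.

```python
import numpy as np

def flicker_psd_approx(f, n_states=9, f_lo=1e-3, f_hi=1e1):
    """Approximate a 1/f (flicker) PSD over [f_lo, f_hi] Hz.

    Sums n_states Lorentzian spectra  wc_i / (w^2 + wc_i^2)  with corner
    frequencies wc_i log-spaced across the band; weighting each term by
    its own corner frequency makes the sum roll off as ~1/f in-band,
    rather than the ~1/f^2 of a single Gauss-Markov state.
    """
    w = 2 * np.pi * np.atleast_1d(f)
    wc = 2 * np.pi * np.logspace(np.log10(f_lo), np.log10(f_hi), n_states)
    return (wc[:, None] / (w ** 2 + wc[:, None] ** 2)).sum(axis=0)

# In-band check: a decade apart in frequency, a 1/f PSD changes by ~10x
S = flicker_psd_approx(np.array([0.1, 1.0]))
```

Each Lorentzian term corresponds to one first-order state, so the whole approximant drops directly into a state-space model (and hence a Kalman filter) as a small bank of shaping states.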
Contributors: Echols, Justin A (Author) / Bliss, Daniel W (Thesis advisor) / Tsakalis, Konstantinos S (Committee member) / Berman, Spring (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2020