Description
Multicore processors have proliferated in nearly all forms of computing, from servers and desktops to smartphones. The primary reason for this widespread adoption is their ability to overcome the power wall by providing higher performance at lower power consumption. With multicores, the need for dynamic energy management (DEM) is far greater than for single-core processors: DEM is no longer just a mechanism to keep a processor under specified temperature limits, but a set of techniques that manage various processor controls, such as dynamic voltage and frequency scaling (DVFS), task migration, and fan speed, to achieve a stated objective. The objectives span a wide range, from maximizing throughput, minimizing power consumption, and reducing peak temperature to maximizing energy efficiency and processor reliability, subject to constraints on temperature, power, timing, and reliability. Thus DEM can be very complex and challenging. Since many DEM techniques often operate together on a single processor, there is a need to unify them, and this dissertation addresses that need. A framework for DEM is proposed that provides a unifying processor model, including power, thermal, timing, and reliability models, and supports various DEM control mechanisms, many different objective functions, and equally diverse constraint specifications. Using the framework, a range of novel solutions is derived for instances of DEM problems, including maximizing processor performance or energy efficiency and minimizing power consumption or peak temperature, under constraints of maximum temperature, memory reliability, and task deadlines. Finally, a robust closed-loop controller that implements the above solutions on a real processor platform with very low operational overhead is proposed.
Along with the controller design, a model identification methodology for obtaining the power and thermal models required by the controller is also discussed. The controller is architecture independent and hence easily portable across many platforms. It has been successfully deployed on an Intel Sandy Bridge processor, where its use increased the energy efficiency of the processor by over 30%.
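
To illustrate why DVFS is such an effective DEM control knob, the sketch below uses the standard textbook CMOS dynamic-power model, in which power scales with V²f, so lowering voltage and frequency together gives a roughly cubic power reduction. The constants are hypothetical illustrations, not values from the dissertation's models.

```python
# Textbook CMOS dynamic-power model often used in DVFS analysis:
# P_dyn = a * C * V^2 * f  (activity factor a, switched capacitance C,
# supply voltage V, clock frequency f). All values below are hypothetical.

def dynamic_power(a, C, V, f):
    """Dynamic switching power in watts (textbook CMOS model)."""
    return a * C * V**2 * f

# Scaling both V and f down by 20% cuts dynamic power roughly in half
# (0.8^3 = 0.512), which is why DVFS is a primary DEM control mechanism.
p_high = dynamic_power(a=0.5, C=1e-9, V=1.0, f=3.0e9)
p_low = dynamic_power(a=0.5, C=1e-9, V=0.8, f=2.4e9)
print(p_high, p_low, p_low / p_high)
```
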
Contributors: Hanumaiah, Vinay (Author) / Vrudhula, Sarma (Thesis advisor) / Chatha, Karamvir (Committee member) / Chakrabarti, Chaitali (Committee member) / Rodriguez, Armando (Committee member) / Askin, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The purpose of this dissertation is to develop a design technique for fractional PID controllers that achieves a closed-loop sensitivity bandwidth approximately equal to a desired bandwidth using frequency loop-shaping techniques. The dissertation analyzes the effect of the order of a fractional integrator, used as a target on loop shaping, on stability and performance robustness. A comparison between classical PID controllers and fractional PID controllers is presented, and case studies where fractional PID controllers have an advantage over classical PID controllers are discussed. A frequency-domain loop-shaping algorithm is developed, extending past results from classical PIDs that have been successful in tuning controllers for a variety of practical systems.
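
As context for the fractional integrator's role, a sketch of its frequency response: 1/s^α rolls off at -20α dB/decade and contributes a constant phase of -90α degrees, which is the extra degree of freedom fractional designs exploit over the integer-order integrator of a classical PID. The order α = 0.5 below is an arbitrary example, not a value from the dissertation.

```python
import numpy as np

# Frequency response of a fractional integrator 1/s^alpha (hypothetical
# alpha; the dissertation studies how this order affects loop shaping).
alpha = 0.5
w = np.logspace(-2, 2, 400)                       # frequencies, rad/s
mag_db = 20 * np.log10(np.abs(1.0 / (1j * w) ** alpha))

# Magnitude slope in dB per decade: exactly -20*alpha for 1/s^alpha,
# versus the fixed -20 dB/decade of an integer-order integrator.
slope = (mag_db[-1] - mag_db[0]) / (np.log10(w[-1]) - np.log10(w[0]))
print(round(slope, 1))   # -10.0 dB/decade for alpha = 0.5
```
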
Contributors: Saleh, Khalid M (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
VTOL drones were designed and built from the beginning of the 20th century for military applications because of their easy take-off and landing operations. Many companies, like Lockheed, Convair, NASA, and Bell Labs, built their own aircraft, but only a few of them came to market. Flight automation usually starts from first-principles modeling, which helps in the controller design and dynamic analysis of the system.

In this project, a VTOL drone with a shape similar to the Convair XFY-1 is studied; the primary focus is stabilizing and controlling the flight path of the drone in its hover and horizontal flying modes. The model of the plane is obtained using first-principles modeling, and controllers are designed to stabilize the yaw, pitch, and roll rotational motions.

The plane is modeled for its yaw, pitch, and roll rotational motions. Subsequently, the rotational dynamics of the system are linearized about the hover mode, the hover-to-horizontal transition, the horizontal mode, and the horizontal-to-hover transition for ease of applying linear control design techniques. The controllers are designed using an H∞ loop-shaping procedure, and the stability of the closed-loop system is verified on the actual nonlinear model for each of these flight regimes. An experiment is conducted to study the dynamics of the motor, recording the PWM input to the electronic speed controller as the input and the rotational speed of the motor as the output. A theoretical study is also done on the thrust generated by the propellers for lift, the slipstream velocity, and the torques acting on the system for various thrust profiles.
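
The linearization step described above can be sketched numerically: compute the Jacobian of the nonlinear dynamics about an equilibrium using central differences. The single-axis pendulum-like model below is a hypothetical stand-in for the drone's actual rotational dynamics, not the model from this project.

```python
import numpy as np

# Numerically linearize dx/dt = f(x, u) about an equilibrium (x0, u0)
# by central-difference approximation of the state Jacobian A = df/dx.
def f(x, u):
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) + u])   # hypothetical model

def state_jacobian(fun, x0, u0, eps=1e-6):
    n = len(x0)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (fun(x0 + dx, u0) - fun(x0 - dx, u0)) / (2 * eps)
    return A

A = state_jacobian(f, np.array([0.0, 0.0]), 0.0)   # linearize about "hover"
print(A)   # close to [[0, 1], [-9.81, 0]]
```
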
Contributors: RAGHURAMAN, VIGNESH (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Yong, Sze Zheng (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Vertical take-off and landing (VTOL) drones started to emerge at the beginning of this century and find applications in the vast areas of mapping, rescue, logistics, etc. Usually a VTOL drone control system design starts from a first-principles model. Most VTOL drones are quad-rotors, a shape that is convenient for dynamic analysis.

In this project, a VTOL drone with a shape similar to the Convair XFY-1 is studied. The primary focus is developing and examining an alternative method to identify a system model from input and output data, with which it is possible to estimate system parameters and compute model uncertainties on discontinuous data sets. The models are verified by designing controllers that stabilize the yaw, pitch, and roll angles of the VTOL drone in the hovering state.

This project comprises three stages: an open-loop identification to identify the yaw and pitch dynamics, an intermediate closed-loop identification to identify the roll dynamics, and a closed-loop identification to refine the yaw and pitch identification. In the open- and closed-loop identifications, the reference signals sent to the servos were recorded as the system inputs, and the angles and angular velocities in the yaw and pitch directions read by the inertial measurement unit were recorded as the system outputs. In the intermediate closed-loop identification, the difference between the reference signals sent to the motors on the contra-rotators was recorded as the input, and the roll angular velocity was recorded as the output. Next, regressors were formed using a coprime factor structure, and the parameters of the system were estimated using the least-squares method. Multiplicative and divisive uncertainties were calculated from the data set and used to guide PID loop-shaping controller design.
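
The regressor-plus-least-squares step can be illustrated on a simpler structure. The sketch below uses a first-order ARX model rather than the project's coprime-factor structure, but the batch least-squares solve is the same idea; the plant parameters and data are hypothetical.

```python
import numpy as np

# Batch least-squares identification sketch on a noise-free first-order
# ARX model: y[k+1] = a*y[k] + b*u[k]. The true parameters are hypothetical.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(500)          # excitation input
y = np.zeros(501)
for k in range(500):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Stack regressors Phi = [y[k], u[k]] and solve Phi @ theta ~ y[k+1].
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # recovers [0.9, 0.5] in this noise-free case
```
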
Contributors: Liu, Yiqiu (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Thesis advisor) / Rivera, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Buck converters are electronic devices that convert a voltage from one level to a lower one and are present in many everyday applications. However, due to factors like aging, degradation, or failures, these devices require a system identification process to track and diagnose their parameters. The identification should be performed on-line so as not to affect the normal operation of the device. Identifying the parameters of the system is essential to design and tune an adaptive proportional-integral-derivative (PID) controller.

Three techniques were used to design the PID controller. Phase and gain margin design remains one of the easiest methods. Pole-zero cancellation is another technique, based on pole placement. Although these controllers are easy to design, they did not provide the best response compared to the frequency loop shaping (FLS) technique. Since FLS showed better frequency and time responses than the other two controllers, it was selected to perform the adaptation of the system.

An on-line system identification process was performed for the buck converter using indirect adaptation and the least-squares algorithm. The estimation error and the parameter error were computed to determine the rate of convergence. The indirect adaptation required about 2000 points to converge to the true parameters before the controller was designed. These results were compared to the adaptation executed using a robust stability condition (RSC) and a switching controller. Two scenarios were studied, each consisting of five plants that defined the percentage of deterioration of the capacitor and inductor within the buck converter. In the first scenario, the switching logic did not always select the optimal controller, because the frequency responses of the different plants were not significantly different. The second scenario, however, consisted of plants with more noticeably different frequency responses, and the switching logic selected the optimal controller every time within about 500 points. Additionally, a disturbance was introduced at the plant input to observe its effect on the switching controller; for reasonably low disturbances, no change was detected in the proper selection of controllers.
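
A minimal sketch of on-line identification via recursive least squares (RLS), in the spirit of the indirect adaptation described above: the estimate is updated sample by sample rather than in a batch. The first-order plant and its parameters are hypothetical, not the buck converter's actual model.

```python
import numpy as np

# Recursive least-squares (RLS) identification of a hypothetical
# first-order plant y[k+1] = a*y[k] + b*u[k], updated one sample at a time.
rng = np.random.default_rng(1)
a_true, b_true = 0.7, 0.3
theta = np.zeros(2)                 # running estimate of [a, b]
P = 1000.0 * np.eye(2)              # covariance (large = little prior trust)
y_prev = 0.0
for k in range(2000):
    u = rng.standard_normal()
    y = a_true * y_prev + b_true * u
    phi = np.array([y_prev, u])             # regressor
    K = P @ phi / (1.0 + phi @ P @ phi)     # RLS gain
    theta = theta + K * (y - phi @ theta)   # correct estimate by prediction error
    P = P - np.outer(K, phi @ P)            # shrink covariance
    y_prev = y
print(theta)   # converges toward [0.7, 0.3]
```
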
Contributors: Serrano Rodriguez, Victoria Melissa (Author) / Tsakalis, Konstantinos (Thesis advisor) / Bakkaloglu, Bertan (Thesis advisor) / Rodriguez, Armando (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
From time immemorial, epilepsy has persisted as one of the greatest impediments to life for those stricken by it. As the fourth most common neurological disorder, epilepsy causes paroxysmal electrical discharges in the brain that manifest as seizures. Seizures debilitate patients on a physical and psychological level. Although not lethal by themselves, they can bring about total disruption of consciousness, which can, in hazardous conditions, lead to fatality. Roughly 1% of the world's population suffers from epilepsy, and another 30 to 50 new cases per 100,000 people are added annually. Controlling seizures in epileptic patients has therefore become a great medical and, in recent years, engineering challenge.

In this study, the conditions of human seizures are recreated in an animal model of temporal lobe epilepsy. The rodents used in the study are chemically induced to become chronically epileptic. Their electroencephalogram (EEG) data are then recorded and analyzed to detect and predict seizures, with the ultimate goal being the control and complete suppression of seizures.

Two methods, the maximum Lyapunov exponent and generalized partial directed coherence (GPDC), are applied to the EEG data to extract meaningful information. Their effectiveness has been reported in the literature for seizure prediction and seizure focus localization. This study integrates these measures, through some modifications, to robustly detect seizures, to separately find precursors to them, and consequently to provide stimulation to the epileptic brain of rats in order to suppress seizures. Additionally, open-loop stimulation of various pairs of sites with biphasic currents over differing lengths of time was used to create control efficacy maps. While GPDC tells us about the possible location of the focus, control efficacy maps tell us how effective stimulating a certain pair of sites will be.
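
To illustrate the maximum Lyapunov exponent measure, the sketch below estimates it for the logistic map, where the answer is known analytically (ln 2 at r = 4), by averaging log|f'(x)| along a trajectory. This is only a stand-in: the actual study estimates the exponent from measured EEG data, not from a known map.

```python
import numpy as np

# Maximum Lyapunov exponent of the logistic map f(x) = r*x*(1-x),
# estimated as the trajectory average of log|f'(x)|. For r = 4 the
# exact value is ln 2, so the estimate can be checked.
r = 4.0
f_prime = lambda x: r * (1.0 - 2.0 * x)   # derivative of the map

x = 0.123
n = 100_000
lyap = 0.0
for _ in range(n):
    x = r * x * (1.0 - x)                 # iterate the map
    lyap += np.log(abs(f_prime(x)))       # accumulate local stretching rate
lyap /= n
print(lyap)   # close to ln 2 ~ 0.693, confirming chaotic dynamics
```
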

The results from computations performed on the data are presented, and the feasibility of the control problem is discussed. The results show a new, reliable means of seizure detection even in the presence of artifacts in the data. The seizure precursors provide a means of prediction on the order of tens of minutes prior to seizures. Closed-loop stimulation experiments on the epileptic animals, based on these precursors and the control efficacy maps, show a maximum reduction of seizure frequency of 24.26% in one animal and a reduction in seizure length of 51.77% in another. Thus, this study shows that the implemented methods can ameliorate seizures in an epileptic patient. It is expected that the new knowledge and experimental techniques will guide future research in an effort to ultimately eliminate seizures in epileptic patients.
Contributors: Shafique, Md Ashfaque Bin (Author) / Tsakalis, Konstantinos (Thesis advisor) / Rodriguez, Armando (Committee member) / Muthuswamy, Jitendran (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A systematic top-down approach to minimize risk and maximize the profits of an investment over a given period of time is proposed. Macroeconomic factors such as Gross Domestic Product (GDP), the Consumer Price Index (CPI), Outstanding Consumer Credit, the Industrial Production Index, Money Supply (MS), the Unemployment Rate, and the Ten-Year Treasury are used to predict/estimate asset (sector ETF) returns. Fundamental ratios of individual stocks are used to predict the stock returns. An a priori known cash-flow sequence is assumed available for investment. Given the importance of sector performance on stock performance, sector-based exchange-traded funds (ETFs) for the S&P and Dow Jones are considered and wealth is allocated among them. Mean-variance optimization with risk and return constraints is used to distribute the wealth within individual sectors among the selected stocks. The results presented should be viewed as providing an outer control/decision loop generating sector target allocations that ultimately drive an inner control/decision loop focusing on stock selection. Receding horizon control (RHC) ideas are exploited to pose and solve two relevant constrained optimization problems. First, the classic problem of wealth maximization subject to risk constraints (as measured by a metric on the covariance matrices) is considered. Special consideration is given to an optimization problem that attempts to minimize the peak risk over the prediction horizon while trying to track a wealth objective. It is concluded that this approach may be particularly beneficial during downturns, appreciably limiting the downside while providing most of the upside during upturns. Investment in stocks during upturns and in sector ETFs during downturns is profitable.
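
The mean-variance allocation step can be sketched in closed form: with only the full-investment constraint, the minimum-variance weights are w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The covariance matrix below is hypothetical, not estimated from the ETF data used in the dissertation.

```python
import numpy as np

# Closed-form minimum-variance portfolio: minimize w' Sigma w subject to
# sum(w) = 1. The 3x3 covariance matrix here is a hypothetical example.
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.160]])
ones = np.ones(3)

w = np.linalg.solve(Sigma, ones)   # Sigma^-1 @ 1
w /= w.sum()                       # normalize so weights sum to 1

variance = w @ Sigma @ w           # cannot exceed any single asset's variance,
print(w, variance)                 # since holding one asset is also feasible
```
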
Contributors: Chitturi, Divakar (Author) / Rodriguez, Armando (Thesis advisor) / Tsakalis, Konstantinos S (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
This work is concerned with how best to reconstruct images from limited-angle tomographic measurements. An introduction to tomography and to limited-angle tomography will be provided, and a brief overview of the many fields to which this work may contribute will be given.

The traditional tomographic image reconstruction approach involves Fourier domain representations. The classic Filtered Back Projection algorithm will be discussed and used for comparison throughout the work. Bayesian statistics and information entropy considerations will be described. The Maximum Entropy reconstruction method will be derived and its performance in limited angular measurement scenarios will be examined.

Many new approaches become available once the reconstruction problem is placed in the algebraic form Ax = b, in which the measurement geometry and instrument response are defined by the matrix A, the measured object by the column vector x, and the resulting measurements by b. In principle, one simply inverts A. However, for the limited-angle measurement scenarios of interest in this work, the inversion is highly underconstrained and has an infinite number of possible solutions x consistent with the measurements b in a high-dimensional space.

The algebraic formulation leads to the need for high-performing regularization approaches, which add constraints, beyond the measurement matrix A, based on prior information about what is being measured, with the goal of selecting the best image from this vast uncertainty space. It is established within this work that developing satisfactory regularization techniques is all but impossible except for the simplest cases. There is a need to capture the "character" of the objects being measured.
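
One of the simplest regularizers of the kind discussed can be shown as a sketch: Tikhonov regularization selects, from the infinitely many solutions of an underdetermined Ax = b, the one minimizing ||Ax - b||² + λ||x||². The dimensions and data below are arbitrary illustrations, not the tomographic geometry of this work.

```python
import numpy as np

# Tikhonov-regularized solution of an underdetermined system Ax = b:
#   min_x ||A x - b||^2 + lam * ||x||^2
# Solved via the normal equations (A'A + lam*I) x = A'b.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 20))        # 5 measurements, 20 unknowns
x_true = np.zeros(20)
x_true[3] = 1.0                         # a simple "object" to measure
b = A @ x_true

lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)

# The residual is tiny, but x_hat is biased toward the minimum-norm
# solution rather than x_true: the regularizer, not the data, broke the tie.
print(np.linalg.norm(A @ x_hat - b))
```
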

The novel result of this effort will be in developing a reconstruction approach that will match whatever reconstruction approach has proven best for the types of objects being measured given full angular coverage. However, when confronted with limited angle tomographic situations or early in a series of measurements, the approach will rely on a prior understanding of the "character" of the objects measured. This understanding will be learned by a parallel Deep Neural Network from examples.
Contributors: Dallmann, Nicholas A. (Author) / Tsakalis, Konstantinos (Thesis advisor) / Hardgrove, Craig (Committee member) / Rodriguez, Armando (Committee member) / Si, Jennie (Committee member) / Arizona State University (Publisher)
Created: 2020