Matching Items (347)
Description
Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and a limited operating range. In this document, a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open- and closed-loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, a gain scheduled Glover McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multivariable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant-gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.
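The droop control baseline that the Glover McFarlane designs are benchmarked against follows the conventional P-f / Q-V droop laws. A minimal sketch of that baseline is given below; it is not the dissertation's controller, and the gains, per-unit quantities, and function name are illustrative only.

    def droop_setpoints(p_pu, q_pu, f_nom=60.0, v_nom=1.0, m_p=0.5, n_q=0.05):
        """Conventional droop: frequency sags with real power, voltage with reactive power."""
        f_ref = f_nom - m_p * p_pu   # Hz; m_p in Hz per per-unit real power (illustrative)
        v_ref = v_nom - n_q * q_pu   # per-unit volts; n_q is an assumed droop gain
        return f_ref, v_ref

    # Example: inverter delivering 0.8 pu real and 0.2 pu reactive power
    print(droop_setpoints(0.8, 0.2))   # -> (59.6, 0.99)

A constant-gain droop law like this is what the gain scheduled MIMO controllers in the document are shown to outperform over a wider operating range.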
Contributors: Steenis, Joel (Author) / Ayyanar, Raja (Thesis advisor) / Mittelmann, Hans (Committee member) / Tsakalis, Konstantinos (Committee member) / Tylavsky, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Fluxgate sensors are magnetic field sensors that can measure DC and low-frequency AC magnetic fields. They can measure much lower magnetic fields than other magnetic sensors such as Hall effect sensors and magnetoresistive sensors, and they also have high linearity, high sensitivity, and low noise. The major application of fluxgate sensors is in magnetometers for the measurement of Earth's magnetic field. Magnetometers are used in navigation systems and electronic compasses. Fluxgate sensors can also be used to measure high DC currents. Integrated micro-fluxgate sensors have been developed in recent years. These sensors have much lower power consumption and area compared to their PCB counterparts. The output voltage of micro-fluxgate sensors is very low, which makes the analog front end more complex and increases the power consumption of the system. In this thesis, a new analog front-end circuit for micro-fluxgate sensors is developed. This analog front-end circuit uses a charge-pump-based excitation circuit and a phase-delay-based read-out chain, and these two features reduce the power consumption of the analog front end. The output is digital and is immune to amplitude noise at the output of the sensor, and the digital output is produced without using an ADC. A SPICE model of the micro-fluxgate sensor is used to verify the operation of the analog front end, and the simulation results show very good linearity.
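A phase-delay read-out can be pictured as timing how far a zero crossing of the sensor output shifts with the applied field and digitizing that delay with a counter instead of an ADC. The sketch below is only a toy numerical illustration of that idea, not the thesis circuit; the linear delay-versus-field response, the sensitivity, and the clock rate are assumed values.

    # Illustrative (assumed) parameters, not taken from the thesis
    K_DELAY = 2e-9      # assumed phase-delay sensitivity: seconds of delay per microtesla
    F_CLK = 200e6       # reference clock used to digitize the delay, Hz

    def readout_counts(field_ut):
        """Convert an applied field (microtesla) into a digital word by counting
        clock cycles over the field-induced zero-crossing delay."""
        delay = K_DELAY * field_ut          # assumed linear delay-vs-field response
        return int(round(delay * F_CLK))    # counter value is the digital output

    for b in (0.0, 10.0, 50.0):
        print(b, "uT ->", readout_counts(b), "counts")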
Contributors: Pappu, Karthik (Author) / Bakkaloglu, Bertan (Thesis advisor) / Christen, Jennifer Blain (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Autonomous vehicle control systems utilize real-time kinematic Global Navigation Satellite Systems (GNSS) receivers to provide a position within two centimeters of truth. GNSS receivers utilize satellite signal time-of-arrival estimates to solve for position, and multipath corrupts the time-of-arrival estimates with a time-varying bias. Time-of-arrival estimates are based upon accurate direct sequence spread spectrum (DSSS) code and carrier phase tracking. Current multipath-mitigating GNSS solutions include fixed radiation pattern antennas and windowed delay-lock loop code phase discriminators. A new multipath-mitigating code tracking algorithm is introduced that utilizes a non-symmetric correlation kernel to reject multipath. Independent parameters provide a means to trade off code tracking discriminant gain against multipath mitigation performance. The algorithm's performance is characterized in terms of multipath phase error bias, phase error estimation variance, tracking range, tracking ambiguity, and implementation complexity. The algorithm is suitable for modernized GNSS signals including Binary Phase Shift Keyed (BPSK) and a variety of Binary Offset Keyed (BOC) signals. The algorithm compensates for unbalanced code sequences to ensure that a code tracking bias does not result from the use of asymmetric correlation kernels. The algorithm does not require explicit knowledge of the propagation channel model. Design recommendations for selecting the algorithm parameters to mitigate precorrelation filter distortion are also provided.
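For context, the conventional baseline that such a non-symmetric kernel generalizes is the symmetric early-minus-late delay-lock loop discriminator. The sketch below shows only that standard baseline on a stand-in random code; the dissertation's kernel, real GNSS code sequences, and signal conditions are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    chips = rng.choice([-1.0, 1.0], size=1023)      # stand-in PRN code (not a real GNSS code)
    spc = 8                                         # samples per chip

    def replica(delay_chips):
        """Sampled local code replica shifted by a (possibly fractional) chip delay."""
        n = chips.size * spc
        idx = (np.arange(n) / spc - delay_chips) % chips.size
        return chips[idx.astype(int)]

    # Received signal: the code delayed by 0.2 chips plus a little noise
    received = replica(0.2) + 0.1 * rng.standard_normal(chips.size * spc)

    def eml_discriminant(est_delay, spacing=0.5):
        """Classic symmetric early-minus-late discriminator; the dissertation replaces
        this symmetric kernel with a non-symmetric one to reject multipath."""
        early = np.dot(received, replica(est_delay - spacing / 2))
        late = np.dot(received, replica(est_delay + spacing / 2))
        return (early - late) / (early + late)

    # Negative output: the 0-chip estimate leads the true 0.2-chip delay, so the
    # tracking loop should advance the estimated delay.
    print(eml_discriminant(0.0))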
Contributors: Miller, Steven (Author) / Spanias, Andreas (Thesis advisor) / Tepedelenlioğlu, Cihan (Committee member) / Tsakalis, Konstantinos (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
One of the main challenges in planetary robotics is to traverse the shortest path through a set of waypoints. The shortest distance between any two waypoints is a direct linear traversal. Often, there are physical restrictions that prevent a rover from traversing straight to a waypoint. Thus, knowledge of the terrain is needed prior to traversal. The Digital Terrain Model (DTM) provides information about the terrain along with waypoints for the rover to traverse. However, traversing a set of waypoints linearly is burdensome, as the rover would constantly need to modify its orientation as it successively approaches waypoints. Although there are various solutions to this problem, this research proposes smooth traversal of the waypoints using splines as a quick and easy implementation. In addition, a rover was used to compare the smoothness of the linear traversal with that of the spline interpolations. The data collected illustrate that the spline traversals had a lower rate of change in velocity over time, indicating that the rover performed more smoothly than with linear paths.
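A spline traversal of waypoints can be sketched directly with a cubic spline parameterized by path length; the waypoints below are made up for illustration, and the smoothness check simply looks at how gradually the heading changes along the interpolated path, in contrast to the abrupt re-orientation a piecewise-linear path requires at each waypoint.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Made-up waypoints (x, y in meters), standing in for waypoints from a DTM
    waypoints = np.array([[0.0, 0.0], [4.0, 1.5], [7.0, 5.0], [10.0, 4.0], [14.0, 7.0]])

    # Parameterize the path by cumulative straight-line distance between waypoints
    seg = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    d = np.concatenate(([0.0], np.cumsum(seg)))
    path = CubicSpline(d, waypoints)        # C2-continuous path through every waypoint

    s = np.linspace(d[0], d[-1], 500)
    dxy = path(s, 1)                        # first derivative (tangent) along the path
    heading = np.degrees(np.arctan2(dxy[:, 1], dxy[:, 0]))

    # The spline heading changes gradually; a piecewise-linear traversal would need an
    # abrupt re-orientation at every interior waypoint.
    print("max heading change per step (deg):", np.abs(np.diff(heading)).max())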
Contributors: Kamasamudram, Anurag (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Characterization of standard cells is one of the crucial steps in IC design. Scaling of CMOS technology has led to timing uncertainties such as cross-coupling noise due to interconnect parasitics, skew variation due to voltage jitter, and the proximity effect of multiple inputs switching (MIS). Due to increased operating frequency and process variation, the probability of MIS occurrence and setup/hold failure within a clock cycle is high. The delay variation due to temporal proximity of MIS is significant for multiple-input gates in the standard cell library. The shortest paths are affected by MIS due to the lack of an averaging effect. Thus, sensitive designs such as SRAM row and column decoder circuits have a high probability of MIS impact. Traditional static timing analysis (STA) assumes a single input switching (SIS) scenario, which is not adequate to capture gate delay accurately, as the delay variation due to temporal proximity of MIS is ~15%-45%. On the other hand, considering all possible MIS scenarios during characterization is computationally intensive and produces a huge data volume. Various modeling techniques have been developed to characterize the MIS effect. Some techniques require coefficient extraction through multiple SPICE simulations and do not discuss speed-up approaches, or they apply models with complicated algorithms to account for the MIS effect. The STA flow accounts for process variation through an uncertainty parameter to improve product yield. Some MIS delay variability models account for MIS variation through a table look-up approach, resulting in a huge data volume, or do not consider propagation of RAT in the design flow. Thus, there is a need for a methodology to model the MIS effect with fewer computational resources and to integrate this effect into the design flow without trading off accuracy. A finite-point-based analytical model of the MIS effect is proposed for multiple-input logic gates, and a similar approach is extended to the setup/hold characterization of sequential elements. Integration of MIS variation into the design flow is explored. The proposed methodology is validated using benchmark circuits at the 45nm technology node under process variation. Experimental results show a significant reduction in runtime and data volume with ~10% error compared to SPICE simulation.
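The finite-point idea can be illustrated as storing gate delay at only a handful of characterized input-skew points and interpolating between them, instead of keeping a dense MIS table. The sketch below is not the proposed analytical model; the skew points and delay values are invented purely for illustration.

    import numpy as np

    # Hypothetical characterization of a 2-input gate: delay versus input arrival skew
    # (skew = t_arrival(B) - t_arrival(A), in ps). Only a few characterized points are
    # stored instead of a dense MIS table; all values here are invented.
    skew_points = np.array([-40.0, -10.0, 0.0, 10.0, 40.0])   # ps
    delay_points = np.array([22.0, 26.5, 31.0, 27.0, 23.0])   # ps

    def mis_delay(skew_ps):
        """Estimate the MIS-aware gate delay by interpolating between the finite
        characterized points; far-apart arrivals reduce to single input switching."""
        return float(np.interp(skew_ps, skew_points, delay_points))

    print(mis_delay(5.0))     # skew between stored points
    print(mis_delay(100.0))   # effectively single input switching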
Contributors: Subramaniam, Anupama R (Author) / Cao, Yu (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Roveda, Janet (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Effective modeling of high dimensional data is crucial in information processing and machine learning. Classical subspace methods have been very effective in such applications. However, over the past few decades, there has been considerable research towards the development of new modeling paradigms that go beyond subspace methods. This dissertation focuses on the study of sparse models and their interplay with modern machine learning techniques such as manifold, ensemble and graph-based methods, along with their applications in image analysis and recovery. By considering graph relations between data samples while learning sparse models, graph-embedded codes can be obtained for use in unsupervised, supervised and semi-supervised problems. Using experiments on standard datasets, it is demonstrated that the codes obtained from the proposed methods outperform several baseline algorithms. In order to facilitate sparse learning with large scale data, the paradigm of ensemble sparse coding is proposed, and different strategies for constructing weak base models are developed. Experiments with image recovery and clustering demonstrate that these ensemble models perform better when compared to conventional sparse coding frameworks. When examples from the data manifold are available, manifold constraints can be incorporated with sparse models and two approaches are proposed to combine sparse coding with manifold projection. The improved performance of the proposed techniques in comparison to sparse coding approaches is demonstrated using several image recovery experiments. In addition to these approaches, it might be required in some applications to combine multiple sparse models with different regularizations. In particular, combining an unconstrained sparse model with non-negative sparse coding is important in image analysis, and it poses several algorithmic and theoretical challenges. A convex and an efficient greedy algorithm for recovering combined representations are proposed. Theoretical guarantees on sparsity thresholds for exact recovery using these algorithms are derived and recovery performance is also demonstrated using simulations on synthetic data. Finally, the problem of non-linear compressive sensing, where the measurement process is carried out in feature space obtained using non-linear transformations, is considered. An optimized non-linear measurement system is proposed, and improvements in recovery performance are demonstrated in comparison to using random measurements as well as optimized linear measurements.
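The sparse coding step common to all of these models amounts to solving an l1-regularized least-squares problem. A minimal iterative shrinkage-thresholding (ISTA) sketch of that generic step is shown below; it is not the dissertation's graph-embedded, ensemble, or manifold-constrained variant, and the dictionary and signal are synthetic.

    import numpy as np

    def ista(D, x, lam=0.1, n_iter=200):
        """Iterative shrinkage-thresholding for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            z = a - grad / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return a

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
    a_true = np.zeros(256)
    a_true[[3, 50, 200]] = [1.0, -0.7, 0.5]
    x = D @ a_true + 0.01 * rng.standard_normal(64)
    print(np.flatnonzero(np.abs(ista(D, x)) > 0.1))   # recovers the sparse support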
Contributors: Natesan Ramamurthy, Karthikeyan (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Karam, Lina (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed, and it is shown to outperform conventional sparse models in compressed recovery, and image denoising. Theoretical aspects of learning such as algorithmic stability and generalization are considered, and ensemble learning is incorporated for effective large scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relation between the image descriptors, and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed, and successfully applied for automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to build dictionaries for local sparse coding of image descriptors are presented, and applied to object recognition and image retrieval.
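Dictionary learning in its simplest form alternates a sparse coding step with a least-squares dictionary update. The sketch below is a generic method-of-optimal-directions style loop with a small orthogonal matching pursuit coder, not the multilevel or kernel-space algorithms developed in the dissertation; the training data are random stand-ins.

    import numpy as np

    def omp(D, x, k):
        """Greedy orthogonal matching pursuit: select k atoms to represent x."""
        idx, r = [], x.copy()
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(D.T @ r))))
            coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
            r = x - D[:, idx] @ coef
        a = np.zeros(D.shape[1])
        a[idx] = coef
        return a

    def learn_dictionary(X, n_atoms=32, k=3, n_iter=15, seed=0):
        """Alternate sparse coding and a least-squares (MOD-style) dictionary update."""
        rng = np.random.default_rng(seed)
        D = rng.standard_normal((X.shape[0], n_atoms))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(n_iter):
            A = np.column_stack([omp(D, x, k) for x in X.T])   # sparse codes
            D = X @ np.linalg.pinv(A)                          # dictionary update
            D /= np.linalg.norm(D, axis=0) + 1e-12             # renormalize atoms
        return D

    X = np.random.default_rng(1).standard_normal((16, 400))    # toy training "patches"
    print(learn_dictionary(X).shape)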
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
One dimensional (1D) and quasi-one dimensional quantum wires have been a subject of both theoretical and experimental interest since the 1990s and before. Phenomena such as the "0.7 structure" in the conductance leave many open questions. In this dissertation, I study the properties and the internal electron states of semiconductor quantum wires with the path integral Monte Carlo (PIMC) method. PIMC is a tool for simulating many-body quantum systems at finite temperature. Its ability to calculate thermodynamic properties and various correlation functions makes it an ideal tool for bridging experiments with theories. A general study of the features interpreted by the Luttinger liquid theory and observed in experiments is first presented, showing the need for new PIMC calculations in this field. I calculate the DC conductance at finite temperature for both noninteracting and interacting electrons. The quantized conductance is identified in PIMC simulations without making the approximations of the Luttinger model. The low electron density regime is subject to strong interactions, since the kinetic energy decreases faster than the Coulomb interaction at low density. An electron state called the Wigner crystal has been proposed in this regime for quasi-1D wires. By using PIMC, I observe the zig-zag structure of the Wigner crystal. The quantum fluctuations suppress the long-range correlations, making the order short-ranged. Spin correlations are calculated and used to evaluate the spin coupling strength in a zig-zag state. I also find that as the density increases, electrons undergo a structural phase transition to a dimer state, in which two electrons of opposite spins are coupled across the two rows of the zig-zag. A phase diagram is sketched for a range of densities and transverse confinements. The quantum point contact (QPC) is a typical realization of quantum wires. I study the QPC by explicitly simulating a system of electrons in and around a Timp potential (Timp, 1992). Localization of a single electron in the middle of the channel is observed at 5 K, as the split gate voltage increases. The DC conductance is calculated, which shows the effect of the Coulomb interaction. At 1 K and low electron density, a state similar to the Wigner crystal is found inside the channel.
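As a point of reference for the method itself, a toy path-integral Monte Carlo calculation for a single particle in a one-dimensional harmonic well is sketched below. It bears no resemblance to the many-electron quantum-wire simulations in the dissertation; the temperature, number of time slices, and Metropolis step size are arbitrary choices, and the primitive action is the simplest possible discretization.

    import numpy as np

    # Toy PIMC: one particle in a 1-D harmonic well (units hbar = m = omega = 1)
    beta, M = 8.0, 32                  # inverse temperature, imaginary-time slices
    tau = beta / M
    rng = np.random.default_rng(0)
    path = np.zeros(M)                 # closed imaginary-time path (one bead per slice)

    def action_piece(i, x):
        """Kinetic and potential terms of the primitive action that involve bead i."""
        left, right = path[(i - 1) % M], path[(i + 1) % M]
        return ((x - left) ** 2 + (right - x) ** 2) / (2 * tau) + tau * 0.5 * x ** 2

    samples = []
    for sweep in range(6000):
        for i in range(M):
            x_new = path[i] + rng.normal(scale=0.5)
            # Metropolis acceptance on the change in action
            if rng.random() < np.exp(action_piece(i, path[i]) - action_piece(i, x_new)):
                path[i] = x_new
        if sweep > 1000:
            samples.append(np.mean(path ** 2))

    print("<x^2> PIMC :", np.mean(samples))           # should come out near 0.5
    print("<x^2> exact:", 0.5 / np.tanh(beta / 2))    # harmonic-oscillator reference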
Contributors: Liu, Jianheng, 1982- (Author) / Shumway, John B (Thesis advisor) / Schmidt, Kevin E (Committee member) / Chen, Tingyong (Committee member) / Yu, Hongbin (Committee member) / Ros, Robert (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
We solve the problem of activity verification in the context of sustainability. Activity verification is the process of proving the user assertions pertaining to a certain activity performed by the user. Our motivation lies in incentivizing the user for engaging in sustainable activities like taking public transport or recycling. Such incentivization schemes require the system to verify the claim made by the user. The system verifies these claims by analyzing the supporting evidence captured by the user while performing the activity. The proliferation of portable smart-phones in the past few years has provided us with a ubiquitous and relatively cheap platform, with multiple sensors such as an accelerometer, gyroscope, and microphone, to capture this evidence data in situ. In this research, we investigate supervised and semi-supervised learning techniques for activity verification. Both techniques make use of the data set constructed from the evidence submitted by the user. Supervised learning makes use of annotated evidence data to build a function to predict the class labels of the unlabeled data points. The evidence data captured can be either unimodal or multimodal in nature. We use accelerometer data as evidence for transportation mode verification and image data as evidence for recycling verification. After training the system, we achieve a maximum accuracy of 94% when classifying the transport mode and 81% when detecting recycling activity. In the case of recycling verification, we could improve the classification accuracy by asking the user for more evidence. We present some techniques to ask the user for the next best piece of evidence that maximizes the probability of classification. Using these techniques for detecting recycling activity, the accuracy increases to 93%. The major disadvantage of using supervised models is that they require extensive annotated training data, which is expensive to collect. Due to the limited training data, we look at graph-based inductive semi-supervised learning methods to propagate the labels among the unlabeled samples. In the semi-supervised approach, we represent each instance in the data set as a node in the graph. Since it is a complete graph, edges interconnect these nodes, with each edge carrying a weight representing the similarity between the points. We propagate the labels in this graph, based on the proximity of the data points to the labeled nodes. We estimate the performance of these algorithms by measuring how close the probability distribution of the data after label propagation is to the probability distribution of the ground truth data. Since labeling has a cost associated with it, in this thesis we propose two algorithms that help select the minimum number of labeled points needed to propagate the labels accurately. Our proposed algorithm achieves up to a 73% increase in performance compared to the baseline algorithm.
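Graph-based label propagation of the kind described here can be sketched in a few lines: build a similarity-weighted complete graph over all samples, clamp the labeled nodes, and let the labels diffuse to their neighbors. The sketch below is a standard propagation loop on made-up two-dimensional data, not the thesis's algorithms or its evidence features.

    import numpy as np

    def label_propagation(X, y, n_iter=100, sigma=1.0):
        """Propagate labels over a fully connected similarity graph.
        y: -1 for unlabeled points, otherwise an integer class label."""
        n, classes = len(y), np.unique(y[y >= 0])
        # Gaussian similarity between every pair of points (complete graph)
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        P = W / W.sum(axis=1, keepdims=True)          # row-normalized transition matrix
        F = np.zeros((n, len(classes)))
        labeled = y >= 0
        F[labeled, np.searchsorted(classes, y[labeled])] = 1.0
        for _ in range(n_iter):
            F = P @ F                                 # diffuse labels along the edges
            F[labeled] = 0.0
            F[labeled, np.searchsorted(classes, y[labeled])] = 1.0   # clamp known labels
        return classes[np.argmax(F, axis=1)]

    # Two made-up clusters with one labeled example each
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
    y = np.full(40, -1)
    y[0], y[20] = 0, 1
    print(label_propagation(X, y))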
Contributors: Desai, Vaishnav (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Integrated photonics requires high-gain optical materials in the telecom wavelength range for optical amplifiers and coherent light sources. Erbium (Er) containing materials are ideal candidates due to the 1.5 μm emission from Er3+ ions. However, the Er density in typical Er-doped materials is less than 1 × 10^20 cm^-3, thus limiting the maximum optical gain to a few dB/cm, too small to be useful for integrated photonics applications. Er compounds could potentially solve this problem since they contain a much higher Er density. So far the existing Er compounds suffer from short lifetimes and strong upconversion effects, mainly due to the poor quality of crystals produced by various methods of thin film growth and deposition. This dissertation explores a new Er compound, erbium chloride silicate (ECS, Er3(SiO4)2Cl), in the nanowire form, which facilitates the growth of high quality single crystals. Growth methods for such single crystal ECS nanowires have been established. Various structural and optical characterizations have been carried out. The high crystal quality of the ECS material leads to a long lifetime of the first excited state of Er3+ ions, up to 1 ms, at Er densities higher than 10^22 cm^-3. This Er lifetime-density product was found to be the largest among all Er containing materials. A unique integrating sphere method was developed to measure the absorption cross section of ECS nanowires from 440 to 1580 nm. Pump-probe experiments demonstrated a 644 dB/cm signal enhancement from a single ECS wire. It was estimated that such a large signal enhancement can overcome the absorption to result in a net material gain, but is not sufficient to compensate for waveguide propagation loss. In order to suppress the upconversion process in ECS, ytterbium (Yb) and yttrium (Y) ions are introduced as substituent ions for Er in the ECS crystal structure to reduce the Er density. While the addition of Yb ions only partially succeeded, erbium yttrium chloride silicate (EYCS) with controllable Er density was synthesized successfully. EYCS with 30 at. % Er was found to be the best. It shows the strongest PL emission at 1.5 μm, and thus can potentially be used as a high-gain material.
Contributors: Yin, Leijun (Author) / Ning, Cun-Zheng (Thesis advisor) / Chamberlin, Ralph (Committee member) / Yu, Hongbin (Committee member) / Menéndez, Jose (Committee member) / Ponce, Fernando (Committee member) / Arizona State University (Publisher)
Created: 2013