Matching Items (1,294)

Description
With the rapid development of mobile sensing technologies such as GPS, RFID, and smartphone sensors, capturing position data in the form of trajectories has become easy. Moving-object trajectory analysis is a growing area of interest owing to its applications in domains such as marketing, security, and traffic monitoring and management. To better understand movement behaviors from raw mobility data, this doctoral work provides analytic models for analyzing trajectory data. As a first contribution, a model is developed to detect changes in trajectories over time. If the taxis moving in a city are viewed as sensors that provide real-time information about the city's traffic, a change in their trajectories over time can reveal that the road network has changed. To detect changes, trajectories are modeled with a Hidden Markov Model (HMM). A modified training algorithm for HMM parameter estimation, called m-BaumWelch, is used to develop likelihood estimates under assumed changes and to detect changes in trajectory data over time. Data from vehicles are used to test the change detection method. Secondly, sequential pattern mining is used to develop a model that detects changes in the frequent patterns occurring in trajectory data. The aim is to answer two questions: Are the frequent patterns still frequent in the new data? If they are, has the time-interval distribution in the pattern changed? Two approaches are considered for change detection: a frequency-based approach and a distribution-based approach. The methods are illustrated with vehicle trajectory data. Finally, a model is developed for clustering and outlier detection in semantic trajectories. One challenge with clustering semantic trajectories is that both numeric and categorical attributes are present; another is that trajectories can be of different lengths and can have missing values. A tree-based ensemble is used to address these problems, and the approach is extended to outlier detection in semantic trajectories.
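The m-BaumWelch algorithm is specific to this dissertation and is not reproduced here. As a hedged sketch of the underlying change-detection idea, the snippet below trains a standard Gaussian HMM on historical trajectory segments (using the hmmlearn package as an assumed stand-in) and flags a change when the average log-likelihood of new data drops well below the training baseline; the threshold, feature layout, and function names are illustrative assumptions, not the author's implementation.

```python
import numpy as np
from hmmlearn import hmm  # assumed stand-in for the dissertation's HMM code

def fit_reference_model(segments, n_states=4, seed=0):
    """Train a Gaussian HMM on historical trajectory segments.
    segments: list of (T_i, 2) arrays of (x, y) positions."""
    X = np.vstack(segments)
    lengths = [len(s) for s in segments]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=seed)
    model.fit(X, lengths)
    baseline = model.score(X) / len(X)  # average log-likelihood per sample
    return model, baseline

def change_detected(model, baseline, new_segment, drop=5.0):
    """Flag a change when the new segment's average log-likelihood falls
    more than `drop` nats below the training baseline (illustrative rule;
    m-BaumWelch instead re-estimates parameters under assumed changes)."""
    ll = model.score(new_segment) / len(new_segment)
    return ll < baseline - drop
```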
Contributors: Kondaveeti, Anirudh (Author) / Runger, George C. (Thesis advisor) / Mirchandani, Pitu (Committee member) / Pan, Rong (Committee member) / Maciejewski, Ross (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The focus of this document is the examination of a new robot simulator developed to aid students in learning robotics programming and to let them test their programs in a simulated world. The simulator, accessed via a website, provides a simulated environment, a programming interface, and the ability to control a simulated robot. The simulated environment consists of a user-customizable maze and a robot, which can be controlled manually, via Web service, or through the Web programming interface. The Web programming interface provides dropdown boxes from which users select options to program their implementations; it is designed to help new students learn the basic skills and thought processes used to program robots. Data were collected and analyzed to determine how effective the system is in helping students learn, including how quickly students were able to program the algorithms assigned to them and how many lines of code were needed to implement them. Students' performance was also monitored to determine how well they were able to use the program and whether there were any significant problems. The students also completed surveys on how well the website helped them learn and understand various concepts. The data collected show that the website was a helpful learning tool and that students were able to use the programming interface quickly and effectively.
Contributors: Drown, Garrett (Author) / Tsai, Wei-Tek (Thesis advisor) / Chen, Yinong (Thesis advisor) / Claveau, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This dissertation presents the Temporal Event Query Language (TEQL), a new language for querying event streams. Event stream processing enables online querying of streams of events to extract relevant data in a timely manner; TEQL enables querying of interval-based event streams using temporal database operators. Temporal databases and temporal query languages have been a subject of research for more than 30 years and are a natural fit for expressing queries that involve a temporal dimension. However, operators developed in that context cannot be applied directly to event streams. This research extends a preexisting relational framework for event stream processing to support temporal queries, identifying the language features and formal semantic extensions required. The extended framework supports continuous, step-wise evaluation of temporal queries, and the incremental evaluation of TEQL operators is formalized to avoid recomputation of previous results. The research includes a prototype that supports the integrated event and temporal query processing framework, with support for incremental evaluation and materialization of intermediate results. TEQL enables reporting temporal data in the output, direct specification of conditions over timestamps, and specification of temporal relational operators. Through the integration of temporal database operators with event languages, a new class of temporal queries over event streams becomes possible, including semantic aggregation, extraction of temporal patterns using set operators, and more accurate specification of event co-occurrence.
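TEQL's concrete syntax and formal semantics are defined in the dissertation and not in this abstract; the sketch below illustrates only the general notion of step-wise, incremental evaluation of one temporal operator, an interval-overlap join, over event streams. All class and method names are hypothetical.

```python
class IncrementalOverlapJoin:
    """Step-wise overlap join over two interval-stamped event streams.

    Events are (start, end, payload) tuples. State from earlier steps is
    kept, so each arrival produces only the *new* join results instead of
    recomputing old ones. All names here are hypothetical; TEQL's actual
    operators are defined formally in the dissertation.
    """

    def __init__(self):
        self.live = {"L": [], "R": []}  # intervals still able to join

    def push(self, side, event, watermark):
        """Insert one event; return the newly produced join pairs.
        `watermark` is a time bound below which no new events will start,
        letting expired intervals be discarded."""
        other = "R" if side == "L" else "L"
        # Retire intervals that ended before the watermark: nothing
        # arriving later can overlap them.
        self.live[other] = [ev for ev in self.live[other] if ev[1] >= watermark]
        self.live[side].append(event)
        s, e, _ = event
        # Two intervals overlap iff each starts no later than the other ends.
        return [(event, ev) for ev in self.live[other] if ev[0] <= e and s <= ev[1]]

# Example: join "alarm" intervals with "maintenance" intervals as they arrive.
j = IncrementalOverlapJoin()
j.push("L", (0, 5, "alarm-1"), watermark=0)
print(j.push("R", (3, 9, "maint-A"), watermark=0))  # one new overlapping pair
```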
Contributors: Shiva, Foruhar Ali (Author) / Urban, Susan D. (Thesis advisor) / Chen, Yi (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
As the use of photovoltaic (PV) modules in large power plants continues to increase globally, more studies on the degradation, reliability, failure modes, and failure mechanisms of field-aged modules are needed to predict module life expectancy from accelerated lifetime testing of PV modules. In this work, a 26+ year old PV power plant in Phoenix, Arizona has been evaluated for performance, reliability, and durability. The power plant, called Solar One, is owned and operated by John F. Long's homeowners association. It is a 200 kWdc power plant, rated at standard test conditions (STC), comprising 4,000 frameless PV laminates in 100 panel groups (rated at 175 kWac). The plant is made of two center-tapped bipolar arrays, the north array and the south array. Because of the limited time frame for this large project, the work was performed by two master's students (Jonathan Belmont and Kolapo Olakonu), and the test results are presented in two master's theses: this thesis presents the results obtained on the south array, and the other presents the results obtained on the north array. Each array consists of four sub-arrays, the east sub-arrays (positive and negative polarities) and the west sub-arrays (positive and negative polarities), making eight sub-arrays in total. The evaluation and analyses presented in this thesis consist of visual inspection, electrical performance measurements, and infrared thermography. A possible presence of potential induced degradation (PID) due to the potential difference between ground and strings was also investigated, and some installation practices were found to contribute to the observed power loss. The power output measured in 2011 for all eight sub-arrays at STC is approximately 76 kWdc, a power loss of 62% (from 200 kW to 76 kW) over 26+ years; the 2011 measured output of the four south sub-arrays at STC is 39 kWdc, a loss of 61% (from 100 kW to 39 kW). Encapsulation browning and non-cell interconnect ribbon breakages were determined to be the primary causes of the power loss.
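As a worked check on the reported figures (an illustration, not part of the thesis), the snippet below converts the measured decline from 200 kWdc to 76 kWdc over roughly 26 years into average linear and compound annual degradation rates.

```python
# Worked example (illustrative): annualized degradation from the plant's
# original 200 kWdc STC rating to the 76 kWdc measured in 2011.
p0, p1, years = 200.0, 76.0, 26  # "26+" years approximated as 26

total_loss = 1 - p1 / p0                       # 0.62 -> 62% total loss
linear_rate = total_loss / years               # ~2.38% of original rating per year
compound_rate = 1 - (p1 / p0) ** (1 / years)   # ~3.65% per year, compounded

print(f"total loss:    {total_loss:.1%}")
print(f"linear rate:   {linear_rate:.2%}/yr")
print(f"compound rate: {compound_rate:.2%}/yr")
```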
Contributors: Olakonu, Kolapo (Author) / Tamizhmani, Govindasamy (Thesis advisor) / Srinivasan, Devarajan (Committee member) / Rogers, Bradley (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Motion capture using cost-effective sensing technology is challenging, and the huge success of the Microsoft Kinect has attracted researchers to uncover the potential of this technology in computer vision applications. In this thesis, an upper-body motion analysis for a home-based stroke rehabilitation system using a novel RGB-D camera, the Kinect, is presented. We address this problem by first conducting a systematic analysis of the usability of the Kinect for motion analysis in stroke rehabilitation. A hybrid upper-body tracking approach is then proposed that combines off-the-shelf skeleton tracking with a novel depth-fused mean-shift tracking method. Several kinematic features are reliably extracted from the proposed inexpensive and portable motion-capture system, along with classifiers that correlate torso movement with clinical measures of unimpaired and impaired movement. Experimental results show that the proposed sensing and analysis work reliably for measuring torso-movement quality and are promising for end-point tracking. The system is currently being deployed for large-scale evaluations.
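The thesis's exact kinematic features are not listed in this abstract. As a hedged sketch, the function below computes one plausible torso feature, the forward-lean angle, from Kinect-style 3D joint positions; the joint names, coordinate convention, and feature definition are assumptions for illustration only.

```python
import numpy as np

def torso_lean_deg(shoulder_center, hip_center):
    """Angle (degrees) between the hip-to-shoulder axis and vertical.

    shoulder_center, hip_center: length-3 arrays of (x, y, z) joint
    positions in meters, with y pointing up (the common Kinect
    camera-space convention). Illustrative feature, not the thesis's
    exact definition.
    """
    axis = np.asarray(shoulder_center, float) - np.asarray(hip_center, float)
    cos_t = axis[1] / np.linalg.norm(axis)  # y-component vs. vertical
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Example: a torso leaning slightly forward
print(torso_lean_deg([0.02, 0.45, 2.1], [0.0, 0.0, 2.0]))  # ~13 degrees
```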
Contributors: Du, Tingfang (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Rikakis, Thanassis (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications, which can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs; such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of worst-case execution time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis yields an algorithm for determining control-flow paths within an application that can exceed a given cost threshold, and the results are used to identify the path sections that contribute most to the cost excess. A hybrid approach for determining cost excesses is also presented that relies mostly on dynamic measurements but incorporates calculations based on the static analysis; it uses operational profiles to increase the precision and usefulness of the calculations.
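The dissertation defines its own cost model and algorithm; the sketch below shows one plausible reading of the static step, a worst-case-cost (longest-path) computation over a control-flow DAG with per-block monetary costs, with loops assumed already bounded as in classical WCET analysis. The graph encoding, cost values, and names are assumptions.

```python
from functools import lru_cache

def worst_case_cost(cfg, block_cost, entry, exit_):
    """Maximum monetary cost over entry->exit paths in a control-flow DAG.

    cfg: block -> list of successor blocks (loops already bounded or
    unrolled, as in classical WCET analysis).
    block_cost: block -> monetary cost of executing that block.
    """
    @lru_cache(maxsize=None)
    def best(block):
        if block == exit_:
            return block_cost[block]
        succs = cfg.get(block, [])
        if not succs:
            return float("-inf")  # dead end that never reaches exit_
        return block_cost[block] + max(best(s) for s in succs)
    return best(entry)

# Illustrative use: flag the application when the worst path exceeds a
# budget threshold; the argmax successors then identify costly sections.
cfg = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
cost = {"a": 0.002, "b": 0.010, "c": 0.001, "d": 0.004}
print(worst_case_cost(cfg, cost, "a", "d"))  # ~0.016 via a -> b -> d
```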
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis presents a new technique for developing an air-conditioner (A/C) compressor single-phase induction motor model for use in an electromagnetic transients program (EMTP) simulation tool. The method can also represent multiple units of the component on a specific three-phase distribution feeder, allowing investigation of fault-induced delayed voltage recovery (FIDVR) and the causes of motor stalling. The system of differential equations representing the single-phase induction motor model is developed and formulated, and the implicit backward Euler method is applied to numerically integrate the stator currents drawn from the electric network. The angular-position dependency of the rotor shaft is retained in the model's inductance matrix to accurately capture the dynamics of the motor loads, and the equivalent circuit of the new model is interfaced with the electric network in the EMTP. The dynamic response of the motor when subjected to faults at different points on the voltage waveform has been studied using the EMTP simulator. The mechanism and impacts of motor stalling need to be explored with multiple units of the detailed model connected to a realistic three-phase distribution system. The model can be used by air-conditioner manufacturers to assess and improve compressor motor designs; another critical application is examining the impacts of asymmetric transmission faults on distribution systems to investigate and develop mitigation measures for the FIDVR problem.
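The full machine model is defined in the thesis itself. As a minimal sketch of the numerical scheme named in the abstract, the snippet below applies backward Euler steps to a generic linear state-space model x' = Ax + Bu; in the actual motor model, A would be rebuilt each step from the rotor-position-dependent inductance matrix. All matrices and values here are placeholders, not the motor's parameters.

```python
import numpy as np

def backward_euler_step(x, u_next, A, B, h):
    """One implicit (backward Euler) step for x' = A x + B u:
        x_{n+1} = x_n + h * (A x_{n+1} + B u_{n+1})
    solved as (I - h A) x_{n+1} = x_n + h B u_{n+1}."""
    n = len(x)
    return np.linalg.solve(np.eye(n) - h * A, x + h * (B @ u_next))

# Illustrative use with placeholder matrices and a 60 Hz source voltage.
h = 50e-6  # 50 microsecond EMTP-style time step
A = np.array([[-20.0, 5.0], [3.0, -15.0]])
B = np.array([[10.0], [0.0]])
x = np.zeros(2)
for k in range(200):
    u = np.array([np.sqrt(2) * 120 * np.sin(2 * np.pi * 60 * k * h)])
    x = backward_euler_step(x, u, A, B, h)
```

Backward Euler is unconditionally stable for this class of problems, which is why implicit integration is the usual choice when motor models are interfaced with stiff network equations in an EMTP.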
Contributors: Liu, Yuan (Author) / Vittal, Vijay (Thesis advisor) / Undrill, John (Committee member) / Ayyanar, Raja (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics (SPH) algorithm using DirectX 11 and DirectCompute. Its main goal is to present an alternative solution to the widely studied problem of fluid simulation: most other solutions have been implemented with the NVIDIA CUDA framework, whereas the solution proposed here uses Microsoft's API for general-purpose computing on graphics processing units. The implementation allows a large number of particles to be simulated in real time. The solution uses the SPH algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. DirectCompute compute shaders evaluate each particle using the multithreading and multi-core capabilities of the GPU, increasing overall performance. The solution then extracts the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline; in particular, this document presents a method for using the Geometry Shader stage to generate the triangle mesh defined by the Marching Cubes method. The implementation can simulate over 64K particles at roughly 900 frames per second without the surface reconstruction steps and 400 frames per second with the Marching Cubes steps included.
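The thesis implements this on the GPU in DirectCompute shaders; as a hedged CPU-side sketch of the density step at the core of SPH, the snippet below evaluates the standard poly6 smoothing kernel over all particle pairs in NumPy. The brute-force O(n²) neighbor search and all parameter values are illustrative simplifications (real implementations use a spatial grid).

```python
import numpy as np

def sph_densities(pos, mass, h):
    """Per-particle density via the standard poly6 smoothing kernel:
        W(r, h) = 315 / (64 * pi * h^9) * (h^2 - r^2)^3   for r < h.
    NumPy stand-in for the thesis's DirectCompute shader, for clarity only.
    """
    coeff = 315.0 / (64.0 * np.pi * h**9)
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)  # pairwise r^2
    w = np.where(d2 < h * h, (h * h - d2) ** 3, 0.0)
    return coeff * (w * mass[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.random((1000, 3)) * 0.1   # particles in a 10 cm cube (illustrative)
mass = np.full(1000, 0.001)         # 1 g per particle (illustrative)
rho = sph_densities(pos, mass, h=0.02)
```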
Contributors: Figueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Zinc oxide (ZnO) has attracted much interest over the last decades as a functional material. ZnO is a potential transparent conducting oxide, competing with indium tin oxide (ITO), graphene, and carbon nanotube films, and it is known to become conductive when doped with elements such as indium, gallium, and aluminum. The solubility of these dopant elements in ZnO is still debated, but alternative conducting materials are needed in film or nanostructure form for display devices, a consequence of the ever-increasing price of indium. In addition, a new generation of solar cells (nanostructured or hybrid photovoltaics) requires compatible materials that can stand free on substrates without seed or buffer layers and that provide unblocked electron or hole pathways toward the electrodes. Nanostructures for solar cells using inorganic materials such as silicon (Si), titanium oxide (TiO2), and ZnO have become an interesting research topic in the solar cell community as a way to overcome the efficiency limitations of organic solar cells. This dissertation is a study of the rational solution-based synthesis of one-dimensional ZnO nanomaterials and their solar cell applications. The results have implications for cost-effective and uniform nanomanufacturing for next-generation solar cells, achieved by controlling growth conditions and by doping transition metal elements in solution.
Contributors: Choi, Hyung Woo (Author) / Alford, Terry L. (Thesis advisor) / Krause, Stephen (Committee member) / Theodore, N. David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Image understanding plays an increasingly crucial role in vision applications. Sparse models form an important component of image understanding, since the statistics of natural images reveal the presence of sparse structure, and sparse methods lead to parsimonious models that are also efficient for large-scale learning. In sparse modeling, data are represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models to large-scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed and shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning, such as algorithmic stability and generalization, are considered, and ensemble learning is incorporated for effective large-scale learning. In addition to strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relations between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented; object recognition experiments on standard datasets show that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relations between data samples can lead to a more robust graph embedding for unsupervised clustering; by performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI is developed. Finally, approaches to building dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
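The multilevel and kernel dictionary algorithms are the dissertation's own contributions; the sketch below shows only the basic sparse model the abstract builds on, x ≈ Da with sparse coefficients a, using scikit-learn's dictionary learning with OMP coding as an assumed stand-in. The patch data, dictionary size, and sparsity level are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Illustrative data: 500 random 8x8 "patches" flattened to 64-d vectors.
# In practice these would be patches sampled from natural images.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))

# Learn a 128-atom dictionary; each sample is coded with 5 nonzero atoms
# via orthogonal matching pursuit, so x ~= D^T-combination of few atoms.
dico = MiniBatchDictionaryLearning(n_components=128,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(X).transform(X)   # sparse coefficients, shape (500, 128)
D = dico.components_               # dictionary atoms, shape (128, 64)
recon = codes @ D                  # reconstruction of each patch
print(np.mean((X - recon) ** 2))   # average reconstruction error
```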
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013