Matching Items (1,594)
Description
The focus of this document is the examination of a new robot simulator developed to aid students in learning robotics programming and to give them the ability to test their programs in a simulated world. The simulator, accessed via a website, provides a simulated environment, a programming interface, and the ability to control a simulated robot. The simulated environment consists of a user-customizable maze and a robot, which can be controlled manually, via Web service, or by using the Web programming interface. The Web programming interface provides dropdown boxes from which users may select various options to program their implementations. It is designed to aid new students in learning the basic skills and thought processes used to program robots. Data was collected and analyzed to determine how effective this system is in helping students learn. This included how quickly students were able to program the algorithms assigned to them and how many lines of code were used to implement them. Students' performance was also monitored to determine how well they were able to use the program and whether there were any significant problems. The students also completed surveys to communicate how well the website helped them learn and understand various concepts. The data collected shows that the website was a helpful learning tool for the students and that they were able to use the programming interface quickly and effectively.
Contributors: Drown, Garrett (Author) / Tsai, Wei-Tek (Thesis advisor) / Chen, Yinong (Thesis advisor) / Claveau, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This dissertation presents the Temporal Event Query Language (TEQL), a new language for querying event streams. Event Stream Processing enables online querying of streams of events to extract relevant data in a timely manner. TEQL enables querying of interval-based event streams using temporal database operators. Temporal databases and temporal query languages have been a subject of research for more than 30 years and are a natural fit for expressing queries that involve a temporal dimension. However, operators developed in this context cannot be directly applied to event streams. This research extends a preexisting relational framework for event stream processing to support temporal queries. The language features and formal semantic extensions required to extend the relational framework are identified. The extended framework supports continuous, step-wise evaluation of temporal queries. The incremental evaluation of TEQL operators is formalized to avoid re-computation of previous results. The research includes the development of a prototype that supports the integrated event and temporal query processing framework, with support for incremental evaluation and materialization of intermediate results. TEQL enables reporting temporal data in the output, direct specification of conditions over timestamps, and specification of temporal relational operators. Through the integration of temporal database operators with event languages, a new class of temporal queries is made possible for querying event streams. New features include semantic aggregation, extraction of temporal patterns using set operators, and a more accurate specification of event co-occurrence.
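TEQL's concrete syntax is not reproduced in this abstract. Purely as an illustration of the step-wise, incremental evaluation the framework supports, the Python sketch below joins two interval-based event streams on temporal overlap while reusing previously materialized results; the class and method names are hypothetical and are not drawn from the dissertation.

```python
# Hypothetical sketch of incremental evaluation of an interval "overlaps" join
# over two event streams; each new event is compared only against previously
# materialized state, so earlier results are never re-computed.
from collections import namedtuple

Event = namedtuple("Event", ["id", "start", "end"])  # interval-based event

class OverlapJoin:
    def __init__(self):
        self.left, self.right = [], []  # materialized intermediate results

    @staticmethod
    def overlaps(a, b):
        return a.start < b.end and b.start < a.end

    def push_left(self, e):
        """Process one new event on the left stream, emit new join results."""
        self.left.append(e)
        return [(e, r) for r in self.right if self.overlaps(e, r)]

    def push_right(self, e):
        """Process one new event on the right stream, emit new join results."""
        self.right.append(e)
        return [(l, e) for l in self.left if self.overlaps(l, e)]

join = OverlapJoin()
join.push_left(Event("a1", start=0, end=10))
print(join.push_right(Event("b1", start=5, end=12)))  # one overlapping pair
```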
Contributors: Shiva, Foruhar Ali (Author) / Urban, Susan D (Thesis advisor) / Chen, Yi (Thesis advisor) / Davulcu, Hasan (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Laboratory automation systems have seen many technological advances in recent times. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and need to be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive, and thread-safe. The thesis also presents a framework developed to implement the ideas proposed by the architecture. The framework is used to develop software that is scalable, distributed, responsive, and thread-safe. The framework currently includes components that control commonly used laboratory automation devices, such as mechanical stages and cameras, and that perform common laboratory automation tasks, such as imaging.
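As a minimal sketch of the architectural idea (autonomous components communicating only through asynchronous message passing), the following Python example uses one thread and one message queue per component; the component names and message fields are illustrative and are not taken from the thesis, whose framework is not written in Python.

```python
# Minimal sketch: autonomous components with private inboxes, communicating
# only by asynchronous message passing; each component runs in its own thread.
import queue
import threading
import time

class Component(threading.Thread):
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.inbox = queue.Queue()  # thread-safe message queue

    def send(self, target, message):
        target.inbox.put((self.name, message))  # does not block the sender

    def run(self):
        while True:
            sender, message = self.inbox.get()
            print(f"{self.name} received {message!r} from {sender}")

# illustrative components, not actual devices from the thesis framework
stage = Component("stage")
camera = Component("camera")
stage.start()
camera.start()

camera.send(stage, {"command": "move_to", "x": 10.0, "y": 2.5})
stage.send(camera, {"command": "acquire_image"})
time.sleep(0.2)  # give the worker threads time to handle the messages
```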
Contributors: Kuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Motion capture using cost-effective sensing technology is challenging, and the huge success of the Microsoft Kinect has been attracting researchers to uncover the potential of using this technology in computer vision applications. In this thesis, an upper-body motion analysis for a home-based stroke rehabilitation system using a novel RGB-D camera, the Kinect, is presented. We address this problem by first conducting a systematic analysis of the usability of the Kinect for motion analysis in stroke rehabilitation. Then a hybrid upper-body tracking approach is proposed which combines off-the-shelf skeleton tracking with a novel depth-fused mean shift tracking method. We propose several kinematic features that can be reliably extracted from the proposed inexpensive and portable motion capture system, along with classifiers that correlate torso movement with clinical measures of unimpaired and impaired movement. Experimental results show that the proposed sensing and analysis approach reliably measures torso movement quality and is promising for end-point tracking. The system is currently being deployed for large-scale evaluations.
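The depth-fused weighting and its fusion with skeleton tracking are specific to the thesis and are not reproduced here; as a reference point only, the sketch below shows the basic mean shift update that such a tracker iterates, applied to an arbitrary per-pixel weight map.

```python
# Basic mean shift: repeatedly move a window to the weighted centroid of a
# per-pixel weight map (e.g., a likelihood image); illustrative only.
import numpy as np

def mean_shift(weights, center, half_size, iters=20, eps=0.5):
    cy, cx = center
    h, w = weights.shape
    for _ in range(iters):
        y0, y1 = max(int(cy - half_size), 0), min(int(cy + half_size + 1), h)
        x0, x1 = max(int(cx - half_size), 0), min(int(cx + half_size + 1), w)
        window = weights[y0:y1, x0:x1]
        ys, xs = np.mgrid[y0:y1, x0:x1]
        total = window.sum()
        if total == 0:
            break  # no support under the window
        ny, nx = (ys * window).sum() / total, (xs * window).sum() / total
        shift = np.hypot(ny - cy, nx - cx)
        cy, cx = ny, nx
        if shift < eps:
            break  # converged
    return cy, cx

# toy example: the window drifts toward a blob of weight centered near (40, 60)
weight_map = np.zeros((100, 100))
weight_map[35:45, 55:65] = 1.0
print(mean_shift(weight_map, center=(30, 50), half_size=15))
```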
Contributors: Du, Tingfang (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Rikakis, Thanassis (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine the worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that consists mostly of dynamic measurements but also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations.
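The thesis's actual static analysis adapts WCET techniques to monetary cost and is not reproduced in the abstract; as a simplified sketch of the underlying idea, the Python example below assigns a hypothetical dollar cost to each basic block of an acyclic control flow graph, computes the most expensive path, and flags it when it exceeds a budget. Block names and costs are invented for illustration.

```python
# Simplified sketch: worst-case monetary cost over an acyclic control flow
# graph, with a check against a cost budget. Loop bounds, pricing models, and
# the thesis's hybrid static/dynamic refinements are deliberately omitted.
from functools import lru_cache

# hypothetical CFG: block -> (cost in dollars, successor blocks)
CFG = {
    "entry":     (0.001, ["query_db", "cache_hit"]),
    "query_db":  (0.010, ["render"]),
    "cache_hit": (0.002, ["render"]),
    "render":    (0.004, []),
}

@lru_cache(maxsize=None)
def worst_case(block):
    """Return (cost, path) of the most expensive path starting at `block`."""
    cost, successors = CFG[block]
    if not successors:
        return cost, (block,)
    tail_cost, tail_path = max(worst_case(s) for s in successors)
    return cost + tail_cost, (block,) + tail_path

budget = 0.012
cost, path = worst_case("entry")
if cost > budget:
    print(f"worst-case path {path} costs ${cost:.3f}, exceeding ${budget:.3f}")
```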
Contributors: Buell, Kevin, Ph.D. (Author) / Collofello, James (Thesis advisor) / Davulcu, Hasan (Committee member) / Lindquist, Timothy (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics (SPH) algorithm using DirectX 11 and DirectCompute. The main goal of this document is to present to the reader an alternative solution to the extensively studied and researched problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; however, the solution proposed in this document uses Microsoft's general-purpose computing on graphics processing units API. The implementation allows for the simulation of a large number of particles in a real-time scenario. The solution presented here uses the SPH algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle, using the multithreading and multi-core capabilities of the GPU to increase overall performance. The solution then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, this document presents a method for using the Geometry Shader stage to generate the triangle mesh defined by the Marching Cubes method. The implementation results show the ability to simulate over 64K particles at roughly 900 frames per second without the surface reconstruction steps and 400 frames per second with the Marching Cubes steps included.
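For reference, the SPH discretization the abstract refers to approximates each field quantity by a kernel-weighted sum over neighboring particles; a common form of the density and pressure-force estimates is shown below. The specific smoothing kernels and constants used in this implementation are not given in the abstract, so this is only the general shape of the method.

```latex
\rho_i = \sum_j m_j \, W(\mathbf{r}_i - \mathbf{r}_j, h), \qquad
\mathbf{f}_i^{\text{pressure}} = -\sum_j m_j \,
  \frac{p_i + p_j}{2\rho_j} \, \nabla W(\mathbf{r}_i - \mathbf{r}_j, h),
```

where $W$ is a smoothing kernel with support radius $h$, and $m_j$, $\rho_j$, $p_j$ are the mass, density, and pressure of the neighboring particles.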
Contributors: Figueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The work presented in this thesis covers the synthesis and characterization of an ionomer that is applicable to zinc-air batteries. Polysulfone polymer is first chloromethylated and then quaternized to create an ion-conducting polymer. Nuclear magnetic resonance (NMR) spectra indicate that the degree of chloromethylation was 114%. The chemical and physical properties that were investigated include: the ionic conductivity, ion exchange capacity, water retention capacity, diameter and thickness swelling ratios, porosity, glass transition temperature, ionic conductivity enhanced by free salt addition, and the concentration and diffusivity of oxygen within the ionomer. It was found that the fully hydrated hydroxide form of the ionomer had a room-temperature ionic conductivity of 39.92 mS/cm, while the chloride form had a room-temperature ionic conductivity of 11.80 mS/cm. The ion exchange capacity of the ionomer was found to be 1.022 mmol/g. The water retention capacity (WRC) of the hydroxide form was found to be 172.6%, while the chloride form had a WRC of 67.9%. The hydroxide form of the ionomer had a diameter swelling ratio of 34% and a thickness swelling ratio of 55%. The chloride form had a diameter swelling ratio of 32% and a thickness swelling ratio of 28%. The largest pore size in the ionomer was found to be 32.6 nm in diameter. The glass transition temperature of the ionomer is speculated to be 344°C; a definite measurement could not be made. The room-temperature ionic conductivity at 50% relative humidity was improved to 12.90 mS/cm with the addition of 80% free salt. The concentration and diffusivity of oxygen in the ionomer were found to be 1.3 ± 0.2 mM and (0.49 ± 0.15) × 10⁻⁵ cm²/s, respectively. The ionomer synthesized in this research had material properties and performance comparable to other ionomers reported in the literature, an indication that this ionomer is suitable for further study and integration into a zinc-air battery. This thesis concludes with suggestions for future research focused on improving the performance of the ionomer as well as improving the methodology.
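The abstract reports these properties without stating the measurement relations; purely for orientation, the conventional definitions of ionic conductivity (from an impedance measurement) and water retention capacity are given below. Whether the thesis uses exactly these forms is an assumption here.

```latex
\sigma = \frac{l}{R\,A}, \qquad
\mathrm{WRC} = \frac{m_{\mathrm{wet}} - m_{\mathrm{dry}}}{m_{\mathrm{dry}}} \times 100\%,
```

where $l$ is the membrane thickness, $R$ the measured resistance, $A$ the electrode area, and $m_{\mathrm{wet}}$, $m_{\mathrm{dry}}$ the hydrated and dry membrane masses.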
Contributors: Padilla, Manuel (Author) / Friesen, Cody A (Thesis advisor) / Buttry, Daniel (Committee member) / Sieradzki, Karl (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis introduces the Model-Based Development of Multi-iRobot Toolbox (MBDMIRT), a Simulink-based toolbox designed to provide the means to acquire and practice the Model-Based Development (MBD) skills necessary to design real-time embedded systems. The toolbox was developed in the Cyber-Physical System Laboratory at Arizona State University. The MBDMIRT toolbox runs under MATLAB/Simulink to simulate the movements of multiple iRobots and to control, after verification by simulation, multiple physical iRobots accordingly. It adopts Simulink/Stateflow, which exemplifies an approach to MBD, to program the behaviors of the iRobots. The MBDMIRT toolbox reuses and augments the open-source MATLAB-Based Simulator for the iRobot Create from Cornell University to run the simulation. For the mechanism of iRobot control, the MBDMIRT toolbox applies the MATLAB Toolbox for the iRobot Create (MTIC) from the United States Naval Academy to command the physical iRobots. The MBDMIRT toolbox supports a timer in both the simulation and the control, which is based on the local clock of the PC running the toolbox. In addition to the built-in sensors of an iRobot, the toolbox can simulate four user-added sensors: an overhead localization system (OLS), sonar sensors, a camera, and Light Detection and Ranging (LIDAR). While controlling a physical iRobot, the toolbox supports the StarGazer OLS manufactured by HAGISONIC, Inc.
Contributors: Su, Shih-Kai (Author) / Fainekos, Georgios E (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Artemiadis, Panagiotis K (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Zinc oxide (ZnO) has attracted much interest as a functional material during the last few decades. Furthermore, ZnO is a potential transparent conducting oxide, competing with indium tin oxide (ITO), graphene, and carbon nanotube films. It is known to become conductive when doped with elements such as indium, gallium, and aluminum. The solubility of those dopant elements in ZnO is still debated; nevertheless, the ever-increasing price of indium makes it necessary to find alternative conducting materials in film or nanostructure form for display devices. In addition, new-generation solar cells (nanostructured or hybrid photovoltaics) require compatible materials that are capable of standing freely on substrates without seed or buffer layers and that provide unblocked electron or hole pathways toward the electrodes. Nanostructures for solar cells using inorganic materials such as silicon (Si), titanium oxide (TiO2), and ZnO have been an interesting research topic in the solar cell community as a way to overcome the efficiency limitations of organic solar cells. This dissertation is a study of the rational solution-based synthesis of one-dimensional ZnO nanomaterials and their solar cell applications. These results have implications for cost-effective and uniform nanomanufacturing for next-generation solar cell applications, achieved by controlling the growth conditions and by doping transition metal elements in solution.
Contributors: Choi, Hyung Woo (Author) / Alford, Terry L. (Thesis advisor) / Krause, Stephen (Committee member) / Theodore, N. David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Image understanding has been playing an increasingly crucial role in vision applications. Sparse models form an important component in image understanding, since the statistics of natural images reveal the presence of sparse structure. Sparse methods lead to parsimonious models, in addition to being efficient for large-scale learning. In sparse modeling, data is represented as a sparse linear combination of atoms from a "dictionary" matrix. This dissertation focuses on understanding different aspects of sparse learning, thereby enhancing the use of sparse methods by incorporating tools from machine learning. With the growing need to adapt models for large-scale data, it is important to design dictionaries that can model the entire data space and not just the samples considered. By exploiting the relation of dictionary learning to 1-D subspace clustering, a multilevel dictionary learning algorithm is developed, and it is shown to outperform conventional sparse models in compressed recovery and image denoising. Theoretical aspects of learning such as algorithmic stability and generalization are considered, and ensemble learning is incorporated for effective large-scale learning. In addition to building strategies for efficiently implementing 1-D subspace clustering, a discriminative clustering approach is designed to estimate the unknown mixing process in blind source separation. By exploiting the non-linear relation between image descriptors and allowing the use of multiple features, sparse methods can be made more effective in recognition problems. The idea of multiple kernel sparse representations is developed, and algorithms for learning dictionaries in the feature space are presented. Using object recognition experiments on standard datasets, it is shown that the proposed approaches outperform other sparse coding-based recognition frameworks. Furthermore, a segmentation technique based on multiple kernel sparse representations is developed and successfully applied to automated brain tumor identification. Using sparse codes to define the relation between data samples can lead to a more robust graph embedding for unsupervised clustering. By performing discriminative embedding using sparse coding-based graphs, an algorithm for measuring the glomerular number in kidney MRI images is developed. Finally, approaches to building dictionaries for local sparse coding of image descriptors are presented and applied to object recognition and image retrieval.
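The sparse model referred to throughout the abstract represents each data sample as a sparse linear combination of dictionary atoms; in its common sparse-coding form, shown here only as a reference point for the terminology rather than as the dissertation's specific formulations:

```latex
\mathbf{x} \approx \mathbf{D}\boldsymbol{\alpha}, \qquad
\hat{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\alpha}}
\|\mathbf{x} - \mathbf{D}\boldsymbol{\alpha}\|_2^2
\quad \text{subject to} \quad \|\boldsymbol{\alpha}\|_0 \le s,
```

where $\mathbf{D}$ is the dictionary matrix, $\boldsymbol{\alpha}$ the sparse code, and $s$ bounds the number of active atoms.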
Contributors: Jayaraman Thiagarajan, Jayaraman (Author) / Spanias, Andreas (Thesis advisor) / Frakes, David (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013