Matching Items (139)

Description
A proposed visible-spectrum nanoscale imaging method requires materials with permittivity values much larger than those available in real-world materials in order to shrink the visible wavelength and attain the desired resolution. It has been proposed that the extraordinarily slow propagation experienced by light guided along plasmon-resonant structures is a viable approach to obtaining these short wavelengths. To assess the feasibility of such a system, an effective medium model of a chain of noble-metal plasmonic nanospheres is developed, leading to a straightforward calculation of the waveguiding properties. Evaluation of other models for such structures that have appeared in the literature, including an eigenvalue-problem nearest-neighbor approximation, a multi-neighbor approximation with retardation, and a method-of-moments solution for a finite chain, shows conflicting expectations of such a structure. In particular, recent publications suggest the possibility of regions of invalidity for eigenvalue-problem solutions that are considered far below the onset of guidance, and for solutions that assume the loss is low enough to justify perturbation approximations. Even the published method-of-moments approach suffers from an unjustified assumption in the original interpretation, leading to overly optimistic estimates of the attenuation of the plasmon-guided wave. In this work it is shown that the method-of-moments solution was dominated by the radiation from the source dipole, and not by the waveguiding behavior claimed. If this dipolar radiation is removed, the remaining fields ought to contain the desired guided-wave information. Using a Prony's-method-based algorithm, the dispersion properties of the chain of spheres are assessed at two frequencies and shown to be dramatically different from the optimistic expectations in much of the literature. A reliable alternative to these models is to replace the chain of spheres with an effective medium model, thus mapping the chain problem onto the well-known problem of the dielectric rod. The solution of the Green's function problem for excitation of the symmetric longitudinal mode (TM01) is performed by numerical integration. Using this method, the frequency ranges over which the rod guides, and the associated attenuation, are clearly seen. The effective medium model readily allows for variation of the sphere size and separation, and can be taken to the limit where, instead of a chain of spheres, we have a solid noble-metal rod. This latter case turns out to be optimal for minimizing the attenuation of the guided wave. Future work is proposed to simulate the chain of plasmonic nanospheres and the nanowire using finite-difference time-domain methods, to verify the guided behavior observed in the Green's function method devised in this thesis, and to simulate the proposed nanosensing devices.
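
To illustrate the kind of Prony's-method-based extraction described above, the following Python sketch estimates complex wavenumbers from equally spaced complex field samples along the chain axis. The function name, the sampling assumptions, and the model order p are illustrative choices, not taken from the thesis.

import numpy as np

def prony_wavenumbers(field_samples, d, p):
    """Estimate complex wavenumbers from equally spaced complex field
    samples (spacing d) along the chain, assuming the field is a sum
    of p exponentials A_k * exp(-j * k_k * d * n)."""
    y = np.asarray(field_samples, dtype=complex)
    N = len(y)
    # Linear prediction: y[n] = -(a_1*y[n-1] + ... + a_p*y[n-p]) for n >= p
    A = np.column_stack([y[p - m:N - m] for m in range(1, p + 1)])
    b = -y[p:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Roots of the prediction polynomial give the pole locations z_k
    z = np.roots(np.concatenate(([1.0], a)))
    k = 1j * np.log(z) / d          # z_k = exp(-j * k_k * d)
    beta = k.real                   # phase constant along the chain
    alpha = -k.imag                 # attenuation constant
    return beta, alpha

The sign convention assumes a field of the form exp(-j*k*d*n) with k = beta - j*alpha, so positive alpha corresponds to decay along the chain; with noisy data one would typically over-estimate p and keep only the physically meaningful poles.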
Contributors: Hale, Paul (Author) / Diaz, Rodolfo E (Thesis advisor) / Goodnick, Stephen (Committee member) / Aberle, James T., 1961- (Committee member) / Palais, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limited size of tweets, it is hard to extract ranking measures from a tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on the tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches that I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
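
As a rough sketch of how a reputation score can be propagated over an agreement graph, the following Python fragment iterates a damped propagation of an initial per-tweet score over row-normalized content-similarity weights. The update rule, parameter names, and damping value are illustrative assumptions, not the exact RAProp formulation.

import numpy as np

def propagate_reputation(r0, W, damping=0.85, iters=50):
    """Spread an initial per-tweet reputation score over an agreement
    graph: tweets that agree with (are similar to) highly reputed
    tweets inherit part of that reputation."""
    r0 = np.asarray(r0, dtype=float)
    W = np.asarray(W, dtype=float)
    # Row-normalize the agreement weights so each tweet distributes
    # its reputation proportionally among its neighbors.
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    r = r0.copy()
    for _ in range(iters):
        r = (1 - damping) * r0 + damping * (P.T @ r)
    return r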
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Loading a cavity-backed slot (CBS) antenna with ferrite material and applying a biasing static magnetic field can be used to control its resonant frequency. Such a mechanism results in a frequency-reconfigurable antenna. However, placing a lossy ferrite material inside the cavity can reduce the gain or negatively impact the impedance bandwidth. This thesis develops guidelines, based on a non-uniform applied magnetic field and a non-uniform magnetic field internal to the ferrite specimen, for the design of ferrite-loaded CBS antennas that enhance their gain and tunable bandwidth by shaping the ferrite specimen and judiciously locating it within the cavity. To achieve these objectives, it is necessary to examine the influence of the shape and relative location of the ferrite material, as well as the proximity of the ferrite specimen to the probe, on the DC magnetic field and RF electric field distributions inside the cavity. The geometry of the probe and its impact on the figures of merit of the antenna are of interest as well. Two common cavity-backed slot antennas (rectangular and circular cross-section) were designed, and the corresponding simulations and measurements were performed and compared. The cavities were mounted on 30 cm × 30 cm perfect electric conductor (PEC) ground planes and partially loaded with ferrite material. The ferrites were biased with an external magnetic field produced by either an electromagnet or permanent magnets. Simulations were performed using FEM-based commercial software, Ansys Maxwell 3D and HFSS. Maxwell 3D is utilized to model the non-uniform DC applied magnetic field and the non-uniform magnetic field internal to the ferrite specimen; HFSS, however, is used to simulate and obtain the RF characteristics of the antenna. To validate the simulations, they were compared with measurements performed in ASU's EM Anechoic Chamber. After many examinations using simulations and measurements, optimal design guidelines with respect to gain, return loss, and tunable impedance bandwidth were obtained and recommended for ferrite-loaded CBS antennas.
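
For orientation only, the Python sketch below uses the textbook Kittel relation for a small, uniformly magnetized ferrite sphere (resonance of roughly 2.8 MHz per oersted of applied bias field, with demagnetizing contributions canceling for a sphere) to show why a biasing magnet tunes the ferrite's response. The thesis specifically treats the non-uniform applied and internal fields that this idealization ignores, so the numbers are illustrative rather than design values.

# Gyromagnetic ratio for electron spin, roughly 2.8 MHz per oersted
GAMMA_MHZ_PER_OE = 2.8

def fmr_sphere_mhz(h_bias_oe):
    """Kittel resonance of a saturated ferrite sphere under a uniform
    applied bias field (demagnetizing factors cancel for a sphere)."""
    return GAMMA_MHZ_PER_OE * h_bias_oe

for h in (500, 1000, 1500, 2000):   # applied bias field in oersted
    print(f"H0 = {h:4d} Oe  ->  f_res ~ {fmr_sphere_mhz(h):.0f} MHz")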
Contributors: Askarian Amiri, Mikal (Author) / Balanis, Constantine A. (Thesis advisor) / Aberle, James T. (Committee member) / Pan, George (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The objective of this thesis project is to build a prototype that uses Linear Temporal Logic (LTL) specifications to generate a 2D motion plan commanding an iRobot to fulfill those specifications. This thesis project was created for the Cyber-Physical Systems Lab at Arizona State University. The end product of this thesis is a software solution that can be used in academia and industry for research on cyber-physical-systems-related applications. The major features of the project are: a modular system for motion planning, use of the Robot Operating System (ROS), triangulation for environment decomposition, and use of the StarGazer sensor for localization. The project is built on ROS, open-source software that provides an environment in which it is easy to integrate different modules, whether software or hardware, on a Linux-based platform. The use of ROS means the project or its modules can be adapted quickly for different applications as the need arises. The final software package, created and tested, takes as its input a data file containing the LTL specifications, a list of the symbols used in the LTL, and the environment polygon data, which holds real-world coordinates for all polygons along with information on the neighbors and parents of each polygon. The software package successfully ran experiments in coverage, reachability with avoidance, and sequencing.
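
A minimal Python sketch of how the environment-polygon portion of such an input file might be represented in memory and turned into an adjacency graph for planning is given below. The field names and layout are hypothetical, since the actual data format is defined by the thesis software.

from dataclasses import dataclass, field

@dataclass
class Cell:
    """One polygon of the triangulated environment decomposition."""
    cell_id: int
    vertices: list            # [(x, y), ...] in real-world coordinates
    neighbors: list = field(default_factory=list)   # ids of adjacent cells
    parent: int = None        # id of the enclosing region, if any

def build_adjacency(cells):
    """Adjacency graph over decomposition cells; the discrete motion
    plan derived from the LTL specification is a path in this graph."""
    graph = {c.cell_id: set(c.neighbors) for c in cells}
    # Make adjacency symmetric in case the data file lists it one-way.
    for cid, nbrs in list(graph.items()):
        for n in nbrs:
            graph.setdefault(n, set()).add(cid)
    return graph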
Contributors: Pandya, Parth (Author) / Fainekos, Georgios (Thesis advisor) / Dasgupta, Partha (Committee member) / Lee, Yann-Hang (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Linear Temporal Logic (LTL) is gaining increasing popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control-synthesis algorithms. This formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to quickly and efficiently describe the motion plan, and it enables a variety of complex LTL specifications to be described succinctly and intuitively by the user without requiring knowledge or understanding of LTL. Thus, it opens avenues for use by personnel in military, warehouse-management, and search-and-rescue missions. This thesis describes the construction of LTL specifications for various robot-navigation scenarios using the visual interface developed, and it leverages existing LTL-based motion planners to have a robot carry out the task plan.
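
As an illustration of the waypoint-to-LTL translation idea, the following Python sketch builds a standard sequencing-with-avoidance formula from an ordered list of waypoint labels. It shows only this basic pattern, not the richer specifications the thesis interface supports, and the function and label names are illustrative.

def waypoints_to_ltl(waypoints, avoid=()):
    """Translate an ordered list of waypoint labels into a sequencing
    LTL formula, optionally conjoined with global avoidance clauses:
    F(p1 & F(p2 & ... F(pn)))  &  G(!o1) & G(!o2) ...
    """
    seq = waypoints[-1]
    for p in reversed(waypoints[:-1]):
        seq = f"{p} & F({seq})"
    formula = f"F({seq})"
    for obstacle in avoid:
        formula += f" & G(!{obstacle})"
    return formula

# Example: waypoints_to_ltl(["p1", "p2", "p3"], avoid=["unsafe"])
# returns "F(p1 & F(p2 & F(p3))) & G(!unsafe)"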
Contributors: Srinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This thesis introduces the Model-Based Development of Multi-iRobot Toolbox (MBDMIRT), a Simulink-based toolbox designed to provide the means to acquire and practice the Model-Based Development (MBD) skills necessary to design real-time embedded systems. The toolbox was developed in the Cyber-Physical System Laboratory at Arizona State University. The MBDMIRT toolbox runs under MATLAB/Simulink to simulate the movements of multiple iRobots and to control, after verification by simulation, multiple physical iRobots accordingly. It adopts Simulink/Stateflow, which exemplifies an approach to MBD, to program the behaviors of the iRobots. The MBDMIRT toolbox reuses and augments the open-source MATLAB-Based Simulator for the iRobot Create from Cornell University to run the simulation. For the mechanism of iRobot control, the MBDMIRT toolbox applies the MATLAB Toolbox for the iRobot Create (MTIC) from the United States Naval Academy to command the physical iRobots. The MBDMIRT toolbox supports a timer in both the simulation and the control, based on the local clock of the PC running the toolbox. In addition to the built-in sensors of an iRobot, the toolbox can simulate four user-added sensors: an overhead localization system (OLS), sonar sensors, a camera, and Light Detection and Ranging (LIDAR). While controlling a physical iRobot, the toolbox supports the StarGazer OLS manufactured by HAGISONIC, Inc.
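
The toolbox itself is implemented in MATLAB/Simulink; purely as an illustration of the kind of simulation loop such a tool drives, the Python sketch below steps a differential-drive (unicycle-model) pose forward using a local-clock timer, in the spirit of the PC-clock-based timer described above. The kinematic model and all parameter values are assumptions, not taken from MBDMIRT.

import math
import time

def step_pose(x, y, theta, v, omega, dt):
    """One simulation step of a planar robot pose under the standard
    unicycle model: forward speed v, turn rate omega, time step dt."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Timer driven by the local PC clock.
x = y = theta = 0.0
last = time.monotonic()
for _ in range(5):
    time.sleep(0.1)
    now = time.monotonic()
    x, y, theta = step_pose(x, y, theta, v=0.2, omega=0.5, dt=now - last)
    last = now
print(x, y, theta)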
Contributors: Su, Shih-Kai (Author) / Fainekos, Georgios E (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Artemiadis, Panagiotis K (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
There has been a great deal of research in the field of artificial intelligence about thinking machines. Alan Turing proposed a test to observe a machine's intelligent behaviour with respect to natural-language conversation. The Winograd Schema Challenge has been suggested as an alternative to the Turing test. It requires inferencing capabilities, reasoning abilities, and background knowledge to get the answer right. It involves a coreference-resolution task in which a machine is given a sentence describing a situation that involves two entities, a pronoun, and some additional information about the situation, and the machine has to come up with the right resolution of the pronoun to one of the entities. The complexity of the task is increased by the fact that Winograd sentences are not constrained to one domain or a specific sentence structure, and they also contain many human proper names. This makes it difficult to associate the entities with one particular word in the sentence in order to derive the answer. I have developed a pronoun-resolver system for confined-domain Winograd sentences. I have developed a classifier, or filter, which takes input sentences and decides to accept or reject them based on particular criteria. Once a sentence is accepted, I run parsers on it to obtain a detailed analysis. Furthermore, I have developed four answering modules that use world knowledge and inferencing mechanisms to try to resolve the pronoun. The four techniques I use are: the ConceptNet knowledge base, search-engine pattern counts, narrative event chains, and sentiment analysis. I have developed a particular aggregation mechanism for the answers from these modules to arrive at a final answer. I have used a caching technique for the association relations obtained by the different modules in order to boost performance. I run my system on the standard ‘nyu dataset’ of Winograd sentences and questions. This dataset is restricted, by my classifier, to 90 sentences, and I evaluate my system on this 90-sentence dataset. When I compare my results against the state-of-the-art system on the same dataset, I get nearly a 4.5% improvement in the restricted domain.
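
One simple form that an aggregation mechanism over the four answering modules could take is weighted voting over each module's preferred candidate, sketched below in Python. The module names, weights, and example votes are illustrative; the thesis defines its own aggregation scheme.

def aggregate(module_votes, weights=None):
    """Combine per-module pronoun-resolution votes into one answer.
    module_votes maps a module name to (candidate, confidence)."""
    weights = weights or {m: 1.0 for m in module_votes}
    scores = {}
    for module, (candidate, confidence) in module_votes.items():
        scores[candidate] = scores.get(candidate, 0.0) + weights[module] * confidence
    return max(scores, key=scores.get) if scores else None

# Votes for the classic "trophy/suitcase" Winograd sentence (illustrative).
votes = {
    "conceptnet": ("trophy", 0.7),
    "search_counts": ("suitcase", 0.4),
    "event_chains": ("trophy", 0.6),
    "sentiment": ("trophy", 0.2),
}
print(aggregate(votes))   # -> "trophy"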
Contributors: Budukh, Tejas Ulhas (Author) / Baral, Chitta (Thesis advisor) / VanLehn, Kurt (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
As the size and scope of valuable datasets have exploded across many industries and fields of research in recent years, an increasingly diverse audience has sought out effective tools for their large-scale data-analytics needs. Over this period, machine learning researchers have also been very prolific in designing improved algorithms capable of finding the hidden structure within these datasets. As consumers of popular Big Data frameworks have sought to apply and benefit from these improved learning algorithms, the problems encountered with the frameworks have motivated a new generation of Big Data tools that address the shortcomings of the previous generation. One important example of this is the newer tools' improved performance on the large class of machine learning algorithms that are highly iterative in nature. In this thesis project, I set out to implement a low-rank matrix-completion algorithm (as an example of a highly iterative algorithm) within a popular Big Data framework, and to evaluate its performance processing the Netflix Prize dataset. I begin by describing several approaches that I attempted but that did not perform adequately. These include an implementation of the Singular Value Thresholding (SVT) algorithm within the Apache Mahout framework, which runs on top of the Apache Hadoop MapReduce engine. I then describe an approach that uses the Divide-Factor-Combine (DFC) algorithmic framework to parallelize the state-of-the-art low-rank completion algorithm Orthogonal Rank-One Matrix Pursuit (OR1MP) within the Apache Spark engine. I describe the results of a series of tests running this implementation with the Netflix dataset on clusters of various sizes, with various degrees of parallelism. For these experiments, I utilized the Amazon Elastic Compute Cloud (EC2) web service. In the final analysis, I conclude that the Spark DFC + OR1MP implementation does indeed produce competitive results, in both accuracy and performance. In particular, the Spark implementation performs nearly as well as the MATLAB implementation of OR1MP without any parallelism, and improves performance to a significant degree as the parallelism increases. In addition, the experience demonstrates how Spark's flexible programming model makes it straightforward to implement this parallel and iterative machine learning algorithm.
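
For reference, a dense, single-machine NumPy sketch of the OR1MP greedy iteration is given below: at each step it takes the top rank-one component of the observed residual and refits the weights of all components by least squares over the observed entries. This shows only the small-scale shape of the algorithm; the thesis implementation distributes the work with DFC on Spark, and the function signature here is illustrative.

import numpy as np

def or1mp(M_obs, mask, rank):
    """Dense sketch of Orthogonal Rank-One Matrix Pursuit: greedily add
    the top rank-one component of the observed residual, then refit the
    weights of all components by least squares on the observed entries."""
    mask = np.asarray(mask, dtype=float)
    M_obs = np.asarray(M_obs, dtype=float) * mask
    X = np.zeros_like(M_obs)
    bases = []
    for _ in range(rank):
        R = (M_obs - X) * mask                    # residual on observed entries
        U, _, Vt = np.linalg.svd(R, full_matrices=False)
        bases.append(np.outer(U[:, 0], Vt[0]))    # new rank-one basis matrix
        # Least-squares weights over observed entries for all bases so far
        A = np.column_stack([(B * mask).ravel() for B in bases])
        theta, *_ = np.linalg.lstsq(A, M_obs.ravel(), rcond=None)
        X = sum(t * B for t, B in zip(theta, bases))
    return X                                      # low-rank completed estimate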
Contributors: Krouse, Brian (Author) / Ye, Jieping (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Efficiency of components is an area of ever-increasing importance for portable applications, where a finite battery means finite operating time. Higher-efficiency devices need to be designed that do not compromise the performance the consumer has come to expect. Class D amplifiers deliver on the goal of increased efficiency, but at the cost of distortion. Class AB amplifiers have low efficiency but high linearity. By modulating the supply voltage of a Class AB amplifier to make a Class H amplifier, the efficiency can be increased while still maintaining the Class AB level of linearity. A 92 dB Power Supply Rejection Ratio (PSRR) Class AB amplifier and a Class H amplifier were designed in a 0.24 µm process for portable audio applications. Using a multiphase buck converter increased the efficiency of the Class H amplifier while still maintaining a response time fast enough to track audio frequencies. The Class H amplifier's efficiency exceeded that of the Class AB amplifier by 5-7% over 5-30 mW of output power without affecting the total harmonic distortion (THD) at the design specifications. The Class H amplifier met all design specifications and showed performance comparable to the designed Class AB amplifier across 1 kHz-20 kHz and 0.01-30 mW. The Class H design was able to output 30 mW into 16 Ω without any increase in THD. This design shows that Class H amplifiers merit more research into their potential for increasing the efficiency of audio amplifiers, and that even simple designs can give significant increases in efficiency without compromising linearity.
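
The efficiency advantage of supply modulation can be seen from the ideal class-B/AB relation eta = (pi/4) * (Vpeak / Vsupply): with a fixed rail, efficiency falls off at low output levels, whereas a class-H rail that tracks the signal envelope keeps the ratio near its maximum. The short Python sketch below works through this with illustrative rail and headroom values that are not taken from the thesis design, and it ignores quiescent current and converter losses.

import math

def class_b_efficiency(v_peak, v_supply):
    """Ideal class-B (well-biased class-AB) efficiency for a sine output:
    eta = (pi/4) * (Vpeak / Vsupply). Quiescent current is ignored."""
    return (math.pi / 4) * (v_peak / v_supply)

V_FIXED = 3.0        # fixed class-AB rail (illustrative value)
HEADROOM = 0.2       # class-H rail tracks the envelope plus this headroom

for v_peak in (0.3, 1.0, 2.0):
    ab = class_b_efficiency(v_peak, V_FIXED)
    h = class_b_efficiency(v_peak, v_peak + HEADROOM)
    print(f"Vpk = {v_peak:.1f} V   class AB ~{ab:.0%}   class H ~{h:.0%}")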
Contributors: Peterson, Cory (Author) / Bakkaloglu, Bertan (Thesis advisor) / Barnaby, Hugh (Committee member) / Kiaei, Sayfe (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Twitter is a micro-blogging platform where users can be social, informational, or both. In certain cases, users generate tweets that have no "hashtags" or "@mentions"; we call these orphaned tweets. A user will be interested in finding more "context" for an orphaned tweet, presumably to engage with his/her friend on that topic. Finding context for an orphaned tweet manually is challenging because of the large social graph of a user, the enormous volume of tweets generated per second, topic diversity, and the limited information in a 140-character tweet. To help the user get the context of an orphaned tweet, this thesis aims at building a hashtag-recommendation system called TweetSense, which suggests hashtags as context or metadata for orphaned tweets. This, in turn, would increase users' social engagement and help Twitter maintain its monthly active online users in its social network. In contrast to other existing systems, this hashtag-recommendation system recommends personalized hashtags by exploiting the social signals of users on Twitter. The novelty of this system is that it emphasizes selecting a suitable candidate set of hashtags from the related tweets in the user's social graph (timeline). The system then ranks them based on a combination of feature scores computed from tweet-related and user-related features. It is evaluated based on its ability to predict suitable hashtags for a random sample of tweets whose existing hashtags are deliberately removed for evaluation. I present a detailed internal empirical evaluation of TweetSense, as well as an external evaluation in comparison with the current state-of-the-art method.
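
As a rough sketch of the candidate-selection-and-ranking shape described above, the following Python fragment gathers candidate hashtags from tweets in the user's timeline and scores each by frequency plus word overlap with the orphaned tweet. The feature set and data layout are illustrative assumptions, far simpler than the tweet- and user-level features TweetSense actually combines.

from collections import defaultdict

def recommend_hashtags(orphan_tokens, timeline_tweets, top_k=5):
    """Candidate hashtags come from the tweets in the user's social
    graph (timeline); each is scored by how often it appears and how
    much its host tweets overlap with the orphaned tweet's words."""
    orphan = set(orphan_tokens)
    scores = defaultdict(float)
    for tweet in timeline_tweets:          # each: {"tokens": set, "hashtags": set}
        overlap = len(orphan & tweet["tokens"]) / (len(orphan) or 1)
        for tag in tweet["hashtags"]:
            scores[tag] += 1.0 + overlap   # frequency term + similarity term
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]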
Contributors: Vijayakumar, Manikandan (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2014