To facilitate rapid, correct, efficient, and intuitive development of graph-based solutions we propose a new programming language construct: the search statement. Given a supra-root node, a procedure which determines the children of a given parent node, and optional definitions of the fail-fast acceptance or rejection of a solution, the search statement can conduct a search over any graph or network. Structurally, this statement is modelled after the common switch statement and is placed in a largely imperative/procedural context to allow for immediate and intuitive development by most programmers. The Go programming language has been used as a foundation and proof of concept for the search statement. A Go compiler is provided which implements this construct.
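The core idea — a root node, a children procedure, and fail-fast accept/reject predicates — can be sketched as an ordinary function. The following Python sketch mirrors the described semantics only; it is not the proposed Go syntax, and all names here are illustrative:

```python
from collections import deque

def search(root, children, accept, reject=lambda n: False):
    """Breadth-first search over the graph implied by `children`.

    Returns the first node satisfying `accept`, or None. Nodes matching
    `reject` are pruned without expansion (fail-fast rejection).
    """
    frontier = deque([root])
    seen = {root}
    while frontier:
        node = frontier.popleft()
        if reject(node):
            continue  # fail-fast: discard this node and its subtree
        if accept(node):
            return node  # fail-fast: first acceptable solution wins
        for child in children(node):
            if child not in seen:  # avoid revisiting shared nodes (graphs, not just trees)
                seen.add(child)
                frontier.append(child)
    return None

# Toy usage: search the infinite binary tree where node n has children
# 2n and 2n+1, accepting the first power of two greater than 50.
result = search(1,
                children=lambda n: (2 * n, 2 * n + 1),
                accept=lambda n: n > 50 and (n & (n - 1)) == 0)
# result is 64
```

The hypothetical language construct would make the children procedure and the accept/reject clauses syntactic parts of the statement rather than function arguments, but the control flow is the same.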
The tool was developed following the incremental development process in order to quickly create a functional and testable tool. The incremental process also allowed for feedback from radio astronomers to help guide the project's development.
UVLabel provides both a functional product and a modifiable, scalable code base for radio astronomer developers. This gives astronomers studying various kinds of astronomical interferometric data access to labelling capabilities. The tool can then be used to improve their filtering methods, pursue machine learning solutions, and discover new trends. Finally, UVLabel will be open source, putting customization, scalability, and adaptability in the hands of these researchers.
Image captioning has long been considered a difficult problem for computers to solve. It involves not just detecting objects in images but understanding the interactions between those objects so they can be translated into relevant captions. Expertise in computer vision paired with natural language processing is therefore crucial for this purpose. The sequence-to-sequence modelling strategy of deep neural networks is the traditional approach to generating a sequential list of words which are combined to describe the image. But these models suffer from the problem of high variance, fitting the training data well while failing to generalize beyond it.
The main focus of this thesis is to reduce the variance factor, which will help in generating better captions. To achieve this, ensemble learning techniques have been explored, which have a reputation for solving the high-variance problem that occurs in machine learning algorithms. Three different ensemble techniques, namely k-fold ensemble, bootstrap aggregation (bagging) ensemble, and boosting ensemble, have been evaluated in this thesis. For each of these techniques, three output combination approaches have been analyzed. Extensive experiments have been conducted on the Flickr8k dataset, which has a collection of 8,000 images with 5 different captions for every image. The BLEU score performance metric, a standard for evaluating natural language processing (NLP) problems, is used to evaluate the predictions. Based on this metric, the analysis shows that ensemble learning performs significantly better and generates more meaningful captions than any of the individual models used.
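The variance-reduction mechanism behind bootstrap aggregation can be illustrated with a minimal sketch: train each member on a resample of the data drawn with replacement, then combine the members' outputs. This toy example uses constant-output "models" for brevity, not the caption generators evaluated in the thesis; all names are illustrative:

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) items with replacement (one bootstrap resample)."""
    return [rng.choice(data) for _ in data]

def bagging_predict(models, x, combine):
    """Combine the independent members' predictions into one output."""
    return combine([m(x) for m in models])

# Toy labelled data: (example, binary label) pairs.
data = [("a", 1), ("b", 1), ("c", 0), ("d", 1), ("e", 0)]

def train(sample):
    """A stand-in 'model': always predicts the majority label it saw."""
    majority = round(sum(label for _, label in sample) / len(sample))
    return lambda x, m=majority: m

rng = random.Random(0)
models = [train(bootstrap_sample(data, rng)) for _ in range(3)]

# Averaging (or voting over) the members' outputs damps the variance
# any single member picked up from its particular resample.
prediction = bagging_predict(models, "a",
                             combine=lambda ys: round(sum(ys) / len(ys)))
```

In the captioning setting each member would be a full encoder-decoder model and the combination step would operate on generated word sequences or their scores, but the structure (resample, train independently, combine) is the same.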
In this dissertation the emergent designs of three unique focal planes are discussed. These focal planes were each designed for a different astronomical platform: suborbital balloon, suborbital rocket, and ground-based observatory. The balloon-based payload is a hexapod-actuated focal plane that uses tip-tilt motion to increase angular resolution through the removal of jitter, known as the HExapod Resolution-Enhancement SYstem (HERESY). The suborbital rocket imaging payload is a Jet Propulsion Laboratory (JPL) delta-doped charge-coupled device (CCD) packaged to survive the rigors of launch and image far-ultraviolet (FUV) spectra. The ground-based observatory payload is a star-centroid-tracking modification to the balloon version of HERESY for the tip-tilt correction of atmospheric turbulence.
The design, construction, verification, and validation of each focal plane payload are discussed in detail. For HERESY’s balloon implementation, pointing-error data from the Stratospheric Terahertz Observatory (STO) Antarctic balloon mission were used to build an experimental lab test setup demonstrating that the hexapod can eliminate jitter under flight-like conditions. For the suborbital rocket focal plane, a harsh set of unit-level tests ensuring the payload could survive launch and space conditions, as well as the characterization and optimization of the JPL detector, are detailed. Finally, a modification for ground-based observatories is described, in which a fast-read detector is co-mounted to the HERESY focal plane to reduce atmospherically induced tip-tilt error through centroid tracking of bright natural guide stars.
Current IoT integration utilities attempt to simplify this task, but most fail to satisfy one of the requirements many users want in such a system: simplified integration with third-party devices. This project seeks to solve this issue through the creation of an easily extendable, modular integration utility. It is open source and does not require a cloud-based server; users host the server themselves. With a server and data controller implemented in pure Python and a library for embedded ESP8266 microcontroller-powered devices, the solution seeks to satisfy both casual users and those interested in developing their own integrations.
CubeSats can encounter a myriad of difficulties in space, such as cosmic rays, temperature issues, and loss of control. Creating better, more reliable software can mitigate these problems and increase the chance of mission success. This research sets out to answer the question of how to create reliable flight software for CubeSats by providing a concentrated list of the best flight software development practices. The CubeSat used in this research is the Deployable Optical Receiver Aperture (DORA) CubeSat, a 3U CubeSat that seeks to demonstrate optical communication data rates of 1 Gbps over long distances. We present an analysis of many of the flight software development practices currently in use in the industry, drawn from industry leaders such as NASA, and identify three key flight software development areas of focus: memory, concurrency, and error handling. Within each of these areas, best practices were defined for how to approach the area. These practices were also refined using experience from the creation of flight software for the DORA CubeSat, which drove the design and testing of the system. We analyze DORA’s effectiveness in the three areas of focus, and discuss how following the identified best practices helped to create a more reliable flight software system for the DORA CubeSat.
The aim of this project is to understand the basic algorithmic components of the transformer deep learning architecture. At a high level, a transformer is a machine learning model built around a self-attention mechanism, which weighs the significant parts of sequential input data; this is very useful for solving problems in natural language processing and computer vision. Other architectures have been applied to these problems in the past (i.e., convolutional neural networks and recurrent neural networks), but they suffer from the vanishing gradient problem when an input becomes too long (which essentially means the network loses its memory and halts learning) and have slow training times in general. The transformer architecture’s features enable a much better “memory” and faster training, which makes it a better-suited architecture for these problems. Most of this project will be spent producing a survey that captures the current state of research on the transformer, along with any background material needed to understand it. First, I will do a keyword search of the most well-cited and up-to-date peer-reviewed publications on transformers to understand them conceptually. Next, I will investigate any programming frameworks required to implement the architecture. I will use this to implement a simplified version of the architecture, or follow an easy-to-use guide or tutorial for implementing it. Once the programming aspect of the architecture is understood, I will implement a transformer based on the academic paper “Attention is All You Need”. I will then slightly tweak this model, using my understanding of the architecture, to improve performance. Once finished, the details (i.e., successes, failures, process, and inner workings) of the implementation will be evaluated and reported, as well as the fundamental concepts surveyed.
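The self-attention mechanism at the heart of the transformer can be sketched in a few lines: each query is dotted against every key, the scaled scores are normalized with a softmax, and the output is the resulting weighted average of the values. This is a minimal, dependency-free sketch of scaled dot-product attention as described in “Attention is All You Need” (no learned projections, no multiple heads); all variable names are illustrative:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    Each query attends over all keys; its output is the softmax-weighted
    average of the value vectors, scaled by sqrt(key dimension).
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Toy example: the query aligns strongly with the first key, so the
# output is dominated by the first value vector, [1.0, 2.0].
q = [[1.0, 0.0]]
k = [[10.0, 0.0], [0.0, 10.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
out = attention(q, k, v)
```

A full transformer adds learned query/key/value projections, multiple attention heads, positional encodings, and feed-forward layers, but this weighted-average step is the "memory" mechanism the abstract refers to.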
The motivation behind this project is to explore the rapidly growing area of AI algorithms; the transformer in particular was chosen because it is a major milestone for engineering with AI and software. Since their introduction, transformers have provided a very effective way of solving natural language processing tasks, allowing related applications to succeed with high speed while maintaining accuracy. This type of model can also be applied to more cutting-edge applications, such as extracting semantic information from a text description and generating an image to satisfy it.