Description
Laboratory automation systems have seen many technological advances in recent years. As a result, the software written for them is becoming increasingly sophisticated. Existing software architectures and standards target a wider domain of software development and must be customized before they can be used to develop software for laboratory automation systems. This thesis proposes an architecture that is based on existing software architectural paradigms and is specifically tailored to developing software for a laboratory automation system. The architecture is based on fairly autonomous software components that can be distributed across multiple computers. The components in the architecture communicate asynchronously by passing messages to one another. The architecture can be used to develop software that is distributed, responsive, and thread-safe. The thesis also proposes a framework, developed to implement the ideas of the architecture, that is used to build software that is scalable, distributed, responsive, and thread-safe. The framework currently includes components to control commonly used laboratory automation devices, such as mechanical stages and cameras, and to perform common laboratory automation functions such as imaging.
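The message-passing component model described above can be sketched in a few lines of Python; the `Component` class, its method names, and the stage example below are illustrative assumptions, not the thesis's actual framework API. Each component owns a thread-safe mailbox and a worker thread, so callers never block on the component's work:

```python
import queue
import threading

class Component:
    """A fairly autonomous component that communicates only via messages."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()          # thread-safe mailbox
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message, reply_to=None):
        """Asynchronous: enqueue a message and return immediately."""
        self.inbox.put((message, reply_to))

    def _run(self):
        while True:
            message, reply_to = self.inbox.get()
            if message == "stop":
                break
            result = self.handle(message)
            if reply_to is not None:
                reply_to.put(result)        # reply is itself a message

    def handle(self, message):
        # A real device component (stage, camera, ...) would act here.
        return f"{self.name} handled {message}"

# Usage: a hypothetical stage component handling a move command.
stage = Component("stage")
replies = queue.Queue()
stage.send("move_to(10, 20)", reply_to=replies)
print(replies.get(timeout=2))   # stage handled move_to(10, 20)
stage.send("stop")
```

Because all cross-component state flows through queues, components stay thread-safe and can later be placed behind sockets to distribute them across machines.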
ContributorsKuppuswamy, Venkataramanan (Author) / Meldrum, Deirdre (Thesis advisor) / Collofello, James (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Johnson, Roger (Committee member) / Arizona State University (Publisher)
Created2012
Description
The wood-framing trade has not been sufficiently investigated to understand work task sequencing and coordination among crew members. A new mental framework for a performing crew was developed and tested through four case studies. This framework ensured team performance similar to that provided by task micro-scheduling in planning software. It also allowed evaluating the effect of individual coordination within the crew on the crew's productivity. Using design information, a list of micro-activities/tasks and their predecessors was automatically generated for each piece of lumber in the four wood frames. The task precedence was generated by applying elementary geometrical and technological reasoning to each frame. Then, the duration of each task was determined based on observations from videotaped activities. Primavera's (P6) resource-leveling rules were used to calculate the sequencing of tasks and the minimum duration of the whole activity for various crew sizes. The results showed quick convergence towards the minimum production time and made it possible to use information from Building Information Models (BIM) to automatically establish the optimal crew sizes for frames. The Late Start (LS) leveling priority rule gave the shortest duration in every case. However, the logic of the LS rule is too complex to be conveyed to the framing crew. Therefore, a new mental framework of a well-performing framer was developed and tested to ensure high coordination. This mental framework, based on five simple rules, can be easily taught to the crew and ensures crew productivity congruent with that provided by the LS logic. The case studies indicate that once the worst framer in the crew exceeds an 11% deviation from the five rules, every additional percent of deviation reduces the productivity of the whole crew by about 4%.
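The deviation-productivity relationship reported in the last sentence can be expressed as a simple function. The linear form and the function name below are illustrative assumptions based only on the figures quoted in the abstract (an 11% threshold and roughly 4% productivity loss per additional percent of deviation):

```python
def crew_productivity(worst_deviation_pct,
                      threshold_pct=11.0, loss_per_pct=4.0):
    """Relative crew productivity (1.0 = full output) as a function of the
    worst framer's percent deviation from the five coordination rules.
    Below the threshold there is no penalty; above it, productivity drops
    linearly and is clamped at zero."""
    excess = max(0.0, worst_deviation_pct - threshold_pct)
    return max(0.0, 1.0 - (loss_per_pct / 100.0) * excess)
```

Under these assumptions, `crew_productivity(16.0)` yields 0.80: a worst framer deviating five points past the threshold costs the whole crew about 20% of its output.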
ContributorsMaghiar, Marcel M (Author) / Wiezel, Avi (Thesis advisor) / Mitropoulos, Panagiotis (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created2011
Description
Highly automated vehicles require drivers to remain aware enough to take over during critical events. Driver distraction is a key factor that prevents drivers from reacting adequately, so there is a need for an alert that helps drivers regain situational awareness and act quickly and successfully should a critical event arise. This study examines two aspects of alerts that could help facilitate driver takeover: mode (auditory and tactile) and direction (towards and away). Auditory alerts appear to be somewhat more effective than tactile alerts, though both modes produce significantly faster reaction times than no alert. Alerts moving towards the driver also appear to be more effective than alerts moving away from the driver. Future research should examine how multimodal alerts differ from single-mode alerts, and whether higher-fidelity alerts influence takeover times.
ContributorsBrogdon, Michael A (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created2018
Description
Technical innovation has always played a part in live theatre, from mechanical pieces like lifts and trapdoors to the more recent integration of digital media. The advances of the art form encourage the development of technology, and at the same time, technological development enables the advancement of theatrical expression. As mechanics, lighting, sound, and visual media have made their way into the spotlight, advances in theatrical robotics continue to push for their inclusion in the director's toolbox. However, much of the available technology is gated behind high prices and unintuitive interfaces, designed for large troupes and specialized engineers, making it difficult to access for small schools and students new to the medium. As a group of engineering students with a vested interest in the development of the arts, this thesis team designed a system that enables troupes from any background to participate in the advent of affordable automation. The intended result of this thesis project was to create a robotic platform that interfaces with custom software, receiving commands and transmitting position data, and to design that software so that a user can define intuitive cues for their shows. In addition, a new pathfinding algorithm was developed to support free-roaming automation in a 2D space. The final product consisted of a relatively inexpensive (< $2,000) free-roaming platform, made entirely with commercial off-the-shelf (COTS) and standard materials, and a corresponding control system with cue design, wireless path following, and position tracking. The platform was built to support 1,000 lbs and includes integrated emergency stopping. The software allows for custom cue design, speed variation, and dynamic path following. Both the blueprints and the source code for the platform and control system have been released to open-source repositories to encourage further development in the area of affordable automation.
The platform itself was donated to the ASU School of Theater.
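The abstract does not detail the new pathfinding algorithm, but free-roaming 2D automation of this kind is commonly planned on an occupancy grid. The breadth-first sketch below is a generic illustration of that idea, not the thesis's algorithm:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free, 1 = blocked).
    Returns a shortest list of (row, col) cells from start to goal, or None
    if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                      # visited set + back-pointers
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []                         # rebuild path by walking back
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```

On an open grid this returns a shortest cell-by-cell path, which a platform controller could then smooth into drive commands.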
ContributorsHollenbeck, Matthew D. (Co-author) / Wiebel, Griffin (Co-author) / Winnemann, Christopher (Thesis director) / Christensen, Stephen (Committee member) / Computer Science and Engineering Program (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The software element of home and small-business networking solutions has failed to keep pace with the annual development of newer and faster hardware. The software running on these devices is an afterthought, often equipped with minimal features, an obtuse user interface, or both. At the same time, this past year has seen the rise of smart home assistants that represent the next step in human-computer interaction with their advanced use of natural language processing. This project seeks to quell the issues with the former by exploring a possible fusion of a powerful, feature-rich software-defined networking stack and the natural language processing tools of smart home assistants. To accomplish these ends, a piece of software was developed to leverage the natural language processing capabilities of one such smart home assistant, the Amazon Echo. On one end, this software interacts with Amazon Web Services to retrieve information about a user's speech patterns and key information contained in their speech. On the other end, the software joins that information with its previous session state to intelligently translate speech into a series of commands for the separate components of a networking stack. The software developed for this project empowers a user to quickly make changes to several facets of their networking gear, or acquire information about it, with just their language: no terminals, Java applets, or web configuration interfaces needed, thus circumventing clunky UIs and jumping from shell to shell. It is the author's hope that showing how networking equipment can be configured in this innovative way will draw more attention to the current failings of networking equipment and inspire a new series of intuitive user interfaces.
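The speech-to-command translation step might look like the dispatch sketch below. The intent names, slot keys, and specific shell commands are hypothetical stand-ins; the abstract does not name the project's actual command set:

```python
def intent_to_command(intent, slots):
    """Map a parsed voice intent (name plus slot values, as returned by a
    speech service) to a networking command string. Intent names and slot
    keys here are hypothetical examples."""
    handlers = {
        # Block a device by MAC address at the firewall
        "BlockDevice": lambda s: (
            f"iptables -A FORWARD -m mac --mac-source {s['mac']} -j DROP"),
        # List clients known to the router
        "ShowClients": lambda s: "arp -a",
        # Throttle an interface with a token-bucket filter
        "SetBandwidth": lambda s: (
            f"tc qdisc add dev {s['iface']} root tbf "
            f"rate {s['rate']} burst 32kbit latency 400ms"),
    }
    handler = handlers.get(intent)
    return handler(slots) if handler else None

# Usage: "Alexa, block the device aa:bb:cc:dd:ee:ff"
print(intent_to_command("BlockDevice", {"mac": "aa:bb:cc:dd:ee:ff"}))
```

A dispatch table like this keeps session state and natural-language parsing (handled upstream by the assistant) cleanly separated from the networking stack's command surface.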
ContributorsHermens, Ryan Joseph (Author) / Meuth, Ryan (Thesis director) / Burger, Kevin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-12
Description
Tolerance specification for manufacturing components from 3D models is a tedious task and often requires the expertise of "detailers". The work presented here is part of a larger ongoing project aimed at automating tolerance specification to aid less experienced designers by producing consistent geometric dimensioning and tolerancing (GD&T). Tolerance specification can be separated into two major tasks: tolerance schema generation and tolerance value specification. This thesis focuses on the latter part of automated tolerance specification, namely tolerance value allocation and analysis. The tolerance schema (sans values) required prior to these tasks has already been generated by the auto-tolerancing software. This information is communicated through a constraint tolerance feature graph file developed previously at the Design Automation Lab (DAL) and is consistent with the ASME Y14.5 standard.

The objective of this research is to allocate tolerance values that ensure the assemblability conditions are satisfied. Assemblability refers to "the ability to assemble/fit a set of parts in a specified configuration given a nominal geometry and its corresponding tolerances". Assemblability is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops, so the tolerance loops are extracted first. Once the tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that this initial allocation will satisfy the assemblability requirements, and overlapping loops must be satisfied simultaneously and progressively. Hence, tolerances are re-allocated iteratively, with the help of the tolerance analysis module.

The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among its contributors in proportion to their associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop. Re-allocation of the tolerance values is done using the sensitivities and the weights associated with the contributors in the stack.
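The initial allocation step, distributing a loop's tolerance budget among contributors in proportion to their nominal dimensions, reduces to a one-line proration. The function below is a minimal sketch of that rule; the names are assumptions, not the module's actual API:

```python
def allocate_tolerances(clearance, nominal_dims):
    """Distribute a tolerance loop's budget (the available clearance) among
    its contributors in proportion to their nominal dimensions. Returns one
    tolerance value per contributor; the values sum to the clearance."""
    total = sum(nominal_dims)
    return [clearance * dim / total for dim in nominal_dims]
```

For example, a 0.6 mm budget over contributors with nominal dimensions 10, 20, and 30 mm prorates to 0.1, 0.2, and 0.3 mm; the iterative re-allocation described above then adjusts these starting values.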

Several test cases were run with this software, and the desired user input acceptance rates were achieved. Three test cases are presented and the output of each module is discussed.
ContributorsBiswas, Deepanjan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created2016
Description
On-line dynamic security assessment (DSA) analysis has been developed and applied in several power dispatching control centers. Existing DSA applications are limited by their reliance on present system operating conditions and by computational speed. To overcome these obstacles, this research developed a novel two-stage DSA system that provides periodic security prediction in real time. The major contribution of this research is an open-source on-line DSA system that incorporates Phasor Measurement Unit (PMU) data and forecast load. Pre-fault prediction can provide a more accurate assessment of the system and mitigate the disadvantage of the low computational speed of time-domain simulation.

This thesis describes the development of the novel two-stage on-line DSA scheme using phasor measurement and load forecasting data. The computational scheme of the new system determines steady-state stability and identifies endangerments in a small time frame near real time. The new on-line DSA system periodically examines system status and predicts system endangerments in the near future, every 30 minutes. Real-time operating conditions are determined by state estimation using phasor measurement data. The assessment of transient stability is carried out by running a time-domain simulation using a forecast operating point as the initial condition. The forecast operating point is calculated by DC optimal power flow based on the forecast load.
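The DC power-flow calculation underlying the forecast operating point can be sketched for a toy network. The function below solves the standard linearized model (bus angles from theta = B^-1 * P, with the slack bus held at angle zero); it is a generic illustration, not the thesis's implementation:

```python
def dc_power_flow(injections, lines):
    """Solve a small DC power flow. `injections` maps non-slack bus -> net
    injection in p.u. (loads negative); `lines` is a list of
    (bus_i, bus_j, reactance). Bus 0 is the slack bus (angle 0).
    Returns bus -> voltage angle in radians."""
    buses = sorted(injections)                    # non-slack buses
    n = len(buses)
    idx = {b: k for k, b in enumerate(buses)}
    # Build the reduced susceptance matrix B (slack row/column removed).
    B = [[0.0] * n for _ in range(n)]
    for i, j, x in lines:
        b = 1.0 / x
        for a in (i, j):
            if a in idx:
                B[idx[a]][idx[a]] += b
        if i in idx and j in idx:
            B[idx[i]][idx[j]] -= b
            B[idx[j]][idx[i]] -= b
    # Gaussian elimination; fine without pivoting for these small
    # symmetric positive-definite systems.
    P = [injections[b] for b in buses]
    for c in range(n):
        for r in range(c + 1, n):
            f = B[r][c] / B[c][c]
            for k in range(c, n):
                B[r][k] -= f * B[c][k]
            P[r] -= f * P[c]
    theta = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(B[r][k] * theta[k] for k in range(r + 1, n))
        theta[r] = (P[r] - s) / B[r][r]
    return {b: theta[idx[b]] for b in buses}
```

In a three-bus example with loads of 0.5 and 0.3 p.u. at buses 1 and 2, the slack bus supplies exactly the 0.8 p.u. total, which the test below checks from the computed angles.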
ContributorsWang, Qiushi (Author) / Karady, George G. (Thesis advisor) / Pal, Anamitra (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created2017
Description
The 21st century will be the site of numerous changes in education systems in response to a rapidly evolving technological environment where existing skill sets and career structures may cease to exist or, at the very least, change dramatically. Likewise, the nature of work will also change to become more automated and more technologically intensive across all sectors, from food service to scientific research. Simply having technical expertise or the ability to process and retain facts will in no way guarantee success in higher education or a satisfying career. Instead, the future will value those educated in a way that encourages collaboration with technology, critical thinking, creativity, clear communication skills, and strong lifelong learning strategies. These changes pose a challenge for higher education's promise of employability and success post-graduation, and addressing how to prepare students for a technologically uncertain future is challenging. One possible model for preparing students for the future of work can be found within the Maker Movement. However, it is not fully understood what parts of this movement are most meaningful to implement in education broadly and in higher education in particular. Through the qualitative analysis of nearly 160 interviews with adult makers, young makers, and young makers' parents, this dissertation unpacks how makers are learning, what they are learning, and how these qualities are applicable to education goals and the future of work in the 21st century. This research demonstrates that makers are learning valuable skills to prepare them for the future of work: communication skills, technical skills in fabrication and design, and lifelong learning strategies that will help prepare them for life in an increasingly technologically integrated future. This work discusses which aspects of the Maker Movement are most important for integration into higher education.
ContributorsWigner, Aubrey (Author) / Lande, Micah (Thesis advisor) / Allenby, Braden (Committee member) / Bennett, Ira (Committee member) / Arizona State University (Publisher)
Created2017
Description
The information era has brought about many technological advancements in the past few decades, which have led to an exponential increase in the creation of digital images and videos. Nearly all digital images pass through some image processing algorithm for reasons such as compression, transmission, or storage. This process often involves data loss, leaving a degraded image; hence, to ensure minimal degradation of images, quality assessment has become mandatory. Image Quality Assessment (IQA) has been researched and developed over the last several decades to predict quality scores in a manner that agrees with human judgments of quality. Modern IQA algorithms are quite effective in prediction accuracy, but their development has not focused on computational performance: the existing serial implementation requires a relatively large run-time, on the order of seconds for a single frame. Hardware acceleration using field-programmable gate arrays (FPGAs) provides a reconfigurable computing fabric that can be tailored to a broad range of applications. Usually, programming FPGAs has required expertise in hardware description languages (HDLs) or a high-level synthesis (HLS) tool. OpenCL is an open standard for cross-platform, parallel programming of heterogeneous systems; together with the Altera OpenCL SDK, it enables developers to use an FPGA's potential without extensive hardware knowledge. Hence, this thesis focuses on accelerating the computationally intensive part of the most apparent distortion (MAD) algorithm on an FPGA using OpenCL. The results are compared with a CPU implementation to evaluate performance and efficiency gains.
ContributorsGunavelu Mohan, Aswin (Author) / Sohoni, Sohum (Thesis advisor) / Ren, Fengbo (Thesis advisor) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created2017
Description
Parts are always manufactured with deviations from their nominal geometry, due to reasons such as inherent inaccuracies in the machine tools and environmental conditions. It is a designer's job to devise a proper tolerance scheme that allows a manufacturer reasonable freedom for imperfections without compromising performance. It takes years of experience and strong practical knowledge of the device function, the manufacturing process, and GD&T standards for a designer to create a good tolerance scheme, and there are almost no theoretical resources to help designers in GD&T synthesis. As a result, designers often create inconsistent and incomplete tolerance schemes that lead to high assembly scrap rates. The Auto-Tolerancing project was started in the Design Automation Lab (DAL) to investigate the degree to which tolerance synthesis can be automated. Tolerance synthesis includes tolerance schema generation (sans tolerance values) and tolerance value allocation; this thesis addresses tolerance schema generation. To develop an automated tolerance schema synthesis toolset, the features to be toleranced must be identified, the required tolerance types determined, a scheme for computer representation of the GD&T information developed, the sequence of control identified, and a procedure for creating datum reference frames (DRFs) developed. The first three steps define the architecture of the tolerance schema generation module, while the last two set up a basis for creating a proper tolerance scheme with the help of GD&T good-practice rules obtained from experts. The GD&T scheme recommended by this module is used by the tolerance value allocation/analysis module to complete the process of automated tolerance synthesis. Various test cases were studied to verify the suitability of this module. The results show that the software-generated schemas are adequate to address assemblability issues (first-order tolerancing).
Since this novel technology is at its initial stage of development, further research and case studies will help improve the software to produce more comprehensive tolerance schemas that cover design intent (second-order tolerancing) and cost optimization (third-order tolerancing).
ContributorsHejazi, Sayed Mohammad (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created2016