Matching Items (10)
Description

Many researchers aspire to create robotic systems that assist humans with common office tasks, especially by taking over delivery and messaging duties. For meaningful interactions to take place, a mobile robot must be able to identify the humans it interacts with and communicate successfully with them. It must also be able to navigate the office environment successfully. While mobile robots are well suited to navigating and interacting with elements of a deterministic office environment, interacting with human beings remains a challenge due to the limited cost-efficient compute power available onboard the robot. In this work, I propose the use of remote cloud services to offload intensive interaction tasks. I detail the interactions required in an office environment and discuss the challenges faced when implementing a human-robot interaction platform in a stochastic office environment. I also experiment with cloud services for facial recognition, speech recognition, and environment navigation and discuss my results. As part of my thesis, I have implemented a human-robot interaction system that utilizes cloud APIs on a mobile robot, enabling it to navigate the office environment, identify humans within the environment, and communicate with them.
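As an illustration of the offloading pattern described above, the sketch below sends a camera frame to a remote face-recognition service and reads back the identity. The endpoint URL, request format, and response schema are hypothetical placeholders, not the specific cloud API used in the thesis.

```python
# Hedged sketch: offload face identification from the robot to a cloud service.
# FACE_API_URL and the JSON schema are illustrative assumptions.
import requests

FACE_API_URL = "https://example.com/v1/face/identify"  # hypothetical endpoint

def identify_person(jpeg_bytes: bytes, api_key: str):
    """Send one camera frame to the cloud and return a matched name, or None."""
    resp = requests.post(
        FACE_API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5,  # keep the robot responsive if the network stalls
    )
    resp.raise_for_status()
    matches = resp.json().get("matches", [])
    return matches[0]["name"] if matches else None
```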
Created: 2017-05
Description

The increasing civilian demand for autonomous aerial vehicle platforms in both hobby and professional markets has resulted in an abundance of inexpensive inertial navigation systems and hardware. Many of these systems lack full autonomy, relying on the pilot for guidance with the assistance of inertial sensors. Autonomous systems depend heavily on a global positioning system (GPS) receiver, which can be hindered by weak satellite signals, low update rates, and poor positioning accuracy. For precise navigation of a micro air vehicle in locations where GPS signals are unobtainable, such as indoors or in a dense urban environment, additional sensors must complement the inertial sensors to provide improved navigation state estimates without GPS. A system that allows rapid development of experimental guidance, navigation, and control algorithms on versatile, low-cost development platforms lets improved navigation systems be tested with relative ease and at reduced cost. Incorporating a downward-facing camera into this system may further improve vehicle autonomy in GPS-denied environments.
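As a concrete illustration of complementing inertial sensors without GPS, the sketch below blends an IMU-propagated velocity with a velocity measurement derived from downward-facing-camera optical flow. The blend gain and the flat-floor flow-to-velocity conversion are simplifying assumptions, not the thesis's actual estimator.

```python
# Hedged sketch: one axis of a complementary filter for GPS-denied velocity
# estimation. IMU integration drifts over time; optical flow is noisy but
# drift-free, so each compensates for the other's weakness.

def flow_to_velocity(pixel_flow: float, dt: float,
                     altitude_m: float, focal_px: float) -> float:
    """Convert downward-camera pixel flow to ground speed, assuming a flat
    floor directly below the vehicle (illustrative assumption)."""
    return (pixel_flow / dt) * altitude_m / focal_px

def fuse_velocity(v_prev: float, accel: float, dt: float,
                  v_flow: float, alpha: float = 0.95) -> float:
    """Blend the drifting IMU propagation with the drift-free flow measurement.
    alpha sets how much the filter trusts the IMU between camera updates."""
    v_imu = v_prev + accel * dt                    # high-rate propagation
    return alpha * v_imu + (1.0 - alpha) * v_flow  # low-rate correction
```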
Contributors: Polak, Adam Michael (Author) / Rodriguez, Armando (Thesis director) / Saripalli, Srikanth (Committee member) / Hannan, Mike (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2014-12
Description

Time studies are an effective tool to analyze current production systems and propose improvements. The problem that motivated the project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and utilize Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision data models. In the future, there is an opportunity to continue developing this product and expand the team’s work scope by applying more engineering skills to the collected data to drive factory improvements.
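To make the automation step concrete, the sketch below shows one way per-frame detections could be reduced to time-study measurements: treating the detector's presence/absence output for a tracked component as a signal and timing its edges. The observation format and event logic are illustrative assumptions, not the team's actual pipeline.

```python
# Hedged sketch: derive cycle times from per-frame object-detection output.
from typing import Iterable, List, Tuple

def cycle_times(observations: Iterable[Tuple[float, bool]]) -> List[float]:
    """observations: (unix_time, part_present) samples from the detector.
    Returns durations between a part entering and leaving the camera view."""
    durations, arrival, prev = [], None, False
    for t, present in observations:
        if present and not prev:                 # rising edge: part arrived
            arrival = t
        elif prev and not present and arrival is not None:
            durations.append(t - arrival)        # falling edge: part left
            arrival = None
        prev = present
    return durations

# cycle_times([(0.0, False), (1.0, True), (9.5, True), (10.0, False)]) -> [9.0]
```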

Contributors: de Guzman, Lorenzo (Co-author) / Chmelnik, Nathan (Co-author) / Martz, Emma (Co-author) / Johnson, Katelyn (Co-author) / Ju, Feng (Thesis director) / Courter, Brandon (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / School of Politics and Global Studies (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Time studies are an effective tool to analyze current production systems and propose improvements. The problem that motivated the project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and utilize Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision data models. In the future, there is an opportunity to continue developing this product and expand the team’s work scope by applying more engineering skills to the collected data to drive factory improvements.

Contributors: Johnson, Katelyn Rose (Co-author) / Martz, Emma (Co-author) / Chmelnik, Nathan (Co-author) / de Guzman, Lorenzo (Co-author) / Ju, Feng (Thesis director) / Courter, Brandon (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Time studies are an effective tool to analyze current production systems and propose improvements. The problem that motivated the project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and utilize Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision data models. In the future, there is an opportunity to continue developing this product and expand the team’s work scope by applying more engineering skills to the collected data to drive factory improvements.

Contributors: Chmelnik, Nathan (Co-author) / de Guzman, Lorenzo (Co-author) / Johnson, Katelyn (Co-author) / Martz, Emma (Co-author) / Ju, Feng (Thesis director) / Courter, Brandon (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Time studies are an effective tool to analyze current production systems and propose improvements. The problem that motivated the project was that conducting time studies and observing the progression of components across the factory floor is a manual process. Four Industrial Engineering students worked with a manufacturing company to develop Computer Vision technology that would automate the data collection process for time studies. The team worked in an Agile environment to complete over 120 classification sets, create 8 strategy documents, and utilize Root Cause Analysis techniques to audit and validate the performance of the trained Computer Vision data models. In the future, there is an opportunity to continue developing this product and expand the team’s work scope by applying more engineering skills to the collected data to drive factory improvements.

Contributors: Martz, Emma Marie (Co-author) / de Guzman, Lorenzo (Co-author) / Johnson, Katelyn (Co-author) / Chmelnik, Nathan (Co-author) / Ju, Feng (Thesis director) / Courter, Brandon (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Recently, well-designed and well-trained neural networks have yielded state-of-the-art results across many domains, including data mining, computer vision, and medical image analysis. But progress has been limited for tasks where labels are difficult or impossible to obtain. This reliance on exhaustive labeling is a critical limitation on the rapid deployment of neural networks. Moreover, current research scales poorly to large numbers of unseen concepts and is passively spoon-fed with data and supervision.

To overcome the above data scarcity and generalization issues, in my dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to solve the incomplete-data and missing-label problems. I further introduce a deep multi-domain adaptation network that leverages the power of deep learning by transferring rich knowledge from a large labeled source dataset. I also present a novel time-sequence dynamically hierarchical network that adaptively simplifies itself to cope with scarce data.

To learn a large number of unseen concepts, lifelong machine learning enjoys many advantages, including abstracting knowledge from prior learning and using that experience to help future learning, regardless of how much data is currently available. Incorporating this capability and making it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and significantly reduce data requirements in a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework to train an online model without an abundance of labeled data. This framework automatically expands the network to increase model capability when necessary, then compresses the model to maintain efficiency.
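For intuition, the sketch below shows the general weight-consolidation idea this family of methods builds on (in the style of elastic weight consolidation): penalizing movement of parameters that were important for earlier tasks. It illustrates the principle only and is not the dissertation's deep multi-task formulation.

```python
# Hedged sketch: quadratic consolidation penalty keeping important weights
# near the values they had after previous tasks.
import torch

def consolidation_penalty(model: torch.nn.Module,
                          old_params: dict,    # name -> snapshot tensor
                          importance: dict,    # name -> per-weight importance
                          lam: float = 1.0) -> torch.Tensor:
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        if name in old_params:
            loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# Training on a new task would then minimize: task_loss + consolidation_penalty(...)
```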

In my current ongoing work, I propose an alternative method of supervised learning that does not require direct labels. It uses various forms of supervision derived from an image or object as target values for the target tasks, without labels, and this turns out to be surprisingly effective. The proposed method requires only a few labeled examples to train, can learn the information it needs in a self-supervised manner, and generalizes to datasets not seen during training.
Contributors: Zhang, Jie (Author) / Wang, Yalin (Thesis advisor) / Liu, Huan (Committee member) / Stonnington, Cynthia (Committee member) / Liang, Jianming (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Commonly, image processing is handled on a CPU that is connected to the image sensor by a wire. In these far-sensor processing architectures, there is energy loss associated with sending data across an interconnect from the sensor to the CPU. In an effort to increase energy efficiency, near-sensor processing architectures have been developed, in which the sensor and processor are stacked directly on top of each other. This reduces energy loss associated with sending data off-sensor. However, processing near the image sensor causes the sensor to heat up. Reports of thermal noise in near-sensor processing architectures motivated us to study how temperature affects image quality on a commercial image sensor and how thermal noise affects computer vision task accuracy. We analyzed image noise across nine different temperatures and three sensor configurations to determine how image noise responds to an increase in temperature. Ultimately, our team used this information, along with transient analysis of a stacked image sensor’s thermal behavior, to recommend thermal management strategies that leverage the benefits of near-sensor processing and prevent accuracy loss at problematic temperatures.
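One simple way to quantify the temperature effect described above is to capture repeated exposures of a static scene at each temperature setpoint and take the per-pixel standard deviation across the stack as the temporal-noise estimate. The sketch below assumes a hypothetical capture_frames() interface standing in for the actual sensor driver.

```python
# Hedged sketch: measure temporal image noise across sensor temperatures.
import numpy as np

def temporal_noise(frames: np.ndarray) -> float:
    """frames: (N, H, W) stack of identical exposures of a static scene.
    Returns the mean per-pixel standard deviation across the stack (in DN)."""
    return float(frames.std(axis=0).mean())

def noise_sweep(temps_c, capture_frames, n_frames: int = 32) -> dict:
    """Map each temperature setpoint (deg C) to its measured temporal noise.
    capture_frames(temp_c, count) is a hypothetical sensor interface."""
    return {t: temporal_noise(capture_frames(t, n_frames)) for t in temps_c}
```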
Contributors: Jones, Britton Steele (Author) / LiKamWa, Robert (Thesis director) / Jayasuriya, Suren (Committee member) / Watts College of Public Service & Community Solutions (Contributor) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description

The use of Artificial Intelligence in assistive systems is growing in application and efficiency. From self-driving cars to medical and surgical robots and unsupervised industrial co-robots, the use of AI and robotics to eliminate human error in high-stress environments and perform automated tasks is advancing society’s status quo. Not only has the understanding of co-robotics exploded in the industrial world, but in research as well. The National Science Foundation (NSF) defines co-robots as the following: “...a robot whose main purpose is to work with people or other robots to accomplish a goal” (NSF, 1). The latest iteration of their National Robotics Initiative, NRI-2.0, focuses on creating co-robots optimized for ‘scalability, customizability, lowering barriers to entry, and societal impact’ (NSF, 1). While many avenues have been explored for implementing co-robotics to create more efficient processes and sustainable lifestyles, this project focused on societal-impact co-robotics in the field of human safety and well-being. Introducing a co-robotics and computer vision AI solution for first responder assistance would help bring awareness and efficiency to public safety. The use of real-time identification techniques would create a greater range of awareness for first responders in high-stress situations. A combination of environmental features collected through sensors (camera and radar) could be used to identify people and objects within environments where visual impairments and obstructions are common (e.g., burning buildings, smoke-filled rooms, etc.). Information about situational conditions (environmental readings, locations of other occupants, etc.) could be transmitted to first responders in emergency situations, maximizing situational awareness. This would not only aid first responders in evaluating emergency situations, but also provide useful data to help determine the most effective course of action for each situation.
Contributors: Scott, Kylel D (Author) / Benjamin, Victor (Thesis director) / Liu, Xiao (Committee member) / Engineering Programs (Contributor) / College of Integrative Sciences and Arts (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description

Recent advancements in machine learning methods have allowed companies to develop advanced computer vision aided production lines that take advantage of the raw and labeled data captured by high-definition cameras mounted at vantage points on their factory floor. We experiment with two different methods of developing one such system to automatically track key components on a production line. By tracking the state of these key components using object detection, we can accurately determine and report production line metrics like part arrival and start/stop times for key factory processes. We began by collecting and labeling raw image data from the cameras overlooking the factory floor. Using that data, we trained two dedicated object detection models. Our training utilized transfer learning, starting from a Faster R-CNN ResNet model trained on Microsoft’s COCO dataset. The first model is a binary classifier that detects the state of a single object, while the second is a multiclass classifier that detects the states of two distinct objects on the factory floor. Both models achieved over 95% classification and localization accuracy on our test datasets. Having two additional classes did not affect the classification or localization accuracy of the multiclass model compared to the binary model.
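The transfer-learning setup described above can be sketched with torchvision: load a COCO-pretrained Faster R-CNN with a ResNet-50 FPN backbone and swap its box-predictor head for the small custom class set. The class counts below are illustrative (tracked object states plus background), not the project's exact label map.

```python
# Hedged sketch: fine-tune a COCO-pretrained Faster R-CNN for custom classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_classes: int):
    """num_classes includes the background class, e.g. 2 for the binary model."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the classification head sized for COCO's 91 classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

binary_model = build_detector(num_classes=2)  # one tracked object + background
multi_model = build_detector(num_classes=3)   # two distinct objects + background
```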

Contributors: Paulson, Hunter (Author) / Ju, Feng (Thesis director) / Balasubramanian, Ramkumar (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05