Matching Items (4)
Description
For a conventional quadcopter with four planar rotors, flight times range from 10 to 20 minutes depending on the weight of the vehicle and the size of the battery used. To increase flight time, either the weight of the quadcopter must be reduced or the battery size increased. Another option is to increase the efficiency of the propellers. Previous research shows that ducting a propeller can increase the thrust produced by the rotor-duct system by up to 94%. This research focused on developing and testing a quadcopter with a centrally ducted rotor that produces 60% of the total system thrust, supplemented by three peripheral rotors. This configuration provides longer flight times while retaining the same maneuvering flexibility in planar movements.
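
As a back-of-the-envelope illustration of the thrust split described above, the sketch below computes the per-rotor loading at hover when the central ducted rotor carries 60% of the total thrust and three peripheral rotors share the remainder. The vehicle mass and gravity constant are assumed, illustrative values, not figures from the thesis.

    # Illustrative sketch only: the vehicle mass is an assumed value, not a
    # figure from the thesis. It shows how a 60/40 split between the central
    # ducted rotor and the three peripheral rotors distributes hover thrust.

    G = 9.81        # gravitational acceleration, m/s^2
    MASS_KG = 1.5   # assumed total vehicle mass

    hover_thrust = MASS_KG * G                                 # total thrust needed to hover, N
    central_thrust = 0.60 * hover_thrust                       # central ducted rotor carries 60%
    peripheral_each = (hover_thrust - central_thrust) / 3.0    # remaining 40% shared by 3 rotors

    print(f"Total hover thrust:    {hover_thrust:.2f} N")
    print(f"Central ducted rotor:  {central_thrust:.2f} N")
    print(f"Each peripheral rotor: {peripheral_each:.2f} N")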
Contributors: Lal, Harsh (Author) / Artemiadis, Panagiotis (Thesis advisor) / Lee, Hyunglae (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
To achieve the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses several critical modeling, design, and control objectives for rear-wheel drive ground vehicles. One central objective was to show how to build a low-cost, multi-capability robot platform that can be used for conducting FAME research.

A TFC-KIT car chassis was augmented to provide a suite of substantive capabilities. The augmented vehicle (FreeSLAM Robot) costs less than $500 but offers capabilities comparable to commercially available vehicles costing over $2000.

All demonstrations presented involve the rear-wheel drive FreeSLAM robot. The following summarizes the key hardware demonstrations presented and analyzed:

(1) Cruise (v, ) control along a line,
(2) Cruise (v, ) control along a curve,
(3) Planar (x, y) Cartesian stabilization for the rear-wheel drive vehicle,
(4) Finishing the track with the camera pan-tilt structure in minimum time,
(5) Finishing the track without the camera pan-tilt structure in minimum time,
(6) Vision-based tracking performance at different cruise speeds vx,
(7) Vision-based tracking performance with different camera fixed look-ahead distances L (see the sketch after this description),
(8) Vision-based tracking performance with different delays Td from the vision subsystem,
(9) Manually remote-controlled robot performing indoor SLAM,
(10) Autonomously line-guided robot performing indoor SLAM.

For most cases, hardware data is compared with, and corroborated by, model-based simulation data. In short, the thesis uses a low-cost, self-designed rear-wheel drive robot to demonstrate many capabilities that are critical to reaching the longer-term FAME goal.
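
As an illustration of the kind of look-ahead steering used in the vision-based tracking demonstrations (items 6-8 above), the sketch below implements a generic pure-pursuit style steering law for a car-like, rear-wheel drive robot. The wheelbase, look-ahead distance, and error convention are assumed values; the thesis's own controller and gains are not reproduced here.

    import math

    # Hedged sketch of a generic look-ahead (pure-pursuit style) steering law for a
    # car-like, rear-wheel drive robot. Wheelbase and look-ahead distance are assumed
    # values; this is not the controller designed in the thesis.

    WHEELBASE_M = 0.26   # assumed axle-to-axle distance of the small robot
    LOOKAHEAD_M = 0.50   # camera fixed look-ahead distance L (varied in item 7)

    def steering_from_lateral_error(lateral_error_m: float) -> float:
        """Map the lateral offset of the tracked line at the look-ahead point
        to a front-wheel steering angle in radians."""
        alpha = math.atan2(lateral_error_m, LOOKAHEAD_M)  # angle to the look-ahead point
        # Pure-pursuit steering law: delta = atan(2 * wheelbase * sin(alpha) / L)
        return math.atan2(2.0 * WHEELBASE_M * math.sin(alpha), LOOKAHEAD_M)

    # Example: the line appears 0.10 m to the side of the camera axis at the look-ahead point
    print(f"steering command: {math.degrees(steering_from_lateral_error(0.10)):.1f} deg")

A vision-processing delay Td (item 8) would enter such a loop as stale lateral-error measurements, which is one reason tracking performance is studied as a function of Td.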
Contributors: Lu, Xianglong (Author) / Rodriguez, Armando Antonio (Thesis advisor) / Berman, Spring (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Traditional methods for detecting the status of traffic lights used in autonomous vehicles may be susceptible to errors, which is troublesome in a safety-critical environment. In the case of vision-based recognition methods, failures may arise due to disturbances in the environment such as occluded views or poor lighting conditions. Some methods also depend on high-precision meta-data which is not always available. This thesis proposes a complementary detection approach based on an entirely new source of information: the movement patterns of other nearby vehicles. This approach is robust to traditional sources of error, and may serve as a viable supplemental detection method. Several different classification models are presented for inferring traffic light status based on these patterns. Their performance is evaluated over real-world and simulation data sets, resulting in up to 97% accuracy in each set.
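
To make the idea of inferring light status from nearby-vehicle motion concrete, the sketch below shows a deliberately simple rule-based stand-in: if most vehicles near the stop line are stationary, the light is likely red; if most are accelerating through it, the light is likely green. The features, thresholds, and labels are illustrative assumptions; the thesis evaluates proper classification models rather than this hand-written rule.

    from dataclasses import dataclass
    from typing import List

    # Illustrative stand-in for movement-pattern-based inference. Thresholds and
    # features are assumptions, not the classification models evaluated in the thesis.

    @dataclass
    class VehicleTrack:
        speed_mps: float           # current speed of a nearby vehicle
        accel_mps2: float          # longitudinal acceleration
        dist_to_stopline_m: float  # distance to the intersection stop line

    def infer_light_status(tracks: List[VehicleTrack]) -> str:
        """Guess the traffic light status from the motion of nearby vehicles."""
        near = [t for t in tracks if t.dist_to_stopline_m < 30.0]
        if not near:
            return "unknown"
        stopped = sum(1 for t in near if t.speed_mps < 0.5)
        accelerating = sum(1 for t in near if t.accel_mps2 > 0.5)
        if stopped > len(near) / 2:
            return "red"
        if accelerating > len(near) / 2:
            return "green"
        return "unknown"

    # Example: two cars queued at the stop line, a third still braking toward it
    queue = [VehicleTrack(0.0, 0.0, 5.0), VehicleTrack(0.0, 0.0, 9.0), VehicleTrack(3.0, -1.2, 25.0)]
    print(infer_light_status(queue))   # -> "red"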
Contributors: Campbell, Joseph (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Heni (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A robotic swarm can be defined as a large group of inexpensive, interchangeable robots with limited sensing and/or actuating capabilities that cooperate (explicitly or implicitly) based on local communication and sensing in order to complete a mission. Its inherent redundancy provides flexibility and robustness to failures and environmental disturbances, which guarantees the proper completion of the required task. At the same time, human intuition and cognition can prove very useful in extreme situations where a fast and reliable solution is needed. This idea led to the creation of the field of Human-Swarm Interfaces (HSI), which attempts to incorporate the human element into the control of robotic swarms for increased robustness and reliability.

The aim of the present work is to extend the current state-of-the-art in HSI by applying ideas and principles from the field of Brain-Computer Interfaces (BCI), which has proven very useful for people with motor disabilities. First, a preliminary investigation of the connection between brain activity and the observation of swarm collective behaviors is conducted. After showing that such a connection may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors. The system is based on the combination of motor imagery and input from a game controller, and its feasibility is demonstrated through an extensive experimental process. Finally, speech imagery is proposed as an alternative mental task for BCI applications, supported by a series of rigorous experiments and appropriate data analysis. This work suggests that the integration of BCI principles in HSI applications can be successful and can potentially lead to systems that are more intuitive for users than the current state-of-the-art. At the same time, it motivates further research in the area and lays the stepping stones for the potential development of the field of Brain-Swarm Interfaces (BSI).
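
As a sketch of what a hybrid command pipeline of this kind might look like, the snippet below combines a discrete motor-imagery class label with a continuous game-controller axis to produce a velocity command broadcast to the swarm. The class labels, mapping, and scaling are illustrative assumptions and do not reproduce the system developed in the thesis.

    # Hedged sketch of a hybrid input scheme: a discrete motor-imagery decision picks
    # the direction while a game-controller axis scales the commanded speed. Labels,
    # mapping, and scaling are illustrative assumptions, not the thesis's system.

    def swarm_command(mi_class: str, joystick_axis: float, max_speed: float = 1.0) -> dict:
        """Combine a motor-imagery label ("left"/"right") with a joystick axis value
        in [-1, 1] into a lateral velocity command applied to every quadrotor."""
        direction = {"left": -1.0, "right": 1.0}.get(mi_class, 0.0)
        lateral_velocity = direction * abs(joystick_axis) * max_speed
        return {"vx": 0.0, "vy": lateral_velocity, "vz": 0.0}

    # Example: the decoder reports "right" imagery while the stick is pushed to 70%
    print(swarm_command("right", 0.7))   # -> {'vx': 0.0, 'vy': 0.7, 'vz': 0.0}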
Contributors: Karavas, Georgios Konstantinos (Author) / Artemiadis, Panagiotis (Thesis advisor) / Berman, Spring M. (Committee member) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created: 2017