Introduction

Using a robotic camera arm (equipped with a ZED stereo camera) mounted on a YuMi robot (a fixed-base robot with two arms), the goal is to achieve natural human-robot collaboration. This includes handing over objects and teaching the robot motions by demonstrating them with the user's own arms. To achieve this, visual human pose estimation and gesture recognition are combined with modern trajectory optimization algorithms. Depending on the recognized gesture, different actions are taken. Human-like robot motions were implemented to give the user feedback and an intuitive sense of what the robot is currently doing.
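
As an illustration only, the gesture-dependent behaviour can be thought of as a dispatch from recognized gestures to robot actions. The gesture names and actions in this sketch are hypothetical (the actual thesis code is not public):

```python
# Minimal sketch of a gesture -> action dispatch, assuming hypothetical
# gesture classes and action names; not the actual thesis implementation.
from enum import Enum, auto


class Gesture(Enum):
    OPEN_PALM = auto()  # e.g. request a handover
    POINTING = auto()   # e.g. start teaching a motion
    FIST = auto()       # e.g. stop the current action


def dispatch(gesture: Gesture) -> str:
    """Map a recognized gesture to the robot action that should run next."""
    actions = {
        Gesture.OPEN_PALM: "perform_handover",
        Gesture.POINTING: "start_teaching",
        Gesture.FIST: "stop_motion",
    }
    return actions.get(gesture, "idle")


print(dispatch(Gesture.OPEN_PALM))  # -> perform_handover
```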

Demonstration Videos

Handovers

Demo Video

Teach and repeat motions

Demo Video Teaching

Overview of performed steps

Node Graph Demonstration

The current state of the robot is highlighted in the node graph. The graph can also be used to customize the flow of events: states can be enabled or disabled, links can be deleted to disable the corresponding state transition, and new links can be created (for example, to stop the application after the handover is performed). A hypothetical sketch of this idea is shown below.
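
The following is a minimal sketch of how such a node graph could be represented, assuming a simple set-based structure with enabled states and transition links; all class, state, and method names are illustrative and not taken from the thesis code:

```python
# Hypothetical sketch: states can be enabled/disabled and links (state
# transitions) can be added or removed to customize the flow of events.
class NodeGraph:
    def __init__(self) -> None:
        self.enabled: set[str] = set()             # states that may be entered
        self.links: set[tuple[str, str]] = set()   # allowed transitions (src, dst)

    def add_state(self, state: str, enabled: bool = True) -> None:
        if enabled:
            self.enabled.add(state)

    def add_link(self, src: str, dst: str) -> None:
        self.links.add((src, dst))

    def remove_link(self, src: str, dst: str) -> None:
        self.links.discard((src, dst))             # disables this state transition

    def can_transition(self, src: str, dst: str) -> bool:
        return (src, dst) in self.links and dst in self.enabled


graph = NodeGraph()
for state in ["idle", "handover", "stop"]:
    graph.add_state(state)
graph.add_link("idle", "handover")
graph.add_link("handover", "stop")  # e.g. stop the application after a handover
print(graph.can_transition("handover", "stop"))  # True
```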

Content

This repo demonstrates the results of my master's thesis; it is not possible to make the code publicly available.

It includes the following tasks: