MiRo-Training
The aim of the project is to develop a software architecture for interacting with Miro using vocal and gestural commands.

The Project

This Project has been developed for the Social Robotics course of the master degree program in Robotics Engineering at University of Genoa.

MiRo Companion Robot

MiRo is an animal-like robot developed as a prototype companion. It was designed with a bio-inspired architecture based on neuroscience knowledge of the mammalian brain.

The Objective

The aim of the project is to develop a software architecture for interacting with MiRo using vocal and gestural commands. The robot's attention is obtained through the vocal activation command **"Miro"**. Only after the activation command can the robot execute further commands. The possibilities are the following:
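The activation logic described above can be sketched in plain Python as follows. This is an illustration only: the class and constant names are hypothetical and not taken from the actual project code.

```python
# Hypothetical sketch of the activation flow: commands are ignored until
# the activation word "Miro" has been heard.
ACTIVATION_WORD = "miro"

class CommandHandler:
    def __init__(self):
        self.active = False  # robot starts inattentive

    def handle(self, utterance: str) -> str:
        """Return the action for one recognized utterance."""
        word = utterance.strip().lower()
        if word == ACTIVATION_WORD:
            self.active = True      # attention obtained
            return "activated"
        if not self.active:
            return "ignored"        # no activation yet: do nothing
        return "execute:" + word    # activated: run the command
```

For example, `handle("sit")` before activation is ignored, while the same call after `handle("Miro")` is executed.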

The Software Architecture

The software architecture can be seen by clicking on the section title. It shows the ROS nodes (blocks) and the ROS topics (blue) used by the nodes to communicate.

Each module of the architecture has been implemented as a ROS node. To make communication between the nodes possible, a Publish/Subscribe messaging pattern has been used. The architecture has been organized to be modular, scalable and reusable with few changes. The characteristics of each node are analysed in more depth in the following sections.
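As a plain-Python illustration of the Publish/Subscribe pattern (not the rospy API itself), the key idea is that publishers and subscribers only share a topic name, never a direct reference to each other:

```python
# Minimal publish/subscribe sketch: nodes register callbacks on named
# topics; a publisher pushes a message to every callback on that topic.
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

# Example: a node subscribes to IMU data, another node publishes it.
bus = MessageBus()
received = []
bus.subscribe("/imu_data", received.append)
bus.publish("/imu_data", {"yaw": 0.3})
```

This decoupling is what makes the architecture modular: a node can be swapped out as long as it keeps the same topic names and message types.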

Description of the Modules

Getting Started

Prerequisites

ROS

This project has been developed using ROS (Kinetic).

MiRo Workstation Setup

Download the Miro Developer kit.

Follow the instructions from Consequential Robotics MIRO: Prepare Workstation to set up your workstation to work with the robot. Strictly follow the instructions in the Install mdk section, as the following steps rely on it. It is not necessary to set a static IP for your workstation (laptop) while setting up the connection with MiRo. For a clear step-by-step tutorial you should visit the Emarolab Miro Repository.

The wearable device

In order to interact with MiRo through gestures, a smartwatch with a 9-axis IMU sensor (LG G Watch R) has been used. Follow the instructions reported in imu_stream to download the app for both the smartphone and the smartwatch.

Smartwatch and Smartphone Setup

In order to publish IMU sensor data from your smartwatch to ROS nodes, you must have a smartwatch paired with a smartphone. The smartphone acts as the bridge between the smartwatch and the ROS master running on your computer.

MQTT ROS Bridge

In order to successfully subscribe to MQTT topics and publish the contents of MQTT messages to ROS, follow the instructions in mqtt_ros_bridge. To work with the current project, some parameters must be modified in imu_bridge.launch: the parameter device_name must be changed to the name of your personal smartwatch.
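As an illustration, the change might look like the fragment below. Only the device_name parameter is taken from the text above; the node and file layout are placeholders, since the actual imu_bridge.launch contents depend on the mqtt_ros_bridge version you cloned.

```
<!-- Sketch only: set device_name to your own smartwatch's name. -->
<launch>
  <node pkg="mqtt_ros_bridge" type="imu_bridge.py" name="imu_bridge">
    <param name="device_name" value="YOUR SMARTWATCH NAME" />
  </node>
</launch>
```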

ROS Based Speech Interface

In order to interact vocally with the robot, we use a repository that contains an example of a web interface for speaking with the robot. It is based on the Google Speech Demo for performing speech-to-text. We disabled the text-to-speech functionality.

For this project we used the microphone in the LOGITECH Wireless Headset H600, but any microphone connected to your laptop should work fine.

Create a catkin workspace and clone all the packages in the src folder

```
$ git clone https://github.com/EmaroLab/ros_verbal_interaction_node.git
```

For further information, follow the instructions contained in the ros_verbal_interaction_node repository.

OpenCV apps

The images streamed from MiRo's camera are processed using the opencv_apps package. The camera frames are subject to colour segmentation and Hough circle detection. To install it:

```
$ sudo apt install ros-kinetic-opencv-apps
```
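As a hedged sketch, the circle detector could be wired to the camera stream with a launch fragment like the one below. The opencv_apps package provides a hough_circles node, but the MiRo camera topic name here is a placeholder; check the real topic name with `rostopic list`.

```
<!-- Sketch only: remap the detector's input image to MiRo's camera topic. -->
<launch>
  <node pkg="opencv_apps" type="hough_circles" name="hough_circles">
    <remap from="image" to="/miro/camera/image" />
  </node>
</launch>
```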

MiRo Training

Create a catkin workspace and clone all the packages in the src folder

```
$ git clone https://github.com/EmaroLab/MiRo-training.git
$ cd ..
$ catkin_make
$ source devel/setup.bash
```

Note that catkin_make must be run from the root of the workspace, not from the src folder.

Run the Project

Open a new terminal and launch

```
$ roscore
```

mosquitto must be running on your PC for the bridge to work.

In a new terminal:

```
$ mosquitto
```

Make sure that the IP set in the IMU_stream app on the smartphone is the same as the one shown by:

```
$ ifconfig
```

Open the IMU_stream app on the smartwatch. To test whether the connection between the smartwatch and ROS is working, start transmitting data from the IMU_stream app on the smartwatch and check in a new terminal (use `rostopic list` to find the topic name):

```
$ rostopic echo <imu-topic>
```

You should see the IMU data published by the smartwatch.

Connect the Miro robot to the ROS Master

```
$ ssh root@<MIRO-IP>
$ sudo nano ~/.profile
```

Insert your IP after ROS_MASTER_IP.
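The relevant line in the robot's profile might then look like the following. The value is a placeholder for your workstation's IP as reported by `ifconfig`.

```
export ROS_MASTER_IP=<YOUR-WORKSTATION-IP>
```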

For more detailed instructions see MIRO: Commission MIRO

In a new terminal, open your catkin workspace. The following command will start the project:

```
$ roslaunch command_handler command_handler.launch
```

Parameters that can be changed directly from the launch file:

Results

Click the picture below for the demonstration video.

[![MiRo-Training - SoRo](https://img.youtube.com/vi/DoKFgs3enpU/0.jpg)](https://www.youtube.com/watch?v=DoKFgs3enpU)

Each participant was asked to fill in a questionnaire in order to evaluate the interaction with the robot.

Recommendations

1) Once the ros_verbal_interaction_node repository has been cloned, be sure to unsubscribe from the topic */text_to_speech*. We crudely modified the speech_web_interface.html file so that it publishes to an unsubscribed topic instead of */text_to_speech*. This prevents the web interface from repeating what the user has just said.

Acknowledgments

Team