A vocal interface to control a mobile robot

Authors

  • Roberto Gretter Fondazione Bruno Kessler (FBK), 38123 Povo, Trento, Italy
  • Maurizio Omologo Fondazione Bruno Kessler (FBK), 38123 Povo, Trento, Italy
  • Luca Cristoforetti Fondazione Bruno Kessler (FBK), 38123 Povo, Trento, Italy
  • Piergiorgio Svaizer Fondazione Bruno Kessler (FBK), 38123 Povo, Trento, Italy

DOI:

https://doi.org/10.17469/O2104AISV000015

Keywords:

multi-microphone signal processing, multi-modal interfaces, distant-speech recognition, spoken dialogue management, human-robot interaction

Abstract

A multi-modal interface has been integrated on a mobile robotic platform, allowing the user to interact at a distance through voice and gestures. The platform includes a microphone array whose processing provides speaker localization as well as enhanced signal acquisition. Multi-modal dialogue management is combined with traditional HMM-based ASR technology, so that the user can interact with the robot in different ways, e.g., for platform navigation purposes. The system is always listening, operates in real time, and has been tested in different environments. A corpus of dialogues was collected while using the resulting platform in an apartment. Experimental results show that performance is quite satisfactory in terms of both recognition and understanding rates, even when the user is several meters away from the robot.
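
The abstract mentions speaker localization from a microphone array. As a purely illustrative sketch, not the authors' implementation, the snippet below shows GCC-PHAT time-delay estimation between two microphone channels, a common building block of array-based speaker localization; the sampling rate, signal names, and parameters are assumptions.

import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    # Estimate the delay (in seconds) of `sig` relative to `ref`
    # using Generalized Cross-Correlation with PHAT weighting.
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    # Cross-power spectrum, normalized to unit magnitude (PHAT).
    R = SIG * np.conj(REF)
    R /= np.maximum(np.abs(R), 1e-15)
    # Back to the time domain, interpolated for finer delay resolution.
    cc = np.fft.irfft(R, n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

# Toy usage: the second channel is the first one delayed by 5 samples,
# so the estimated delay should be about 5 / 16000 s.
fs = 16000
ref = np.random.randn(fs)
sig = np.roll(ref, 5)
print(gcc_phat(sig, ref, fs))

In a real array, such pairwise delay estimates would be combined across microphone pairs to infer the speaker's direction; the paper itself does not specify this particular formulation.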

Published

31-12-2018
