Fast and Continuous Foothold Adaptation for Dynamic Locomotion Through CNNs

Camurri, M; Focchi, M
2019-01-01

Abstract

Legged robots can outperform wheeled machines on most navigation tasks across unknown and rough terrain. For such tasks, visual feedback is a fundamental asset for providing robots with terrain awareness. However, robust dynamic locomotion on difficult terrain with real-time performance guarantees remains a challenge. We present a real-time, dynamic foothold adaptation strategy based on visual feedback. Our method adjusts the landing position of the feet in a fully reactive manner, using only on-board computers and sensors. The correction is computed and executed continuously along the swing-phase trajectory of each leg. To adapt the landing position efficiently, we implement a self-supervised foothold classifier based on a convolutional neural network (CNN), which computes corrections up to 200 times faster than the full heuristic evaluation. Our goal is to react to visual stimuli from the environment, bridging the gap between blind reactive locomotion and purely vision-based planning strategies. We assess the performance of our method on the dynamic quadruped robot HyQ, executing static and dynamic gaits (at speeds up to 0.5 m/s) in both simulated and real scenarios; the benefit of safe foothold adaptation is clearly demonstrated by the overall robot behavior.
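The abstract describes a self-supervised CNN foothold classifier that replaces a slower heuristic evaluation. One plausible reading is a small network that scores candidate landing cells in a local heightmap patch around the nominal foothold and returns the offset to the best-scoring cell. The sketch below illustrates only that general idea in PyTorch; the patch size, grid resolution, layer sizes, and helper names (FootholdCNN, foothold_correction) are assumptions for illustration, not the architecture or code published in the paper.

# Illustrative sketch only: a tiny CNN that scores candidate foothold cells
# in a local heightmap patch around the nominal landing position.
# All dimensions and the offset conversion are assumptions.
import torch
import torch.nn as nn

PATCH = 15          # assumed heightmap patch resolution (cells per side)
CELL_SIZE = 0.02    # assumed grid resolution in metres

class FootholdCNN(nn.Module):
    def __init__(self, patch=PATCH):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 16 * (patch // 4) * (patch // 4)
        # one logit per candidate landing cell in the patch
        self.classifier = nn.Linear(flat, patch * patch)

    def forward(self, heightmap):  # heightmap: (B, 1, PATCH, PATCH)
        x = self.features(heightmap)
        return self.classifier(x.flatten(1))

def foothold_correction(model, heightmap_patch):
    # Return a (dx, dy) offset in metres from the nominal foothold
    # (patch centre) to the highest-scoring candidate cell.
    with torch.no_grad():
        logits = model(heightmap_patch.unsqueeze(0).unsqueeze(0))
    idx = int(logits.argmax())
    row, col = divmod(idx, PATCH)
    centre = PATCH // 2
    return (row - centre) * CELL_SIZE, (col - centre) * CELL_SIZE

In such a scheme, a single forward pass over a small patch replaces an exhaustive heuristic evaluation of every candidate cell, which is consistent with the large speed-up reported in the abstract; how the paper actually structures the network and its training labels is documented in the publication itself.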
Year: 2019
Issue: 2
Authors: Magana, Oav; Barasuol, V; Camurri, M; Franceschi, L; Focchi, M; Pontil, M; Caldwell, Dg; Semini, C
Fast and Continuous Foothold Adaptation for Dynamic Locomotion Through CNNs / Magana, Oav; Barasuol, V; Camurri, M; Franceschi, L; Focchi, M; Pontil, M; Caldwell, Dg; Semini, C. - In: IEEE ROBOTICS AND AUTOMATION LETTERS. - ISSN 2377-3766. - 4:2(2019), pp. 2140-2147. [10.1109/LRA.2019.2899434]
Files in this item:

villarreal19ral.pdf
Access: open access
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 5.76 MB
Format: Adobe PDF
Fast_and_Continuous_Foothold_Adaptation_for_Dynamic_Locomotion_Through_CNNs.pdf
Access: repository staff only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 3.49 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/365148
Citations
  • PubMed Central: N/A
  • Scopus: 58
  • Web of Science: 50
  • OpenAlex: N/A