SF-UDA-3D: Source-Free Unsupervised Domain Adaptation for LiDAR-based 3D Object Detection

Saltori, Cristiano; Sebe, Nicu; Ricci, Elisa
2020-01-01

Abstract

3D object detectors based only on LiDAR point clouds hold the state of the art on modern street-view benchmarks. However, LiDAR-based detectors generalize poorly across domains due to domain shift. In the case of LiDAR, domain shift is in fact not only due to changes in the environment and in object appearance, as for visual data from RGB cameras, but is also related to the geometry of the point clouds (e.g., point density variations). This paper proposes SF-UDA3D, the first Source-Free Unsupervised Domain Adaptation (SF-UDA) framework to domain-adapt the state-of-the-art PointRCNN 3D detector to target domains for which we have no annotations (unsupervised), and without access to either images or annotations of the source domain (source-free). SF-UDA3D is novel in both respects. Our approach is based on pseudo-annotations, reversible scale transformations and motion coherency. SF-UDA3D outperforms both previous domain adaptation techniques based on feature alignment and state-of-the-art 3D object detection methods which additionally use few-shot target annotations or target annotation statistics. This is demonstrated by extensive experiments on two large-scale datasets, i.e., KITTI and nuScenes.
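
The abstract compresses the method into three ingredients: pseudo-annotations, reversible scale transformations, and motion coherency. As one concrete reading of how these fit together, below is a minimal Python/NumPy sketch based only on the abstract's description: scale the target point cloud, run the source-pretrained detector, reverse the scaling on the predicted boxes to obtain pseudo-annotations in the original metric space, and select the scale whose pseudo-annotations evolve most coherently across frames. All names, the box layout, and the coherency score are illustrative assumptions, not the authors' code.

import numpy as np

def detect_at_scale(detector, points, scale):
    # Scale the cloud, detect, then undo the scaling on the predicted box.
    # The transformation is reversible: center and size grow/shrink by the
    # same factor, so dividing by `scale` maps the box back exactly.
    # Assumed box layout: (x, y, z, w, l, h, yaw); `detector` is any callable
    # returning the top box for a cloud (a hypothetical stand-in here).
    box = np.asarray(detector(points * scale), dtype=float).copy()
    box[:6] /= scale  # yaw is scale-invariant
    return box

def motion_coherency(track):
    # Toy coherency score over a tracked box (needs >= 3 frames): a real,
    # rigid car keeps a near-constant size and moves smoothly, so penalize
    # frame-to-frame size jitter and center acceleration.
    track = np.stack(track)                                    # (T, 7)
    size_jitter = np.abs(np.diff(track[:, 3:6], axis=0)).mean()
    accel = np.abs(np.diff(track[:, :3], n=2, axis=0)).mean()
    return -(size_jitter + accel)

def select_scale(detector, frames, candidate_scales):
    # Keep the candidate scale whose pseudo-annotations are most coherent.
    scores = {s: motion_coherency([detect_at_scale(detector, pts, s)
                                   for pts in frames])
              for s in candidate_scales}
    return max(scores, key=scores.get)

# Smoke test with a dummy detector that fits an axis-aligned box.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def dummy_detector(pts):
        c, ext = pts.mean(axis=0), pts.max(axis=0) - pts.min(axis=0)
        return np.concatenate([c, ext, [0.0]])
    frames = [rng.normal(size=(64, 3)) for _ in range(5)]
    print(select_scale(dummy_detector, frames, [0.8, 0.9, 1.0, 1.1]))

Once the best scale is found, the boxes it produces can serve as pseudo-annotations for fine-tuning the detector on the unlabeled target data, which is the source-free adaptation step the abstract refers to.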
2020
Proceedings: 2020 International Conference on 3D Vision
Piscataway, NJ
IEEE
978-1-7281-8128-8
Saltori, Cristiano; Lathuilière, Stéphane; Sebe, Nicu; Ricci, Elisa; Galasso, Fabio
SF-UDA-3D: Source-Free Unsupervised Domain Adaptation for LiDAR-based 3D Object Detection / Saltori, Cristiano; Lathuilière, Stéphane; Sebe, Nicu; Ricci, Elisa; Galasso, Fabio. - (2020), pp. 771-780. (Paper presented at the 3DV 2020 conference, held as a virtual event, 25th-28th November 2020) [10.1109/3DV50981.2020.00087].
Files in this record:

2010.0824.pdf
Access: open access
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 3.64 MB
Format: Adobe PDF

SF-UDA3D_Source-Free_Unsupervised_Domain_Adaptation_for_LiDAR-Based_3D_Object_Detection.pdf
Access: repository administrators only
Type: Publisher's layout (editorial version)
License: All rights reserved
Size: 458.04 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/286980
Citations
  • PMC: not available
  • Scopus: 24
  • Web of Science: 18