
Online depth learning against forgetting in monocular videos

Ricci, Elisa; Sebe, Nicu
2020-01-01

Abstract

Online depth learning is the problem of continually adapting a depth estimation model to a continuously changing environment. This problem is challenging because the network easily overfits to the current environment and forgets its past experiences. To address this problem, this paper presents a novel Learning to Prevent Forgetting (LPF) method for online mono-depth adaptation to new target domains in an unsupervised manner. Instead of updating the universal parameters, LPF learns adapter modules that efficiently adjust the feature representation and distribution without losing the pre-learned knowledge in the online setting. Specifically, to adapt to temporally continuous depth patterns in videos, we introduce a novel meta-learning approach that learns the adapter modules by incorporating the online adaptation process into the learning objective. To further avoid overfitting, we propose a novel temporal-consistent regularization that harmonizes the gradient descent procedure at each online learning step. Extensive evaluations on real-world datasets demonstrate that the proposed method, with very limited parameters, significantly improves the estimation quality.
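The abstract's two key ideas — lightweight adapter modules that leave the backbone untouched, and a meta-objective that simulates the online adaptation step during training — can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation; the `Adapter` design (a zero-initialized residual 1x1-conv bottleneck) and the `meta_step` helper are illustrative assumptions, and the self-supervised loss is left abstract as `loss_fn`.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight adapter: a residual 1x1-conv bottleneck inserted after a
    frozen backbone stage, so only these few parameters change online."""
    def __init__(self, channels: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.up = nn.Conv2d(bottleneck, channels, kernel_size=1)
        nn.init.zeros_(self.up.weight)  # zero-init: adapter starts as identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

def meta_step(adapter, frame_t, frame_t1, loss_fn, inner_lr=1e-3):
    """One MAML-style meta-learning step: take an inner gradient step on the
    adapter using the current frame, then evaluate the adapted parameters on
    the next frame, so the outer loss backpropagates through the adaptation."""
    inner_loss = loss_fn(adapter(frame_t))
    grads = torch.autograd.grad(inner_loss, adapter.parameters(),
                                create_graph=True)
    # Functional (out-of-place) update of the adapter parameters.
    adapted = {name: p - inner_lr * g
               for (name, p), g in zip(adapter.named_parameters(), grads)}
    out = torch.func.functional_call(adapter, adapted, (frame_t1,))
    return loss_fn(out)  # outer (meta) loss
```

Because the adapter is zero-initialized it initially passes features through unchanged, and because only its parameters receive the inner update, the pre-trained backbone knowledge is preserved, matching the abstract's claim of adaptation "with very limited parameters".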
2020
Proceedings: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Piscataway, NJ
IEEE Computer Society
978-1-7281-7168-5
Zhang, Zhenyu; Lathuiliere, Stephane; Ricci, Elisa; Sebe, Nicu; Yan, Yan; Yang, Jian
Online depth learning against forgetting in monocular videos / Zhang, Zhenyu; Lathuiliere, Stephane; Ricci, Elisa; Sebe, Nicu; Yan, Yan; Yang, Jian. - ELECTRONIC. - (2020), pp. 4493-4502. (Presented at the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, held virtually, 14th-19th June 2020) [10.1109/CVPR42600.2020.00455].
Files in this product:
File: 09157014.pdf (archive administrators only)
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 748.64 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/287384
Citations
  • PubMed Central: n/a
  • Scopus: 23
  • Web of Science: 15
  • OpenAlex: n/a