A Spatially Aware Few-Shot Approach to Classification of Radar Sounder Data / Yebasse, Milkisa T.; Bruzzone, Lorenzo. - In: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. - ISSN 1558-0644. - 63:(2025), pp. 1-14. [10.1109/TGRS.2025.3546160]
A Spatially Aware Few-Shot Approach to Classification of Radar Sounder Data
Milkisa T. Yebasse; Lorenzo Bruzzone
2025-01-01
Abstract
Analyzing radargrams obtained from radar sounder (RS) instruments is an effective method for studying the subsurface of celestial bodies. However, existing deep learning-based methods for the automatic classification of subsurface targets in RS data require large amounts of labeled training data, which poses a significant challenge. Moreover, these methods generalize poorly to radargrams from different campaigns because the distributions within the same classes vary. To address these limitations, we propose a novel few-shot pixel-based classification framework for RS data. The framework learns underlying patterns from only a few labeled support samples and adapts quickly to radargrams from different campaigns with minimal labeled information and without retraining. Given the scarcity of labeled RS data and the greater importance of low-level features (such as texture and intensity) than high-level features (such as shapes) for class differentiation, we simplify the segmentation task into a classification problem by treating each pixel independently. To preserve spatial information, we introduce a spatial input (SI) that integrates neighboring pixels along the depth dimension and incorporates sequence awareness, addressing misinterpretations of reflections from different subsurface targets. Furthermore, unlike conventional semantic segmentation approaches that rely on encoder-decoder structures, our framework eliminates the decoder component. We evaluate the proposed method on different datasets acquired over regions of Antarctica by MCoRDS. The results show the effectiveness and generalization capability of the proposed framework in accurately segmenting different subsurface targets with very limited data.
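The two core ideas in the abstract — building a spatial input (SI) by stacking each pixel with its neighbors along the depth dimension, and classifying pixels from only a few labeled support samples — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the window size, and the use of Euclidean nearest-prototype matching for the few-shot step are assumptions introduced here for clarity.

```python
import numpy as np

def build_spatial_inputs(radargram, window=7):
    """For each pixel, stack `window` neighboring samples along the depth
    (range) axis into a short 1-D sequence, preserving local context.

    radargram: 2-D array of shape (depth, traces).
    Returns an array of shape (depth * traces, window)."""
    half = window // 2
    # Pad along depth so border pixels also receive a full window.
    padded = np.pad(radargram, ((half, half), (0, 0)), mode="edge")
    depth, traces = radargram.shape
    feats = np.empty((depth * traces, window), dtype=radargram.dtype)
    for i in range(depth):
        for j in range(traces):
            feats[i * traces + j] = padded[i:i + window, j]
    return feats

def classify_by_prototype(support_feats, support_labels, query_feats):
    """Few-shot classification sketch: average the few labeled support
    samples per class into a prototype, then assign each query pixel
    to the class of the nearest prototype (Euclidean distance)."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :],
                           axis=2)
    return classes[dists.argmin(axis=1)]
```

In the paper the comparison would be done on learned embeddings rather than raw pixel windows; the sketch only shows how decoder-free, pixel-wise few-shot classification can operate once such features are available.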



