FreeDA

Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation

CVPR 2024

¹University of Modena and Reggio Emilia, ²IIT-CNR
* Equal contribution

FreeDA is a training-free approach for open-vocabulary segmentation with free-form textual queries.

Abstract

Open-vocabulary semantic segmentation aims at segmenting arbitrary categories expressed in textual form. Previous works have trained on large amounts of image-caption pairs to enforce pixel-level multimodal alignments. However, captions provide global information about the semantics of a given image but lack direct localization of individual concepts. Further, training on large-scale datasets inevitably brings significant computational costs. In this paper, we propose FreeDA, a training-free diffusion-augmented method for open-vocabulary semantic segmentation, which leverages the ability of diffusion models to visually localize generated concepts and local-global similarities to match class-agnostic regions with semantic classes. Our approach involves an offline stage in which textual-visual reference embeddings are collected, starting from a large set of captions and leveraging visual and semantic contexts. At test time, these are queried to support the visual matching process, which is carried out by jointly considering class-agnostic regions and global semantic similarities. Extensive analyses demonstrate that FreeDA achieves state-of-the-art performance on five datasets, surpassing previous methods by more than 7.0 mIoU points on average, without requiring any training.


Method

Diffusion-Augmented Prototype Generation



Overview of the diffusion-augmented prototype generation phase. Visual prototypes are generated by pooling self-supervised visual features on weak localization masks extracted from Stable Diffusion.
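The sketch below illustrates this prototype-generation step in Python. It assumes hypothetical helpers that are not part of the released code: generate_with_attention (a Stable Diffusion call that also returns an attribution map for a concept token), dino_encode (dense self-supervised features upsampled to image resolution), and a 0.5 binarization threshold chosen purely for illustration.

import torch
import torch.nn.functional as F

def build_prototype(caption, concept, generate_with_attention, dino_encode,
                    threshold=0.5):
    """Generate an image from a caption, localize the concept via its
    attribution map, and pool self-supervised features inside the
    binarized mask to obtain a single visual prototype (illustrative sketch)."""
    # 1) Generate an image and the attribution map for the concept token.
    image, attribution = generate_with_attention(caption, concept)  # (H, W, 3), (H, W)

    # 2) Normalize and binarize the attribution map into a weak localization mask.
    attribution = (attribution - attribution.min()) / (attribution.max() - attribution.min() + 1e-8)
    mask = attribution > threshold  # (H, W) boolean

    # 3) Extract dense self-supervised visual features (e.g., DINO patch tokens).
    features = dino_encode(image)  # (H, W, D), aligned to image resolution

    # 4) Mask-average-pool the features to obtain the prototype embedding.
    prototype = features[mask].mean(dim=0)  # (D,)
    return F.normalize(prototype, dim=-1)

Running this over a large caption collection would yield the bank of visual prototypes that is stored offline and queried at inference time.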

Training-Free Mask Prediction



Overview of the inference process in FreeDA. Local (region-level) similarities are computed in a self-supervised visual embedding space by comparing class-agnostic regions with the prototypes built during the offline stage, while global similarities are computed in a multimodal contrastive embedding space by comparing the image with the input texts.
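A minimal sketch of how the local and global similarities could be fused at inference time follows. All helpers are assumptions for illustration: extract_regions (class-agnostic region proposals, e.g., superpixels), dino_encode_regions (L2-normalized region-level self-supervised embeddings), clip_image_embed / clip_text_embeds (a multimodal contrastive space), per-category prototype banks from the offline stage, and the fusion weight alpha; they do not reproduce the paper's exact formulation.

import torch

def predict_masks(image, categories, prototypes_per_category,
                  extract_regions, dino_encode_regions,
                  clip_image_embed, clip_text_embeds, alpha=0.7):
    """Assign each class-agnostic region to a category by fusing
    local (prototype-based) and global (text-based) similarities."""
    regions = extract_regions(image)                     # list of binary masks
    region_feats = dino_encode_regions(image, regions)   # (R, D), L2-normalized

    # Local similarity: best match against each category's retrieved prototypes.
    local_sim = torch.stack([
        (region_feats @ prototypes_per_category[c].T).max(dim=1).values  # (R,)
        for c in categories
    ], dim=1)                                            # (R, C)

    # Global similarity: image-text scores in the multimodal contrastive space.
    global_sim = clip_image_embed(image) @ clip_text_embeds(categories).T  # (1, C)

    # Fuse the two cues and pick the best category per region.
    scores = alpha * local_sim + (1 - alpha) * global_sim  # broadcasts to (R, C)
    assignment = scores.argmax(dim=1)                       # (R,)
    return regions, assignment

The per-region assignments can then be rasterized back into a full segmentation map by painting each region with its predicted category.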


Qualitative Results



Qualitative results of FreeDA with and without global similarities and superpixels.


Prototypes Examples



Examples of retrieved prototypes for a specified textual category. From left to right, we show the original COCO caption, the corresponding generated image, the attribution map, and the binarized mask (area highlighted in red).
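As a rough illustration of how prototypes could be retrieved for a textual category, the sketch below assumes each stored prototype is paired with a textual key (an embedding of its source caption or concept). The text encoder, the key bank, and the value of k are assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def retrieve_prototypes(category, encode_text, text_keys, prototypes, k=5):
    """Return the k visual prototypes whose textual keys are most similar
    to the embedding of the query category (illustrative sketch)."""
    query = F.normalize(encode_text(category), dim=-1)  # (D,)
    sims = text_keys @ query                            # (N,) cosine similarities
    topk = sims.topk(k).indices
    return prototypes[topk]                             # (k, D)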

BibTeX

@inproceedings{barsellotti2024training,
  title={Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation},
  author={Barsellotti, Luca and Amoroso, Roberto and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}