Large-scale vision-and-language models, such as CLIP, are typically trained on web-scale data, which can introduce inappropriate content and lead to unsafe and biased behavior. This, in turn, hampers their applicability in sensitive and trustworthy contexts and raises significant concerns about their adoption.
Our research introduces a novel approach to enhancing the safety of vision-and-language models by diminishing their sensitivity to NSFW (not safe for work) inputs. In particular, our methodology seeks to sever “toxic” linguistic and visual concepts, unlearning the linkage between unsafe linguistic or visual items and unsafe regions of the embedding space. We show how this can be done by fine-tuning a CLIP model on synthetic data obtained from a large language model trained to convert between safe and unsafe sentences, and a text-to-image generator.
We conduct extensive experiments on the resulting embedding space for cross-modal retrieval, text-to-image, and image-to-text generation, where we show that our model can be readily employed with pre-trained generative models.
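As a concrete illustration of the last point, the fine-tuned text encoder can be dropped into an existing text-to-image pipeline in place of the original CLIP encoder. The sketch below is an assumption-laden example: the Safe-CLIP checkpoint name and the Stable Diffusion base model are placeholders and may differ from the released artifacts.

```python
# Minimal sketch: swapping a Safe-CLIP text encoder into a Stable Diffusion
# pipeline. Checkpoint identifiers below are assumptions, not confirmed names.
from transformers import CLIPTextModel
from diffusers import StableDiffusionPipeline

# Hypothetical Safe-CLIP checkpoint on the Hugging Face Hub
safe_text_encoder = CLIPTextModel.from_pretrained("aimagelab/safeclip_vit-l_14")

# Stable Diffusion 1.5 uses a CLIP ViT-L/14 text encoder, so the swap is drop-in
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    text_encoder=safe_text_encoder,
)

image = pipe("a portrait photo of a person at the beach").images[0]
image.save("output.png")
```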
Safe-CLIP is a vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications. To achieve this, we fine-tune the CLIP model on a synthetic dataset obtained from a large language model trained to convert between safe and unsafe sentences, and from a text-to-image generator. The fine-tuning process combines cosine-distance and contrastive losses to modify the CLIP embedding space while preserving the original model's ability to associate linguistic and visual concepts.
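For intuition, a minimal sketch of how such an objective can be assembled is shown below. The loss weighting, the pairing of safe and unsafe samples, and the gradient-stopping choices are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of a combined objective: a CLIP-style contrastive loss that
# keeps safe image-text pairs aligned, plus cosine-distance terms that pull
# embeddings of unsafe inputs toward their "sanitized" safe counterparts.
# Weighting and pairing below are illustrative assumptions.
import torch
import torch.nn.functional as F

def safety_finetune_loss(
    img_safe,      # image embeddings of safe images             (B, D)
    txt_safe,      # text embeddings of safe captions             (B, D)
    img_unsafe,    # image embeddings of the unsafe counterparts  (B, D)
    txt_unsafe,    # text embeddings of the unsafe counterparts   (B, D)
    logit_scale,   # learned temperature, as in CLIP
    lambda_redirect=1.0,
):
    # Normalize all embeddings to the unit hypersphere, as CLIP does
    img_safe = F.normalize(img_safe, dim=-1)
    txt_safe = F.normalize(txt_safe, dim=-1)
    img_unsafe = F.normalize(img_unsafe, dim=-1)
    txt_unsafe = F.normalize(txt_unsafe, dim=-1)

    # Contrastive term: preserve the original safe image-text alignment
    logits = logit_scale * img_safe @ txt_safe.t()
    labels = torch.arange(logits.size(0), device=logits.device)
    contrastive = (F.cross_entropy(logits, labels) +
                   F.cross_entropy(logits.t(), labels)) / 2

    # Cosine-distance terms: redirect unsafe inputs toward the embeddings of
    # their safe counterparts (gradients flow only through the unsafe branch)
    redirect_txt = (1 - F.cosine_similarity(txt_unsafe, txt_safe.detach(), dim=-1)).mean()
    redirect_img = (1 - F.cosine_similarity(img_unsafe, img_safe.detach(), dim=-1)).mean()

    return contrastive + lambda_redirect * (redirect_txt + redirect_img)
```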
@inproceedings{poppi2024removing,
  title={{Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models}},
  author={Poppi, Samuele and Poppi, Tobia and Cocchi, Federico and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2024}
}