Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models

1University of Modena and Reggio Emilia, 2University of Pisa, 3IIT-CNR, Italy
* Equal contribution

Safe-CLIP is an enhanced vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications. Based on the CLIP model, Safe-CLIP is fine-tuned to sever the association between unsafe linguistic and visual concepts, ensuring safer outputs in text-to-image and image-to-text retrieval and generation tasks.

Warning: This project involves explicit sexual content, racially insensitive language, and other material that may be harmful or disturbing to certain users. Please use this content solely for research purposes and proceed with caution.

Abstract

Large-scale vision-and-language models, such as CLIP, are typically trained on web-scale data, which can introduce inappropriate content and lead to the development of unsafe and biased behavior. This, in turn, hampers their applicability in sensitive and trustworthy contexts and could raise significant concerns in their adoption.

Our research introduces a novel approach to enhancing the safety of vision-and-language models by diminishing their sensitivity to NSFW (not safe for work) inputs. In particular, our methodology seeks to sever “toxic” linguistic and visual concepts, unlearning the linkage between unsafe linguistic or visual items and unsafe regions of the embedding space. We show how this can be done by fine-tuning a CLIP model on synthetic data obtained from a large language model trained to convert between safe and unsafe sentences, and a text-to-image generator.

We conduct extensive experiments on the resulting embedding space for cross-modal retrieval, text-to-image, and image-to-text generation, and show that our model can be readily employed in combination with pre-trained generative models.

Model architecture

Safe-CLIP is a vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications. To achieve this, we fine-tune the CLIP model on a synthetic dataset obtained from a large language model trained to convert between safe and unsafe sentences, and from a text-to-image generator. The fine-tuning process combines cosine-distance and contrastive losses to modify the CLIP embedding space while preserving the original CLIP's ability to associate linguistic and visual concepts.
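
The snippet below is a minimal sketch of how such a combined objective could look in PyTorch; it is not the released training code, and the tensor names, loss weights, and batching scheme are assumptions for illustration only. It pairs a cosine-distance term, which pulls embeddings of unsafe inputs toward their safe counterparts, with a standard CLIP-style contrastive term that preserves text-image alignment.

```python
# Hypothetical sketch of a Safe-CLIP-style fine-tuning objective (PyTorch).
import torch
import torch.nn.functional as F

def cosine_redirection_loss(unsafe_emb, safe_emb):
    # Push embeddings of unsafe inputs toward their safe counterparts:
    # 1 - cosine similarity, averaged over the batch.
    return (1 - F.cosine_similarity(unsafe_emb, safe_emb, dim=-1)).mean()

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE loss, as in the original CLIP training objective,
    # used here to preserve the safe text-image alignment.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def safe_clip_loss(unsafe_text_emb, safe_text_emb, image_emb, safe_caption_emb,
                   alpha=1.0, beta=1.0):
    # alpha and beta are hypothetical weights balancing the redirection of
    # unsafe embeddings against the preservation of safe associations.
    redirect = cosine_redirection_loss(unsafe_text_emb, safe_text_emb.detach())
    preserve = contrastive_loss(image_emb, safe_caption_emb)
    return alpha * redirect + beta * preserve
```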

Safe-CLIP Tasks

We evaluate Safe-CLIP on a variety of tasks. In cross-modal retrieval, both text and images serve as queries to retrieve the most relevant items from the other modality. We also evaluate the model on text-to-image and image-to-text generation, employing the Stable Diffusion pipeline for the former and LLaVA-family models for the latter. Notably, because Safe-CLIP retains the original CLIP interface, it can be easily integrated into pre-trained generative models, as sketched below.
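
The following is a hedged usage sketch, not an official example: the checkpoint identifier "aimagelab/safeclip_vit-l_14", the image filenames, and the prompts are assumptions to be replaced with the released checkpoint and your own data. It shows cross-modal retrieval with the HuggingFace CLIP classes and swapping the safe text encoder into a Stable Diffusion 1.4 pipeline, which uses a ViT-L/14 text encoder of the same shape.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, CLIPTextModel
from diffusers import StableDiffusionPipeline

model_id = "aimagelab/safeclip_vit-l_14"  # assumed identifier for the released checkpoint
clip = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Cross-modal retrieval: score a text query against candidate images.
images = [Image.open(p) for p in ["img0.jpg", "img1.jpg"]]  # placeholder files
inputs = processor(text=["a person at the beach"], images=images,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    scores = clip(**inputs).logits_per_text  # higher score = more relevant image

# Text-to-image generation: drop the safe text encoder into Stable Diffusion 1.4.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.text_encoder = CLIPTextModel.from_pretrained(model_id)
image = pipe("a crowded city street at night").images[0]
```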

Qualitative Results

Text-to-Image Retrieval & Image-to-Text Retrieval
Image-to-Text Generation with LLaVA Models
Text-to-Image Generation with Stable Diffusion 1.4

Citation

@inproceedings{poppi2024removing,
  title={{Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models}},
  author={Poppi, Samuele and Poppi, Tobia and Cocchi, Federico and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2024}
}