clip

Arguments

Options

--clip_backbone str

Help: Backbone architecture for CLIP

  • Default: ViT-L/14

  • Choices: RN50, RN101, RN50x4, RN50x16, RN50x64, ViT-B/32, ViT-B/16, ViT-L/14, ViT-L/14@336px

--save_predictions 0|1|True|False -> bool

Help: Whether to save predictions on the TRAINING set after each task

  • Default: 0

--use_templates 0|1|True|False -> bool

Help: Whether to use prompt templates for CLIP. NOTE: the dataset NEEDS to implement a get_prompt_templates method.

  • Default: 0
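Taken together, these options can be declared with `argparse` roughly as follows. This is a minimal sketch, not the library's actual code: the converter name `binary_flag` is illustrative, and the `0|1|True|False -> bool` behavior is inferred from the option spec above.

```python
import argparse

def binary_flag(value: str) -> bool:
    # Accept 0|1|True|False (case-insensitive), as the option spec suggests.
    # Hypothetical helper; the library's real converter may differ.
    if value.lower() in ("1", "true"):
        return True
    if value.lower() in ("0", "false"):
        return False
    raise argparse.ArgumentTypeError(f"expected 0|1|True|False, got {value!r}")

# Backbone choices as listed above.
CLIP_BACKBONES = [
    "RN50", "RN101", "RN50x4", "RN50x16", "RN50x64",
    "ViT-B/32", "ViT-B/16", "ViT-L/14", "ViT-L/14@336px",
]

parser = argparse.ArgumentParser()
parser.add_argument("--clip_backbone", type=str, default="ViT-L/14",
                    choices=CLIP_BACKBONES,
                    help="Backbone architecture for CLIP")
parser.add_argument("--save_predictions", type=binary_flag, default=False,
                    help="Save predictions on the training set after each task")
parser.add_argument("--use_templates", type=binary_flag, default=False,
                    help="Use prompt templates for CLIP")

args = parser.parse_args(["--clip_backbone", "ViT-B/16", "--use_templates", "1"])
print(args.clip_backbone, args.save_predictions, args.use_templates)
```

Note that `choices` makes `argparse` reject unknown backbones at parse time, and the defaults (`ViT-L/14`, `0`, `0`) match the values documented above.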