CLIP#

Arguments#

Options

--clip_backbone : str

Help: Backbone architecture for CLIP

  • Default: ViT-L/14

  • Choices: RN50, RN101, RN50x4, RN50x16, RN50x64, ViT-B/32, ViT-B/16, ViT-L/14, ViT-L/14@336px

--save_predictions : 0|1|True|False -> bool

Help: Whether to save predictions of the TRAINING set after each task

  • Default: 0

--use_templates : 0|1|True|False -> bool

Help: Whether to use prompt templates for CLIP. NOTE: datasets need to implement a get_prompt_templates method (see the sketch after this list).

  • Default: 0
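The --use_templates option relies on the dataset exposing its own prompt templates. Below is a minimal sketch, assuming a hypothetical dataset class, of what such a get_prompt_templates method might return; the template strings are illustrative placeholders, not the templates shipped with any particular dataset.

class MyContinualDataset:  # hypothetical dataset class, for illustration only
    def get_prompt_templates(self):
        # Each template is formatted with a class name and then tokenized
        # and encoded by CLIP's text encoder.
        return [
            'a photo of a {}.',
            'a blurry photo of a {}.',
            'a photo of the {}.',
        ]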

Adaptation of OpenAI’s CLIP. Requires: pip install git+https://github.com/openai/CLIP.git
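A minimal sketch of loading one of the supported backbones with the package installed above. The valid --clip_backbone values mirror what the openai/CLIP package reports via clip.available_models().

import torch
import clip

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Valid backbone names, as also listed under --clip_backbone above.
print(clip.available_models())

# Load the default backbone; clip.load returns the model together with
# the image preprocessing transform it expects.
model, preprocess = clip.load('ViT-L/14', device=device)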

Classes#

class models.clip.CLIP(backbone, loss, args, transform, dataset=None)[source]#

Bases: ContinualModel

STATIC Continual Learning with CLIP

COMPATIBILITY: List[str] = ['class-il', 'domain-il', 'task-il', 'general-continual']#
NAME: str = 'clip'#
begin_task(dataset)[source]#
end_task(dataset)[source]#
forward(x)[source]#
static get_parser(parser)[source]#
Return type: ArgumentParser (see the sketch after this class)

observe(inputs, labels, not_aug_inputs, epoch=None)[source]#
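A sketch, under stated assumptions, of how a get_parser hook could register the options documented in the Arguments section. The option names, defaults, and choices follow the listing above; the library's 0|1|True|False boolean conversion is approximated here with plain int flags, so the actual implementation may differ.

from argparse import ArgumentParser

def get_parser(parser: ArgumentParser) -> ArgumentParser:
    # Register the CLIP-specific options listed in the Arguments section.
    parser.add_argument('--clip_backbone', type=str, default='ViT-L/14',
                        choices=['RN50', 'RN101', 'RN50x4', 'RN50x16', 'RN50x64',
                                 'ViT-B/32', 'ViT-B/16', 'ViT-L/14', 'ViT-L/14@336px'],
                        help='Backbone architecture for CLIP')
    parser.add_argument('--save_predictions', type=int, default=0, choices=[0, 1],
                        help='Whether to save predictions of the TRAINING set after each task')
    parser.add_argument('--use_templates', type=int, default=0, choices=[0, 1],
                        help='Whether to use prompt templates for CLIP')
    return parser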
class models.clip.FinalModel(clip_model, dataset, args)[source]#

Bases: Module

forward(x)[source]#
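FinalModel wraps the loaded CLIP model for a given dataset. The sketch below shows one common way such a wrapper can produce zero-shot logits: pre-compute normalized text embeddings for the class names (optionally averaged over prompt templates) and score images by cosine similarity against them. The class name, constructor arguments, and scoring details are assumptions for illustration, not the library's exact code.

import torch
from torch import nn
import clip

class ZeroShotHead(nn.Module):  # illustrative stand-in for models.clip.FinalModel
    def __init__(self, clip_model, class_names, templates=('a photo of a {}.',), device='cpu'):
        super().__init__()
        self.clip_model = clip_model
        with torch.no_grad():
            per_class = []
            for name in class_names:
                tokens = clip.tokenize([t.format(name) for t in templates]).to(device)
                emb = clip_model.encode_text(tokens)
                emb = emb / emb.norm(dim=-1, keepdim=True)
                per_class.append(emb.mean(dim=0))  # average over templates
            text_features = torch.stack(per_class)
            text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        self.register_buffer('text_features', text_features)

    @torch.no_grad()
    def forward(self, x):
        image_features = self.clip_model.encode_image(x)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        # Cosine similarities against the pre-computed class embeddings act as logits.
        return image_features @ self.text_features.t()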