L2P#
Arguments#
Options
- --prompt_pool (bool)
  Default: True
- --pool_size_l2p (int)
  Help: number of prompts in the pool (M in the paper)
  Default: 10
- --length (int)
  Help: length of each prompt (L_p in the paper)
  Default: 5
- --top_k (int)
  Help: number of top-matching prompts to use (N in the paper)
  Default: 5
- --prompt_key (bool)
  Help: use a learnable key for each prompt
  Default: True
- --prompt_key_init (str)
  Help: initialization type for the prompt keys
  Default: uniform
- --use_prompt_mask (bool)
  Default: False
- --batchwise_prompt (0|1|True|False -> bool)
  Help: use batch-wise prompting (i.e., majority voting) during test. NOTE: this may lead to an unfair comparison with other methods.
  Default: 0
- --embedding_key (str)
  Default: cls
- --predefined_key (str)
- --pull_constraint (unknown)
  Default: True
- --pull_constraint_coeff (float)
  Default: 0.1
- --global_pool (str)
  Help: type of global pooling for the final sequence
  Default: token
  Choices: token, avg
- --head_type (str)
  Help: input type of the classification head
  Default: prompt
  Choices: token, gap, prompt, token+prompt
- --freeze (list)
  Help: parts of the backbone model to freeze
  Default: ['blocks', 'patch_embed', 'cls_token', 'norm', 'pos_embed']
- --clip_grad (float)
  Help: clip gradient norm
  Default: 1
- --use_original_ckpt (0|1|True|False -> bool)
  Help: use the original checkpoint from https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz
  Default: 0
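The pool-related options above (--pool_size_l2p, --length, --top_k, --prompt_key, --embedding_key, --pull_constraint_coeff) interact as follows: the frozen backbone's [CLS] feature is used as a query, matched against the learnable prompt keys, and the best-matching prompts are prepended to the input tokens. Below is a minimal NumPy sketch of this selection step, not the actual implementation; all names and shapes are illustrative assumptions.

```python
import numpy as np

# Illustrative shapes: M prompts (--pool_size_l2p), each of length L_p
# (--length), embedding dimension D, top-N selection (--top_k).
M, L_P, D, N = 10, 5, 8, 5

rng = np.random.default_rng(0)
prompt_pool = rng.normal(size=(M, L_P, D))  # learnable prompt parameters
prompt_keys = rng.normal(size=(M, D))       # learnable key per prompt (--prompt_key)

def normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def select_prompts(query):
    """Return the top-N prompts whose keys best match the query feature."""
    sim = normalize(prompt_keys) @ normalize(query)  # cosine similarity, shape (M,)
    idx = np.argsort(-sim)[:N]                       # indices of the top-N keys
    # The selected prompts are prepended to the patch tokens before the ViT blocks.
    prompts = prompt_pool[idx].reshape(N * L_P, D)
    # Pull constraint: the matched keys are pulled toward the query; this term
    # is added to the loss, scaled by --pull_constraint_coeff.
    pull_loss = -sim[idx].sum()
    return prompts, idx, pull_loss

query = rng.normal(size=(D,))  # stands in for the [CLS] feature (--embedding_key cls)
prompts, idx, pull_loss = select_prompts(query)
print(prompts.shape)  # (25, 8): N * L_p prompt tokens of dimension D
```

With the defaults above (top_k=5, length=5), 25 prompt tokens are prepended to each input sequence.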
L2P: Learning to Prompt for Continual Learning
Note
L2P uses a custom backbone: vit_base_patch16_224, a ViT-B/16 pretrained on ImageNet-21k and fine-tuned on ImageNet-1k.
Classes#
- class models.l2p.L2P(backbone, loss, args, transform, dataset=None)[source]#
Bases:
ContinualModel
Learning to Prompt (L2P).
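The --batchwise_prompt option listed above replaces per-sample prompt selection with a batch-level majority vote at test time: the pool indices chosen most often across the batch are used for every sample. A minimal sketch of that vote (the helper name and shapes are hypothetical):

```python
import numpy as np

def batchwise_top_k(per_sample_idx, n):
    """Majority vote: keep the n pool indices selected most often in the batch.

    per_sample_idx: (batch, top_k) array of per-sample key indices.
    """
    ids, counts = np.unique(per_sample_idx, return_counts=True)
    order = np.argsort(-counts, kind="stable")  # most frequent first, ties by id
    return ids[order[:n]]

# Three samples each picked three keys; index 0 appears in all of them.
votes = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 4]])
print(batchwise_top_k(votes, 2))  # [0 1]
```

As the help text warns, sharing one set of prompts across a test batch leaks batch-level information, which may make comparisons with per-sample methods unfair.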