CODA-Prompt#

Arguments#

Options

--mu (float)

Help: weight of prompt loss

  • Default: 0.0

--pool_size (int)

Help: pool size

  • Default: 100

--prompt_len (int)

Help: prompt length

  • Default: 8

--virtual_bs_iterations (int)

Help: virtual batch size iterations

  • Default: 1
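
The options above are registered on the model's argument parser. Below is a minimal sketch of how they could be parsed programmatically, relying only on the static get_parser(parser) hook documented further down; the import path, the standalone ArgumentParser usage, and the printed attribute names are assumptions for illustration.

    # Sketch only: get_parser (documented below) takes an ArgumentParser and
    # returns it with the model-specific options added. Running it standalone
    # and the attribute names on the resulting namespace are assumptions.
    from argparse import ArgumentParser
    from models.coda_prompt import CodaPrompt

    parser = CodaPrompt.get_parser(ArgumentParser())
    args = parser.parse_args([
        "--mu", "0.0",
        "--pool_size", "100",
        "--prompt_len", "8",
        "--virtual_bs_iterations", "1",
    ])
    print(args.mu, args.pool_size, args.prompt_len, args.virtual_bs_iterations)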

CODA-Prompt: COntinual Decomposed Attention-based Prompting

Note

CODA-Prompt uses a custom backbone: vit_base_patch16_224. The backbone is a ViT-B/16 pretrained on ImageNet-21k and fine-tuned on ImageNet-1k.
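
The custom backbone is not the stock ViT from timm (CODA-Prompt injects prompts into the attention layers), but an equivalently pretrained ViT-B/16 can be inspected with timm for reference. The snippet below is illustrative only and not part of this model's API.

    # Illustrative only: a plain ViT-B/16 with ImageNet-21k pretraining and
    # ImageNet-1k fine-tuning, matching the weights described in the note.
    # CODA-Prompt wraps a prompt-aware variant of this architecture instead.
    import timm

    backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
    print(sum(p.numel() for p in backbone.parameters()))  # roughly 86M parameters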

Classes#

class models.coda_prompt.CodaPrompt(backbone, loss, args, transform, dataset=None)#

Bases: ContinualModel

Continual Learning via CODA-Prompt: COntinual Decomposed Attention-based Prompting.

COMPATIBILITY: List[str] = ['class-il', 'task-il']#
NAME: str = 'coda_prompt'#
begin_task(dataset)#
forward(x)#
get_optimizer()#
static get_parser(parser)#
Return type:

ArgumentParser

observe(inputs, labels, not_aug_inputs, epoch=0)#
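
A minimal usage sketch of the methods above, assuming a Mammoth-style class-incremental training loop. The backbone, loss, args, transform, and dataset objects, the loader unpacking, and the N_TASKS / n_epochs attributes follow the usual ContinualModel conventions and are assumptions, not guarantees of this page.

    # Sketch only: drives the documented begin_task / observe / forward methods.
    # The dataset and loader plumbing below is assumed, not documented here.
    import torch

    model = CodaPrompt(backbone, loss, args, transform, dataset=dataset)

    for task_id in range(dataset.N_TASKS):                # assumed attribute
        train_loader, _ = dataset.get_data_loaders()      # advance to the next task (assumed)
        model.begin_task(dataset)                         # per-task setup (documented)
        for epoch in range(args.n_epochs):                # assumed attribute
            for inputs, labels, not_aug_inputs in train_loader:
                # observe() runs one training step; returning the loss value
                # is the usual ContinualModel convention.
                loss_value = model.observe(inputs, labels, not_aug_inputs, epoch=epoch)

    # Inference on a held-out batch (test_images is a placeholder tensor)
    with torch.no_grad():
        logits = model.forward(test_images)
        predictions = logits.argmax(dim=1)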