MoE Adapters

Arguments

Options

--virtual_bs_n (int)

Help: Virtual batch size iterations

  • Default: 8

--clip_backbone (str)

Help: CLIP backbone

  • Default: ViT-L/14

--prompt_template (str)

Help: Template string

  • Default: a bad photo of a {}.
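
For concreteness, the following is a minimal sketch of how these three options could be registered on an argparse parser, mirroring the defaults listed above; it is an illustration, not the actual body of get_parser (documented below):

    from argparse import ArgumentParser

    def get_parser(parser: ArgumentParser) -> ArgumentParser:
        # Hypothetical sketch: register the documented options with their defaults.
        parser.add_argument('--virtual_bs_n', type=int, default=8,
                            help='Virtual batch size iterations')
        parser.add_argument('--clip_backbone', type=str, default='ViT-L/14',
                            help='CLIP backbone')
        parser.add_argument('--prompt_template', type=str,
                            default='a bad photo of a {}.',
                            help='Template string')
        return parser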

Implementation of MoE-Adapters from the CVPR 2024 paper “Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters”.

Paper: https://arxiv.org/abs/2403.11549
Original code: https://github.com/JiazuoYu/MoE-Adapters4CL
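
At a high level, the method inserts mixture-of-experts adapters into a frozen CLIP backbone: a router selects a few lightweight adapter experts per input, and their weighted output is added residually to the frozen features. The block below is a minimal PyTorch sketch of one such layer under those assumptions; all names (MoEAdapterLayer, the bottleneck width, top-k routing) are illustrative, not the paper's exact implementation.

    import torch
    import torch.nn as nn

    class MoEAdapterLayer(nn.Module):
        """Hypothetical sketch of a mixture-of-experts adapter block: a router
        softly selects among lightweight bottleneck adapters, whose weighted
        output is added residually to the frozen backbone features."""

        def __init__(self, dim: int, n_experts: int = 4,
                     bottleneck: int = 64, top_k: int = 2):
            super().__init__()
            self.top_k = top_k
            self.router = nn.Linear(dim, n_experts)
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(dim, bottleneck), nn.GELU(),
                              nn.Linear(bottleneck, dim))
                for _ in range(n_experts)
            ])

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, dim) features from a frozen backbone layer.
            logits = self.router(x)                         # (batch, n_experts)
            weights, idx = logits.topk(self.top_k, dim=-1)  # keep top-k experts
            weights = weights.softmax(dim=-1)
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e in range(len(self.experts)):
                    mask = idx[:, k] == e
                    if mask.any():
                        out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
            return x + out                                  # residual connection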

Classes

class models.moe_adapters.MoEAdapters(backbone, loss, args, transform, dataset=None)

Bases: FutureModel

MoE Adapters: Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters.

COMPATIBILITY: List[str] = ['class-il', 'domain-il', 'task-il', 'general-continual']
NAME: str = 'moe-adapters'
begin_task(dataset)
change_transform(dataset)
forward(x)
future_forward(x)
get_optimizer()
    Return type: Optimizer

get_parameters()
static get_parser(parser)
    Return type: ArgumentParser

net: Model
observe(inputs, labels, not_aug_inputs, epoch=None)
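
The --virtual_bs_n option suggests that observe accumulates gradients over several calls before stepping the optimizer (a "virtual" batch). A minimal sketch of that pattern follows; self.net, self.opt, and the counter attribute are assumptions, not the actual method body:

    import torch.nn.functional as F

    def observe(self, inputs, labels, not_aug_inputs, epoch=None):
        # Sketch of gradient accumulation: step only every `virtual_bs_n`
        # calls, so the effective batch is virtual_bs_n * len(inputs).
        loss = F.cross_entropy(self.net(inputs), labels) / self.args.virtual_bs_n
        loss.backward()
        self._steps = getattr(self, '_steps', 0) + 1  # hypothetical counter
        if self._steps % self.args.virtual_bs_n == 0:
            self.opt.step()
            self.opt.zero_grad()
        return loss.item() * self.args.virtual_bs_n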
class models.moe_adapters.Model(args, dataset, device='cpu')

Bases: Module

forward(images, n_past_classes=0, n_seen_classees=None, train=False)
    Return type: Tensor
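
A hypothetical usage sketch of the wrapper Model: the args and dataset objects come from the framework, and the reading of n_past_classes/n_seen_classees as restricting the logits to the classes seen so far is an assumption based on the signature, not confirmed by the source.

    import torch

    # args and dataset are framework objects; the values here are placeholders.
    model = Model(args, dataset, device='cuda')
    images = torch.randn(4, 3, 224, 224, device='cuda')  # ViT-L/14 input size
    with torch.no_grad():
        logits = model(images, n_past_classes=0, n_seen_classees=10, train=False)
    print(logits.shape)  # per the return type above, a Tensor of logits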