MoE Adapters
Arguments
Options
- --virtual_bs_n : int
  Help: Virtual batch size iterations
  Default: 8
- --clip_backbone : str
  Help: CLIP backbone
  Default: ViT-L/14
- --prompt_template : str
  Help: Template string
  Default: a bad photo of a {}.
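As a quick illustration of how these options and their defaults fit together, below is a minimal, self-contained argparse sketch. It is not the library's actual parser code, only an illustration of the option names, types, and defaults listed above; the final line shows how the {} placeholder in the template would be filled with a hypothetical class name.

```python
# Illustrative sketch only: mirrors the documented option names, types, and
# defaults with plain argparse. The library's real parser may register these
# options differently.
import argparse

parser = argparse.ArgumentParser(description='MoE Adapters options (illustrative)')
parser.add_argument('--virtual_bs_n', type=int, default=8,
                    help='Virtual batch size iterations')
parser.add_argument('--clip_backbone', type=str, default='ViT-L/14',
                    help='CLIP backbone')
parser.add_argument('--prompt_template', type=str, default='a bad photo of a {}.',
                    help='Template string')

args = parser.parse_args([])  # empty list -> use the defaults shown above
print(args.virtual_bs_n, args.clip_backbone)
print(args.prompt_template.format('dog'))  # hypothetical class name -> "a bad photo of a dog."
```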
Implementation of MoE-Adapters from the CVPR 2024 paper “Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters”.

Paper: https://arxiv.org/abs/2403.11549
Original code: https://github.com/JiazuoYu/MoE-Adapters4CL

Classes
- class models.moe_adapters.MoEAdapters(backbone, loss, args, transform, dataset=None)
Bases: FutureModel
MoE Adapters – Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters.
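For intuition about the mechanism named in the class docstring, here is a generic, library-independent sketch of a mixture-of-experts adapter layer: a learned router scores a small set of bottleneck adapters and combines the top-k experts' outputs with a residual connection. The dimensions, expert count, top-k value, and adapter design below are illustrative assumptions, not the paper's or this class's actual configuration.

```python
# Generic mixture-of-experts adapter sketch, for intuition only: a router scores
# a set of small bottleneck adapters and the top-k experts' outputs are added to
# the residual stream. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class MoEAdapterLayer(nn.Module):
    def __init__(self, dim: int = 768, num_experts: int = 4,
                 bottleneck: int = 64, top_k: int = 2):
        super().__init__()
        # Each expert is a small bottleneck MLP (down-project, nonlinearity, up-project).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Route each sample to its top-k experts.
        scores = self.router(x)                         # (batch, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # (batch, top_k)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            expert_out = torch.stack(
                [self.experts[int(i)](x[b]) for b, i in enumerate(idx[:, k])]
            )
            out = out + weights[:, k:k + 1] * expert_out
        return x + out  # residual connection around the adapter


# Quick smoke test on random features (shapes only, no claim about the real model).
layer = MoEAdapterLayer()
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```

In the method itself such adapters are inserted alongside a frozen CLIP backbone; this sketch only shows the routing-and-combination idea in isolation.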