ER ACE TRICKS#
Arguments#
Options
- --bic_epochs : int
  Help: Number of epochs used to train the bias injector (BiC).
  Default: 50
- --elrd : float
  Help: None
  Default: 0.99999925
- --sample_selection_strategy : str
  Help: Sample selection strategy to use: reservoir, lars (Loss-Aware Reservoir Sampling), or labrs (Loss-Aware Balanced Reservoir Sampling); the plain reservoir rule is sketched after this list.
  Default: labrs
  Choices: reservoir, lars, labrs
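The three strategies differ in how a new stream sample replaces an element of a full buffer. Below is a minimal sketch of the plain reservoir rule, assuming the buffer is a simple Python list; the function and variable names are illustrative, not the framework's API. Roughly speaking, lars and labrs bias the choice of the slot to overwrite using per-sample loss values and class counts instead of picking it uniformly.

```python
import random

def reservoir_update(buffer, item, num_seen, buffer_size):
    # Plain reservoir sampling: once the buffer is full, the new item
    # is kept with probability buffer_size / (num_seen + 1).
    if len(buffer) < buffer_size:
        buffer.append(item)
    else:
        slot = random.randint(0, num_seen)  # inclusive on both ends
        if slot < buffer_size:
            buffer[slot] = item             # overwrite a uniformly chosen slot

buffer = []
for num_seen, item in enumerate(range(1000)):
    reservoir_update(buffer, item, num_seen, buffer_size=50)
```

Under this rule every sample in the stream is retained with the same probability, buffer_size / N after N observations.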
Rehearsal arguments
Arguments shared by all rehearsal-based methods; a configuration sketch combining them with the model-specific options above follows this list.
- --buffer_size : int
  Help: The size of the memory buffer.
  Default: None
- --minibatch_size : int
  Help: The number of samples drawn from the memory buffer at each step.
  Default: None
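All of these options reach the model through the args object passed to the constructor shown under Classes. A minimal configuration sketch, assuming args can be any attribute-style namespace; the values are illustrative, not recommended settings, and backbone, loss, transform and dataset still come from the surrounding framework:

```python
from argparse import Namespace

args = Namespace(
    buffer_size=500,                    # capacity of the memory buffer
    minibatch_size=32,                  # samples drawn from the buffer per step
    bic_epochs=50,                      # epochs for the bias injector (BiC)
    elrd=0.99999925,                    # exponential learning-rate decay factor (assumed meaning)
    sample_selection_strategy="labrs",  # one of: reservoir, lars, labrs
)

# model = ErAceTricks(backbone, loss, args, transform, dataset)  # see the class below
```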
This module implements the simplest form of rehearsal training: Experience Replay. It maintains a buffer of previously seen examples and uses them to augment the current batch during training.
- Example usage:
  model = Er(backbone, loss, args, transform, dataset)
  loss = model.observe(inputs, labels, not_aug_inputs, epoch)
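The observe call above wraps the core Experience Replay step: draw up to minibatch_size examples from the buffer, concatenate them with the incoming batch, and update on the combined batch. A minimal PyTorch sketch of that step, using a toy linear model and a buffer held as a plain list of (input, label) pairs; all names here are illustrative rather than the framework's API, and insertion into the buffer via the chosen sample selection strategy is omitted:

```python
import random
import torch
import torch.nn.functional as F

def er_observe(model, optimizer, inputs, labels, buffer, minibatch_size):
    # Augment the incoming batch with a random draw from the memory buffer.
    x, y = inputs, labels
    if buffer:
        replay = random.sample(buffer, min(minibatch_size, len(buffer)))
        x = torch.cat([x, torch.stack([bx for bx, _ in replay])])
        y = torch.cat([y, torch.stack([by for _, by in replay])])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)  # single loss over current + replayed samples
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data.
model = torch.nn.Linear(10, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
buffer = [(torch.randn(10), torch.tensor(1)) for _ in range(20)]
er_observe(model, optimizer, torch.randn(8, 10), torch.randint(0, 3, (8,)),
           buffer, minibatch_size=4)
```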
Classes#
- class models.er_ace_tricks.ErAceTricks(backbone, loss, args, transform, dataset=None)[source]#
Bases: ContinualModel
Experience Replay with tricks from "Rethinking Experience Replay: a Bag of Tricks for Continual Learning".
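Of the tricks exposed above, --elrd appears to control an exponential learning-rate decay, one of the tricks in the cited paper; with a factor this close to 1 the rate shrinks very slowly per update. A minimal sketch with PyTorch's ExponentialLR scheduler, treating the decay as applied once per optimisation step (how ErAceTricks actually schedules it is not shown here):

```python
import torch

model = torch.nn.Linear(10, 3)  # placeholder backbone
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99999925)  # gamma = --elrd

for _ in range(1000):
    optimizer.zero_grad()
    model(torch.randn(8, 10)).sum().backward()
    optimizer.step()
    scheduler.step()  # lr <- lr * gamma after every step

print(optimizer.param_groups[0]["lr"])  # ~0.09993, barely below the initial 0.1
```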