ER ACE TRICKS#

Arguments#

Options

--bic_epochs : int

Help: Number of epochs used to train the bias injector.

  • Default: 50

--elrd : float

Help: Exponential learning rate decay factor, applied per iteration (see apply_decay below).

  • Default: 0.99999925

--sample_selection_strategy : str

Help: Sample selection strategy to use: reservoir, lars (Loss-Aware Reservoir Sampling), labrs (Loss-Aware Balanced Reservoir Sampling)

  • Default: labrs

  • Choices: reservoir, lars, labrs
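
For reference, the reservoir choice corresponds to standard reservoir sampling, which keeps every stream example in the buffer with equal probability; the loss-aware variants (lars, labrs) change how the slot to overwrite is selected, as their names suggest, and are not sketched here. Below is a minimal, illustrative sketch of the plain reservoir policy (function and variable names are assumptions, not this module's actual implementation):

    import random

    def reservoir_update(buffer, capacity, example, num_seen):
        # Standard reservoir sampling (Algorithm R). num_seen is the number of
        # stream examples observed before this one. Illustrative sketch only.
        if len(buffer) < capacity:
            buffer.append(example)              # buffer not yet full: always store
            return
        slot = random.randint(0, num_seen)      # uniform over [0, num_seen]
        if slot < capacity:
            buffer[slot] = example              # overwrite a uniformly chosen slot

Calling such an update once per incoming example keeps the buffer an unbiased uniform sample of the stream, regardless of the stream's length.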

Rehearsal arguments

Arguments shared by all rehearsal-based methods.

--buffer_size : int

Help: The size of the memory buffer.

  • Default: None

--minibatch_size : int

Help: The batch size of the memory buffer.

  • Default: None
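
Both values are normally supplied on the command line together with the model name (NAME = 'er_ace_tricks', listed below). A hypothetical invocation, assuming the project's standard utils/main.py entry point and the --model, --dataset and --lr flags, none of which are documented in this section (the dataset name is a placeholder):

    python utils/main.py --model er_ace_tricks --dataset seq-cifar10 --buffer_size 500 --minibatch_size 32 --lr 0.1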

This module implements the simplest form of rehearsal training: Experience Replay. It maintains a buffer of previously seen examples and uses them to augment the current batch during training.

Example usage:

    model = ErAceTricks(backbone, loss, args, transform, dataset)
    loss = model.observe(inputs, labels, not_aug_inputs, epoch)

Classes#

class models.er_ace_tricks.ErAceTricks(backbone, loss, args, transform, dataset=None)[source]#

Bases: ContinualModel

Experience Replay with tricks from "Rethinking Experience Replay: a Bag of Tricks for Continual Learning".

COMPATIBILITY: List[str] = ['class-il', 'domain-il', 'task-il', 'general-continual']#
NAME: str = 'er_ace_tricks'#
end_task(dataset)[source]#
forward(x)[source]#
Return type:

Tensor

static get_parser(parser)[source]#

Returns an ArgumentParser object with predefined arguments for the Er model.

Return type:

ArgumentParser

observe(inputs, labels, not_aug_inputs, epoch=None)[source]#

ER trains on the current task using the data provided, but also augments the batch with data from the buffer.
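
In outline, one observe step of a generic ER update looks like the sketch below. This is a hedged illustration of the buffer-augmentation recipe with made-up helper names (buffer.sample, buffer.add); it is not the er_ace_tricks implementation, which additionally applies its ACE logic and the tricks configured by the options above.

    import torch

    def er_observe_sketch(model, optimizer, criterion, buffer, inputs, labels,
                          not_aug_inputs, minibatch_size, transform):
        # Hedged sketch of generic Experience Replay; names are illustrative.
        optimizer.zero_grad()

        if len(buffer) > 0:
            # Draw a replay minibatch from memory and append it to the current batch.
            buf_inputs, buf_labels = buffer.sample(minibatch_size, transform=transform)
            inputs = torch.cat([inputs, buf_inputs])
            labels = torch.cat([labels, buf_labels])

        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # Store the raw (non-augmented) current examples for future replay.
        buffer.add(not_aug_inputs, labels[:not_aug_inputs.shape[0]])
        return loss.item()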

Functions#

models.er_ace_tricks.apply_decay(decay, lr, optimizer, num_iter)[source]#
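
Given the signature and the --elrd default of 0.99999925 (very close to 1), this helper plausibly applies a per-iteration exponential decay to the optimizer's learning rate. A minimal sketch under that assumption (the real function may differ):

    def apply_decay_sketch(decay, lr, optimizer, num_iter):
        # Exponential learning-rate decay: after num_iter updates the base
        # learning rate lr is scaled by decay ** num_iter.
        # Illustrative sketch only; the actual apply_decay may differ.
        new_lr = lr * (decay ** num_iter)
        for group in optimizer.param_groups:
            group['lr'] = new_lr
        return new_lr

Under that reading, the default decay of 0.99999925 halves the learning rate only after roughly 900,000 iterations, so the schedule is very gentle.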