ICARL#

Arguments#

Options

--compute_theoretical_best 0|1|True|False -> bool

Help: Compute the NCM classifier in the theoretical case where all the training samples are stored, AND extend the buffer to equalize the number of samples? (as in the original code)

  • Default: 0

--use_original_icarl_transform 0|1|True|False -> bool

Help: Use the original iCaRL transform?

  • Default: 0

--opt_wd float

Help: Optimizer weight decay

  • Default: 1e-05

Rehearsal arguments

Arguments shared by all rehearsal-based methods.

--buffer_size int

Help: The size of the memory buffer.

  • Default: None

--minibatch_size int

Help: The batch size of the memory buffer.

  • Default: None
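The `0|1|True|False -> bool` notation above means a flag accepts any of those four tokens on the command line and is parsed into a Python bool. Below is a minimal argparse sketch of how such flags behave; the `binary_to_boolean_type` converter shown here is an illustrative stand-in, not necessarily the library's actual implementation:

```python
import argparse

def binary_to_boolean_type(value):
    # Hypothetical converter: accepts 0|1|True|False (as strings)
    # and returns a Python bool, mirroring the "-> bool" notation above.
    v = str(value).strip().lower()
    if v in ("1", "true"):
        return True
    if v in ("0", "false"):
        return False
    raise argparse.ArgumentTypeError(f"expected 0|1|True|False, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--compute_theoretical_best", type=binary_to_boolean_type, default=0)
parser.add_argument("--use_original_icarl_transform", type=binary_to_boolean_type, default=0)
parser.add_argument("--opt_wd", type=float, default=1e-05)
parser.add_argument("--buffer_size", type=int, default=None)     # rehearsal: memory size
parser.add_argument("--minibatch_size", type=int, default=None)  # rehearsal: replay batch size

args = parser.parse_args(["--buffer_size", "2000", "--use_original_icarl_transform", "1"])
print(args.use_original_icarl_transform)  # True
```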

Classes#

class models.icarl.ICarl(backbone, loss, args, transform, dataset=None)[source]#

Bases: ContinualModel

Continual Learning via iCaRL.

COMPATIBILITY: List[str] = ['class-il', 'task-il']#
NAME: str = 'icarl'#
begin_task(dataset)[source]#
static binary_cross_entropy(pred, y)[source]#
compute_class_means()[source]#

Computes a vector representing the mean features for each class (see the nearest-mean-of-exemplars sketch after the class reference below).

end_task(dataset)[source]#
forward(x)[source]#
get_loss(inputs, labels, task_idx, logits)[source]#

Computes the loss tensor (a hedged sketch of the classification-plus-distillation scheme appears after the class reference below).

Parameters:
  • inputs (Tensor) – the images to be fed to the network

  • labels (Tensor) – the ground-truth labels

  • task_idx (int) – the task index

  • logits (Tensor) – the logits of the old network

Returns:

the differentiable loss value

Return type:

Tensor

static get_parser(parser)[source]#
Return type:

ArgumentParser

observe(inputs, labels, not_aug_inputs, logits=None, epoch=None)[source]#
wd()[source]#
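For orientation, `get_loss` pairs a classification term with a distillation term through `binary_cross_entropy`: targets for the classes of the current task are one-hot labels, while targets for previously seen classes are the sigmoid of the old network's logits. The following is a hedged sketch of that scheme; the class-range bookkeeping (`n_past_classes`, `n_seen_classes`) and the helper name `icarl_style_loss` are illustrative assumptions, so consult the [source] link for the exact indexing:

```python
import torch
import torch.nn.functional as F

def binary_cross_entropy(pred, y):
    # BCE over sigmoid outputs, averaged over the batch
    # (matches the signature of ICarl.binary_cross_entropy above).
    return -(y * torch.log(pred) + (1 - y) * torch.log(1 - pred)).sum(dim=1).mean()

def icarl_style_loss(outputs, labels, n_past_classes, n_seen_classes, old_logits=None):
    # Hypothetical helper illustrating the idea behind ICarl.get_loss.
    pred = torch.sigmoid(outputs[:, :n_seen_classes])
    targets = F.one_hot(labels, num_classes=n_seen_classes).float()
    if old_logits is not None and n_past_classes > 0:
        # Distillation: replace targets for old classes with the
        # old network's sigmoided responses.
        targets[:, :n_past_classes] = torch.sigmoid(old_logits[:, :n_past_classes])
    return binary_cross_entropy(pred, targets)
```

During training, `observe` applies this loss to the incoming batch; see the source for how buffer samples and the stored `logits` enter the computation.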
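At inference, `forward` implements iCaRL's nearest-mean-of-exemplars classifier: `compute_class_means` averages normalized backbone features per class over the buffered exemplars, and a test image is assigned to the class with the closest mean. A minimal sketch, assuming the backbone exposes a `features(x)` method and that every class has at least one exemplar in the buffer:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def compute_class_means(backbone, buf_x, buf_y, num_classes):
    # One mean feature vector per class, computed from buffered exemplars.
    feats = F.normalize(backbone.features(buf_x), dim=1)
    means = torch.stack([feats[buf_y == c].mean(dim=0) for c in range(num_classes)])
    return F.normalize(means, dim=1)

@torch.no_grad()
def ncm_forward(backbone, x, class_means):
    # Negative distance to each class mean acts as the "logit";
    # argmax over the result picks the nearest mean.
    feats = F.normalize(backbone.features(x), dim=1)
    return -torch.cdist(feats, class_means)
```

With --compute_theoretical_best enabled, the means are instead computed as if all training samples were stored, per the flag's help text above.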

Functions#

models.icarl.c100_transform(inputs)[source]#

Original augmentation from https://github.com/srebuffi/iCaRL/blob/master/iCaRL-TheanoLasagne/utils_cifar100.py
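The linked Theano/Lasagne file augments CIFAR batches with zero-padding, random crops back to 32x32, and random horizontal flips. Below is a hedged PyTorch sketch of that recipe; the 4-pixel pad and per-image randomization follow a reading of the linked code, so treat it as an approximation rather than the exact `c100_transform` behavior:

```python
import torch
import torch.nn.functional as F

def c100_transform_sketch(inputs):
    # Hypothetical re-implementation of the original augmentation:
    # zero-pad each 32x32 image by 4 pixels, take a random 32x32 crop,
    # and flip horizontally with probability 0.5, per image.
    padded = F.pad(inputs, (4, 4, 4, 4))              # (B, C, 40, 40)
    out = torch.empty_like(inputs)
    for i in range(inputs.size(0)):
        top = torch.randint(0, 9, (1,)).item()        # offsets 0..8 (40 - 32)
        left = torch.randint(0, 9, (1,)).item()
        crop = padded[i, :, top:top + 32, left:left + 32]
        if torch.rand(1).item() < 0.5:
            crop = crop.flip(-1)                      # horizontal flip
        out[i] = crop
    return out
```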