iCaRL#
Arguments#
Options
- --compute_theoretical_best : 0|1|True|False -> bool
Help: Compute the NCM classifier in the theoretical case where all the training samples are stored, and extend the buffer to equalize the number of samples per class? (as in the original code)
Default:
0
- --use_original_icarl_transform : 0|1|True|False -> bool
Help: Use the original iCaRL transform?
Default:
0
- --opt_wd : float
Help: Optimizer weight decay
Default:
1e-05
Rehearsal arguments
Arguments shared by all rehearsal-based methods.
- --buffer_size : int
Help: The size of the memory buffer.
Default:
None
- --minibatch_size : int
Help: The batch size of the memory buffer.
Default:
None
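The two rehearsal parameters play distinct roles: `--buffer_size` caps how many past examples are retained across tasks, while `--minibatch_size` is how many stored examples are replayed per training step. A minimal sketch of this interaction, using reservoir sampling for illustration (iCaRL itself selects exemplars by herding, and the class name `ReservoirBuffer` and its methods are illustrative, not this library's API):

```python
import random

class ReservoirBuffer:
    """Illustrative fixed-capacity memory buffer (not the library's Buffer class)."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size  # corresponds to --buffer_size
        self.data = []
        self.num_seen = 0

    def add(self, example):
        # Reservoir sampling: every example seen so far ends up in the
        # buffer with equal probability, regardless of arrival order.
        if len(self.data) < self.buffer_size:
            self.data.append(example)
        else:
            idx = random.randrange(self.num_seen + 1)
            if idx < self.buffer_size:
                self.data[idx] = example
        self.num_seen += 1

    def sample(self, minibatch_size):
        # Draw a rehearsal minibatch (corresponds to --minibatch_size).
        k = min(minibatch_size, len(self.data))
        return random.sample(self.data, k)
```

At each step the rehearsal minibatch is typically concatenated with the current task's batch before the forward pass.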
Classes#
- class models.icarl.ICarl(backbone, loss, args, transform, dataset=None)[source]#
Bases:
ContinualModel
Continual Learning via iCaRL.
- get_loss(inputs, labels, task_idx, logits)[source]#
Computes the loss tensor.
- Parameters:
inputs (Tensor) – the images to be fed to the network
labels (Tensor) – the ground-truth labels
task_idx (int) – the task index
logits (Tensor) – the logits of the old network
- Returns:
the differentiable loss value
- Return type:
Tensor
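The role of the `logits` argument can be illustrated with the loss used in the iCaRL paper: a binary cross-entropy over all class outputs, where targets for classes of the current task are the one-hot ground truth and targets for previously seen classes are the sigmoid of the old network's logits (distillation). The sketch below is an assumption-laden NumPy illustration of that scheme, not this library's exact implementation; the function name and `num_old_classes` parameter are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def icarl_style_loss(outputs, labels, old_logits, num_old_classes):
    """Illustrative iCaRL-style loss (hypothetical, not get_loss itself).

    Binary cross-entropy against mixed targets: one-hot ground truth for
    current-task classes, sigmoid of the old network's logits for the
    previously seen classes (distillation term).
    """
    n, num_classes = outputs.shape
    targets = np.eye(num_classes)[labels]  # one-hot ground-truth labels
    if old_logits is not None and num_old_classes > 0:
        # Replace old-class targets with the frozen old network's outputs.
        targets[:, :num_old_classes] = sigmoid(old_logits[:, :num_old_classes])
    probs = sigmoid(outputs)
    eps = 1e-7  # numerical safety for the logarithms
    bce = -(targets * np.log(probs + eps)
            + (1.0 - targets) * np.log(1.0 - probs + eps))
    return bce.mean()
```

With `old_logits=None` (first task) the expression reduces to plain binary cross-entropy against the one-hot labels, matching the intuition that there is nothing to distill yet.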