iCaRL LiDER#

Arguments#

Options

--alpha_lip_lambda (float)

Help: Lambda parameter for the Lipschitz minimization loss on buffer samples.

  • Default: 0

--beta_lip_lambda (float)

Help: Lambda parameter for the Lipschitz budget distribution loss.

  • Default: 0

--headless_init_act (str)

Help: Activation function used during headless initialization.

  • Default: relu

  • Choices: relu, lrelu

--grad_iter_step (int)

Help: Step from which to enable gradient computation.

  • Default: -2
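The two lambdas above weight the LiDER regularizers against the base task loss. A minimal sketch of how such a weighted sum behaves (the function name `total_loss` and the individual loss terms are illustrative, not the actual implementation):

```python
def total_loss(task_loss, lip_min_loss, lip_budget_loss,
               alpha_lip_lambda=0.0, beta_lip_lambda=0.0):
    """Weighted sum of the task loss and the two LiDER penalties.

    With both lambdas at their default of 0, the method reduces
    to plain iCaRL (no Lipschitz regularization).
    """
    return (task_loss
            + alpha_lip_lambda * lip_min_loss
            + beta_lip_lambda * lip_budget_loss)
```

With the defaults, `total_loss(1.0, 2.0, 3.0)` is just the task loss, `1.0`.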

Rehearsal arguments

Arguments shared by all rehearsal-based methods.

--buffer_size (int)

Help: The size of the memory buffer.

  • Default: None

--minibatch_size (int)

Help: The batch size of the memory buffer.

  • Default: None
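The full option set above can be reproduced as a stand-alone `argparse` sketch; the real parser is assembled by `get_parser`, so this is only an illustration of the documented types, defaults, and choices:

```python
from argparse import ArgumentParser

# Illustrative parser mirroring the documented arguments.
parser = ArgumentParser()
parser.add_argument('--alpha_lip_lambda', type=float, default=0)
parser.add_argument('--beta_lip_lambda', type=float, default=0)
parser.add_argument('--headless_init_act', type=str, default='relu',
                    choices=['relu', 'lrelu'])
parser.add_argument('--grad_iter_step', type=int, default=-2)
parser.add_argument('--buffer_size', type=int)
parser.add_argument('--minibatch_size', type=int)

args = parser.parse_args(['--buffer_size', '500',
                          '--alpha_lip_lambda', '0.5'])
```

Unspecified options keep their documented defaults, e.g. `args.headless_init_act == 'relu'`.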

Classes#

class models.icarl_lider.ICarlLider(backbone, loss, args, transform, dataset=None)[source]#

Bases: LiderOptimizer

Continual Learning via iCaRL. Treated with LiDER!

COMPATIBILITY: List[str] = ['class-il', 'task-il']#
NAME: str = 'icarl_lider'#
begin_task(dataset)[source]#
static binary_cross_entropy(pred, y)[source]#
compute_class_means()[source]#

Computes a vector representing mean features for each class.
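A sketch of this computation, assuming iCaRL's usual nearest-mean-of-exemplars setup (the function name `class_means` and the NumPy formulation are illustrative; the actual method operates on the model's feature extractor and buffer):

```python
import numpy as np

def class_means(features, labels):
    """Return the L2-normalised mean feature vector for each class.

    features: (N, D) array of extracted features
    labels:   (N,) array of integer class labels
    """
    means = {}
    for c in np.unique(labels):
        m = features[labels == c].mean(axis=0)
        means[int(c)] = m / np.linalg.norm(m)  # normalise as in iCaRL
    return means
```

At test time, iCaRL classifies a sample by the class whose mean is closest to its feature vector.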

end_task(dataset)[source]#
forward(x)[source]#
get_loss(inputs, labels, task_idx, logits)[source]#

Computes the loss tensor.

Parameters:
  • inputs (Tensor) – the images to be fed to the network

  • labels (Tensor) – the ground-truth labels

  • task_idx (int) – the task index

  • logits (Tensor) – the logits of the old network

Returns:

the loss tensor

Return type:

torch.Tensor
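A NumPy sketch of an iCaRL-style loss consistent with the `binary_cross_entropy` helper above: targets for already-seen classes are the sigmoid of the old network's logits (distillation), while targets for new classes come from the one-hot labels. The function name `icarl_loss` and the exact class masking are assumptions, not the verbatim implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def icarl_loss(outputs, labels, old_logits, n_old_classes):
    """Binary cross-entropy over sigmoid outputs with mixed targets:
    distillation targets for old classes, one-hot labels for new ones."""
    targets = np.eye(outputs.shape[1])[labels].astype(float)
    if old_logits is not None:
        # Replace old-class targets with the previous network's soft outputs.
        targets[:, :n_old_classes] = sigmoid(old_logits[:, :n_old_classes])
    p = sigmoid(outputs)
    eps = 1e-12  # numerical stability
    return -np.mean(targets * np.log(p + eps)
                    + (1 - targets) * np.log(1 - p + eps))
```

On the first task (`old_logits=None`) this reduces to plain multi-label BCE against one-hot labels.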

static get_parser(parser)[source]#
Return type:

ArgumentParser

observe(inputs, labels, not_aug_inputs, logits=None, epoch=None)[source]#
to(device)[source]#