LUCIR#

Arguments#

Options

--lamda_base (float)

Help: Regularization weight for embedding cosine similarity.

  • Default: 5.0

--lamda_mr (float)

Help: Regularization weight for the margin-ranking loss.

  • Default: 1.0

--k_mr (int)

Help: K for the margin-ranking loss.

  • Default: 2

--mr_margin (float)

Help: Margin for the margin-ranking loss.

  • Default: 0.5

--fitting_epochs (int)

Help: Number of epochs to fine-tune on the coreset after each task.

  • Default: 20

--lr_finetune (float)

Help: Learning rate for fine-tuning.

  • Default: 0.01

--imprint_weights (0|1|True|False -> bool)

Help: Apply weight imprinting?

  • Default: 1

Rehearsal arguments

Arguments shared by all rehearsal-based methods.

--buffer_size (int)

Help: The size of the memory buffer.

  • Default: None

--minibatch_size (int)

Help: The batch size of the memory buffer.

  • Default: None
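A minimal sketch of wiring these options into an argument parser, assuming `Lucir.get_parser` registers both the LUCIR-specific and rehearsal flags listed above on a standard `ArgumentParser` (the flag values below are illustrative):

```python
from argparse import ArgumentParser

from models.lucir import Lucir

# Let LUCIR register its model-specific (and rehearsal) options on a fresh parser.
parser = Lucir.get_parser(ArgumentParser())

# Parse an illustrative configuration; flag names follow the option list above.
args = parser.parse_args([
    '--lamda_base', '5.0',
    '--lamda_mr', '1.0',
    '--k_mr', '2',
    '--mr_margin', '0.5',
    '--fitting_epochs', '20',
    '--lr_finetune', '0.01',
    '--imprint_weights', '1',
    '--buffer_size', '2000',
    '--minibatch_size', '32',
])
print(args.lamda_base, args.buffer_size)
```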

Classes#

class models.lucir.CustomClassifier(in_features, cpt, n_tasks)[source]#

Bases: Module

forward(x)[source]#
noscale_forward(x)[source]#
reset_parameters()[source]#
reset_weight(i)[source]#
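For intuition, the following is an illustrative cosine-normalized classifier in the spirit of `CustomClassifier`; it is a sketch under assumptions (the learnable scale and initialization are not taken from the library code), not the actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CosineClassifierSketch(nn.Module):
    """Illustrative cosine classifier (sketch, not the library's CustomClassifier)."""

    def __init__(self, in_features: int, cpt: int, n_tasks: int):
        super().__init__()
        # One prototype per class, covering all tasks: (n_tasks * cpt) rows.
        self.weight = nn.Parameter(torch.empty(n_tasks * cpt, in_features))
        # Learnable scale applied on top of the raw cosine scores (assumption).
        self.sigma = nn.Parameter(torch.ones(1))
        nn.init.kaiming_uniform_(self.weight, nonlinearity='linear')

    def noscale_forward(self, x):
        # Plain cosine similarity between normalized features and class prototypes.
        return F.linear(F.normalize(x, dim=1), F.normalize(self.weight, dim=1))

    def forward(self, x):
        # Scaled cosine scores used as logits.
        return self.sigma * self.noscale_forward(x)
```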
class models.lucir.Lucir(backbone, loss, args, transform, dataset=None)[source]#

Bases: ContinualModel

Continual learning via LUCIR (Learning a Unified Classifier Incrementally via Rebalancing).

COMPATIBILITY: List[str] = ['class-il', 'task-il']#
NAME: str = 'lucir'#
begin_task(dataset)[source]#
end_task(dataset)[source]#
fit_buffer(opt_steps)[source]#
forward(x)[source]#
get_loss(inputs, labels, task_idx)[source]#

Computes the loss tensor.

Parameters:
  • inputs (Tensor) – the images to be fed to the network

  • labels (Tensor) – the ground-truth labels

  • task_idx (int) – the task index

Returns:

the differentiable loss value

Return type:

Tensor
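As a rough sketch of how the terms combine in the LUCIR objective (following the paper, not the exact method body): a classification loss on the current batch, a less-forget cosine-embedding distillation against the frozen old network, and the margin-ranking term, weighted by the `--lamda_*` options above. All names and the exact weighting below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F


def lucir_style_loss(logits, labels, feats, old_feats, lamda, mr_loss, lamda_mr):
    """Sketch of the LUCIR objective (illustrative, not the method's signature).

    `feats` / `old_feats` are embeddings from the current and the frozen old network;
    `mr_loss` stands for the margin-ranking term (see lucir_batch_hard_triplet_loss below).
    """
    # Standard classification loss on all seen classes.
    ce = F.cross_entropy(logits, labels)
    # Less-forget constraint: keep new embeddings aligned with the old ones (cosine distance).
    target = torch.ones(feats.size(0), device=feats.device)
    distill = F.cosine_embedding_loss(feats, old_feats.detach(), target)
    return ce + lamda * distill + lamda_mr * mr_loss
```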

static get_parser(parser)[source]#
Return type:

ArgumentParser

imprint_weights(dataset)[source]#
observe(inputs, labels, not_aug_inputs, logits=None, epoch=None, fitting=False)[source]#
update_classifier()[source]#

Functions#

models.lucir.lucir_batch_hard_triplet_loss(labels, embeddings, k, margin, num_old_classes)[source]#

LUCIR triplet loss.
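An illustrative re-implementation of the idea (a sketch, not the library code): for each sample of an old class, the cosine score of its ground-truth class is pushed above the K hardest (highest-scoring) new-class responses by at least the given margin:

```python
import torch
import torch.nn.functional as F


def margin_ranking_sketch(labels, scores, k, margin, num_old_classes):
    """Sketch of LUCIR's margin-ranking term (illustrative only)."""
    old_mask = labels < num_old_classes
    if not old_mask.any():
        # No old-class samples in the batch: the term contributes nothing.
        return scores.new_zeros(())
    old_scores = scores[old_mask]                                      # (n_old, n_classes)
    gt = old_scores.gather(1, labels[old_mask].unsqueeze(1))           # ground-truth scores
    hard_new = old_scores[:, num_old_classes:].topk(k, dim=1).values   # K hardest new classes
    ones = torch.ones_like(hard_new)
    # Enforce: score(ground truth) >= score(hard new class) + margin.
    return F.margin_ranking_loss(gt.expand_as(hard_new), hard_new, ones, margin=margin)
```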