CNLL#

Arguments#

Options

--cnll_debug_mode 0|1|True|False -> bool

Help: Run CNLL with just a few iterations?

  • Default: False

--unlimited_buffer 0|1|True|False -> bool

Help: Use unlimited buffers?

  • Default: False

--delayed_buffer_size int

Help: Size of the delayed buffer.

  • Default: 500

--noisy_buffer_size int

Help: Size of the noisy buffer.

  • Default: 1000

--warmup_epochs int

Help: Warmup epochs

  • Default: 5

--finetune_epochs int

Help: Finetuning epochs

  • Default: 10

--warmup_lr float

Help: Warmup learning rate

  • Default: 0.001

--subsample_clean int

Help: Number of high-confidence samples to subsample from the clean buffer (N_1 in the paper)

  • Default: 25

--subsample_noisy int

Help: Number of high-confidence samples to subsample from the noisy buffer (N_2 in the paper)

  • Default: 50

--sharp_temp float

Help: Temperature for label co-guessing

  • Default: 0.5
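This temperature is typically used to sharpen the averaged prediction during label co-guessing, as in MixMatch/DivideMix-style pipelines. A minimal sketch of the standard sharpening operation (the function name and shapes are illustrative, not the library's API):

```python
import numpy as np

def sharpen(p, temperature=0.5):
    """Sharpen a probability distribution: p_i^(1/T), renormalized.

    Temperatures below 1 push mass toward the most likely class.
    """
    p = np.asarray(p, dtype=float)
    sharpened = p ** (1.0 / temperature)
    return sharpened / sharpened.sum(axis=-1, keepdims=True)

# With T=0.5 the distribution becomes more peaked:
p = np.array([0.6, 0.3, 0.1])
print(sharpen(p, 0.5))  # roughly [0.78, 0.20, 0.02]
```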

--mixup_alpha float

Help: Alpha parameter of Beta distribution for mixup

  • Default: 4
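The alpha parameter controls the Beta distribution from which the mixup coefficient is drawn. A sketch of the usual mixup step under these assumptions (taking `max(lam, 1 - lam)`, as DivideMix-style methods do, is an assumption here, not confirmed by this page):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=4.0, rng=None):
    """Mix two samples (or batches) with a Beta(alpha, alpha) coefficient.

    Keeping lam >= 0.5 biases the mixed sample toward the first input.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

With alpha = 4 the Beta distribution concentrates around 0.5, so mixed samples are close to an even blend of the two inputs.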

--lambda_u float

Help: Weight for unsupervised loss

  • Default: 30

--lambda_c float

Help: Weight for contrastive loss

  • Default: 0.025

--finetune_lr float

Help: Finetuning learning rate

  • Default: 0.1

Rehearsal arguments

Arguments shared by all rehearsal-based methods.

--buffer_size int

Help: The size of the memory buffer.

  • Default: None

--minibatch_size int

Help: The batch size of the memory buffer.

  • Default: None
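Rehearsal methods keep a fixed-size memory of past examples and replay minibatches from it. A common way to fill such a buffer is reservoir sampling; the sketch below is an illustrative stand-in, not the library's actual `Buffer` class:

```python
import random

class ReservoirBuffer:
    """Fixed-size memory filled by reservoir sampling: after seeing n
    examples, each one is retained with probability buffer_size / n,
    giving a uniform sample over the whole stream."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.data = []
        self.seen = 0

    def add(self, item):
        if len(self.data) < self.buffer_size:
            self.data.append(item)
        else:
            # Replace a stored item with decreasing probability.
            idx = random.randrange(self.seen + 1)
            if idx < self.buffer_size:
                self.data[idx] = item
        self.seen += 1

    def sample(self, minibatch_size):
        k = min(minibatch_size, len(self.data))
        return random.sample(self.data, k)
```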

Classes#

class models.cnll.Cnll(backbone, loss, args, transform, dataset=None)[source]#

Bases: ContinualModel

Implementation of CNLL: A Semi-supervised Approach For Continual Noisy Label Learning from CVPRW 2022.

COMPATIBILITY: List[str] = ['class-il', 'task-il']#
NAME: str = 'cnll'#
begin_task(dataset)[source]#
coguess_label(xa, xb, y)[source]#
finetune_on_buffers()[source]#

Fit the finetuned model on the purified and noisy buffers.

static get_parser(parser)[source]#
get_partition_buffer_indexes(buffer)[source]#
observe(inputs, labels, not_aug_inputs, true_labels)[source]#
sample_selection_JSD(buffer)[source]#
ssl_loss(all_inputs, all_targets, batch_size, c_iter)[source]#
warm_up_on_buffer(buffer)[source]#
class models.cnll.Dataset(data, targets=None, transform=None, device='cpu')[source]#

Bases: Dataset

class models.cnll.Jensen_Shannon(*args, **kwargs)[source]#

Bases: Module

forward(p, q)[source]#
class models.cnll.NegEntropy[source]#

Bases: object

class models.cnll.SemiLoss(args)[source]#

Bases: object

Functions#

models.cnll.get_hard_transform(args, dataset)[source]#
models.cnll.kl_divergence(p, q)[source]#
models.cnll.linear_rampup(current, warm_up, rampup_length=16)[source]#
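The per-sample Jensen-Shannon divergence used for sample selection and the linear ramp-up schedule for the loss weight can be sketched as follows. These are illustrative re-implementations assuming the standard definitions, not the module's exact code:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) along the last axis, with clipping for stability."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def jensen_shannon(p, q):
    """JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), m = (p + q) / 2.

    Symmetric and bounded in [0, ln 2]; a large divergence between the
    given label and the model's prediction flags a likely-noisy sample.
    """
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def linear_rampup(current, warm_up, rampup_length=16):
    """Ramp a weight linearly from 0 to 1 over `rampup_length` epochs
    once the warm-up phase ends."""
    t = np.clip((current - warm_up) / rampup_length, 0.0, 1.0)
    return float(t)
```

The ramped value is typically multiplied by `--lambda_u` so the unsupervised loss is phased in gradually after warm-up.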