CNLL#
Arguments#
Options
- --cnll_debug_mode : 0|1|True|False -> bool
  Help: Run CNLL with just a few iterations?
  Default: False
- --unlimited_buffer : 0|1|True|False -> bool
  Help: Use unlimited buffers?
  Default: False
- --delayed_buffer_size : int
  Help: Size of the delayed buffer.
  Default: 500
- --noisy_buffer_size : int
  Help: Size of the noisy buffer.
  Default: 1000
- --warmup_epochs : int
  Help: Number of warmup epochs.
  Default: 5
- --finetune_epochs : int
  Help: Number of finetuning epochs.
  Default: 10
- --warmup_lr : float
  Help: Warmup learning rate.
  Default: 0.001
- --subsample_clean : int
  Help: Number of high-confidence samples to subsample from the clean buffer (N_1 in the paper).
  Default: 25
- --subsample_noisy : int
  Help: Number of high-confidence samples to subsample from the noisy buffer (N_2 in the paper).
  Default: 50
- --sharp_temp : float
  Help: Temperature for label co-guessing.
  Default: 0.5
- --mixup_alpha : float
  Help: Alpha parameter of the Beta distribution for mixup.
  Default: 4
- --lambda_u : float
  Help: Weight for the unsupervised loss.
  Default: 30
- --lambda_c : float
  Help: Weight for the contrastive loss.
  Default: 0.025
- --finetune_lr : float
  Help: Finetuning learning rate.
  Default: 0.1
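The sharpening and mixup options above follow the usual MixMatch-style recipe: co-guessed labels are sharpened by raising probabilities to the power 1/temperature, and inputs are interpolated with a coefficient drawn from Beta(alpha, alpha). A minimal plain-Python sketch of what `--sharp_temp` and `--mixup_alpha` control (illustrative only, not the repository's implementation):

```python
import random

def sharpen(probs, temp=0.5):
    # Raise each probability to 1/temp and renormalize; lower temp
    # makes the guessed label distribution peakier (--sharp_temp).
    powered = [p ** (1.0 / temp) for p in probs]
    total = sum(powered)
    return [p / total for p in powered]

def mixup_lambda(alpha=4.0):
    # Draw the mixup interpolation coefficient from Beta(alpha, alpha)
    # (--mixup_alpha); the common trick of taking max(lam, 1 - lam)
    # keeps the first sample dominant.
    lam = random.betavariate(alpha, alpha)
    return max(lam, 1.0 - lam)

guessed = sharpen([0.6, 0.3, 0.1], temp=0.5)  # peakier than the input
lam = mixup_lambda(4.0)                       # always in [0.5, 1.0]
```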
Rehearsal arguments#
Arguments shared by all rehearsal-based methods.
- --buffer_size : int
  Help: The size of the memory buffer.
  Default: None
- --minibatch_size : int
  Help: The batch size of the memory buffer.
  Default: None
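Rehearsal-based methods keep a bounded memory of past examples: `--buffer_size` caps how many items are stored, and `--minibatch_size` sets how many stored items are replayed per training step. A common way to fill such a fixed-capacity buffer is reservoir sampling; the sketch below is a generic illustration, not the repository's `Buffer` class:

```python
import random

class SimpleBuffer:
    """Fixed-capacity memory filled by reservoir sampling."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.data = []
        self.seen = 0  # total number of examples offered so far

    def add(self, example):
        if len(self.data) < self.buffer_size:
            self.data.append(example)
        else:
            # Each incoming example replaces a stored one with
            # probability buffer_size / (seen + 1).
            idx = random.randrange(self.seen + 1)
            if idx < self.buffer_size:
                self.data[idx] = example
        self.seen += 1

    def sample(self, minibatch_size):
        # Draw a replay minibatch without replacement.
        return random.sample(self.data, min(minibatch_size, len(self.data)))
```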
Classes#
- class models.cnll.Cnll(backbone, loss, args, transform, dataset=None)[source]#
Bases:
ContinualModel
Implementation of CNLL: A Semi-supervised Approach For Continual Noisy Label Learning from CVPRW 2022.
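The `--lambda_u` and `--lambda_c` options weight the unsupervised and contrastive terms of the training objective relative to the supervised term. A hedged sketch of that weighted combination (the function and variable names are illustrative, not attributes of the `Cnll` class):

```python
def total_loss(loss_supervised, loss_unsupervised, loss_contrastive,
               lambda_u=30.0, lambda_c=0.025):
    # Weighted sum controlled by --lambda_u and --lambda_c;
    # the supervised term carries unit weight.
    return (loss_supervised
            + lambda_u * loss_unsupervised
            + lambda_c * loss_contrastive)

# total_loss(1.0, 0.1, 2.0) -> 1.0 + 30*0.1 + 0.025*2.0 = 4.05
```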