SPR#
Arguments#
Options
- --spr_debug_mode : 0|1|True|False -> bool
Help: Run SPR with just a few iterations?
Default: False
- --delayed_buffer_size : int
Help: Size of the delayed buffer.
Default: 500
- --fitting_lr : float
Help: LR used during fine-tuning (classifier buffer fitting on P)
Default: 0.002
- --fitting_epochs : int
Help: Number of epochs used during fine-tuning (classifier buffer fitting on P)
Default: 50
- --inner_train_epochs : int
Help: Inner train epochs for SSL (base net)
Default: 3000
- --expert_train_epochs : int
Help: Inner train epochs for SSL (expert)
Default: 4000
- --simclr_temp : float
Help: Temperature for the SimCLR SSL loss
Default: 0.5
- --fitting_sched_lr_stepsize : int
Help: Step size for the LR scheduler during fine-tuning (classifier buffer fitting on P)
Default: 300
- --fitting_sched_lr_gamma : float
Help: Gamma for the LR scheduler during fine-tuning (classifier buffer fitting on P)
Default: 0.1
- --fitting_batch_size : int
Help: Batch size for fine-tuning (classifier buffer fitting on P)
Default: 16
- --fitting_clip_value : float
Help: Gradient clipping value for fine-tuning
Default: 0.5
- --E_max : int
Help: Number of stochastic ensemble members for the expert
Default: 5
Rehearsal arguments
Arguments shared by all rehearsal-based methods.
- --buffer_size : int
Help: The size of the memory buffer.
Default: None
- --minibatch_size : int
Help: The batch size of the memory buffer.
Default: None
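The sketch below is a minimal, illustrative argparse registration that mirrors a few of the options listed above (names, types, and defaults are taken from this page); it is not the actual Mammoth parser wiring, and the boolean handling of --spr_debug_mode is simplified.

```python
# Illustrative only: mirrors a subset of the SPR and rehearsal options above.
# The real Mammoth argument registration may differ (e.g. a 0|1|True|False
# -> bool converter for --spr_debug_mode instead of int choices).
from argparse import ArgumentParser

def add_spr_args(parser: ArgumentParser) -> None:
    parser.add_argument('--spr_debug_mode', type=int, choices=[0, 1], default=0,
                        help='Run SPR with just a few iterations?')
    parser.add_argument('--delayed_buffer_size', type=int, default=500,
                        help='Size of the delayed buffer.')
    parser.add_argument('--simclr_temp', type=float, default=0.5,
                        help='Temperature for the SimCLR SSL loss.')
    parser.add_argument('--E_max', type=int, default=5,
                        help='Number of stochastic ensemble members for the expert.')
    # Rehearsal arguments shared by all rehearsal-based methods.
    parser.add_argument('--buffer_size', type=int, required=True,
                        help='The size of the memory buffer.')
    parser.add_argument('--minibatch_size', type=int, default=None,
                        help='The batch size of the memory buffer.')
```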
Classes#
- class models.spr.SimCLR(transform, temp=0.5, eps=1e-06, filter_bs_len=None, correlation_mask=None)[source]#
Bases: object
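For reference, the snippet below is a minimal sketch of an NT-Xent (SimCLR-style) contrastive loss, showing the role played by the temperature controlled by --simclr_temp; it is a generic formulation, not the exact code of models.spr.SimCLR.

```python
# Generic NT-Xent loss sketch; the temperature corresponds to --simclr_temp.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temp: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D), unit norm
    sim = z @ z.t() / temp                                   # cosine similarity / temperature
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))               # exclude self-similarity
    # The positive for sample i is its other augmented view (i + n, or i - n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```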
- class models.spr.Spr(backbone, loss, args, transform, dataset=None)[source]#
Bases: ContinualModel
Implementation of Continual Learning on Noisy Data Streams via Self-Purified Replay from ICCV 2021.
- OVERRIDE_SUPPORT_DISTRIBUTED = True#
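As a rough guide to how the fitting_* options interact, the following is a minimal sketch of a classifier fitting loop on the purified buffer P, assuming a standard PyTorch setup. The function name fit_classifier, the SGD optimizer, and norm-based gradient clipping are assumptions for illustration; the actual SPR fitting routine may use different choices.

```python
# Hypothetical sketch: how --fitting_lr, --fitting_epochs, --fitting_batch_size,
# --fitting_sched_lr_stepsize, --fitting_sched_lr_gamma, and --fitting_clip_value
# could drive classifier fitting on the purified buffer P.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def fit_classifier(classifier, buffer_dataset, args, device='cpu'):
    loader = DataLoader(buffer_dataset, batch_size=args.fitting_batch_size, shuffle=True)
    opt = torch.optim.SGD(classifier.parameters(), lr=args.fitting_lr)
    sched = torch.optim.lr_scheduler.StepLR(
        opt, step_size=args.fitting_sched_lr_stepsize, gamma=args.fitting_sched_lr_gamma)
    classifier.train()
    for _ in range(args.fitting_epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = F.cross_entropy(classifier(x), y)
            loss.backward()
            # Clipping by norm is an assumption; clipping by value is equally plausible
            # given the option name --fitting_clip_value.
            torch.nn.utils.clip_grad_norm_(classifier.parameters(), args.fitting_clip_value)
            opt.step()
        sched.step()
```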