EVALUATE#
Functions#
- utils.evaluate.evaluate(model, dataset, last=False, return_loss=False)[source]#
Evaluates the top-1 accuracy of the model on each task seen so far.
The accuracy is computed for every task up to the current one, over the total number of classes seen so far.
- Parameters:
model (ContinualModel) – the model to be evaluated
dataset (ContinualDataset) – the continual dataset at hand
last (bool) – whether to evaluate only the last task
return_loss (bool) – whether to return the loss in addition to the accuracy
- Returns:
a tuple of two lists containing, respectively, the class-il and task-il accuracy for each task. If return_loss is True, the loss is also returned as a third element.
- Return type:
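The evaluation loop can be sketched as follows. This is an illustrative re-implementation, not Mammoth's actual code: `model`, `test_loaders`, and `n_classes_per_task` are hypothetical stand-ins for the `ContinualModel` and `ContinualDataset` arguments, and classes are assumed to be split evenly across tasks. Class-il accuracy uses the raw logits over all classes seen so far, while task-il accuracy restricts the prediction to the classes of the task at hand.

```python
import torch

@torch.no_grad()
def evaluate_sketch(model, test_loaders, n_classes_per_task):
    """Sketch of per-task class-il and task-il top-1 accuracy.

    `model` and `test_loaders` are hypothetical stand-ins for the
    ContinualModel and ContinualDataset used by the real function.
    """
    accs_class_il, accs_task_il = [], []
    for k, loader in enumerate(test_loaders):
        correct_cil = correct_til = total = 0
        for inputs, labels in loader:
            outputs = model(inputs)
            # class-il: predict over all classes seen so far
            correct_cil += (outputs.argmax(dim=1) == labels).sum().item()
            # task-il: mask out the responses of every other task
            masked = outputs.clone()
            masked[:, : k * n_classes_per_task] = -float('inf')
            masked[:, (k + 1) * n_classes_per_task:] = -float('inf')
            correct_til += (masked.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        accs_class_il.append(100.0 * correct_cil / total)
        accs_task_il.append(100.0 * correct_til / total)
    return accs_class_il, accs_task_il
```

Note how task-il accuracy is always at least as high as class-il accuracy: masking can only remove wrong answers from other tasks, never the correct one.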
- utils.evaluate.mask_classes(outputs, dataset, k)[source]#
Given the output tensor, the dataset at hand, and the current task index, masks the tensor by setting the responses for all other tasks to -inf. It is used to obtain the results for the task-il setting.
- Parameters:
outputs (Tensor) – the output tensor
dataset (ContinualDataset) – the continual dataset
k (int) – the task index
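The masking itself reduces to two slice assignments, sketched below. This is not Mammoth's implementation: the `n_classes_per_task` argument is a hypothetical stand-in for the per-task class count that the real function reads from the `ContinualDataset`, and the mask is applied in place on the output tensor, as the signature (no return value) suggests.

```python
import torch

def mask_classes_sketch(outputs: torch.Tensor, n_classes_per_task: int, k: int) -> None:
    """Sketch: in-place, set logits outside task k's class range to -inf.

    `n_classes_per_task` stands in for the per-task class count that the
    real function obtains from the ContinualDataset.
    """
    outputs[:, : k * n_classes_per_task] = -float('inf')          # earlier tasks
    outputs[:, (k + 1) * n_classes_per_task:] = -float('inf')     # later tasks
```

With the mask applied, `argmax` over the full output can only select a class belonging to task `k`, which is exactly the task-il protocol.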