How to Run Mammoth#
This section describes how to run experiments with Mammoth.
Basic Usage - Running from CLI#
From the project root, you can run:
python main.py --model <model_name> --dataset <dataset_name> [options]
or equivalently:
python utils/main.py --model <model_name> --dataset <dataset_name> [options]
# `utils/main.py` is kept only for backward compatibility.
You can list all available arguments by running:
python main.py --help
This lists the available models and datasets. For method- or dataset-specific options, you can run:
python main.py --model <model_name> --dataset <dataset_name> --help
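For example, to see the options specific to DER++ on Seq-CIFAR-100 (the method and dataset used in the examples below):
python main.py --model derpp --dataset seq-cifar100 --help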
Installation#
Before running Mammoth, ensure you have installed the required dependencies. You can do this by running:
# to install the basic dependencies
pip install -r requirements.txt
# to install all dependencies, including the optional ones
pip install -r requirements.txt -r requirements-optional.txt
or, if you are using uv:
# install the basic dependencies
uv sync
# to install all dependencies, including the optional ones
uv sync --extra extra
Note
Some models and datasets may require additional dependencies. You can find these in the requirements-optional.txt file.
Common Options#
--model: name of the continual learning method
--dataset: name of the dataset
--lr: learning rate
--savecheck: save a checkpoint at the end of training (last) or at the end of each task (task)
--validation: reserve a percentage of the training data for each class for validation
--wandb_entity and --wandb_project: specify the Weights & Biases entity and project for logging
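For instance, a single run combining several of these options might look like the following (here --validation 10 is assumed to reserve 10% of the training data of each class, and the Weights & Biases entity and project are placeholders you should replace with your own):
python main.py --model derpp --dataset seq-cifar100 --buffer_size 500 --lr 0.1 --savecheck last --validation 10 --wandb_entity <your_entity> --wandb_project <your_project>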
For more details on arguments, see Utils.
Examples#
Run DER++ on seq-cifar100:
python main.py --model derpp --dataset seq-cifar100 --buffer_size 500 --lr 0.1
Run with best hyperparameters:
python main.py --model derpp --dataset seq-cifar100 --buffer_size 500 --model_config best
Running Mammoth as a Library#
You can also use Mammoth programmatically in Python scripts or interactive sessions. Here’s a simple Python example:
# Import Mammoth functions
from mammoth import train, load_runner, get_avail_args
# Inspect available arguments for a specific model and dataset
required_args, optional_args = get_avail_args(dataset='seq-cifar10', model='sgd')
print('Required arguments:', required_args)
print('Optional arguments:', optional_args)
# Load runner for a particular model and dataset
model, dataset = load_runner(
'sgd', 'seq-cifar10', # The model and dataset names
{'lr': 0.1, 'n_epochs': 1, 'batch_size': 32} # Specify any additional arguments here
)
# Train the model
train(model, dataset)
See the examples/notebooks/basics.ipynb for a full notebook version.
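As a further sketch, the DER++ example from the CLI section above can be reproduced through the same API. The dictionary keys below are assumed to mirror the CLI flags (--lr, --buffer_size); check get_avail_args first to confirm what derpp actually expects:
# Inspect the arguments that derpp expects on seq-cifar100
from mammoth import train, load_runner, get_avail_args
required_args, optional_args = get_avail_args(dataset='seq-cifar100', model='derpp')
print('Required arguments:', required_args)
# Load and train DER++ with a 500-sample buffer, as in the CLI example
model, dataset = load_runner(
    'derpp', 'seq-cifar100',
    {'lr': 0.1, 'buffer_size': 500, 'n_epochs': 1, 'batch_size': 32}
)
train(model, dataset)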
Note
Differently from the CLI, the Python API only partially supports the SIGINT signal (Ctrl+C): sending a SIGINT stops training gracefully and keeps the current state of the model and dataset, but it does not save a checkpoint, so you will need to save one manually if needed.
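Since nothing is written to disk for you, a minimal sketch for saving a checkpoint manually could look like the following (this assumes the model returned by load_runner behaves like a standard PyTorch nn.Module, which is not guaranteed by this page):
import torch
# Hypothetical manual checkpoint: persist the model weights after (interrupted) training,
# assuming `model` from load_runner exposes a standard state_dict()
torch.save(model.state_dict(), 'mammoth_manual_checkpoint.pt')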
See Also#
Reproducibility
Checkpoints
Fast Training
Scripts