Augmenting Multimodal LLMs with
Self-Reflective Tokens for Knowledge-based
Visual Question Answering

* Equal contribution

CVPR 2025

Abstract

Multimodal LLMs (MLLMs) are the natural extension of large language models to handle multimodal inputs, combining text and image data. They have recently garnered attention due to their capability to address complex tasks involving both modalities. However, their effectiveness is limited to the knowledge acquired during training, which restricts their practical utility. In this work, we introduce a novel method to enhance the adaptability of MLLMs by integrating external knowledge sources. Our proposed model, Reflective LLaVA (ReflectiVA), utilizes reflective tokens to dynamically determine the need for external knowledge and predict the relevance of information retrieved from an external database. Tokens are trained following a two-stage, two-model training recipe. This ultimately enables the MLLM to manage external knowledge while preserving fluency and performance on tasks where external knowledge is not needed. Through our experiments, we demonstrate the efficacy of ReflectiVA for knowledge-based visual question answering, highlighting its superior performance compared to existing methods.

BibTeX

@InProceedings{cocchi2024augmenting,
    author    = {Cocchi, Federico and Moratelli, Nicholas and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
    title     = {{Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering}},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year      = {2025}
}

Proposed Method

Model Architecture

Extend model vocabulary with special reflective tokens:

  • [RET]: External knowledge retrieval needed
  • [NORET]: No external knowledge required
  • [REL]: Retrieved items are relevant
  • [NOREL]: Retrieved items are irrelevant
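The vocabulary extension above can be sketched with a toy tokenizer. `ToyTokenizer` and its starting vocabulary are hypothetical stand-ins; a real implementation would extend the MLLM's actual tokenizer and resize the LLM's embedding matrix accordingly.

```python
# Minimal sketch of extending a vocabulary with reflective tokens.
# ToyTokenizer is a hypothetical stand-in for the MLLM tokenizer.

REFLECTIVE_TOKENS = ["[RET]", "[NORET]", "[REL]", "[NOREL]"]

class ToyTokenizer:
    def __init__(self, vocab):
        self.vocab = dict(vocab)  # token -> id

    def add_special_tokens(self, tokens):
        """Append new tokens at the end of the vocabulary."""
        added = 0
        for tok in tokens:
            if tok not in self.vocab:
                self.vocab[tok] = len(self.vocab)
                added += 1
        return added  # number of new embeddings the model must allocate

tokenizer = ToyTokenizer({"<pad>": 0, "<bos>": 1, "<eos>": 2})
num_added = tokenizer.add_special_tokens(REFLECTIVE_TOKENS)
# The LLM embedding matrix would then be resized by `num_added` rows,
# and the new rows trained during the two-stage training recipe.
print(num_added)                 # 4
print(tokenizer.vocab["[RET]"])  # 3
```

At inference time, these four token IDs are the only extra machinery needed: the model emits them like any other token, and the decoding loop branches on which one it sees.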

Two-Stage Training Process

  • Stage 1: In-Article Discriminator
  • Stage 2: Reflective Token Training

Inference Time

  1. Assess need for external knowledge retrieval (emit [RET] or [NORET])
  2. if [RET] then retrieve Wikipedia pages from external KB
    • Chunk each page into N passages
    • Classify relevance of each passage (mark with [REL] or [NOREL])
    • Generate answer using only [REL] passages
  3. if [NORET] then generate the answer
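The inference procedure above can be sketched as a short control flow. All helper objects (`model`, `retriever`, `chunk`) are hypothetical stubs standing in for the MLLM and the Wikipedia-based knowledge base; only the branching logic mirrors the steps listed.

```python
# Sketch of the inference-time control flow driven by reflective tokens.
# The model and retriever interfaces below are assumptions for illustration.

def answer(image, question, model, retriever, num_passages=5):
    # Step 1: the MLLM emits [RET] or [NORET] for the query.
    decision = model.predict_retrieval_token(image, question)
    if decision == "[NORET]":
        # Step 3: answer directly from the model's internal knowledge.
        return model.generate(image, question, passages=[])

    # Step 2: retrieve pages, chunk them, keep only [REL] passages.
    relevant = []
    for page in retriever.search(image, question):
        for passage in chunk(page, num_passages):
            token = model.predict_relevance_token(image, question, passage)
            if token == "[REL]":
                relevant.append(passage)
    return model.generate(image, question, passages=relevant)

def chunk(page, n):
    """Split a page into up to n roughly equal word-level passages."""
    words = page.split()
    size = max(1, -(-len(words) // n))  # ceiling division
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
```

The key design point is that both decisions (whether to retrieve, and which passages to trust) are ordinary token predictions, so no separate retrieval classifier or re-ranker is required at inference time.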
Method Architecture

Overview of ReflectiVA, which employs reflective tokens for knowledge-based visual question answering. Our model learns to predict the need to retrieve data from an external knowledge source (top), classify the relevance of each retrieved item (middle), and generate the final answer based on relevant items (bottom).

Performance on InfoSeek and Encyclopedic-VQA

The ReflectiVA model outperforms competitors on both InfoSeek and Encyclopedic-VQA when external knowledge is incorporated to enrich the generation pipeline. We also evaluate various LLM and MLLM baselines in a zero-shot setting.

Accuracy of Reflective Tokens

The special tokens determine when to retrieve information and what to utilize during generation. Specifically, in the Encyclopedic-VQA task, [REL] and [NOREL] achieve accuracies of 94.6 and 95.9, respectively. Meanwhile, the [RET] token is correctly predicted with an accuracy of 88.4.

Preservation on Dataset w/o KB

ReflectiVA preserves performance on multimodal datasets that do not require external knowledge, effectively demonstrating that our two-stage, two-model training strategy retains the model's ability to answer open-ended and general visual-text questions.

Experimental Results


VQA accuracy scores on the Encyclopedic-VQA test set and the InfoSeek validation set, where all results from retrieval-augmented models are reported without considering any re-ranking stage to reorder retrieved documents. † indicates results that are not directly comparable due to different knowledge bases, and the marker ◈ represents our reproductions with different LLMs.

Experimental Settings
Configuration
  • Training on 16 NVIDIA A100 GPUs
  • Global batch size of 128
  • AdamW optimizer with a learning rate of 2e-5
  • The weights of both the MLP and LLM models are updated
Dataset
  • Training data curated by In-Article Model
  • SoTA performance on InfoSeek and Encyclopedic-VQA
  • Performance preservation on standard MLLM benchmarks
  • Knowledge base: Wikipedia + Wikidata (updated as of 2023)

Qualitative Results


Sample qualitative results on image-question pairs from Encyclopedic-VQA (top row) and InfoSeek (bottom row), where we compare the answers provided by ReflectiVA with those from WikiLLaVA and EchoSight.

Conclusion

We proposed ReflectiVA, a multimodal LLM with retrieval-augmented generation. Our method employs reflective tokens, trained in a two-stage, two-model pipeline. Extensive experiments, conducted on both VQA datasets requiring external knowledge and standard datasets, demonstrate the efficacy of the proposed solution.