NLKI Framework — Project Website
Commonsense visual question answering often hinges on knowledge that is missing from the image or the question. Small vision–language models (sVLMs) such as ViLT, VisualBERT, and FLAVA therefore lag behind their larger generative counterparts. To study the effect of careful commonsense knowledge integration on sVLMs, we present NLKI, an end-to-end framework that: (i) retrieves natural language facts, (ii) prompts an LLM to craft natural language explanations, and (iii) feeds both signals to sVLMs across two commonsense VQA datasets (CRIC, AOKVQA) and a visual-entailment dataset (e-SNLI-VE). Facts retrieved with a fine-tuned ColBERTv2 and an object information–enriched prompt yield explanations that largely reduce hallucinations, while lifting end-to-end answer accuracy by up to 7% across the three datasets. This enables FLAVA and the other models in NLKI to match or exceed medium-sized VLMs such as Qwen-2 VL-2B and SmolVLM-2.5B. Because these benchmarks contain 10–25% label noise, additional fine-tuning with noise-robust losses (such as symmetric cross entropy and generalised cross entropy) adds another 2.5% on CRIC and 5.5% on AOKVQA. Our findings show when LLM-generated commonsense knowledge beats retrieval from commonsense knowledge bases, how noise-aware training stabilises small models under external knowledge augmentation, and why parameter-efficient commonsense reasoning is now within reach for 250M-parameter models.
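As a rough illustration of the noise-robust objectives mentioned above, the sketch below implements symmetric cross entropy and generalised cross entropy in PyTorch. This is not the repository's training code; the hyper-parameters `alpha`, `beta`, `A`, and `q` are illustrative defaults rather than the values used in the paper.

```python
# Minimal sketch (assumes PyTorch); illustrative defaults, not the paper's settings.
import torch
import torch.nn.functional as F


def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, A=-4.0):
    """SCE = alpha * CE + beta * reverse CE (Wang et al., 2019)."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    # Reverse CE treats the (possibly noisy) label as the target distribution,
    # with log(0) clamped to the finite constant A.
    label_log = torch.full_like(pred, A)
    label_log.scatter_(1, targets.unsqueeze(1), 0.0)  # log(1) = 0 for the labelled class
    rce = -(pred * label_log).sum(dim=1).mean()
    return alpha * ce + beta * rce


def generalized_cross_entropy(logits, targets, q=0.7):
    """GCE = (1 - p_y^q) / q (Zhang & Sabuncu, 2018); approaches standard CE as q -> 0."""
    pred = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    p_y = pred.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()
```

Either loss can stand in for plain cross entropy when fine-tuning the sVLM reader on noisily labelled splits.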
All components are modular: the retriever, LLM explainer, and sVLM reader can be swapped independently (see the sketch below).
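To make the modularity concrete, here is a minimal sketch of how the three stages could be wired together. The names `Retriever`, `Explainer`, `Reader`, and `NLKIPipeline` are illustrative, not the repository's actual API; any ColBERTv2 wrapper, LLM client, or sVLM head that satisfies these interfaces can be dropped in.

```python
# Illustrative wiring of the three NLKI stages; class names are assumptions, not the real API.
from dataclasses import dataclass
from typing import List, Protocol


class Retriever(Protocol):
    """Stage (i): return natural language facts for a question and its detected objects."""
    def retrieve(self, question: str, objects: List[str], k: int = 5) -> List[str]: ...


class Explainer(Protocol):
    """Stage (ii): prompt an LLM for a natural language explanation."""
    def explain(self, question: str, objects: List[str], facts: List[str]) -> str: ...


class Reader(Protocol):
    """Stage (iii): an sVLM (e.g. a FLAVA/ViLT/VisualBERT head) answering from image + text."""
    def answer(self, image_path: str, question: str, context: str) -> str: ...


@dataclass
class NLKIPipeline:
    retriever: Retriever
    explainer: Explainer
    reader: Reader

    def __call__(self, image_path: str, question: str, objects: List[str]) -> str:
        facts = self.retriever.retrieve(question, objects)               # (i) retrieve facts
        explanation = self.explainer.explain(question, objects, facts)   # (ii) LLM explanation
        context = " ".join(facts + [explanation])                        # both signals as extra text
        return self.reader.answer(image_path, question, context)         # (iii) sVLM answers
```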
| Dataset   | Type | Train | Val  | Test | Answer format   |
|-----------|------|-------|------|------|-----------------|
| CRIC      | VQA  | 364K  | 76K  | 84K  | MCQ             |
| AOKVQA    | VQA  | 17K   | 1.1K | 6.7K | MCQ / free-form |
| e-SNLI-VE | NLI  | 401K  | 14K  | 14K  | 3-way           |
Dataset statistics are as reported in the paper.
@inproceedings{dutta2025nlki,
title = {NLKI: A Lightweight Natural Language Knowledge Integration Framework for Improving Small VLMs in Commonsense VQA Tasks},
author = {Aritra Dutta and Swapnanil Mukherjee and Deepanway Ghoshal and Somak Aditya},
booktitle = {Findings of the Association for Computational Linguistics: EMNLP},
year = {2025}
}
For questions, please email traillab@gmail.com.
If you use our code or ideas, please cite the paper above. Thanks!