Memory‑efficient Streaming VideoLLMs for Real‑time Procedural Video Understanding

Meta Reality Labs · FAIR, Meta · National University of Singapore

arXiv 2025

Abstract

We introduce ProVideLLM, an end-to-end framework for real-time procedural video understanding. ProVideLLM integrates a multimodal cache that stores two types of tokens: verbalized text tokens, which provide compressed textual summaries of long-term observations, and visual tokens, encoded with DETR-QFormer to capture fine-grained details from short-term observations. This design reduces the token count for representing one hour of long-term observations by 22× relative to existing methods, while still encoding fine-grained details of the present. By interleaving these tokens in its multimodal cache, ProVideLLM achieves sub-linear scaling of memory and compute with video length, enabling per-frame streaming inference at 10 FPS and streaming dialogue at 25 FPS with a 2 GB GPU memory footprint. ProVideLLM also sets new state-of-the-art results on six procedural tasks across four datasets.
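To make the caching scheme concrete, below is a minimal Python sketch of an interleaved multimodal cache, based on our reading of the abstract. All names here (MultimodalCache, add_clip, verbalize) are hypothetical, and the captioning function is a stand-in for the paper's verbalization step; this illustrates the bounded-context idea only, not the released implementation.

from collections import deque

class MultimodalCache:
    """Toy interleaved multimodal cache (illustrative, not the paper's API).

    Short-term clips are kept as fine-grained visual tokens; once the
    short-term window is full, the oldest clip is "verbalized" into a few
    text tokens, so the total context stays bounded as the video grows.
    """

    def __init__(self, max_short_term_clips=4):
        self.short_term = deque()  # (clip_id, visual_tokens) pairs
        self.long_term = []        # compact verbalized text tokens
        self.max_clips = max_short_term_clips

    def add_clip(self, clip_id, visual_tokens, verbalize):
        self.short_term.append((clip_id, visual_tokens))
        while len(self.short_term) > self.max_clips:
            old_id, old_tokens = self.short_term.popleft()
            # `verbalize` stands in for a captioning step that compresses
            # many visual tokens into a handful of text tokens.
            self.long_term.extend(verbalize(old_id, old_tokens))

    def interleaved_context(self):
        # Long-term text tokens first, then short-term visual tokens:
        # the sequence fed to the LLM at each streaming step.
        visual = [t for _, toks in self.short_term for t in toks]
        return self.long_term + visual

# Usage: context length stays bounded no matter how many clips stream in.
cache = MultimodalCache()
caption = lambda cid, toks: ["<txt:clip%d>" % cid]  # stand-in captioner
for cid in range(100):
    cache.add_clip(cid, ["<vis:%d:%d>" % (cid, i) for i in range(32)], caption)
print(len(cache.interleaved_context()))  # 224: 96 text + 128 visual tokens

Note how the design choice shows up directly: visual tokens dominate only over a fixed short-term window, while arbitrarily long history costs roughly one text token per summarized clip.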

Framework overview

Citation

@article{chatterjee2025memory,
  title={Memory-efficient Streaming VideoLLMs for Real-time Procedural Video Understanding},
  author={Dibyadip Chatterjee and Edoardo Remelli and Yale Song and Bugra Tekin 
    and Abhay Mittal and Bharat Bhatnagar and Necati Cihan Camgöz and Shreyas Hampali and 
    Eric Sauser and Shugao Ma and Angela Yao and Fadime Sener},
  journal={arXiv preprint arXiv:2504.13915},
  year={2025}
}