Memory‑efficient Streaming VideoLLMs for Real‑time Procedural Video Understanding

1 Meta Reality Labs, 2 FAIR Meta, 3 National University of Singapore. arXiv 2025.
Abstract

We introduce ProVideLLM, an end-to-end framework for real-time procedural video understanding. ProVideLLM integrates a multimodal cache configured to store two types of tokens: verbalized text tokens, which provide compressed textual summaries of long‑term observations, and visual tokens, encoded with DETR‑QFormer to capture fine‑grained details from short‑term observations. This design reduces token count by 22× over existing methods in representing one hour of long‑term observations, while effectively encoding present fine‑granularity. By interleaving these tokens in our multimodal cache, ProVideLLM ensures sub‑linear scaling of memory and compute with video length, enabling per‑frame streaming inference at 10 FPS and streaming dialogue at 25 FPS, with a minimal 2 GB GPU memory footprint. ProVideLLM also sets new state‑of‑the‑art results on six procedural tasks across four datasets.
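To illustrate why interleaving compressed text summaries with a bounded short-term visual buffer yields sub-linear memory growth, here is a minimal toy sketch. All names (`MultimodalCache`, `add_frame`, `interleaved`, the summary interval) are hypothetical illustrations, not the paper's actual implementation:

```python
from collections import deque

class MultimodalCache:
    """Toy multimodal cache: long-term verbalized text summaries plus a
    fixed-size short-term buffer of visual tokens (hypothetical sketch)."""

    def __init__(self, max_visual_tokens=8, summary_every=16):
        self.text_tokens = []                           # compressed long-term summaries
        self.visual = deque(maxlen=max_visual_tokens)   # fine-grained short-term tokens
        self.summary_every = summary_every
        self._seen = 0

    def add_frame(self, visual_token):
        self.visual.append(visual_token)
        self._seen += 1
        # Periodically fold recent observations into one text token,
        # so long-term state grows sub-linearly with frame count.
        if self._seen % self.summary_every == 0:
            self.text_tokens.append(f"summary@{self._seen}")

    def interleaved(self):
        # Long-term text summaries first, then the recent visual tokens.
        return self.text_tokens + list(self.visual)

cache = MultimodalCache()
for t in range(64):
    cache.add_frame(f"vis{t}")
print(len(cache.interleaved()))  # 4 summaries + 8 visual tokens = 12
```

After 64 frames the cache holds only 12 tokens rather than 64, since the visual buffer is capped and older frames survive only as compressed summaries; the real system replaces the string placeholders with DETR‑QFormer visual tokens and LLM-verbalized text.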