CliPPER: Contextual Video-Language Pretraining on Long-form Intraoperative Surgical Procedures for Event Recognition
2026-03-25 • Computer Vision and Pattern Recognition • Artificial Intelligence
AI summary
The authors created a new method called CliPPER to help computers better understand long videos of surgeries without needing a lot of labeled examples. They trained their model on surgical lecture videos and invented special techniques to link video parts with matching text descriptions, improving how the model recognizes events over time. Their approach led to better results than previous methods in identifying surgical phases, steps, and tools purely from the videos. This work aims to enhance computer understanding in complex surgical settings where data is limited.
Keywords
video-language models, intraoperative surgery, pretraining, contrastive learning, temporal video understanding, clip order prediction, frame-text matching, multimodal alignment, zero-shot recognition, event recognition
Authors
Florian Stilz, Vinkle Srivastav, Nassir Navab, Nicolas Padoy
Abstract
Video-language foundation models have proven to be highly effective in zero-shot applications across a wide range of tasks. A particularly challenging area is the intraoperative surgical procedure domain, where labeled data is scarce, and precise temporal understanding is often required for complex downstream tasks. To address this challenge, we introduce CliPPER (Contextual Video-Language Pretraining on Long-form Intraoperative Surgical Procedures for Event Recognition), a novel video-language pretraining framework trained on surgical lecture videos. Our method is designed for fine-grained temporal video-text recognition and introduces several novel pretraining strategies to improve multimodal alignment in long-form surgical videos. Specifically, we propose Contextual Video-Text Contrastive Learning (VTC_CTX) and Clip Order Prediction (COP) pretraining objectives, both of which leverage temporal and contextual dependencies to enhance local video understanding. In addition, we incorporate a Cycle-Consistency Alignment over video-text matches within the same surgical video to enforce bidirectional consistency and improve overall representation coherence. Moreover, we introduce a more refined alignment loss, Frame-Text Matching (FTM), to improve the alignment between video frames and text. As a result, our model establishes a new state-of-the-art across multiple public surgical benchmarks, including zero-shot recognition of phases, steps, instruments, and triplets. The source code and pretraining captions can be found at https://github.com/CAMMA-public/CliPPER.
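The video-text contrastive objective at the core of such pretraining can be illustrated with a minimal sketch. The function below implements a standard symmetric InfoNCE loss over a batch of paired clip and caption embeddings; it is an assumption-laden simplification for illustration only, omitting the contextual conditioning of the paper's VTC_CTX objective and the other losses (COP, cycle-consistency, FTM). The function name, embedding shapes, and temperature value are hypothetical, not taken from the paper's implementation.

```python
import numpy as np

def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired clip/caption embeddings.

    Illustrative sketch only; CliPPER's VTC_CTX additionally leverages
    temporal context from surrounding clips, which is omitted here.
    video_emb, text_emb: arrays of shape (batch, dim), row i is a pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    v = video_emb / np.linalg.norm(video_emb, axis=-1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    logits = (v @ t.T) / temperature  # (batch, batch) similarity matrix

    def xent_diag(lg):
        # Cross-entropy with targets on the diagonal (pair i matches i),
        # computed via a numerically stable log-softmax over each row.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average the video-to-text and text-to-video directions.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

In practice the two embeddings would come from the video and text encoders, and the loss would be minimized jointly with the other pretraining objectives; matched pairs drive the loss toward zero, while mismatched pairs keep it near the log of the batch size.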