SCSampler

Sampling Salient Clips from Video for Efficient Action Recognition

While many action recognition datasets consist of collections of brief, trimmed videos each containing a relevant action, videos in the real world (e.g., on YouTube) exhibit very different properties: they are often several minutes long, with brief relevant clips interleaved with long segments containing little change. Densely applying an action recognition system to every temporal clip within such videos is prohibitively expensive. Furthermore, as we show in our experiments, this results in suboptimal recognition accuracy, as informative predictions from relevant clips are outnumbered by meaningless classification outputs over long uninformative sections of the video. In this paper we introduce a lightweight "clip-sampling" model that can efficiently identify the most salient temporal clips within a long video. We demonstrate that the computational cost of action recognition on untrimmed videos can be dramatically reduced by invoking recognition only on these most salient clips. Furthermore, we show that this yields significant gains in recognition accuracy compared to analyzing all clips or randomly/uniformly selected clips. On Sports1M, our clip-sampling scheme elevates the accuracy of an already state-of-the-art action classifier by 7% while reducing its computational cost by more than 15 times.
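The selection step can be illustrated with a minimal sketch: given a lightweight saliency score per clip, keep only the top-k clips and run the expensive action classifier on those. The scores and the function name below are hypothetical placeholders, not the actual SCSampler model or API.

```python
import numpy as np

def select_salient_clips(saliency_scores, k):
    """Return indices of the k clips with the highest saliency scores."""
    scores = np.asarray(saliency_scores)
    k = min(k, len(scores))
    # argsort sorts ascending; reverse and take the first k for highest-first order
    return np.argsort(scores)[::-1][:k]

# Hypothetical per-clip saliency scores for a 10-clip untrimmed video.
scores = [0.05, 0.90, 0.10, 0.80, 0.02, 0.01, 0.70, 0.03, 0.04, 0.06]
top = select_salient_clips(scores, k=3)  # indices of the 3 most salient clips

# The heavy action classifier would then be invoked only on clips `top`,
# and its per-clip predictions aggregated (e.g., averaged) into a
# video-level label, instead of classifying all 10 clips densely.
```

The efficiency gain comes from the cost asymmetry: the sampler is far cheaper per clip than the recognizer, so scoring every clip and classifying only k of them is much faster than classifying all of them.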

People:

Bruno Korbar, Du Tran, Lorenzo Torresani

Qualitative examples

Here, we give a few examples of videos along with their highest- and lowest-ranked clips according to SCSampler; the top row shows the three "most salient" clips, and the bottom row the three "least salient" clips.

Video 1 (Cycling):

Video 2 (Dog agility):

Video 3 (Beach volleyball):

Paper

The full paper is available here; the BibTeX entry is below.

@article{korbar2019scsampler,
  author    = {Bruno Korbar and
               Du Tran and
               Lorenzo Torresani},
  title     = {SCSampler: Sampling Salient Clips from Video for Efficient Action
               Recognition},
  journal   = {CoRR},
  volume    = {abs/1904.04289},
  year      = {2019},
  url       = {http://arxiv.org/abs/1904.04289},
  archivePrefix = {arXiv},
  eprint    = {1904.04289},
  timestamp = {Thu, 25 Apr 2019 13:55:01 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1904-04289},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}