
Saliency-based video segmentation with sequentially updated priors

  • Our method automatically detects and segments object-like regions from videos without any manually annotated labels.
  • We use visual saliency as a prior distribution for region segmentation instead of manually annotated labels.
  • The prior distribution for each frame is updated from the previous segmentation result, combined with prior information derived from visual saliency (see the sketch below this list).
  • We provide a CUDA implementation to accelerate the computation of prior distributions and feature likelihoods, achieving around 10 fps on a mobile PC with a CUDA-compatible graphics board.
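
The update rule below is a minimal NumPy sketch of the idea of sequentially updated priors, not the paper's exact formulation; the mixing weight alpha and the normalization are assumptions made for illustration only.

    import numpy as np

    def update_prior(saliency_map, prev_segmentation, alpha=0.5):
        """Blend the current frame's saliency map with the previous frame's
        segmentation result into a per-pixel foreground prior."""
        # Normalize saliency to [0, 1] so it behaves like a probability map.
        s = saliency_map.astype(np.float64)
        s = (s - s.min()) / (s.max() - s.min() + 1e-8)
        # Previous segmentation as a binary {0, 1} foreground mask.
        prev = (prev_segmentation > 0).astype(np.float64)
        # Convex combination of the saliency prior and the previous result.
        # alpha is a hypothetical mixing weight, not a value from the paper.
        return alpha * s + (1.0 - alpha) * prev

The resulting prior can then be fed to the per-frame segmentation step (graph cuts in the paper) as the unary foreground probability.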

Data

This dataset contains 10 videos as inputs, and segmented image sequences as ground-truth.

Required

Any report or publication using this data should cite the following publication:

  • Ken Fukuchi, Kouji Miyazato, Akisato Kimura, Shigeru Takagi and Junji Yamato, "Saliency-based video segmentation with graph cuts and sequentially updated priors," Proc. International Conference on Multimedia and Expo (ICME2009), pp. 638-641, New York, New York, USA, June-July 2009.

Detailed description

Videos: 10 uncompressed AVI clips of natural scenes at 12 fps, each containing at least one target object. The length of each clip varies from 5 to 10 seconds.

Ground truth: 10 sets of JPEG images, each corresponding to one of the input videos. Segmented images are provided for every frame except the first 15 frames of each video.
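
As a rough illustration of how the data can be used for evaluation, the following Python/OpenCV sketch reads one clip, binarizes the corresponding JPEG ground-truth masks, and scores a placeholder segmenter with intersection-over-union. The file names (video01.avi, gt01/0015.jpg, ...) and the binarization threshold are assumptions for this example, not the actual layout of the archive.

    import cv2
    import numpy as np

    def iou(pred_mask, gt_mask):
        """Intersection over union of two boolean masks."""
        inter = np.logical_and(pred_mask, gt_mask).sum()
        union = np.logical_or(pred_mask, gt_mask).sum()
        return inter / union if union > 0 else 1.0

    def segment(frame):
        """Placeholder segmenter; substitute your own method here."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return gray > gray.mean()

    # File names below are hypothetical; check the archive for the actual layout.
    cap = cv2.VideoCapture("video01.avi")
    scores, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx >= 15:  # ground truth is not provided for the first 15 frames
            gt = cv2.imread(f"gt01/{frame_idx:04d}.jpg", cv2.IMREAD_GRAYSCALE)
            if gt is not None:
                scores.append(iou(segment(frame), gt > 127))
        frame_idx += 1
    cap.release()
    print("mean IoU:", float(np.mean(scores)) if scores else "n/a")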

Download

videoSegmentationData.zip (123,332.9 KB)

Example

(Top left) Input video, (top right) visual attention density, (bottom left) priors for segmentation, (bottom right) segmentation result.

Publication

  • Ken Fukuchi, Kouji Miyazato, Akisato Kimura, Shigeru Takagi and Junji Yamato, "Saliency-based video segmentation with graph cuts and sequentially updated priors," Proc. International Conference on Multimedia and Expo (ICME2009), pp. 638-641, New York, New York, USA, June-July 2009.