
Real-Time Intermediate Flow Estimation for Video Frame Interpolation (MegEngine implementation)

16X interpolation results from two input images:


Introduction

This project is an official MegEngine implementation of Real-Time Intermediate Flow Estimation for Video Frame Interpolation. For the PyTorch implementation, please refer to this repo. Currently, our model runs at 30+ FPS for 2X 720p interpolation on a 2080Ti GPU. It supports arbitrary-timestep interpolation between a pair of images.

CLI Usage

Installation

git clone git@github.com:MegEngine/ECCV2022-RIFE-MegEngine
cd ECCV2022-RIFE-MegEngine
pip3 install -r requirements.txt
  • Download the pretrained HD models from here.
  • Unzip and move the pretrained parameters to train_log/*
  • This model is not reported in our paper; for the paper models, please refer to Evaluation.
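
After installation, you can quickly confirm that MegEngine imports and that the pretrained parameters are where inference expects them. This is a minimal sanity check for illustration only, not part of the repository:

# Sanity check: MegEngine imports and the unzipped pretrained parameters sit in train_log/.
# Purely illustrative; not part of the repository code.
import os
import megengine as mge

print('MegEngine version:', mge.__version__)
if os.path.isdir('train_log'):
    print('train_log contents:', sorted(os.listdir('train_log')))
else:
    print('train_log/ not found - move the pretrained parameters there first')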

Run

Image Interpolation

python3 inference_img.py --img img0.png img1.png --exp=4
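
The --exp flag controls how many midpoint passes are run: each pass inserts a new frame halfway between every adjacent pair, so the number of frames roughly doubles per pass. A minimal sketch of that scheme is shown below; here interpolate_mid is a stand-in for one RIFE forward pass, not the repo's actual function:

def make_2exp_sequence(img0, img1, exp, interpolate_mid):
    # Start from the two inputs and repeatedly insert midpoints:
    # after `exp` passes the list holds 2**exp + 1 frames.
    frames = [img0, img1]
    for _ in range(exp):
        merged = [frames[0]]
        for left, right in zip(frames[:-1], frames[1:]):
            merged.append(interpolate_mid(left, right))  # frame at t = 0.5 between the pair
            merged.append(right)
        frames = merged
    return frames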

With --exp=4 this produces 2^4 = 16X interpolation. After that, you can use the output PNGs to generate an MP4:

ffmpeg -r 10 -f image2 -i output/img%d.png -s 448x256 -c:v libx264 -pix_fmt yuv420p output/slomo.mp4 -q:v 0 -q:a 0

You can also use the PNGs to generate a GIF:

ffmpeg -r 10 -f image2 -i output/img%d.png -s 448x256 -vf "split[s0][s1];[s0]palettegen=stats_mode=single[p];[s1][p]paletteuse=new=1" output/slomo.gif

Evaluation

Download the RIFE model or the RIFE_m model reported in our paper.

MiddleBury: Download the MiddleBury OTHER dataset to ./other-data and ./other-gt-interp

HD: Download the HD dataset to ./HD_dataset. We also provide a Google Drive download link.

We provide code for evaluation on the datasets above; run the following scripts:

python3 benchmark/HD_multi_4X.py
python3 benchmark/HD.py
python3 benchmark/MiddleBury_Other.py
python3 benchmark/yuv_frame_io.py
python3 testtime.py
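
The benchmark scripts report PSNR between the interpolated frames and the ground truth. For reference, a generic per-frame PSNR over 8-bit images looks like the sketch below (not the repo's exact evaluation code):

import numpy as np

def psnr(pred, gt):
    # pred, gt: uint8 H x W x 3 frames; returns PSNR in dB over the 8-bit pixel range.
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 20.0 * np.log10(255.0 / np.sqrt(mse))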

Training and Reproduction

Download the Vimeo90K dataset.

We use 16 CPUs, 4 GPUs, and 20 GB of memory for training:

python3 train.py --arbitrary=False
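
Vimeo90K provides frame triplets, so one training step predicts the middle frame from the first and third frames. The following is a minimal sketch of that objective in MegEngine; the model(im1, im3) forward interface is an assumption, and the actual train.py uses additional loss terms and distributed training:

import megengine.functional as F

def triplet_step(model, im1, im2, im3):
    # im1, im3: input frame pair; im2: ground-truth middle frame, all (N, 3, H, W) in [0, 1].
    pred = model(im1, im3)                      # hypothetical forward pass producing the t = 0.5 frame
    loss = F.mean(F.abs(pred - im2))            # L1 reconstruction loss; train.py adds further terms
    return pred, loss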

Citation

@inproceedings{huang2022rife,
  title={Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}

Reference

Optical Flow: ARFlow, pytorch-liteflownet, RAFT, pytorch-PWCNet

Video Interpolation: DVF, TOflow, SepConv, DAIN, CAIN, MEMC-Net, SoftSplat, BMBC, EDSC
