RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation

Overview


Ported from https://github.com/hzwer/arXiv2020-RIFE

Dependencies

  • NumPy
  • PyTorch, preferably with CUDA. Note that torchvision and torchaudio are not required and can be omitted from the install command (see the example below).
  • VapourSynth
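
For example, a typical command for a CUDA-enabled PyTorch build looks like the following (the index URL depends on your CUDA version; check pytorch.org for the one matching your setup):

pip install torch --extra-index-url https://download.pytorch.org/whl/cu113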

Installation

pip install --upgrade vsrife

Usage

from vsrife import RIFE

ret = RIFE(clip)

See __init__.py for the description of the parameters.
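
A slightly fuller sketch for context (the source filter, file name, and matrix are assumptions for illustration): RIFE expects an RGBS clip, so convert before and after, as the scripts in the comments below also do.

import vapoursynth as vs
from vsrife import RIFE

core = vs.core
# hypothetical source clip; any source filter that yields a YUV clip works
clip = core.lsmas.LWLibavSource(source='input.mp4')
# RIFE operates on RGBS; matrix_in_s must match your content (BT.709 assumed)
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')
clip = RIFE(clip, multi=2)  # double the frame count, and thus the frame rate
# convert back to YUV for encoding
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s='709')
clip.set_output()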

Comments
  • Getting Error when interpolating


        model.load_model(os.path.join(os.path.dirname(__file__), model_dir), -1)
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\vsrife\RIFE_HDv2.py", line 164, in load_model
        convert(torch.load('{}/flownet.pkl'.format(path), map_location=self.torch_device)))
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 777, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input
    
    The source file is a 720p 30 fps MP4, loaded into VapourSynth through LSMASH source, with the format set to RGBS. Nothing else.
    System specs: R7 3700X, 32 GB of RAM, and an RTX 3060.
    
    
    opened by banjaminicc 4
  • Small feature request for RIFEv4: target fps as alternative to multiplier


    Would it be possible to allow setting a target fps instead of a multiplier when using RIFE v4? When going from, for example, 23.976 (24000/1001) to 60 fps, having to use 60 * 1001 / 24000 = 2.5025 is kind of annoying. ;) I know I could write a wrapper around rife.RIFE, but I suspect that, depending on the resulting float, it would be more accurate if this were done inside the filter.
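
    For reference, a minimal wrapper sketch along these lines. rife_to_fps is a hypothetical helper, and it assumes a build that accepts a fractional factor via factor_num/factor_den (parameters referenced in the "Wrong output framerate" comment below); on a multiplier-only build you would have to round instead.

        from fractions import Fraction

        import vapoursynth as vs
        from vsrife import RIFE

        def rife_to_fps(clip: vs.VideoNode, fpsnum: int, fpsden: int = 1) -> vs.VideoNode:
            # exact rational factor: target fps / source fps
            factor = Fraction(fpsnum, fpsden) / Fraction(clip.fps_num, clip.fps_den)
            # hypothetical call; assumes fractional-factor support in RIFE v4
            return RIFE(clip, factor_num=factor.numerator, factor_den=factor.denominator)

        # e.g. 24000/1001 -> 60 fps gives the exact factor 1001/400 (= 2.5025)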

    opened by Selur 3
  • vs-rife + latest vs-dpir don't work


    When using just vs-rife:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame-based
    clip = core.std.SetFieldBased(clip, 0)
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV420P8 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, multi=3, device_type='cuda', device_index=0) # new fps: 20
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 30.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30, fpsden=1)
    # Output
    clip.set_output()
    

    everything works. But when I add the latest vs-dpir:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import os
    import site
    # Import libraries for onnxruntime
    from ctypes import WinDLL
    path = site.getsitepackages()[0]+'/onnxruntime_dlls/'
    WinDLL(path+'cublas64_11.dll')
    WinDLL(path+'cudart64_110.dll')
    WinDLL(path+'cudnn64_8.dll')
    WinDLL(path+'cudnn_cnn_infer64_8.dll')
    WinDLL(path+'cudnn_ops_infer64_8.dll')
    WinDLL(path+'cufft64_10.dll')
    WinDLL(path+'cufftw64_10.dll')
    WinDLL(path+'nvinfer.dll')
    WinDLL(path+'nvinfer_plugin.dll')
    WinDLL(path+'nvparsers.dll')
    WinDLL(path+'nvonnxparser.dll')
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame-based
    clip = core.std.SetFieldBased(clip, 0)
    from vsdpir import DPIR
    # adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # denoising using DPIRDenoise
    clip = DPIR(clip=clip, strength=15.000, task="denoise", provider=1, device_id=0)
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P16, matrix_s="470bg", range_s="limited")
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV444P16 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, multi=3, device_type='cuda', device_index=0) # new fps: 20
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 30.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30, fpsden=1)
    # Output
    clip.set_output()
    

    I get:

    Python exception: [WinError 127] The specified procedure could not be found. Error loading "I:\Hybrid\64bit\Vapoursynth\Lib/site-packages\torch\lib\cudnn_cnn_train64_8.dll" or one of its dependencies.
    

    Using just vs-dpir:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import os
    import site
    # Import libraries for onnxruntime
    from ctypes import WinDLL
    path = site.getsitepackages()[0]+'/onnxruntime_dlls/'
    WinDLL(path+'cublas64_11.dll')
    WinDLL(path+'cudart64_110.dll')
    WinDLL(path+'cudnn64_8.dll')
    WinDLL(path+'cudnn_cnn_infer64_8.dll')
    WinDLL(path+'cudnn_ops_infer64_8.dll')
    WinDLL(path+'cufft64_10.dll')
    WinDLL(path+'cufftw64_10.dll')
    WinDLL(path+'nvinfer.dll')
    WinDLL(path+'nvinfer_plugin.dll')
    WinDLL(path+'nvparsers.dll')
    WinDLL(path+'nvonnxparser.dll')
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame-based
    clip = core.std.SetFieldBased(clip, 0)
    from vsdpir import DPIR
    # adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # denoising using DPIRDenoise
    clip = DPIR(clip=clip, strength=15.000, task="denoise", provider=1, device_id=0)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 10.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=10, fpsden=1)
    # Output
    clip.set_output()
    

    works fine.

    -> Do you have an idea how I could fix this?

    opened by Selur 3
  • half the image is broken when using 4k content


    I get a broken output (see attachment), when using:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
    # source: 'G:\TestClips&Co\files\MPEG-4 H.264\4k\Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv'
    # current color space: YUV420P10, bit depth: 10, resolution: 3840x2076, fps: 23.976, color matrix: 2020ncl, yuv luminance scale: limited, scanorder: progressive
    # Loading G:\TestClips&Co\files\MPEG-4 H.264\4k\Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv using LWLibavSource
    clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/MPEG-4 H.264/4k/Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv", format="YUV420P10", cache=0, fpsnum=24000, fpsden=1001, prefer_hw=1)
    # Setting color matrix to 2020ncl.
    clip = core.std.SetFrameProps(clip, _Matrix=9)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=9)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=9)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 23.976
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV420P10 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="2020ncl", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, scale=0.5, multi=3, device_type='cuda', device_index=0, fp16=True) # new fps: 71.928
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="2020ncl", range_s="limited", dither_type="error_diffusion")
    # set output frame rate to 71.928fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=8991, fpsden=125)
    # Output
    clip.set_output()
    

    I tried different scale values, disabled fp16, ran without scene change detection, and tried other values for multi; nothing helped. https://github.com/HomeOfVapourSynthEvolution/VapourSynth-RIFE-ncnn-Vulkan works fine. 2K content also works fine. I tried different source filters and different files. It would be nice if this could be fixed.

    attachment was too large: https://ibb.co/WGT9pvL

    opened by Selur 2
  • VapourSynth R58 and Python 3.10 compatibility


    Trying to install vs-rife in VapourSynth R58, I get:

    I:\Hybrid\64bit\Vapoursynth>python -m pip install --upgrade vsrife
    Collecting vsrife
      Using cached vsrife-2.0.0-py3-none-any.whl (32.5 MB)
    Requirement already satisfied: torch>=1.9.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsrife) (1.11.0+cu113)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsrife) (1.22.3)
    Collecting VapourSynth>=55
      Using cached VapourSynth-57.zip (567 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          Traceback (most recent call last):
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 64, in <module>
              dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 38, in query
              reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
          FileNotFoundError: [WinError 2] The system cannot find the file specified
    
          During handling of the above exception, another exception occurred:
    
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 67, in <module>
              raise OSError("Couldn't detect vapoursynth installation path")
          OSError: Couldn't detect vapoursynth installation path
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    

    Any idea how to fix it?

    opened by Selur 2
  • How to set clip.num_frames


    How do I set the number of frames? I only found "multi: int" in __init__.py. Can I set the total number of frames, or a target frame rate like 60 fps? Thanks!
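
    For context, a minimal sketch of the multi semantics (inferred from the scripts elsewhere on this page, e.g. multi=3 turning 23.976 fps into 71.928 fps): multi is an integer frame-count multiplier, so it sets the output rate only indirectly, and an arbitrary exact target such as 60 fps is not reachable with it alone.

        import vapoursynth as vs
        from vsrife import RIFE

        core = vs.core
        # stand-in RGBS clip at 29.97 fps, just to make the sketch self-contained
        clip = core.std.BlankClip(format=vs.RGBS, width=640, height=360, fpsnum=30000, fpsden=1001)
        # multi=2 doubles the frame count: 29.97 fps in -> 59.94 fps out
        clip = RIFE(clip, multi=2)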

    opened by feaonal 2
  • Requesting example vapoursynth script


    I tried for a while to create a valid script, but I can't make it run.

    from vsrife import RIFE
    import vapoursynth as vs
    core = vs.core
    core.std.LoadPlugin(path='/usr/lib/x86_64-linux-gnu/libffms2.so')
    clip = core.ffms2.Source(source='test.webm')
    print(clip) # YUV420P8
    clip = vs.core.resize.Bicubic(clip, format=vs.RGBS)
    print(clip) # RGBS
    clip = RIFE(clip)
    clip.set_output()
    
    vspipe --y4m inference.py - | x264 - --demuxer y4m -o example.mkv
    
    Error: Failed to retrieve frame 0 with error: Resize error: Resize error 3074: no path between colorspaces (2/2/2 => 0/2/2). May need to specify additional colorspace parameters.
    

    Can I get an example that should actually work?
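
    For what it's worth, a sketch that should run, assuming BT.709 content (matrix_in_s must match the actual source): the error above comes from the YUV-to-RGB conversion missing an input matrix, and the y4m pipe additionally needs YUV rather than RGBS output.

        import vapoursynth as vs
        from vsrife import RIFE

        core = vs.core
        core.std.LoadPlugin(path='/usr/lib/x86_64-linux-gnu/libffms2.so')
        clip = core.ffms2.Source(source='test.webm')
        # YUV -> RGB needs an explicit input matrix (BT.709 assumed here)
        clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')
        clip = RIFE(clip)
        # y4m output must be YUV, so convert back before piping to x264
        clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s='709')
        clip.set_output()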

    opened by styler00dollar 2
  • [Q] 0bit models in the repo


    Hi

    I see that the model folders contain files (models?) that are 0 bytes. I presume that when the plugin "learns", these models get filled with data.

    Is this correct?

    Then, on a system where this plugin is installed system-wide, should these models have write permissions? (in the case of Linux)

    greetings

    opened by sl1pkn07 2
  • Wrong output framerate


    That - https://github.com/HolyWu/vs-rife/blob/91e894f41cbdfb458ef8f776c47c7f652158bc6f/vsrife/__init__.py#L280 - doesn't work as expected for two reasons:

    1. clip.fps.numerator / denominator can be 0 / 1 (from the docs: "It is 0/1 when the clip has a variable framerate")
    2. there's a frame duration attached to each frame, and it seems like FrameEval(frame_adjuster) returns frames with the original durations, not the ones from format_clip

    A quick fix that works:

        # clip0: each source frame repeated factor_num times (the "current" frame)
        clip0 = vs.core.std.Interleave([clip] * factor_num)
        if factor_den > 1:
            clip0 = clip0.std.SelectEvery(cycle=factor_den, offsets=0)
        # clip1: the same, shifted one frame ahead (the "next" frame)
        clip1 = clip.std.DuplicateFrames(frames=clip.num_frames - 1).std.DeleteFrames(frames=0)
        clip1 = vs.core.std.Interleave([clip1] * factor_num)
        if factor_den > 1:
            clip1 = clip1.std.SelectEvery(cycle=factor_den, offsets=0)
    
    opened by chainikdn 1
  • How to set clip.num_frames


    How do I set the number of frames? I only found "multi: int" in __init__.py. Can I set the total number of frames, or a target frame rate like 60 fps? Thanks!

    opened by feaonal 0
Releases (v3.1.0)