RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation

Overview

Ported from https://github.com/hzwer/arXiv2020-RIFE

Dependencies

  • NumPy
  • PyTorch, preferably with CUDA. Note that torchvision and torchaudio are not required and can hence be omitted from the command (see the example after this list).
  • VapourSynth
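
For example, to install a CUDA build of PyTorch on its own (the cu113 index below is an assumption matching the torch 1.11.0+cu113 build seen later on this page; substitute the URL for your CUDA version):

pip install torch --extra-index-url https://download.pytorch.org/whl/cu113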

Installation

pip install --upgrade vsrife

Usage

from vsrife import RIFE

ret = RIFE(clip)

See __init__.py for the description of the parameters.
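
The scripts in the comments below all convert the clip to RGBS before calling RIFE. A fuller usage sketch, where the source loader and the matrix_in_s value are assumptions that depend on your material:

import vapoursynth as vs
from vsrife import RIFE

core = vs.core
# load a source clip (ffms2 is just an example loader)
clip = core.ffms2.Source(source='input.mkv')
# RIFE operates on RGBS input; specify the source matrix explicitly
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')
# double the frame rate
clip = RIFE(clip, multi=2)
clip.set_output()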

Comments
  • Getting Error when interpolating

        model.load_model(os.path.join(os.path.dirname(__file__), model_dir), -1)
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\vsrife\RIFE_HDv2.py", line 164, in load_model
        convert(torch.load('{}/flownet.pkl'.format(path), map_location=self.torch_device)))
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 777, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input
    
    Source file is a 720p 30fps MP4, loaded into VapourSynth through L-SMASH source, with the format set to RGBS. Nothing else.
    System specs: R7 3700X, 32 GB of RAM, and an RTX 3060.
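
    For what it's worth, EOFError: Ran out of input from torch.load usually means the pickle file being read is empty or truncated (compare the zero-byte model files mentioned in a later comment). A hedged sketch to check what actually got installed:

    import os
    import vsrife

    # print every .pkl shipped with the package and its size;
    # a 0-byte flownet.pkl would explain the EOFError above
    model_dir = os.path.dirname(vsrife.__file__)
    for root, _, files in os.walk(model_dir):
        for name in files:
            if name.endswith('.pkl'):
                full = os.path.join(root, name)
                print(full, os.path.getsize(full), 'bytes')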
    
    
    opened by banjaminicc 4
  • Small feature request for RIFEv4: target fps as alternative to multiplier

    Would it be possible to allow setting a target fps instead of a multiplier when using RIFEv4? When going from, for example, 23.976 (24000/1001) to 60 fps, having to use (60 * 1001 / 24000 =) 2.5025 as the multiplier is kind of annoying. ;) I know I could write a wrapper around rife.RIFE, but I suspect that, depending on the resulting float, it would be more accurate if this were done inside the filter.
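
    A minimal wrapper sketch along these lines, assuming a vsrife build that accepts a fractional factor via factor_num/factor_den parameters (the helper name is hypothetical; with integer-multi builds you would have to round):

    from fractions import Fraction
    from vsrife import RIFE

    def rife_to_target_fps(clip, fps_num, fps_den=1, **kwargs):
        # derive the exact interpolation factor from the clip's frame rate;
        # e.g. 24000/1001 -> 60/1 gives Fraction(1001, 400), i.e. 2.5025
        factor = Fraction(fps_num, fps_den) / Fraction(clip.fps_num, clip.fps_den)
        return RIFE(clip, factor_num=factor.numerator, factor_den=factor.denominator, **kwargs)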

    opened by Selur 3
  • vs-rife + latest vs-dpir don't work

    When using just vs-rife:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV420P8 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, multi=3, device_type='cuda', device_index=0) # new fps: 20
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 30.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30, fpsden=1)
    # Output
    clip.set_output()
    

    everything works. But when I add the latest vs-dpir:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import os
    import site
    # Import libraries for onnxruntime
    from ctypes import WinDLL
    path = site.getsitepackages()[0]+'/onnxruntime_dlls/'
    WinDLL(path+'cublas64_11.dll')
    WinDLL(path+'cudart64_110.dll')
    WinDLL(path+'cudnn64_8.dll')
    WinDLL(path+'cudnn_cnn_infer64_8.dll')
    WinDLL(path+'cudnn_ops_infer64_8.dll')
    WinDLL(path+'cufft64_10.dll')
    WinDLL(path+'cufftw64_10.dll')
    WinDLL(path+'nvinfer.dll')
    WinDLL(path+'nvinfer_plugin.dll')
    WinDLL(path+'nvparsers.dll')
    WinDLL(path+'nvonnxparser.dll')
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    from vsdpir import DPIR
    # adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # denoising using DPIRDenoise
    clip = DPIR(clip=clip, strength=15.000, task="denoise", provider=1, device_id=0)
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P16, matrix_s="470bg", range_s="limited")
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV444P16 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, multi=3, device_type='cuda', device_index=0) # new fps: 20
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 30.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30, fpsden=1)
    # Output
    clip.set_output()
    

    I get:

    Python exception: [WinError 127] The specified procedure could not be found. Error loading "I:\Hybrid\64bit\Vapoursynth\Lib/site-packages\torch\lib\cudnn_cnn_train64_8.dll" or one of its dependencies.
    

    Using just vs-dpir:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import os
    import site
    # Import libraries for onnxruntime
    from ctypes import WinDLL
    path = site.getsitepackages()[0]+'/onnxruntime_dlls/'
    WinDLL(path+'cublas64_11.dll')
    WinDLL(path+'cudart64_110.dll')
    WinDLL(path+'cudnn64_8.dll')
    WinDLL(path+'cudnn_cnn_infer64_8.dll')
    WinDLL(path+'cudnn_ops_infer64_8.dll')
    WinDLL(path+'cufft64_10.dll')
    WinDLL(path+'cufftw64_10.dll')
    WinDLL(path+'nvinfer.dll')
    WinDLL(path+'nvinfer_plugin.dll')
    WinDLL(path+'nvparsers.dll')
    WinDLL(path+'nvonnxparser.dll')
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    from vsdpir import DPIR
    # adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # denoising using DPIRDenoise
    clip = DPIR(clip=clip, strength=15.000, task="denoise", provider=1, device_id=0)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 10.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=10, fpsden=1)
    # Output
    clip.set_output()
    

    works fine.

    -> Do you have an idea how I could fix this?
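
    One possible workaround sketch, not a verified fix: the failing DLL lives in torch\lib, and preloading onnxruntime's copy of cuDNN first can make cudnn_cnn_train64_8.dll resolve its imports against a mismatched cudnn64_8.dll. Importing torch before the WinDLL preloads reverses the load order:

    # hedged workaround: let torch load its own bundled cudnn64_8.dll first,
    # so cudnn_cnn_train64_8.dll resolves against the matching copy
    import torch
    import site
    from ctypes import WinDLL

    path = site.getsitepackages()[0] + '/onnxruntime_dlls/'
    WinDLL(path + 'cublas64_11.dll')
    # ... continue with the remaining onnxruntime DLL preloads as in the script above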

    opened by Selur 3
  • half the image is broken when using 4k content

    I get a broken output (see attachment), when using:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
    # source: 'G:\TestClips&Co\files\MPEG-4 H.264\4k\Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv'
    # current color space: YUV420P10, bit depth: 10, resolution: 3840x2076, fps: 23.976, color matrix: 2020ncl, yuv luminance scale: limited, scanorder: progressive
    # Loading G:\TestClips&Co\files\MPEG-4 H.264\4k\Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv using LWLibavSource
    clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/MPEG-4 H.264/4k/Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv", format="YUV420P10", cache=0, fpsnum=24000, fpsden=1001, prefer_hw=1)
    # Setting color matrix to 2020ncl.
    clip = core.std.SetFrameProps(clip, _Matrix=9)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=9)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=9)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 23.976
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV420P10 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="2020ncl", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, scale=0.5, multi=3, device_type='cuda', device_index=0, fp16=True) # new fps: 71.928
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="2020ncl", range_s="limited", dither_type="error_diffusion")
    # set output frame rate to 71.928fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=8991, fpsden=125)
    # Output
    clip.set_output()
    

    I tried different scale values, disabled fp16, ran without scene change detection, and tried other values for multi; nothing helped. https://github.com/HomeOfVapourSynthEvolution/VapourSynth-RIFE-ncnn-Vulkan works fine. 2k content also works fine. I tried different source filters and different files. It would be nice if this could be fixed.

    The attachment was too large to upload here: https://ibb.co/WGT9pvL

    opened by Selur 2
  • Vapoursynth R58 and Python 3.10 compatibility

    Trying to install vs-rife with VapourSynth R58, I get:

    I:\Hybrid\64bit\Vapoursynth>python -m pip install --upgrade vsrife
    Collecting vsrife
      Using cached vsrife-2.0.0-py3-none-any.whl (32.5 MB)
    Requirement already satisfied: torch>=1.9.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsrife) (1.11.0+cu113)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsrife) (1.22.3)
    Collecting VapourSynth>=55
      Using cached VapourSynth-57.zip (567 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          Traceback (most recent call last):
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 64, in <module>
              dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 38, in query
              reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
          FileNotFoundError: [WinError 2] The system cannot find the specified file
    
          During handling of the above exception, another exception occurred:
    
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 67, in <module>
              raise OSError("Couldn't detect vapoursynth installation path")
          OSError: Couldn't detect vapoursynth installation path
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    

    Any idea how to fix it?
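
    One workaround sketch (an assumption, not a verified fix): the failure comes from pip trying to build the VapourSynth>=55 dependency from source, apparently because no matching wheel was found, so skipping dependency resolution and installing the remaining dependencies by hand may get past it:

    pip install --upgrade vsrife --no-deps
    pip install numpy torch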

    opened by Selur 2
  • How to set clip.num_frames

    How do I set the number of frames? I only found multi: int in __init__.py. Can I set the total number of frames, or a target frame rate like 60 fps? Thanks!
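
    There is no frame-count parameter as such; multi multiplies the frame rate, so you pick the multiplier that reaches the rate you want. A minimal sketch for a 30 fps source, assuming an integer relationship:

    from vsrife import RIFE

    # 30 fps * 2 = 60 fps; multi=2 doubles the number of frames
    clip = RIFE(clip, multi=2)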

    opened by feaonal 2
  • Requesting example vapoursynth script

    I tried to create a valid script for a while, but I can't make it run.

    from vsrife import RIFE
    import vapoursynth as vs
    core = vs.core
    core.std.LoadPlugin(path='/usr/lib/x86_64-linux-gnu/libffms2.so')
    clip = core.ffms2.Source(source='test.webm')
    print(clip) # YUV420P8
    clip = vs.core.resize.Bicubic(clip, format=vs.RGBS)
    print(clip) # RGBS
    clip = RIFE(clip)
    clip.set_output()
    
    vspipe --y4m inference.py - | x264 - --demuxer y4m -o example.mkv
    
    Error: Failed to retrieve frame 0 with error: Resize error: Resize error 3074: no path between colorspaces (2/2/2 => 0/2/2). May need to specify additional colorspace parameters.
    

    Can I get an example that should actually work?
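
    The resize error means the source matrix is unspecified (the "2" entries in 2/2/2), so zimg cannot pick a YUV-to-RGB path; passing matrix_in_s, as the other scripts on this page do, is the usual fix. Assuming BT.709 material:

    clip = vs.core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')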

    opened by styler00dollar 2
  • [Q] 0bit models in the repo

    Hi

    I see that the model folders contain files (models?) that are 0 bytes. I presume that when the plugin "learns", these models are filled with data.

    Is this correct?

    And on a system where this plugin is installed system-wide, would these models then need write permissions (in the case of Linux)?

    greetings

    opened by sl1pkn07 2
  • Wrong output framerate

    That line - https://github.com/HolyWu/vs-rife/blob/91e894f41cbdfb458ef8f776c47c7f652158bc6f/vsrife/__init__.py#L280 - doesn't work as expected for two reasons:

    1. clip.fps.numerator / denominator can be 0 / 1 (from the docs: "It is 0/1 when the clip has a variable framerate")
    2. there's a frame duration attached to each frame, and it seems like FrameEval(frame_adjuster) returns frames with the original durations, not the ones from format_clip

    A quick fix that works:

        # interleave each source frame factor_num times, then keep every
        # factor_den-th frame, so the net change is factor_num / factor_den
        clip0 = vs.core.std.Interleave([clip] * factor_num)
        if factor_den > 1:
            clip0 = clip0.std.SelectEvery(cycle=factor_den, offsets=0)
        # clip1 is the "next frame" clip: shifted forward by one frame,
        # with the last frame duplicated so the lengths stay equal
        clip1 = clip.std.DuplicateFrames(frames=clip.num_frames - 1).std.DeleteFrames(frames=0)
        clip1 = vs.core.std.Interleave([clip1] * factor_num)
        if factor_den > 1:
            clip1 = clip1.std.SelectEvery(cycle=factor_den, offsets=0)
    
    opened by chainikdn 1
  • How to set clip.num_frames

    How do I set the number of frames? I only found multi: int in __init__.py. Can I set the total number of frames, or a target frame rate like 60 fps? Thanks!

    opened by feaonal 0
Releases (v3.1.0)