Keras implementation of Deeplab v3+ with pretrained weights

Overview

Keras implementation of Deeplabv3+

This repo is no longer maintained. I won't respond to issues, but I will merge PRs.
DeepLab is a state-of-the-art deep learning model for semantic image segmentation.

The model is based on the original TF frozen graph. It is possible to load pretrained weights into this model; the weights are imported directly from the original TF checkpoint.

Segmentation results of the original TF model. Output Stride = 8

Segmentation results of this repo's model with loaded weights and OS = 8. Results are identical to the TF model.

Segmentation results of this repo's model with loaded weights and OS = 16. Results are still good.

How to get labels

The model returns a tensor of shape (batch_size, height, width, num_classes). To obtain labels, apply argmax to the logits at the exit layer. Example of predicting on image1.jpg:

import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

from model import Deeplabv3

# Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
# as original image.  Normalization matches MobileNetV2

trained_image_width=512 
mean_subtraction_value=127.5
image = np.array(Image.open('imgs/image1.jpg'))

# resize to max dimension of images from training dataset
w, h, _ = image.shape
ratio = float(trained_image_width) / np.max([w, h])
resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))

# apply normalization for trained dataset images
resized_image = (resized_image / mean_subtraction_value) - 1.

# pad array to square image to match training images
pad_x = int(trained_image_width - resized_image.shape[0])
pad_y = int(trained_image_width - resized_image.shape[1])
resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')

# make prediction
deeplab_model = Deeplabv3()
res = deeplab_model.predict(np.expand_dims(resized_image, 0))
labels = np.argmax(res.squeeze(), -1)

# remove padding and resize back to original image
if pad_x > 0:
    labels = labels[:-pad_x]
if pad_y > 0:
    labels = labels[:, :-pad_y]
labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))

plt.imshow(labels)
plt.waitforbuttonpress()

How to use this model with custom input shape and custom number of classes

from model import Deeplabv3
deeplab_model = Deeplabv3(input_shape=(384, 384, 3), classes=4)
# or you can use None for the spatial dimensions
deeplab_model = Deeplabv3(input_shape=(None, None, 3), classes=4)

After that you will get a regular Keras model, which you can train using the .fit and .fit_generator methods.
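
For example, a minimal training sketch (the dataset object and the loss/optimizer choices here are placeholders, not prescribed by this repo; by default the model outputs logits, hence from_logits=True):

import tensorflow as tf
from model import Deeplabv3

deeplab_model = Deeplabv3(input_shape=(384, 384, 3), classes=4)
deeplab_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])
# train_dataset is a hypothetical tf.data.Dataset yielding (image, integer mask) batches
deeplab_model.fit(train_dataset, epochs=10)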

How to train this model

Useful parameters can be found in the original repository.

Important notes:

  1. This model doesn't add weight decay by default; you need to add it yourself.
  2. Due to huge memory use with OS=8, the Xception backbone should be trained with OS=16, using OS=8 only for inference.
  3. You can freeze the feature extractor of the Xception backbone (the first 356 layers) and fine-tune only the decoder (see the sketch below). Right now (March 2019), there is a problem with fine-tuning Keras models that contain BN layers. You can read more about it here.
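
A minimal sketch of point 3, assuming the Xception backbone (the 356-layer cutoff is the figure quoted in the note above):

from model import Deeplabv3

deeplab_model = Deeplabv3(backbone='xception', OS=16)

# freeze the feature extractor (first 356 layers) and fine-tune only the decoder
for layer in deeplab_model.layers[:356]:
    layer.trainable = False

# note: changes to `trainable` only take effect after the model is (re)compiled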

Known issues

This model can be retrained; check this notebook. Fine-tuning is tricky because of the confusion between training and trainable in Keras. See this issue for a discussion and possible alternatives.

How to load model

In order to load the model after using model.save(), use this code:

from tensorflow.keras.models import load_model
from model import relu6

deeplab_model = load_model('example.h5', custom_objects={'relu6': relu6})

Xception vs MobileNetv2

There are two available backbones. The Xception backbone is more accurate, but has 25 times more parameters than MobileNetV2.

For MobileNetV2 there are pretrained weights only for alpha=1. However, you can initialize the model with different values of alpha.
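
For example (a sketch; since pretrained weights exist only for alpha=1, other alpha values are combined here with weights=None):

from model import Deeplabv3

# MobileNetV2 backbone with the pretrained PASCAL VOC weights (alpha=1)
deeplab_model = Deeplabv3(backbone='mobilenetv2')

# a slimmer, randomly initialized variant with a different width multiplier
slim_model = Deeplabv3(weights=None, backbone='mobilenetv2', alpha=0.5)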

Requirement

The latest version of this repo uses TF Keras, so you only need TF 2.0+ installed:
tensorflow-gpu==2.0.0a0
CUDA==9.0


If you want to use an older version, use the following commands:

git clone https://github.com/bonlime/keras-deeplab-v3-plus/
cd keras-deeplab-v3-plus/
git checkout 714a6b7d1a069a07547c5c08282f1a706db92e20

tensorflow-gpu==1.13
Keras==2.2.4

Comments
  • Performance degradation

    Hi,

    I'm using the converted pretrained weights to measure mIoU and observed a 10% drop. Could you share your measurement results? Did you still get 84% mIoU on PASCAL using the transferred model and weights?

    Thanks

    opened by baoruxiao 13
  • Is dilation_rate useful in the Keras DepthwiseConv2D layer?

    Hi, I read the documentation and found that dilation_rate does not seem to be a hyperparameter of the DepthwiseConv2D layer. But you used it in your model: x = DepthwiseConv2D((kernel_size, kernel_size), strides=(stride, stride), dilation_rate=(rate, rate)

    I made a toy example to check this:

    model=Sequential()
    model.add(DepthwiseConv2D(3,strides=1,padding='valid',depth_multiplier=2,\
                              dilation_rate=(2,2),input_shape=(9,9,3))) 
    model.summary()
    

    Output:

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    depthwise_conv2d_6 (Depthwis (None, 7, 7, 6)           60        
    =================================================================
    Total params: 60
    Trainable params: 60
    Non-trainable params: 0
    _________________________________________________________________
    

    If dilation_rate took effect in DepthwiseConv2D, the effective kernel size would be 3 + (3 - 1) × (2 - 1) = 5, so the output shape should be (5, 5, 6), not (7, 7, 6).

    My English is pretty limited, plz don't mind

    opened by Pofatoezil 10
  • About the softmax layer & label process?

    Hi, bonlime, Thanks for your work! It helps me a lot! But I have two questions in my training process:

    1. I found there is no 'softmax' on the last layer of the Deeplabv3() model. Is it right if I use model.fit() directly after model=Deeplabv3() without any other process? Or could you please tell me where to use the 'softmax' layer?

    2. In the training image input process, how to correctly deal with labels? In my previous segmentation projects, the training and prediction steps usually operate on single-channel pngs labels, where the value of each pixel corresponds to the class label (for example, 0-21 for the Pascal dataset). Is this project the same as that?

    I'm looking forward to your reply. Thanks!

    opened by YanZhiyuan0918 10
  • The right way of pre-processing for input?

    We use the first image as input, resize it from (427, 640, 3) to (512, 512, 3), and normalize it from [0, 255] to [0, 1]. The output of OS=16 is normal, but the output of OS=8 is worse.

    I cannot figure out why, so I think my pre-processing (such as the normalization) is wrong.

    opened by munanning 9
  • compatibility with tensorflow < 2.0

    Hi, since I have CUDA 9.0 on Ubuntu 16.04, TensorFlow 2.0 is not an option (TensorFlow 1.12 is installed instead). When I run this code:

    from model import Deeplabv3
    model = Deeplabv3(input_shape=(None, None, 3), backbone='xception', OS=8)

    I got an error: AttributeError: module 'tensorflow._api.v1.image' has no attribute 'resize'

    How can I resolve this so that I can run the code in TensorFlow 1.12? Many thanks.

    opened by tsing90 7
  • image-level pooling

    Hello,

    I checked your code, and I think the image-level pooling is not correctly implemented, because it is not computing a global average pooling.

    https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/model.py#L437

    I checked this code: https://github.com/tensorflow/models/blob/4a0ee4a29dd7e4b6e0135ccf6857f4dc58d71a90/research/deeplab/model.py#L397. The reference is ref. 52 from the original paper (https://arxiv.org/pdf/1506.04579.pdf):

    Exploiting the FCN architecture, ParseNet can directly use global average pooling from the final (or any) feature map, resulting in the feature of the whole image, and use it as context.

    In implementation, this is accomplished by unpooling the context vector and appending the resulting feature map with the standard feature map.

    Specifically, we use global average pooling and pool the context features from the last layer or any layer if that is desired.

    opened by emedinac 7
  • Does the model have errors?

    When I use fit, it says: Error when checking target: expected bilinear_upsampling_2 to have shape (512, 512, 2) but got array with shape (512, 512, 3). PS: my input image.shape is (512, 512, 3).

    opened by Taylor-Rose 6
  • How can I run the video module to achieve real-time detection?

    Hello, I want to know the speed of DeepLabv3+, and I tried to run this:

    from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
    from matplotlib import pyplot as plt
    import cv2 # used for resize. if you dont have it, use anything else
    import numpy as np
    from model import Deeplabv3

    deeplab_model = Deeplabv3()

    def detect_video(deeplab_model):
        import cv2
        vid = cv2.VideoCapture(0)
        if not vid.isOpened():
            raise IOError("Couldn't open webcam or video")
        accum_time = 10
        curr_fps = 10
        fps = "20"
        prev_time = timer()
        while True:
            return_value, frame = vid.read()
            res = deeplab_model.predict(frame)
            result = array_to_img(res)
            curr_time = timer()
            exec_time = curr_time - prev_time
            prev_time = curr_time
            accum_time = accum_time + exec_time
            curr_fps = curr_fps + 1
            if accum_time > 1:
                accum_time = accum_time - 1
                fps = "FPS: " + str(curr_fps)
                curr_fps = 0
            cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                        fontScale=0.50, color=(255, 0, 0), thickness=2)
            cv2.namedWindow("result", cv2.WINDOW_NORMAL)
            cv2.imshow("result", result)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        deeplab_model.close_session()

    detect_video(deeplab_model)

    but it doesn't work! I think I need your help, thanks.

    opened by ll1214 6
  • How can i get each label's coordinates on segmentation map?

    import numpy as np
    from PIL import Image
    from matplotlib import pyplot as plt
    
    from model import Deeplabv3
    
    # Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
    # as original image.  Normalization matches MobileNetV2
    
    trained_image_width=512 
    mean_subtraction_value=127.5
    image = np.array(Image.open('imgs/image1.jpg'))
    
    # resize to max dimension of images from training dataset
    w, h, _ = image.shape
    ratio = float(trained_image_width) / np.max([w, h])
    resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))
    
    # apply normalization for trained dataset images
    resized_image = (resized_image / mean_subtraction_value) - 1.
    
    # pad array to square image to match training images
    pad_x = int(trained_image_width - resized_image.shape[0])
    pad_y = int(trained_image_width - resized_image.shape[1])
    resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')
    
    # make prediction
    deeplab_model = Deeplabv3()
    res = deeplab_model.predict(np.expand_dims(resized_image, 0))
    labels = np.argmax(res.squeeze(), -1)
    
    # remove padding and resize back to original image
    if pad_x > 0:
        labels = labels[:-pad_x]
    if pad_y > 0:
        labels = labels[:, :-pad_y]
    labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))
    
    plt.imshow(labels)
    plt.show()
    

    I ran this code and got a segmentation map. But I want a result like https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/imgs/seg_results2.png and each label's coordinates on the image. How can I do that?

    opened by Baek2back 5
  • Make model.py compatible with Python 3 and switch to tf.image.resize()

    • make model.py compatible with Python 3 by changing [tensor].shape to [tensor].shape.as_list()
    • move away from tf.compat.v1.image.resize() due to "resize method is not implemented" error.
    • use tf.image.resize() instead because it now supports align_corners=True.
    opened by yanfengliu 5
  • Custom Dataset: Number of Classes

    Should num_classes take the background class into account? For example, if I have 3 foreground classes that I have masks for and a background class that I don't care about and don't have a mask for, should my num_classes be 3?

    opened by ad12 5
  • Bump pillow from 6.0.0 to 9.3.0

    Bumps pillow from 6.0.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow from 2.5.0rc0 to 2.9.3

    Bumps tensorflow from 2.5.0rc0 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Citation in Journal

    I would like to cite this repository in a journal paper I'm currently writing as I have used most of the code here. I have already included the reference to the original DeepLabV3+ paper, but I also want to include this repository. What is the best way to cite it? Thanks!

    opened by pedrogalher 1
  • Model not learning anything

    Hi, I'm training DeepLabV3+ with the MobileNet backbone on my custom dataset. My dataset has 1 class. My model setup looks like this:

    deeplab_model = Deeplabv3(input_shape=(512, 512, 3), classes=2, activation='softmax')
    deeplab_model.compile(optimizer='adam',
                          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                          metrics=['accuracy'])
    deeplab_model.fit(...)

    My loss is not decreasing and the output is all black. My mask has shape (512, 512). Can someone please guide me on what to do? Any help would be great.

    opened by anmol4210 1
  • AttributeError: 'int' object has no attribute 'value'

    I copied and ran this code (along with the images and model.py):

    from matplotlib import pyplot as plt
    import cv2 # used for resize. if you dont have it, use anything else
    import numpy as np
    from model import Deeplabv3

    img = plt.imread("imgs/image1.jpg")
    print(img.shape)
    w, h, _ = img.shape
    ratio = 512. / np.max([w, h])
    resized = cv2.resize(img, (int(ratio * h), int(ratio * w)))
    resized = resized / 127.5 - 1.
    new_deeplab_model = Deeplabv3(input_shape=(512, 512, 3), OS=16)

    pad_x = int(512 - resized.shape[0])
    resized2 = np.pad(resized, ((0, pad_x), (0, 0), (0, 0)), mode='constant')
    res = new_deeplab_model.predict(np.expand_dims(resized2, 0))
    res_old = old_deeplab_model.predict(np.expand_dims(resized2, 0))
    labels = np.argmax(res.squeeze(), -1)
    labels_old = np.argmax(res_old.squeeze(), -1)
    plt.imshow(labels[:-pad_x])
    plt.show()
    plt.imshow(labels_old[:-pad_x])
    plt.show()

    but I get this error =>

    ERROR:root:An unexpected error occurred while tokenizing input
    The following traceback may be corrupted or invalid
    The error message is: ('EOF in multi-line string', (1, 2))


    TypeError                                 Traceback (most recent call last)
    in ()
         26 # make prediction
         27 deeplab_model = Deeplabv3()
    ---> 28 res = deeplab_model.predict(np.expand_dims(resized_image, 0))
         29 labels = np.argmax(res.squeeze(), -1)
         30

    10 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
        966     func_graph: A FuncGraph object to destroy. func_graph is unusable
        967     after this function.
    --> 968     """
        969 # TODO(b/115366440): Delete this method when a custom OrderedDict is added.
        970 # Clearing captures using clear() leaves some cycles around.

    TypeError: in user code:

    TypeError: tf__predict_function() missing 8 required positional arguments: 'x', 'batch_size', 'verbose', 'steps', 'callbacks', 'max_queue_size', 'workers', and 'use_multiprocessing'
    
    opened by alikarimi120 1
Releases (1.2)
Owner
Emil Zakirov. MIPT & Skoltech. Computer Vision Engineer.