Keras implementation of Deeplab v3+ with pretrained weights

Overview

Keras implementation of Deeplabv3+

This repo is no longer maintained. I won't respond to issues, but I will merge pull requests.
DeepLab is a state-of-the-art deep learning model for semantic image segmentation.

The model is based on the original TF frozen graph. Pretrained weights, imported directly from the original TF checkpoint, can be loaded into it.

Segmentation results of the original TF model (output stride = 8):

Segmentation results of this repo's model with loaded weights and OS = 8. Results are identical to the TF model.

Segmentation results of this repo's model with loaded weights and OS = 16. Results are still good.

How to get labels

The model returns a tensor of shape (batch_size, height, width, num_classes). To obtain per-pixel labels, apply argmax over the class dimension of the logits from the final layer. Example of predicting on image1.jpg:

import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

from model import Deeplabv3

# Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
# as original image.  Normalization matches MobileNetV2

trained_image_width=512 
mean_subtraction_value=127.5
image = np.array(Image.open('imgs/image1.jpg'))

# resize to max dimension of images from training dataset
w, h, _ = image.shape
ratio = float(trained_image_width) / np.max([w, h])
resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))

# apply normalization for trained dataset images
resized_image = (resized_image / mean_subtraction_value) - 1.

# pad array to square image to match training images
pad_x = int(trained_image_width - resized_image.shape[0])
pad_y = int(trained_image_width - resized_image.shape[1])
resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')

# make prediction
deeplab_model = Deeplabv3()
res = deeplab_model.predict(np.expand_dims(resized_image, 0))
labels = np.argmax(res.squeeze(), -1)

# remove padding and resize back to original image
if pad_x > 0:
    labels = labels[:-pad_x]
if pad_y > 0:
    labels = labels[:, :-pad_y]
labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))

plt.imshow(labels)
plt.waitforbuttonpress()

How to use this model with custom input shape and custom number of classes

from model import Deeplabv3
deeplab_model = Deeplabv3(input_shape=(384, 384, 3), classes=4)
# or you can pass None to accept variable input shapes
deeplab_model = Deeplabv3(input_shape=(None, None, 3), classes=4)

This gives you a regular Keras model which you can train with the .fit and .fit_generator methods.
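
A minimal sketch of such a training call on dummy data (the input shape, class count, and the activation argument are illustrative assumptions, not values prescribed by this repo):

import numpy as np
from model import Deeplabv3

# hypothetical 4-class setup on 384x384 inputs, purely for illustration
model = Deeplabv3(input_shape=(384, 384, 3), classes=4, activation='softmax')
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# dummy images normalized to [-1, 1] and integer masks with one class id per pixel
x = (np.random.rand(2, 384, 384, 3).astype('float32') - 0.5) * 2
y = np.random.randint(0, 4, size=(2, 384, 384))
model.fit(x, y, batch_size=1, epochs=1)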

How to train this model

Useful parameters can be found in the original repository.

Important notes:

  1. This model does not apply weight decay by default; you need to add it yourself (see the sketch after this list).
  2. Due to the huge memory use with OS=8, the Xception backbone should be trained with OS=16 and only run for inference with OS=8.
  3. You can freeze the feature extractor of the Xception backbone (the first 356 layers) and fine-tune only the decoder. Right now (March 2019) there is a problem with fine-tuning Keras models that contain BN layers; you can read more about it here.
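
For illustration, here is a rough sketch of how notes 1 and 3 might be combined; the decay factor, learning rate, and class count are assumptions for the example, not recommended values:

import tensorflow as tf
from model import Deeplabv3

deeplab_model = Deeplabv3(input_shape=(512, 512, 3), classes=4, backbone='xception', OS=16)

# note 3: freeze the Xception feature extractor (first 356 layers) and fine-tune only the decoder
for layer in deeplab_model.layers[:356]:
    layer.trainable = False

# note 1: no weight decay is applied by default; attach an L2 penalty to the kernels
# of the layers that remain trainable
for layer in deeplab_model.layers[356:]:
    if hasattr(layer, 'kernel'):
        deeplab_model.add_loss(lambda l=layer: tf.keras.regularizers.l2(4e-5)(l.kernel))

deeplab_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.007, momentum=0.9),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))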

Known issues

This model can be retrained; check this notebook. Fine-tuning is tricky because of the confusion between training and trainable in Keras. See this issue for a discussion and possible alternatives; a small illustration of the distinction follows below.
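
A tiny illustration of that distinction, separate from this repo: trainable controls whether the BN weights are updated by the optimizer, while the training argument at call time controls whether batch statistics or the stored moving averages are used (and in TF 2 Keras, setting trainable = False on a BN layer also makes it run in inference mode):

import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
x = tf.random.normal((4, 8))

_ = bn(x, training=True)   # uses batch statistics and updates the moving averages
bn.trainable = False
_ = bn(x)                  # frozen: weights stay fixed and the moving averages are used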

How to load model

To load the model after using model.save(), use this code:

from tensorflow.keras.models import load_model
from model import relu6
deeplab_model = load_model('example.h5', custom_objects={'relu6': relu6})

Xception vs MobileNetv2

There are two available backbones. The Xception backbone is more accurate, but it has 25 times more parameters than MobileNetV2.

For MobileNetV2, pretrained weights are available only for alpha=1. However, you can instantiate the model with other values of alpha.
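
For example (a sketch, assuming the backbone and alpha keyword arguments of this repo's Deeplabv3 factory):

from model import Deeplabv3

# Xception backbone: more accurate, but roughly 25x more parameters
model_xception = Deeplabv3(backbone='xception')

# MobileNetV2 backbone with the default width multiplier; pretrained weights exist for alpha=1.
model_mobile = Deeplabv3(backbone='mobilenetv2', alpha=1.)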

Requirements

The latest version of this repo uses tf.keras, so you only need TF 2.0+ installed:
tensorflow-gpu==2.0.0a0
CUDA==9.0


If you want to use an older version, use the following commands:

git clone https://github.com/bonlime/keras-deeplab-v3-plus/
cd keras-deeplab-v3-plus/
git checkout 714a6b7d1a069a07547c5c08282f1a706db92e20

tensorflow-gpu==1.13
Keras==2.2.4

Comments
  • Performance degradation

    Performance degradation

    Hi,

    I'm using the converted pretrained weights to measure mIoU and observed a 10% drop. Could you share your measurement results? Did you still get 84% mIoU on PASCAL using the transferred model and weights?

    Thanks

    opened by baoruxiao 13
  • Is dilation_rate useful in the Keras DepthwiseConv2D layer?

    Is dilation_rate useful in the Keras DepthwiseConv2D layer?

    Hi, I read the documentation and found that dilation_rate does not seem to be a hyperparameter of the DepthwiseConv2D layer. But you used it in your model: x = DepthwiseConv2D((kernel_size, kernel_size), strides=(stride, stride), dilation_rate=(rate, rate))

    I made a toy example to check this:

    model=Sequential()
    model.add(DepthwiseConv2D(3,strides=1,padding='valid',depth_multiplier=2,\
                              dilation_rate=(2,2),input_shape=(9,9,3))) 
    model.summary()
    

    Output:

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    depthwise_conv2d_6 (Depthwis (None, 7, 7, 6)           60        
    =================================================================
    Total params: 60
    Trainable params: 60
    Non-trainable params: 0
    _________________________________________________________________
    

    If dilation_rate is useful in DepthwiseConv2D, I think the output shape should be (5, 5, 6), not (7, 7, 6).

    My English is pretty limited, plz don't mind

    opened by Pofatoezil 10
  • About the softmax layer & label process?

    About the softmax layer & label process?

    Hi, bonlime, Thanks for your work! It helps me a lot! But I have two questions in my training process:

    1. I found there is no 'softmax' on the last layer of the Deeplabv3() model. Is it right if I use model.fit() directly after model=Deeplabv3() without any other process? Or could you please tell me where to use the 'softmax' layer?

    2. In the training image input process, how to correctly deal with labels? In my previous segmentation projects, the training and prediction steps usually operate on single-channel pngs labels, where the value of each pixel corresponds to the class label (for example, 0-21 for the Pascal dataset). Is this project the same as that?

    I'm looking forward to your reply. Thanks!

    opened by YanZhiyuan0918 10
  • The right way of pre-processing for input?

    The right way of pre-processing for input?

    We use the first image as input, resize it from (427, 640, 3) to (512, 512, 3), and normalize it from [0, 255] to [0, 1]. The output with OS=16 is normal,

    but the output with OS=8 is worse.

    I cannot figure out why, so I think my pre-processing (such as the normalization) is wrong.

    opened by munanning 9
  • compatibility with tensorflow < 2.0

    compatibility with tensorflow < 2.0

    Hi, as I have CUDA 9.0 on Ubuntu 16.04, TensorFlow 2.0 is not an option (TensorFlow 1.12 is installed instead). When I run the code:

    from model import Deeplabv3
    model = Deeplabv3(input_shape=(None, None, 3), backbone='xception', OS=8)

    I got this error: AttributeError: module 'tensorflow._api.v1.image' has no attribute 'resize'

    How can I resolve this so that I can run the code with TensorFlow 1.12? Many thanks

    opened by tsing90 7
  • image-level pooling

    image-level pooling

    Hello,

    I checked your code, and I think the image-level pooling is not correctly implemented, because it is not computing a global average pooling.

    https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/model.py#L437

    I checked this code: https://github.com/tensorflow/models/blob/4a0ee4a29dd7e4b6e0135ccf6857f4dc58d71a90/research/deeplab/model.py#L397 and the reference is [52] from the original paper (https://arxiv.org/pdf/1506.04579.pdf):

    Exploiting the FCN architecture, ParseNet can directly use global average pooling from the final (or any) feature map, resulting in the feature of the whole image, and use it as context.

    In implementation, this is accomplished by unpooling the context vector and appending the resulting feature map with the standard feature map.

    Specifically, we use global average pooling and pool the context features from the last layer or any layer if that is desired.

    opened by emedinac 7
  • Does the model have errors?

    Does the model have errors?

    When I use fit, it says: Error when checking target: expected bilinear_upsampling_2 to have shape (512, 512, 2) but got array with shape (512, 512, 3). P.S.: my input image.shape is (512, 512, 3).

    opened by Taylor-Rose 6
  • How can I run the video module to achieve real-time detection?

    How can I run the video module to achieve real-time detection?

    Hello, I want to know the speed of deeplabv3+, and I tried to run this:

    from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
    from matplotlib import pyplot as plt
    import cv2  # used for resize. if you dont have it, use anything else
    import numpy as np
    from timeit import default_timer as timer  # needed for timer() below
    from model import Deeplabv3

    deeplab_model = Deeplabv3()

    def detect_video(deeplab_model):
        vid = cv2.VideoCapture(0)
        if not vid.isOpened():
            raise IOError("Couldn't open webcam or video")
        accum_time = 10
        curr_fps = 10
        fps = "20"
        prev_time = timer()
        while True:
            return_value, frame = vid.read()
            res = deeplab_model.predict(frame)
            result = array_to_img(res)
            curr_time = timer()
            exec_time = curr_time - prev_time
            prev_time = curr_time
            accum_time = accum_time + exec_time
            curr_fps = curr_fps + 1
            if accum_time > 1:
                accum_time = accum_time - 1
                fps = "FPS: " + str(curr_fps)
                curr_fps = 0
            cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                        fontScale=0.50, color=(255, 0, 0), thickness=2)
            cv2.namedWindow("result", cv2.WINDOW_NORMAL)
            cv2.imshow("result", result)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        deeplab_model.close_session()

    detect_video(deeplab_model)

    but it doesn't work! I think I need your help. Thanks.

    opened by ll1214 6
  • How can I get each label's coordinates on the segmentation map?

    How can I get each label's coordinates on the segmentation map?

    import numpy as np
    from PIL import Image
    from matplotlib import pyplot as plt
    
    from model import Deeplabv3
    
    # Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
    # as original image.  Normalization matches MobileNetV2
    
    trained_image_width=512 
    mean_subtraction_value=127.5
    image = np.array(Image.open('imgs/image1.jpg'))
    
    # resize to max dimension of images from training dataset
    w, h, _ = image.shape
    ratio = float(trained_image_width) / np.max([w, h])
    resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))
    
    # apply normalization for trained dataset images
    resized_image = (resized_image / mean_subtraction_value) - 1.
    
    # pad array to square image to match training images
    pad_x = int(trained_image_width - resized_image.shape[0])
    pad_y = int(trained_image_width - resized_image.shape[1])
    resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')
    
    # make prediction
    deeplab_model = Deeplabv3()
    res = deeplab_model.predict(np.expand_dims(resized_image, 0))
    labels = np.argmax(res.squeeze(), -1)
    
    # remove padding and resize back to original image
    if pad_x > 0:
        labels = labels[:-pad_x]
    if pad_y > 0:
        labels = labels[:, :-pad_y]
    labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))
    
    plt.imshow(labels)
    plt.show()
    

    I run this code and get a segmentation map. But I want to get a result like https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/imgs/seg_results2.png and each label's coordinates on the image. How can I do that?

    opened by Baek2back 5
  • Make model.py compatible with Python 3 and switch to tf.image.resize()

    Make model.py compatible with Python 3 and switch to tf.image.resize()

    • make model.py compatible with Python 3 by changing [tensor].shape to [tensor].shape.as_list()
    • move away from tf.compat.v1.image.resize() due to "resize method is not implemented" error.
    • use tf.image.resize() instead because it now supports align_corners=True.
    opened by yanfengliu 5
  • Custom Dataset: Number of Classes

    Custom Dataset: Number of Classes

    Should num_classes take the background class into account? For example, if I have 3 foreground classes that I have masks for and a background class that I don't care about and don't have a mask for, should my num_classes be 3?

    opened by ad12 5
  • Bump pillow from 6.0.0 to 9.3.0

    Bump pillow from 6.0.0 to 9.3.0

    Bumps pillow from 6.0.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits


    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow from 2.5.0rc0 to 2.9.3

    Bump tensorflow from 2.5.0rc0 to 2.9.3

    Bumps tensorflow from 2.5.0rc0 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This releases introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Citation in Journal

    Citation in Journal

    I would like to cite this repository in a journal paper I'm currently writing as I have used most of the code here. I have already included the reference to the original DeepLabV3+ paper, but I also want to include this repository. What is the best way to cite it? Thanks!

    opened by pedrogalher 1
  • Model not learning anything

    Model not learning anything

    Hi, I'm training DeepLabV3+ with the MobileNet backbone on my custom dataset. My dataset has 1 class. My model setup looks like:

    deeplab_model = Deeplabv3(input_shape=(512, 512, 3), classes=2, activation='softmax')
    deeplab_model.compile(optimizer='adam',
                          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                          metrics=['accuracy'])
    deeplab_model.fit(...)

    My loss is not decreasing and the output is all black. My mask has shape (512, 512). Can someone please guide me on what to do? Any help would be great.

    opened by anmol4210 1
  • AttributeError: 'int' object has no attribute 'value'

    AttributeError: 'int' object has no attribute 'value'

    I copied and ran this code (along with the images and model.py):

    from matplotlib import pyplot as plt
    import cv2  # used for resize. if you dont have it, use anything else
    import numpy as np
    from model import Deeplabv3

    img = plt.imread("imgs/image1.jpg")
    print(img.shape)
    w, h, _ = img.shape
    ratio = 512. / np.max([w, h])
    resized = cv2.resize(img, (int(ratio * h), int(ratio * w)))
    resized = resized / 127.5 - 1.
    new_deeplab_model = Deeplabv3(input_shape=(512, 512, 3), OS=16)

    pad_x = int(512 - resized.shape[0])
    resized2 = np.pad(resized, ((0, pad_x), (0, 0), (0, 0)), mode='constant')
    res = new_deeplab_model.predict(np.expand_dims(resized2, 0))
    res_old = old_deeplab_model.predict(np.expand_dims(resized2, 0))
    labels = np.argmax(res.squeeze(), -1)
    labels_old = np.argmax(res_old.squeeze(), -1)
    plt.imshow(labels[:-pad_x])
    plt.show()
    plt.imshow(labels_old[:-pad_x])
    plt.show()

    but I get this error =>

    ERROR:root:An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line string', (1, 2))


    TypeError                                 Traceback (most recent call last)
    in ()
         26 # make prediction
         27 deeplab_model = Deeplabv3()
    ---> 28 res = deeplab_model.predict(np.expand_dims(resized_image, 0))
         29 labels = np.argmax(res.squeeze(), -1)
         30

    10 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
        966     func_graph: A FuncGraph object to destroy. func_graph is unusable
        967     after this function.
    --> 968     """
        969     # TODO(b/115366440): Delete this method when a custom OrderedDict is added.
        970     # Clearing captures using clear() leaves some cycles around.

    TypeError: in user code:

    TypeError: tf__predict_function() missing 8 required positional arguments: 'x', 'batch_size', 'verbose', 'steps', 'callbacks', 'max_queue_size', 'workers', and 'use_multiprocessing'
    
    opened by alikarimi120 1
Releases: 1.2
Owner
Emil Zakirov. MIPT & Skoltech. Computer Vision Engineer.