Segmentation models with pretrained backbones. Keras and TensorFlow Keras.

Overview

Python library with Neural Networks for Image Segmentation based on Keras and TensorFlow.

The main features of this library are:

  • High level API (just two lines of code to create a model for segmentation)
  • 4 model architectures for binary and multi-class image segmentation (including the legendary Unet)
  • 25 available backbones for each architecture
  • All backbones have pre-trained weights for faster and better convergence
  • Helpful segmentation losses (Jaccard, Dice, Focal) and metrics (IoU, F-score)
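
A quick taste of the losses and metrics API (a minimal sketch; loss objects can be combined with '+' and scaled by a factor):

import segmentation_models as sm

# combine losses with '+' and scale them by a factor
total_loss = sm.losses.DiceLoss() + (2 * sm.losses.CategoricalFocalLoss())
metrics = [sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5)]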

Important note

Some models of version 1.* are not compatible with models trained with previous versions. If you have such models and want to load them, roll back with:

$ pip install -U segmentation-models==0.2.1

Quick start

The library is built to work with both the Keras and TensorFlow Keras frameworks.

import segmentation_models as sm
# Segmentation Models: using `keras` framework.

By default it tries to import keras; if that is not installed, it falls back to the tensorflow.keras framework. There are several ways to choose the framework explicitly (see the example after this list):

  • Provide the environment variable SM_FRAMEWORK=keras / SM_FRAMEWORK=tf.keras before importing segmentation_models
  • Change the framework with sm.set_framework('keras') / sm.set_framework('tf.keras')
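
For example (a minimal sketch; the environment variable must be set before the first import):

import os
os.environ['SM_FRAMEWORK'] = 'tf.keras'  # option 1: set before the first import

import segmentation_models as sm
sm.set_framework('tf.keras')  # option 2: switch at runtime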

You can also specify which image_data_format to use; segmentation-models works with both channels_last and channels_first. This can be useful for later model conversion to the NVIDIA TensorRT format or for optimizing the model for CPU/GPU computation.

import keras
# or from tensorflow import keras

keras.backend.set_image_data_format('channels_last')
# or keras.backend.set_image_data_format('channels_first')

A created segmentation model is just an instance of a Keras Model, which can be built as easily as:

model = sm.Unet()

Depending on the task, you can change the network architecture by choosing a backbone with fewer or more parameters and use pretrained weights to initialize it:

model = sm.Unet('resnet34', encoder_weights='imagenet')

Change the number of output classes in the model (choose your case):

# binary segmentation (these parameters are the defaults when you call Unet('resnet34'))
model = sm.Unet('resnet34', classes=1, activation='sigmoid')
# multiclass segmentation with non-overlapping class masks (your classes + background)
model = sm.Unet('resnet34', classes=3, activation='softmax')
# multiclass segmentation with independent overlapping/non-overlapping class masks
model = sm.Unet('resnet34', classes=3, activation='sigmoid')

Change input shape of the model:

# if you set the number of input channels to something other than 3, you have to set encoder_weights=None
# how to handle such a case with encoder_weights='imagenet' is described in the docs (see the sketch below)
model = sm.Unet('resnet34', input_shape=(None, None, 6), encoder_weights=None)
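
One common approach to keeping encoder_weights='imagenet' with a non-3-channel input (a minimal sketch under the assumption that a learnable projection is acceptable; not presented as the library's only recipe) is to map the N input channels to 3 with a 1x1 convolution and wrap the pretrained model:

from tensorflow import keras
import segmentation_models as sm

N = 6  # hypothetical number of input channels

base_model = sm.Unet('resnet34', encoder_weights='imagenet')

inp = keras.layers.Input(shape=(None, None, N))
x = keras.layers.Conv2D(3, (1, 1))(inp)  # learnable 1x1 projection from N channels to 3
out = base_model(x)

model = keras.models.Model(inp, out)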

Simple training pipeline

import segmentation_models as sm

BACKBONE = 'resnet34'
preprocess_input = sm.get_preprocessing(BACKBONE)

# load your data
x_train, y_train, x_val, y_val = load_data(...)

# preprocess input
x_train = preprocess_input(x_train)
x_val = preprocess_input(x_val)

# define model
model = sm.Unet(BACKBONE, encoder_weights='imagenet')
model.compile(
    'Adam',
    loss=sm.losses.bce_jaccard_loss,
    metrics=[sm.metrics.iou_score],
)

# fit model
# if you use data generator use model.fit_generator(...) instead of model.fit(...)
# more about `fit_generator` here: https://keras.io/models/sequential/#fit_generator
model.fit(
   x=x_train,
   y=y_train,
   batch_size=16,
   epochs=100,
   validation_data=(x_val, y_val),
)

The same manipulations can be done with Linknet, PSPNet, and FPN (see the sketch below). For more detailed information about the models API and use cases, see Read the Docs.
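
For instance, swapping the architecture is a one-line change (a minimal sketch; all four model classes share the same high-level API):

model = sm.Linknet(BACKBONE, encoder_weights='imagenet')
model = sm.FPN(BACKBONE, encoder_weights='imagenet')
model = sm.PSPNet(BACKBONE, encoder_weights='imagenet')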

Examples

Model training examples:
  • [Jupyter Notebook] Binary segmentation (cars) on CamVid dataset here.
  • [Jupyter Notebook] Multi-class segmentation (cars, pedestrians) on CamVid dataset here.

Models and Backbones

Models

  • Unet
  • Linknet
  • PSPNet
  • FPN

(architecture diagrams for each model are shown in the original repository)

Backbones

Type          Names
VGG           'vgg16' 'vgg19'
ResNet        'resnet18' 'resnet34' 'resnet50' 'resnet101' 'resnet152'
SE-ResNet     'seresnet18' 'seresnet34' 'seresnet50' 'seresnet101' 'seresnet152'
ResNeXt       'resnext50' 'resnext101'
SE-ResNeXt    'seresnext50' 'seresnext101'
SENet154      'senet154'
DenseNet      'densenet121' 'densenet169' 'densenet201'
Inception     'inceptionv3' 'inceptionresnetv2'
MobileNet     'mobilenet' 'mobilenetv2'
EfficientNet  'efficientnetb0' 'efficientnetb1' 'efficientnetb2' 'efficientnetb3' 'efficientnetb4' 'efficientnetb5' 'efficientnetb6' 'efficientnetb7'
All backbones have weights trained on the 2012 ILSVRC ImageNet dataset (encoder_weights='imagenet').

Installation

Requirements

  1. python 3
  2. keras >= 2.2.0 or tensorflow >= 1.13
  3. keras-applications >= 1.0.7, <=1.0.8
  4. image-classifiers == 1.0.*
  5. efficientnet == 1.0.*

PyPI stable package

$ pip install -U segmentation-models

PyPI latest package

$ pip install -U --pre segmentation-models

Source latest version

$ pip install git+https://github.com/qubvel/segmentation_models

Documentation

The latest documentation is available on Read the Docs.

Change Log

To see important changes between versions, look at CHANGELOG.md.

Citing

@misc{Yakubovskiy:2019,
  Author = {Pavel Yakubovskiy},
  Title = {Segmentation Models},
  Year = {2019},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/qubvel/segmentation_models}}
}

License

The project is distributed under the MIT License.

Comments
  • Moving to tf.keras

    Hi. First of all, thanks for your repo! Have you considered moving to tf.keras instead of using the separate keras package? It would make your repo compatible with the new TF 2.0 out of the box while still being compatible with older versions of TF as well.

    enhancement help wanted 
    opened by bonlime 19
  • How to assign class_weights for 5 classes?

    Hope you can show an example of how to assign class_weights to each class in the loss function. I tried class_weights = [1,10,50,10,50] but it returns an error due to tensor shape. My data is one-hot encoded for all 5 classes. I am trying to use PSPNet with class_weights.
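
    A minimal sketch of one way to pass per-class weights, assuming one-hot masks and the library's loss classes (the weights and backbone below are illustrative):

    import numpy as np
    import segmentation_models as sm

    # one weight per class; length must match the number of output classes
    dice_loss = sm.losses.DiceLoss(class_weights=np.array([1, 10, 50, 10, 50]))
    model = sm.PSPNet('resnet34', classes=5, activation='softmax')
    model.compile('Adam', loss=dice_loss, metrics=[sm.metrics.iou_score])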

    opened by blaxe05 19
  • ResourceExhaustedError after segmentation models update!

    Hi! I was working with FPN with a 'resnext101' backbone on Google Colab. I trained the model, did lots of experiments, and the results were very good. Today, after I updated segmentation models (actually, every time I use Google Colab I have to reinstall it), I got the error shown below. By the way, I tried Unet with a 'vgg16' backbone and everything went well. I wonder why FPN with a resnext101 backbone no longer fits in GPU memory when it did two days ago.

    Thank you very much @qubvel .

    Edit1:
    • FPN with vgg16 backbone: OK
    • FPN with vgg19 backbone: OK
    • FPN with resnet34 backbone: OK
    • FPN with resnet50 backbone: NOT OK (the same error as shown below)
    • FPN with resnet101 backbone: NOT OK (the same error as shown below)
    • FPN with resnext50 backbone: NOT OK (the same error as shown below)

    Edit2: The related StackOverflow question.

    Epoch 1/100
    ---------------------------------------------------------------------------
    ResourceExhaustedError                    Traceback (most recent call last)
    <ipython-input-22-1b2892f8cab2> in <module>()
    ----> 1 get_ipython().run_cell_magic('time', '', 'history = model.fit_generator(\n    generator = zipped_train_generator,\n  validation_data=(X_validation, y_validation),\n    steps_per_epoch=len(X_train) // NUM_BATCH,\n    callbacks= callbacks_list,\n    verbose = 1,\n    epochs = NUM_EPOCH)')
    
    9 frames
    </usr/local/lib/python3.6/dist-packages/decorator.py:decorator-gen-60> in time(self, line, cell, local_ns)
    
    <timed exec> in <module>()
    
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
       1456         ret = tf_session.TF_SessionRunCallable(self._session._session,
       1457                                                self._handle, args,
    -> 1458                                                run_metadata_ptr)
       1459         if run_metadata:
       1460           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
    
    ResourceExhaustedError: 2 root error(s) found.
      (0) Resource exhausted: OOM when allocating tensor with shape[32,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    	 [[{{node training/RMSprop/gradients/zeros_21}}]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
    
    	 [[loss/mul/_11081]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
    
      (1) Resource exhausted: OOM when allocating tensor with shape[32,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    	 [[{{node training/RMSprop/gradients/zeros_21}}]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
    
    0 successful operations.
    0 derived errors ignored.
    
    opened by safak17 18
  • Issue with model predictions

    I tried to train a PSPNet and an FPN for a multi-class segmentation task. When I try to predict on new data (I also tried predicting on the training data), the predictions seem to follow some of the patterns, but the predicted image ends up being of lower resolution, meaning that the same class is predicted for 4x4 or 8x8 blocks. For the task I am working on, per-pixel prediction is quite important. Has anyone faced a similar problem or have any advice? Thanks

    bug 
    opened by panakouris 17
  • Unable to load the repository in google colab

    I have already cloned the repository using: !git clone https://github.com/qubvel/segmentation_models

    Now when I try to load it (import it), it shows following error:

    from segmentation_models import Unet

    --------------------------------------------------------------------------
    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input-21-95926c7db055> in <module>()
    ----> 1 from segmentation_models import Unet
    
    /content/segmentation_models/__init__.py in <module>()
    ----> 1 from .segmentation_models import *
    
    /content/segmentation_models/segmentation_models/__init__.py in <module>()
          3 from .__version__ import __version__
          4 
    ----> 5 from .unet import Unet
          6 from .fpn import FPN
          7 from .linknet import Linknet
    
    /content/segmentation_models/segmentation_models/unet/__init__.py in <module>()
    ----> 1 from .model import Unet
    
    /content/segmentation_models/segmentation_models/unet/model.py in <module>()
          2 from ..utils import freeze_model
          3 from ..utils import legacy_support
    ----> 4 from ..backbones import get_backbone, get_feature_layers
          5 
          6 old_args_map = {
    
    /content/segmentation_models/segmentation_models/backbones/__init__.py in <module>()
    ----> 1 from classification_models import Classifiers
          2 from classification_models import resnext
          3 
          4 from . import inception_resnet_v2 as irv2
          5 from . import inception_v3 as iv3
    
    /content/classification_models/__init__.py in <module>()
    ----> 1 from .classification_models import *
    
    /content/classification_models/classification_models/__init__.py in <module>()
          3 from . import resnet as rn
          4 from . import senet as sn
    ----> 5 from . import keras_applications as ka
          6 
          7 
    
    /content/classification_models/classification_models/keras_applications/__init__.py in <module>()
          1 import keras
    ----> 2 from .keras_applications.keras_applications import *
          3 
          4 set_keras_submodules(
          5     backend=keras.backend,
    
    ModuleNotFoundError: No module named 'classification_models.classification_models.keras_applications.keras_applications.keras_applications'
    

    Can anyone help regarding this? Thanks.

    opened by sankalpmittal1911-BitSian 17
  • ImportError: cannot import name 'Resnet18'

    I get the error listed below when I try to run your demo code:

    from segmentation_models import Unet
    # prepare model
    model = Unet(backbone_name='resnet34', encoder_weigths='imagenet')
    model.compile('Adam', 'binary_crossentropy', ['binary_accuracy'])
    
    TypeError                                 Traceback (most recent call last)
    <ipython-input-12-b1cbdbcf0c01> in <module>()
          1 from segmentation_models import Unet
          2 # prepare model
    ----> 3 model = Unet(backbone_name='resnet34', encoder_weigths='imagenet')
          4 model.compile('Adam', 'binary_crossentropy', ['binary_accuracy'])
    
    TypeError: Unet() got an unexpected keyword argument 'encoder_weigths'
    
    opened by MinuteswithMetrics 13
  • Validation metrics exceeding training metrics

    Hi @qubvel, hi all,

    I am training a UNet classifier for the OpenCitiesAI challenge (1 building class and ResNext50 backbone). @hasan-nn

    At the end of the first epoch, I got the following metrics, where the validation metrics are much better than the training metrics:

    23366/23366 [==============================] - 15937s 682ms/step - loss: 0.2726 - iou_score: 0.5777 - f1-score: 0.7036 - val_loss: 0.0748 - val_iou_score: 0.7403 - val_f1-score: 0.7849 -

    This issue is kind of related to the issue below: https://github.com/qubvel/segmentation_models/issues/285

    Please advise.

    Ali

    opened by aghand0ur 11
  • ValueError: You are trying to load a weight file containing 428 layers into a model with 114 layers.

    I have trained a segmentation model; the backbone is densenet201 with pretrained weights, and the decoder is FPN. But when I load the saved weights for prediction, it raises the error below:

    Traceback (most recent call last):
      File "test.py", line 62, in <module>
        main()
      File "test.py", line 57, in main
        apply_predict()
      File "test.py", line 27, in apply_predict
        model.load_weights("models/densenet201_fpn_w.h5")
      File "/home/jiangd/.conda/envs/tf110/lib/python3.5/site-packages/keras/engine/network.py", line 1166, in load_weights
        f, self.layers, reshape=reshape)
      File "/home/jiangd/.conda/envs/tf110/lib/python3.5/site-packages/keras/engine/saving.py", line 1030, in load_weights_from_hdf5_group
        str(len(filtered_layers)) + ' layers.')
    ValueError: You are trying to load a weight file containing 428 layers into a model with 114 layers.
    
    opened by zhudaoruyi 11
  • How to preprocess mask for multiclass segmentation?

    Hello. Images and masks are in PNG format. 8 classes. How do I correctly preprocess the masks for the categorical_crossentropy loss?

    I tried to use a (128, 128, 1) mask, setting each pixel of the mask to a number from 0 to 7, but got the error: Error when checking target: expected softmax to have shape (128, 128, 8) but got array with shape (128, 128, 1)

    Code for model creation:

    BACKBONE = 'resnet50'
    preprocess_input = get_preprocessing(BACKBONE)

    model = Unet(BACKBONE, input_shape=(128,128,3), encoder_weights='imagenet', encoder_freeze=True, classes=8, activation='softmax')

    model.compile('Adam', loss='categorical_crossentropy', metrics=[iou_score])
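
    A minimal sketch of the usual fix, assuming integer label masks with values 0 to 7: one-hot encode the masks so the target shape matches the softmax output.

    from tensorflow.keras.utils import to_categorical

    # masks: integer array of shape (N, 128, 128) with values 0..7
    y_train = to_categorical(masks, num_classes=8)  # -> shape (N, 128, 128, 8)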

    Thanks. Sorry for bad English.

    opened by JevgeniyVnk 11
  • problem with input

    Hello! I have some problems using the Unet template. I try to use Unet with x, y = X_train, y_train, where X_train contains np.arrays of images (as does y_train) with shape (10, 512, 512, 3): 10 is the number of samples, 512x512 is the resolution, and 3 is RGB. But I get this error: Error when checking target: expected sigmoid to have shape (None, None, 1) but got array with shape (512, 512, 3)

    opened by DZ4VR 10
  • NotImplementedError: Cannot convert a symbolic Tensor (dice_loss_plus_1focal_loss/truediv:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

    Hi, when I run model.fit with the following code I get the error: NotImplementedError: Cannot convert a symbolic Tensor (dice_loss_plus_1focal_loss/truediv:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

    The code:

    activation = 'softmax'

    LR = 0.0001
    optim = keras.optimizers.Adam(LR)

    # Segmentation models losses can be combined together by '+' and scaled by integer or float factor
    # set class weights for dice_loss (car: 1.; pedestrian: 2.; background: 0.5;)
    dice_loss = sm.losses.DiceLoss(class_weights=np.array([0.25, 0.25, 0.25, 0.25]))
    focal_loss = sm.losses.CategoricalFocalLoss()
    total_loss = dice_loss + (1 * focal_loss)

    # actually total_loss can be imported directly from the library, the above example just shows how to manipulate losses
    total_loss = sm.losses.binary_focal_dice_loss  # or sm.losses.categorical_focal_dice_loss

    metrics = [sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5)]

    checkpointer = tf.keras.callbacks.ModelCheckpoint('UNET_128_PaperModel/Model_Dice1{epoch:03d}.h5', verbose=1, save_best_only=True)

    BACKBONE1 = 'mobilenetv2'
    preprocess_input1 = sm.get_preprocessing(BACKBONE1)

    # preprocess input
    X_train1 = preprocess_input1(X_train)
    X_test1 = preprocess_input1(X_test)

    # Model 1 (using the same backbone for both models)
    # define model (change to Unet or Linknet based on the need)
    model1 = sm.Unet(BACKBONE1, encoder_weights=None, classes=n_classes, activation=activation)

    # compile keras model with the defined optimizer, loss and metrics
    model1.compile(optim, total_loss, metrics=metrics)
    # model1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=metrics)

    print(model1.summary())

    My data is structured as numpy arrays, with X_train of shape (6632, 224, 224, 3) and y_train one-hot encoded using to_categorical with shape (6632, 224, 224, 3).

    When I google this error, a lot of the results seem to be about the numpy version, but I don't believe this to be the case...

    Any help is much appreciated, Thanks!

    opened by sbetts2 9
  • TF 2.4 issue for loading model with classes parameter

    I have used the example notebook to train a Linknet model. It works with tensorflow 2.9.2; however, I am having issues when I repeat the same steps with tensorflow 2.4.0. I checked model.summary(), and there is an issue in the softmax layer: Linknet(BACKBONE, encoder_weights='imagenet', classes=12, activation=activation)

    TF 2.9.2:

    conv2d (Conv2D)                (None, None, None,   1740        ['decoder_stage4c_relu[0][0]']   
                                    12)                                                               
                                                                                                      
     softmax (Activation)           (None, None, None,   0           ['conv2d[0][0]']                 
                                    12)                                                               
                                                                                                      
    ==================================================================================================
    Total params: 11,523,285
    Trainable params: 11,513,263
    Non-trainable params: 10,022
    __________________________________________________________________________
    

    TF 2.4.0:

    conv2d (Conv2D)                 (None, None, None, 1 1740        decoder_stage4c_relu[0][0]       
    __________________________________________________________________________________________________
    softmax (Activation)            (None, None, None, 1 0           conv2d[0][0]                     
    ==================================================================================================
    Total params: 11,523,285
    Trainable params: 11,513,263
    Non-trainable params: 10,022
    _________________________________________________________________
    
    opened by irhallac 0
  • mIoU with background in semantic segmentation

    Hi!

    I have a question about the background class in the calculation of mIoU.

    When we calculate the mIoU of a segmentation, the IoU of the background is usually included.

    In that case, the mIoU is at least 0.45 even if the model didn't catch anything, because of IoU(background).

    I am wondering whether this calculation (mIoU with background) is correct or reasonable. If it is not, how can we measure the performance of a segmentation model?

    Thank you.

    opened by Gil-TakKong 0
  • mIoU of semantic segmentation for incorrect predictions

    Hi!

    I am wondering about the mIoU method. When I calculate the mIoU of an image with an incorrect prediction, if there is a wrongly labeled region, the IoU of that label is nan, not 0.

    Here is the example.

    Classes in ground truth: Background, L1, L2. Classes in prediction: Background, L1, L2, L3.

    (1) mIoU = {IoU(BG) + IoU(L1) + IoU(L2)} / 3. Is this right?

    I think the formula should be (2) mIoU = {IoU(BG) + IoU(L1) + IoU(L2) + IoU(L3)} / 4.

    If (1) is right, could you explain why the IoU of L3 is not included?

    Thank you.
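
    For reference, a minimal per-class IoU sketch (an editorial illustration, not the library's implementation). Whether L3 is counted depends on the averaging convention: some implementations average only over classes present in the ground truth (formula 1), while averaging over all classes gives L3 an IoU of 0 (formula 2). Classes absent from both masks yield 0/0 and are reported as nan here.

    import numpy as np

    def per_class_iou(y_true, y_pred, num_classes):
        ious = []
        for c in range(num_classes):
            gt, pr = (y_true == c), (y_pred == c)
            union = np.logical_or(gt, pr).sum()
            inter = np.logical_and(gt, pr).sum()
            ious.append(inter / union if union > 0 else np.nan)  # 0/0 -> nan for absent classes
        return ious

    # mIoU is often computed as np.nanmean(per_class_iou(...)), which skips nan classes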

    opened by Gil-TakKong 0
  • How to use custom image sizes to train the Qubvel segmentation models in Keras?

    I am using the Qubvel segmentation models repository (https://github.com/qubvel/segmentation_models) to train an Inception-V3-encoder based model for a binary segmentation task. I am using (256 width x 256 height) images to train the models, and they work well. If I double one of the dimensions, say (256 width x 512 height), it works fine as well. However, when I adjust for the aspect ratio and resize the images to a custom dimension, say (272 width x 256 height), the model throws the following error: ValueError: A Concatenate layer requires inputs with matching shapes except for the concatenation axis. Received: input_shape=[(None, 16, 18, 2048), (None, 16, 17, 768)]. Is there a way to use such custom dimensions to train these models? I am using RGB images and grayscale masks to train the models. Thanks.
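
    A common workaround, sketched here as an assumption (encoder-decoder models with five downsampling stages generally need spatial dimensions divisible by 32, and some backbones are stricter): pad each input up to the nearest valid multiple before feeding it to the model.

    import numpy as np

    def pad_to_multiple(img, multiple=32):
        # pad H and W up to the nearest multiple (e.g., 272 -> 288)
        h, w = img.shape[:2]
        ph = (multiple - h % multiple) % multiple
        pw = (multiple - w % multiple) % multiple
        return np.pad(img, ((0, ph), (0, pw), (0, 0)), mode='constant')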

    opened by sivaramakrishnan-rajaraman 2
  • Is there any skip connection missing for VGG19 or ResNet34 backbone based U-net?

    I am confused about using a CNN (VGG, ResNet) as the backbone of a U-net via the segmentation library. My input shape is 512x512x3. As far as I've understood, in U-net a skip connection is used before every layer where downsampling happens (for example, maxpool for VGG or a conv with 2x2 stride for ResNet). But in the model summary for both the VGG and ResNet-based backbones, I see skip connections starting from the second downsampling (256x256x64), and there is no skip connection from the 512 resolution. Can someone explain the reasons? Please check the detailed diagram of VGG19 for reference here: https://i.stack.imgur.com/FBqUy.png

    opened by sumit-skyhigh 0
Releases (1.0.1)
  • 1.0.1(Jan 10, 2020)

  • v1.0.0(Oct 15, 2019)

    Areas of improvement
    • Support for keras and tf.keras
    • Losses as classes, base loss operations (sum of losses, multiplied loss)
    • NCHW and NHWC support
    • Removed pure tf operations to work with other keras backends
    • Reduced a number of custom objects for better models serialization and deserialization
    New features
    • New backbones: EfficientNetB[0-7]
    • New loss function: Focal loss
    • New metrics: Precision, Recall
    API changes
    • get_preprocessing moved from sm.backbones.get_preprocessing to sm.get_preprocessing
  • v1.0.0b1(Aug 9, 2019)

    • Support for keras and tf.keras
    • Focal loss; precision and recall metrics
    • New losses functionality: aggregation and multiplication by factor
    • NCHW and NHWC support
    • Removed pure tf operations to work with other keras backends
    • Reduced a number of custom objects for better models serialization and deserialization
  • v0.2.1(May 23, 2019)

    Areas of improvement
    • Added set_regularization function
    • Added beta argument to dice loss
    • Added threshold argument for metrics
    • Fixed preprocess_input for mobilenets
    • Fixed missing parameter interpolation in ResizeImage layer config
    • Some minor improvements in docs, fixed typos