TGS Salt Identification Challenge

Overview

This is an open solution to the TGS Salt Identification Challenge.

Note

Unfortunately, we can no longer provide support for this repo. It should still work, but if it doesn't, we cannot really help.

More competitions 🎇

Check our collection of public projects 🎁, where you can find multiple Kaggle competitions with code, experiments and outputs.

Our goals

We are building an entirely open solution to this competition. Specifically:

  1. Learning from the process - posting updates about new ideas, code and experiments is the best way to learn data science. Our activity is especially useful for people who want to enter the competition but lack the appropriate experience.
  2. Encourage more Kagglers to start working on this competition.
  3. Deliver an open source solution with no strings attached. The code is available in our GitHub repository 💻 . This solution should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments. We care about clean code 😃
  4. We are opening our experiments as well: everybody can have a live preview of our experiments, parameters, code, etc. Check: TGS Salt Identification Challenge 📈 or the screenshot below.
Train and validation monitor 📊 (training monitor screenshot)

Disclaimer

In this open source solution you will find references to neptune.ai. It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using neptune.ai is not necessary to proceed with this solution. You may run it as a plain Python script 🐍 .

How to start?

Learn about our solutions

  1. Check the Kaggle forum and participate in the discussions.
  2. See solutions below:
Link to Experiments    CV      LB      Open
solution 1             0.413   0.745   True
solution 2             0.794   0.798   True
solution 3             0.807   0.801   True
solution 4             0.802   0.809   True
solution 5             0.804   0.813   True
solution 6             0.819   0.824   True
solution 7             0.829   0.837   True
solution 8             0.830   0.845   True
solution 9             0.853   0.849   True

Start experimenting with ready-to-use code

You can jump-start your participation in the competition by using our starter pack. The installation instructions below will guide you through the setup.

Installation

Clone repository

git clone https://github.com/minerva-ml/open-solution-salt-identification.git

Set-up environment

You can set up the project with the default environment variables and the open NEPTUNE_API_TOKEN by running:

source Makefile

We suggest at least reading the step-by-step instructions below to know what is happening.

Install the conda environment salt:

conda env create -f environment.yml

After it is installed, you can activate/deactivate it by running:

conda activate salt
conda deactivate

Register at neptune.ai (if you wish to use it). Even if you don't register, you can still see your experiment in Neptune: just go to the shared/showroom project and find it.

Set the environment variables NEPTUNE_API_TOKEN and CONFIG_PATH.

If you are using the default neptune.yaml config then run:

export CONFIG_PATH=neptune.yaml

otherwise, point it at your own config file.

Registered in Neptune:

Set NEPTUNE_API_TOKEN variable with your personal token:

export NEPTUNE_API_TOKEN=your_account_token

Create a new project in Neptune, then open your config file (neptune.yaml) and change the project name:

project: USER_NAME/PROJECT_NAME

Not registered in Neptune:

Use the open (shared) token:

export NEPTUNE_API_TOKEN=eyJhcGlfYWRkcmVzcyI6Imh0dHBzOi8vdWkubmVwdHVuZS5tbCIsImFwaV9rZXkiOiJiNzA2YmM4Zi03NmY5LTRjMmUtOTM5ZC00YmEwMzZmOTMyZTQifQ==

Create the data folder structure and set the data paths in your config file (neptune.yaml).

Suggested directory structure:

project
|--   README.md
|-- ...
|-- data
    |-- raw
         |-- train 
            |-- images 
            |-- masks
         |-- test 
            |-- images
         |-- train.csv
         |-- sample_submission.csv
    |-- meta
        |-- depths.csv
        |-- metadata.csv # this is generated
        |-- auxiliary_metadata.csv # this is generated
    |-- stacking_data
        |-- out_of_folds_predictions # put oof predictions for multiple models/pipelines here
    |-- experiments
        |-- baseline # this is where your experiment files will be dumped
            |-- checkpoints # neural network checkpoints
            |-- transformers # serialized transformers after fitting
            |-- outputs # outputs of transformers if you specified save_output=True anywhere
            |-- out_of_fold_train_predictions.pkl # oof predictions on train
            |-- out_of_fold_test_predictions.pkl # oof predictions on test
            |-- submission.csv
        |-- empty_non_empty 
        |-- new_idea_exp 
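
A minimal Python sketch for creating this structure from the repository root (the folder names follow the suggested layout above and are an assumption; adjust them if you change the paths in neptune.yaml):

import os

# Folders from the suggested layout above.
folders = [
    'data/raw/train/images',
    'data/raw/train/masks',
    'data/raw/test/images',
    'data/meta',
    'data/stacking_data/out_of_folds_predictions',
    'data/experiments',
]
for folder in folders:
    os.makedirs(folder, exist_ok=True)  # create the folder and any missing parents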

In the neptune.yaml config file, change the data paths if you decide on a different structure:

  # Data Paths
  train_images_dir: data/raw/train
  test_images_dir: data/raw/test
  metadata_filepath: data/meta/metadata.csv
  depths_filepath: data/meta/depths.csv
  auxiliary_metadata_filepath: data/meta/auxiliary_metadata.csv
  stacking_data_dir: data/stacking_data

Run an experiment based on U-Net:

Prepare metadata:

python prepare_metadata.py
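
Once the script finishes, you can sanity-check the generated file. An illustrative snippet; the path is the default metadata_filepath from neptune.yaml:

import pandas as pd

# Quick look at the generated metadata file.
metadata = pd.read_csv('data/meta/metadata.csv')
print(metadata.shape)
print(metadata.head())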

Training and inference happen in main.py. Whenever you try a new idea, make sure to change the name of the experiment:

EXPERIMENT_NAME = 'baseline'

to a new name.

python main.py

You can always change the pipeline you want to run in main.py. For example, if you want to run just training and evaluation, you can change main.py to:

if __name__ == '__main__':
    train_evaluate_cv()
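
Other top-level pipelines defined in main.py can be switched in the same way. For example, to also run prediction after cross-validated training and evaluation, you could call the train_evaluate_predict_cv function instead (this is the pipeline that appears in several tracebacks in the comments below):

if __name__ == '__main__':
    train_evaluate_predict_cv()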

References

1. Lovász Loss

@InProceedings{Berman_2018_CVPR,
author = {Berman, Maxim and Rannen Triki, Amal and Blaschko, Matthew B.},
title = {The Lovász-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}

Get involved

You are welcome to contribute your code and ideas to this open solution. To get started:

  1. Check the competition project on GitHub to see what we are working on right now.
  2. Express your interest in a particular task by writing a comment in this task, or by creating a new one with your fresh idea.
  3. We will get back to you quickly so that we can start working together.
  4. Check CONTRIBUTING for more information.

User support

There are several ways to seek help:

  1. Kaggle discussion is our primary way of communication.
  2. Submit an issue directly in this repo.
Comments
  • Conflicts between open-solution-salt-identification and ipython

    Hi, users are unable to run open-solution-salt-identification due to a dependency conflict with the ipython package. As shown in the following full dependency graph of open-solution-salt-identification, this project directly requires ipython==6.3.1, while steppy==0.1.6 requires ipython>=6.4.0. According to pip's "first found wins" installation strategy, ipython 6.3.1 is the version that actually gets installed. However, ipython 6.3.1 does not satisfy ipython>=6.4.0.

    Dependency tree------

    open-solution-salt-identification-master<version range:>
    | +- attrdict<version range:==2.0.0>
    | | +- six<version range:>
    | +- cffi<version range:==1.11.5>
    | +- click<version range:==6.7>
    | +- cython<version range:==0.28.2>
    | +- imageio<version range:==2.2.0>
    | | +- numpy<version range:>
    | | +- pillow<version range:>
    | +- imgaug<version range:==0.2.5>
    | | +- numpy<version range:>=1.7.0>
    | | +- scikit-image<version range:>=0.11.0>
    | | +- scipy<version range:>
    | | +- six<version range:>
    | +- ipython<version range:==6.3.1>
    | +- kaggle<version range:==1.3.11.1>
    | | +- certifi<version range:>
    | | +- python-dateutil<version range:>
    | | +- requests<version range:>
    | | +- six<version range:>=1.10>
    | | +- tqdm<version range:>
    | | +- urllib3<version range:>=1.15,<1.23.0>
    | +- neptune-cli<version range:>
    | | +- argcomplete<version range:>=1.9.3>
    | | +- enum34<version range:>=1.1.6>
    | | +- flask<version range:==0.12.0>
    | | +- future<version range:>=0.16.0>
    | | +- gitpython<version range:>=2.0.8>
    | | +- humanize<version range:>=0.5.1>
    | | +- kitchen<version range:>=1.2.4>
    | | +- more-itertools<version range:>=4.1.0>
    | | +- nvidia-ml-py3<version range:>=7.352.0>
    | | +- pathlib2<version range:==2.3.0>
    | | +- pillow<version range:>=1.7.6>
    | | +- psutil<version range:>=5.4.3>
    | | +- pyjwt<version range:>=1.5.2>
    | | +- pykwalify<version range:<1.6.0,>=1.5.2>
    | | | +- docopt<version range:>=0.6.2>
    | | | +- python-dateutil<version range:>=2.4.2>
    | | | +- pyyaml<version range:>=3.11>
    | | +- raven<version range:>=6.1.0>
    | | +- requests<version range:>=2.11.1>
    | | +- requests-oauthlib<version range:>=0.8.0>
    | | +- terminaltables<version range:>=2.1.0,<3.0.0>
    | | +- tqdm<version range:>=4.11.2>
    | | +- voluptuous<version range:>=0.9.3>
    | | +- websocket-client<version range:>=0.35.0>
    | +- numpy<version range:==1.14.0>
    | +- opencv-python<version range:==3.4.0.12>
    | +- pandas<version range:==0.22.0>
    | +- pillow<version range:==5.1.0>
    | +- pretrainedmodels<version range:==0.7.0>
    | | +- munch<version range:>
    | | | +- six<version range:>
    | | +- torch<version range:>
    | | +- torchvision<version range:>
    | +- pycocotools<version range:==2.0.0>
    | +- pydot-ng<version range:==1.0.0>
    | | +- pyparsing<version range:>=2.0.1>
    | +- pyyaml<version range:>=4.2b1>
    | +- scikit-image<version range:==0.13.1>
    | +- scikit-learn<version range:==0.19.2>
    | +- scipy<version range:==1.0.0>
    | +- steppy<version range:==0.1.6>
    | | +- ipython<version range:>=6.4.0>
    | | +- numpy<version range:>=1.14.0>
    | | +- pydot-ng<version range:>=1.0.0>
    | | | +- pyparsing<version range:>=2.0.1>
    | | +- pytest<version range:>=3.6.0>
    | | +- scikit-learn<version range:>=0.19.0>
    | | +- scipy<version range:>=1.0.0>
    | | +- setuptools<version range:>=39.2.0>
    | | +- typing<version range:>=3.6.4>
    | +- steppy-toolkit<version range:==0.1.5>
    | | +- attrdict<version range:>=2.0.0>
    | | | +- six<version range:>
    | | +- neptune-cli<version range:>=2.8.5>
    | | | +- argcomplete<version range:>=1.9.3>
    | | | +- enum34<version range:>=1.1.6>
    | | | +- flask<version range:==0.12.0>
    | | | +- future<version range:>=0.16.0>
    | | | +- gitpython<version range:>=2.0.8>
    | | | +- humanize<version range:>=0.5.1>
    | | | +- kitchen<version range:>=1.2.4>
    | | | +- more-itertools<version range:>=4.1.0>
    | | | +- nvidia-ml-py3<version range:>=7.352.0>
    | | | +- pathlib2<version range:==2.3.0>
    | | | +- pillow<version range:>=1.7.6>
    | | | +- psutil<version range:>=5.4.3>
    | | | +- pyjwt<version range:>=1.5.2>
    | | | +- pykwalify<version range:<1.6.0,>=1.5.2>
    | | | | +- docopt<version range:>=0.6.2>
    | | | | +- python-dateutil<version range:>=2.4.2>
    | | | | +- pyyaml<version range:>=3.11>
    | | | +- raven<version range:>=6.1.0>
    | | | +- requests<version range:>=2.11.1>
    | | | +- requests-oauthlib<version range:>=0.8.0>
    | | | +- terminaltables<version range:>=2.1.0,<3.0.0>
    | | | +- tqdm<version range:>=4.11.2>
    | | | +- voluptuous<version range:>=0.9.3>
    | | | +- websocket-client<version range:>=0.35.0>
    | | +- numpy<version range:>=1.14.3>
    | | +- pandas<version range:>=0.23.0>
    | | +- pytest<version range:>=3.6.0>
    | | +- setuptools<version range:>=39.2.0>
    | | +- steppy<version range:>=0.1.4>
    | | | +- ipython<version range:>=6.4.0>
    | | | +- numpy<version range:>=1.14.0>
    | | | +- pydot-ng<version range:>=1.0.0>
    | | | | +- pyparsing<version range:>=2.0.1>
    | | | +- pytest<version range:>=3.6.0>
    | | | +- scikit-learn<version range:>=0.19.0>
    | | | +- scipy<version range:>=1.0.0>
    | | | +- setuptools<version range:>=39.2.0>
    | | | +- typing<version range:>=3.6.4>
    | +- torch<version range:==0.3.1>
    | +- torchvision<version range:==0.2.0>
    | +- tqdm<version range:==4.23.0>
    
    

    Solution

    Could you please loosen your version pin on ipython (change ipython==6.3.1 to >=6.3.1)? Thanks for your help. Best, Neolith

    opened by NeolithEra 5
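
    A possible local workaround, until the pin is relaxed upstream, is to edit requirements.txt yourself along the lines suggested above. This is a hypothetical excerpt, not the repo's actual file; a similar relaxation of the pandas pin would address the conflict reported below:

    # requirements.txt (excerpt, hypothetical)
    ipython>=6.3.1    # was ipython==6.3.1; pip can now pick a version that also satisfies steppy's ipython>=6.4.0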
  • Problem with submission.csv

    There is a problem with the submission.csv file: Kaggle is not accepting it and shows the error

    Evaluation Exception: Index was outside the bounds of the array.

    There might be some problem with the RLE encoding.

    opened by ymittal23 5
  • Installation fails due to another conflicting pandas version

    Hi, another dependency hell issue in open-solution-salt-identification. Users are unable to run open-solution-salt-identification due to a dependency conflict with the pandas package. As in the dependency graph shown for the ipython conflict above, this project directly requires pandas==0.22.0, while steppy-toolkit==0.1.5 requires pandas>=0.23.0. According to pip's "first found wins" installation strategy, pandas==0.22.0 is the version that actually gets installed. However, pandas==0.22.0 does not satisfy pandas>=0.23.0.

    Dependency tree: identical to the one shown in the ipython conflict issue above.

    Solution

    Relax the version range of pandas (change pandas==0.22.0 to >=0.22.0).

    Thanks for your help. Best, Neolith

    opened by NeolithEra 3
  • No module named 'neptune'

    Firstly I run "pip install -r requirements.txt". Then I try to run "python main.py", but got a error "No module named 'neptune'". Did I miss something important ?

    opened by qinhui99 2
  • TypeError: can't multiply sequence by non-int of type 'float' while training

    I'm trying to run solution-3 locally.

    Full stack of the error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input-5-2da0ffaf5447> in <module>()
    ----> 1 train()
    
    <ipython-input-3-0a8b44c1d62a> in train()
        314     pipeline_network = unet(config=CONFIG, train_mode=True)
        315     pipeline_network.clean_cache()
    --> 316     pipeline_network.fit_transform(data)
        317     pipeline_network.clean_cache()
        318 
    
    ~/anaconda3/lib/python3.6/site-packages/steppy/base.py in fit_transform(self, data)
        321             else:
        322                 step_inputs = self._unpack(step_inputs)
    --> 323             step_output_data = self._cached_fit_transform(step_inputs)
        324         return step_output_data
        325 
    
    ~/anaconda3/lib/python3.6/site-packages/steppy/base.py in _cached_fit_transform(self, step_inputs)
        441             else:
        442                 logger.info('Step {}, fitting and transforming...'.format(self.name))
    --> 443                 step_output_data = self.transformer.fit_transform(**step_inputs)
        444                 logger.info('Step {}, persisting transformer to the {}'
        445                             .format(self.name, self.exp_dir_transformers_step))
    
    ~/anaconda3/lib/python3.6/site-packages/steppy/base.py in fit_transform(self, *args, **kwargs)
        603             dict: outputs
        604         """
    --> 605         self.fit(*args, **kwargs)
        606         return self.transform(*args, **kwargs)
        607 
    
    ~/Desktop/ml/salt/open-solution-salt-identification-master/common_blocks/models.py in fit(self, datagen, validation_datagen, meta_valid)
         61             self.model = self.model.cuda()
         62 
    ---> 63         self.callbacks.set_params(self, validation_datagen=validation_datagen, meta_valid=meta_valid)
         64         self.callbacks.on_train_begin()
         65 
    
    ~/Desktop/ml/salt/open-solution-salt-identification-master/common_blocks/callbacks.py in set_params(self, *args, **kwargs)
         89     def set_params(self, *args, **kwargs):
         90         for callback in self.callbacks:
    ---> 91             callback.set_params(*args, **kwargs)
         92 
         93     def on_train_begin(self, *args, **kwargs):
    
    ~/Desktop/ml/salt/open-solution-salt-identification-master/common_blocks/callbacks.py in set_params(self, transformer, validation_datagen, *args, **kwargs)
        235         self.optimizer = transformer.optimizer
        236         self.loss_function = transformer.loss_function
    --> 237         self.lr_scheduler = ExponentialLR(self.optimizer, self.gamma, last_epoch=-1)
        238 
        239     def on_train_begin(self, *args, **kwargs):
    
    ~/anaconda3/lib/python3.6/site-packages/torch/optim/lr_scheduler.py in __init__(self, optimizer, gamma, last_epoch)
        155     def __init__(self, optimizer, gamma, last_epoch=-1):
        156         self.gamma = gamma
    --> 157         super(ExponentialLR, self).__init__(optimizer, last_epoch)
        158 
        159     def get_lr(self):
    
    ~/anaconda3/lib/python3.6/site-packages/torch/optim/lr_scheduler.py in __init__(self, optimizer, last_epoch)
         19                                    "in param_groups[{}] when resuming an optimizer".format(i))
         20         self.base_lrs = list(map(lambda group: group['initial_lr'], optimizer.param_groups))
    ---> 21         self.step(last_epoch + 1)
         22         self.last_epoch = last_epoch
         23 
    
    ~/anaconda3/lib/python3.6/site-packages/torch/optim/lr_scheduler.py in step(self, epoch)
         29             epoch = self.last_epoch + 1
         30         self.last_epoch = epoch
    ---> 31         for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
         32             param_group['lr'] = lr
         33 
    
    ~/anaconda3/lib/python3.6/site-packages/torch/optim/lr_scheduler.py in get_lr(self)
        159     def get_lr(self):
        160         return [base_lr * self.gamma ** self.last_epoch
    --> 161                 for base_lr in self.base_lrs]
        162 
        163 
    
    ~/anaconda3/lib/python3.6/site-packages/torch/optim/lr_scheduler.py in <listcomp>(.0)
        159     def get_lr(self):
        160         return [base_lr * self.gamma ** self.last_epoch
    --> 161                 for base_lr in self.base_lrs]
        162 
        163 
    
    TypeError: can't multiply sequence by non-int of type 'float'
    

    Stack Overflow suggests converting the list to a numpy array this way: np.asarray(coff) * C

    • but actually I'm kind of confused about where to apply it
    opened by Diyago 2
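
    One place the suggested cast could go is wherever the learning rate from the experiment config reaches the optimizer: the traceback shows base_lr * self.gamma failing because base_lr is a sequence, which suggests 'lr'/'initial_lr' in param_groups ended up as a list or string instead of a float. A small illustrative helper (the names are hypothetical, not the repo's actual code):

    def as_float_lr(lr):
        # Collapse a single-element list/tuple and cast strings so that the optimizer's
        # param_groups store a plain float and ExponentialLR.get_lr can multiply it.
        if isinstance(lr, (list, tuple)):
            lr = lr[0]
        return float(lr)

    # hypothetical usage: optimizer = torch.optim.Adam(model.parameters(), lr=as_float_lr(config_lr))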
  • Can not find the 'metadata.csv'

    When I run the command 'python main.py -- train--pipeline_name unet', I get the error "Can not find the 'metadata.csv'". I checked all the files of salt-detection and still cannot find it. Did I miss something important?

    opened by qinhui99 2
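
    metadata.csv is not part of the Kaggle download; as described in the installation section above, it is generated into data/meta/ by prepare_metadata.py. A quick check, assuming the default neptune.yaml paths:

    import os

    # metadata.csv is produced by `python prepare_metadata.py`, not downloaded from Kaggle.
    if not os.path.exists('data/meta/metadata.csv'):
        print("metadata.csv is missing - run `python prepare_metadata.py` first")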
  • Lovasz loss

    Hi,

    Thank you for the code. I am attempting to run your code with the lovasz hinge loss, and run into the following error -- please let me know if there is a quick way to fix this.

    File "..../lovash_losses.py", line 105, in lovasz_hinge_flat errors = (1. - logits * Variable(signs)) RuntimeError: Variable data has to be a tensor, but got Variable

    Apologies if this is a simple question -- recently converted from Keras, so not that familiar with Torch.

    -Qing

    opened by yuanqing811 1
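
    A likely cause is running the loss on a PyTorch version newer than the pinned 0.3.1: from 0.4 onwards tensors are already Variables, so wrapping signs in Variable(...) again raises exactly this error. A minimal sketch of the failing computation without the wrapper (based on the quoted line, not on the repo's exact lovash_losses.py):

    import torch

    def lovasz_hinge_errors(logits, labels):
        # Hinge errors for the Lovasz hinge loss; on PyTorch >= 0.4 no Variable(...) wrapper
        # is needed around `signs`, which is what triggers the error quoted above.
        signs = 2.0 * labels.float() - 1.0
        errors = 1.0 - logits * signs
        return errors

    # hypothetical usage: errors = lovasz_hinge_errors(torch.randn(8), torch.randint(0, 2, (8,)))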
  • OSError: [Errno 101] Network is unreachable

    Could you tell me how to fix it? Thanks.

    Downloading: "https://download.pytorch.org/models/resnet152-b121ed2d.pth" to /ddn/home/.torch/models/resnet152-b121ed2d.pth Traceback (most recent call last): File "/ddn/data/enteESC[Denter/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/ddn/data/enteESC[Denter/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/ddn/data/enteESC[Denter/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/ddn/data/enteESC[Denter/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/ddn/data/enteESC[Denter/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/ddn/data/enteESC[Denter/lib/python3.6/http/client.py", line 964, in send self.connect() File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/http/client.py", line 1392, in connect super().connect() File "/ddn/data/enteESC[Denter/lib/python3.6/http/client.py", line 936, in connect (self.host,self.port), self.timeout, self.source_address) File "/ddn/data/enteESC[Denter/lib/python3.6/socket.py", line 724, in create_connection raise err File "/ddn/data/enteESC[Denter/lib/python3.6/socket.py", line 713, in create_connection sock.connect(sa) OSError: [Errno 101] Network is unreachable

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/ddn/home/lxdc43/kaggle/salt_identification/main.py", line 771, in <module>
        train_evaluate_predict_cv()
      File "/ddn/home/kaggle/salt_identification/main.py", line 504, in train_evaluate_predict_cv
        fold_id)
      File "/ddn/home/kaggle/salt_identification/main.py", line 600, in fold_fit_evaluate_predict_loop
        iou, iout, predicted_masks_valid = fold_fit_evaluate_loop(train_data_split, valid_data_split, fold_id)
      File "/ddn/home/kaggle/salt_identification/main.py", line 635, in fold_fit_evaluate_loop
        pipeline_network = unet(config=config, suffix='fold{}'.format(fold_id), train_mode=True)
      File "/ddn/home/kaggle/salt_identification/main.py", line 268, in unet
        transformer=models.PyTorchUNet(**config.model['unet']),
      File "/ddn/home/kaggle/salt_identification/common_blocks/models.py", line 50, in __init__
        self.set_model()
      File "/ddn/home/kaggle/salt_identification/common_blocks/models.py", line 160, in set_model
        **config['model_config'])
      File "/ddn/home/lxdc43/kaggle/salt_identification/common_blocks/unet_models.py", line 114, in __init__
        self.encoder = torchvision.models.resnet152(pretrained=pretrained)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/site-packages/torchvision/models/resnet.py", line 212, in resnet152
        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 64, in load_url
        _download_url_to_file(url, cached_file, hash_prefix)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 69, in _download_url_to_file
        u = urlopen(url)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/urllib/request.py", line 223, in urlopen
        return opener.open(url, data, timeout)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/urllib/request.py", line 526, in open
        response = self._open(req, data)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/urllib/request.py", line 544, in _open
        '_open', req)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/urllib/request.py", line 504, in _call_chain
        result = func(*args)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/urllib/request.py", line 1361, in https_open
        context=self._context, check_hostname=self._check_hostname)
      File "/ddn/data/lxdc43/enteESC[Denter/lib/python3.6/urllib/request.py", line 1320, in do_open
        raise URLError(err)
    urllib.error.URLError: <urlopen error [Errno 101] Network is unreachable>

    opened by OsloAI 1
  • IndexError: too many indices for array

    Could you tell me how to fix blew issues? Thank you very much.

    Traceback (most recent call last):
      File "/home/work/tsg/main.py", line 771, in <module>
        train_evaluate_predict_cv()
      File "/home/work/tsg/main.py", line 504, in train_evaluate_predict_cv
        fold_id)
      File "/home/work/tsg/main.py", line 600, in fold_fit_evaluate_predict_loop
        iou, iout, predicted_masks_valid = fold_fit_evaluate_loop(train_data_split, valid_data_split, fold_id)
      File "/home/work/tsg/main.py", line 637, in fold_fit_evaluate_loop
        pipeline_network.fit_transform(train_pipe_input)
      File "/home/anaconda3/lib/python3.6/site-packages/steppy/base.py", line 317, in fit_transform
        step_inputs[input_step.name] = input_step.fit_transform(data)
      File "/home/anaconda3/lib/python3.6/site-packages/steppy/base.py", line 323, in fit_transform
        step_output_data = self._cached_fit_transform(step_inputs)
      File "/home/work/tsg/common_blocks/utils.py", line 453, in _cached_fit_transform
        step_output_data = self.transformer.fit_transform(**step_inputs)
      File "/home/anaconda3/lib/python3.6/site-packages/steppy/base.py", line 605, in fit_transform
        self.fit(*args, **kwargs)
      File "/home/work/tsg/common_blocks/models.py", line 75, in fit
        self.callbacks.on_batch_end(metrics=metrics)
      File "/home/work/tsg/common_blocks/callbacks.py", line 120, in on_batch_end
        callback.on_batch_end(*args, **kwargs)
      File "/home/work/tsg/common_blocks/callbacks.py", line 151, in on_batch_end
        loss = loss.data.cpu().numpy()[0]
    IndexError: too many indices for array

    opened by OsloAI 1
  • KFoldBySortedValue issue

    Hello neptune-team, I'm a fellow Kaggler. I find your contribution very valuable and I'm exploring your code.

    I stumbled on KFoldBySortedValue while doing static analysis.

    Shouldn't

    https://github.com/neptune-ml/open-solution-salt-detection/blob/239016c742d913a6822d33edb6f3beaac959cc9b/src/utils.py#L416

    be

    def get_n_splits(self, X=None, y=None, groups=None):
            return self.n_splits
    

    Btw... did you get any improvement from sorting the dataset by depth?

    Thank you

    opened by thundo 1
  • index 0 is out of bounds for axis 0 with size 0

    Could you tell me how to fix this problem? thanks

    if: Expression Syntax.
    then: Command not found.
    2018-10-01 08-21-52 salt-detection >>> creating metadata
    neptune: Executing in Offline Mode.

    0%| | 0/2 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "/ddn/home/kaggle/salt_identification/main.py", line 770, in <module>
        prepare_metadata()
      File "/ddn/home/kaggle/salt_identification/main.py", line 351, in prepare_metadata
        depths_filepath=PARAMS.depths_filepath
      File "/ddn/home/kaggle/salt_identification/common_blocks/utils.py", line 134, in generate_metadata
        depth = depths[depths['id'] == image_id]['z'].values[0]
    IndexError: index 0 is out of bounds for axis 0 with size 0

    opened by OsloAI 1
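
    The failing line looks up the depth for an image id in depths.csv, so the IndexError suggests that some image id under data/raw is missing from that file. A small check you could run before prepare_metadata.py (paths assume the default neptune.yaml layout):

    import os
    import pandas as pd

    # Every training image id should have a row in depths.csv; print the ones that do not.
    depths = pd.read_csv('data/meta/depths.csv')
    image_ids = {os.path.splitext(name)[0] for name in os.listdir('data/raw/train/images')}
    missing = sorted(image_ids - set(depths['id']))
    print('{} image ids missing from depths.csv: {}'.format(len(missing), missing[:10]))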
  • Memory Allocation of Solution 4

    Hi~ Thanks for your team's great work. I implemented solution 4 on a local machine. However, I found something quite annoying in the cross-validation loop. I took two screenshots of the memory problem (images omitted here): the first shows memory usage during fold_0, the second during fold_1. Each training fold appears to cost an additional ~3 GB of RAM. I started this training process last night and went to bed; this morning I saw a "cannot allocate memory" error. Can you give me some tips on how to solve this problem?

    opened by xhh232018 1
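
    A possible mitigation (not the repo's actual loop) is to explicitly release the finished fold's objects before building the next pipeline, so Python and CUDA caches are freed between folds:

    import gc
    import torch

    # Between CV folds: drop the previous fold's pipeline, then force cleanup.
    # del pipeline_network          # after pipeline_network.clean_cache() in the fold loop
    gc.collect()                    # reclaim Python-side memory held by dropped objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()    # release cached GPU memory (available on recent PyTorch)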
  • Confusion Matrix

    As per the discussions on Kaggle, your implementation is the only one that is fully correct for the given metric, but there is one thing in your code that I couldn't understand. Here are the three functions:

    def compute_ious(gt, predictions):
        gt_ = get_segmentations(gt)
        predictions_ = get_segmentations(predictions)
    
        if len(gt_) == 0 and len(predictions_) == 0:
            return np.ones((1, 1))
        elif len(gt_) != 0 and len(predictions_) == 0:
            return np.zeros((1, 1))
        else:
            iscrowd = [0 for _ in predictions_]
            ious = cocomask.iou(gt_, predictions_, iscrowd)
            if not np.array(ious).size:
                ious = np.zeros((1, 1))
            return ious
    
    
    def compute_precision_at(ious, threshold):
        mx1 = np.max(ious, axis=0)
        mx2 = np.max(ious, axis=1)
        tp = np.sum(mx2 >= threshold)
        fp = np.sum(mx2 < threshold)
        fn = np.sum(mx1 < threshold)
        return float(tp) / (tp + fp + fn)
    
    def compute_eval_metric(gt, predictions):
        thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
        ious = compute_ious(gt, predictions)
        precisions = [compute_precision_at(ious, th) for th in thresholds]
        return sum(precisions) / len(precisions)
    
    

    Now, given that the compute_ious function works on a single prediction and its corresponding ground truth, ious will be a singleton array. Then, how are you calculating TP/FP from that? Am I missing something here?

    opened by AakashKumarNain 1
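
    For the singleton case asked about above, one way to see what happens is to trace compute_precision_at on a 1x1 ious matrix: tp, fp and fn each collapse to 0 or 1, so the precision at every threshold is either 1 or 0, and the per-image score is just the fraction of thresholds that the single mask-level IoU clears. A worked example, reusing the function quoted above with a made-up IoU of 0.72:

    import numpy as np

    def compute_precision_at(ious, threshold):
        mx1 = np.max(ious, axis=0)
        mx2 = np.max(ious, axis=1)
        tp = np.sum(mx2 >= threshold)
        fp = np.sum(mx2 < threshold)
        fn = np.sum(mx1 < threshold)
        return float(tp) / (tp + fp + fn)

    ious = np.array([[0.72]])  # single ground-truth mask vs single predicted mask
    thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
    precisions = [compute_precision_at(ious, th) for th in thresholds]
    print(precisions)                          # 1.0 for thresholds <= 0.7, then 0.0
    print(sum(precisions) / len(precisions))   # 0.5 -- the per-image metric value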
  • Cannot find the 'metadata.csv'

    After running the code, I'm sorry to find that 'metadata.csv' does not exist, and I could not find it on the TGS homepage either. I was wondering how to solve this problem.

    opened by dremofly 6
  • GPU utilization is 0 most of the time

    nvidia-smi gives 0% GPU-Util most of the time (in offline mode). Am I doing something wrong? I am using the same config file as in [solution-3]. CPU usage is high most of the time.

    opened by SergazyK 1
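
    A first sanity check (illustrative, not repo code) is whether CUDA is visible to PyTorch at all; if it is not, the model silently stays on the CPU and nvidia-smi will show 0% utilisation:

    import torch

    # If this prints False, training runs on the CPU and the GPU stays idle.
    print(torch.cuda.is_available())

    If CUDA is available, persistently low utilisation usually points to CPU-side data loading and augmentation being the bottleneck rather than the GPU itself.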
Releases: solution-6