LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation (NeurIPS 2021 Datasets and Benchmarks Track)

Overview

LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation

by Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, and Yanfei Zhong


This is an official implementation of LoveDA from our NeurIPS 2021 paper "LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation".

Citation

If you use LoveDA in your research, please cite our NeurIPS 2021 paper.

    @inproceedings{wang2021loveda,
        title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
        author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
        booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
        year={2021},
        url={https://openreview.net/forum?id=bLBIbVaGDu}
    }

Dataset

Coming Soon!

Comments
  • Bad CBST result

    Hello, we re-ran cbst_train with the default settings you provided, but got bad results as shown in the figure, even worse than the source-only method. I wonder about the stability of CBST training and would appreciate it if you could provide the CBST training log. Thank you very much! [Attachment: 屏幕截图 2021-11-11 112454.png (screenshot, upload incomplete)]

    bug 
    opened by Luffy03 14
  • About the accuracy of the CodaLab website

    Why is the domain adaptation mIoU on the CodaLab site so high? Shouldn't the "Oracle" mIoU reported in the paper be the highest achievable mIoU for this domain adaptation task?

    question 
    opened by Hcshenziyang 6
  • Results submitted to CodaLab

    The results submitted to CodaLab get a zero score and zero ExecutionTime. I wonder whether something is wrong with CodaLab or it is just my own mistake. The output class indices are 0–6 with 1024×1024 pixels.

    question 
    opened by Luffy03 6
  • Invitation to incorporate the LoveDA dataset into MMSegmentation

    Hi, I am a member of OpenMMLab, which develops MMSegmentation. Our vision is to provide up-to-date methods and datasets (i.e., benchmarks) for researchers and the community around the world.

    First, congratulations on the NeurIPS'21 acceptance. I think this dataset and benchmark will definitely help the remote sensing image field, where semantic segmentation plays an important role.

    Frankly speaking, right now we do not have much spare human capacity. Would you like to help us incorporate your dataset into MMSegmentation? We appreciate all contributors and users; here are our contributing details.

    I think if LoveDA is provided by MMSegmentation, it could let more people use and cite this excellent work, especially those who want to establish a standard segmentation benchmark.

    Looking forward to your reply. Wish you all the best.

    Best,

    good first issue 
    opened by MengzhangLI 6
  • Potential shift in class labels

    Following up on the discussion from #23, I was wondering whether, in the context of the semantic segmentation task, there could be a shift in class labels between the data on which the pretrained model hrnetw32.pth was trained and the data provided in this repo.

    Here I have visualised the true and predicted segmentations on training image 1338 for two different COLOR_MAPs from the repo (render.py and data.loveda.py):

    [Screenshots: 2022-03-26 at 10:06:23 and 10:06:31]

    Based on the input image we can see that the colours are correct for the top-left and bottom-right visualisations. Also, the black colour in the top-right image corresponds to the IGNORE label with RGB values (0,0,0), while in the bottom-left the black colour has RGB values (7,7,7). This seems to be because the COLOR_MAP in data.loveda.py only has 7 classes, indexed 0–6, so agriculture, which has label 7 in the mask images, is not colour-mapped.

    This seems to be related to the difference between labels in the current repo:

    Category labels: background – 1, building – 2, road – 3, water – 4, barren – 5, forest – 6, agriculture – 7. The no-data regions were assigned 0, which should be ignored. The provided data loader will help you construct your pipeline.

    and the ones described on CodaLab:

    Classes indexes: Background - 0, Building - 1, Road - 2, Water - 3, Barren - 4, Forest - 5, Agriculture - 6

    Could this class label offset be the case, or is there perhaps an alternative explanation that I have not thought of?
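    A minimal sketch of reconciling the two conventions, assuming the repo's masks use 0 for no-data and 1–7 for the classes while CodaLab expects 0–6 (the helper below is hypothetical, not from the repo):

    import numpy as np

    IGNORE_INDEX = 255  # assumed sentinel for no-data pixels

    def repo_to_codalab(mask: np.ndarray) -> np.ndarray:
        # Repo masks: 0 = no-data, 1..7 = background..agriculture.
        # CodaLab masks: 0..6 = background..agriculture.
        out = mask.astype(np.int32) - 1   # shift 1..7 down to 0..6
        out[mask == 0] = IGNORE_INDEX     # keep no-data out of the 0..6 range
        return out.astype(np.uint8)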

    question 
    opened by keliive 3
  • Dataset links for Google Drive return a 404 error

    The Google Drive links to the dataset in this repository's README.md, as well as on the competition page, are broken as of 30-01-2022 and return a 404 error. Please update them with working links.

    opened by AnkushMalaker 3
  • Different resolutions in training and testing

    I found that in the training process the input resolution is 512×512, while in the test phase it is 1024×1024. Could you please tell me why?

    question 
    opened by Luffy03 3
  • Meaning of line 228 in Unsupervised_Domian_Adaptation/utils/tools.py

    Hello,

    Thank you very much for making your excellent work available to the public.

    May I ask the meaning of line 228 in tools.py for Unsupervised Domain Adaptation? When running bash ./scripts/predict_cbst.sh, it raises AttributeError: 'NoneType' object has no attribute 'info'. This error comes from line 228 together with the default setting _default_logger=None, so I wonder what this line is for. I would also like to let you know that after commenting out line 228, the command runs successfully.
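    For reference, a defensive pattern like the following (a sketch, not the repo's actual code) would avoid calling .info on a None logger:

    import logging

    _default_logger = None  # module-level default, as described above

    def info(msg):
        # Fall back to a standard logger when no default logger was configured.
        logger = _default_logger or logging.getLogger(__name__)
        logger.info(msg)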

    Many thanks for your help.

    opened by simonep1052 2
  • [Request] Release the CodaLab evaluation script

    Would it be possible to release the evaluation script from CodaLab? The file format details are a bit confusing. For example, if I set empty regions as transparent or embed a color palette within the image, the evaluation script shows this warning:

    /opt/conda/lib/python2.7/site-packages/PIL/Image.py:870: UserWarning: Palette images with Transparency   expressed in bytes should be converted to RGBA images
      'to RGBA images')
    

    Even if I remove the color palette I get the following error:

    Traceback (most recent call last):
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 157, in <module>
        metric.forward(gt[valid_inds], mask[valid_inds])
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 22, in forward
        cm = sparse.coo_matrix((v, (y_true, y_pred)), shape=(self.num_classes, self.num_classes), dtype=np.float32)
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 182, in __init__
        self._check()
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 219, in _check
        nnz = self.nnz
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 196, in getnnz
        raise ValueError('row, column, and data array must all be the '
    ValueError: row, column, and data array must all be the same length
    

    I made sure all my images are 1024 × 1024 with a single uint8 channel. The class ids have been assigned as per the specification, with empty regions assigned the value 15.

    Classes indexes

    Background - 0
    Building - 1
    Road - 2
    Water - 3
    Barren - 4
    Forest - 5
    Agriculture - 6
    

    So, it would be helpful to see the evaluation script in order to generate compatible prediction images.
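    A sketch of writing a submission mask, under the assumption that the grader expects a flat single-channel uint8 PNG containing only the values 0–6 (no palette, no alpha; the helper name is hypothetical):

    import numpy as np
    from PIL import Image

    def save_prediction(mask: np.ndarray, path: str) -> None:
        # mask: (1024, 1024) array of class ids in 0..6
        assert mask.shape == (1024, 1024) and mask.dtype == np.uint8
        assert mask.max() <= 6, "values outside 0..6 may break the grader"
        # Mode "L" is plain 8-bit grayscale: no palette, no transparency.
        Image.fromarray(mask, mode="L").save(path)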

    opened by digital-idiot 2
  • Can you provide the pre-trained weights for the adversarial learning methods?

    Hi, I would like to use the visualized results of AdaptSeg and CLAN for comparison. Could you provide the pre-trained weights (Rural to Urban) for these two networks?

    opened by csliujw 2
  • Running the pretrained model without CUDA

    Hi,

    Is there a way to run ./scripts/predict_test.sh without CUDA?

    I am using the LoveDA dataset and the pretrained model weights hrnetw32.pth as described in the README.

    Initially I got the error urllib.error.HTTPError: HTTP Error 403: Forbidden, which I fixed by setting pretrained=False as recommended here: https://github.com/Junjue-Wang/LoveDA/issues/9.

    Then, when rerunning predict_test.sh, I got the error:

    Traceback (most recent call last):
      File "predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "predict.py", line 38, in predict_test
        model.cuda()
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
        param_applied = fn(param)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in <lambda>
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    

    I then commented out line 38: https://github.com/Junjue-Wang/LoveDA/blob/4d574ce08f84cbc8d27becf2bd9dce8fbb7f50f8/Semantic_Segmentation/predict.py#L38 and, after rerunning predict_test.sh, got the output:

    Load model!
    INFO:data.loveda:./LoveDA/Val/Urban/images_png -- Dataset images: 0
    INFO:data.loveda:./LoveDA/Val/Rural/images_png -- Dataset images: 0
    INFO:ever.core.logger:HRNetEncoder: pretrained = False
    0it [00:00, ?it/s]
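    For reference, a device-agnostic alternative to commenting out model.cuda() is to move the model to whichever device is available (a sketch, assuming model is the loaded network):

    import torch

    # Use the GPU when available; otherwise stay on the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)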
    
    question 
    opened by keliive 2
  • bash eval_hrnetw32.sh Error!

    Traceback (most recent call last):
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 37, in predict_test
        model.load_state_dict(model_state_dict)
      File "/home/libowen/.conda/envs/bw/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for HRNetFusion:
        Missing key(s) in state_dict: "backbone.hrnet.conv1.weight", "backbone.hrnet.bn1.weight", "backbone.hrnet.bn1.bias", "backbone.hrnet.bn1.running_mean", ...

    question 
    opened by kukujoyyo 1
  • predict.py problem

    I downloaded the pretrained weights and used predict.py to test some images, but hit this bug. What is the problem with fuse_layers?

      File "test4/Road/LoveDA-master/Semantic_Segmentation/module/baseline/base_hrnet/_hrnet.py", line 394, in forward
        y = y + self.fuse_layers[i][j](x[j])
    RuntimeError: The size of tensor a (500) must match the size of tensor b (504) at non-singleton dimension 3

    question 
    opened by Acid-knight 3
  • Can this work be run with one GPU?

    **Can we run this work with one GPU? If so, how should the parameters be set?**

    I've got the issue below:

    PS F:\Models\LoveDA-master\Semantic_Segmentation> bash ./scripts/train_hrnetw32.sh
    NOTE: Redirects are currently not supported in Windows or MacOs.
    Init Trainer
    Set Seed Torch
    Traceback (most recent call last):
      File "train.py", line 79, in <module>
        trainer = er.trainer.get_trainer('th_amp_ddp')()
      File "D:\ProgramData\Anaconda3\lib\site-packages\ever\api\trainer\th_amp_ddp_trainer.py", line 77, in __init__
        torch.cuda.set_device(self.args.local_rank)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\__init__.py", line 311, in set_device
        device = _get_device_index(device)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\_utils.py", line 34, in _get_device_index
        return _torch_get_device_index(device, optional, allow_cpu)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\_utils.py", line 537, in _get_device_index
        'or an integer, but got:{}'.format(device))
    ValueError: Expected a torch.device with a specified index or an integer, but got:None
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 108252) of binary: D:\ProgramData\Anaconda3\python.exe
    Traceback (most recent call last):
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\ProgramData\Anaconda3\Scripts\torchrun.exe\__main__.py", line 7, in <module>
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 345, in wrapper
        return f(*args, **kwargs)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 724, in main
        run(args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 718, in run
        )(*cmd_args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 247, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    train.py FAILED

    Failures: <NO_OTHER_FAILURES>

    Root Cause (first observed failure):
    [0]:
      time      : 2022-11-13_13:10:33
      host      : KWPAACQRFTY8V05
      rank      : 0 (local_rank: 0)
      exitcode  : 1 (pid: 108252)
      error_file: <N/A>
      traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
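    The ValueError suggests the launcher never populated args.local_rank, so torch.cuda.set_device received None. A defensive sketch for single-GPU runs (hypothetical, not the repo's code) that defaults to device 0 when no local rank is supplied:

    import os
    import torch

    # torchrun exports LOCAL_RANK; fall back to GPU 0 for single-GPU runs.
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)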

    question 
    opened by kukujoyyo 1
  • "No such file" problem when training the ST 2urban scripts

    When training the self-training 2urban scripts, such as CBST_train.py and IAST_train.py, there is a problem: FileNotFoundError: No such file: '/home/xxx/ssuda/UDA/log/cbst/2urban/pseudo_label/3814.png'. I guess this is because the batch size is set to 2, and as expected the problem is solved when the batch size is changed to 1.

    So, I wonder whether this is a bug or something else?

    Thanks for your excellent work!

    question 
    opened by lyhnsn 2
Releases
  • v0.2.0-alpha

Owner
Kingdrone (Deep learning in RS)