MMFlow is an open source optical flow toolbox based on PyTorch

Overview


Documentation: https://mmflow.readthedocs.io/

Introduction

English | 简体中文

MMFlow is an open source optical flow toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+.

Demo video: mmflow_readme.mp4

Major features

  • The First Unified Framework for Optical Flow

    MMFlow is the first toolbox that provides a framework for unified implementation and evaluation of optical flow algorithms.

  • Flexible and Modular Design

    We decompose the flow estimation framework into different components, which makes it much easier and more flexible to build a new model by combining different modules (see the schematic config sketch after this list).

  • Plenty of Algorithms and Datasets Out of the Box

    The toolbox directly supports popular and contemporary optical flow models, e.g., FlowNet, PWC-Net, and RAFT, as well as representative datasets such as FlyingChairs, FlyingThings3D, Sintel, and KITTI.
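
The modular design is reflected directly in the configs. Below is a schematic sketch, not a verbatim MMFlow config file: the class names (PWCNet, PWCNetEncoder, PWCNetDecoder) follow MMFlow's registry naming, but the required per-module options are omitted, so treat it as an illustration of how an estimator is assembled from swappable parts.

    # Schematic only: a flow estimator is declared as a composition of modules,
    # so swapping the encoder or decoder dict yields a new model without
    # touching the training code. Required per-module options are omitted.
    model = dict(
        type='PWCNet',                       # flow estimator
        encoder=dict(type='PWCNetEncoder'),  # pyramid feature extractor
        decoder=dict(type='PWCNetDecoder'))  # flow decoder / refinement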

License

This project is released under the Apache 2.0 license.

Benchmark and model zoo

Results and models are available in the model zoo.

Supported methods:

Installation

Please refer to install.md for installation and to dataset_prepare for dataset preparation.

Getting Started

If you're new to optical flow, you can start with Learn the basics. If you're already familiar with it, check out getting_started.md to try out MMFlow.
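
As a quick taste before the tutorials, here is a minimal inference sketch. It reuses the PWC-Net Sintel config and checkpoint names that appear elsewhere on this page, and the write_flow/visualize_flow argument names follow the MMFlow docs; they may differ slightly across versions, so treat this as a sketch rather than canonical usage.

    from mmflow.apis import inference_model, init_model
    from mmflow.datasets import visualize_flow, write_flow

    # Build a model from a config and checkpoint, then estimate flow between
    # two consecutive frames from the demo folder.
    config_file = 'configs/pwcnet/pwcnet_ft_4x1_300k_sintel_final_384x768.py'
    checkpoint_file = 'checkpoints/pwcnet_ft_4x1_300k_sintel_final_384x768.pth'
    model = init_model(config_file, checkpoint_file, device='cuda:0')
    result = inference_model(model, 'demo/frame_0001.png', 'demo/frame_0002.png')

    # Save the raw flow field and a color-coded visualization.
    write_flow(result, flow_file='flow.flo')
    visualize_flow(result, save_file='flow_map.png')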

Refer to the tutorials below to dive deeper:

Contributing

We appreciate all contributions that improve MMFlow. Please refer to CONTRIBUTING.md in MMCV for details on the contributing guidelines.

Citation

If you use this toolbox or benchmark in your research, please cite this project.

@misc{2021mmflow,
    title={{MMFlow}: OpenMMLab Optical Flow Toolbox and Benchmark},
    author={MMFlow Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmflow}},
    year={2021}
}

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM Installs OpenMMLab Packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
Comments
  • Train PWC-Net with two gpus


    Thanks for your wonderful code!

    I have two GPUs. To reproduce PWC-Net, I only need to set samples_per_gpu=4 HERE and keep the other settings unchanged, and then run bash ./tools/dist_train.sh configs/pwcnet/pwcnet_8x1_slong_flyingchairs_384x448.py 2.

    Is that correct?
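
    For reference, a minimal derived config implementing that override could look like the hypothetical sketch below (the file name is made up, and the base config path is taken from the command above). Since the reference schedule is 8 GPUs x 1 sample, 2 GPUs x 4 samples keeps the total batch size at 8, so the learning rate schedule should not need rescaling.

        # my_pwcnet_2gpu.py  (hypothetical derived config)
        _base_ = ['configs/pwcnet/pwcnet_8x1_slong_flyingchairs_384x448.py']

        # 2 GPUs x 4 samples per GPU = total batch size 8, same as 8 GPUs x 1.
        data = dict(samples_per_gpu=4)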

    opened by lhao0301 23
  • Runtime error in gradient computation when running train.py


    Describe the bug Good evening, I wanted to launch train.py using a default config file for RAFT on a standard dataset (KITTI_2015). I followed the instructions to install MMFlow from source successfully.

    Reproduction

    python tools/train.py configs/raft/raft_8x2_50k_kitti2015_and_Aug_288x960.py \
    --load-from /home/s.starace/FlowNets/mmflow/checkpoints/raft/raft_8x2_100k_mixed_368x768.pth
    
    1. Did you make any modifications on the code or config? Did you understand what you have modified? I just changed the name of the symlink that I created under /data (uppercase).

    2. What dataset did you use? KITTI_2015

    Environment I launched the command on my PC and also on a small cluster, and the output error is the same.

    Error traceback See log attached: slurm-53090.out.txt

    Bug fix Not sure about it; it could be either a configuration issue in the Encoder/Decoder or a regression. I'll try train.py with other models as well and update the report if I understand the problem better.

    opened by Salvatore-tech 7
  • Operation Error [Assertion failed) !dsize.empty() in function 'resize']


    Running the official demo below raises no error, which shows that this is not an environment problem:

    from mmflow.apis import inference_model, init_model

    config_file = 'configs/pwcnet/pwcnet_ft_4x1_300k_sintel_final_384x768.py'
    checkpoint_file = 'checkpoints/pwcnet_ft_4x1_300k_sintel_final_384x768.pth'
    device = 'cuda:0'
    model = init_model(config_file, checkpoint_file, device=device)
    inference_model(model, 'demo/frame_0001.png', 'demo/frame_0002.png')

    Calling the video demo with the following command also works fine:

    python demo/video_demo.py "/home/thhicv/program/mmflow/demo/demo.mp4" 'configs/pwcnet/pwcnet_ft_4x1_300k_sintel_final_384x768.py' 'checkpoints/pwcnet_ft_4x1_300k_sintel_final_384x768.pth' /home/thhicv/program/mmflow/demo/xx1x.mp4

    When I change the video file, it raises an error:

    /home/thhicv/anaconda3/envs/mmflow/bin/python /home/thhicv/program/mmflow/demo/video_demo.py /home/thhicv/视频/flow/溪丁灯.mp4 configs/pwcnet/pwcnet_ft_4x1_300k_sintel_final_384x768.py checkpoints/pwcnet_ft_4x1_300k_sintel_final_384x768.pth /home/thhicv/视频/flow/溪丁灯.mp4_flow.mp4

    The error log is as follows:

    Traceback (most recent call last):
      File "/home/thhicv/program/mmflow/demo/video_demo.py", line 143, in <module>
        main(args)
      File "/home/thhicv/program/mmflow/demo/video_demo.py", line 83, in main
        result = inference_model(model, img1, img2)
      File "/home/thhicv/program/mmflow/mmflow/apis/inference.py", line 115, in inference_model
        data = test_pipeline(data)
      File "/home/thhicv/program/mmflow/mmflow/datasets/pipelines/compose.py", line 42, in __call__
        data = t(data)
      File "/home/thhicv/program/mmflow/mmflow/datasets/pipelines/transforms.py", line 428, in __call__
        imgs, scale_factor = self._resize_img(imgs)
      File "/home/thhicv/program/mmflow/mmflow/datasets/pipelines/transforms.py", line 451, in _resize_img
        img_ = mmcv.imresize(img, (newW, newH), return_scale=False)
      File "/home/thhicv/anaconda3/envs/mmflow/lib/python3.7/site-packages/mmcv/image/geometric.py", line 89, in imresize
        img, size, dst=out, interpolation=cv2_interp_codes[interpolation])
    cv2.error: OpenCV(4.5.3) /tmp/pip-req-build-l1r0y34w/opencv/modules/imgproc/src/resize.cpp:3688: error: (-215:Assertion failed) !dsize.empty() in function 'resize'
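
    The assertion comes from cv2.resize receiving an empty image, which usually means a frame could not be decoded from the new video (unsupported codec, broken file, or a path problem). A hypothetical pre-check, independent of MMFlow, is sketched below; the path is a placeholder.

        import cv2

        # Hypothetical sanity check: confirm every frame of the input video decodes.
        # Resizing an empty frame is what triggers "!dsize.empty()" in cv2.resize.
        cap = cv2.VideoCapture('/path/to/your_video.mp4')  # placeholder path
        num_ok = 0
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            assert frame is not None and frame.size > 0, f'empty frame at index {num_ok}'
            num_ok += 1
        cap.release()
        print(f'decoded {num_ok} frames')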

    opened by fusang1337 6
  • pwcnet to onnx


    import os
    import sys
    import argparse
    import torch
    from mmflow.apis import inference_model, init_model
    from mmflow.datasets import visualize_flow, write_flow
    
    def parser_func():
        parser = argparse.ArgumentParser()
        parser.add_argument('--config',default='configs/pwcnet/pwcnet_ft_4x1_300k_kitti_320x896.py',  help='Config file')
        parser.add_argument('--checkpoint', type=str, default='models/pwc/pwcnet_ft_4x1_300k_kitti_320x896.pth')
        parser.add_argument('--out_path', type=str, default='models/pwc/pwcnet_ft_4x1_300k_kitti_320x896.onnx')
        parser.add_argument(
            '--device', default='cuda:0', help='Device used for inference')
        parser.add_argument('--batch_size', type=int, default=1)
        args = parser.parse_args()
        os.makedirs(os.path.dirname(args.out_path), exist_ok=True)
        return args
    
    def export_onnx():
        model = init_model(args.config, args.checkpoint, device=args.device)
    
        input_names = ['encoder.layers.0.layers.0.conv']
        output_names = ['decoder.decoders.level6.upfeat_layer']
    
        dummy_input = torch.randn(args.batch_size, 6, 320, 896).cuda()
    
        torch.onnx.export(model, dummy_input, args.out_path,
                          input_names=input_names, output_names=output_names, opset_version=11,
                          verbose=True)
    
    if __name__ == '__main__':
        args = parser_func()
        export_onnx()
    

    RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: numpy.ndarray

    opened by betterhalfwzm 5
  • Dimension issue on RAFT


    Thanks for your great work!

    Test input script:

    python tools/test.py configs/raft/raft_8x2_100k_flyingchairs_368x496.py  /ckpt/raft_8x2_100k_flyingchairs.pth --eval EPE
    

    The error message is as follows:

    [ ] 0/640, elapsed: 0s, ETA:
    Traceback (most recent call last):
      File "tools/test.py", line 178, in <module>
        main()
      File "tools/test.py", line 171, in main
        f'In {dataset_name} '
      File "root/framework/optical_flow/mmflow/mmflow/core/evaluation/evaluation.py", line 38, in online_evaluation
        model, data_loader, metric=metric, **kwargs)
      File "root/framework/optical_flow/mmflow/mmflow/core/evaluation/evaluation.py", line 68, in single_gpu_online_evaluation
        batch_results = model(test_mode=True, **data)
      File "root/miniconda3/envs/mmflow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "root/miniconda3/envs/mmflow/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 48, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "root/miniconda3/envs/mmflow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "root/framework/optical_flow/mmflow/mmflow/models/flow_estimators/base.py", line 61, in forward
        return self.forward_test(*args, **kwargs)
      File "root/framework/optical_flow/mmflow/mmflow/models/flow_estimators/raft.py", line 145, in forward_test
        feat1, feat2, h_feat, cxt_feat = self.extract_feat(imgs)
      File "root/framework/optical_flow/mmflow/mmflow/models/flow_estimators/raft.py", line 72, in extract_feat
        feat1 = self.encoder(img1)
      File "root/miniconda3/envs/mmflow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "root/framework/optical_flow/mmflow/mmflow/models/encoders/raft_encoder.py", line 293, in forward
        x = self.conv1(x)
      File "root/miniconda3/envs/mmflow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "root/miniconda3/envs/mmflow/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 443, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "root/miniconda3/envs/mmflow/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 440, in _conv_forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 5-dimensional input of size [1, 1, 6, 384, 512] instead

    Could you help me with this?

    opened by ckcraig01 5
  • ImportError: cannot import name 'Correlation' from 'mmcv.ops'


    Windows 10, CUDA 10.2, PyTorch 1.6, Python 3.7, mmcv 1.1.5

    My first try failed. Is my version of mmcv too old?

    error info: ImportError: cannot import name 'Correlation' from 'mmcv.ops'
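
    For context, Correlation is a compiled op that only ships with mmcv-full; the lite "mmcv" package, or an mmcv-full older than the 1.3.15 minimum MMFlow requires, will raise exactly this ImportError. A small diagnostic sketch (an assumption on my side, not an official check):

        # Diagnostic sketch: verify the compiled Correlation op is importable.
        import mmcv

        print('mmcv version:', mmcv.__version__)
        try:
            from mmcv.ops import Correlation  # noqa: F401
            print('Correlation op is available')
        except ImportError:
            print('Correlation op missing: install a recent mmcv-full built for '
                  'your exact CUDA/PyTorch combination')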

    opened by Z-XQ 5
  • Why does GaussianNoise in data augmentation cause gradient explosion and a large grad_norm value?


    Thanks for your error report and we appreciate it a lot.

    Checklist

    1. I have searched related issues but cannot get the expected help.
    2. I have read the FAQ documentation but cannot get the expected help.
    3. The bug has not been fixed in the latest version.

    Describe the bug A clear and concise description of what the bug is.

    Reproduction

    1. What command or script did you run?
    A placeholder for the command.
    
    2. Did you make any modifications on the code or config? Did you understand what you have modified?
    3. What dataset did you use?

    Environment

    1. Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
    2. You may add additional information that may be helpful for locating the problem, such as
      • How you installed PyTorch [e.g., pip, conda, source]
      • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

    Error traceback If applicable, paste the error traceback here.

    A placeholder for the traceback.
    

    Bug fix If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

    opened by pedroHuang123 4
  • RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation


    PyTorch: 1.11, CUDA Toolkit: 11.3.1

    The error occurs while running this command:

    python tools/train.py configs/raft/raft_8x2_50k_kitti2015_288x960.py
    

    Complete stacktrace

    /home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1646755953518/work/aten/src/ATen/native/TensorShape.cpp:2228.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    /home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/autograd/__init__.py:175: UserWarning: Error detected in ReluBackward0. Traceback of forward call that caused the error:
      File "tools/train.py", line 209, in <module>
        main()
      File "tools/train.py", line 205, in main
        meta=meta)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/apis/train.py", line 238, in train_model
        runner.run(data_loaders, cfg.workflow)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 134, in run
        iter_runner(iter_loaders[i], **kwargs)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 61, in train
        outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 75, in train_step
        return self.module.train_step(*inputs[0], **kwargs[0])
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/models/flow_estimators/base.py", line 90, in train_step
        losses = self(**data, test_mode=False)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/models/flow_estimators/base.py", line 59, in forward
        return self.forward_train(*args, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/models/flow_estimators/raft.py", line 107, in forward_train
        feat1, feat2, h_feat, cxt_feat = self.extract_feat(imgs)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/models/flow_estimators/raft.py", line 74, in extract_feat
        cxt_feat = self.context(img1)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/models/encoders/raft_encoder.py", line 296, in forward
        x = res_layer(x)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/container.py", line 141, in forward
        input = module(input)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/models/utils/res_layer.py", line 88, in forward
        out = _inner_forward(x)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/models/utils/res_layer.py", line 76, in _inner_forward
        out = self.relu(out)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 98, in forward
        return F.relu(input, inplace=self.inplace)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/functional.py", line 1442, in relu
        result = torch.relu(input)
     (Triggered internally at  /opt/conda/conda-bld/pytorch_1646755953518/work/torch/csrc/autograd/python_anomaly_mode.cpp:104.)
      allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
    Traceback (most recent call last):
      File "tools/train.py", line 209, in <module>
        main()
      File "tools/train.py", line 205, in main
        meta=meta)
      File "/media/exthdd/laizeqiang/lzq/projects/misc/mmflow/mmflow/apis/train.py", line 238, in train_model
        runner.run(data_loaders, cfg.workflow)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 134, in run
        iter_runner(iter_loaders[i], **kwargs)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 67, in train
        self.call_hook('after_train_iter')
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 309, in call_hook
        getattr(hook, fn_name)(self)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py", line 56, in after_train_iter
        runner.outputs['loss'].backward()
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/_tensor.py", line 363, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/laizeqiang/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward
        allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 128, 36, 120]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
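
    One way to localize such errors, sketched below as a generic PyTorch debugging aid rather than an MMFlow API, is to switch off in-place ReLU so autograd keeps the activations it needs; combined with torch.autograd.set_detect_anomaly(True) (which the traceback above shows was already active) this usually points at the offending module.

        import torch.nn as nn

        def disable_inplace_relu(module: nn.Module) -> None:
            """Debugging aid (not an MMFlow API): turn off in-place ReLU so the
            autograd graph keeps pre-modification activations, which helps
            isolate 'modified by an inplace operation' errors."""
            for m in module.modules():
                if isinstance(m, nn.ReLU):
                    m.inplace = False

    Calling disable_inplace_relu(model) right after the estimator is built would show whether the in-place activation in the encoder's residual layers is involved.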
    
    opened by Zeqiang-Lai 4
  • Random seed matters?


    I cannot get the desired results (EPE = 0.78 on FlyingChairs). I checked the provided log file and found that the seed is "null"; what does the null value mean? Doesn't the function "init_random_seed" generate a seed?

    opened by VingtDylan 4
  • How to understand "RepeatDataset"?


    "This method can reduce the data loading time between epochs." What does that mean? If my dataset is small and its loading time between epochs is long, is a bigger "times" argument always better?
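
    For context, RepeatDataset is the standard OpenMMLab dataset wrapper; a minimal sketch of its config form is below (the dataset type and times value are placeholders for illustration). Repeating a small dataset means one "epoch" of the runner traverses it several times, so per-epoch dataloader setup happens less often; a larger times is not inherently better, it just trades epoch granularity for less loading overhead.

        # Hypothetical sketch of wrapping a small training set with RepeatDataset.
        # `base_train_dataset` stands in for whatever train dataset dict the
        # config already defines; only the wrapper structure is the point here.
        base_train_dataset = dict(type='FlyingChairs')  # placeholder dataset config

        data = dict(
            train=dict(
                type='RepeatDataset',
                times=10,  # illustrative: traverse the wrapped dataset 10x per epoch
                dataset=base_train_dataset))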

    opened by pedroHuang123 3
  • When we want to finetune on mixed.pth, should we modify the training schedule? Should max_lr be set to 1e-5 or 1.25e-4?


    Thanks for your error report and we appreciate it a lot.

    Checklist

    1. I have searched related issues but cannot get the expected help.
    2. I have read the FAQ documentation but cannot get the expected help.
    3. The bug has not been fixed in the latest version.

    Describe the bug A clear and concise description of what the bug is.

    Reproduction

    1. What command or script did you run?
    A placeholder for the command.
    
    2. Did you make any modifications on the code or config? Did you understand what you have modified?
    3. What dataset did you use?

    Environment

    1. Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
    2. You may add additional information that may be helpful for locating the problem, such as
      • How you installed PyTorch [e.g., pip, conda, source]
      • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

    Error traceback If applicable, paste the error traceback here.

    A placeholder for the traceback.
    

    Bug fix If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

    opened by pedroHuang123 3
  • Update 1_inference.md


    Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and easier to get feedback on. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

    Motivation

    Please describe the motivation of this PR and the goal you want to achieve through this PR.

    Modification

    Please briefly describe what modification is made in this PR.

    BC-breaking (Optional)

    Does the modification introduce changes that break the backward-compatibility of the downstream repositories? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

    Use cases (Optional)

    If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

    Checklist

    Before PR:

    • [ ] I have read and followed the workflow indicated in the CONTRIBUTING.md to create this PR.
    • [ ] Pre-commit or linting tools indicated in CONTRIBUTING.md are used to fix the potential lint issues.
    • [ ] Bug fixes are covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [ ] New functionalities are covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [ ] The documentation has been modified accordingly, including docstring or example tutorials.

    After PR:

    • [ ] If the modification has potential influence on downstream or other related projects, this PR should be tested with some of those projects, like MMDet or MMCls.
    • [ ] CLA has been signed and all committers have signed the CLA in this PR.
    opened by forkbabu 1
  • add autoFlow and crowdFlow  to mmflow dataset


    Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and easier to get feedback on. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

    Motivation

    Add

    Modification

    Please briefly describe what modification is made in this PR.

    BC-breaking (Optional)

    Does the modification introduce changes that break the backward-compatibility of the downstream repositories? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

    Use cases (Optional)

    If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

    Checklist

    Before PR:

    • [x] I have read and followed the workflow indicated in the CONTRIBUTING.md to create this PR.
    • [x] Pre-commit or linting tools indicated in CONTRIBUTING.md are used to fix the potential lint issues.
    • [x] Bug fixes are covered by unit tests, the case that causes the bug should be added in the unit tests.
    • [x] New functionalities are covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    • [x] The documentation has been modified accordingly, including docstring or example tutorials.

    After PR:

    • [x] If the modification has potential influence on downstream or other related projects, this PR should be tested with some of those projects, like MMDet or MMCls.
    • [x] CLA has been signed and all committers have signed the CLA in this PR.
    opened by pedroHuang123 1
  • mmcv 1.7.0 not supported


    I first installed mmflow 0.5.1 and then installed mmcv using the command mim install mmcv, but it reported:

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    mmflow 0.5.1 requires mmcv-full<1.7.0,>=1.3.15, but you have mmcv-full 1.7.0 which is incompatible.
    

    And I tried to run the demo

       python demo/image_demo.py demo/frame_0001.png demo/frame_0002.png \
           configs/pwcnet/pwcnet_ft_4x1_300k_sintel_final_384x768.py \
           checkpoints/pwcnet_ft_4x1_300k_sintel_final_384x768.pth results
    

    It reported ModuleNotFoundError: No module named 'mmcv'. I checked the installation using pip list and it showed that mmcv-full is installed. I'm using cuda-11.3 and pytorch-1.13.0.

    opened by JamesYang-7 1
  • When I want to finetune based on the pre-trained RAFT mixed model, how do I determine the ratio of old and new data?


    1. When I want to finetune on my datasets based on the mixed model, can I use the following fine-tuning hyper-parameters?

    optimizer

    optimizer = dict(type='Adam', lr=1e-5, weight_decay=0.0004, betas=(0.9, 0.999))
    optimizer_config = dict(grad_clip=None)

    learning policy

    lr_config = dict(
        policy='step',
        by_epoch=False,
        gamma=0.5,
        step=[
            45000, 65000, 85000, 95000, 97500, 100000, 110000, 120000,
            130000, 140000
        ])
    runner = dict(type='IterBasedRunner', max_iters=150000)
    checkpoint_config = dict(by_epoch=False, interval=10000)
    evaluation = dict(interval=10000, metric='EPE')

    These parameters are from https://github.com/open-mmlab/mmflow/blob/master/docs/en/tutorials/2_finetune.md

    2. We know the pre-trained mixed model is finetuned on the mixed datasets, including FlyingChairs, FlyingThings3D, Sintel, KITTI2015, and HD1K. When I use the model on my datasets to obtain better performance, even if I include some old data (FlyingThings3D, Sintel, HD1K) in the finetuning, the results show that the EPE on my dataset decreases with iterations while it increases on the old datasets (Sintel, FlyingThings3D). So when I finetune the pre-trained model, should I use the old datasets for training? If yes, how do I determine the ratio of old and new data?

    3. When you train the mixed model, why do you only use Sintel final and clean as the validation datasets? Why not test the training effect on other datasets like HD1K, KITTI2015, and FlyingThings3D?

    opened by pedroHuang123 0
  • The default is grad_clip=None, so when should we set optimizer_config=dict(grad_clip=dict(max_norm=1.0))?


    Thanks for your error report and we appreciate it a lot.

    Checklist

    1. I have searched related issues but cannot get the expected help.
    2. I have read the FAQ documentation but cannot get the expected help.
    3. The bug has not been fixed in the latest version.

    Describe the bug A clear and concise description of what the bug is.

    Reproduction

    1. What command or script did you run?
    A placeholder for the command.
    
    2. Did you make any modifications on the code or config? Did you understand what you have modified?
    3. What dataset did you use?

    Environment

    1. Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
    2. You may add additional information that may be helpful for locating the problem, such as
      • How you installed PyTorch [e.g., pip, conda, source]
      • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

    Error traceback If applicable, paste the error traceback here.

    A placeholder for the traceback.
    

    Bug fix If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

    opened by pedroHuang123 19
  • [WIP] Add Flow1D Algorithm


    Motivation

    Add Flow1D algorithm. This PR is based on #213 and the official code of Flow1D. Thanks for their excellent work!

    Modification

    1. configs/_base_/models/flow1d.py
    2. configs/flow1d/flow1d_8xb2_100k_flyingchairs-368x496.py
    3. mmflow/datasets/transforms/transforms.py (fix a bug)
    4. mmflow/models/decoders/__init__.py
    5. mmflow/models/decoders/flow1d_decoder.py
    6. mmflow/models/flow_estimators/__init__.py
    7. mmflow/models/flow_estimators/flow1d.py
    8. mmflow/models/utils/__init__.py
    9. mmflow/models/utils/attention1d.py
    10. mmflow/models/utils/corr_lookup.py
    11. mmflow/models/utils/correlation1d.py
    12. tests/test_models/test_decoders/test_flow1d_decoder.py
    13. tests/test_models/test_flow_estimators.py
    14. tests/test_models/test_utils/test_corr_lookup.py

    TODO

    Reproduce the metrics of the original paper.

    opened by Zachary-66 1
Releases(v1.0.0rc0)
  • v1.0.0rc0(Aug 31, 2022)

    We are excited to announce the release of MMFlow 1.0.0rc0. MMFlow 1.0.0rc0 is a part of the OpenMMLab 2.0 projects. Built upon the new training engine, MMFlow 1.x unifies the interfaces of dataset, models, evaluation, and visualization with faster training and testing speed.

    Highlights

    1. New engines MMFlow 1.x is based on MMEngine, which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entrypoints of high-level interfaces.

    2. Unified interfaces As a part of the OpenMMLab 2.0 projects, MMFlow 1.x unifies and refactors the interfaces and internal logics of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logics to allow the emergence of multi-task/modality algorithms.

    3. Faster speed We optimize the training and inference speed for common models.

    4. More documentation and tutorials We add a bunch of documentation and tutorials to help users get started more smoothly. Read it here.

    Breaking Changes

    We briefly list the major breaking changes here. We will update the migration guide to provide complete details and migration instructions.

    Training and testing

    • MMFlow 1.x runs on PyTorch>=1.6. We have deprecated the support of PyTorch 1.5 to embrace the mixed precision training and other new features since PyTorch 1.6. Some models can still run on PyTorch 1.5, but the full functionality of MMFlow 1.x is not guaranteed.

    • MMFlow 1.x uses Runner in MMEngine rather than that in MMCV. The new Runner implements and unifies the building logic of dataset, model, evaluation, and visualization. Therefore, MMFlow 1.x no longer maintains the building logic of those modules in mmflow.train.apis and tools/train.py. That code has been migrated into MMEngine. Please refer to the migration guide of Runner in MMEngine for more details.

    • The Runner in MMEngine also supports testing and validation. The testing scripts are also simplified, with logic similar to the training scripts for building the runner.

    • The execution points of hooks in the new Runner have been enriched to allow more flexible customization. Please refer to the migration guide of Hook in MMEngine for more details.

    • Learning rate and momentum scheduling has been migrated from Hook to Parameter Scheduler in MMEngine. Please refer to the migration guide of Parameter Scheduler in MMEngine for more details.

    Configs

    Components

    • Dataset
    • Data Transforms
    • Model
    • Evaluation
    • Visualization

    Improvements

    • The training speed of models that use common training strategies, such as synchronized batch normalization and mixed precision training, is improved.

    • Support mixed precision training of all the models. However, some models may get NaN results due to numerical issues. We will update the documentation and list the mixed precision training results (accuracy or failure) of those models.

    Ongoing changes

    1. Inference interfaces: a unified inference interface will be supported in the future to ease the use of released models.

    2. Interfaces of useful tools that can be used in notebooks: more useful tools implemented in the tools directory will get Python interfaces so that they can be used through notebooks and in downstream libraries.

    3. Documentation: we will add more design docs, tutorials, and migration guidance so that the community can dive deep into our new design, participate in future development, and smoothly migrate downstream libraries to MMFlow 1.x.

  • v0.5.1(Jul 29, 2022)

    What's Changed

    Improvements

    • Set the maximum version of MMCV to 1.7.0 (167)
    • Update the qq_group_qrcode image in resources (166)

    New Contributors

    • @Weepingchestnut made their first contribution in https://github.com/open-mmlab/mmflow/pull/166

    Full Changelog: https://github.com/open-mmlab/mmflow/compare/v0.5.0...v0.5.1

  • v0.5.0(Jul 1, 2022)

    What's Changed

    Highlight

    • Add config and pre-trained model for FlowNet2 on FlyingChairs (163)

    Documentation

    • Add a template for PR (160)
    • Fix config file error in metafile (151)
    • Fix broken URL in metafile (157)
    • Fix broken URLs for issue reporting in README (147)

    Improvements

    • Add mim to extras_require in setup.py (154)
    • Fix mdformat version to support python3.6 and remove ruby install (153)
    • Add test_mim.yml for testing commands of mim in CI (158)

    New Contributors

    • @lyq10085 made their first contribution in https://github.com/open-mmlab/mmflow/pull/151
    • @Zachary-66 made their first contribution in https://github.com/open-mmlab/mmflow/pull/157

    Full Changelog: https://github.com/open-mmlab/mmflow/compare/v0.4.2...v0.5.0

  • v0.4.2(May 31, 2022)

    What's Changed

    Bug Fixes

    • Inference bug for sparse flow map (133)
    • H and W input images must be divisible by 2**6 (136)

    Documents

    • Configure Myst-parser to parse anchor tag (129)
    • Replace markdownlint with mdformat for avoiding installing ruby (130)
    • Rewrite install and README (139, 140, 141, 144, 145)

    Full Changelog: https://github.com/open-mmlab/mmflow/compare/v0.4.1...v0.4.2

  • v0.4.1(Apr 29, 2022)

    What's Changed

    Feature

    • Loading flow annotation from file client (#116)
    • Support overall dataloader settings (#117)
    • Generate ann_file for flyingchairs (121)

    Improvements

    • Add GPG keys in CI(127)

    Bug Fixes

    • The config and weights are not corresponding in the metafile.yml (#118)
    • Replace recommonmark with myst_parser (#120)

    Documents

    • Add zh-cn doc 0_config_.md (#126)

    New Contributors

    • @HiiiXinyiii made their first contribution in https://github.com/open-mmlab/mmflow/pull/118
    • @SheffieldCao made their first contribution in https://github.com/open-mmlab/mmflow/pull/126

    Full Changelog: https://github.com/open-mmlab/mmflow/compare/v0.4.0...v0.4.1

  • v0.4.0(Apr 1, 2022)

    Highlights

    • Support occlusion estimation methods including flow forward-backward consistency, range map of the backward flow, and flow forward-backward abstract difference

    Features

    • Support three occlusion estimation methods (#106)
    • Support different seeds on different ranks when distributed training (#104)

    Improvements

    • Revise collect_env for Windows platform (#112)
    • Add script and documentation for multi-machine distributed training (#107)
    • Synchronize random seed for the distributed sampler (#110)
  • v0.3.0(Mar 4, 2022)

    Highlights

    • Officially support CPU train/inference
    • Officially support model inference in windows platform
    • Add census loss, SSIM loss and smoothness loss
    • Update the list of files with nan in Flyingthings3d_subset dataset

    Features

    • Add census loss (#100)
    • Add smoothness loss function (#97)
    • Add SSIM loss function (#96)

    Bug Fixes

    • Update nan files in Flyingthings3d_subset (94)
    • Add pretrained pwcnet-model when training PWCNet+ (#99)
    • Fix bug in non-distributed multi-gpu training/testing (#85)
    • Fix writing flow map bug in test (#83)

    Improvements

    • Add win-ci (#92)
    • Update the installation of MMCV (#89)
    • Upgrade isort in pre-commit hook (#87)
    • Support CPU train/inference (#86)
    • Add multi-processes script (#79)
    • Deprecate the support for "python setup.py test" (#73)

    Documents

    • Fix broken URLs in GMA README (#93)
    • Fix date format in readme (#90)
    • Reorganizing OpenMMLab projects in readme (#98)
    • Fix README files of algorithms (#84)
    • Add url of OpenMMLab and platform in README (76)

    New Contributors

    • @gxiaotian made their first contribution in https://github.com/open-mmlab/mmflow/pull/90
    • @lhao0301 made their first contribution in https://github.com/open-mmlab/mmflow/pull/94

    Full Changelog

  • v0.2.0(Jan 7, 2022)

    Highlights

    • Support GMA: Learning to Estimate Hidden Motions with Global Motion Aggregation (ICCV 2021) (#32)
    • Fix the bug of wrong refine iter in RAFT, and update RAFT model checkpoint after the bug fixing (#62, #68)
    • Support resuming from the latest checkpoint automatically (#71)

    Features

    • Add scale_as_level for multi-level flow loss (#58)
    • Add scale_mode for correlation block (#56)
    • Add upsample_cfg in IRR-PWC decoder (#53)

    Bug Fixes

    • Resized input image must be divisible by 2^6 (#65)
    • Fix RAFT wrong refine iter after evaluation (#62)

    Improvements

    • Add persistent_workers=True in val_dataloader (#63)
    • Revise env_info key (#46)
    • Add digital version (#43)
    • Try to create a symbolic link on windows (#37)
    • Set a random seed when the user does not set a seed (#27)

    Refactors

    • Refactor utils in models (#50)

    Documents

    • Refactor documentation (#14)
    • Fix script bug in FlyingChairs dataset prepare (#21)
    • Fix broken links in model_zoo (#60)
    • Update metafile (#39, #41, #49)
    • Update documentation (#28, #35, #36, #47, #48, #70)
  • v0.1.0(Nov 16, 2021)

    Highlights

    • MMFlow v0.1.0 is released.

    Main Features

    • The First Unified Framework for Optical Flow: MMFlow is the first toolbox that provides a framework for unified implementation and evaluation of optical flow algorithms.

    • Flexible and Modular Design: We decompose the flow estimation framework into different components, making it much easier and more flexible to build a new model by combining different modules.

    • Plenty of Algorithms and Datasets Out of the Box: The toolbox directly supports popular and contemporary optical flow models, e.g., FlowNet, PWC-Net, and RAFT, as well as representative datasets such as FlyingChairs, FlyingThings3D, Sintel, and KITTI.
