UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss

Overview

This repository contains the TensorFlow implementation of the paper

UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss (AAAI 2018)

Simon Meister, Junhwa Hur, and Stefan Roth.

Slides

Download slides from AAAI 2018 talk.

Citation

If you find UnFlow useful in your research, please consider citing:

@inproceedings{Meister:2018:UUL,
  title  = {{UnFlow}: Unsupervised Learning of Optical Flow
            with a Bidirectional Census Loss},
  author = {Simon Meister and Junhwa Hur and Stefan Roth},
  address = {New Orleans, Louisiana},
  booktitle = {AAAI},
  month = feb,
  year = {2018}
}

License

UnFlow is released under the MIT License (refer to the LICENSE file for details).

Unofficial PyTorch code

There is an unofficial third-party PyTorch implementation by Simon Niklaus.

Contents

  1. Introduction
  2. Usage
  3. Replicating our Models
  4. Navigating the Code

Introduction

Our paper describes a method to train end-to-end deep networks for dense optical flow without the need for ground truth optical flow.

NOTE (January 2020): There may be some hiccups with more recent versions of tensorflow due to the unstable custom op compilation API used for the correlation operation. If you experience these issues, please try one of the older tensorflow versions (see list of releases, e.g. 1.2 or 1.7) until I find time to fix these issues. I'm currently too busy with new projects to upgrade the code, and would be happy about any contributions.

This implementation supports all training and evaluation styles described in the paper, including unsupervised training on SYNTHIA, KITTI raw and Cityscapes as well as supervised fine-tuning on KITTI 2012 / 2015.

The supported network architectures are FlowNetS and FlowNetC, as well as stacked variants of these networks as introduced in FlowNet 2.0.

All datasets required for training and evaluation are downloaded on demand by the code (except Cityscapes, which has to be downloaded manually if needed). Please ensure that the data directory specified in the configuration file has at least 150 GB of free space (when using SYNTHIA and KITTI only).
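
For intuition about the unsupervised proxy loss: each image is compared against its counterpart warped by the estimated flow, and pixels that fail a forward-backward consistency check are treated as occluded and masked out of the data term. The sketch below is a simplified NumPy illustration of that check with hypothetical names and constants; the actual TensorFlow implementation lives in src/e2eflow/core/losses.py and may differ in detail.

    import numpy as np

    def fb_occlusion_mask(flow_fw, flow_bw_warped, alpha1=0.01, alpha2=0.5):
        # flow_fw:        forward flow (frame 1 -> 2), shape (H, W, 2)
        # flow_bw_warped: backward flow warped into frame 1, shape (H, W, 2)
        # Returns a float32 mask that is 0 where the flows disagree (likely occlusion).
        flow_diff = flow_fw + flow_bw_warped                      # ~0 for consistent pixels
        mag_sq = np.sum(flow_fw ** 2 + flow_bw_warped ** 2, -1)   # combined squared magnitude
        occluded = np.sum(flow_diff ** 2, -1) > alpha1 * mag_sq + alpha2
        return (~occluded).astype(np.float32)

    # A perfectly consistent pair of flow fields yields an all-ones (non-occluded) mask.
    fw = np.ones((4, 4, 2), dtype=np.float32)
    bw = -np.ones((4, 4, 2), dtype=np.float32)
    print(fb_occlusion_mask(fw, bw))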

Usage

Please refer to the configuration file template (config_template/config.ini) for a detailed description of the different operating modes.

Hardware requirements

  • at least one NVIDIA GPU (multi-GPU training is supported). We used a Titan X Pascal with 12 GB of memory.
  • for best performance, at least 8 GB of GPU memory is recommended; for the stacked variants, 11–12 GB may be best
  • to run the code with less GPU memory, take a look at https://github.com/openai/gradient-checkpointing. One user reported successfully running the full CSS model on a GTX 960 with 4 GB of memory; note that training will take longer in that case.

Software requirements

  • python 3
  • gcc4
  • RAR backend tool for rarfile (see https://pypi.python.org/pypi/rarfile/)
  • python packages: matplotlib, pypng, rarfile, pillow, and tensorflow-gpu (at least version 1.7)
  • CUDA and cuDNN. You should use the installer downloaded from the NVIDIA website and install to /usr/local/cuda. Please make sure that the versions you install are compatible with the version of tensorflow-gpu you are using (see e.g. https://github.com/tensorflow/tensorflow/releases).

Prepare environment

  • copy config_template/config.ini to ./ and modify settings in the [dir], [run] and [compile] sections for your environment (see comments in the file).

Run & evaluate experiments

  • adapt settings in ./config.ini for your experiment
  • cd src
  • train with python run.py --ex my_experiment. Evaluation is run during training as specified in config.ini, but no visualizations are saved.
  • evaluate (multiple) experiments with python eval_gui.py --ex experiment_name_1[, experiment_name_2 [...]] and visually compare results side by side.

You can use the --help flag with run.py or eval_gui.py to view all available flags.

View tensorboard logs

  • view logs for all experiments with tensorboard --logdir=<log_dir>/ex

Pre-trained models

We provide checkpoints for the C_SYNTHIA, CS_SYNTHIA, CSS_SYNTHIA, C, CS, CSS and CSS_ft models (see "Replicating our models" for a description). To use them,

  • download this file and extract the contents to <log_dir>/ex/.

Now, you can evaluate and compare different models, e.g.

  • python eval_gui.py --ex C_SYNTHIA,C,CSS_ft.

Replicating our models

In the following, each list item gives an experiment name and parameters to set in ./config.ini. For each experiment, first modify the configuration parameters as specified, and then run python run.py --ex experiment_name.

First, create a series of experiments for the models pre-trained on SYNTHIA:

  • C_SYNTHIA: dataset = synthia, flownet = C,
  • CS_SYNTHIA: dataset = synthia, flownet = CS, finetune = C_SYNTHIA,
  • CSS_SYNTHIA: dataset = synthia, flownet = CSS, finetune = C_SYNTHIA,CS_SYNTHIA.

Next, create a series of experiments for the models trained on KITTI raw:

  • C: dataset = kitti, flownet = C, finetune = C_SYNTHIA,
  • CS: dataset = kitti, flownet = CS, finetune = C,CS_SYNTHIA,
  • CSS: dataset = kitti, flownet = CSS, finetune = C,CS,CSS_SYNTHIA.

Then, train the final fine-tuned model on KITTI 2012 / 2015:

  • CSS_ft: dataset = kitti_ft, flownet = CSS, finetune = C,CS,CSS, train_all = True. Please monitor the eval logs and stop the training as soon as the validation error starts to increase (for CSS, it should be at about 70K iterations).

If you want to train on Cityscapes:

  • C_Cityscapes: dataset = cityscapes, flownet = C, finetune = C_SYNTHIA.

Note that all models but CSS_ft were trained without ground truth optical flow, using our unsupervised proxy loss only.

Navigating the code

The core implementation of our method resides in src/e2eflow/core. The key files are:

  • losses.py: Proxy losses for unsupervised training,
  • flownet.py: FlowNet architectures with support for stacking,
  • image_warp.py: Backward-warping via differentiable bilinear image sampling (see the sketch after this list),
  • supervised.py: Supervised loss and network computation,
  • unsupervised.py: Unsupervised (bi-directional) multi-resolution network and loss computation,
  • train.py: Implementation of training with online evaluation,
  • augment.py: Geometric and photometric data augmentations.
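
The backward-warping in image_warp.py can be illustrated with a minimal, single-image NumPy sketch of bilinear sampling at flow-displaced coordinates (assuming flow[..., 0] is the horizontal and flow[..., 1] the vertical displacement; the repository's batched TensorFlow version differs in details such as border handling and gradients):

    import numpy as np

    def backward_warp(image, flow):
        # image: (H, W) array; flow: (H, W, 2) array of (dx, dy) displacements.
        H, W = image.shape
        ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
        x = np.clip(xs + flow[..., 0], 0, W - 1)   # sample image at p + flow(p)
        y = np.clip(ys + flow[..., 1], 0, H - 1)
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
        wx, wy = x - x0, y - y0
        top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
        bottom = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
        return (1 - wy) * top + wy * bottom

    img = np.arange(16, dtype=np.float32).reshape(4, 4)
    flow = np.zeros((4, 4, 2), dtype=np.float32)
    flow[..., 0] = 1.0                              # shift sampling one pixel to the right
    print(backward_warp(img, flow))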
Comments
  • How to obtain the .so files?

    When I try to run the code according to the README instructions, I get an error that certain .so files are not found. Indeed, the necessary .cc and .h files are in the ops directory, but no .so or .o files.

    How can I obtain them? Or are they supposed to be generated somehow at first?

    (Maybe I have a more general understanding problem: what does ops actually stand for?)

    opened by clauslang 13
  • occ loss implementation problem

    Hi,

    In losses.py, the forward-backward occlusion loss is implemented as:

    ...
        flow_diff_bw = flow_bw + flow_fw_warped
    ...
        fb_occ_bw = tf.cast(length_sq(flow_diff_bw) > occ_thresh, tf.float32)
    ...
            mask_bw *= (1 - fb_occ_bw)
    ...
        occ_bw = 1 - mask_bw
    ...
        losses['occ'] = (charbonnier_loss(occ_fw) +
                         charbonnier_loss(occ_bw))
    ...
    

    However, gradients cannot backpropagate through the tf.greater operation:

        test_g = tf.gradients(occ_bw, flow_diff_bw)
        print("occ gradient: %s" % test_g)
    

    would output [None].

    Does that mean the occ loss is not working?

    Correct me if anything is wrong.

    Thanks
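
    For reference, the observation can be reproduced in isolation (a standalone TensorFlow 1.x snippet, not code from this repository): the comparison and the boolean cast have no registered gradient, so tf.gradients returns [None].

        import tensorflow as tf  # TensorFlow 1.x, graph mode

        flow_diff = tf.constant([[0.5, 2.0, 0.1]])
        occ = tf.cast(flow_diff > 1.0, tf.float32)  # hard threshold: not differentiable
        print(tf.gradients(occ, flow_diff))         # -> [None], as observed above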

    opened by immars 12
  • Please help, error when running python3 run.py --ex test

    I have one GPU and have configured config.ini. What have I done wrong here?

    /usr/lib/python3/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/lib/bfloat16/bfloat16.h(466): error: no suitable constructor exists to convert from "float" to "tensorflow::bfloat16"

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/lib/bfloat16/bfloat16.h(467): error: no suitable constructor exists to convert from "float" to "tensorflow::bfloat16"

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/lib/bfloat16/bfloat16.h(468): error: no suitable constructor exists to convert from "float" to "tensorflow::bfloat16"

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(57): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(304): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(305): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(108): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::lgamma_impl::run(Scalar) [with Scalar=double]" (1517): here instantiation of "Eigen::internal::lgamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::lgamma(const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(32): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1015): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::polygamma_impl::run(Scalar, Scalar) [with Scalar=float]" (1535): here instantiation of "Eigen::internal::polygamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::polygamma(const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(67): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1015): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::polygamma_impl::run(Scalar, Scalar) [with Scalar=double]" (1535): here instantiation of "Eigen::internal::polygamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::polygamma(const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(74): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(342): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::erf_impl::run(Scalar) [with Scalar=double]" (1541): here instantiation of "Eigen::internal::erf_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::erf(const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(87): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(375): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::erfc_impl::run(Scalar) [with Scalar=float]" (1547): here instantiation of "Eigen::internal::erfc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::erfc(const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(94): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(375): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::erfc_impl::run(Scalar) [with Scalar=double]" (1547): here instantiation of "Eigen::internal::erfc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::erfc(const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(101): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(649): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igamma_impl::run(Scalar, Scalar) [with Scalar=float]" (1553): here instantiation of "Eigen::internal::igamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igamma(const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(110): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(649): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igamma_impl::run(Scalar, Scalar) [with Scalar=double]" (1553): here instantiation of "Eigen::internal::igamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igamma(const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(120): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(461): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igammac_impl::run(Scalar, Scalar) [with Scalar=float]" (1559): here instantiation of "Eigen::internal::igammac_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igammac(const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(128): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(461): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igammac_impl::run(Scalar, Scalar) [with Scalar=double]" (1559): here instantiation of "Eigen::internal::igammac_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igammac(const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(138): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1064): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::betainc_impl::run(Scalar, Scalar, Scalar) [with Scalar=float]" (1565): here instantiation of "Eigen::internal::betainc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::betainc(const Scalar &, const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(146): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1064): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::betainc_impl::run(Scalar, Scalar, Scalar) [with Scalar=double]" (1565): here instantiation of "Eigen::internal::betainc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::betainc(const Scalar &, const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(156): here

    15 errors detected in the compilation of "/tmp/tmpxft_000013b5_00000000-6_backward_warp_op.cu.cpp1.ii". Traceback (most recent call last): File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/ops.py", line 61, in op_lib = tf.load_op_library(lib_path) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/load_library.py", line 56, in load_op_library lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: ./backward_warp_op.so: cannot open shared object file: No such file or directory

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "run.py", line 7, in from e2eflow.core.train import Trainer File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/core/train.py", line 12, in from ..ops import forward_warp File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/ops.py", line 63, in compile(n) File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/ops.py", line 45, in compile subprocess.check_output(nvcc_cmd, shell=True) File "/usr/lib/python3.6/subprocess.py", line 336, in check_output **kwargs).stdout File "/usr/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command 'nvcc -std=c++11 -c -o backward_warp_op.cu.o backward_warp_op.cu.cc -I/usr/local/lib/python3.6/dist-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0 -L/usr/local/lib/python3.6/dist-packages/tensorflow -ltensorflow_framework -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I /usr/local --expt-relaxed-constexpr' returned non-zero exit status 1.

    opened by hbzhang 11
  • No OpKernel was registered to support Op 'Correlation' with these attrs.

    Hi,

    Thank you for your work. I am facing a problem which I cannot fully understand. It seems to be related to "correlation_op.cc". Could you help me with that? Thanks. :)

    My settings are: Python 3.5 (with Anaconda), CUDA 8.0, tensorflow-gpu 1.10.

    I run the following command and set the corresponding "gpu_list=0" in "config.ini": CUDA_VISIBLE_DEVICES=0 python3 run.py --ex my_experiment

    And I got the following errors: Traceback (most recent call last): File "/home/runzeli/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1278, in _do_call return fn(*args) File "/home/runzeli/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1261, in _run_fn self._extend_graph() File "/home/runzeli/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1295, in _extend_graph tf_session.ExtendSession(self._session) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'Correlation' with these attrs. Registered devices: [CPU,GPU], Registered kernels:

     [[Node: flownet_c/Correlation = Correlation[kernel_size=1, max_displacement=20, pad=20, stride_1=1, stride_2=2, _device="/device:GPU:3"](flownet_c_features/conv3/leaky_relu/Maximum, flownet_c_features/conv3_1/leaky_relu/Maximum)]]
    
    opened by bragilee 10
  • How flow vectors are stored and why do we need to do addition in this line of forward warping?

    Hi @simonmeister ,

    I was reading your forward_warp code and couldn't understand why you are doing addition in lines 29 and 30 instead of subtraction.

    https://github.com/simonmeister/UnFlow/blob/ddb4bd32f762168ed38849f736b6a7d2094b0042/ops/forward_warp_op.cu.cc#L29

    If you look at tensorflow's implementation of dense_image_warp, you can see that they are doing subtraction when it comes to finding query points for interpolation step.

    https://github.com/tensorflow/tensorflow/blob/6612da89516247503f03ef76e974b51a434fb52e/tensorflow/contrib/image/python/ops/dense_image_warp.py#L210

    I guess it all depends on how flow vectors are stored in flow matrices. In the following picture, I have shown two different ways of storing a flow vector; which one do you think is the correct one, Flow field 1 or Flow field 2? (I mean, which one is standard and used by benchmark datasets like Middlebury and Sintel?)

    [In this simple example, one black dot moves from location (2,6) in image one to (5,10) in image two, so the flow vector for that point is (3,4).]

    [Image: FlowExplained — the two candidate flow-field conventions]

    If you make it clear which one you are assuming, I can try to see why you are doing addition in those lines.
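
    For readers following along, here is a tiny NumPy illustration of the convention used by Middlebury-style flow files, where the value stored at a first-frame pixel is its displacement into the second frame (this only restates the example above and does not by itself settle the addition-vs-subtraction question for the forward-warp kernel):

        import numpy as np

        # Flow stored at the first-frame pixel, pointing into the second frame.
        flow_fw = np.zeros((12, 12, 2), dtype=np.float32)
        x1, y1 = 2, 6             # the dot's location in image one
        flow_fw[y1, x1] = (3, 4)  # it appears at (5, 10) in image two

        # Backward-warping image two into frame one samples image two at p + flow(p).
        dx, dy = flow_fw[y1, x1]
        print('sample image two at', (x1 + dx, y1 + dy))  # (5.0, 10.0)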

    opened by alisaaalehi 7
  • Do you write UnFlow using PyTorch?

    Hi, I like your paper very much. I would like to use PyTorch to build my model, but I can only find the TensorFlow version. If I want to use this loss function in my PyTorch model, do I have to write the loss function myself?

    Thank you in advance

    opened by hzk7287 6
  • How to do BP for occlusion mask penalty?

    Hi @simonmeister, thanks for your nice work and repo. I've read your paper and there is one thing that I do not understand: in equation (2), the occlusion mask o_x is penalized with weight lambda_p, but in equation (1) this mask is calculated by comparison. How is backpropagation done for this penalty term? [Images: equations (1) and (2) from the paper]

    Thanks~

    wontfix 
    opened by EthanZhangYi 6
  • How many iterations does FlownetC need?

    I have trained for 130,400 iterations with batch_size 8 on the Flying Chairs dataset. The predicted flows on the test set are still not good. Should I continue, or switch to FlowNetCS?

    opened by dvorak0 6
  • Doesn't work if cuda is not installed in usr/local

    If CUDA is not installed in /usr/local, the code does not run correctly. This is due to lines 42 and 46 in ops.py.

    Note also that lines 37-40 are not used in ops.py, and therefore neither is line 17 in config.ini.

    My simple fix to this was to replace lines 37 to 46 in ops.py with the following:

        out, err = subprocess.Popen(['which', 'nvcc'], stdout=subprocess.PIPE).communicate()
        cuda_dir = out.decode().split('/cuda')[0]
    
        nvcc_cmd = "nvcc -std=c++11 -c -o {} {} {} -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I " + cuda_dir + " --expt-relaxed-constexpr"
        nvcc_cmd = nvcc_cmd.format(" ".join([fn_cu_o, fn_cu_cc]),
                                   tf_inc, tf_lib)
        subprocess.check_output(nvcc_cmd, shell=True)
        gcc_cmd = "{} -std=c++11 -shared -o {} {} -fPIC -L " + cuda_dir + "/cuda/lib64 -lcudart {} -O2 -D GOOGLE_CUDA=1"
    

    I also removed line 17 from config.ini

    Finally, I know this is a custom config file, but I would also suggest perhaps changing line 15 in config.ini to

    g++ = g++

    I have tested on 3 different machines, all with different versions of Linux, TensorFlow, and CUDA. With these small changes, your code runs immediately after cloning this repo on all of them (after copying over the config.ini file, of course).

    Just a few small suggestions to make things as out-the-box as possible! :)

    opened by djl11 5
  • Check failed: dim.IsSet() Internal error: Got nullptr for Dimension in downsample

    Hello, thanks again for your work!

    I have a question. In unsupervised.py I have im1_s, im2_s with shapes (2, 320, 1152, ?). After the first downsample(im1_s, 4) the shapes become (2, 80, 288, ?), and then in each iteration when computing the loss (https://github.com/simonmeister/UnFlow/blob/master/src/e2eflow/core/unsupervised.py#L116)

    I call downsample(im1_s, 2) and the shapes halve. Eventually I reach shape (2, 5, 18, ?), and there is an error:

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/tensorflow/core/framework/shape_inference.h:625] Check failed: dim.IsSet() Internal error: Got nullptr for Dimension . Aborted (core dumped)

    What can cause such a problem and how can I solve it?

    opened by lenazherdeva 5
  • When finetuning, previous networks are kept fixed?

    Hi, Simon! Thanks for your wonderful code! I have some questions; please give me some advice. When fine-tuning, for example using the UnFlowC experiment for the first network and UnFlowCS for the second network when training UnFlowCSS, you said that only the final network is trained and any previous networks are kept fixed. But I can't figure out how you keep the previous networks fixed in the code. Are there any settings that make sure the previous networks stay fixed? Please give me some clues, thank you very much!
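
    For illustration only, one common way to keep earlier sub-networks fixed in TensorFlow 1.x is to pass only the final network's variables to the optimizer via var_list. The scope names below are made up; whether this repository uses exactly this mechanism is best checked in src/e2eflow/core/train.py and flownet.py.

        import tensorflow as tf  # TensorFlow 1.x

        # Toy graph: 'frozen_net' stands in for the earlier stacks, 'trainable_net' for the final one.
        with tf.variable_scope('frozen_net'):
            w_frozen = tf.get_variable('w', initializer=1.0)
        with tf.variable_scope('trainable_net'):
            w_train = tf.get_variable('w', initializer=1.0)

        loss = tf.square(2.0 * w_frozen + w_train - 3.0)

        # Only variables under 'trainable_net' receive gradient updates; 'frozen_net' stays fixed.
        var_list = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='trainable_net')
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=var_list)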

    opened by Blcony 5
  • KeyError: "correlation"

    Hello, I want to get the output flow images from your CSS_ft model; the code I wrote is shown below:

    import tensorflow as tf
    import os
    
    ckpt_path = "~/UnFlow/logs/CSS_ft"
    
    with tf.Session() as sess:
        saver = tf.train.import_meta_graph(os.path.join(ckpt_path, "model.ckpt.meta"))
        saver.restore(sess, tf.train.latest_checkpoint(ckpt_path))
        print("finished!")
    

    When I execute the code, it shows the following error:

    Traceback (most recent call last):
      File "export_endovis.py", line 8, in <module>
        saver = tf.train.import_meta_graph(os.path.join(ckpt_path, "model.ckpt.meta"))
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1435, in import_meta_graph
        meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1457, in _import_meta_graph_with_return_elements
        **kwargs))
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
        return_elements=return_elements)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 399, in import_graph_def
        _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 159, in _RemoveDefaultAttrs
        op_def = op_dict[node.op]
    KeyError: 'Correlation'
    
    

    please help me out, thanks!
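
    Not a definitive answer, but in general a graph that contains a custom op such as Correlation can only be imported after the corresponding op library has been registered in the process, e.g. via tf.load_op_library. A hedged sketch; the .so filename and path are assumptions (the library is produced by the on-demand compilation in src/e2eflow/ops.py):

        import os
        import tensorflow as tf

        # Register the custom op kernels before importing the graph; otherwise the
        # importer raises KeyError: 'Correlation'. The exact .so path is an assumption.
        tf.load_op_library("./ops/correlation_op.so")

        ckpt_path = os.path.expanduser("~/UnFlow/logs/CSS_ft")
        with tf.Session() as sess:
            saver = tf.train.import_meta_graph(os.path.join(ckpt_path, "model.ckpt.meta"))
            saver.restore(sess, tf.train.latest_checkpoint(ckpt_path))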

    opened by David3310273 1
  • lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: how to get rid of this error?

    lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: C:\Users\kabirfaz\Documents\MyData\02_MyProjects\05_P600_Handsigns\venv\lib\site-packages\tensorflow\contrib\tensor_forest\python\ops_tensor_forest_ops.so not found

    Python 3.6.0 tensorflow 1.10.0

    Please help

    help wanted 
    opened by Fazankabir 3
  • error : .\backward_warp_op.so not found

    Hey, I got that error when running run.py. I saw the previous issues #23 and #39 and tried to do as they said, but I still get that error.

    Ubuntu 16, CUDA 9, TensorFlow 1.7 (I also tried 1.12/1.13). I also tried on Windows 10 with CUDA 9 and got the same problem. What can I do?

    help wanted 
    opened by yoryory 2
  • Output and input node name of UnFlow.

    I want to get a frozen graph for inference using the freeze_graph function in TensorFlow. However, I cannot find the input and output node names of UnFlow. How can I get those node names? Or could you please provide them? Thanks.

    opened by shihao1104 0
  • error: constexpr function return is non-constant

    I use TF 1.13, CUDA 10.0 and g++ 4.8, and my OS is Ubuntu 16.04. However, when I run python run.py --help, some errors occur:

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/absl/strings/string_view.h(496): error: constexpr function return is non-constant

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(55): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(309): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(310): warning: integer conversion resulted in a change of sign

    1 error detected in the compilation of "/tmp/tmpxft_00003790_00000000-6_backward_warp_op.cu.cpp1.ii".

    Traceback (most recent call last): File "/home/kjq/UnFlow/src/e2eflow/ops.py", line 59, in op_lib = tf.load_op_library(lib_path) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: ./backward_warp_op.so: cannot open shared object file: No such file or directory

    During handling of the above exception, another exception occurred: Traceback (most recent call last): File "run.py", line 7, in from e2eflow.core.train import Trainer File "/home/kjq/UnFlow/src/e2eflow/core/train.py", line 11, in from . import util File "/home/kjq/UnFlow/src/e2eflow/core/util.py", line 2, in from ..ops import downsample as downsample_ops File "/home/kjq/UnFlow/src/e2eflow/ops.py", line 61, in compile(n) File "/home/kjq/UnFlow/src/e2eflow/ops.py", line 43, in compile subprocess.check_output(nvcc_cmd, shell=True) File "/usr/lib/python3.5/subprocess.py", line 626, in check_output **kwargs).stdout File "/usr/lib/python3.5/subprocess.py", line 708, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command 'nvcc -std=c++11 -c -o backward_warp_op.cu.o backward_warp_op.cu.cc -I/usr/local/lib/python3.5/dist-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0 -L/usr/local/lib/python3.5/dist-packages/tensorflow -ltensorflow_framework -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I /usr/local --expt-relaxed-constexpr' returned non-zero exit status 1

    Could you give some advice?

    opened by ooFormalinoo 1
  • "step" parameter to load frames has no effect

    I am looking at the data loading function and in particular at its step parameter:

    https://github.com/simonmeister/UnFlow/blob/master/src/e2eflow/core/input.py#L164-L165

    It appears, though, that regardless of the choice of step, only consecutive frames are used, because frames are indexed by i and i+1. I would expect the indexing to be i and i+step. Is this a bug, or do I misunderstand the intention behind this parameter?
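
    A minimal illustration of the behaviour one would expect from such a parameter (not the repository's loader, just a sketch of i / i+step pairing):

        # Pair frame i with frame i+step instead of always i+1 (illustrative only).
        frames = ['frame0', 'frame1', 'frame2', 'frame3', 'frame4', 'frame5']
        step = 2
        pairs = [(frames[i], frames[i + step]) for i in range(len(frames) - step)]
        print(pairs)  # [('frame0', 'frame2'), ('frame1', 'frame3'), ('frame2', 'frame4'), ('frame3', 'frame5')]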

    opened by eldar 0