UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss

Overview

This repository contains the TensorFlow implementation of the paper

UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss (AAAI 2018)

Simon Meister, Junhwa Hur, and Stefan Roth.

Slides

Download slides from AAAI 2018 talk.

Citation

If you find UnFlow useful in your research, please consider citing:

@inproceedings{Meister:2018:UUL,
  title  = {{UnFlow}: Unsupervised Learning of Optical Flow
            with a Bidirectional Census Loss},
  author = {Simon Meister and Junhwa Hur and Stefan Roth},
  address = {New Orleans, Louisiana},
  booktitle = {AAAI},
  month = feb,
  year = {2018}
}

License

UnFlow is released under the MIT License (refer to the LICENSE file for details).

Unofficial PyTorch code

There is an unofficial third-party PyTorch implementation by Simon Niklaus.

Contents

  1. Introduction
  2. Usage
  3. Replicating our Models
  4. Navigating the Code

Introduction

Our paper describes a method to train end-to-end deep networks for dense optical flow without the need for ground truth optical flow.

NOTE (January 2020): There may be some hiccups with more recent versions of tensorflow due to the unstable custom op compilation API used for the correlation operation. If you experience these issues, please try one of the older tensorflow versions (see list of releases, e.g. 1.2 or 1.7) until I find time to fix these issues. I'm currently too busy with new projects to upgrade the code, and would be happy about any contributions.

This implementation supports all training and evaluation styles described in the paper.

The supported network architectures are FlowNetS and FlowNetC, as well as stacked variants of these networks as introduced in FlowNet 2.0.

All datasets required for training and evaluation are downloaded on demand by the code (except Cityscapes, which has to be downloaded manually if needed). Please ensure that the data directory specified in the configuration file has at least 150 GB of free space (when using SYNTHIA and KITTI only).

Usage

Please refer to the configuration file template (config_template/config.ini) for a detailed description of the different operating modes.

Hardware requirements

  • at least one NVIDIA GPU (multi-GPU training is supported). We used the Titan X Pascal with 12GB memory.
  • for best performance, at least 8GB of GPU memory is recommended; for the stacked variants, 11-12GB may be best
  • to run the code with less GPU memory, take a look at https://github.com/openai/gradient-checkpointing. One user reported successfully running the full CSS model with a GTX 960 with 4GB memory. Note that training will take longer in that case.

Software requirements

  • python 3
  • gcc4
  • RAR backend tool for rarfile (see https://pypi.python.org/pypi/rarfile/)
  • python packages: matplotlib, pypng, rarfile, pillow, and tensorflow-gpu (at least version 1.7)
  • CUDA and CuDNN. You should use the installer downloaded from the NVIDIA website and install to /usr/local/cuda. Please make sure that the versions you install are compatible with the version of tensorflow-gpu you are using (see e.g. https://github.com/tensorflow/tensorflow/releases).

Prepare environment

  • copy config_template/config.ini to ./ and modify settings in the [dir], [run] and [compile] sections for your environment (see comments in the file).

Run & evaluate experiments

  • adapt settings in ./config.ini for your experiment
  • cd src
  • train with python run.py --ex my_experiment. Evaluation is run during training as specified in config.ini, but no visualizations are saved.
  • evaluate (multiple) experiments with python eval_gui.py --ex experiment_name_1[, experiment_name_2 [...]] and visually compare results side-by-side.

You can use the --help flag with run.py or eval_gui.py to view all available flags.

View tensorboard logs

  • view logs for all experiments with tensorboard --logdir=<log_dir>/ex

Pre-trained models

We provide checkpoints for the C_SYNTHIA, CS_SYNTHIA, CSS_SYNTHIA, C, CS, CSS and CSS_ft models (see "Replicating our models" for a description). To use them,

  • download this file and extract the contents to <log_dir>/ex/.

Now, you can evaluate and compare different models, e.g.

  • python eval_gui.py --ex C_SYNTHIA,C,CSS_ft.

Replicating our models

In the following, each list item gives an experiment name and parameters to set in ./config.ini. For each experiment, first modify the configuration parameters as specified, and then run python run.py --ex experiment_name.

First, create a series of experiments for the models pre-trained on SYNTHIA:

  • C_SYNTHIA: dataset = synthia, flownet = C,
  • CS_SYNTHIA: dataset = synthia, flownet = CS, finetune = C_SYNTHIA,
  • CSS_SYNTHIA: dataset = synthia, flownet = CSS, finetune = C_SYNTHIA,CS_SYNTHIA.

Next, create a series of experiments for the models trained on KITTI raw:

  • C: dataset = kitti, flownet = C, finetune = C_SYNTHIA,
  • CS: dataset = kitti, flownet = CS, finetune = C,CS_SYNTHIA,
  • CSS: dataset = kitti, flownet = CSS, finetune = C,CS,CSS_SYNTHIA.

Then, train the final fine-tuned model on KITTI 2012 / 2015:

  • CSS_ft: dataset = kitti_ft, flownet = CSS, finetune = C,CS,CSS, train_all = True. Please monitor the eval logs and stop the training as soon as the validation error starts to increase (for CSS, it should be at about 70K iterations).

If you want to train on Cityscapes:

  • C_Cityscapes: dataset = cityscapes, flownet = C, finetune = C_SYNTHIA.

Note that all models but CSS_ft were trained without ground truth optical flow, using our unsupervised proxy loss only.
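
For example, the CSS_ft experiment above corresponds to configuration values along the lines of the following excerpt (illustrative only; see config_template/config.ini for the section each key belongs to):

    dataset = kitti_ft
    flownet = CSS
    finetune = C,CS,CSS
    train_all = True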

Navigating the code

The core implementation of our method resides in src/e2eflow/core. The key files are:

  • losses.py: Proxy losses for unsupervised training (see the sketch after this list),
  • flownet.py: FlowNet architectures with support for stacking,
  • image_warp.py: Backward-warping via differentiable bilinear image sampling,
  • supervised.py: Supervised loss and network computation,
  • unsupervised.py: Unsupervised (bi-directional) multi-resolution network and loss computation,
  • train.py: Implementation of training with online evaluation,
  • augment.py: Geometric and photometric data augmentations.
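
To make the proxy losses more concrete, here is a minimal sketch of the forward-backward consistency check used for occlusion masking. It is written for this README in TF 1.x style and is not the repository code; the tensor shapes, helper names and example threshold values are assumptions.

    import tensorflow as tf

    def length_sq(x):
        # Squared L2 norm over the (u, v) channel of a [batch, H, W, 2] flow tensor.
        return tf.reduce_sum(tf.square(x), axis=3, keepdims=True)

    def fb_occlusion_mask(flow_fw, flow_bw_warped, alpha1=0.01, alpha2=0.5):
        # flow_bw_warped is the backward flow warped into the first frame
        # (e.g. via backward warping as in image_warp.py). A pixel counts as
        # occluded when the forward and warped backward flow do not cancel out.
        flow_diff = flow_fw + flow_bw_warped
        mag_sq = length_sq(flow_fw) + length_sq(flow_bw_warped)
        occ_thresh = alpha1 * mag_sq + alpha2
        occ_fw = tf.cast(length_sq(flow_diff) > occ_thresh, tf.float32)
        mask_fw = 1.0 - occ_fw  # 1 where the data loss is applied, 0 where it is masked out
        return mask_fw, occ_fw
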
Comments
  • How to obtain the .so files?

    When I try to run the code according to the README instructions, I get an error that certain .so files are not found. Indeed, the necessary .cc and .h files are in the ops directory, but no .so or .o files.

    How can I obtain them? Or are they supposed to be generated somehow at first?

    (Maybe I have a more general understanding problem: what does ops actually stand for?)

    opened by clauslang 13
  • occ loss implementation problem

    Hi,

    In losses.py, the forward-backward occlusion loss is implemented as:

    ...
        flow_diff_bw = flow_bw + flow_fw_warped
    ...
        fb_occ_bw = tf.cast(length_sq(flow_diff_bw) > occ_thresh, tf.float32)
    ...
            mask_bw *= (1 - fb_occ_bw)
    ...
        occ_bw = 1 - mask_bw
    ...
        losses['occ'] = (charbonnier_loss(occ_fw) +
                         charbonnier_loss(occ_bw))
    ...
    

    However, gradients cannot backpropagate through the tf.greater operation:

        test_g = tf.gradients(occ_bw, flow_diff_bw)
        print("occ gradient: %s" % test_g)
    

    would output [None].

    Does that mean the occ loss is not working?

    Correct me if anything is wrong.

    Thanks

    opened by immars 12
  • Please help, error! when run python3 run.py --ex test

    I have one GPU and I have configured config.ini. What have I done wrong here?

    /usr/lib/python3/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/lib/bfloat16/bfloat16.h(466): error: no suitable constructor exists to convert from "float" to "tensorflow::bfloat16"

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/lib/bfloat16/bfloat16.h(467): error: no suitable constructor exists to convert from "float" to "tensorflow::bfloat16"

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/lib/bfloat16/bfloat16.h(468): error: no suitable constructor exists to convert from "float" to "tensorflow::bfloat16"

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(57): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(304): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(305): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(108): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::lgamma_impl::run(Scalar) [with Scalar=double]" (1517): here instantiation of "Eigen::internal::lgamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::lgamma(const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(32): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1015): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::polygamma_impl::run(Scalar, Scalar) [with Scalar=float]" (1535): here instantiation of "Eigen::internal::polygamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::polygamma(const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(67): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1015): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::polygamma_impl::run(Scalar, Scalar) [with Scalar=double]" (1535): here instantiation of "Eigen::internal::polygamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::polygamma(const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(74): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(342): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::erf_impl::run(Scalar) [with Scalar=double]" (1541): here instantiation of "Eigen::internal::erf_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::erf(const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(87): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(375): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::erfc_impl::run(Scalar) [with Scalar=float]" (1547): here instantiation of "Eigen::internal::erfc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::erfc(const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(94): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(375): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::erfc_impl::run(Scalar) [with Scalar=double]" (1547): here instantiation of "Eigen::internal::erfc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::erfc(const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(101): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(649): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igamma_impl::run(Scalar, Scalar) [with Scalar=float]" (1553): here instantiation of "Eigen::internal::igamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igamma(const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(110): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(649): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igamma_impl::run(Scalar, Scalar) [with Scalar=double]" (1553): here instantiation of "Eigen::internal::igamma_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igamma(const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(120): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(461): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igammac_impl::run(Scalar, Scalar) [with Scalar=float]" (1559): here instantiation of "Eigen::internal::igammac_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igammac(const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(128): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(461): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::igammac_impl::run(Scalar, Scalar) [with Scalar=double]" (1559): here instantiation of "Eigen::internal::igammac_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::igammac(const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(138): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1064): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::betainc_impl::run(Scalar, Scalar, Scalar) [with Scalar=float]" (1565): here instantiation of "Eigen::internal::betainc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::betainc(const Scalar &, const Scalar &, const Scalar &) [with Scalar=float]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(146): here

    /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/SpecialFunctionsImpl.h(1064): error: static assertion failed with "THIS_TYPE_IS_NOT_SUPPORTED" detected during: instantiation of "Scalar Eigen::internal::betainc_impl::run(Scalar, Scalar, Scalar) [with Scalar=double]" (1565): here instantiation of "Eigen::internal::betainc_retval<Eigen::internal::global_math_functions_filtering_base<Scalar, void>::type>::type Eigen::numext::betainc(const Scalar &, const Scalar &, const Scalar &) [with Scalar=double]" /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/../src/SpecialFunctions/arch/CUDA/CudaSpecialFunctions.h(156): here

    15 errors detected in the compilation of "/tmp/tmpxft_000013b5_00000000-6_backward_warp_op.cu.cpp1.ii". Traceback (most recent call last): File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/ops.py", line 61, in op_lib = tf.load_op_library(lib_path) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/load_library.py", line 56, in load_op_library lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: ./backward_warp_op.so: cannot open shared object file: No such file or directory

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "run.py", line 7, in from e2eflow.core.train import Trainer File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/core/train.py", line 12, in from ..ops import forward_warp File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/ops.py", line 63, in compile(n) File "/home/cislab315/deeplearning/UnFlow/src/e2eflow/ops.py", line 45, in compile subprocess.check_output(nvcc_cmd, shell=True) File "/usr/lib/python3.6/subprocess.py", line 336, in check_output **kwargs).stdout File "/usr/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command 'nvcc -std=c++11 -c -o backward_warp_op.cu.o backward_warp_op.cu.cc -I/usr/local/lib/python3.6/dist-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0 -L/usr/local/lib/python3.6/dist-packages/tensorflow -ltensorflow_framework -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I /usr/local --expt-relaxed-constexpr' returned non-zero exit status 1.

    opened by hbzhang 11
  • No OpKernel was registered to support Op 'Correlation' with these attrs.

    Hi,

    Thank you for your work. I am facing a problem which I cannot fully understand... It seems to be related to "correlation_op.cc". Could you help me with that? Thanks. :)

    My settings are: Python 3.5 (with Anaconda), CUDA 8.0, tensorflow-gpu 1.10.

    I run the following command and set the corresponding "gpu_list=0" in "config.ini": CUDA_VISIBLE_DEVICES=0 python3 run.py --ex my_experiment

    And I got the following errors: Traceback (most recent call last): File "/home/runzeli/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1278, in _do_call return fn(*args) File "/home/runzeli/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1261, in _run_fn self._extend_graph() File "/home/runzeli/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1295, in _extend_graph tf_session.ExtendSession(self._session) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'Correlation' with these attrs. Registered devices: [CPU,GPU], Registered kernels:

     [[Node: flownet_c/Correlation = Correlation[kernel_size=1, max_displacement=20, pad=20, stride_1=1, stride_2=2, _device="/device:GPU:3"](flownet_c_features/conv3/leaky_relu/Maximum, flownet_c_features/conv3_1/leaky_relu/Maximum)]]
    
    opened by bragilee 10
  • How flow vectors are stored and why do we need to do addition in this line of forward warping?

    Hi @simonmeister ,

    I was reading your forward_warp code and couldn't understand why you are doing addition in lines 29 and 30 instead of subtraction.

    https://github.com/simonmeister/UnFlow/blob/ddb4bd32f762168ed38849f736b6a7d2094b0042/ops/forward_warp_op.cu.cc#L29

    If you look at tensorflow's implementation of dense_image_warp, you can see that they are doing subtraction when it comes to finding query points for interpolation step.

    https://github.com/tensorflow/tensorflow/blob/6612da89516247503f03ef76e974b51a434fb52e/tensorflow/contrib/image/python/ops/dense_image_warp.py#L210

    I guess it all depends on how flow vectors are stored in flow matrices. In the following picture, I have shown two different ways of storing a flow vector. Which one do you think is the correct one, Flow field 1 or Flow field 2? (I mean: which one is standard and used by benchmark datasets like Middlebury and Sintel?)

    [In this simple example, one black dot moves from location (2,6) in image one to (5,10) in image two, so the flow vector for that point is (3,4).]

    (attached figure: FlowExplained)

    If you make clear which one you are assuming, I can try to see why you are doing addition in those lines.
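
    As a point of reference for the convention question, here is a small NumPy sketch (written for this page, not taken from the repository) that uses the example above and assumes the usual convention that the flow stored at a frame-1 pixel points to that pixel's location in frame 2; under this convention a forward warp splats each pixel to x + w(x), which is where the addition comes from:

        import numpy as np

        H = W = 16
        im1 = np.zeros((H, W)); im1[6, 2] = 1.0            # dot at (x=2, y=6) in frame 1
        flow = np.zeros((H, W, 2)); flow[6, 2] = (3, 4)    # (dx, dy) stored at the source pixel

        # Forward warping splats each frame-1 pixel to x + w(x), hence the addition.
        fwd = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                dx, dy = flow[y, x]
                yt, xt = int(y + dy), int(x + dx)
                if 0 <= yt < H and 0 <= xt < W:
                    fwd[yt, xt] += im1[y, x]

        assert fwd[10, 5] == 1.0                           # the dot lands at (x=5, y=10), as in frame 2

    Under the same convention, a backward warp reads rather than writes: it reconstructs frame 1 by sampling frame 2 at x + w(x).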

    opened by alisaaalehi 7
  • Do you write UnFlow using pytorch?

    Hi, I like your paper very much. I would like to use PyTorch to build my model, but I can only find the TensorFlow version. If I want to use this loss function in my PyTorch model, do I have to write the loss function myself?

    Thank you in advance

    opened by hzk7287 6
  • How to do BP for occlusion mask penalty?

    Hi @simonmeister, thanks for your nice work and repo. I've read your paper and there is one thing that I do not understand. In equation (2) the occlusion mask o_x is penalized with weight lambda_p. However, in equation (1) this mask is calculated by comparison. How is back-propagation done for this penalty term? (two images of the equations were attached)

    Thanks~

    wontfix 
    opened by EthanZhangYi 6
  • How many iterations does FlownetC need?

    I have trained for 130,400 iterations with batch_size 8 on the Flying Chairs dataset. The predicted flows on the test set are still not good. Should I continue or switch to FlowNetCS?

    opened by dvorak0 6
  • Doesn't work if cuda is not installed in usr/local

    If CUDA is not installed in '/usr/local', the code does not run correctly. This is due to lines 42 and 46 in ops.py.

    Note also that lines 37-40 are not used in ops.py, and therefore neither is line 17 in config.ini.

    My simple fix to this was to replace lines 37 to 46 in ops.py with the following:

        out, err = subprocess.Popen(['which', 'nvcc'], stdout=subprocess.PIPE).communicate()
        cuda_dir = out.decode().split('/cuda')[0]
    
        nvcc_cmd = "nvcc -std=c++11 -c -o {} {} {} -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I " + cuda_dir + " --expt-relaxed-constexpr"
        nvcc_cmd = nvcc_cmd.format(" ".join([fn_cu_o, fn_cu_cc]),
                                   tf_inc, tf_lib)
        subprocess.check_output(nvcc_cmd, shell=True)
        gcc_cmd = "{} -std=c++11 -shared -o {} {} -fPIC -L " + cuda_dir + "/cuda/lib64 -lcudart {} -O2 -D GOOGLE_CUDA=1"
    

    I also removed line 17 from config.ini

    Finally, I know this is a custom config file, but I would also suggest perhaps changing line 15 in config.ini to

    g++ = g++

    I have tested on 3 different machines, all with different versions of linux, tensorflow, and cuda. With these small changes, your code runs immediately following a clone from this repo on all of them (after copying over the config.ini file of course.)

    Just a few small suggestions to make things as out-of-the-box as possible! :)

    opened by djl11 5
  • Check failed: dim.IsSet() Internal error: Got nullptr for Dimension in downsample

    Hello, thanks again for your work!

    I have a question. In unsupervised.py, im1_s and im2_s have shapes (2, 320, 1152, ?). After the first downsample(im1_s, 4) the shapes become (2, 80, 288, ?), and then in each iteration when computing the loss (https://github.com/simonmeister/UnFlow/blob/master/src/e2eflow/core/unsupervised.py#L116)

    there is a downsample(im1_s, 2) and the shapes halve each time. So, at some point, I have shape (2, 5, 18, ?), and there is an error:

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/tensorflow/core/framework/shape_inference.h:625] Check failed: dim.IsSet() Internal error: Got nullptr for Dimension . Aborted (core dumped)

    What can cause such a problem and how can I solve it?
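
    For reference, the shape progression described above can be reproduced with a few lines of arithmetic (illustration only):

        # Start from the 80x288 resolution after the initial 4x downsampling,
        # then halve the spatial size at every further loss level.
        h, w = 80, 288
        for level in range(1, 5):
            h, w = h // 2, w // 2
            print(level, (h, w))   # -> (40, 144), (20, 72), (10, 36), (5, 18)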

    opened by lenazherdeva 5
  • When finetuning, previous networks are kept fixed?

    Hi, Simon! Thanks for your wonderful code! But I have some questions now; please give me some advice. When fine-tuning, such as using the UnFlowC experiment for the first network and UnFlowCS for the second network when training UnFlowCSS, you said that only the final network is trained and any previous networks are kept fixed. But I can't figure out how you keep the previous networks fixed in the code. Are there any settings in the code to make sure the previous networks are fixed? Please give me some clues, thank you very much!
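
    A general TF 1.x pattern for keeping part of a stacked model fixed is to restrict the optimizer to the trainable variables of the last network via var_list. The sketch below only illustrates that mechanism, not necessarily how this repository implements it; the variable scope names are made up:

        # Illustrative TF 1.x sketch: only variables under the (made-up) scope
        # 'net2' are passed to the optimizer, so 'net1' stays fixed.
        import tensorflow as tf

        with tf.variable_scope('net1'):
            w1 = tf.get_variable('w', initializer=1.0)
        with tf.variable_scope('net2'):
            w2 = tf.get_variable('w', initializer=1.0)

        loss = tf.square(2.0 * w1 - 3.0) + tf.square(2.0 * w2 - 3.0)
        net2_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='net2')
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=net2_vars)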

    opened by Blcony 5
  • KeyError: 'Correlation'

    Hello, I want to get the output flow images from your CSS_ft model; the code I wrote is shown below:

    import tensorflow as tf
    import os
    
    ckpt_path = "~/UnFlow/logs/CSS_ft"
    
    with tf.Session() as sess:
        saver = tf.train.import_meta_graph(os.path.join(ckpt_path, "model.ckpt.meta"))
        saver.restore(sess, tf.train.latest_checkpoint(ckpt_path))
        print("finished!")
    

    when I execute the code, it shows the error like this:

    Traceback (most recent call last):
      File "export_endovis.py", line 8, in <module>
        saver = tf.train.import_meta_graph(os.path.join(ckpt_path, "model.ckpt.meta"))
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1435, in import_meta_graph
        meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1457, in _import_meta_graph_with_return_elements
        **kwargs))
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
        return_elements=return_elements)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 399, in import_graph_def
        _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 159, in _RemoveDefaultAttrs
        op_def = op_dict[node.op]
    KeyError: 'Correlation'
    
    

    please help me out, thanks!

    opened by David3310273 1
  • lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: how to get rid of this error

    lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: C:\Users\kabirfaz\Documents\MyData\02_MyProjects\05_P600_Handsigns\venv\lib\site-packages\tensorflow\contrib\tensor_forest\python\ops_tensor_forest_ops.so not found

    Python 3.6.0 tensorflow 1.10.0

    Please help

    help wanted 
    opened by Fazankabir 3
  • error: .\backward_warp_op.so not found

    Hey, I got that error when running run.py. I saw the previous issues #23 and #39 and tried to do as they said, but I still got that error.

    Ubuntu 16, CUDA 9, TensorFlow 1.7 (I also tried 1.12/1.13). I also tried on Windows 10 with CUDA 9 and got the same problem. What can I do?

    help wanted 
    opened by yoryory 2
  • Output and input node name of UnFlow.

    I want to get a frozen graph for inference using the freeze_graph function in TensorFlow. However, I cannot find the input and output node names of UnFlow. How can I get those node names? Or could you please provide them? Thanks.

    opened by shihao1104 0
  • error: constexpr function return is non-constant

    I use TF 1.13, CUDA 10.0 and g++ 4.8, and my OS is Ubuntu 16.04. However, when I run python run.py --help, some errors occur:

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/absl/strings/string_view.h(496): error: constexpr function return is non-constant

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(55): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(309): warning: integer conversion resulted in a change of sign

    /usr/local/lib/python3.5/dist-packages/tensorflow/include/google/protobuf/arena_impl.h(310): warning: integer conversion resulted in a change of sign

    1 error detected in the compilation of "/tmp/tmpxft_00003790_00000000-6_backward_warp_op.cu.cpp1.ii".

    Traceback (most recent call last): File "/home/kjq/UnFlow/src/e2eflow/ops.py", line 59, in op_lib = tf.load_op_library(lib_path) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library lib_handle = py_tf.TF_LoadLibrary(library_filename) tensorflow.python.framework.errors_impl.NotFoundError: ./backward_warp_op.so: cannot open shared object file: No such file or directory

    During handling of the above exception, another exception occurred: Traceback (most recent call last): File "run.py", line 7, in from e2eflow.core.train import Trainer File "/home/kjq/UnFlow/src/e2eflow/core/train.py", line 11, in from . import util File "/home/kjq/UnFlow/src/e2eflow/core/util.py", line 2, in from ..ops import downsample as downsample_ops File "/home/kjq/UnFlow/src/e2eflow/ops.py", line 61, in compile(n) File "/home/kjq/UnFlow/src/e2eflow/ops.py", line 43, in compile subprocess.check_output(nvcc_cmd, shell=True) File "/usr/lib/python3.5/subprocess.py", line 626, in check_output **kwargs).stdout File "/usr/lib/python3.5/subprocess.py", line 708, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command 'nvcc -std=c++11 -c -o backward_warp_op.cu.o backward_warp_op.cu.cc -I/usr/local/lib/python3.5/dist-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0 -L/usr/local/lib/python3.5/dist-packages/tensorflow -ltensorflow_framework -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I /usr/local --expt-relaxed-constexpr' returned non-zero exit status 1

    Could you give some advice?

    opened by ooFormalinoo 1
  • "step" parameter to load frames has no effect

    I am looking at the data loading function and in particular at its step parameter:

    https://github.com/simonmeister/UnFlow/blob/master/src/e2eflow/core/input.py#L164-L165

    It appears, though, that regardless of the choice of step, only consecutive frames are used, because frames are indexed by i and i+1. I would assume the indexing to be i and i+step. Is this a bug, or do I misunderstand the intention behind this parameter?
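
    For reference, the behaviour the question expects would look roughly like this (purely hypothetical sketch, not the repository's input pipeline):

        # Hypothetical intent of the `step` parameter: pair frame i with
        # frame i + step instead of the immediately following frame.
        def make_pairs(frames, step=1):
            return [(frames[i], frames[i + step]) for i in range(len(frames) - step)]

        assert make_pairs(list(range(6)), step=2) == [(0, 2), (1, 3), (2, 4), (3, 5)]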

    opened by eldar 0