Generate image analogies using neural matching and blending

Overview

neural image analogies

(Example image analogies: arch, Sugar Steve, season transfer, Trump.)

This is basically an implementation of the "Image Analogies" paper; in our case, we use feature maps from VGG16. The patch matching and blending are inspired by the method described in "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis". Effects similar to that paper can be achieved by turning off the analogy loss (or leaving it on!) via --analogy-w=0 and turning on the B/B' content weighting via the --b-content-w parameter. Also, instead of using brute-force patch matching, we use the PatchMatch algorithm to approximate the best patch matches. Brute-force matching can be re-enabled by setting --model=brute.
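For intuition, here is a minimal numpy sketch of the brute-force patch-matching idea (illustrative only; the project's actual implementation matches patches of VGG16 feature maps, uses PatchMatch by default, and also blends the matched patches):

    import numpy as np

    def extract_patches(fmap, size=3):
        # fmap: (H, W, C) feature map -> (N, size*size*C) flattened patches
        H, W, C = fmap.shape
        return np.array([fmap[y:y + size, x:x + size, :].ravel()
                         for y in range(H - size + 1)
                         for x in range(W - size + 1)])

    def brute_force_match(a_feat, b_feat, size=3):
        # For every patch of b_feat, find the index of the most similar patch
        # of a_feat under cosine similarity (normalized cross-correlation).
        a = extract_patches(a_feat, size)
        b = extract_patches(b_feat, size)
        a /= np.linalg.norm(a, axis=1, keepdims=True) + 1e-8
        b /= np.linalg.norm(b, axis=1, keepdims=True) + 1e-8
        return (b @ a.T).argmax(axis=1)

    # stand-ins for VGG16 feature maps of A' and B
    best = brute_force_match(np.random.rand(8, 8, 64), np.random.rand(8, 8, 64))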

The initial code was adapted from the Keras "neural style transfer" example.

The example arch images are from the "Image Analogies" website. They have some other good examples from their own implementation which are worth a look. Their paper discusses the various applications of image analogies so you might want to take a look for inspiration.

Installation

This requires either TensorFlow or Theano. If you don't have a GPU you'll want to use TensorFlow. GPU users may find Theano to be faster, at the expense of longer startup times. Here's the Theano GPU guide.

Here's how to configure the backend with Keras and set your default device (e.g. cpu, gpu0).
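For example (a hedged sketch of the standard Keras/Theano configuration conventions; file contents and device names depend on your versions and hardware):

    # ~/.keras/keras.json selects which backend Keras uses (other fields omitted)
    {"backend": "theano"}

    # Theano only: choose the device for a single run via environment flags
    THEANO_FLAGS='device=gpu0,floatX=float32' make_image_analogy.py ...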

To install via virtualenv, run the following commands.

virtualenv venv
source venv/bin/activate
pip install neural-image-analogies

If you have trouble with the above method, follow these directions to install the latest Keras and Theano or TensorFlow.

The script make_image_analogy.py should now be on your path.

Before running the script, download the weights for the VGG16 model. This file contains only the convolutional layers of VGG16, about 10% of the size of the full weight file (original source of the full weights). The script assumes the weights are in the current working directory; if you place them somewhere else, pass the --vgg-weights= parameter or set the VGG_WEIGHT_PATH environment variable.
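For example (paths are illustrative):

    make_image_analogy.py --vgg-weights=/path/to/vgg16_weights.h5 image-A image-A-prime image-B out/prefix

    # or set it once for the shell session:
    export VGG_WEIGHT_PATH=/path/to/vgg16_weights.h5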

Example script usage: make_image_analogy.py image-A image-A-prime image-B prefix_for_output

e.g.:

make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch

The examples directory has a script, render_example.sh, which accepts an example name prefix and, optionally, the location of your VGG weights.

./render_example.sh arch /path/to/your/weights.h5

Currently, A and A' must be the same size; the same holds for B and B'. Output size is the same as Image B, unless specified otherwise.
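If your A and A' (or B and B') differ in size, resize one to match the other first, for example with ImageMagick (assuming it is installed; Pillow or any image editor works just as well, and the filenames and geometry here are illustrative):

    convert a-prime.jpg -resize '512x384!' a-prime-resized.jpg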

It's too slow

If you're not using a GPU, use TensorFlow. My MacBook Pro can render a 512x512 image in approximately 12 minutes using TensorFlow and --mrf-w=0. Here are some other options, which mostly trade quality for speed.

  • If you're using Theano, enable OpenMP threading via the environment variables THEANO_FLAGS='openmp=1' OMP_NUM_THREADS=<number of cores>. You can read more about multi-core support here. (A combined example follows this list.)
  • set --mrf-w=0 to skip optimization of local coherence
  • use fewer feature layers by setting --mrf-layers=conv4_1 and/or --analogy-layers=conv4_1 (or other layers), which will consider half as many feature layers
  • generate a smaller image by either using a smaller source Image B, or setting the --width or --height parameters
  • ensure you're not using --model=brute, which needs a powerful GPU
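Putting several of these together (a hedged example; the thread count, layer choice, and width are illustrative):

    THEANO_FLAGS='openmp=1' OMP_NUM_THREADS=4 make_image_analogy.py \
      --mrf-w=0 --analogy-layers=conv4_1 --width=256 \
      images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch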

I want it to look better

The default settings are somewhat lowered to give the average user a better chance at generating something on whatever computer they may have. If you have a powerful GPU then here are some options for nicer output:

  • --model=brute will turn on brute-force patch-matching, which runs on the GPU. This is Theano-only (default=patchmatch)
  • --patch-size=3 this will allow for much nicer-looking details (default=1)
  • --mrf-layers=conv1_1,conv2_1,... add more layers to the mix (likewise --analogy-layers and --content-layers); see the example after this list
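For example, a quality-oriented invocation might look like this (a sketch; --model=brute requires Theano and a GPU with plenty of memory):

    make_image_analogy.py --model=brute --patch-size=3 \
      --mrf-layers=conv1_1,conv2_1,conv3_1,conv4_1 \
      images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch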

Parameters

  • --width Sets image output max width
  • --height Sets image output max height
  • --scales Run at N different scales
  • --iters Number of iterations per scale
  • --min-scale Smallest scale to iterate
  • --mrf-w Weight for MRF loss between A' and B'
  • --analogy-w Weight for analogy loss
  • --b-content-w Weight for content loss between B and B'
  • --tv-w Weight for total variation loss
  • --vgg-weights Path to VGG16 weights
  • --a-scale-mode Method of scaling A and A' relative to B
    • 'match': force A to be the same size as B regardless of aspect ratio (former default)
    • 'ratio': apply scale imposed by width/height params on B to A (current default)
    • 'none': leave A/A' alone
  • --a-scale Additional scale factor for A and A'
  • --pool-mode Pooling style used by VGG
    • 'avg': average pooling - generally smoother results
    • 'max': max pooling - more noisy but maybe that's what you want (original default)
  • --contrast adjust the contrast of the output by removing the bottom x percentile and scaling by the (100 - x)th percentile. Defaults to 0.02
  • --output-full Output all intermediate images at full size regardless of actual scale
  • --analogy-layers Comma-separated list of layer names to be used for the analogy loss (default: "conv3_1,conv4_1")
  • --mrf-layers Comma-separated list of layer names to be used for the MRF loss (default: "conv3_1,conv4_1")
  • --content-layers Comma-separated list of layer names to be used for the content loss (default: "conv3_1,conv4_1")
  • --patch-size Patch size used for matching (default: 1)
  • --use-full-analogy match on all of the analogy patches, instead of combining them into one image (slower/more memory but maybe more accurate)
  • --model Select the patch matching model ('patchmatch' or 'brute'). patchmatch is the default and requires less GPU memory, but is less accurate than brute.
  • --nstyle-w Weight for neural style loss between A' and B'
  • --nstyle-layers Comma-separated list of layer names to be used for the neural style loss

The analogy loss is the amount of influence of B -> A -> A' -> B'. It's a structure-preserving mapping of Image B into A' via A.

The MRF loss (or "local coherence") is the influence of B' -> A' -> B'. In the parlance of style transfer, this is the style loss which gives texture to the image.

The B/B' content loss is set to 0.0 by default. You can get effects similar to CNNMRF by turning this up and setting analogy weight to zero. Or leave the analogy loss on for some extra style guidance.

If you'd like to only visualize the analogy target to see what's happening, set the MRF and content losses to zero: --mrf-w=0 --b-content-w=0. This is also much faster, as the MRF loss is the slowest part of the algorithm.
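For example (the weight values here are illustrative):

    # visualize only the analogy target (fast)
    make_image_analogy.py --mrf-w=0 --b-content-w=0 image-A image-A-prime image-B out/analogy-target

    # CNNMRF-like: content plus local coherence, no analogy
    make_image_analogy.py --analogy-w=0 --b-content-w=1 image-A image-A-prime image-B out/cnnmrf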

License

The code for this implementation is provided under the MIT license.

The suggested VGG16 weights are originally from here and are licensed under Creative Commons Attribution-NonCommercial 4.0 (http://creativecommons.org/licenses/by-nc/4.0/). Open a ticket if you have a suggestion for a more free-as-in-free-speech license.

The attributions for the example art can be found in examples/images/ATTRIBUTIONS.md

Comments
  • Specify theano flags in readme and create an out directory


    It took me a while to figure out:

    mkdir out
    THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch
    
    

    I'm also creating an AMI in the Northern Virginia region. Will share once it's ready.

    opened by ghost 12
  • ValueError: Layer weight shape not compatible with provided weight shape


    After installing TensorFlow (Python 3 / CPU-only) on Anaconda, I tried to run the script without success:

    $ make_image_analogy.py images/a.jpg images/a.jpg images/b.jpg out/b                                       
    Using TensorFlow backend.
    Tensorflow detected. Forcing --a-scale-mode=match (A images are scaled to same size as B images)
    Using PatchMatch model
    Scale factor 0.25 "A" shape (1, 3, 48, 64) "B" shape (1, 3, 48, 64)
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    Traceback (most recent call last):
      File "/home/dori/.conda/envs/py36/bin/make_image_analogy.py", line 27, in <module>
        image_analogy.main.main(args, model_class)
      File "/home/dori/.conda/envs/py36/lib/python3.6/site-packages/image_analogy/main.py", line 69, in main
        net = vgg16.get_model(img_width, img_height, weights_path=args.vgg_weights, pool_mode=args.pool_mode)
      File "/home/dori/.conda/envs/py36/lib/python3.6/site-packages/image_analogy/vgg16.py", line 89, in get_model
        layer.set_weights(weights)
      File "/home/dori/.conda/envs/py36/lib/python3.6/site-packages/keras/engine/topology.py", line 1154, in set_weights
        'provided weight shape ' + str(w.shape))
    ValueError: Layer weight shape (3, 3, 3, 64) not compatible with provided weight shape (64, 3, 3, 3)
    

    Any idea how to solve this issue?

    PS: Note that I renamed all Convolution2D(XXX, 3, 3, activation=... calls to Conv2D(XXX, (3, 3), activation=... to fix the many UserWarnings like:

    /home/dori/.conda/envs/py36/lib/python3.6/site-packages/image_analogy/vgg16.py:71:
    UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(512, (3, 3), activation="relu", name="conv5_3")`
    model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
    
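    A hedged note on the error above: the weight file stores kernels in Theano ordering (output_channels, input_channels, rows, cols), while the TensorFlow backend expects (rows, cols, input_channels, output_channels); this is exactly the (64, 3, 3, 3) vs (3, 3, 3, 64) mismatch in the trace. A minimal reordering sketch (illustrative only; a full conversion may also need the kernels flipped spatially):

    import numpy as np

    w_theano = np.zeros((64, 3, 3, 3))           # stand-in for a kernel from the file
    w_tf = np.transpose(w_theano, (2, 3, 1, 0))  # -> shape (3, 3, 3, 64)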
    opened by AdrienLemaire 11
  • not working as well as it did a few days ago?


    Hey, great work on this. Looks great. However, the changes you've made in the past few days have changed the output quite a lot. A few days ago (commit: https://github.com/awentzonline/image-analogies/commit/c6f35e73bfb35035e195dd5a3bb5a018588bec3e) I would get this (which looks perfect): pf0000_at_iteration_2_4_old

    but now I get this (with the same settings): pf0000_at_iteration_2_4_new

    my mask is this: pf-nm_0000

    (Also, the new version is exactly 2x faster, which is great, but I'm not getting anywhere near the same results.)

    opened by memo 9
  • Cuda Dimension Mismatch


    when running this command

    python2.7 make_image_analogy.py ~/Documents/imganal/examples/images/arch-A.jpg ~/Documents/imganal/examples/images/arch-Ap.jpg ~/Documents/imganal/examples/images/arch-B.jpg ~/Documents/imganal/out/img

    on the arch example it runs for one pass (0x0 through 0x4) and then I get the following trace:

    Traceback (most recent call last):
      File "make_image_analogy.py", line 25, in <module>
        image_analogy.main.main(args, model_class)
      File "build/bdist.linux-x86_64/egg/image_analogy/main.py", line 69, in main
        model.build(a_image, ap_image, b_image, (1, img_num_channels, img_height, img_width))
      File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 23, in build
      File "build/bdist.linux-x86_64/egg/image_analogy/models/analogy.py", line 22, in build_loss
      File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 51, in precompute_static_features
      File "build/bdist.linux-x86_64/egg/image_analogy/models/base.py", line 60, in get_features
      File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 384, in call
        return self.function(*inputs)
      File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in call
        storage_map=getattr(self.fn, 'storage_map', None))
      File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
        reraise(exc_type, exc_value, exc_trace)
      File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in call
        outputs = self.fn()
    ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 2, destination=290, source=289
    Apply node that caused the error: GpuIncSubtensor{InplaceSet;::, ::, int64:int64:, int64:int64:}(GpuAlloc{memset_0=True}.0, GpuElemwise{Composite{(i0 * ((i1 + i2) + Abs((i1 + i2))))}}[(0, 1)].0, Constant{1}, Constant{291}, Constant{1}, Constant{201})
    Toposort index: 46
    Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D), Scalar(int64), Scalar(int64), Scalar(int64), Scalar(int64)]
    Inputs shapes: [(1, 64, 292, 202), (1, 64, 289, 200), (), (), (), ()]
    Inputs strides: [(0, 58984, 202, 1), (0, 57800, 200, 1), (), (), (), ()]
    Inputs values: ['not shown', 'not shown', 1, 291, 1, 201]
    Outputs clients: [[GpuContiguous(GpuIncSubtensor{InplaceSet;::, ::, int64:int64:, int64:int64:}.0)]]

    All my requirements are up to date/correct (or at least pip install -r requirements.txt says so). I've had this project installed for a while, so it might be that cruft from older versions is conflicting with this one (it's not in a venv or anything). I think it has something to do with the image heights and widths. If I run with an example where A is not the same dimensions as B, it crashes before the first pass... I vaguely remember this problem happening on an older version of this project but I forget how I fixed it. I'm not sure if it's me or the new version, any ideas?

    opened by DrChromaticNo 7
  • Error while running example ("Cannot convert %s to TensorType" % str_x, type(x))

    Hello guys, I am trying to run the examples and I keep getting this error.

    File "/home/cyb/image-analogies/venv/local/lib/python2.7/site-packages/theano/tensor/basic.py", line 208, in as_tensor_variable
        raise AsTensorError("Cannot convert %s to TensorType" % str_x, type(x))
    theano.tensor.var.AsTensorError: ('Cannot convert Tensor("ExpandDims:0", shape=(1, 256, 32, 22), dtype=float32) to TensorType', <class 'tensorflow.python.framework.ops.Tensor'>)

    Would you have any clues how to fix that ? Thanks a lot!

    opened by Cybrak 6
  • Changed make_patches to patches.make_patches


    Minor fix to have make_patches method reference the patches module. This bug wouldn't be noticed unless you are trying to use full analogy patch matching.

    opened by vonclites 3
  • Support for Leaf (Tensorflow alternative)


    Leaf is an alternative to TensorFlow that is allegedly faster and easier to run on the GPU, thanks to the Rust language's capabilities. Torch, on the other hand, is powered by LuaJIT, and seems great, too.

    Torch seems very fast, too - http://autumnai.com/deep-learning-benchmarks

    Do you think it would be interesting to support any of them? https://github.com/autumnai/leaf https://github.com/torch/torch7

    opened by giovannibonetti 3
  • getting errors installing


    I should have all dependencies installed, and Theano enabled for OpenCL.

    Running pip install -r requirements.txt, I get:

    pip install -r requirements.txt
    Requirement already satisfied (use --upgrade to upgrade): Cython==0.23.4 in ./venv/lib/python2.7/site-packages (from -r requirements.txt (line 1))
    Collecting h5py==2.5.0 (from -r requirements.txt (line 2))
      Using cached h5py-2.5.0.tar.gz
    Collecting Keras==0.3.2 (from -r requirements.txt (line 3))
    Requirement already satisfied (use --upgrade to upgrade): numpy==1.10.4 in ./venv/lib/python2.7/site-packages (from -r requirements.txt (line 4))
    Collecting Pillow==3.1.1 (from -r requirements.txt (line 5))
    Collecting PyYAML==3.11 (from -r requirements.txt (line 6))
    Collecting scipy==0.17.0 (from -r requirements.txt (line 7))
      Using cached scipy-0.17.0.tar.gz
    Requirement already satisfied (use --upgrade to upgrade): six==1.10.0 in ./venv/lib/python2.7/site-packages (from -r requirements.txt (line 8))
    Obtaining Theano from git+git://github.com/Theano/[email protected]#egg=Theano (from -r requirements.txt (line 9))
      Skipping because already up-to-date.
    Building wheels for collected packages: h5py, scipy
      Running setup.py bdist_wheel for h5py ... error
      Complete output from command /home/alex/image-analogies/venv/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-g7OTWL/h5py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmpImCcmFpip-wheel- --python-tag cp27:
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-2.7
      creating build/lib.linux-x86_64-2.7/h5py
      copying h5py/highlevel.py -> build/lib.linux-x86_64-2.7/h5py
      copying h5py/__init__.py -> build/lib.linux-x86_64-2.7/h5py
      copying h5py/ipy_completer.py -> build/lib.linux-x86_64-2.7/h5py
      copying h5py/version.py -> build/lib.linux-x86_64-2.7/h5py
      creating build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/base.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/selections.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/selections2.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/group.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/datatype.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/attrs.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/dims.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/dataset.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/files.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      copying h5py/_hl/filters.py -> build/lib.linux-x86_64-2.7/h5py/_hl
      creating build/lib.linux-x86_64-2.7/h5py/tests
      copying h5py/tests/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests
      copying h5py/tests/common.py -> build/lib.linux-x86_64-2.7/h5py/tests
      creating build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_dataset.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_h5.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_h5p.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_h5f.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_selections.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_objects.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_dimension_scales.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_slicing.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_attrs_data.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_base.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_h5t.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_datatype.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/common.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_attrs.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      copying h5py/tests/old/test_group.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
      creating build/lib.linux-x86_64-2.7/h5py/tests/hl
      copying h5py/tests/hl/test_dataset_swmr.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
      copying h5py/tests/hl/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
      copying h5py/tests/hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
      copying h5py/tests/hl/test_dims_dimensionproxy.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
      copying h5py/tests/hl/test_dataset_getitem.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
      copying h5py/tests/hl/test_attribute_create.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
      running build_ext
      Autodetection skipped [libhdf5.so: cannot open shared object file: No such file or directory]
      ********************************************************************************
                             Summary of the h5py configuration
    
          Path to HDF5: None
          HDF5 Version: '1.8.4'
           MPI Enabled: False
      Rebuild Required: False
    
      ********************************************************************************
      Executing api_gen rebuild of defs
      Executing cythonize()
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/defs.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_errors.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_objects.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_proxy.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5fd.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5z.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5i.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5r.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/utils.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/_conv.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5t.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5s.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5p.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5d.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5a.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5f.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5g.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5l.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5o.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5ds.pyx because it changed.
      Compiling /tmp/pip-build-g7OTWL/h5py/h5py/h5ac.pyx because it changed.
      [ 1/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_conv.pyx
      [ 2/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_errors.pyx
      [ 3/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_objects.pyx
      [ 4/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/_proxy.pyx
      [ 5/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/defs.pyx
      [ 6/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5.pyx
      [ 7/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5a.pyx
      [ 8/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5ac.pyx
      [ 9/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5d.pyx
      [10/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5ds.pyx
      [11/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5f.pyx
      [12/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5fd.pyx
      [13/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5g.pyx
      [14/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5i.pyx
      [15/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5l.pyx
      [16/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5o.pyx
      [17/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5p.pyx
      [18/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5r.pyx
      [19/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5s.pyx
      [20/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5t.pyx
      [21/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/h5z.pyx
      [22/22] Cythonizing /tmp/pip-build-g7OTWL/h5py/h5py/utils.pyx
      building 'h5py.defs' extension
      creating build/temp.linux-x86_64-2.7
      creating build/temp.linux-x86_64-2.7/tmp
      creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL
      creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py
      creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py
      x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DH5_USE_16_API -I/tmp/pip-build-g7OTWL/h5py/lzf -I/opt/local/include -I/usr/local/include -I/home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c /tmp/pip-build-g7OTWL/h5py/h5py/defs.c -o build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py/defs.o
      In file included from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1781:0,
                       from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                       from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                       from /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:26,
                       from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:
      /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
       #warning "Using deprecated NumPy API, disable it by " \
        ^
      In file included from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:0:
      /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory
      compilation terminated.
      error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    
      ----------------------------------------
      Failed building wheel for h5py
      Running setup.py clean for h5py
      Running setup.py bdist_wheel for scipy ... done
      Stored in directory: /home/alex/.cache/pip/wheels/76/aa/e2/031ee833b4abfd33d8620e4bc36f8178b95cfcf36ec550a6b9
    Successfully built scipy
    Failed to build h5py
    Installing collected packages: h5py, scipy, Theano, PyYAML, Keras, Pillow
      Running setup.py install for h5py ... error
        Complete output from command /home/alex/image-analogies/venv/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-g7OTWL/h5py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-x5IHF0-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/alex/image-analogies/venv/include/site/python2.7/h5py:
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-2.7
        creating build/lib.linux-x86_64-2.7/h5py
        copying h5py/highlevel.py -> build/lib.linux-x86_64-2.7/h5py
        copying h5py/__init__.py -> build/lib.linux-x86_64-2.7/h5py
        copying h5py/ipy_completer.py -> build/lib.linux-x86_64-2.7/h5py
        copying h5py/version.py -> build/lib.linux-x86_64-2.7/h5py
        creating build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/base.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/selections.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/selections2.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/group.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/datatype.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/attrs.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/dims.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/dataset.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/files.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        copying h5py/_hl/filters.py -> build/lib.linux-x86_64-2.7/h5py/_hl
        creating build/lib.linux-x86_64-2.7/h5py/tests
        copying h5py/tests/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests
        copying h5py/tests/common.py -> build/lib.linux-x86_64-2.7/h5py/tests
        creating build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_dataset.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_h5.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_h5p.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_h5f.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_selections.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_objects.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_dimension_scales.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_slicing.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_attrs_data.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_base.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_h5t.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_datatype.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/common.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_attrs.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        copying h5py/tests/old/test_group.py -> build/lib.linux-x86_64-2.7/h5py/tests/old
        creating build/lib.linux-x86_64-2.7/h5py/tests/hl
        copying h5py/tests/hl/test_dataset_swmr.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
        copying h5py/tests/hl/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
        copying h5py/tests/hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
        copying h5py/tests/hl/test_dims_dimensionproxy.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
        copying h5py/tests/hl/test_dataset_getitem.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
        copying h5py/tests/hl/test_attribute_create.py -> build/lib.linux-x86_64-2.7/h5py/tests/hl
        running build_ext
        Autodetection skipped [libhdf5.so: cannot open shared object file: No such file or directory]
        ********************************************************************************
                               Summary of the h5py configuration
    
            Path to HDF5: None
            HDF5 Version: '1.8.4'
             MPI Enabled: False
        Rebuild Required: False
    
        ********************************************************************************
        Executing cythonize()
        building 'h5py.defs' extension
        creating build/temp.linux-x86_64-2.7
        creating build/temp.linux-x86_64-2.7/tmp
        creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL
        creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py
        creating build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py
        x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DH5_USE_16_API -I/tmp/pip-build-g7OTWL/h5py/lzf -I/opt/local/include -I/usr/local/include -I/home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c /tmp/pip-build-g7OTWL/h5py/h5py/defs.c -o build/temp.linux-x86_64-2.7/tmp/pip-build-g7OTWL/h5py/h5py/defs.o
        In file included from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1781:0,
                         from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                         from /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                         from /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:26,
                         from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:
        /home/alex/image-analogies/venv/local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
         #warning "Using deprecated NumPy API, disable it by " \
          ^
        In file included from /tmp/pip-build-g7OTWL/h5py/h5py/defs.c:279:0:
        /tmp/pip-build-g7OTWL/h5py/h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory
        compilation terminated.
        error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    
        ----------------------------------------
    Command "/home/alex/image-analogies/venv/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-g7OTWL/h5py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-x5IHF0-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/alex/image-analogies/venv/include/site/python2.7/h5py" failed with error code 1 in /tmp/pip-build-g7OTWL/h5py/
    

    I guess it has something to do with some deprecated APIs or some missing file, but I really don't get what I should do.

    Any help?
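    A hedged note on this particular failure: the log's "fatal error: hdf5.h: No such file or directory" usually means the HDF5 development headers needed to compile h5py are missing. On Debian/Ubuntu something like the following typically fixes it (the package name may vary by distribution):

    sudo apt-get install libhdf5-dev
    pip install -r requirements.txt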

    opened by rayset 3
  • add requirements.txt and explicitly state install directions


    @awentzonline this is great. I just made my first image analogy, thanks.

    I'm pretty new to python so I struggled a bit with the setup. Here is an update to the readme and a requirements.txt file to help others in the future.

    opened by mcwhittemore 3
  • Bump numpy from 1.10.4 to 1.21.0


    Bumps numpy from 1.10.4 to 1.21.0.

    Release notes

    Sourced from numpy's releases.

    v1.21.0

    NumPy 1.21.0 Release Notes

    The NumPy 1.21.0 release highlights are

    • continued SIMD work covering more functions and platforms,
    • initial work on the new dtype infrastructure and casting,
    • universal2 wheels for Python 3.8 and Python 3.9 on Mac,
    • improved documentation,
    • improved annotations,
    • new PCG64DXSM bitgenerator for random numbers.

    In addition there are the usual large number of bug fixes and other improvements.

    The Python versions supported for this release are 3.7-3.9. Official support for Python 3.10 will be added when it is released.

    Warning: there are unresolved problems compiling NumPy 1.21.0 with gcc-11.1.

    • Optimization level -O3 results in many wrong warnings when running the tests.
    • On some hardware NumPy will hang in an infinite loop.

    New functions

    Add PCG64DXSM BitGenerator

    Uses of the PCG64 BitGenerator in a massively-parallel context have been shown to have statistical weaknesses that were not apparent at the first release in numpy 1.17. Most users will never observe this weakness and are safe to continue to use PCG64. We have introduced a new PCG64DXSM BitGenerator that will eventually become the new default BitGenerator implementation used by default_rng in future releases. PCG64DXSM solves the statistical weakness while preserving the performance and the features of PCG64.

    See upgrading-pcg64 for more details.

    (gh-18906)

    Expired deprecations

    • The shape argument numpy.unravel_index cannot be passed as dims keyword argument anymore. (Was deprecated in NumPy 1.16.)

    ... (truncated)

    Commits
    • b235f9e Merge pull request #19283 from charris/prepare-1.21.0-release
    • 34aebc2 MAINT: Update 1.21.0-notes.rst
    • 493b64b MAINT: Update 1.21.0-changelog.rst
    • 07d7e72 MAINT: Remove accidentally created directory.
    • 032fca5 Merge pull request #19280 from charris/backport-19277
    • 7d25b81 BUG: Fix refcount leak in ResultType
    • fa5754e BUG: Add missing DECREF in new path
    • 61127bb Merge pull request #19268 from charris/backport-19264
    • 143d45f Merge pull request #19269 from charris/backport-19228
    • d80e473 BUG: Removed typing for == and != in dtypes
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 2
  • SyntaxError: Missing parentheses in call to 'print'


    Trying to run the example make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch, the output is SyntaxError: Missing parentheses in call to 'print':

    (venv) liudeMacBook-Pro:scripts liu$ make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch
    Using Theano backend.
    /Users/liu/Code/venv/lib/python3.4/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
      "downsample module has been moved to the theano.tensor.signal.pool module.")
    Theano CPU mode detected. Forcing a-scale-mode to "match"
    Using PatchMatch model
    Traceback (most recent call last):
      File "/Users/liu/Code/venv/bin/make_image_analogy.py", line 21, in <module>
        from image_analogy.models.nnf import NNFModel as model_class
      File "/Users/liu/Code/venv/lib/python3.4/site-packages/image_analogy/models/nnf.py", line 7, in <module>
        from image_analogy.losses.nnf import nnf_analogy_loss, NNFState, PatchMatcher
      File "/Users/liu/Code/venv/lib/python3.4/site-packages/image_analogy/losses/nnf.py", line 5, in <module>
        from .patch_matcher import PatchMatcher
      File "/Users/liu/Code/venv/lib/python3.4/site-packages/image_analogy/losses/patch_matcher.py", line 187
        print "[congrid] dimensions error. "
            ^
    SyntaxError: Missing parentheses in call to 'print'
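    A hedged note: patch_matcher.py uses a Python 2 print statement at the line shown, so it fails to parse under Python 3. Running under Python 2, or patching the statement into a function call, should get past this particular error:

    # image_analogy/losses/patch_matcher.py, around line 187
    print("[congrid] dimensions error. ")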

    opened by Heipiao 2
  • Bump pillow from 3.1.1 to 9.3.0


    Bumps pillow from 3.1.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 0
  • Bump numpy from 1.10.4 to 1.22.0


    Bumps numpy from 1.10.4 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 0
  • Bump pyyaml from 3.11 to 5.4


    Bumps pyyaml from 3.11 to 5.4.

    Changelog

    Sourced from pyyaml's changelog.

    5.4 (2021-01-19)

    5.3.1 (2020-03-18)

    • yaml/pyyaml#386 -- Prevents arbitrary code execution during python/object/new constructor

    5.3 (2020-01-06)

    5.2 (2019-12-02)

    • Repair incompatibilities introduced with 5.1. The default Loader was changed, but several methods like add_constructor still used the old default yaml/pyyaml#279 -- A more flexible fix for custom tag constructors yaml/pyyaml#287 -- Change default loader for yaml.add_constructor yaml/pyyaml#305 -- Change default loader for add_implicit_resolver, add_path_resolver
    • Make FullLoader safer by removing python/object/apply from the default FullLoader yaml/pyyaml#347 -- Move constructor for object/apply to UnsafeConstructor
    • Fix bug introduced in 5.1 where quoting went wrong on systems with sys.maxunicode <= 0xffff yaml/pyyaml#276 -- Fix logic for quoting special characters
    • Other PRs: yaml/pyyaml#280 -- Update CHANGES for 5.1

    5.1.2 (2019-07-30)

    • Re-release of 5.1 with regenerated Cython sources to build properly for Python 3.8b2+

    ... (truncated)

    Commits
    • 58d0cb7 5.4 release
    • a60f7a1 Fix compatibility with Jython
    • ee98abd Run CI on PR base branch changes
    • ddf2033 constructor.timezone: _copy & deepcopy
    • fc914d5 Avoid repeatedly appending to yaml_implicit_resolvers
    • a001f27 Fix for CVE-2020-14343
    • fe15062 Add 3.9 to appveyor file for completeness sake
    • 1e1c7fb Add a newline character to end of pyproject.toml
    • 0b6b7d6 Start sentences and phrases for capital letters
    • c976915 Shell code improvements
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • FileNotFoundError: [Errno 2] No such file or directory: ''


    I followed the installation up to installing venv, then tried running the program with my images and the vgg16 file in the current working directory.

    I installed tensorflow on my machine using pip3 install tensorflow --user

    venv did not detect tensorflow, so I installed tensorflow with pip install tensorflow while in venv.

    Then I ran my command again.

    (venv) [email protected]:~/Documents/Programming/image$ make_image_analogy.py a.jpg a.jpg b.jpg output
    Using TensorFlow backend.
    Tensorflow detected. Forcing --a-scale-mode=match (A images are scaled to same size as B images)
    Using PatchMatch model
    Traceback (most recent call last):
      File "/home/user/.keras/venv/bin/make_image_analogy.py", line 27, in <module>
        image_analogy.main.main(args, model_class)
      File "/home/user/.keras/venv/lib/python3.6/site-packages/image_analogy/main.py", line 34, in main
        os.makedirs(output_dir)
      File "/home/user/.keras/venv/lib/python3.6/os.py", line 220, in makedirs
        mkdir(name, mode)
    FileNotFoundError: [Errno 2] No such file or directory: ''

    I am getting this error involving the PatchMatch model, and I have no clue where to begin troubleshooting it.
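    A hedged reading of the trace: main.py calls os.makedirs() on the directory part of the output prefix, and a bare prefix like output has an empty directory part, so makedirs('') fails. Giving the prefix a directory component should avoid it:

    mkdir -p out
    make_image_analogy.py a.jpg a.jpg b.jpg out/output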

    opened by Logner 0
  • Converting images causes unknown bus error.


    I got all dependencies installed and I finally got some things to run, however I am getting an error when trying to run 2 really small images against each other with a raspberry pi 3 B+. I'm guessing it doesn't have enough memory to do this or I'm missing something.

    Building loss...
    WARNING:tensorflow:From /home/pi/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:460: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    WARNING:tensorflow:Variable += will be deprecated. Use variable.assign_add if you want assignment to the variable value or 'x = x + y' if you want a new python Tensor object.
    Precomputing static features...
    Building and combining losses...
    /home/pi/.local/lib/python2.7/site-packages/sklearn/feature_extraction/image.py:287: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
      indexing_strides = arr[slices].strides
    Start of iteration 0 x 0
    Bus error
    
    
    opened by RattleyCooper 0
  • Add support for video


    Hi, do you think it would be hard to implement DeepFlow/DeepMatching to process multiple frames? Let me know if it seems too difficult or not, and I can have a look. I'm interested in making videos with it.

    opened by martync 0
Owner
Adam Wentz
Machine learning @VodyTV. Previously wrote code for Dollar Shave Club, The Onion / ClickHole / AVClub.