Imagededup - 😎 Finding duplicate images made easy

Overview

Image Deduplicator (imagededup)

imagededup is a Python package that simplifies the task of finding exact and near duplicates in an image collection.

The package provides hashing algorithms, which are particularly good at finding exact duplicates, as well as convolutional neural networks, which are adept at finding near duplicates. An evaluation framework is also provided to judge the quality of deduplication for a given dataset.

The package provides the following functionality:

  • Finding duplicates in an image directory using one of several hashing algorithms (PHash, DHash, WHash, AHash) or a convolutional neural network (CNN)
  • An evaluation framework to judge the quality of deduplication
  • Plotting of the duplicates found for a given image

Detailed documentation for the package can be found at: https://idealo.github.io/imagededup/

imagededup is compatible with Python 3.8+ (support for Python 3.6 and 3.7 was dropped in v0.3.0) and runs on Linux, MacOS X and Windows. It is distributed under the Apache 2.0 license.

⚙️ Installation

There are two ways to install imagededup:

  • Install imagededup from PyPI (recommended):
pip install imagededup

โš ๏ธ Note: The TensorFlow >=2.1 and TensorFlow 1.15 release now include GPU support by default. Before that CPU and GPU packages are separate. If you have GPUs, you should rather install the TensorFlow version with GPU support especially when you use CNN to find duplicates. It's way faster. See the TensorFlow guide for more details on how to install it for older versions of TensorFlow.

  • Install imagededup from the GitHub source:
=0.29" python setup.py install">
git clone https://github.com/idealo/imagededup.git
cd imagededup
pip install "cython>=0.29"
python setup.py install
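
To quickly verify that the installation worked, the package should import cleanly. A minimal check (importlib.metadata requires Python 3.8+, which matches the requirement of recent releases):

from importlib.metadata import version

import imagededup  # should import without errors

print(version('imagededup'))  # prints the installed package version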

🚀 Quick Start

To find duplicates in an image directory using perceptual hashing, the following workflow can be used:

  • Import perceptual hashing method
from imagededup.methods import PHash
phasher = PHash()
  • Generate encodings for all images in an image directory
encodings = phasher.encode_images(image_dir='path/to/image/directory')
  • Find duplicates using the generated encodings
duplicates = phasher.find_duplicates(encoding_map=encodings)
  • Plot duplicates obtained for a given file (e.g., 'ukbench00120.jpg') using the duplicates dictionary
from imagededup.utils import plot_duplicates
plot_duplicates(image_dir='path/to/image/directory',
                duplicate_map=duplicates,
                filename='ukbench00120.jpg')

The output is a plot showing the given file alongside its retrieved duplicates.

The complete code for the workflow is:

from imagededup.methods import PHash
phasher = PHash()

# Generate encodings for all images in an image directory
encodings = phasher.encode_images(image_dir='path/to/image/directory')

# Find duplicates using the generated encodings
duplicates = phasher.find_duplicates(encoding_map=encodings)

# plot duplicates obtained for a given file using the duplicates dictionary
from imagededup.utils import plot_duplicates
plot_duplicates(image_dir='path/to/image/directory',
                duplicate_map=duplicates,
                filename='ukbench00120.jpg')
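
If a flat list of files to delete is preferred over the per-file mapping returned by find_duplicates, the package also exposes find_duplicates_to_remove (listed among the user-facing calls further down this page). A minimal sketch, assuming the max_distance_threshold parameter mentioned elsewhere on this page:

from imagededup.methods import PHash

phasher = PHash()
encodings = phasher.encode_images(image_dir='path/to/image/directory')

# Returns a flat list of filenames considered duplicates of other images;
# a stricter (lower) threshold keeps only closer matches
files_to_remove = phasher.find_duplicates_to_remove(encoding_map=encodings,
                                                    max_distance_threshold=10)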

For more examples, refer to the examples provided in the repository.

For more detailed usage of the package functionality, refer to the documentation: https://idealo.github.io/imagededup/

โณ Benchmarks

Detailed benchmarks on speed and classification metrics for the different methods are provided in the documentation. Generally speaking, the following conclusions can be drawn:

  • CNN works best for near duplicates and datasets containing transformations (a sketch of the CNN workflow follows this list).
  • All deduplication methods fare well on datasets containing exact duplicates, but Difference hashing (DHash) is the fastest.
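
As a sketch of the first point, the CNN-based workflow mirrors the PHash workflow shown above. The CNN class name and the min_similarity_threshold parameter are assumptions based on the method family referenced throughout this page; check the linked documentation before relying on them:

from imagededup.methods import CNN

cnn_encoder = CNN()

# CNN encodings are feature vectors; duplicates are found via similarity scores
encodings = cnn_encoder.encode_images(image_dir='path/to/image/directory')
duplicates = cnn_encoder.find_duplicates(encoding_map=encodings,
                                         min_similarity_threshold=0.9)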

🤝 Contribute

We welcome all kinds of contributions. See the Contribution guide for more details.

📝 Citation

Please cite Imagededup in your publications if this is useful for your research. Here is an example BibTeX entry:

@misc{idealods2019imagededup,
  title={Imagededup},
  author={Tanuj Jain and Christopher Lennan and Zubin John and Dat Tran},
  year={2019},
  howpublished={\url{https://github.com/idealo/imagededup}},
}

๐Ÿ— Maintainers

© Copyright

See LICENSE for details.

Comments
  • Optional parallelization of cosine similarity computation (Issue #95)

    The PR adds parallel_cosine_similarity to find_duplicates, which is set to True by default so that it won't break any existing code. I have a really huge dataset and not enough RAM to use multiprocessing, so introducing the ability to disable parallel computation of cosine similarity was the only way to use the package.
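
    For illustration, a hedged one-liner of how the proposed flag would be used for the low-RAM case. The parameter name is taken exactly from this PR and may not exist in any released version; the CNN class name is assumed from the method family referenced elsewhere on this page:

    from imagededup.methods import CNN

    cnn_encoder = CNN()
    encodings = cnn_encoder.encode_images(image_dir='path/to/image/directory')

    # Disable parallel cosine-similarity computation to keep memory usage low
    duplicates = cnn_encoder.find_duplicates(encoding_map=encodings,
                                             parallel_cosine_similarity=False)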

    enhancement 
    opened by EduardKononov 8
  • Cannot install imagededup==0.0.1, .. imagededup==0.1.0 because these package versions have conflicting dependencies.

    Not sure if this is a recent issue, or related to my using an M1 Mac. Nevertheless, the tail of a very long traceback is below:

    ERROR: Cannot install imagededup==0.0.1, imagededup==0.0.2, imagededup==0.0.3, imagededup==0.0.4 and imagededup==0.1.0 because these package versions have conflicting dependencies.
    
    The conflict is caused by:
        imagededup 0.1.0 depends on tensorflow==2.0.0
        imagededup 0.0.4 depends on tensorflow==2.0.0
        imagededup 0.0.3 depends on tensorflow==1.13.1
        imagededup 0.0.2 depends on tensorflow==1.13.1
        imagededup 0.0.1 depends on tensorflow==1.13.1
    
    To fix this you could try to:
    1. loosen the range of package versions you've specified
    2. remove package versions to allow pip attempt to solve the dependency conflict
    
    opened by robmarkcole 7
  • Duplicates are not found when comparing two files

    The code below always returns empty duplicates when two files are compared, regardless of whether the PNG files are the same or not. (Python 3.8, OSX Catalina)

    import sys
    import os
    
    from imagededup.methods import PHash
    
    if __name__ == '__main__':
        hasher = PHash()
        image_map = {}
    
        for i in range(1,3):
            if not os.path.exists(sys.argv[i]):
                sys.exit(sys.argv[i] + " not found")
            image_map[sys.argv[i]] = hasher.encode_image(image_file=sys.argv[i])
        
        
        duplicates = hasher.find_duplicates(
            encoding_map=image_map,
            max_distance_threshold=0,
            scores=True)
    
        print(duplicates)
    
    bug duplicate 
    opened by mmertama 7
  • Relax the dependency specifiers

    I can see why you may think you need to lock these down to exact versions, but in general for a library like this it's better to keep them more liberal for several reasons:

    1. Locking gives a false sense of security - it doesn't actually lock the whole tree, so a release of any of the dependencies of these packages could cause a break. You need to use a tool like Pipenv, poetry or pip-tools to lock the entire dependency tree.

    2. This is a library; you need to ensure that it fits in nicely with larger applications that may well have similar dependencies. By locking them to exact versions you are going to cause dependency issues, as their specifiers may conflict (i.e. Pillow >6.0.0).

    3. It's conceptually wrong - you're saying that this library only works with Pillow ==6.0.0. Not 6.0.1, which could be a bugfix release that fixes a bunch of issues.

    Testing dependencies, like pytest and others, should also not be locked. They are not critical to the application as a whole, and it's fairly obvious if they break. Which they won't.

    opened by orf 6
  • Installation error

    I forked the repository, worked on the "dev" branch, and while running the command:

    python setup.py install
    

    I got this error message:

    Processing dependencies for imagededup==0.1.0
    Searching for tensorflow~=2.0.0
    Reading https://pypi.org/simple/tensorflow/
    No local packages or working download links found for tensorflow~=2.0.0
    
    opened by ShaharNaveh 5
  • Handle multi picture objects (MPO)

    I have a LOT of images (roughly 1/3 of my entire personal library) that register in PIL as MPO. Your code barfs on all of them, but changing image_utils.py line 15 to IMG_FORMATS = ['JPEG', 'PNG', 'BMP', 'MPO'] fixes this.

    I didn't want to submit a PR in case you already know this and it causes a headache somewhere else. If not, please change this.

    enhancement 
    opened by drrelyea 5
  • Unable to execute this project

    I have several errors during the execution of this project. The main error is that after launching the "pip install imagededup" command, the error shown in the attached screenshot comes up.

    I have already looked at all the other issues, but none of them solve this problem.

    opened by uly94 4
  • Fix/installation

    Running python setup.py install on the dev branch fails.

    Tensorflow and numpy aren't playing well with each other. Leaving tf>1.0 in setup.py with no mention of numpy (i.e., relying on tf to pull in numpy) leads to an error, since tf 2.4.1 gets installed along with numpy 1.20.1 (due to the pip resolver algo), but tf 2.4.1 needs numpy~=1.19.2. Explicitly mentioning numpy <1.20.0 makes the installation work. Additionally, newer versions of numpy (1.20.1), scipy (1.6 onwards) and matplotlib do not support python 3.6 anymore. The changes proposed in this PR will also work from Python 3.7 onwards.

    opened by tanujjain 4
  • cannot identify image file 'filename.png' 2021-01-17 20:12:33,709: WARNING Invalid image file filename.png:

    In v0.2.4, I am getting this error for a .png file: cannot identify image file 'filename.png' 2021-01-17 20:12:33,709: WARNING Invalid image file filename.png:

    opened by awsaf49 4
  • Image format of encode_image method

    Hi there, thanks for your repo. To use the encode_image(image_array=img_array) method, must the input numpy image array be in BGR or RGB format (for example, OpenCV's default format is BGR, but Pillow's is RGB)? Best
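
    The thread does not include an answer. As a hedged precaution, one can convert OpenCV's BGR output to Pillow-style RGB before encoding, so the result does not depend on an undocumented channel order; 'example.jpg' below is a placeholder path:

    import cv2
    from imagededup.methods import PHash

    phasher = PHash()
    img_bgr = cv2.imread('example.jpg')                 # OpenCV loads images as BGR
    img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)  # convert to RGB before encoding
    encoding = phasher.encode_image(image_array=img_rgb)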

    opened by ahkarami 4
  • Fix tests

    Currently we have failing Linux tests because we rely on the order in which images are loaded, which varies across OSs. We don't care in which order images are loaded, so we shouldn't test for it; that's why I removed those lines from tests/test_hashing.py.

    We also have a failing test for macOS Python 3.6 on Azure Pipelines which was not reproducible on my MacBook but was fixed by initialising a new CNN object for the failing test.

    opened by clennan 4
  • Supporting image compression?

    Hey!

    I believe most duplicates are created through compression. For example, if I upload an image to a different service and later re-download it, it's usually compressed and its metadata may have changed. I have many duplicates from various platforms like Google, Facebook, etc.

    I haven't seen anything in the documentation about how this repository handles compression. Will it be able to recognize duplicate images with various levels of compression?

    Thanks!

    opened by PetrochukM 3
  • Introducing new optional multiprocessing parameters

    WHAT/WHY

    Introduces new optional multiprocessing parameters to several methods:

    • num_enc_workers - Change number of processes to generate encodings (Addresses #156)
    • num_sim_workers/num_dist_workers - Change number of processes to compute similarity/distances (Addresses #95, #113)

    HOW

    APIs impacted

    For both the CNN and the Hashing methods, the following user-facing API calls get new parameters:

    • encode_images
    • find_duplicates
    • find_duplicates_to_remove

    Choice of default values

    • num_enc_workers: For CNN methods, this is set to 0 by default (0 = disabled). Furthermore, parallelization of CNN encoding generation is only supported on the Linux platform. This is because pytorch requires the call to the Dataloader to be wrapped within an if __name__ == '__main__' construct to enable multiprocessing on non-Linux platforms (due to the difference between the fork() and spawn() calls used for multiprocessing on different systems). Such a construct does not fit well with the current code structure. For Hashing methods, this parameter is set to the CPU count of the system. The default values preserve backward compatibility.
    • num_dist_workers/num_sim_workers: Set to the CPU count of the system. The default values preserve backward compatibility.
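
    For illustration, a hedged sketch of how these parameters might be passed, assuming the API exactly as described in this PR (the parameter names are taken from the description above and may differ in a released version):

    from imagededup.methods import PHash

    phasher = PHash()

    # Hashing methods: spread encoding generation over 4 worker processes
    encodings = phasher.encode_images(image_dir='path/to/image/directory',
                                      num_enc_workers=4)

    # Spread the Hamming-distance computation over 4 worker processes as well
    duplicates = phasher.find_duplicates(encoding_map=encodings,
                                         num_dist_workers=4)
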
    opened by tanujjain 0
  • Formatted and Updated README.md - Added Streamlit based WebApp 👨‍💻✅

    Hello @tanujjain / @clennan / @datitran, and https://github.com/idealo,

    Kudos to you for bringing up imagededup. I worked on developing a simple Streamlit-based webapp on top of it, and I think it would be fruitful to have it as part of the README here, as the motivation behind developing it came from your work 😄! The entire webapp codebase has also been added as a separate folder, while the main README.md of the project has been updated accordingly.

    You can find the entire webapp source-code in the stream_app directory of the repo.

    Happy opensourcing!

    Cheers, Prateek

    opened by prateekralhan 7
  • On demand duplicate check during runtime with a 'growing' BKTree

    What I would like to achieve is about the following:

    EXISTING_HASHES: set = set()
    def is_duplicate(img_bytes: bytes):
        if get_hash(img_bytes) in EXISTING_HASHES:
            return True
        return False

    def main():
        image_bytes = get_new_image()
        if is_duplicate(image_bytes):
            return

        # remember the new hash so the set keeps growing
        EXISTING_HASHES.add(get_hash(image_bytes))
        with open(file, 'wb') as f:
            f.write(image_bytes)
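
    A hedged sketch (not from the thread) of how the same pattern could be expressed with imagededup's documented encode_image call. This handles exact hash matches only, since set membership does not catch near duplicates, and get_new_image() above remains a placeholder:

    from io import BytesIO

    import numpy as np
    from PIL import Image
    from imagededup.methods import PHash

    phasher = PHash()
    EXISTING_HASHES: set = set()

    def is_duplicate(img_bytes: bytes) -> bool:
        # decode the raw bytes to an RGB array and compute its perceptual hash
        arr = np.array(Image.open(BytesIO(img_bytes)).convert('RGB'))
        img_hash = phasher.encode_image(image_array=arr)
        if img_hash in EXISTING_HASHES:
            return True
        EXISTING_HASHES.add(img_hash)
        return False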
            
    
    opened by sla-te 0
  • Error in the case of too many images

    Hello,

    I got a problem when I tried to find duplicate images in a dataset of 40,000 images (the error screenshots were attached to the issue).

    I have already tried the solution from https://github.com/idealo/imagededup/issues/95 but it didn't work.

    Do you know how to fix it? Thank you in advance.

    Best regards

    opened by hoangkhoiLE 0
Releases (v0.3.0)
  • v0.3.0(Oct 15, 2022)

    Installation fix

    • Make package installable by removing tensorflow as a dependency and replacing it with pytorch #173
    • Drop support for python 3.6 and python 3.7 #173

    ✨ New features and improvements

    • Use MobileNetv3 for generating CNN encodings #173
    • Introduce a 'recursive' option to generate encodings for images organized in a nested directory structure #104

    Breaking changes

    • Size of CNN encodings is 576 instead of 1024 #173
    • Since CNN encodings are generated using a different network, the robustness might be different; user might need to change similarity threshold settings #173
    • Hashes (all types) may be different from previous versions for a given image #173
    Source code(tar.gz)
    Source code(zip)
  • v0.2.4(Nov 23, 2020)

    🔴 Bug fixes

    • Fix broken cython brute force in Python 3.8 #117
    • Close figure after plotting to avoid figure overwrite #111
    • Allow encode_image method of cnn to accept 2d arrays #110
    • Relax dependencies and update packages #116, #107, #102, #119
    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Dec 11, 2019)

    ✨ New features and improvements

    • Switched to list comprehensions, which create lists on demand, instead of slower explicit for loops that call append in every iteration. #76
    • Used sets for membership tests
    • Used broadcasting instead of explicit for loops
    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Nov 3, 2019)

  • v0.2.0(Oct 30, 2019)

    ✨ New features and improvements

    • Added a Cython implementation for brute force search. This is now used as the default search_method on Linux and MacOS X. For Windows, we still use bktree as the default, as we are not sure that popcnt is supported #56
    • Expand supported image formats. Now it also supports: 'MPO', 'PPM', 'TIFF', 'GIF', 'SVG', 'PGM', 'PBM' #35

    🔴 Bug fixes

    • Relaxing the package dependencies #36
    • Removal of print statements #39
    • Fix type error when saving scores #55 & #61

    👥 Contributors

    Thanks to @jonatron, @orf, @DannyFeliz, @ImportTaste, @fridzema, @iozevo, @MomIsBestFriend, @YadunandanH for the pull requests and contributions.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Oct 8, 2019)

    This is the first release of imagededup.

    We added:

    • 🧮 Several hashing algorithms (PHash, DHash, WHash, AHash) and convolutional neural networks
    • 🔎 An evaluation framework to judge the quality of deduplication
    • 🖼 Easy plotting functionality of duplicates
    • ⚙️ Simple API

    Source code(tar.gz)
    Source code(zip)