Python library for invisible image watermarking (blind image watermark)

Overview

invisible-watermark


invisible-watermark is a Python library and command line tool for embedding invisible watermarks in images (a.k.a. blind image watermarking, digital image watermarking). Decoding does not rely on the original image.

Note that this library is still experimental and does not support GPU acceleration, so deploy it to production with care. The default method dwtDct (a frequency-domain method) is fast enough for on-the-fly embedding; the other methods are too slow in a CPU-only environment.

supported algorithms

speed

  • the default embedding method dwtDct is fast and suitable for on-the-fly embedding
  • dwtDctSvd is about 3x slower and rivaGan about 10x slower; for large images they are not suitable for on-the-fly embedding

accuracy

  • The algorithms cannot guarantee to decode the original watermark with 100% accuracy, even when no attack is applied.
  • Known defect: tests show that none of the algorithms perform well on web page screenshots or posters with a homogeneous background color.

Supported Algorithms

  • dwtDct: DWT + DCT transform; the watermark bit is embedded into the largest non-trivial coefficient of each block's DCT coefficients

  • dwtDctSvd: DWT + DCT transform, followed by an SVD decomposition of each block; the watermark bit is embedded into the singular values

  • rivaGan: an encoder/decoder model with an attention mechanism; the watermark bits are embedded as a vector.
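
To make the frequency-domain idea concrete, below is a minimal, hypothetical sketch of embedding bits with a DWT followed by a blockwise DCT. It is not the library's actual implementation (the real coefficient selection, channel handling, and scaling differ); the function name embed_bits_sketch and the parity-quantization scheme are made up for illustration, and it assumes numpy, PyWavelets (pywt) and opencv-python are available.

# Illustrative sketch only -- NOT the algorithm used by invisible-watermark.
import numpy as np
import pywt
import cv2

def embed_bits_sketch(channel, bits, scale=36):
    """channel: single-channel uint8 image; bits: list of 0/1 values."""
    # one-level Haar DWT, then embed one bit per 4x4 block of the LL subband
    ll, (lh, hl, hh) = pywt.dwt2(channel.astype(np.float32), 'haar')
    h, w = ll.shape
    i = 0
    for row in range(0, h - 3, 4):
        for col in range(0, w - 3, 4):
            if i >= len(bits):
                break
            block = cv2.dct(ll[row:row + 4, col:col + 4].copy())
            # quantize one mid-frequency coefficient so its quantization-level parity carries the bit
            q = int(np.round(block[2, 2] / scale))
            if q % 2 != bits[i]:
                q += 1
            block[2, 2] = q * scale
            ll[row:row + 4, col:col + 4] = cv2.idct(block)
            i += 1
    return np.clip(pywt.idwt2((ll, (lh, hl, hh)), 'haar'), 0, 255).astype(np.uint8)

A matching decoder sketch would repeat the same DWT/DCT steps and read the parity of round(coefficient / scale) in each block.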


How to install

pip install invisible-watermark

Library API

Embed watermark

  • example: embed a 4-character (32-bit) watermark
import cv2
from imwatermark import WatermarkEncoder

bgr = cv2.imread('test.png')   # cover image, loaded as BGR by OpenCV
wm = 'test'                    # 4 ASCII characters = 32 bits

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', wm.encode('utf-8'))
bgr_encoded = encoder.encode(bgr, 'dwtDct')

cv2.imwrite('test_wm.png', bgr_encoded)   # save the watermarked image as PNG
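
The CLI section below also lists other watermark types (bits, b16, uuid, ipv4). Assuming the library accepts the same type strings through set_watermark (not shown in this README, so treat it as an unverified sketch), embedding a raw bit sequence might look like this:

import cv2
from imwatermark import WatermarkEncoder

bgr = cv2.imread('test.png')
bits = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit payload

encoder = WatermarkEncoder()
encoder.set_watermark('bits', bits)   # assumption: 'bits' mirrors the CLI's -t bits type
bgr_encoded = encoder.encode(bgr, 'dwtDct')
cv2.imwrite('test_wm_bits.png', bgr_encoded)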

Decode watermark

  • example: decode a 4-character (32-bit) watermark
import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')   # the watermarked image

decoder = WatermarkDecoder('bytes', 32)   # length in bits: 4 bytes x 8 = 32
watermark = decoder.decode(bgr, 'dwtDct')
print(watermark.decode('utf-8'))

CLI Usage

embed watermark:  ./invisible-watermark -v -a encode -t bytes -m dwtDct -w 'hello' -o ./test_vectors/wm.png ./test_vectors/original.jpg

decode watermark: ./invisible-watermark -v -a decode -t bytes -m dwtDct -l 40 ./test_vectors/wm.png

positional arguments:
  input                 The path of input

optional arguments:
  -h, --help            show this help message and exit
  -a ACTION, --action ACTION
                        encode|decode (default: None)
  -t TYPE, --type TYPE  bytes|b16|bits|uuid|ipv4 (default: bits)
  -m METHOD, --method METHOD
                        dwtDct|dwtDctSvd|rivaGan (default: maxDct)
  -w WATERMARK, --watermark WATERMARK
                        embedded string (default: )
  -l LENGTH, --length LENGTH
                        watermark bits length, required for bytes|b16|bits
                        watermark (default: 0)
  -o OUTPUT, --output OUTPUT
                        The path of output (default: None)
  -v, --verbose         print info (default: False)
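
Note that -l 40 in the decode example matches the 5-byte string 'hello' (5 x 8 = 40 bits). The same watermark can also be read back through the library API; the sketch below assumes ./test_vectors/wm.png was produced by the encode command above:

import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('./test_vectors/wm.png')
decoder = WatermarkDecoder('bytes', 40)   # 'hello' = 5 bytes = 40 bits
print(decoder.decode(bgr, 'dwtDct').decode('utf-8'))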

Test Result

For readability, all images on this page are compressed; the tests were run on the 1920x1080 original image.

The methods are not robust to resizing or to crops that change the aspect ratio, but they are robust to noise, color filters, brightness changes, and JPEG compression.

rivaGan outperforms the default method against crop attacks.

Only the default method is fast enough for on-the-fly embedding.

Input

  • Input image: a 1920x1080 image
  • Watermark:
    • For the frequency methods, we use 64 bits (the string "qingquan")
    • For the rivaGan method, we use 32 bits (the string "qing")
  • Parameters: only the U channel is modified to preserve image quality; scale=36

Attack Performance

Watermarked image: wm

Attack             Image           Freq Method   RivaGan
JPG Compress       wm_jpg          Pass          Pass
Noise              wm_noise        Pass          Pass
Brightness         wm_darken       Pass          Pass
Overlay            wm_overlay      Pass          Pass
Mask               wm_mask_large   Pass          Pass
Crop 7x5           wm_crop_7x5     Fail          Pass
Resize 50%         wm_resize_half  Fail          Fail
Rotate 30 degrees  wm_rotate       Fail          Fail
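
As a rough way to reproduce the "JPG Compress" row above, one can re-encode the watermarked image and try to decode it again. The sketch below uses only the WatermarkDecoder API shown earlier and assumes test_wm.png was produced by the embedding example; quality 80 is an arbitrary choice for the attack:

import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')

# simulate a JPEG-compression attack by re-encoding at quality 80
ok, buf = cv2.imencode('.jpg', bgr, [cv2.IMWRITE_JPEG_QUALITY, 80])
attacked = cv2.imdecode(buf, cv2.IMREAD_COLOR)

decoder = WatermarkDecoder('bytes', 32)
recovered = decoder.decode(attacked, 'dwtDct')
print(recovered.decode('utf-8', errors='replace'))  # expect 'test' if the watermark survived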

Running Speed (CPU Only)

Image      Method      Encoding     Decoding
1920x1080  dwtDct      300-350 ms   150-200 ms
1920x1080  dwtDctSvd   1.5-2 s      ~1 s
1920x1080  rivaGan     ~5 s         4-5 s
600x600    dwtDct      70 ms        60 ms
600x600    dwtDctSvd   185 ms       320 ms
600x600    rivaGan     1 s          600 ms

RivaGAN Experimental

Next, we plan to deliver a 64-bit rivaGan model and test its performance in a GPU environment.

Detail: https://github.com/DAI-Lab/RivaGAN

Zhang, Kevin Alex and Xu, Lei and Cuesta-Infante, Alfredo and Veeramachaneni, Kalyan. Robust Invisible Video Watermarking with Attention. MIT EECS, September 2019.[PDF]


Comments
  • Potentially performance issues

    When using an image larger than 1 MB, performance degrades quickly. What we observe is that with a 1920x1080 image performance is great, but with a 9504x6336 image inside a container with 20 GB of RAM, the Flask service we put on top of the library crashes after ~40 minutes because the container goes OOM. Is there a way to improve performance in this sense?

    opened by luca-simonetti
  • CLI decode doesn't work if output image is JPG

    I'm trying to use the CLI and a Python script to generate a watermarked JPG, but the watermark decode doesn't show anything:

    C:\Users\me\AppData\Local\Programs\Python\Python310\Scripts>py invisible-watermark "F:\JPEG\_DSC5341.jpg" -v -a encode -t bytes -m dwtDct -w '1234' -o "F:\JPEG\_DSC5341-w.jpg"
    watermark length: 48
    encode time ms: 2819.3318843841553

    C:\Users\me\AppData\Local\Programs\Python\Python310\Scripts>py invisible-watermark "F:\JPEG\_DSC5341-w.jpg" -v -a decode -t bytes -m dwtDct -l 48
    decode time ms: 1944.9546337127686

    It's as if no watermark was embedded in it, unless I use a PNG as output. I posted the images I'm using for test purposes.

    opened by TheNemus
  • Example code not working

    Encoded the image, then decoding returned nothing, not "test" as expected.

    Edit: tried with a different PNG image:

    Traceback (most recent call last):
      File "/home/me/whisper/decode.py", line 8, in <module>
        print(watermark.decode('utf-8'))
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

    opened by ClashSAN
  • How does this work?

    I think a quick blurb about how the watermarks implemented by this package work would be helpful. Is it the pixel rounding that I can read about here? https://invisiblewatermark.net/how-invisible-watermarks-work.html

    opened by kevinlinxc