Python library for invisible image watermarking (blind image watermarking)

Overview

invisible-watermark

invisible-watermark is a Python library and command line tool for embedding invisible watermarks in images (a.k.a. blind image watermarking, digital image watermarking). The algorithm does not rely on the original image for decoding.

Note that this library is still experimental and does not support GPU acceleration, so deploy it to production with care. The default method dwtDct (a variant of the frequency-domain methods) is fast enough for on-the-fly embedding; the other methods are too slow in a CPU-only environment.

Speed and accuracy notes

speed

  • The default embedding method dwtDct is fast and suitable for on-the-fly embedding
  • dwtDctSvd is 3x slower and rivaGan is 10x slower; for large images they are not suitable for on-the-fly embedding

accuracy

  • The algorithms cannot guarantee 100% accurate decoding of the original watermark, even when no attack is applied.
  • Known defects: tests show that none of the algorithms perform well on web page screenshots or posters with a homogeneous background color.

Supported Algorithms

  • dwtDct: DWT + DCT transform; embeds each watermark bit into the largest non-trivial coefficient of the block DCT coefficients (see the sketch after this list)

  • dwtDctSvd: DWT + DCT transform followed by an SVD of each block; embeds each watermark bit into the singular values

  • rivaGan: an encoder/decoder model with an attention mechanism that embeds the watermark bits as a vector
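
A minimal, illustrative sketch of the DWT + DCT idea behind the frequency methods, assuming a one-level Haar DWT, 4x4 blocks, a fixed mid-frequency coefficient position (the library targets the max non-trivial coefficient instead), and a quantization rule driven by scale; these choices are assumptions for illustration, not the library's exact implementation:

import cv2
import numpy as np
import pywt

def embed_bits_sketch(channel, bits, scale=36.0):
    """Embed one bit per 4x4 block of the DWT low-frequency band (illustrative only)."""
    ca, (h, v, d) = pywt.dwt2(channel.astype(np.float32), 'haar')
    block = 4
    idx = 0
    for r in range(ca.shape[0] // block):
        for c in range(ca.shape[1] // block):
            blk = np.ascontiguousarray(ca[r*block:(r+1)*block, c*block:(c+1)*block])
            coeffs = cv2.dct(blk)
            bit = bits[idx % len(bits)]
            # Quantize one mid-frequency coefficient so that its position within
            # the quantization step of width `scale` encodes the bit.
            coeffs[2, 2] = (np.floor(coeffs[2, 2] / scale) + (0.75 if bit else 0.25)) * scale
            ca[r*block:(r+1)*block, c*block:(c+1)*block] = cv2.idct(coeffs)
            idx += 1
    return pywt.idwt2((ca, (h, v, d)), 'haar')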

How to install

pip install invisible-watermark

Library API

Embed watermark

  • Example: embed a 4-character (32-bit) watermark
import cv2
from imwatermark import WatermarkEncoder

bgr = cv2.imread('test.png')
wm = 'test'

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', wm.encode('utf-8'))
bgr_encoded = encoder.encode(bgr, 'dwtDct')

cv2.imwrite('test_wm.png', bgr_encoded)

Decode watermark

  • Example: decode a 4-character (32-bit) watermark
import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')

decoder = WatermarkDecoder('bytes', 32)
watermark = decoder.decode(bgr, 'dwtDct')
print(watermark.decode('utf-8'))
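
Decoding is not guaranteed to be bit-exact (see the accuracy note above), so the recovered bytes may not be valid UTF-8, for example after the watermarked image has been re-saved as JPEG. A defensive variant of the decode example:

import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')

decoder = WatermarkDecoder('bytes', 32)
watermark = decoder.decode(bgr, 'dwtDct')

# A damaged watermark may not decode as UTF-8, so guard the conversion.
try:
    print(watermark.decode('utf-8'))
except UnicodeDecodeError:
    print('watermark not recovered cleanly:', watermark.hex())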

CLI Usage

embed watermark:  ./invisible-watermark -v -a encode -t bytes -m dwtDct -w 'hello' -o ./test_vectors/wm.png ./test_vectors/original.jpg

decode watermark: ./invisible-watermark -v -a decode -t bytes -m dwtDct -l 40 ./test_vectors/wm.png

positional arguments:
  input                 The path of input

optional arguments:
  -h, --help            show this help message and exit
  -a ACTION, --action ACTION
                        encode|decode (default: None)
  -t TYPE, --type TYPE  bytes|b16|bits|uuid|ipv4 (default: bits)
  -m METHOD, --method METHOD
                        dwtDct|dwtDctSvd|rivaGan (default: maxDct)
  -w WATERMARK, --watermark WATERMARK
                        embedded string (default: )
  -l LENGTH, --length LENGTH
                        watermark bits length, required for bytes|b16|bits
                        watermark (default: 0)
  -o OUTPUT, --output OUTPUT
                        The path of output (default: None)
  -v, --verbose         print info (default: False)
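
The -l value is the watermark length in bits; for a bytes watermark that is 8 times the byte length of the embedded string, which is why the decode command above passes -l 40 for the 5-byte string 'hello':

wm = 'hello'
print(len(wm.encode('utf-8')) * 8)  # 40, the value passed to -l above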

Test Result

For easier reading, all images on this page are compressed; the tests were run on the 1920x1080 originals.

The methods are not robust to resizing or to crops that change the aspect ratio, but they are robust to noise, color filters, brightness changes, and JPEG compression.

rivaGan outperforms the default method against crop attacks.

Only the default method is fast enough for on-the-fly embedding.

Input

  • Input image: 1920x1080 image
  • Watermark:
    • For the frequency methods: a 64-bit watermark, the string "qingquan"
    • For the rivaGan method: a 32-bit watermark, the string "qing"
  • Parameters: only the U channel is modified, to preserve image quality; scale=36

Attack Performance

Watermarked Image

(watermarked image: wm)

Attack             Image           Freq Method   RivaGan
JPG compress       wm_jpg          Pass          Pass
Noise              wm_noise        Pass          Pass
Brightness         wm_darken       Pass          Pass
Overlay            wm_overlay      Pass          Pass
Mask               wm_mask_large   Pass          Pass
Crop 7x5           wm_crop_7x5     Fail          Pass
Resize 50%         wm_resize_half  Fail          Fail
Rotate 30 degrees  wm_rotate       Fail          Fail
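
As a rough illustration of how a single attack row can be reproduced, the sketch below re-encodes the watermarked image as an in-memory JPEG with OpenCV and tries to decode the watermark again; the quality setting of 80 and the 32-bit 'bytes' watermark are assumptions, not the exact test settings:

import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')

# Simulate the JPG compress attack: round-trip through an in-memory JPEG.
ok, buf = cv2.imencode('.jpg', bgr, [int(cv2.IMWRITE_JPEG_QUALITY), 80])
attacked = cv2.imdecode(buf, cv2.IMREAD_COLOR)

decoder = WatermarkDecoder('bytes', 32)
print(decoder.decode(attacked, 'dwtDct'))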

Running Speed (CPU Only)

Image      Method     Encoding     Decoding
1920x1080  dwtDct     300-350 ms   150-200 ms
1920x1080  dwtDctSvd  1.5-2 s      ~1 s
1920x1080  rivaGan    ~5 s         4-5 s
600x600    dwtDct     70 ms        60 ms
600x600    dwtDctSvd  185 ms       320 ms
600x600    rivaGan    1 s          600 ms
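
Timings like those above can be reproduced with a simple wall-clock measurement around the library API shown earlier; the file name and the 64-bit "qingquan" watermark mirror the test input, and exact numbers will vary by machine:

import time

import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread('test.png')  # e.g. the 1920x1080 test image

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', 'qingquan'.encode('utf-8'))  # 64-bit watermark

t0 = time.time()
bgr_encoded = encoder.encode(bgr, 'dwtDct')
print('encode time ms:', (time.time() - t0) * 1000)

decoder = WatermarkDecoder('bytes', 64)
t0 = time.time()
recovered = decoder.decode(bgr_encoded, 'dwtDct')
print('decode time ms:', (time.time() - t0) * 1000)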

RivaGAN Experimental

In the future, we will deliver the 64-bit rivaGan model and test its performance in a GPU environment.

Detail: https://github.com/DAI-Lab/RivaGAN

Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, Kalyan Veeramachaneni. Robust Invisible Video Watermarking with Attention. MIT EECS, September 2019. [PDF]

You might also like...
[CVPR 2021] Unsupervised Degradation Representation Learning for Blind Super-Resolution
[CVPR 2021] Unsupervised Degradation Representation Learning for Blind Super-Resolution

DASR Pytorch implementation of "Unsupervised Degradation Representation Learning for Blind Super-Resolution", CVPR 2021 [arXiv] Overview Requirements

[NeurIPS 2020] Blind Video Temporal Consistency via Deep Video Prior
[NeurIPS 2020] Blind Video Temporal Consistency via Deep Video Prior

pytorch-deep-video-prior (DVP) Official PyTorch implementation for NeurIPS 2020 paper: Blind Video Temporal Consistency via Deep Video Prior TensorFlo

DAN: Unfolding the Alternating Optimization for Blind Super Resolution

DAN-Basd-on-Openmmlab DAN: Unfolding the Alternating Optimization for Blind Super Resolution We reproduce DAN via mmediting based on open-sourced code

Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

Real-ESRGAN Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data Ported from https://github.com/xinntao/Real-ESRGAN Depend

Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021)
Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021)

Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021)

Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021)
Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021)

Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021) Jiaxi Jiang, Kai Zhang, Radu Timofte Computer Vision Lab, ETH Zurich, Switzerland 🔥

Implementation of temporal pooling methods studied in [ICIP'20] A Comparative Evaluation Of Temporal Pooling Methods For Blind Video Quality Assessment

Implementation of temporal pooling methods studied in [ICIP'20] A Comparative Evaluation Of Temporal Pooling Methods For Blind Video Quality Assessment

Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis
Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis

Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis [Paper] [Online Demo] The following results are obtained by our SCUNet with purely syn

Fast image augmentation library and easy to use wrapper around other libraries. Documentation:  https://albumentations.ai/docs/ Paper about library: https://www.mdpi.com/2078-2489/11/2/125
Fast image augmentation library and easy to use wrapper around other libraries. Documentation: https://albumentations.ai/docs/ Paper about library: https://www.mdpi.com/2078-2489/11/2/125

Albumentations Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to inc

Comments
  • Potential performance issues

    When using an image larger than 1 MB, performance degrades quickly. With a 1920 × 1080 image performance is fine, but with a 9504 × 6336 image inside a container with 20 GB of RAM, the Flask service we put on top of the library crashes after about 40 minutes because the container runs out of memory. Is there a way to improve performance in this respect?

    opened by luca-simonetti
  • CLI decode doesn't work if the output image is JPG

    I'm trying to use the CLI and a Python script to generate a watermarked JPG, but the watermark decode doesn't show anything:

    C:\Users\me\AppData\Local\Programs\Python\Python310\Scripts>py invisible-watermark "F:\JPEG\_DSC5341.jpg" -v -a encode -t bytes -m dwtDct -w '1234' -o "F:\JPEG\_DSC5341-w.jpg"
    watermark length: 48
    encode time ms: 2819.3318843841553
    
    C:\Users\me\AppData\Local\Programs\Python\Python310\Scripts>py invisible-watermark "F:\JPEG\_DSC5341-w.jpg" -v -a decode -t bytes -m dwtDct -l 48
    decode time ms: 1944.9546337127686
    

    It's like there is no watermark embedded in it, unless I use a PNG as output. I posted the images I'm using for test purposes.

    (attached images: test, test_wm)

    opened by TheNemus
  • Example code not working

    Encoded the image, then decoding returned nothing, not "test" as expected.

    Edit: tried with a different PNG image:

    Traceback (most recent call last):
      File "/home/me/whisper/decode.py", line 8, in <module>
        print(watermark.decode('utf-8'))
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

    opened by ClashSAN
  • How does this work?

    I think a quick blurb about how the watermarks implemented by this package work would be helpful. Is it the pixel rounding that I can read about here? https://invisiblewatermark.net/how-invisible-watermarks-work.html

    opened by kevinlinxc