Old Photo Restoration (Official PyTorch Implementation)

Overview

Project Page | Paper (CVPR version) | Paper (Journal version) | Pretrained Model | Colab Demo 🔥

Bringing Old Photos Back to Life, CVPR2020 (Oral)

Old Photo Restoration via Deep Latent Space Translation, PAMI Under Review

Ziyu Wan¹, Bo Zhang², Dongdong Chen³, Pan Zhang⁴, Dong Chen², Jing Liao¹, Fang Wen²
¹City University of Hong Kong, ²Microsoft Research Asia, ³Microsoft Cloud AI, ⁴USTC

Notes of this project

The code originates from our research project, and its purpose is to demonstrate the research idea, so we have not optimized it from a product standpoint. We will spend time addressing common issues such as out-of-memory errors and limited resolution, but will not dive deeply into engineering problems such as inference speedup or FastAPI deployment. We welcome volunteers to contribute to this project to make it more usable for practical applications.

New

You can now play with our Colab and try it on your photos.

Requirement

The code is tested on Ubuntu with Nvidia GPUs and CUDA installed. Python>=3.6 is required to run the code.

Installation

Clone the Synchronized-BatchNorm-PyTorch repository into both the face enhancement and global restoration modules:

cd Face_Enhancement/models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../../
cd Global/detection_models
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../

Download the landmark detection pretrained model

cd Face_Detection/
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
cd ../

Download the pretrained models from Azure Blob Storage: put the file Face_Enhancement/checkpoints.zip under ./Face_Enhancement and the file Global/checkpoints.zip under ./Global, then unzip each of them.

cd Face_Enhancement/
wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
unzip checkpoints.zip
cd ../
cd Global/
wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
unzip checkpoints.zip
cd ../

Install dependencies:

pip install -r requirements.txt

How to use?

Note: --GPU accepts a single device (0), multiple devices (0,1,2 or 0,2), or -1 to run on the CPU.

1) Full Pipeline

After installation and downloading the pretrained models, you can restore old photos with a single command.

For images without scratches:

python run.py --input_folder [test_image_folder_path] \
              --output_folder [output_path] \
              --GPU 0

For scratched images:

python run.py --input_folder [test_image_folder_path] \
              --output_folder [output_path] \
              --GPU 0 \
              --with_scratch

Note: Please use absolute paths. The final results are saved in ./output_path/final_output/; the intermediate results of each stage can also be found in output_path.
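
For reference, here is a minimal sketch of how you might inspect those per-stage results programmatically. The stage folder names below are assumptions taken from the pipeline's log output (they also appear in the issue reports further down) and may differ between versions, so check your own output folder.

from pathlib import Path

# Hypothetical helper: summarize what each stage of run.py produced.
# The stage folder names are assumptions based on the pipeline's logs.
STAGE_DIRS = [
    "stage_1_restore_output",
    "stage_2_detection_output",
    "stage_3_face_output",
    "final_output",
]

def list_stage_outputs(output_folder):
    root = Path(output_folder)
    for stage in STAGE_DIRS:
        stage_dir = root / stage
        if not stage_dir.is_dir():
            print(f"{stage}: not found (the stage may have been skipped)")
            continue
        files = [p for p in stage_dir.rglob("*") if p.is_file()]
        print(f"{stage}: {len(files)} file(s)")

list_stage_outputs("/absolute/path/to/output_path")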

2) Scratch Detection

Currently we do not plan to release the labeled scratched-photo dataset directly. If you want paired data, you can run our pretrained detection model on your own collected images to obtain the scratch labels (see the sketch after the command below).

cd Global/
python detection.py --test_path [test_image_folder_path] \
                    --output_dir [output_path] \
                    --input_size [resize_256|full_size|scale_256]
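
Once the masks are written to the output directory, you can pair them with your input photos to build a labeled set. This is a minimal sketch that assumes, hypothetically, that each predicted mask is a PNG named after its input image's stem; adjust the matching rule to whatever layout detection.py actually produces on your machine.

from pathlib import Path

def collect_pairs(image_dir, mask_dir):
    """Pair each collected photo with a scratch mask predicted by detection.py.

    Assumes (hypothetically) that masks are PNGs sharing the input image's stem;
    verify against the real contents of your --output_dir before training on it.
    """
    masks = {p.stem: p for p in Path(mask_dir).rglob("*.png")}
    pairs = []
    for img in sorted(Path(image_dir).iterdir()):
        if img.suffix.lower() not in {".jpg", ".jpeg", ".png", ".tif"}:
            continue
        mask = masks.get(img.stem)
        if mask is not None:
            pairs.append((img, mask))
    return pairs

pairs = collect_pairs("/absolute/path/to/test_images", "/absolute/path/to/detection_output")
print(f"collected {len(pairs)} (photo, scratch-mask) pairs")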

3) Global Restoration

A triplet domain translation network is proposed to handle both the structured degradation (e.g., scratches) and the unstructured degradation of old photos; a conceptual sketch of the latent-space translation follows the commands below.

cd Global/
python test.py --Scratch_and_Quality_restore \
               --test_input [test_image_folder_path] \
               --test_mask [corresponding mask] \
               --outputs_dir [output_path]

python test.py --Quality_restore \
               --test_input [test_image_folder_path] \
               --outputs_dir [output_path]
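
Conceptually, restoration encodes the degraded photo into a latent space, translates that latent code toward the latent space of clean photos, and decodes it. The sketch below only illustrates this idea with tiny placeholder modules (TinyEncoder, TinyDecoder, latent_mapping are all stand-ins invented here); the actual networks and losses live under ./Global and are described in the paper.

import torch
import torch.nn as nn

# Placeholder modules standing in for the two VAEs and the latent mapping
# network of the triplet domain translation idea -- NOT the released model.
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

encode_degraded = TinyEncoder()        # encoder of the VAE trained on old/synthetic photos
latent_mapping = nn.Conv2d(32, 32, 1)  # stand-in for the learned latent translation network
decode_clean = TinyDecoder()           # decoder of the VAE trained on clean photos

old_photo = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    z_degraded = encode_degraded(old_photo)  # project the degraded photo into latent space
    z_clean = latent_mapping(z_degraded)     # translate toward the clean-photo latent space
    restored = decode_clean(z_clean)         # decode a restored image
print(restored.shape)  # torch.Size([1, 3, 256, 256])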

4) Face Enhancement

We use a progressive generator to refine the face regions of old photos. More details can be found in our journal submission and in the ./Face_Enhancement folder.

NOTE: This repo is mainly for research purposes and we have not yet optimized its running performance.

Since the model is pretrained on 256×256 images, it may not work ideally at arbitrary resolutions.
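
For intuition, here is a rough sketch of the kind of landmark-based face crop this stage relies on, using the dlib predictor downloaded in the installation step. The repository's own Face_Detection scripts perform the real alignment and warping; the input and output file names here are placeholders.

import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("old_photo.jpg")  # placeholder input file
for i, det in enumerate(detector(img, 1)):  # upsample once while detecting
    shape = predictor(img, det)
    pts = np.array([(shape.part(k).x, shape.part(k).y) for k in range(68)])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    pad = int(0.3 * max(x1 - x0, y1 - y0))  # loose margin around the landmarks
    crop = img[max(0, y0 - pad):y1 + pad, max(0, x0 - pad):x1 + pad]
    Image.fromarray(crop).resize((256, 256)).save(f"face_{i}_256.png")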

To Do

  • Clean testing code
  • Release pretrained model
  • Colab demo
  • Replace face detection module (dlib) with RetinaFace
  • Release training code

Citation

If you find our work useful for your research, please consider citing the following papers :)

@inproceedings{wan2020bringing,
  title={Bringing Old Photos Back to Life},
  author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2747--2757},
  year={2020}
}

@misc{2009.07047,
  Author = {Ziyu Wan and Bo Zhang and Dongdong Chen and Pan Zhang and Dong Chen and Jing Liao and Fang Wen},
  Title = {Old Photo Restoration via Deep Latent Space Translation},
  Year = {2020},
  Eprint = {arXiv:2009.07047},
}

If you are also interested in legacy photo/video colorization, please refer to this work.

Maintenance

This project is currently maintained by Ziyu Wan and is for academic research use only. If you have any questions, feel free to contact [email protected].

License

The code and the pretrained models in this repository are released under the MIT license, as specified in the LICENSE file. We use our own labeled dataset to train the scratch detection model.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Comments
  • RuntimeError: CUDA out of memory.

    I get the following error: RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 5.80 GiB total capacity; 4.14 GiB already allocated; 154.56 MiB free; 4.24 GiB reserved in total by PyTorch)

    Is there a way to allocate more memory? I do not get why 4.14Gb are already allocated.

    opened by grenaud 21
  • Software installation/use tutorial

    Could someone insert an installation and use tutorial for Windows 10? I have installed Python, Anaconda, PyThorc and Nvidia Cuda, but I didn't understand how to install and use this artificial intelligence software for restoring old photos (sorry if my English is bad but I'm Italian and I'm using Google Translate)

    opened by m2asp 20
  • My reimplementation of training

    Following the journal version of the paper, I wrote PyTorch pseudocode for the training:

        # build 2 vae network, 3 discriminators but NO transfer network for now
        # and their optimizer
        vae1, xr_recon_d, z_xr_d, \
            vae2, y_recon_d, \
            optimizer_vae1, optimizer_d1, \
            optimizer_vae2, optimizer_d2 = build_model(opt)
        start_iter = 0
        if opt.load_checkpoint_iter>0:
            checkpoint_path = checkpoint_root + f'/global_checkpoint_{opt.load_checkpoint_iter}.pth'
            if not Path(checkpoint_path).exists():
                print(f"ERROR! checkpoint_path {checkpoint_path} is None")
                exit(-1)
            state_dict = torch.load(checkpoint_path)
            start_iter = state_dict['iter']
            assert state_dict['batch_size'] == opt.batch_size, f"ERROR - batch size changed! load: {state_dict['batch_size']}, but now {opt.batch_size}"
            vae1.load_state_dict(state_dict['vae1'])
            xr_recon_d.load_state_dict(state_dict['xr_recon_d'])
            z_xr_d.load_state_dict(state_dict['z_xr_d'])
            vae2.load_state_dict(state_dict['vae2'])
            y_recon_d.load_state_dict(state_dict['y_recon_d'])
            optimizer_vae1.load_state_dict(state_dict['optimizer_vae1'])
            optimizer_d1.load_state_dict(state_dict['optimizer_d1'])
            optimizer_vae2.load_state_dict(state_dict['optimizer_vae2']) 
            optimizer_d2.load_state_dict(state_dict['optimizer_d2']) 
            print("checkpoint load successfully!")
        # create dataloader
        dataLoaderR, dataLoaderXY = get_dataloader(opt)
        dataLoaderXY_iter = iter(dataLoaderXY)
        dataLoaderR_iter = iter(dataLoaderR)
        start = time.perf_counter()
        print("train start!")
        for ii in range(opt.total_iter - start_iter):
            current_iter = ii + start_iter
            try:
                x, y, path_y = next(dataLoaderXY_iter)
            except StopIteration:
                dataLoaderXY_iter = iter(dataLoaderXY)
                x, y, path_y = next(dataLoaderXY_iter)
            try:
                r, path_r = next(dataLoaderR_iter)
            except StopIteration:
                dataLoaderR_iter = iter(dataLoaderR)
                r, path_r = next(dataLoaderR_iter)
            ### following the practice in U-GAT-IT:
            ### train D and G iteratively, but not training D multiple times than training G
            r = r.to(opt.device)
            x = x.to(opt.device)
            y = y.to(opt.device)
            if opt.debug and current_iter%500==0:
                torchvision.utils.save_image(y, 'train_vae_y.png', normalize=True)
                torchvision.utils.save_image(x, 'train_vae_x.png', normalize=True)
                torchvision.utils.save_image(r, 'train_vae_r.png', normalize=True)
    
            ### vae1 train d
            # save gpu memory since no need calc grad for net G when train net D
            with torch.no_grad():
                z_x, mean_x, var_x, recon_x = vae1(x)
                z_r, mean_r, var_r, recon_r = vae1(r)
                batch_requires_grad(z_x, mean_x, var_x, recon_x,
                                    z_r, mean_r, var_r, recon_r)
            loss_1 = 0
            adv_loss_d_x = lsgan_d(xr_recon_d(x), xr_recon_d(recon_x))
            adv_loss_d_r = lsgan_d(xr_recon_d(r), xr_recon_d(recon_r))
            # z_x is real and z_r is fake here because let z_r close to z_x
            adv_loss_d_xr = lsgan_d(z_xr_d(z_x), z_xr_d(z_r))
            loss_1_d = adv_loss_d_x + adv_loss_d_r + adv_loss_d_xr
            loss_1_d.backward()
            optimizer_d1.step()
            optimizer_d1.zero_grad()
            ### vae1 train g
            # since we need update weights of G, the result should be re-calculate with grad
            z_x, mean_x, var_x, recon_x = vae1(x)
            z_r, mean_r, var_r, recon_r = vae1(r)
            adv_loss_g_x = lsgan_g(xr_recon_d(recon_x))
            adv_loss_g_r = lsgan_g(xr_recon_d(recon_r))
            # z_x is real and z_r is fake here because let z_r close to z_x
            adv_loss_g_xr = lsgan_g(z_xr_d(z_r))
            KLDloss_1_x = -0.5 * torch.sum(1 + var_x - mean_x.pow(2) - var_x.exp())  # KLD
            L1loss_1_x  = opt.weight_alpha * F.l1_loss(x, recon_x)
            KLDloss_1_r = -0.5 * torch.sum(1 + var_r - mean_r.pow(2) - var_r.exp())  # KLD
            L1loss_1_r  = opt.weight_alpha * F.l1_loss(r, recon_r)
            loss_1_g = adv_loss_g_x + KLDloss_1_x + L1loss_1_x \
                     + adv_loss_g_r + KLDloss_1_r + L1loss_1_r \
                     + adv_loss_g_xr
            loss_1_g.backward()
            optimizer_vae1.step()
            optimizer_vae1.zero_grad()
    
            ### vae2 train d
            # save gpu memory since no need calc grad for net G when train net D
            with torch.no_grad():
                z_y, mean_y, var_y, recon_y = vae2(y)
                batch_requires_grad(z_y, mean_y, var_y, recon_y)
            adv_loss_d_y = lsgan_d(y_recon_d(y), y_recon_d(recon_y))
            loss_2_d = adv_loss_d_y
            loss_2_d.backward()
            optimizer_d2.step()
            optimizer_d2.zero_grad()
            ### vae2 train g
            # since we need update weights of G, the result should be re-calculate with grad
            z_y, mean_y, var_y, recon_y = vae2(y)
            adv_loss_g_y = lsgan_g(y_recon_d(recon_y))
            KLDloss_1_y = -0.5 * torch.sum(1 + var_y - mean_y.pow(2) - var_y.exp())  # KLD
            L1loss_1_y  = opt.weight_alpha * F.l1_loss(y, recon_y)
            loss_2_g = adv_loss_g_y + KLDloss_1_y + L1loss_1_y
            loss_2_g.backward()
            optimizer_vae2.step()
            optimizer_vae2.zero_grad()
            # debug
            if opt.debug and current_iter%500==0:
                # [print(k, 'channel 0:\n', v[0][0]) for k,v in list(model.named_parameters()) if k in ["netG_A.encoder.13.conv_block.5.weight", "netG_A.decoder.4.conv_block.5.weight"]]
                torchvision.utils.save_image(recon_x, 'train_vae_recon_x.png', normalize=True)
                torchvision.utils.save_image(recon_r, 'train_vae_recon_r.png', normalize=True)
                torchvision.utils.save_image(recon_y, 'train_vae_recon_y.png', normalize=True)
           
            if current_iter%500==0:
                print(f"""STEP {current_iter:06d} {time.perf_counter() - start:.1f} s
                loss_1_d = adv_loss_d_x + adv_loss_d_r + adv_loss_d_xr
                {loss_1_d:.3f} = {adv_loss_d_x:.3f} + {adv_loss_d_r:.3f} + {adv_loss_d_xr:.3f}
                loss_1_g = adv_loss_g_x + KLDloss_1_x + L1loss_1_x \
                     + adv_loss_g_r + KLDloss_1_r + L1loss_1_r \
                     + adv_loss_g_xr
                {loss_1_g:.3f} = {adv_loss_g_x:.3f} + {KLDloss_1_x:.3f} + {L1loss_1_x:.3f} \
                     + {adv_loss_g_r:.3f} + {KLDloss_1_r:.3f} + {L1loss_1_r:.3f} \
                     + {adv_loss_g_xr:.3f}
                """)
            if (current_iter+1)%2000==0:
                # finish the current_iter-th step, e.g. finish iter0, save as 1, resume train from iter 1
                state = {
                    'iter': current_iter,
                    'batch_size': opt.batch_size,
                    #
                    'vae1': vae1.state_dict(),
                    'xr_recon_d': xr_recon_d.state_dict(),
                    'z_xr_d': z_xr_d.state_dict(),
                    #
                    'vae2': vae2.state_dict(),
                    'y_recon_d': y_recon_d.state_dict(),
                    #
                    'optimizer_vae1': optimizer_vae1.state_dict(),
                    'optimizer_d1': optimizer_d1.state_dict(),
                    'optimizer_vae2': optimizer_vae2.state_dict(),
                    'optimizer_d2': optimizer_d2.state_dict(),
                    }
                torch.save(state, checkpoint_root + f'/global_checkpoint_{current_iter}.pth')
        print("global", time.perf_counter() - start, ' s')
    

    where the lsgan_d and lsgan_g is defined as following:

    import torch
    import torch.nn.functional as F
    ### lsgan: a=0, b=c=1
    def lsgan_d(d_logit_real, d_logit_fake):
        return F.mse_loss(d_logit_real, torch.ones_like(d_logit_real)) + d_logit_fake.pow(2).mean()
    
    def lsgan_g(d_logit_fake):
        return F.mse_loss(d_logit_fake, torch.ones_like(d_logit_fake))
    
    opened by vegetable09 13
  • cuda requirement

    Is it possible to run this on a (recent) Mac, which does not support CUDA? I would have guessed setting --GPU 0 would not attempt to call CUDA, but it fails.

    File "/Users/../Desktop/bopbtl/venv/lib/python3.7/site-packages/torch/cuda/__init__.py", line 61, in _check_driver
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    
    good first issue 
    opened by bpops 13
  • Make frontend for the project

    Make an interactive frontend for the project Bringing Old Photos Back to Life, where the user can upload a photo or capture one with the camera, then download the resulting image and share it.

    opened by Aayush-hub 11
  • [Feature] Added image filter script

    Fixes: #125. Added a script to apply filters to the output image; currently it converts the output to black and white. More scripts to beautify images according to the user's choice will be added. This feature can help make old images look better.

    Sample Video:

    https://user-images.githubusercontent.com/65889104/112759421-875fa180-9010-11eb-87bd-66cd79222127.mp4

    opened by Aayush-hub 11
  • cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

    Running Stage 1: Overall restoration
    Now you are processing 5d6bfe49ge1b23ec32641&690.jpg
    Skip 5d6bfe49ge1b23ec32641&690.jpg due to an error: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
    You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.

        import torch
        torch.backends.cuda.matmul.allow_tf32 = True
        torch.backends.cudnn.benchmark = True
        torch.backends.cudnn.deterministic = False
        torch.backends.cudnn.allow_tf32 = True
        data = torch.randn([1, 3, 470, 446], dtype=torch.float, device='cuda', requires_grad=True)
        net = torch.nn.Conv2d(3, 64, kernel_size=[7, 7], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
        net = net.cuda().float()
        out = net(data)
        out.backward(torch.randn_like(out))
        torch.cuda.synchronize()

    ConvolutionParams: data_type = CUDNN_DATA_FLOAT, padding = [0, 0, 0], stride = [1, 1, 0], dilation = [1, 1, 0], groups = 1, deterministic = false, allow_tf32 = true
    input: TensorDescriptor 0x54bd8730, type = CUDNN_DATA_FLOAT, nbDims = 4, dimA = 1, 3, 470, 446, strideA = 628860, 209620, 446, 1
    output: TensorDescriptor 0x54bda2f0, type = CUDNN_DATA_FLOAT, nbDims = 4, dimA = 1, 64, 464, 440, strideA = 13066240, 204160, 440, 1
    weight: FilterDescriptor 0x3b058e0, type = CUDNN_DATA_FLOAT, tensor_format = CUDNN_TENSOR_NCHW, nbDims = 4, dimA = 64, 3, 7, 7
    Pointer addresses: input: 0x60b80c400, output: 0x60be60000, weight: 0x601660000

    Skipping non-file final_output
    Skipping non-file stage_1_restore_output
    Skipping non-file stage_2_detection_output
    Skipping non-file stage_3_face_output
    Now you are processing timg.jpg
    Skip timg.jpg due to an error: CUDA error: an illegal memory access was encountered
    Finish Stage 1 ...

    Is this because of insufficient video memory?

    Is this because of insufficient video memory?

    opened by wang7143 11
  • No such file or directory: output/stage_1_restore_output/restored_image

    Need to manually create the restored_image dir otherwise the following failure occurs:

    $ python run.py --input_folder ~/Documents/revive-old-photos/input --output ~/Documents/revive-old-photos/output --GPU 0
    Running Stage 1: Overall restoration
    Traceback (most recent call last):
      File "test.py", line 6, in <module>
        from torch.autograd import Variable
    ImportError: No module named torch.autograd
    Traceback (most recent call last):
      File "run.py", line 79, in <module>
        for x in os.listdir(stage_1_results):
    FileNotFoundError: [Errno 2] No such file or directory: '~/Documents/revive-old-photos/output/stage_1_restore_output/restored_image'
    
    opened by drupanchal 11
  • Should the L1 factor 'alpha' be larger?

    I reimplemented the training code; however, it is hard to raise SSIM to 0.69 when training VAE1 and VAE2 unless I change the L1 factor 'alpha'. How long did it take you to train VAE1 and VAE2? Could you give a rough timeline for releasing the training code?

    opened by HaloKester 9
  • CUDA out of memory

    With some images this exception is thrown: Skip a.png due to an error: CUDA out of memory. Tried to allocate 3.10 GiB (GPU 0; 8.00 GiB total capacity; 3.83 GiB already allocated; 2.17 GiB free; 4.40 GiB reserved in total by PyTorch).

    Thanks in advance

    opened by Davydhh 9
  • Couldn't install libraries in requirements.txt

    When I try to install the required libraries, I get the following error:

    Running setup.py install for torch ... error
    ERROR: Command errored out with exit status 1:
    command: 'c:\users\mahmed.ssmain\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py'"'"'; __file__='"'"'C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-record-gm8ndtxb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\Include\torch'
    cwd: C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch
    Complete output (23 lines):
    running install
    running build_deps
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py", line 225, in <module>
        setup(name="torch", version="0.1.2.post2",
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\site-packages\setuptools\__init__.py", line 145, in setup
        return distutils.core.setup(**attrs)
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\core.py", line 148, in setup
        dist.run_commands()
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\dist.py", line 985, in run_command
        cmd_obj.run()
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py", line 99, in run
        self.run_command('build_deps')
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\dist.py", line 985, in run_command
        cmd_obj.run()
      File "C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py", line 51, in run
        from tools.nnwrap import generate_wrappers as generate_nn_wrappers
    ModuleNotFoundError: No module named 'tools.nnwrap'

    opened by Mahmoud-Tawfeek 8
  • Error in Downloading Checkpoint for Face Enhancement

    --2022-12-28 03:36:54--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-12-28 03:36:54 ERROR 404: The specified resource does not exist..

    unzip: cannot find or open checkpoints.zip, checkpoints.zip.zip or checkpoints.zip.ZIP.
    /content/photo_restoration
    /content/photo_restoration/Global
    --2022-12-28 03:36:54--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-12-28 03:36:55 ERROR 404: The specified resource does not exist..

    Please help in this regards

    opened by m-aliabbas 1
  • ModuleNotFoundError: No module named 'torch'

    ┌──(root㉿kali)-[/home/kirisawa/Bringing-Old-Photos-Back-to-Life]
    └─# python3 run.py
    Running Stage 1: Overall restoration
    Traceback (most recent call last):
      File "/home/kirisawa/Bringing-Old-Photos-Back-to-Life/Global/test.py", line 6, in <module>
        from torch.autograd import Variable
    ModuleNotFoundError: No module named 'torch'
    Traceback (most recent call last):
      File "/home/kirisawa/Bringing-Old-Photos-Back-to-Life/run.py", line 102, in <module>
        for x in os.listdir(stage_1_results):
    FileNotFoundError: [Errno 2] No such file or directory: '/home/kirisawa/Bringing-Old-Photos-Back-to-Life/output/stage_1_restore_output/restored_image'

    I've installed the packages from requirements.txt but I'm still having problems. 😭😭

    opened by whitenight1985 2
  • How does this project "generate scratches"

    Hello, author. First of all, thank you very much for the code you posted. I have a question for you, that is, I can't find the code that adds scratches. Can you tell me where it is?

    opened by xiaoyanghuha 0
  • Missing additional download

    # pull the syncBN repo
    %cd photo_restoration/Face_Enhancement/models/networks
    !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
    !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
    %cd ../../../
    
    %cd Global/detection_models
    !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
    !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
    %cd ../../
    
    # download the landmark detection model
    %cd Face_Detection/
    !wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    !bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
    %cd ../
    
    # download the pretrained model
    %cd Face_Enhancement/
    !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
    !unzip checkpoints.zip
    %cd ../
    
    %cd Global/
    !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
    !unzip checkpoints.zip
    %cd ../
    
    /content/photo_restoration
    /content/photo_restoration/Face_Enhancement
    --2022-11-09 21:01:48--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-11-09 21:01:49 ERROR 404: The specified resource does not exist..
    
    unzip:  cannot find or open checkpoints.zip, checkpoints.zip.zip or checkpoints.zip.ZIP.
    /content/photo_restoration
    /content/photo_restoration/Global
    --2022-11-09 21:01:49--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-11-09 21:01:50 ERROR 404: The specified resource does not exist..
    
    unzip:  cannot find or open checkpoints.zip, checkpoints.zip.zip or checkpoints.zip.ZIP.
    /content/photo_restoration
    
    opened by adamsatyr 5
  • New GUI, fixed argparse error in run.py and More.

    • Added a new GUI with options, settings, multiple languages, and a futuristic design.
    • run.py can now be imported as a library and accepts parameters; it is used by the GUI.
    • Fixed an "unknown argument" argparse error in run.py that occurred when the project was executed from a path containing spaces, or when the input/output paths contained spaces, by adding quotes ('\"') around the input and output paths in all stages.
    opened by Erickesau 5