Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation

Overview

📖 Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR 2022)

🔥 If DaGAN is helpful in your photos/projects, please help to ⭐ it or recommend it to your friends. Thanks! 🔥

[Paper]  [Project Page]  [Demo]  [Poster Video]

Fa-Ting Hong, Longhao Zhang, Li Shen, Dan Xu
The Hong Kong University of Science and Technology

Cartoon Sample

cartoon.mp4

Human Sample

celeb.mp4

VoxCeleb1 Dataset

🚩 Updates

  • 🔥 🔥 ✅ May 19, 2022: The face depth model trained on VoxCeleb2 is released! (The corresponding DaGAN checkpoint will be released soon.) Click the LINK

  • 🔥 🔥 ✅ April 25, 2022: Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the web demo: Hugging Face Spaces (a GPU version will come soon!)

  • 🔥 🔥 ✅ Added the SPADE model, which produces more natural results.

🔧 Dependencies and Installation

Installation

We now provide a clean version of DaGAN, which does not require customized CUDA extensions.

  1. Clone repo

    git clone https://github.com/harlanhong/CVPR2022-DaGAN.git
    cd CVPR2022-DaGAN
  2. Install dependencies

    pip install -r requirements.txt
    
    ## Install the Face Alignment lib
    cd face-alignment
    pip install -r requirements.txt
    python setup.py install
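After installation, a quick sanity check such as the following can confirm the environment is usable (a minimal sketch; it only assumes PyTorch and the bundled face-alignment package installed above):

    # Minimal post-install sanity check (illustrative; not part of the repo).
    import torch
    import face_alignment  # installed from the bundled face-alignment folder

    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("face_alignment imported from:", face_alignment.__file__)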

⚡ Quick Inference

We take the paper version as an example. More models can be found here.

YAML configs

See config/vox-adv-256.yaml for a description of each parameter.

Pre-trained checkpoint

The pre-trained checkpoint of the face depth network and our DaGAN checkpoints can be found at the following link: OneDrive.

Inference! To run a demo, download a checkpoint and run the following command:

CUDA_VISIBLE_DEVICES=0 python demo.py  --config config/vox-adv-256.yaml --driving_video path/to/driving --source_image path/to/source --checkpoint path/to/checkpoint --relative --adapt_scale --kp_num 15 --generator DepthAwareGenerator 

The result will be stored in result.mp4. Driving videos and source images should be cropped before they can be used in our method. To obtain semi-automatic crop suggestions, you can use python crop-video.py --inp some_youtube_video.mp4; it will generate ffmpeg commands for the crops.
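If you prefer to prepare the inputs manually rather than relying on crop-video.py, the sketch below shows one possible preprocessing step (illustrative only; it assumes imageio and Pillow are installed, and it simply center-crops to a square and resizes to 256x256, whereas crop-video.py suggests crops around the detected face):

    # Illustrative preprocessing sketch (not the repo's crop-video.py logic):
    # center-crop a source image and a driving video to a square, then resize to 256x256.
    import imageio
    import numpy as np
    from PIL import Image

    def center_crop_256(frame):
        h, w = frame.shape[:2]
        side = min(h, w)
        top, left = (h - side) // 2, (w - side) // 2
        crop = frame[top:top + side, left:left + side]
        return np.asarray(Image.fromarray(crop).resize((256, 256)))

    # Source image
    image = imageio.imread("path/to/source")
    imageio.imwrite("source_256.png", center_crop_256(image))

    # Driving video
    reader = imageio.get_reader("path/to/driving")
    fps = reader.get_meta_data().get("fps", 25)
    imageio.mimsave("driving_256.mp4", [center_crop_256(f) for f in reader], fps=fps)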

💻 Training

Datasets

  1. VoxCeleb. Please follow the instruction from https://github.com/AliaksandrSiarohin/video-preprocessing.

Train on VoxCeleb

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --master_addr="0.0.0.0" --master_port=12348 run.py --config config/vox-adv-256.yaml --name DaGAN --rgbd --batchsize 12 --kp_num 15 --generator DepthAwareGenerator

The code will create a folder in the log directory (each run creates a new name-specific directory). Checkpoints will be saved to this folder. To check the loss values during training, see log.txt. By default, the batch size is tuned to run on 8 GeForce RTX 3090 GPUs (you can obtain the best performance after about 150 epochs). You can change the batch size in train_params in the .yaml file.

🚩 Please use multiple GPUs to train your own model; if you use only one GPU, you will run into the in-place operation error.

Also, you can watch the training loss by running the following command:

tensorboard --logdir log/DaGAN/log

If you kill your process in the middle of training for some reason, a zombie process may remain; you can kill it using our provided tool:

python kill_port.py PORT

Training on your own dataset

  1. Resize all the videos to the same size, e.g. 256x256. The videos can be in '.gif' or '.mp4' format, or a folder with images. We recommend the latter: for each video, make a separate folder with all the frames in '.png' format. This format is lossless and has better I/O performance (see the sketch after this list).

  2. Create a folder data/dataset_name with two subfolders, train and test; put training videos in train and testing videos in test.

  3. Create a config config/dataset_name.yaml; in dataset_params, specify the root directory as root_dir: data/dataset_name. Also adjust the number of epochs in train_params.
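A minimal sketch of steps 1 and 2 is below (illustrative only, not a script shipped with the repo; it assumes imageio and Pillow are installed, and my_video.mp4 is a hypothetical input that becomes one folder of 256x256 PNG frames under data/dataset_name/train):

    # Illustrative dataset-preparation sketch (not part of the repo).
    # Converts one video into a per-video folder of 256x256 PNG frames,
    # the recommended lossless format with good I/O performance.
    import os
    import imageio
    import numpy as np
    from PIL import Image

    video_path = "my_video.mp4"                   # hypothetical input video
    out_dir = "data/dataset_name/train/my_video"  # one folder per video
    os.makedirs(out_dir, exist_ok=True)

    for i, frame in enumerate(imageio.get_reader(video_path)):
        frame = np.asarray(Image.fromarray(frame).resize((256, 256)))
        imageio.imwrite(os.path.join(out_dir, f"{i:07d}.png"), frame)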

📜 Acknowledgement

Our DaGAN implementation is inspired by FOMM. We appreciate the authors of FOMM for making their code available to the public.

📜 BibTeX

@inproceedings{hong2022depth,
            title={Depth-Aware Generative Adversarial Network for Talking Head Video Generation},
            author={Hong, Fa-Ting and Zhang, Longhao and Shen, Li and Xu, Dan},
            booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
            year={2022}
          }

📧 Contact

If you have any questions, please email [email protected].

Comments
  • add web demo/model to Huggingface

    Hi, would you be interested in adding DaGAN to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models/datasets/spaces (web demos) can be added to a user account or organization, similar to GitHub.

    Example from other organizations: Keras: https://huggingface.co/keras-io Microsoft: https://huggingface.co/microsoft Facebook: https://huggingface.co/facebook

    Example spaces with repos: github: https://github.com/salesforce/BLIP Spaces: https://huggingface.co/spaces/salesforce/BLIP

    github: https://github.com/facebookresearch/omnivore Spaces: https://huggingface.co/spaces/akhaliq/omnivore

    and here are guides for adding spaces/models/datasets to your org

    How to add a Space: https://huggingface.co/blog/gradio-spaces how to add models: https://huggingface.co/docs/hub/adding-a-model uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

    opened by AK391 18
  • crop face

    Your work is amazing!

    But I have two questions:

    1. Is it possible to pad more borders when cropping faces? Or does it have to crop the face strictly according to the detected box?
    2. https://github.com/harlanhong/CVPR2022-DaGAN/blob/78b22edcdbb4192b81c5adf343f980b42cddfe5d/crop-video.py#L25 When -1 is used, an IndexError is reported.
    opened by Carlyx 6
  • The generated face remains the same pose

    Thanks for your good work; however, when I tried running the demo, the generated video tends to remain in the same pose as the source image, while in the paper (Figure 2) the generated results follow the driving frame's pose (this is also the case for the results in the README). So why is this the case?

    https://user-images.githubusercontent.com/29053705/165462856-da97c242-b091-4609-b122-414c4216f492.mp4

    opened by hallwaypzh 4
  • Error in running a demo version!

    Hello! Thanks for openly sharing this amazing work! My research is also related to generating talking faces. I ran into an error when I tried to run: CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-adv-256.yaml --driving_video data/2.mp4 --source_image data/2.jpg --checkpoint depth/models/weights_19/encoder.pth --relative --adapt_scale --kp_num 15 --generator DepthAwareGenerator [attached error screenshot] Can you please point out where I made a mistake while running the demo?

    opened by muxiddin19 3
  • testing error

    When I run this command: CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-adv-256.yaml --driving_video ./example_video.mp4 --source_image ./example_image.png --checkpoint ./checkpoints/SPADE_DaGAN_vox_adv_256.pth.tar --relative --adapt_scale --kp_num 15 --generator SPADEDepthAwareGenerator --result_video results/example_out.mp4 --find_best_frame

    I got the following error:

        Traceback (most recent call last):
          File "demo.py", line 169, in
            depth_encoder.load_state_dict(filtered_dict_enc)
          File "/home/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
            self.__class__.__name__, "\n\t".join(error_msgs)))
        RuntimeError: Error(s) in loading state_dict for ResnetEncoder:
            size mismatch for encoder.layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
            size mismatch for encoder.layer1.1.conv1.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
            size mismatch for encoder.layer2.0.conv1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
            size mismatch for encoder.layer2.0.downsample.0.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
            size mismatch for encoder.layer2.0.downsample.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
            size mismatch for encoder.layer2.0.downsample.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
            size mismatch for encoder.layer2.0.downsample.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
            size mismatch for encoder.layer2.0.downsample.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
            size mismatch for encoder.layer2.1.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
            size mismatch for encoder.layer3.0.conv1.weight: copying a param with shape torch.Size([256, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
            size mismatch for encoder.layer3.0.downsample.0.weight: copying a param with shape torch.Size([1024, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).
            size mismatch for encoder.layer3.0.downsample.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
            size mismatch for encoder.layer3.0.downsample.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
            size mismatch for encoder.layer3.0.downsample.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
            size mismatch for encoder.layer3.0.downsample.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
            size mismatch for encoder.layer3.1.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
            size mismatch for encoder.layer4.0.conv1.weight: copying a param with shape torch.Size([512, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
            size mismatch for encoder.layer4.0.downsample.0.weight: copying a param with shape torch.Size([2048, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 1, 1]).
            size mismatch for encoder.layer4.0.downsample.1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
            size mismatch for encoder.layer4.0.downsample.1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
            size mismatch for encoder.layer4.0.downsample.1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
            size mismatch for encoder.layer4.0.downsample.1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
            size mismatch for encoder.layer4.1.conv1.weight: copying a param with shape torch.Size([512, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
            size mismatch for encoder.fc.weight: copying a param with shape torch.Size([1000, 2048]) from checkpoint, the shape in current model is torch.Size([1000, 512]).

    opened by Ha0Tang 3
  • Fix some codes about py-feat library

    Hi @harlanhong !

    First, I'm very pleased to see your work, DaGAN. Thanks for your effort. The reason I opened this issue is that I just want to fix your code a little bit. In your utils.py, there is some code using the py-feat library, and it causes a problem. I don't know which version of py-feat you use, but in any case you should change the code as follows:

    p1 = out1.facepose().values # AS-IS
    p1 = out1.facepose.values # TO-BE
    

    This is because the latest version of py-feat exposes facepose as a property, like this:

    @property
    def facepose(self):
        """Returns the facepose data using the columns set in fex.facepose_columns

        Returns:
            DataFrame: facepose data
        """
        return self[self.facepose_columns]
    

    Could you fix this problem for anybody who will use this code?

    opened by samsara-ku 3
  • Size of input

    Hello! Thanks for your great work! I have a question: does your model support input resolutions higher than 256px, for example 512px? I see that in the code the input video and image are resized to 256px, which causes a loss of visual quality. Is there a way to use a 512x512 image/video without losing quality?

    opened by NikitaKononov 3
  • Error while training on VoxCeleb

    Hi, I am trying to train DaGAN on VoxCeleb. The following error is occurring.

      File "run.py", line 144, in <module>
        train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.local_rank,device,opt,writer)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/train.py", line 66, in train
        losses_generator, generated = generator_full(x)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/model.py", line 189, in forward
        kp_driving = self.kp_extractor(driving)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/keypoint_detector.py", line 51, in forward
        feature_map = self.predictor(x) #x bz,4,64,64
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/util.py", line 252, in forward
        return self.decoder(self.encoder(x))
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/util.py", line 178, in forward
        out = up_block(out)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/modules/util.py", line 92, in forward
        out = self.norm(out)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 745, in forward
        self.eps,
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/nn/functional.py", line 2283, in batch_norm
        input, weight, bias, running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled
     (function _print_stack)
    ^M  0%|          | 0/3965 [00:26<?, ?it/s]
    ^M  0%|          | 0/150 [00:26<?, ?it/s]
    
    Traceback (most recent call last):
      File "run.py", line 144, in <module>
        train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.local_rank,device,opt,writer)
      File "/home/madhav3101/gan_codes/CVPR2022-DaGAN/train.py", line 70, in train
        loss.backward()
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
    /home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
    and will be removed in future. Use torchrun.
    Note that --use_env is set by default in torchrun.
    If your script expects `--local_rank` argument to be set, please
    change it to read from `os.environ['LOCAL_RANK']` instead. See
    https://pytorch.org/docs/stable/distributed.html#launch-utility for
    further instructions
    
      FutureWarning,
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 13113) of binary: /home/madhav3101/env_tf/bin/python
    Traceback (most recent call last):
      File "/home/madhav3101/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/madhav3101/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
        main()
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
        )(*cmd_args)
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/home/madhav3101/env_tf/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
    ============================================================
    run.py FAILED
    ------------------------------------------------------------
    Failures:
      <NO_OTHER_FAILURES>
    ------------------------------------------------------------
    Root Cause (first observed failure):
    [0]:
      time      : 2022-04-25_17:30:13
      host      : gnode90.local
      rank      : 0 (local_rank: 0)
      exitcode  : 1 (pid: 13113)
      error_file: <N/A>
      traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
    ============================================================
    
    
    opened by mdv3101 3
  • Missing setup.py

    Hi,

    Thanks for this wonderful work!

    It seems that the setup.py file is missing in this new version. Is it possible for you to upload it again? Thanks a lot for the help!

    Best, Wenhua

    opened by WinnieLaugh 2
  • About measurement question

    Hi, @harlanhong. First, I appreciate your nice work in this field.

    I'm just asking how you measured your metric results in detail.

    Did you write your own code to measure the results in the tables, or did you just import library functions?

    And if you wrote the code, could you share it? If not, what library did you use to measure those results?

    Thank you.

    [attached image]

    opened by samsara-ku 2
  • Error as training on my own dataset, did anyone have this problem before?

    [W python_anomaly_mode.cpp:104] Warning: Error detected in CudnnBatchNormBackward. Traceback of forward call that caused the error:
      File "run.py", line 144, in
        train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.local_rank,device,opt,writer)
      File "/mnt/users/CVPR2022-DaGAN-master/train.py", line 66, in train
        losses_generator, generated = generator_full(x)

    Meanwhile, there's another problem as well:

      Traceback (most recent call last):
        File "run.py", line 144, in
          train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.local_rank,device,opt,writer)
        File "/mnt/users/CVPR2022-DaGAN-master/train.py", line 74, in train
          loss.backward()
        File "/home/anaconda3/envs/DaGAN/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
          torch.autograd.backward(self, gradient, retain_graph, create_graph)
      RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32]] is at version 5; expected version 4 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

    It seems to be an in-place problem, but I couldn't find an in-place operation anywhere in the code.

    opened by twilight0718 2
  • How do I train the network with my own data?

    Hi, First I want to thank you for providing the code. DaGAN works like magic.

    Here is my issue: I'd like to create a video of a guy with strong emotion, like screaming. I have the driving video, but the clip generated by DaGAN doesn't share the strong emotion of the driving video; the mouth only opens slightly, unlike the wide-open mouth in the driving video.

    I thought it was a dataset problem: there are not many strong emotions in the VoxCeleb dataset, which consists of interview videos. I set out to train the model from scratch with the driving video (about 1500 face images). I use your ResNet-50 depth encoder/decoder pretrained weights and train my own generator, keypoint detector and discriminator. However, the results are horrible. The face doesn't even change expression.

    My questions are: 1. Should I train from scratch or just fine-tune your model with my driving video? 2. When I train the network, I just input a bunch of face images of the same person with different expressions/head poses. Is this right? Do the "driving" and "source" frames have to be close together in the video (only a slight expression/pose change)?

    Thanks a lot!

    opened by pcmdrg 5
  • evaluation and comparison with MarioNETte and MeshG

    Hello, thanks for releasing the code of this excellent work! I have a question about the evaluation and comparison with MarioNETte and MeshG. As mentioned in the paper, the test set sampling strategy follows that of MarioNETte, and the reported results of MarioNETte and MeshG are replicated from their original papers. So I wonder if the test set lists in the folder './data', such as '/data/celeV_cross_id_evaluation.csv', are the same as those used by MarioNETte and MeshG. Looking forward to your reply! Thanks!

    opened by ozhyo 18
  • Error No such file or directory: 'depth/models/weights_19/encoder.pth'

    I downloaded the pre-trained weights DaGAN_vox_adv_256.pth.tar from the OneDrive and put them in a checkpoints directory. When I run the demo command with --cpu, I get the following error:

    (dagan) [email protected] CVPR2022-DaGAN % python demo.py --config config/vox-adv-256.yaml --driving_video ./assets/driving.mp4 --source_image ./assets/leo.jpg --checkpoint ./checkpoints/DaGAN_vox_adv_256.pth.tar --relative --adapt_scale --kp_num 15 --generator DepthAwareGenerator --cpu                 
    Traceback (most recent call last):
      File "demo.py", line 165, in <module>
        loaded_dict_enc = torch.load('depth/models/weights_19/encoder.pth')
      File "/Users/user/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/serialization.py", line 594, in load
        with _open_file_like(f, 'rb') as opened_file:
      File "/Users/user/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/serialization.py", line 230, in _open_file_like
        return _open_file(name_or_buffer, mode)
      File "/Users/user/miniconda3/envs/dagan/lib/python3.7/site-packages/torch/serialization.py", line 211, in __init__
        super(_open_file, self).__init__(open(name, mode))
    FileNotFoundError: [Errno 2] No such file or directory: 'depth/models/weights_19/encoder.pth'
    

    How can I solve it? Many thanks, great job and good luck for ICLR :) !

    opened by tikitong 2