3D-aware GANs based on NeRF (arXiv).

Overview

CIPS-3D

This repository will contain the code of the paper,
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis.

We plan to publish the training code here in December. But if the GitHub stars reach two hundred, I will move the date up. Stay tuned 🕙.

Demo videos

demo1.mp4
demo2.mp4
demo_animal_finetuned.mp4
demo3.mp4
demo4.mp4
demo5.mp4

Mirror symmetry problem

The mirror symmetry problem refers to the sudden change in the direction of the bangs near a yaw angle of pi/2. We propose an auxiliary discriminator to solve this problem (please see the paper).

Note that in the initial stage of training, the auxiliary discriminator must dominate the generator more than the main discriminator does; otherwise, the mirror symmetry problem will still occur. In practice, progressive training guarantees this. We have trained from scratch many times, and adding an auxiliary discriminator stably solves the mirror symmetry problem. If you find any problems with this approach, please open an issue.
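
To make the idea concrete, below is a minimal PyTorch-style sketch of a generator update with an auxiliary discriminator. This is an illustration, not the repository's actual training loop: the callables D_main and D_aux, the warm-up length, and the loss weights are all assumptions; the only point demonstrated is that the auxiliary term dominates the generator's objective early in training.

    import torch.nn.functional as F

    def generator_loss(G, D_main, D_aux, z, pose, step, warmup=10_000):
        fake = G(z, pose)  # render a fake image at a sampled camera pose
        # Non-saturating GAN losses against each discriminator.
        loss_main = F.softplus(-D_main(fake)).mean()
        loss_aux = F.softplus(-D_aux(fake)).mean()
        # The auxiliary loss dominates early on; its weight decays over training.
        w_aux = max(10.0 * (1.0 - step / warmup), 1.0)
        return loss_main + w_aux * loss_aux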

Envs


Training


Citation

If you find our work useful in your research, please cite:


@article{zhou2021CIPS3D,
  title = {{CIPS-3D}: A {3D}-Aware Generator of {GANs} Based on Conditionally-Independent Pixel Synthesis},
  shorttitle = {{CIPS-3D}},
  author = {Zhou, Peng and Xie, Lingxi and Ni, Bingbing and Tian, Qi},
  year = {2021},
  eprint = {2110.09788},
  eprinttype = {arxiv},
  primaryclass = {cs, eess},
  archiveprefix = {arXiv}
}

Acknowledgments

Comments
  • CUDA error: out of memory

    Hi, I get a CUDA error: out of memory (even with batch size = 1) when I try to run the training script with this command:

        CUDA_VISIBLE_DEVICES=2 python -c "import sys; sys.path.append('./'); from exp.tests.test_cips3d import Testing_ffhq_exp; Testing_ffhq_exp().test_train_ffhq(debug=False)" --tl_opts batch_size 1 img_size 32 total_iters 80000

    I am running on a V100 GPU with 32 GB of memory. What should I do? By the way, I really appreciate your work; it is a great paper. 👏

    opened by longnhatne 7
  • Problem about reproducing the results

    Hi, PeterouZh,

    I'm reproducing your results at the same pace as you. Honestly speaking, this model takes about 40 hours to reach FID 15.97 at 64x64 with 8 A100 GPUs. When I change the resolution to 128x128, the FID reaches 23.58. I'm still training it, and it has only reached FID 20.03 so far.

    How can this model reach the FID of 6.XX reported in the paper? Are we missing something key? It looks like this model can only reach an FID above 10 at 256 resolution, because performance improves very slowly once the FID reaches 16 at 64x64.

    By the way, I tried to reproduce your results a few weeks ago but ran into problems with moxing. Does moxing provide important tricks for this work?

    opened by 0three 7
  • The quality of generated images for FFHQ

    Hello,

    Thanks for sharing your source code and pre-trained weights. I am trying to generate high-quality images from the FFHQ pre-trained model. However, the quality of the generated images is not as good as stated in the paper. I could not reproduce the results.

    I am using the pre-trained weights from here https://github.com/PeterouZh/CIPS-3D/releases/tag/v0.0.2

    The command I tried:

        python exp/cips3d/scripts/sample_images.py --tl_config_file exp/cips3d/configs/ffhq_exp.yaml --tl_command sample_images

    Generated images: (three sample images attached)

    Do you have any idea regarding the problem?

    opened by enisimsar 6
  • How can I get an image resolution greater than 256?

    Hi! You did a great job; thanks for such a great paper and for promptly publishing the CIPS-3D code.

    I've already gotten good results with your pipeline, but only for 64x64 images. Now I'm waiting for the results at 128x128, and I will train further for higher-resolution images.

    Do I understand correctly that, in order to get 512x512 images, I need to convert the original FFHQ dataset once again through your script dataset_tool.py, specifying a resize to 512? And afterwards run the training pipeline with lower learning rates for the generator and the discriminator?
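
    For the resize step, here is a minimal sketch of how the images could be prepared with PIL, using placeholder paths; this is an illustrative alternative, not the repository's dataset_tool.py:

        from pathlib import Path
        from PIL import Image

        SRC = Path("ffhq/images1024x1024")   # placeholder source directory
        DST = Path("ffhq/images512x512")     # placeholder output directory
        DST.mkdir(parents=True, exist_ok=True)

        for p in sorted(SRC.glob("*.png")):
            # High-quality Lanczos downsample to the target resolution.
            img = Image.open(p).convert("RGB").resize((512, 512), Image.LANCZOS)
            img.save(DST / p.name)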

    Thanks!

    opened by gofixyourself 4
  • > I want to test some other images on your model, but I don't know how to do it. If I have an image sequence with pose data, how do I test?

    I want to test some other images on your model, but I don't know how to do it. If I have an image sequence with pose data, how do I test?

    1. Align the images the way StyleGAN does. You can refer to the script align_images.py.
    2. Project the aligned images into the W space, also known as GAN inversion (see the sketch below). Unlike common 2D inversion, you had better set an appropriate yaw/pitch/fov for the CIPS-3D generator so that the initial pose of G(w) is consistent with the image to be inverted.
    3. After you get the w of the image, you can reconstruct images of different styles using G'(w). G' can be obtained by interpolating generators of different domains.

    Hope this helps.
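
    As a rough illustration of step 2, projecting into W space is typically a direct optimization of w against a reconstruction loss. A minimal sketch, assuming a generator callable G(w, yaw=..., pitch=..., fov=...) that returns an image batch and a helper G.mean_latent() for initialization (both assumptions; the actual interface in this repository may differ):

        import torch
        import torch.nn.functional as F

        def invert(G, target, yaw, pitch, fov, steps=500, lr=0.01):
            # target: a (1, 3, H, W) aligned image; start from the average latent.
            w = G.mean_latent().clone().requires_grad_(True)  # assumed helper
            opt = torch.optim.Adam([w], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                pred = G(w, yaw=yaw, pitch=pitch, fov=fov)    # render at the target's pose
                loss = F.mse_loss(pred, target)               # add a perceptual term in practice
                loss.backward()
                opt.step()
            return w.detach()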

    Originally posted by @PeterouZh in https://github.com/PeterouZh/CIPS-3D/issues/7#issuecomment-963163677

    opened by zhywanna 2
  • Configuration environment issues

    Hi, good job!

    I have a problem, please help me.

    pip install -e torch_fidelity_lib

    ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /media/sdb/wd/test_code/CIPS-3D/torch_fidelity_lib

    opened by Stephanie-ustc 2
  • The pretrained model can be used in finetune_photo2cartoon.sh?

    I loaded the FFHQ pre-trained model from the pre-trained checkpoints and changed finetune_dir to point to it in finetune_photo2cartoon.sh, but it does not seem to work. Can the pre-trained model be used in finetune_photo2cartoon.sh?

    opened by Benwang-chen 1
  • A few questions

    Dear Dr. Zhou, thanks for sharing your great work, and congratulations on completing your Ph.D.! I have a few questions and hope for your reply.

    1. I found a command in another issue (https://github.com/PeterouZh/CIPS-3D/issues/31#issue-1196645855):

           python exp/cips3d/scripts/sample_images.py --tl_config_file exp/cips3d/configs/ffhq_exp.yaml --tl_command sample_images

       But I can't find those arguments in sample_images.py and am confused about how he knew to use them. I also found some packages imported from the tl2 library, but failed to find any documentation for it. Are there any instructions I missed in addition to the README?
    2. I saw two generator files in /CIPS-3D/exp/cips3d/models, generator.py and generator_v1.py. Which one should I use?
    3. Which class in the generator files is the complete generator module? I want to do some inversion tests and am not sure whether it's the class GeneratorNerfINR. Also, is the G_ema.pth or generator.pth in the checkpoint the corresponding parameters that I can load directly?
    4. What is the use of state_dict.pth in the checkpoint?

    By the way, I think using Chinese would be more convenient for us. Thanks!

    opened by zhywanna 1
  • Output images with gradient during inference

    Hi there,

    I am trying to output images with gradients. However, I found that if I use your default testing code, it calls whole_grad_forward (https://github.com/PeterouZh/CIPS-3D/blob/aee40251a02c34e58d3002bcb845151c41b538f0/exp/dev/nerf_inr/models/generator_nerf_inr_v16.py#L1395), which removes the gradient. If I comment out the torch.no_grad(), it runs out of memory. Is there a way to output images with gradients? Thanks
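
    A generic memory-saving workaround (an assumption, not something from this repository) is to keep gradients enabled but evaluate the network in chunks wrapped in torch.utils.checkpoint, which recomputes intermediate activations during the backward pass instead of storing them:

        import torch
        from torch.utils.checkpoint import checkpoint

        def forward_with_grad(net, coords, chunk=4096):
            # coords: (N, D) input points. Gradients still flow, but peak
            # memory drops because activations are recomputed on backward.
            outs = [checkpoint(net, coords[i:i + chunk], use_reentrant=False)
                    for i in range(0, coords.shape[0], chunk)]
            return torch.cat(outs, dim=0)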

    opened by lelechen63 1
  • closed

    Hi,

    Thanks for the great work. I am trying to invert an image into w/z using the pretrained model. Would you release the pretrained discriminator to enable the inversion feature? Thanks

    opened by lelechen63 1
  • Question about the input of shallow nerf network

    I know NeRF is a view-dependent synthesis method due to its view-direction input. However, I find you don't use it in your code. Why does CIPS-3D still work? Can inputting only the world coordinates achieve novel view synthesis? Why?
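
    For context, a NeRF-style network does not have to take the view direction: the camera pose still matters because it determines which 3D points are sampled along each ray, so novel views emerge even from a position-only network; what is lost is only view-dependent shading such as specular highlights. A minimal sketch of such a network (an illustration, not the architecture used in this repository):

        import torch.nn as nn

        class PositionOnlyNeRF(nn.Module):
            """Maps world coordinates xyz to (feature, density); no view direction."""
            def __init__(self, hidden=128, feat_dim=64):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(3, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, feat_dim + 1),  # last channel is density
                )

            def forward(self, xyz):                   # xyz: (N, 3)
                out = self.mlp(xyz)
                return out[..., :-1], out[..., -1]    # features, density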

    opened by shoutOutYangJie 1
  • Why not train from scratch?

    Hello, thank you for your open-source code.

    In the README you explain that the training pipeline for high resolution is 32->64->128->256, with each stage fine-tuned from the model of the previous resolution. Such a training strategy is indeed much easier than training directly. Have you tried training directly at 256 resolution, and can similar results be obtained by adjusting the training parameters?

    opened by BlingHe 0
  • How to view G model effects? web_demo.py only shows 3 identical pics

    How can I view the G model's outputs?

    When I run web_demo.py, the web page displays only 3 identical pictures, and 1 picture displays nothing (it is black).

    (screenshots of the page and of web_demo.py attached)

    opened by jojoWd 0
  • Can I put my face photo into your pre-trained web demo to generate a 3D video?

    Hello, thank you for your contribution. I tried to run your web demo. I saw you say, "Thus current stylization is limited to randomly generated images. To edit a real image, we need to project the image to the latent space of the generator." So I can't import other face images to produce the effect shown in the demo videos? Thank you.

    opened by lemonsstyle 0
  • How to set the near and far plane in NeRF network?

    Thanks for your excellent work. I am curious why you set ray_near and ray_end to 0.88 and 1.12 (and other variables like h_stddev, etc.). Were those set empirically?
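
    For intuition, the near and far planes only bound where sample points are placed along each ray; with the camera at roughly unit distance from the origin, [0.88, 1.12] brackets the volume occupied by the head. A minimal stratified-sampling sketch in the usual NeRF style (generic, not this repository's code):

        import torch

        def sample_along_rays(origins, dirs, n_samples=12, near=0.88, far=1.12):
            # origins, dirs: (R, 3) ray origins and unit directions.
            # Evenly spaced depths, jittered within each bin (stratified sampling).
            t = torch.linspace(near, far, n_samples)
            t = t + torch.rand(origins.shape[0], n_samples) * (far - near) / n_samples
            return origins[:, None, :] + t[..., None] * dirs[:, None, :]  # (R, S, 3)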

    opened by cwchenwang 1
  • add web demo/model to Huggingface

    Hi, would you be interested in adding CIPS-3D to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community.

    Examples from other organizations:
    Keras: https://huggingface.co/keras-io
    Microsoft: https://huggingface.co/microsoft
    Facebook: https://huggingface.co/facebook

    Example Spaces with repos:
    GitHub: https://github.com/salesforce/BLIP
    Spaces: https://huggingface.co/spaces/salesforce/BLIP

    GitHub: https://github.com/facebookresearch/omnivore
    Spaces: https://huggingface.co/spaces/akhaliq/omnivore

    and here are guides for adding spaces/models/datasets to your org

    How to add a Space: https://huggingface.co/blog/gradio-spaces
    How to add models: https://huggingface.co/docs/hub/adding-a-model
    Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

    opened by AK391 1