Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers

Overview

This repository contains the official implementation of the paper "Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers".


Results

Results on COCO val

Backbone                 Method              Lr Schd  PQ    Config  Download
R-50                     Panoptic-SegFormer  1x       48.0  config  model
R-50                     Panoptic-SegFormer  2x       49.6  config  model
R-101                    Panoptic-SegFormer  2x       50.6  config  model
PVTv2-B5 (much lighter)  Panoptic-SegFormer  2x       55.6  config  model
Swin-L (window size 7)   Panoptic-SegFormer  2x       55.8  config  model

Install

Prerequisites

  • Linux
  • Python 3.6+
  • PyTorch 1.5+
  • torchvision
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • mmcv-full==1.3.4
  • mmdet==2.12.0 # higher versions may not work
  • timm==0.4.5
  • einops==0.3.0
  • Pillow==8.0.1
  • opencv-python==4.5.2

Note: PyTorch 1.8 has a bug in its adamw.py that is fixed in PyTorch 1.9; you can easily patch your copy by applying the corresponding diff.

Install Panoptic SegFormer

python setup.py install 
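
Since the dependency pins above are strict, a quick sanity check can confirm that the environment matches them. This is a minimal sketch (not part of the repo); it only reads the version strings of the packages listed above:

import torch
import mmcv
import mmdet
import timm
import einops

# Pinned versions from the prerequisites list above.
expected = {'mmcv': '1.3.4', 'mmdet': '2.12.0', 'timm': '0.4.5', 'einops': '0.3.0'}
for mod in (mmcv, mmdet, timm, einops):
    got = mod.__version__
    want = expected[mod.__name__]
    print(f"{mod.__name__} {got}" + ("" if got == want else f" (expected {want})"))
# PyTorch 1.5+ is required; 1.8 needs the adamw.py patch noted above.
print('torch', torch.__version__)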

Datasets

When I began this project, mmdet did not yet officially support panoptic segmentation, so for convenience I converted the dataset from panoptic segmentation format to instance segmentation format.

1. Prepare the data (COCO)

cd Panoptic-SegFormer
mkdir datasets
cd datasets
ln -s path_to_coco coco
mkdir annotations/
cd annotations
wget http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
unzip panoptic_annotations_trainval2017.zip

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...

2. Convert panoptic format to detection format

cd Panoptic-SegFormer
./tools/convert_panoptic_coco.sh coco

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017_detection_format.json
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   ├── panoptic_val2017_detection_format.json
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...
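
For reference, the conversion follows the same idea as panopticapi's panoptic-to-detection converter: decode each panoptic PNG into segment ids, then re-encode every segment as a stand-alone instance annotation. The sketch below is illustrative only (the function name and exact output fields are assumptions, not the repo's script); it relies on panopticapi and pycocotools:

import json
import numpy as np
from PIL import Image
from panopticapi.utils import rgb2id
from pycocotools import mask as mask_utils

def panoptic_to_detection(panoptic_json, panoptic_png_dir, out_json):
    # Hypothetical sketch of the panoptic -> detection-format conversion.
    with open(panoptic_json) as f:
        pan = json.load(f)
    annotations = []
    for ann in pan['annotations']:
        # Each panoptic annotation references one PNG whose pixel colors encode segment ids.
        png = Image.open(f"{panoptic_png_dir}/{ann['file_name']}")
        id_map = rgb2id(np.array(png, dtype=np.uint32))
        for seg in ann['segments_info']:
            # Re-encode each segment (thing or stuff) as an RLE instance mask.
            binary = np.asfortranarray((id_map == seg['id']).astype(np.uint8))
            rle = mask_utils.encode(binary)
            rle['counts'] = rle['counts'].decode('ascii')  # make it JSON-serializable
            annotations.append({
                'id': seg['id'],
                'image_id': ann['image_id'],
                'category_id': seg['category_id'],
                'segmentation': rle,
                'bbox': seg['bbox'],
                'area': seg['area'],
                'iscrowd': seg.get('iscrowd', 0),
            })
    out = {'images': pan['images'],
           'annotations': annotations,
           'categories': pan['categories']}
    with open(out_json, 'w') as f:
        json.dump(out, f)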

Run (panoptic segmentation)

Train

Single machine with 8 GPUs:

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 8

Test

./tools/dist_test.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py path/to/model.pth 8
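
There is no official single-image demo script, but inference can be run through mmdet's standard API. The sketch below is an assumption based on the usage shown in the comments further down (importing easymd registers the custom modules, and inference_detector is called with a list of image paths):

from mmdet.apis import init_detector, inference_detector
import easymd  # noqa: F401 -- registers this repo's models with mmdet

config = './configs/panformer/panformer_r50_24e_coco_panoptic.py'
checkpoint = 'path/to/model.pth'

model = init_detector(config, checkpoint=checkpoint)
results = inference_detector(model, ['demo.jpg'])
# Inspect the returned structure (e.g. results['segm']) before post-processing.
print(type(results))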

Citing

If you use Panoptic SegFormer in your research, please use the following BibTeX entry.

@article{li2021panoptic,
  title={Panoptic SegFormer},
  author={Li, Zhiqi and Wang, Wenhai and Xie, Enze and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Lu, Tong and Luo, Ping},
  journal={arXiv},
  year={2021}
}

Acknowledgement

Mainly based on Deformable DETR from MMDetection.

Many thanks to other open-source works: timm, Panoptic FCN, MaskFormer, QueryInst.

Comments
  • How to demo a result on one picture?


    Dear friend, thank you for your good work. We do not want to download the COCO datasets; we just want to take one picture, segment it, and show the result. How can we do that? Best regards,

    opened by delldu 3
  • What's pvt_v2_ap in the code?


    I found there are many names that are hard to understand. For example, what does pvt_v2_ap stand for? And what does single_stage_w_mask stand for?


    And what are the differences between those files?

    opened by jinfagang 2
  • How to visualize a demo image?


    Dear friend, how can I visualize the segmentation result on custom images? I ran inference.py and didn't get a good result.

    I think there are some faults in my code.

    Here is my code:

    from mmdet.apis.inference import init_detector, inference_detector
    import easymd  # registers this repo's custom modules with mmdet
    import cv2
    import random
    import colorsys
    import numpy as np
    
    def random_colors(N, bright=True):
        # Generate N visually distinct colors by sampling evenly spaced hues.
        brightness = 1.0 if bright else 0.7
        hsv = [(i / float(N), 1, brightness) for i in range(N)]
        colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
        random.shuffle(colors)
        return colors
    
    def apply_mask(image, mask, color, alpha=0.5):
        # Alpha-blend the color into the image wherever the mask is nonzero.
        for c in range(3):
            image[:, :, c] = np.where(mask == 0,
                                      image[:, :, c],
                                      image[:, :, c] *
                                      (1 - alpha) + alpha * color[c] * 255)
        return image
    
    config = './configs/panformer/panformer_pvtb5_24e_coco_panoptic.py'
    checkpoints = "./checkpoints/panoptic_segformer_pvtv2b5_2x.pth"
    img_path = "img_path"
    mask_save_path = "save_path"
    
    colors = random_colors(80)
    
    model = init_detector(config, checkpoint=checkpoints)
    
    results = inference_detector(model, [img_path])
    
    img = cv2.imread(img_path)
    
    seg = results['segm'][0]
    N = len(seg)
    
    masked_image = img.copy()
    for i in range(N):
        color = colors[i]
        # Collapse all masks of class i into a single binary mask.
        masks = np.sum(seg[i], axis=0)
        masked_image = apply_mask(masked_image, masks, color)
    
    cv2.imwrite(mask_save_path, masked_image)
    
    opened by garriton 0
  • Location Decoder loss


    https://github.com/zhiqi-li/Panoptic-SegFormer/blob/e604ef810eaf5101106d221db4b6970c2daca5c9/easymd/models/panformer/panformer_head.py#L360-L364

    Why does the location decoder only compute the losses of the first L-1 layers and not all L layers?

    opened by hust-nj 0
  • Instructions for single-GPU run


    Hi, thanks for sharing your work. I was trying to run it on a single GPU. Would you please add some instructions or scripts for running it on a single GPU? That would be a great help.

    Kind regards, Abdullah

    opened by nazib 1
  • Impossible to debug, single_gpu code paths are broken


    It seems that multi-GPU training and evaluation work great; however, when trying to debug, you opt for using a single GPU.
    In that case the code breaks in several places during evaluation of the validation set.
    Any chance for a hotfix? :)

    To reproduce, try to run the code from PyCharm in debug mode when only one GPU is available.

    opened by aviadmx 1
  • Why are instance annotations required alongside panoptic ones?


    The model solves the panoptic segmentation task, so why does the validation dataset use instance segmentation annotations?

    data = dict(
        samples_per_gpu=2,
        workers_per_gpu=2,
        train=dict(
            type=dataset_type,
            ann_file= './datasets/annotations/panoptic_train2017_detection_format.json',
            img_prefix=data_root + 'train2017/',
            pipeline=train_pipeline),
        val=dict(
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline),
        test=dict(
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            #ann_file= './datasets/coco/annotations/image_info_test-dev2017.json',
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            #img_prefix=data_root + '/test2017/',
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline)
            )
    

    We eventually use the instances_val2017.json file instead of panoptic_val2017.json.

    opened by aviadmx 3
  • Loading checkpoint


    When loading the Swin-L checkpoint by adding a load_from line to the config configs/panformer/panformer_swinl_24e_coco_panoptic.py as follows:

    load_from='./pretrained/panoptic_segformer_swinl_2x.pth'
    

    Loading fails with an error about mismatched keys:

    unexpected key in source state_dict: bbox_head.cls_branches2.0.weight, bbox_head.cls_branches2.0.bias, bbox_head.cls_branches2.1.weight, bbox_head.cls_branches2.1.bias, bbox_head.cls_branches2.2.weight, bbox_head.cls_branches2.2.bias, bbox_head.cls_branches2.3.weight, bbox_head.cls_branches2.3.bias, bbox_head.mask_head.blocks.0.head_norm1.weight, bbox_head.mask_head.blocks.0.head_norm1.bias, bbox_head.mask_head.blocks.0.attn.q.weight, bbox_head.mask_head.blocks.0.attn.q.bias, bbox_head.mask_head.blocks.0.attn.k.weight, bbox_head.mask_head.blocks.0.attn.k.bias, bbox_head.mask_head.blocks.0.attn.v.weight, bbox_head.mask_head.blocks.0.attn.v.bias, bbox_head.mask_head.blocks.0.attn.proj.weight, bbox_head.mask_head.blocks.0.attn.proj.bias, bbox_head.mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.0.attn.linear.0.weight, bbox_head.mask_head.blocks.0.attn.linear.0.bias, bbox_head.mask_head.blocks.0.head_norm2.weight, bbox_head.mask_head.blocks.0.head_norm2.bias, bbox_head.mask_head.blocks.0.mlp.fc1.weight, bbox_head.mask_head.blocks.0.mlp.fc1.bias, bbox_head.mask_head.blocks.0.mlp.fc2.weight, bbox_head.mask_head.blocks.0.mlp.fc2.bias, bbox_head.mask_head.blocks.1.head_norm1.weight, bbox_head.mask_head.blocks.1.head_norm1.bias, bbox_head.mask_head.blocks.1.attn.q.weight, bbox_head.mask_head.blocks.1.attn.q.bias, bbox_head.mask_head.blocks.1.attn.k.weight, bbox_head.mask_head.blocks.1.attn.k.bias, bbox_head.mask_head.blocks.1.attn.v.weight, bbox_head.mask_head.blocks.1.attn.v.bias, bbox_head.mask_head.blocks.1.attn.proj.weight, bbox_head.mask_head.blocks.1.attn.proj.bias, bbox_head.mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.1.attn.linear.0.weight, bbox_head.mask_head.blocks.1.attn.linear.0.bias, bbox_head.mask_head.blocks.1.head_norm2.weight, bbox_head.mask_head.blocks.1.head_norm2.bias, bbox_head.mask_head.blocks.1.mlp.fc1.weight, bbox_head.mask_head.blocks.1.mlp.fc1.bias, bbox_head.mask_head.blocks.1.mlp.fc2.weight, bbox_head.mask_head.blocks.1.mlp.fc2.bias, bbox_head.mask_head.blocks.2.head_norm1.weight, bbox_head.mask_head.blocks.2.head_norm1.bias, bbox_head.mask_head.blocks.2.attn.q.weight, bbox_head.mask_head.blocks.2.attn.q.bias, bbox_head.mask_head.blocks.2.attn.k.weight, bbox_head.mask_head.blocks.2.attn.k.bias, bbox_head.mask_head.blocks.2.attn.v.weight, bbox_head.mask_head.blocks.2.attn.v.bias, bbox_head.mask_head.blocks.2.attn.proj.weight, bbox_head.mask_head.blocks.2.attn.proj.bias, bbox_head.mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.2.attn.linear.0.weight, bbox_head.mask_head.blocks.2.attn.linear.0.bias, bbox_head.mask_head.blocks.2.head_norm2.weight, bbox_head.mask_head.blocks.2.head_norm2.bias, 
bbox_head.mask_head.blocks.2.mlp.fc1.weight, bbox_head.mask_head.blocks.2.mlp.fc1.bias, bbox_head.mask_head.blocks.2.mlp.fc2.weight, bbox_head.mask_head.blocks.2.mlp.fc2.bias, bbox_head.mask_head.blocks.3.head_norm1.weight, bbox_head.mask_head.blocks.3.head_norm1.bias, bbox_head.mask_head.blocks.3.attn.q.weight, bbox_head.mask_head.blocks.3.attn.q.bias, bbox_head.mask_head.blocks.3.attn.k.weight, bbox_head.mask_head.blocks.3.attn.k.bias, bbox_head.mask_head.blocks.3.attn.v.weight, bbox_head.mask_head.blocks.3.attn.v.bias, bbox_head.mask_head.blocks.3.attn.proj.weight, bbox_head.mask_head.blocks.3.attn.proj.bias, bbox_head.mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.3.attn.linear.0.weight, bbox_head.mask_head.blocks.3.attn.linear.0.bias, bbox_head.mask_head.blocks.3.head_norm2.weight, bbox_head.mask_head.blocks.3.head_norm2.bias, bbox_head.mask_head.blocks.3.mlp.fc1.weight, bbox_head.mask_head.blocks.3.mlp.fc1.bias, bbox_head.mask_head.blocks.3.mlp.fc2.weight, bbox_head.mask_head.blocks.3.mlp.fc2.bias, bbox_head.mask_head.attnen.q.weight, bbox_head.mask_head.attnen.q.bias, bbox_head.mask_head.attnen.k.weight, bbox_head.mask_head.attnen.k.bias, bbox_head.mask_head.attnen.linear_l1.0.weight, bbox_head.mask_head.attnen.linear_l1.0.bias, bbox_head.mask_head.attnen.linear_l2.0.weight, bbox_head.mask_head.attnen.linear_l2.0.bias, bbox_head.mask_head.attnen.linear_l3.0.weight, bbox_head.mask_head.attnen.linear_l3.0.bias, bbox_head.mask_head.attnen.linear.0.weight, bbox_head.mask_head.attnen.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm1.weight, bbox_head.mask_head2.blocks.0.head_norm1.bias, bbox_head.mask_head2.blocks.0.attn.q.weight, bbox_head.mask_head2.blocks.0.attn.q.bias, bbox_head.mask_head2.blocks.0.attn.k.weight, bbox_head.mask_head2.blocks.0.attn.k.bias, bbox_head.mask_head2.blocks.0.attn.v.weight, bbox_head.mask_head2.blocks.0.attn.v.bias, bbox_head.mask_head2.blocks.0.attn.proj.weight, bbox_head.mask_head2.blocks.0.attn.proj.bias, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.0.attn.linear.0.weight, bbox_head.mask_head2.blocks.0.attn.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm2.weight, bbox_head.mask_head2.blocks.0.head_norm2.bias, bbox_head.mask_head2.blocks.0.mlp.fc1.weight, bbox_head.mask_head2.blocks.0.mlp.fc1.bias, bbox_head.mask_head2.blocks.0.mlp.fc2.weight, bbox_head.mask_head2.blocks.0.mlp.fc2.bias, bbox_head.mask_head2.blocks.0.self_attention.qkv.weight, bbox_head.mask_head2.blocks.0.self_attention.qkv.bias, bbox_head.mask_head2.blocks.0.self_attention.proj.weight, bbox_head.mask_head2.blocks.0.self_attention.proj.bias, bbox_head.mask_head2.blocks.0.norm3.weight, bbox_head.mask_head2.blocks.0.norm3.bias, bbox_head.mask_head2.blocks.1.head_norm1.weight, bbox_head.mask_head2.blocks.1.head_norm1.bias, bbox_head.mask_head2.blocks.1.attn.q.weight, bbox_head.mask_head2.blocks.1.attn.q.bias, bbox_head.mask_head2.blocks.1.attn.k.weight, bbox_head.mask_head2.blocks.1.attn.k.bias, 
bbox_head.mask_head2.blocks.1.attn.v.weight, bbox_head.mask_head2.blocks.1.attn.v.bias, bbox_head.mask_head2.blocks.1.attn.proj.weight, bbox_head.mask_head2.blocks.1.attn.proj.bias, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.1.attn.linear.0.weight, bbox_head.mask_head2.blocks.1.attn.linear.0.bias, bbox_head.mask_head2.blocks.1.head_norm2.weight, bbox_head.mask_head2.blocks.1.head_norm2.bias, bbox_head.mask_head2.blocks.1.mlp.fc1.weight, bbox_head.mask_head2.blocks.1.mlp.fc1.bias, bbox_head.mask_head2.blocks.1.mlp.fc2.weight, bbox_head.mask_head2.blocks.1.mlp.fc2.bias, bbox_head.mask_head2.blocks.1.self_attention.qkv.weight, bbox_head.mask_head2.blocks.1.self_attention.qkv.bias, bbox_head.mask_head2.blocks.1.self_attention.proj.weight, bbox_head.mask_head2.blocks.1.self_attention.proj.bias, bbox_head.mask_head2.blocks.1.norm3.weight, bbox_head.mask_head2.blocks.1.norm3.bias, bbox_head.mask_head2.blocks.2.head_norm1.weight, bbox_head.mask_head2.blocks.2.head_norm1.bias, bbox_head.mask_head2.blocks.2.attn.q.weight, bbox_head.mask_head2.blocks.2.attn.q.bias, bbox_head.mask_head2.blocks.2.attn.k.weight, bbox_head.mask_head2.blocks.2.attn.k.bias, bbox_head.mask_head2.blocks.2.attn.v.weight, bbox_head.mask_head2.blocks.2.attn.v.bias, bbox_head.mask_head2.blocks.2.attn.proj.weight, bbox_head.mask_head2.blocks.2.attn.proj.bias, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.2.attn.linear.0.weight, bbox_head.mask_head2.blocks.2.attn.linear.0.bias, bbox_head.mask_head2.blocks.2.head_norm2.weight, bbox_head.mask_head2.blocks.2.head_norm2.bias, bbox_head.mask_head2.blocks.2.mlp.fc1.weight, bbox_head.mask_head2.blocks.2.mlp.fc1.bias, bbox_head.mask_head2.blocks.2.mlp.fc2.weight, bbox_head.mask_head2.blocks.2.mlp.fc2.bias, bbox_head.mask_head2.blocks.2.self_attention.qkv.weight, bbox_head.mask_head2.blocks.2.self_attention.qkv.bias, bbox_head.mask_head2.blocks.2.self_attention.proj.weight, bbox_head.mask_head2.blocks.2.self_attention.proj.bias, bbox_head.mask_head2.blocks.2.norm3.weight, bbox_head.mask_head2.blocks.2.norm3.bias, bbox_head.mask_head2.blocks.3.head_norm1.weight, bbox_head.mask_head2.blocks.3.head_norm1.bias, bbox_head.mask_head2.blocks.3.attn.q.weight, bbox_head.mask_head2.blocks.3.attn.q.bias, bbox_head.mask_head2.blocks.3.attn.k.weight, bbox_head.mask_head2.blocks.3.attn.k.bias, bbox_head.mask_head2.blocks.3.attn.v.weight, bbox_head.mask_head2.blocks.3.attn.v.bias, bbox_head.mask_head2.blocks.3.attn.proj.weight, bbox_head.mask_head2.blocks.3.attn.proj.bias, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.3.attn.linear.0.weight, bbox_head.mask_head2.blocks.3.attn.linear.0.bias, 
bbox_head.mask_head2.blocks.3.head_norm2.weight, bbox_head.mask_head2.blocks.3.head_norm2.bias, bbox_head.mask_head2.blocks.3.mlp.fc1.weight, bbox_head.mask_head2.blocks.3.mlp.fc1.bias, bbox_head.mask_head2.blocks.3.mlp.fc2.weight, bbox_head.mask_head2.blocks.3.mlp.fc2.bias, bbox_head.mask_head2.blocks.3.self_attention.qkv.weight, bbox_head.mask_head2.blocks.3.self_attention.qkv.bias, bbox_head.mask_head2.blocks.3.self_attention.proj.weight, bbox_head.mask_head2.blocks.3.self_attention.proj.bias, bbox_head.mask_head2.blocks.3.norm3.weight, bbox_head.mask_head2.blocks.3.norm3.bias, bbox_head.mask_head2.blocks.4.head_norm1.weight, bbox_head.mask_head2.blocks.4.head_norm1.bias, bbox_head.mask_head2.blocks.4.attn.q.weight, bbox_head.mask_head2.blocks.4.attn.q.bias, bbox_head.mask_head2.blocks.4.attn.k.weight, bbox_head.mask_head2.blocks.4.attn.k.bias, bbox_head.mask_head2.blocks.4.attn.v.weight, bbox_head.mask_head2.blocks.4.attn.v.bias, bbox_head.mask_head2.blocks.4.attn.proj.weight, bbox_head.mask_head2.blocks.4.attn.proj.bias, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.4.attn.linear.0.weight, bbox_head.mask_head2.blocks.4.attn.linear.0.bias, bbox_head.mask_head2.blocks.4.head_norm2.weight, bbox_head.mask_head2.blocks.4.head_norm2.bias, bbox_head.mask_head2.blocks.4.mlp.fc1.weight, bbox_head.mask_head2.blocks.4.mlp.fc1.bias, bbox_head.mask_head2.blocks.4.mlp.fc2.weight, bbox_head.mask_head2.blocks.4.mlp.fc2.bias, bbox_head.mask_head2.blocks.4.self_attention.qkv.weight, bbox_head.mask_head2.blocks.4.self_attention.qkv.bias, bbox_head.mask_head2.blocks.4.self_attention.proj.weight, bbox_head.mask_head2.blocks.4.self_attention.proj.bias, bbox_head.mask_head2.blocks.4.norm3.weight, bbox_head.mask_head2.blocks.4.norm3.bias, bbox_head.mask_head2.blocks.5.head_norm1.weight, bbox_head.mask_head2.blocks.5.head_norm1.bias, bbox_head.mask_head2.blocks.5.attn.q.weight, bbox_head.mask_head2.blocks.5.attn.q.bias, bbox_head.mask_head2.blocks.5.attn.k.weight, bbox_head.mask_head2.blocks.5.attn.k.bias, bbox_head.mask_head2.blocks.5.attn.v.weight, bbox_head.mask_head2.blocks.5.attn.v.bias, bbox_head.mask_head2.blocks.5.attn.proj.weight, bbox_head.mask_head2.blocks.5.attn.proj.bias, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.5.attn.linear.0.weight, bbox_head.mask_head2.blocks.5.attn.linear.0.bias, bbox_head.mask_head2.blocks.5.head_norm2.weight, bbox_head.mask_head2.blocks.5.head_norm2.bias, bbox_head.mask_head2.blocks.5.mlp.fc1.weight, bbox_head.mask_head2.blocks.5.mlp.fc1.bias, bbox_head.mask_head2.blocks.5.mlp.fc2.weight, bbox_head.mask_head2.blocks.5.mlp.fc2.bias, bbox_head.mask_head2.blocks.5.self_attention.qkv.weight, bbox_head.mask_head2.blocks.5.self_attention.qkv.bias, bbox_head.mask_head2.blocks.5.self_attention.proj.weight, bbox_head.mask_head2.blocks.5.self_attention.proj.bias, bbox_head.mask_head2.blocks.5.norm3.weight, bbox_head.mask_head2.blocks.5.norm3.bias, 
bbox_head.mask_head2.attnen.q.weight, bbox_head.mask_head2.attnen.q.bias, bbox_head.mask_head2.attnen.k.weight, bbox_head.mask_head2.attnen.k.bias, bbox_head.mask_head2.attnen.linear_l1.0.weight, bbox_head.mask_head2.attnen.linear_l1.0.bias, bbox_head.mask_head2.attnen.linear_l2.0.weight, bbox_head.mask_head2.attnen.linear_l2.0.bias, bbox_head.mask_head2.attnen.linear_l3.0.weight, bbox_head.mask_head2.attnen.linear_l3.0.bias, bbox_head.mask_head2.attnen.linear.0.weight, bbox_head.mask_head2.attnen.linear.0.bias
    
    missing keys in source state_dict: bbox_head.cls_thing_branches.0.weight, bbox_head.cls_thing_branches.0.bias, bbox_head.cls_thing_branches.1.weight, bbox_head.cls_thing_branches.1.bias, bbox_head.cls_thing_branches.2.weight, bbox_head.cls_thing_branches.2.bias, bbox_head.cls_thing_branches.3.weight, bbox_head.cls_thing_branches.3.bias, bbox_head.things_mask_head.blocks.0.head_norm1.weight, bbox_head.things_mask_head.blocks.0.head_norm1.bias, bbox_head.things_mask_head.blocks.0.attn.q.weight, bbox_head.things_mask_head.blocks.0.attn.q.bias, bbox_head.things_mask_head.blocks.0.attn.k.weight, bbox_head.things_mask_head.blocks.0.attn.k.bias, bbox_head.things_mask_head.blocks.0.attn.v.weight, bbox_head.things_mask_head.blocks.0.attn.v.bias, bbox_head.things_mask_head.blocks.0.attn.proj.weight, bbox_head.things_mask_head.blocks.0.attn.proj.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear.0.bias, bbox_head.things_mask_head.blocks.0.head_norm2.weight, bbox_head.things_mask_head.blocks.0.head_norm2.bias, bbox_head.things_mask_head.blocks.0.mlp.fc1.weight, bbox_head.things_mask_head.blocks.0.mlp.fc1.bias, bbox_head.things_mask_head.blocks.0.mlp.fc2.weight, bbox_head.things_mask_head.blocks.0.mlp.fc2.bias, bbox_head.things_mask_head.blocks.1.head_norm1.weight, bbox_head.things_mask_head.blocks.1.head_norm1.bias, bbox_head.things_mask_head.blocks.1.attn.q.weight, bbox_head.things_mask_head.blocks.1.attn.q.bias, bbox_head.things_mask_head.blocks.1.attn.k.weight, bbox_head.things_mask_head.blocks.1.attn.k.bias, bbox_head.things_mask_head.blocks.1.attn.v.weight, bbox_head.things_mask_head.blocks.1.attn.v.bias, bbox_head.things_mask_head.blocks.1.attn.proj.weight, bbox_head.things_mask_head.blocks.1.attn.proj.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear.0.bias, bbox_head.things_mask_head.blocks.1.head_norm2.weight, bbox_head.things_mask_head.blocks.1.head_norm2.bias, bbox_head.things_mask_head.blocks.1.mlp.fc1.weight, bbox_head.things_mask_head.blocks.1.mlp.fc1.bias, bbox_head.things_mask_head.blocks.1.mlp.fc2.weight, bbox_head.things_mask_head.blocks.1.mlp.fc2.bias, bbox_head.things_mask_head.blocks.2.head_norm1.weight, bbox_head.things_mask_head.blocks.2.head_norm1.bias, bbox_head.things_mask_head.blocks.2.attn.q.weight, bbox_head.things_mask_head.blocks.2.attn.q.bias, bbox_head.things_mask_head.blocks.2.attn.k.weight, bbox_head.things_mask_head.blocks.2.attn.k.bias, bbox_head.things_mask_head.blocks.2.attn.v.weight, bbox_head.things_mask_head.blocks.2.attn.v.bias, bbox_head.things_mask_head.blocks.2.attn.proj.weight, bbox_head.things_mask_head.blocks.2.attn.proj.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.weight, 
bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear.0.bias, bbox_head.things_mask_head.blocks.2.head_norm2.weight, bbox_head.things_mask_head.blocks.2.head_norm2.bias, bbox_head.things_mask_head.blocks.2.mlp.fc1.weight, bbox_head.things_mask_head.blocks.2.mlp.fc1.bias, bbox_head.things_mask_head.blocks.2.mlp.fc2.weight, bbox_head.things_mask_head.blocks.2.mlp.fc2.bias, bbox_head.things_mask_head.blocks.3.head_norm1.weight, bbox_head.things_mask_head.blocks.3.head_norm1.bias, bbox_head.things_mask_head.blocks.3.attn.q.weight, bbox_head.things_mask_head.blocks.3.attn.q.bias, bbox_head.things_mask_head.blocks.3.attn.k.weight, bbox_head.things_mask_head.blocks.3.attn.k.bias, bbox_head.things_mask_head.blocks.3.attn.v.weight, bbox_head.things_mask_head.blocks.3.attn.v.bias, bbox_head.things_mask_head.blocks.3.attn.proj.weight, bbox_head.things_mask_head.blocks.3.attn.proj.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear.0.bias, bbox_head.things_mask_head.blocks.3.head_norm2.weight, bbox_head.things_mask_head.blocks.3.head_norm2.bias, bbox_head.things_mask_head.blocks.3.mlp.fc1.weight, bbox_head.things_mask_head.blocks.3.mlp.fc1.bias, bbox_head.things_mask_head.blocks.3.mlp.fc2.weight, bbox_head.things_mask_head.blocks.3.mlp.fc2.bias, bbox_head.things_mask_head.attnen.q.weight, bbox_head.things_mask_head.attnen.q.bias, bbox_head.things_mask_head.attnen.k.weight, bbox_head.things_mask_head.attnen.k.bias, bbox_head.things_mask_head.attnen.linear_l1.0.weight, bbox_head.things_mask_head.attnen.linear_l1.0.bias, bbox_head.things_mask_head.attnen.linear_l2.0.weight, bbox_head.things_mask_head.attnen.linear_l2.0.bias, bbox_head.things_mask_head.attnen.linear_l3.0.weight, bbox_head.things_mask_head.attnen.linear_l3.0.bias, bbox_head.things_mask_head.attnen.linear.0.weight, bbox_head.things_mask_head.attnen.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm1.weight, bbox_head.stuff_mask_head.blocks.0.head_norm1.bias, bbox_head.stuff_mask_head.blocks.0.attn.q.weight, bbox_head.stuff_mask_head.blocks.0.attn.q.bias, bbox_head.stuff_mask_head.blocks.0.attn.k.weight, bbox_head.stuff_mask_head.blocks.0.attn.k.bias, bbox_head.stuff_mask_head.blocks.0.attn.v.weight, bbox_head.stuff_mask_head.blocks.0.attn.v.bias, bbox_head.stuff_mask_head.blocks.0.attn.proj.weight, bbox_head.stuff_mask_head.blocks.0.attn.proj.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear.0.weight, 
bbox_head.stuff_mask_head.blocks.0.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm2.weight, bbox_head.stuff_mask_head.blocks.0.head_norm2.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.0.norm3.weight, bbox_head.stuff_mask_head.blocks.0.norm3.bias, bbox_head.stuff_mask_head.blocks.1.head_norm1.weight, bbox_head.stuff_mask_head.blocks.1.head_norm1.bias, bbox_head.stuff_mask_head.blocks.1.attn.q.weight, bbox_head.stuff_mask_head.blocks.1.attn.q.bias, bbox_head.stuff_mask_head.blocks.1.attn.k.weight, bbox_head.stuff_mask_head.blocks.1.attn.k.bias, bbox_head.stuff_mask_head.blocks.1.attn.v.weight, bbox_head.stuff_mask_head.blocks.1.attn.v.bias, bbox_head.stuff_mask_head.blocks.1.attn.proj.weight, bbox_head.stuff_mask_head.blocks.1.attn.proj.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.1.head_norm2.weight, bbox_head.stuff_mask_head.blocks.1.head_norm2.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.1.norm3.weight, bbox_head.stuff_mask_head.blocks.1.norm3.bias, bbox_head.stuff_mask_head.blocks.2.head_norm1.weight, bbox_head.stuff_mask_head.blocks.2.head_norm1.bias, bbox_head.stuff_mask_head.blocks.2.attn.q.weight, bbox_head.stuff_mask_head.blocks.2.attn.q.bias, bbox_head.stuff_mask_head.blocks.2.attn.k.weight, bbox_head.stuff_mask_head.blocks.2.attn.k.bias, bbox_head.stuff_mask_head.blocks.2.attn.v.weight, bbox_head.stuff_mask_head.blocks.2.attn.v.bias, bbox_head.stuff_mask_head.blocks.2.attn.proj.weight, bbox_head.stuff_mask_head.blocks.2.attn.proj.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.2.head_norm2.weight, bbox_head.stuff_mask_head.blocks.2.head_norm2.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc2.weight, 
bbox_head.stuff_mask_head.blocks.2.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.2.norm3.weight, bbox_head.stuff_mask_head.blocks.2.norm3.bias, bbox_head.stuff_mask_head.blocks.3.head_norm1.weight, bbox_head.stuff_mask_head.blocks.3.head_norm1.bias, bbox_head.stuff_mask_head.blocks.3.attn.q.weight, bbox_head.stuff_mask_head.blocks.3.attn.q.bias, bbox_head.stuff_mask_head.blocks.3.attn.k.weight, bbox_head.stuff_mask_head.blocks.3.attn.k.bias, bbox_head.stuff_mask_head.blocks.3.attn.v.weight, bbox_head.stuff_mask_head.blocks.3.attn.v.bias, bbox_head.stuff_mask_head.blocks.3.attn.proj.weight, bbox_head.stuff_mask_head.blocks.3.attn.proj.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.3.head_norm2.weight, bbox_head.stuff_mask_head.blocks.3.head_norm2.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.3.norm3.weight, bbox_head.stuff_mask_head.blocks.3.norm3.bias, bbox_head.stuff_mask_head.blocks.4.head_norm1.weight, bbox_head.stuff_mask_head.blocks.4.head_norm1.bias, bbox_head.stuff_mask_head.blocks.4.attn.q.weight, bbox_head.stuff_mask_head.blocks.4.attn.q.bias, bbox_head.stuff_mask_head.blocks.4.attn.k.weight, bbox_head.stuff_mask_head.blocks.4.attn.k.bias, bbox_head.stuff_mask_head.blocks.4.attn.v.weight, bbox_head.stuff_mask_head.blocks.4.attn.v.bias, bbox_head.stuff_mask_head.blocks.4.attn.proj.weight, bbox_head.stuff_mask_head.blocks.4.attn.proj.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.4.head_norm2.weight, bbox_head.stuff_mask_head.blocks.4.head_norm2.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.bias, 
bbox_head.stuff_mask_head.blocks.4.norm3.weight, bbox_head.stuff_mask_head.blocks.4.norm3.bias, bbox_head.stuff_mask_head.blocks.5.head_norm1.weight, bbox_head.stuff_mask_head.blocks.5.head_norm1.bias, bbox_head.stuff_mask_head.blocks.5.attn.q.weight, bbox_head.stuff_mask_head.blocks.5.attn.q.bias, bbox_head.stuff_mask_head.blocks.5.attn.k.weight, bbox_head.stuff_mask_head.blocks.5.attn.k.bias, bbox_head.stuff_mask_head.blocks.5.attn.v.weight, bbox_head.stuff_mask_head.blocks.5.attn.v.bias, bbox_head.stuff_mask_head.blocks.5.attn.proj.weight, bbox_head.stuff_mask_head.blocks.5.attn.proj.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.5.head_norm2.weight, bbox_head.stuff_mask_head.blocks.5.head_norm2.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.5.norm3.weight, bbox_head.stuff_mask_head.blocks.5.norm3.bias, bbox_head.stuff_mask_head.attnen.q.weight, bbox_head.stuff_mask_head.attnen.q.bias, bbox_head.stuff_mask_head.attnen.k.weight, bbox_head.stuff_mask_head.attnen.k.bias, bbox_head.stuff_mask_head.attnen.linear_l1.0.weight, bbox_head.stuff_mask_head.attnen.linear_l1.0.bias, bbox_head.stuff_mask_head.attnen.linear_l2.0.weight, bbox_head.stuff_mask_head.attnen.linear_l2.0.bias, bbox_head.stuff_mask_head.attnen.linear_l3.0.weight, bbox_head.stuff_mask_head.attnen.linear_l3.0.bias, bbox_head.stuff_mask_head.attnen.linear.0.weight, bbox_head.stuff_mask_head.attnen.linear.0.bias
    
    opened by aviadmx 3
  • ImportError Libtorch_cpu.so: undefined symbol


    Thank you for this awesome work.

    Unfortunately, I can't run the training because I get the following error:

    ./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 1
    + CONFIG=./configs/panformer/panformer_r50_24e_coco_panoptic.py
    + GPUS=1
    + PORT=29503
    ++ dirname ./tools/dist_train.sh
    ++ dirname ./tools/dist_train.sh
    + PYTHONPATH=./tools/..:
    + python -m torch.distributed.launch --nproc_per_node=1 --master_port=29503 ./tools/train.py ./configs/panformer/panformer_r50_24e_coco_panoptic.py --launcher pytorch --deterministic
    Traceback (most recent call last):
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/__init__.py", line 197, in <module>
        from torch._C import *  # noqa: F403
    ImportError: /home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: _ZNK3c1010TensorImpl23shallow_copy_and_detachERKNS_15VariableVersionEb
    

    This is my environment:

    (environment screenshots omitted)

    opened by EnnioEvo 0
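
    This kind of undefined-symbol error in libtorch_cpu.so usually indicates that a compiled extension (for example mmcv-full) was built against a different PyTorch build than the one currently installed. A minimal check of the installed combination (a sketch, not from the repo):

    import torch
    import mmcv

    # mmcv-full must be compiled for this exact torch/CUDA combination;
    # reinstall mmcv-full if torch was changed after it was installed.
    print('torch', torch.__version__, 'cuda', torch.version.cuda)
    print('mmcv', mmcv.__version__)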