Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers

Overview

This repository is the official implementation of the paper Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers.



Results

results on COCO val

Backbone Method Lr Schd PQ Config Download
R-50 Panoptic-SegFormer 1x 48.0 config model
R-50 Panoptic-SegFormer 2x 49.6 config model
R-101 Panoptic-SegFormer 2x 50.6 config model
PVTv2-B5 (much lighter than Swin-L) Panoptic-SegFormer 2x 55.6 config model
Swin-L (window size 7) Panoptic-SegFormer 2x 55.8 config model
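
The released models can be loaded through mmdet's standard inference API. A minimal sketch (the checkpoint filename is a placeholder for one downloaded from the table above, and importing easymd first is needed so the custom modules register with mmdet):

from mmdet.apis import init_detector, inference_detector
import easymd  # noqa: F401 - registers Panoptic-SegFormer modules with mmdet

config = './configs/panformer/panformer_r50_24e_coco_panoptic.py'
checkpoint = './checkpoints/model.pth'  # placeholder: a checkpoint from the table

model = init_detector(config, checkpoint=checkpoint, device='cuda:0')
results = inference_detector(model, ['demo.jpg'])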

Install

Prerequisites

  • Linux
  • Python 3.6+
  • PyTorch 1.5+
  • torchvision
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • mmcv-full==1.3.4
  • mmdet==2.12.0 # higher versions may not work
  • timm==0.4.5
  • einops==0.3.0
  • Pillow==8.0.1
  • opencv-python==4.5.2
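
The pinned requirements above can be installed with pip; a possible invocation (note: the mmcv-full wheel index must match your CUDA and PyTorch versions, so cu102/torch1.8.0 below is only an example):

pip install timm==0.4.5 einops==0.3.0 Pillow==8.0.1 opencv-python==4.5.2
pip install mmcv-full==1.3.4 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8.0/index.html
pip install mmdet==2.12.0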

note: PyTorch 1.8 has a bug in its adamw.py that is fixed in PyTorch 1.9; if you are on PyTorch 1.8, you can patch it by porting the fix from the PyTorch 1.9 version of adamw.py.

install Panoptic SegFormer

python setup.py install 
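
If you plan to modify the code, an editable install should also work (assuming the usual setuptools layout):

pip install -v -e .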

Datasets

When I began this project, mmdet did not officially support panoptic segmentation, so for convenience the dataset is converted from panoptic segmentation format to instance (detection) segmentation format.

1. prepare data (COCO)

cd Panoptic-SegFormer
mkdir datasets
cd datasets
ln -s path_to_coco coco
mkdir annotations/
cd annotations
wget http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
unzip panoptic_annotations_trainval2017.zip

The directory structure should then be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...

2. convert panoptic format to detection format

cd Panoptic-SegFormer
./tools/convert_panoptic_coco.sh coco

The directory structure should then be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017_detection_format.json
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   ├── panoptic_val2017_detection_format.json
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...
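
To sanity-check the conversion, the generated files should be ordinary COCO detection-format JSON; a quick inspection sketch:

import json

with open('datasets/annotations/panoptic_val2017_detection_format.json') as f:
    det = json.load(f)
# A COCO detection-format file carries these top-level keys.
print(sorted(det.keys()))  # expected to include: annotations, categories, images
print(sorted(det['annotations'][0].keys()))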

Run (panoptic segmentation)

train

Train on a single machine with 8 GPUs:

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 8

test

./tools/dist_test.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py path/to/model.pth 8
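
For a single-GPU run (also asked about in the comments below), passing a GPU count of 1 to the distributed scripts is the closest supported path; mmdet's plain train entry point may also work, though neither is verified here and one comment reports broken single-GPU evaluation paths:

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 1
python ./tools/train.py ./configs/panformer/panformer_r50_24e_coco_panoptic.py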

Citing

If you use Panoptic SegFormer in your research, please use the following BibTeX entry.

@article{li2021panoptic,
  title={Panoptic SegFormer},
  author={Li, Zhiqi and Wang, Wenhai and Xie, Enze and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Lu, Tong and Luo, Ping},
  journal={arXiv},
  year={2021}
}

Acknowledgement

Mainly based on Deformable DETR from MMDetection.

Thanks very much to other open-source works: timm, Panoptic FCN, MaskFormer, QueryInst.

Comments
  • How to demo one picture?

    Dear friend, thank you for your good work. We do not want to download the COCO dataset; we just want to provide one picture, segment it, and show the result. How can we do that? Best regards,

    opened by delldu 3
  • What does pvt_v2_ap stand for in the code?

    There are many names in the code that are hard to understand. For example, what does pvt_v2_ap stand for? What does single_stage_w_mask stand for? And what are the differences between those files?

    opened by jinfagang 2
  • How to visualize a demo image?

    Dear friend, how can I visualize the segmentation result on custom images? I ran inference.py and did not get a good result. I think there are some faults in my code.

    Here is my code:

    from mmdet.apis.inference import init_detector, inference_detector
    import easymd  # registers Panoptic-SegFormer modules with mmdet
    import cv2
    import random
    import colorsys
    import numpy as np
    
    def random_colors(N, bright=True):
        # Generate N visually distinct colors by spacing hues in HSV space.
        brightness = 1.0 if bright else 0.7
        hsv = [(i / float(N), 1, brightness) for i in range(N)]
        colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
        random.shuffle(colors)
        return colors
    
    def apply_mask(image, mask, color, alpha=0.5):
        # Alpha-blend the color into the image wherever the mask is nonzero.
        for c in range(3):
            image[:, :, c] = np.where(mask == 0,
                                      image[:, :, c],
                                      image[:, :, c] *
                                      (1 - alpha) + alpha * color[c] * 255)
        return image
    
    config = './configs/panformer/panformer_pvtb5_24e_coco_panoptic.py'
    checkpoints = './checkpoints/panoptic_segformer_pvtv2b5_2x.pth'
    img_path = 'img_path'
    mask_save_path = 'save_path'
    
    colors = random_colors(80)
    
    model = init_detector(config, checkpoint=checkpoints)
    
    results = inference_detector(model, [img_path])
    
    img = cv2.imread(img_path)
    
    seg = results['segm'][0]
    N = len(seg)
    
    masked_image = img.copy()
    for i in range(N):
        color = colors[i]
        # Merge all masks of the i-th class into a single binary mask.
        masks = np.sum(seg[i], axis=0)
        masked_image = apply_mask(masked_image, masks, color)
    
    cv2.imwrite(mask_save_path, masked_image)
    
    opened by garriton 0
  • Location decoder loss

    https://github.com/zhiqi-li/Panoptic-SegFormer/blob/e604ef810eaf5101106d221db4b6970c2daca5c9/easymd/models/panformer/panformer_head.py#L360-L364

    Why does the location decoder only compute the losses of the first L-1 layers rather than all L layers?

    opened by hust-nj 0
  • Instructions for a single-GPU run

    Hi, thanks for sharing your work. I was trying to run it on a single GPU. Would you please add some instructions or scripts for running on a single GPU? That would be a great help.

    Kind regards, Abdullah

    opened by nazib 1
  • Impossible to debug, single-GPU code paths are broken

    Multi-GPU training and evaluation work great; however, when debugging you typically opt for a single GPU. In that case the code breaks in several places during evaluation on the validation set. Any chance for a hotfix? :)

    To reproduce, run the code from PyCharm in debug mode with only one GPU available.

    opened by aviadmx 1
  • Why are instance annotations required alongside the panoptic ones?

    The model solves the panoptic segmentation task, so why does the validation dataset use the instance segmentation annotations?

    data = dict(
        samples_per_gpu=2,
        workers_per_gpu=2,
        train=dict(
            type=dataset_type,
            ann_file='./datasets/annotations/panoptic_train2017_detection_format.json',
            img_prefix=data_root + 'train2017/',
            pipeline=train_pipeline),
        val=dict(
            segmentations_folder='./seg',
            gt_json='./datasets/annotations/panoptic_val2017.json',
            gt_folder='./datasets/annotations/panoptic_val2017',
            type=dataset_type,
            ann_file=data_root + 'annotations/instances_val2017.json',  # Why?
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline),
        test=dict(
            segmentations_folder='./seg',
            gt_json='./datasets/annotations/panoptic_val2017.json',
            gt_folder='./datasets/annotations/panoptic_val2017',
            type=dataset_type,
            #ann_file='./datasets/coco/annotations/image_info_test-dev2017.json',
            ann_file=data_root + 'annotations/instances_val2017.json',  # Why?
            #img_prefix=data_root + '/test2017/',
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline))
    

    We eventually use the instances_val2017.json file instead of panoptic_val2017.json.

    opened by aviadmx 3
  • Loading checkpoint

    When loading the Swin-L checkpoint by adding a load_from line to the config configs/panformer/panformer_swinl_24e_coco_panoptic.py as follows:

    load_from='./pretrained/panoptic_segformer_swinl_2x.pth'

    the loading fails with a key-mismatch error:
    unexpected keys in source state_dict (abridged; each entry expands to .weight/.bias pairs across the listed blocks):
      bbox_head.cls_branches2.{0..3}.*
      bbox_head.mask_head.blocks.{0..3}.*, bbox_head.mask_head.attnen.*
      bbox_head.mask_head2.blocks.{0..5}.*, bbox_head.mask_head2.attnen.*

    missing keys in source state_dict (abridged):
      bbox_head.cls_thing_branches.{0..3}.*
      bbox_head.things_mask_head.blocks.{0..3}.*, bbox_head.things_mask_head.attnen.*
      bbox_head.stuff_mask_head.blocks.{0..5}.*, bbox_head.stuff_mask_head.attnen.*
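
    To see which parameter families a checkpoint actually contains before loading it, a quick inspection along these lines can help (a sketch; the path is the one from above):

    import torch
    ckpt = torch.load('./pretrained/panoptic_segformer_swinl_2x.pth', map_location='cpu')
    state = ckpt.get('state_dict', ckpt)  # mmdet checkpoints usually nest weights under 'state_dict'
    print(sorted({'.'.join(k.split('.')[:2]) for k in state}))  # e.g. 'bbox_head.mask_head2'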
    
    opened by aviadmx 3
  • ImportError: libtorch_cpu.so: undefined symbol

    Thank you for this awesome work. Unfortunately, I can't run the training because I get the following error:

    ./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 1
    + CONFIG=./configs/panformer/panformer_r50_24e_coco_panoptic.py
    + GPUS=1
    + PORT=29503
    ++ dirname ./tools/dist_train.sh
    ++ dirname ./tools/dist_train.sh
    + PYTHONPATH=./tools/..:
    + python -m torch.distributed.launch --nproc_per_node=1 --master_port=29503 ./tools/train.py ./configs/panformer/panformer_r50_24e_coco_panoptic.py --launcher pytorch --deterministic
    Traceback (most recent call last):
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/__init__.py", line 197, in <module>
        from torch._C import *  # noqa: F403
    ImportError: /home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: _ZNK3c1010TensorImpl23shallow_copy_and_detachERKNS_15VariableVersionEb
    

    My environment is shown in the screenshots attached to the original issue.

    opened by EnnioEvo 0
Owner
Nanjing University, China.