Implementation of the paper "You Only Learn One Representation: Unified Network for Multiple Tasks".

Overview

YOLOR

Unified network architecture (figure)

To reproduce the results in the paper, please use this branch.

Model      Test Size  APtest  AP50test  AP75test  APStest  APMtest  APLtest  batch1 throughput
YOLOR-P6   1280       52.6%   70.6%     57.6%     34.7%    56.6%    64.2%    49 fps
YOLOR-W6   1280       54.1%   72.0%     59.2%     36.3%    57.9%    66.1%    47 fps
YOLOR-E6   1280       54.8%   72.7%     60.0%     36.9%    58.7%    66.9%    37 fps
YOLOR-D6   1280       55.4%   73.3%     60.6%     38.0%    59.2%    67.1%    30 fps
YOLOv4-P5  896        51.8%   70.3%     56.6%     33.4%    55.7%    63.4%    41 fps
YOLOv4-P6  1280       54.5%   72.6%     59.8%     36.6%    58.2%    65.5%    30 fps
YOLOv4-P7  1536       55.5%   73.4%     60.8%     38.4%    59.4%    67.7%    16 fps

Installation

Docker environment (recommended)

# create the docker container, you can change the share memory size if you have more.
nvidia-docker run --name yolor -it -v your_coco_path/:/coco/ -v your_code_path/:/yolor --shm-size=64g nvcr.io/nvidia/pytorch:20.11-py3

# apt install required packages
apt update
apt install -y zip htop screen libgl1-mesa-glx

# pip install required packages
pip install seaborn thop

# install mish-cuda if you want to use mish activation
# https://github.com/thomasbrandon/mish-cuda
# https://github.com/JunnYu/mish-cuda
cd /
git clone https://github.com/JunnYu/mish-cuda
cd mish-cuda
python setup.py build install

# install pytorch_wavelets if you want to use dwt down-sampling module
# https://github.com/fbcotter/pytorch_wavelets
cd /
git clone https://github.com/fbcotter/pytorch_wavelets
cd pytorch_wavelets
pip install .

# go to code folder
cd /yolor

Colab environment

git clone https://github.com/WongKinYiu/yolor
cd yolor

# pip install required packages
pip install -qr requirements.txt

# install mish-cuda if you want to use mish activation
# https://github.com/thomasbrandon/mish-cuda
# https://github.com/JunnYu/mish-cuda
git clone https://github.com/JunnYu/mish-cuda
cd mish-cuda
python setup.py build install
cd ..

# install pytorch_wavelets if you want to use dwt down-sampling module
# https://github.com/fbcotter/pytorch_wavelets
git clone https://github.com/fbcotter/pytorch_wavelets
cd pytorch_wavelets
pip install .
cd ..

Prepare COCO dataset

cd /yolor
bash scripts/get_coco.sh

Prepare pretrained weight

cd /yolor
bash scripts/get_pretrain.sh

Testing

yolor_p6.pt

python test.py --data data/coco.yaml --img 1280 --batch 32 --conf 0.001 --iou 0.65 --device 0 --cfg cfg/yolor_p6.cfg --weights yolor_p6.pt --name yolor_p6_val

You will get the results:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.52510
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.70718
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.57520
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.37058
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.56878
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.66102
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.39181
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.65229
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.71441
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.57755
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.75337
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.84013
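
If the built-in pycocotools step fails, or you want to re-score a run later, the predictions JSON written by test.py can be evaluated separately. Below is a minimal sketch, assuming COCO val2017 annotations under ../coco/annotations and that the run above saved a predictions file under runs/test/yolor_p6_val/; both paths, and the exact JSON file name, are assumptions to adjust to what test.py actually wrote on your machine.

# Standalone COCO evaluation of saved detections (sketch; both paths below are assumptions)
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

anno = COCO('../coco/annotations/instances_val2017.json')                # ground-truth annotations
pred = anno.loadRes('runs/test/yolor_p6_val/yolor_p6_predictions.json')  # detections saved by test.py

coco_eval = COCOeval(anno, pred, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints an AP/AR table like the one above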

Training

Single GPU training:

python train.py --batch-size 8 --img 1280 1280 --data coco.yaml --cfg cfg/yolor_p6.cfg --weights '' --device 0 --name yolor_p6 --hyp hyp.scratch.1280.yaml --epochs 300

Multiple GPU training:

python -m torch.distributed.launch --nproc_per_node 2 --master_port 9527 train.py --batch-size 16 --img 1280 1280 --data coco.yaml --cfg cfg/yolor_p6.cfg --weights '' --device 0,1 --sync-bn --name yolor_p6 --hyp hyp.scratch.1280.yaml --epochs 300

Training schedule in the paper:

python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train.py --batch-size 64 --img 1280 1280 --data data/coco.yaml --cfg cfg/yolor_p6.cfg --weights '' --device 0,1,2,3,4,5,6,7 --sync-bn --name yolor_p6 --hyp hyp.scratch.1280.yaml --epochs 300
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 tune.py --batch-size 64 --img 1280 1280 --data data/coco.yaml --cfg cfg/yolor_p6.cfg --weights 'runs/train/yolor_p6/weights/last_298.pt' --device 0,1,2,3,4,5,6,7 --sync-bn --name yolor_p6-tune --hyp hyp.finetune.1280.yaml --epochs 450
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train.py --batch-size 64 --img 1280 1280 --data data/coco.yaml --cfg cfg/yolor_p6.cfg --weights 'runs/train/yolor_p6-tune/weights/epoch_424.pt' --device 0,1,2,3,4,5,6,7 --sync-bn --name yolor_p6-fine --hyp hyp.finetune.1280.yaml --epochs 450

Inference

yolor_p6.pt

python detect.py --source inference/images/horses.jpg --cfg cfg/yolor_p6.cfg --weights yolor_p6.pt --conf 0.25 --img-size 1280 --device 0

You will get the results:

Detections on inference/images/horses.jpg (figure)
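
Detection can also be scripted from Python. The snippet below is a minimal sketch, not the repository's official API: it assumes the Darknet class from models/models.py and non_max_suppression from utils/general.py behave as they do in detect.py, and it uses a plain square resize instead of detect.py's letterbox padding, so boxes are in the resized-image coordinate frame.

# Programmatic inference sketch (assumes Darknet and non_max_suppression as used in detect.py)
import cv2
import numpy as np
import torch

from models.models import Darknet
from utils.general import non_max_suppression

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = Darknet('cfg/yolor_p6.cfg', 1280).to(device)
model.load_state_dict(torch.load('yolor_p6.pt', map_location=device)['model'])
model.eval()

img0 = cv2.imread('inference/images/horses.jpg')                  # BGR image
img = cv2.resize(img0, (1280, 1280))                              # simple resize (detect.py uses letterbox)
img = img[:, :, ::-1].transpose(2, 0, 1)                          # BGR -> RGB, HWC -> CHW
img = torch.from_numpy(np.ascontiguousarray(img)).float() / 255.0

with torch.no_grad():
    pred = model(img.unsqueeze(0).to(device))[0]
det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.5)[0]
print(det)   # one row per detection: x1, y1, x2, y2, confidence, class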

Citation

@article{wang2021you,
  title={You Only Learn One Representation: Unified Network for Multiple Tasks},
  author={Wang, Chien-Yao and Yeh, I-Hau and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2105.04206},
  year={2021}
}

Acknowledgements

Comments
  • Error while resuming training

    Hello, I ran a training job and stopped it before it finished. When I try to resume the training using python3 train.py --resume I get the following error:

        Traceback (most recent call last):
          File "train.py", line 537, in <module>
            train(hyp, opt, device, tb_writer, wandb)
          File "train.py", line 81, in train
            model = Darknet(opt.cfg).to(device)  # create
          File "yolor/models/models.py", line 530, in __init__
            self.module_defs = parse_model_cfg(cfg)
          File "yolor/utils/parse_config.py", line 13, in parse_model_cfg
            with open(path, 'r') as f:
        FileNotFoundError: [Errno 2] No such file or directory: '.cfg'

    I also tried to run python3 train.py --cfg my_cfg.cfg --resume but got the same error.

    Then I noticed that train.py l.502 contains the line opt.cfg, opt.weights, opt.resume = '', ckpt, True, so the cfg filename is set to ''. I tried to modify the line to opt.weights, opt.resume = ckpt, True, but I still got the same error.

    Do you have any clue?

    opened by mariusfm54 8
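
    For the report above, one debugging step is to assert the cfg path just before the model is created, so the failure points at the option handling rather than at parse_model_cfg. A minimal sketch (hypothetical placement, untested; opt comes from train.py's argument parsing, shown here with a stand-in so the snippet runs on its own):

        # Hypothetical guard to add in train.py just before: model = Darknet(opt.cfg).to(device)
        import os
        from types import SimpleNamespace
        opt = SimpleNamespace(cfg='cfg/yolor_p6.cfg')   # stand-in for train.py's parsed options
        assert opt.cfg and os.path.isfile(opt.cfg), f"invalid --cfg path: {opt.cfg!r}"

    If the assertion fires, the cfg is being cleared earlier, for example by the opt.cfg, opt.weights, opt.resume = '', ckpt, True line at l.502 quoted above; keeping the user-supplied --cfg value there instead of '' is one possible workaround.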
  • How to reproduce results on YOLOR-S4-DWT

    I'm trying to reproduce the YOLOR-S4-DWT results. The paper branch reports 37% AP, but after several training runs I only get around 35.1-35.2 AP. I use the command below; is there something I need to change?

    python train.py --batch-size 32 --img 640 640 --data data/coco.yaml --cfg models/yolor-ssss-dwt.yaml --weights '' --device 0 --name yolor-ssss-dwt-baseline --hyp hyp.scratch.s.yaml --epochs 300

    opened by thanhnt-2658 7
  • Maximum number of classes that can be trained?

    What is the maximum number of classes that can be trained with YOLOR? If I have the ImageNet object-localisation dataset with 1000 classes, would it be able to train on those?

    opened by hiteshhedwig 7
  • Inference error with other models such as yolor_w6, yolor_e6, yolor_d6

    Detecting with yolor_p6 is fine.

    But other models such as yolor_w6, yolor_e6, and yolor_d6 fail with:

        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "C:\Users\KANG\anaconda3\envs\OD\lib\site-packages\torch\serialization.py", line 594, in load
            return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
          File "C:\Users\KANG\anaconda3\envs\OD\lib\site-packages\torch\serialization.py", line 853, in _load
            result = unpickler.load()
        ModuleNotFoundError: No module named 'models.yolo'

    I guess W6/E6/D6 were saved with torch 1.4, but I am now using torch 1.7.

    Is there any solution that does not require downgrading torch?

    opened by ehdrndd 6
  • Out of memory even with batch_size=1?

    Hello, when training yolor_p6 on a custom dataset I get CUDA out of memory, and it still happens with batch_size=1, which puzzles me. Train command: python train.py --batch-size 1 --img 416 416 --data person.yaml --cfg cfg/yolor_p6.cfg --weights '' --device 2 --name yolor_p6 --hyp hyp.scratch.416.yaml --epochs 300 (log and result screenshots attached). Did I do something wrong?

    opened by crazybill-first 6
  • What should I do when testing doesn't work with pycocotools?

                   Class      Images     Targets           P           R      mAP@.5  mAP@.5:.95: 100%|█| 18/18 [00:14<00:00,  1.89it/s]
                     all         548    3.88e+04       0.377       0.559       0.488       0.309
    Speed: 8.8/4.6/13.4 ms inference/NMS/total per 1280x1280 image at batch-size 32
    
    Evaluating pycocotools mAP... saving runs/test/yolor_p6_val8/best_ap_predictions.json...
    loading annotations into memory...
    ERROR: pycocotools unable to run: expected str, bytes or os.PathLike object, not list
    Results saved to runs/test/yolor_p6_val8
    

    This is what I get while testing. What should I do when this happens?

    opened by cnr0724 6
  • Detection differences between YOLO PyTorch frameworks?

    I recently used the archived ultralytics YOLOv3 repository to convert darknet weights to PyTorch weights, then ran inference on a set of images. Then I used this yolor repository with the converted YOLOv3 PyTorch weights (and cfg file) to run inference on the same dataset: the results are noticeably better and the detections are more accurate. I am wondering why results are better with this repository: what is the difference between these two detectors? How come I can run inference using YOLOv3 weights with a YOLOR repository? I assume YOLOR reads my cfg file, detects that these are YOLOv3 weights, and runs YOLOv3 inference on my images, but why are the results better than with the YOLOv3 repo then?

    opened by mariusfm54 5
  • How to download the pretrained weights ?

    On running

    cd /yolor
    bash scripts/get_pretrain.sh
    

    The .pt file cannot be downloaded because of a Google Drive warning; instead of the weights, the downloaded file is the HTML of the warning page:

    <!DOCTYPE html><html><head><title>Google Drive - Virus scan warning</title> ...
    "Google Drive can't scan this file for viruses. yolor-p6.pt (72M) is too large for Google to scan for viruses. Would you still like to download this file?"
    <form id="downloadForm" action="https://drive.google.com/uc?export=download&confirm&id=1WyzcN1-I0n8BoeRhi_xVt8C5msqdx_7k&confirm=t" method="post"> ... </html>

    And this in turn leads to

    Traceback (most recent call last):
      File "test.py", line 302, in <module>
        test(opt.data,
      File "test.py", line 55, in test
        model = attempt_load(weights, map_location=device)  # load FP32 model
      File "/home/aayush/yolor/models/experimental.py", line 137, in attempt_load
        model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval())  # load FP32 model
      File "/home/aayush/.local/lib/python3.8/site-packages/torch/serialization.py", line 713, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/home/aayush/.local/lib/python3.8/site-packages/torch/serialization.py", line 920, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    _pickle.UnpicklingError: invalid load key, '<'.
    

    When I manually copy paste the link for yolor_p6.pt in the browser

    https://drive.google.com/uc?export=download&id=1Tdn3yqpZ79X7R1Ql0zNlNScB1Dv9Fp7

    And a folder gets downloaded

    https://drive.google.com/drive/folders/18IoN5F94WjRzvappk_4RRf8j9GIoPKc7?usp=sharing

    I am not sure how to use this folder as the checkpoints. Can anyone advise on how to download the checkpoints?

    opened by Aayush-Jain01 4
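
    For the download problem above, the gdown package usually gets past the Drive virus-scan confirmation page. A minimal sketch, assuming a reasonably recent gdown (pip install gdown first); the file id is taken from the warning page quoted above and may be outdated:

        # Download a Google Drive file by id, handling the virus-scan confirmation (sketch)
        import gdown
        gdown.download(id='1WyzcN1-I0n8BoeRhi_xVt8C5msqdx_7k', output='yolor_p6.pt', quiet=False)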
  • Precision, Recall and mAP seems incoherent

    Hello,

    I am training YOLOR-D6 and obtain precision, recall and mAP results from test.py with the following command on the paper branch: python3 test.py --weights ./runs/train/yolor-d6-1280size-multiGPU/weights/best.pt --img 1280 --verbose --data data/dtld_test.yaml --batch 32 --task test --conf 0.4 --iou 0.5

    Here is the result (screenshot attached). I think the mAP@0.5 is very high given those recall values. Is there a mistake in metrics.py, or do I have to run test.py with different options? Thank you.

    opened by yusiyoh 4
  • Training on custom dataset and labels

    I have a dataset of my own which has 8 labels, completely different from the coco labels. I changed the data/coco.names and data/coco.yaml accordingly. But I get an index error:

    Traceback (most recent call last):
      File "train.py", line 537, in <module>
        train(hyp, opt, device, tb_writer, wandb)
      File "train.py", line 344, in train
        log_imgs=opt.log_imgs if wandb else 0)
      File "/home/ubuntu/yolor/test.py", line 226, in test
        plot_images(img, output_to_target(output, width, height), paths, f, names)  # predictions
      File "/home/ubuntu/yolor/utils/plots.py", line 164, in plot_images
        cls = names[cls] if names else cls
    IndexError: list index out of range
    

    I tried printing the detection classes and they are in the range 0-79, i.e. the COCO labels. Why is this happening when I completely changed the labels?

    Training command: python train.py --batch-size 1 --img 1280 1280 --data coco.yaml --cfg cfg/yolor_p6.cfg --weights yolor_p6.pt --device 0 --name yolor_p6_digit --hyp hyp.scratch.1280.yaml --epochs 5

    opened by devloper13 4
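
    A likely cause of the error above is that the .cfg still declares 80 classes while the data yaml declares 8: in darknet-style cfgs each [yolo] block carries classes=, and the convolution before it needs filters = (nc + 5) * 3. A minimal consistency check (sketch, untested; the file paths are the ones from the issue):

        # Compare the class count in the data yaml with the classes= entries in the cfg
        import re
        import yaml

        data = yaml.safe_load(open('data/coco.yaml'))     # the edited data yaml
        cfg_text = open('cfg/yolor_p6.cfg').read()        # the cfg used for training
        cfg_classes = {int(m) for m in re.findall(r'classes\s*=\s*(\d+)', cfg_text)}
        print('names:', len(data['names']), '| nc:', data.get('nc'), '| cfg classes:', cfg_classes)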
  • Issues on the pretrained model

    Hi,

    Great work! I use the main branch to train my model, and the pretrained yolor_p6.pt works fine. But could you provide more trained models for fine-tuning, such as yolor_w6.pt, yolor_e6.pt, and so on?

    I noticed that the paper branch provides these models (yolor-w6.pt, yolor-e6.pt, ...), but they do not seem to be compatible with the main-branch code and fail with the following error:

    ModuleNotFoundError: No module named 'models.yolo'

    Hope to get your advice!

    opened by Yuuuuuuuuuuuuuuuuuummy 4
  • Question: what is the function of pretrained weight

    Hi, I am trying to train a custom YOLOR model and have a question about the command-line parameters. It seems we need to pass a pretrained weight (commonly yolor_p6.pt) when training a custom model. What is the purpose of adding it, given that the classes yolor_p6.pt was trained on differ from mine? Training still works when I leave the weights parameter empty; what does that imply? Thank you.

    opened by axonEmily 0
  • RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

    When starting training, a RuntimeError occurs:

        Using torch 1.13.1 CUDA:0 (NVIDIA GeForce GTX 1650, 4095MB)
        ...
        Epoch   gpu_mem   box   obj   cls   total   targets   img_size
          0%|          | 0/8985 [00:05<?, ?it/s]
        Traceback (most recent call last):
          File "D:/browser/cgan/yolor/train.py", line 537, in <module>
            train(hyp, opt, device, tb_writer, wandb)
          File "D:/browser/cgan/yolor/train.py", line 288, in train
            loss, loss_items = compute_loss(pred, targets.to(device), model)  # loss scaled by batch_size
          File "D:\browser\cgan\yolor\utils\loss.py", line 66, in compute_loss
            tcls, tbox, indices, anchors = build_targets(p, targets, model)  # targets
          File "D:\browser\cgan\yolor\utils\loss.py", line 149, in build_targets
            a, t = at[j], t.repeat(na, 1, 1)[j]  # filter
        RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

    Please help me

    opened by lincit 0
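
    This cpu/cuda mismatch shows up with newer torch versions, and a commonly reported fix is to create the anchor-index tensor on the same device as the targets in build_targets in utils/loss.py. A minimal sketch of the kind of change (untested), followed by a small self-contained illustration of the principle:

        # Sketch of the change in utils/loss.py, build_targets (untested):
        #   at = torch.arange(na).view(na, 1).repeat(1, nt)                          # current: CPU tensor
        #   at = torch.arange(na, device=targets.device).view(na, 1).repeat(1, nt)   # keep on targets' device
        # Self-contained illustration: boolean indexing works when both tensors share a device
        import torch
        na, nt = 3, 5
        targets = torch.zeros(nt, 6)                    # lives on cuda during real training
        at = torch.arange(na, device=targets.device).view(na, 1).repeat(1, nt)
        j = torch.ones(na, nt, dtype=torch.bool, device=targets.device)
        print(at[j].shape)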
  • YoloR-CSP Test with yolor-csp.cfg not working

    Hi everyone

    I'm using the following command to test my trained model:

    python3 test.py --data ./BRA-Dataset.yaml --img 412 --batch 8 --device 0 --cfg cfg/yolor_csp.cfg --weights ../../PESOS1/bestYoloR-CSP.pt --name yolor_csp_val --verbose --names data/BRA.names

    I configured yolor_csp.cfg for the test, changing the filters to 30 ((num_classes(5) + 5) * 3), the number of classes to 5, and the implicit_mul channels to 30.

    But I get zero precision and recall and only a tiny mAP. When I run the test on a yolor_p6 model, however, it works without problems.

    Does the csp cfg work? I see that yolor_csp.cfg does not have a YOLOR layer in the final part of the file. Furthermore, it has 3 implicit_mul layers, unlike p6.cfg.

    My output:

        Model Summary: 529 layers, 52519444 parameters, 52519444 gradients
        WARNING: --img-size 412 must be multiple of max stride 64, updating to 448
        /home/usp/anaconda3/envs/yoloEnv/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
          return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
        Scanning labels ../../BRA-Dataset/labels/val.cache3 (363 found, 0 missing, 0 empty, 0 duplicate, for 363 images): 363it [00:00, 16860.82it/s]
                       Class      Images     Targets           P           R      mAP@.5  mAP@.5:.95:  50%|██████████████████████████████ | 23/46 [00:02<00:01, 11.95it/s]
        libpng warning: iCCP: known incorrect sRGB profile
                       Class      Images     Targets           P           R      mAP@.5  mAP@.5:.95: 100%|████████████████████████████████████████████████████████████| 46/46 [00:04<00:00, 10.71it/s]
                         all         363         403           0           0     0.00315    0.000539
                        Anta         363          84           0           0     0.00095    0.000168
                  Jaguarundi         363          68           0           0     0.00144    0.000282
                   LoboGuara         363          82           0           0     0.00157    0.000302
                   OncaParda         363         101           0           0     0.00474    0.000945
            TamanduaBandeira         363          68           0           0     0.00704    0.000998
        Speed: 6.5/2.9/9.5 ms inference/NMS/total per 448x448 image at batch-size 8
        Results saved to runs/test/yolor_csp_val2

    opened by GabrielFerrante 0
  • What should the ideal mAP@.5:.95 be?

    Hi, I am training yolor_csp with a single class on ~1500 images, of which 400 are negative images; the train/test split ratio is 80:20. Training with the fine-tune hyperparameters at size 1280 for 5000 epochs, best_overall is generated around epoch 900, and after that the mAP values go down. The best mAP@.5:.95 is around 0.85. My question is: is that value too high, or is it expected? Am I overfitting the data? Should I continue training beyond 5000 epochs? Kindly give your valuable feedback.

    opened by saumya221 2
  • Issue with using W & B

    File "tune.py", line 336, in train results, maps, times = test.test(opt.data, File "/srv/beegfs02/scratch/aegis_guardian/data/Timothy/Solomon/yolor_pytorch/yolor/test.py", line 163, in test box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]}, File "/srv/beegfs02/scratch/aegis_guardian/data/Timothy/Solomon/yolor_pytorch/yolor/test.py", line 165, in "box_caption": "%s %.3f" % (names[cls], conf), TypeError: list indices must be integers or slices, not float

    opened by JedSolo 0
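
    The float index comes from logging the raw predicted values; the usual fix (untested sketch, not an official patch) is to cast the class to int before indexing the names list at the line shown in the traceback, i.e. names[int(cls)] instead of names[cls]. A small self-contained reproduction:

        # Minimal reproduction and fix (sketch): a float cannot index a list, int(...) can
        names = ['car', 'person']
        cls, conf = 1.0, 0.931                        # the W&B logging loop yields float-like values
        # "%s %.3f" % (names[cls], conf)              # -> TypeError: list indices must be integers ...
        print("%s %.3f" % (names[int(cls)], conf))    # casting the index avoids the error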