Library for 8-bit optimizers and quantization routines.

Overview

bitsandbytes

Bitsandbytes is a lightweight wrapper around CUDA custom functions, in particular 8-bit optimizers and quantization functions.

Paper -- Video -- Docs

TL;DR

Installation:

  1. Note down version: conda list | grep cudatoolkit
  2. Replace 111 with the version that you see: pip install bitsandbytes-cuda111

Usage:

  1. Comment out optimizer: #torch.optim.Adam(....)
  2. Add 8-bit optimizer of your choice bnb.optim.Adam8bit(....) (arguments stay the same)
  3. Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)

Features

  • 8-bit Optimizers: Adam, AdamW, RMSProp, LARS, LAMB (saves 75% memory)
  • Stable Embedding Layer: Improved stability through better initialization, and normalization
  • 8-bit quantization: Quantile, Linear, and Dynamic quantization (a round-trip sketch follows this list)
  • Fast quantile estimation: Up to 100x faster than other algorithms
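
The quantization routines are exposed through bitsandbytes.functional. Below is a minimal round-trip sketch of block-wise (dynamic) quantization; it assumes quantize_blockwise returns the 8-bit tensor plus its (absmax, code) scaling state, which dequantize_blockwise then consumes, so check the signatures in your installed version.

import torch
import bitsandbytes.functional as F

# round-trip sketch: quantize a tensor block-wise to 8 bits and reconstruct it
# (the return signature of quantize_blockwise is an assumption; verify against your version)
x = torch.randn(1024, 1024, device='cuda')
x_q, (absmax, code) = F.quantize_blockwise(x)                 # uint8 values + per-block scaling state
x_dq = F.dequantize_blockwise(x_q, absmax=absmax, code=code)  # reconstruct float32 values
print('max abs quantization error:', (x - x_dq).abs().max().item())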

Requirements & Installation

Requirements: anaconda, cudatoolkit, pytorch
Hardware requirements: NVIDIA Maxwell GPU or newer (>=GTX 9XX)
Supported CUDA versions: 9.2 - 11.3

The requirements can best be fulfilled by installing pytorch via anaconda. You can install PyTorch by following the "Get Started" instructions on the official website.

bitsandbytes is compatible with all major PyTorch releases and cudatoolkit versions, but for now, you need to select the right version manually. To do this run:

conda list | grep cudatoolkit

and take note of the CUDA version that you have installed. Then you can install bitsandbytes via:

# choices: {cuda92, cuda100, cuda101, cuda102, cuda110, cuda111, cuda113}
# replace XXX with the respective number
pip install bitsandbytes-cudaXXX

To check if your installation was successful, you can execute the following command, which runs a single bnb Adam update.

wget https://gist.githubusercontent.com/TimDettmers/1f5188c6ee6ed69d211b7fe4e381e713/raw/4d17c3d09ccdb57e9ab7eca0171f2ace6e4d2858/check_bnb_install.py && python check_bnb_install.py

Using bitsandbytes

Using the 8-bit Optimizers

With bitsandbytes, 8-bit optimizers can be used by changing a single line of code in your codebase. For NLP models, we also recommend using the StableEmbedding layer (see below), which improves results and helps with stable 8-bit optimization. To get started with 8-bit optimizers, it is sufficient to replace your old optimizer with the 8-bit optimizer as follows:

import bitsandbytes as bnb

# adam = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # comment out old optimizer
adam = bnb.optim.Adam8bit(model.parameters(), lr=0.001, betas=(0.9, 0.995)) # add bnb optimizer
adam = bnb.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.995), optim_bits=8) # equivalent


torch.nn.Embedding(...) ->  bnb.nn.StableEmbedding(...) # recommended for NLP models

Note that by default all parameter tensors with less than 4096 elements are kept at 32-bit even if you initialize those parameters with 8-bit optimizers. This is done since such small tensors do not save much memory and often contain highly variable parameters (biases) or parameters that require high precision (batch norm, layer norm). You can change this behavior like so:

# parameter tensors with less than 16384 values are optimized in 32-bit
# it is recommended to use multiples of 4096
adam = bnb.optim.Adam8bit(model.parameters(), min_8bit_size=16384) 

Change Bits and other Hyperparameters for Individual Parameters

If you want to optimize some unstable parameters with 32-bit Adam and others with 8-bit Adam, you can use the GlobalOptimManager. With this, we can also configure specific hyperparameters for particular layers, such as embedding layers. To do that, we need two things: (1) register the parameters while they are still on the CPU, and (2) override the config with the new desired hyperparameters (anytime, anywhere). See our guide for more details; a sketch follows.
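
A minimal sketch of this workflow is shown below. The register_parameters call is the one referenced in the 0.26.0 patch notes; the override_config call, its argument order, and the toy model are assumptions for illustration, so consult the guide for the exact API.

import torch
import bitsandbytes as bnb

# toy model, still on the CPU
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Linear(64, 2))

# (1) register the parameters while they are still on the CPU
mng = bnb.optim.GlobalOptimManager.get_instance()
mng.register_parameters(model.parameters())

model = model.cuda()
adam = bnb.optim.Adam(model.parameters(), lr=0.001, optim_bits=8)  # 8-bit Adam in general

# (2) override the config for a specific parameter, e.g. keep the first layer's weight in 32-bit
# (override_config and its argument order are assumptions based on the guide)
mng.override_config(model[0].weight, 'optim_bits', 32)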

Fairseq Users

To use the Stable Embedding Layer, override the respective build_embedding(...) function of your model. Make sure to also use the --no-scale-embedding flag to disable scaling of the word embedding layer (now replaced with layer norm). You can use the optimizers by replacing the optimizer in the respective file (adam.py etc.); a sketch of the build_embedding override follows.
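
As an example, such an override might look like the sketch below; the base class and build_embedding signature shown here are assumptions for a fairseq TransformerModel, so adapt them to your model (loading pretrained embeddings via the path argument is omitted).

import bitsandbytes as bnb
from fairseq.models.transformer import TransformerModel

class StableEmbeddingTransformer(TransformerModel):  # hypothetical subclass name
    @classmethod
    def build_embedding(cls, args, dictionary, embed_dim, path=None):
        # return a StableEmbedding instead of the default torch.nn.Embedding
        return bnb.nn.StableEmbedding(len(dictionary), embed_dim, padding_idx=dictionary.pad())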

Release and Feature History

For upcoming features and changes and full history see Patch Notes.

Errors

  1. RuntimeError: CUDA error: no kernel image is available for execution on the device. Solution

License

The majority of bitsandbytes is licensed under MIT; however, portions of the project are available under separate license terms: PyTorch is licensed under the BSD license.

We thank Fabio Cannizzo for his work on FastBinarySearch which we use for CPU quantization.

Citation

If you found this library and its 8-bit optimizers or quantization routines useful, please consider citing our work.

@misc{dettmers2021optim8bit,
      title={8-bit Optimizers via Block-wise Quantization},
      author={Tim Dettmers and Mike Lewis and Sam Shleifer and Luke Zettlemoyer},
      year={2021},
      eprint={2110.02861},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Comments
  • python setup.py install error

    python setup.py install error

    (bitsandbytes) [email protected]:~/disk1/github/bitsandbytes$ python setup.py install
    Traceback (most recent call last):
      File "setup.py", line 15, in <module>
        name = f"bitsandbytes-cuda{os.environ['CUDA_VERSION']}",
      File "/home/chenxin/disk1/anaconda3/envs/bitsandbytes/lib/python3.8/os.py", line 675, in __getitem__
        raise KeyError(key) from None
    KeyError: 'CUDA_VERSION'

    (bitsandbytes) [email protected]:~/disk1/github/bitsandbytes$ conda list | grep cudatoolkit
    cudatoolkit               11.1.1               h6406543_8    conda-forge

    documentation enhancement 
    opened by mathpopo 10
  • Did you ever try MNMT systems?

    Did you ever try MNMT systems?

    As reported in the paper, for training a bi-directional transformer model on WMT14 or WMT16 the performance of 8-bit Adam stays relatively consistent with the 32-bit counterparts. I was also able to verify this on other data sources for training bi-directional models with my own setup.

    However, I've also tried multiple variations of 8-bit optimizers on multilingual neural machine translation (MNMT) models in fairseq and there it seems that even with --no-scale-embedding as well as the StableEmbedding the performance is roughly 3 BLEU behind the counterparts. The --no-scale-embedding flag amounts to roughly 7 BLEU gain, while the xavier init amounts to roughly 0.4 BLEU gain. Didn't look into the effect of the layer norm of the stable embeddings yet.

    Did you do any testing on that and have practical tips on getting the performance up?

    bug question 
    opened by SirRob1997 9
  • undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    (torch1.8-py3.8) [email protected]:/home/share/jiaofangkai$ python check_bnb_install.py
    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/adam.py", line 6, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 459, in LoadLibrary
        return self._dlltype(name)
      File "/home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/ctypes/__init__.py", line 381, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /home/share/jiaofangkai/anaconda3/envs/torch1.8-py3.8/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Hi, I have encountered a similar issue to #5. I have tested with a Tesla T4 and an RTX 2080Ti, but both failed.

    The environments are as follows:

    # TeslaT4
    Ubuntu 18.04.6, Tesla T4, cuda-10.1, driver version: 418.197.02, python=3.8, torch=1.8.1+cu101
    
    # RTX 2080Ti
    Ubuntu 20.04.3, RTX 2080Ti, cuda-10.1, driver version: 435.21, python=3.8, torch=1.8.1+cu101
    
    bug 
    opened by SparkJiao 6
  • Support for Tesla Architecture

    Support for Tesla Architecture

    First of all, great work!

    Secondly, I can see that you specify that Maxwell Architecture is necessary, and I am wondering if

    1. it's possible to do 8-bit optimization on Tesla Architecture
    2. there are plans to implement it

    I ask because Kaggle and Colab notebooks use Tesla Architectures (P100, K80), and I'm sure those communities, myself included, would be interested in using bitsandbytes

    enhancement 
    opened by nbroad1881 5
  • import bitsandbytes as bnb error

    import bitsandbytes as bnb error

    import bitsandbytes as bnb raises the following error: OSError: /home/anaconda3/envs/ner/lib/python3.6/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37

    Hello, how can this be solved?

    opened by zhishui3 4
  • no difference in memory usage

    no difference in memory usage

    Hi. I am training my network with bnb.optim.Adam8bit vs torch.optim.Adam but I don't see any difference in memory consumption.

    Running on a GTX 2080Ti (single GPU or DDP), with cudatoolkit 11.1.74 and bitsandbytes-cuda111.

    Looking at nvidia-smi, I see 9.6 GB in both cases. Am I missing something here?

    opened by ofrimasad 3
  • errors when training to the third epoch. everytime.

    errors when training to the third epoch. everytime.

    THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorMath.cu line=29 error=1 : invalid argument
    Traceback (most recent call last):
      File "train_pointunet.py", line 211, in <module>
        loss_seg = lossfunc_seg(outputs_seg, labels)+lossfunc_dice(outputs_seg,labels)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/why/miniconda3/envs/3.6.8/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: cuda runtime error (1) : invalid argument at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29
    

    I'm very confused because it works fine for the first several epochs.

    opened by Dootmaan 2
  • [Question] Usage of bnb.nn.Embedding with existing classes from other libraries

    [Question] Usage of bnb.nn.Embedding with existing classes from other libraries

    Replace embedding layer if necessary: torch.nn.Embedding(..) -> bnb.nn.Embedding(..)

    Does it presuppose that the user creates custom classes to replace (for example) Hugging Face transformers' GPT2DoubleHeadsModel? Or is there something like bnb.optim.GlobalOptimManager which changes a provided model instance to use bitsandbytes embeddings instead of torch ones?

    enhancement question 
    opened by LSinev 2
  • The code uses more GPU memory with Multi-scale Vision Transformers

    The code uses more GPU memory with Multi-scale Vision Transformers

    Hi,

    Thanks for the great work! I'm currently trying to apply your code to vision transformers, specifically on this code base: https://github.com/facebookresearch/SlowFast/tree/main/projects/mvit. When using torch.optim.SGD(momentum=0.9), the code consumes 9221 MiB of GPU memory during training. After changing it to use bnb.optim.SGD8bit() with the same arguments, it consumes even a bit more GPU memory, 9235 MiB. Do you have any idea why this would happen? Thank you! My CUDA version is 10.2 and torch version is 1.9.1.

    Best, Junwei

    question 
    opened by JunweiLiang 2
  • bnb.optim.AdamW

    bnb.optim.AdamW

    Hey @TimDettmers,

    Awesome library! bnb.optim.Adam saved me from having to use model parallelism :heart_eyes:

    Do you think it would be easy to also add a bnb.optim.AdamW version for https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW ?

    Happy to give it a try if you think it's easily feasible :-)

    enhancement 
    opened by patrickvonplaten 2
  • undefined symbol: __fatbinwrap_38

    undefined symbol: __fatbinwrap_38

    With some CUDA versions and on some architectures this error occurs:

    Traceback (most recent call last):
      File "check_bnb_install.py", line 1, in <module>
        import bitsandbytes as bnb
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/__init__.py", line 5, in <module>
        from .optim import adam
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/__init__.py", line 5, in <module>
        from .adam import Adam, Adam8bit, Adam32bit
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/adam.py", line 5, in <module>
        from bitsandbytes.optim.optimizer import Optimizer2State
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/optim/optimizer.py", line 6, in <module>
        import bitsandbytes.functional as F
      File "/miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/functional.py", line 13, in <module>
        lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 442, in LoadLibrary
        return self._dlltype(name)
      File "/miniconda/envs/pytorch_env/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /miniconda/envs/pytorch_env/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes.so: undefined symbol: __fatbinwrap_38_cuda_device_runtime_compute_75_cpp1_ii_8b1a5d37
    

    Confirmed for CUDA 10.1 for compute capability 7.5 (V100).

    bug 
    opened by TimDettmers 2
  • 'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    I am trying to train GPT-J with 8-bit weights. It's working well on the GPU, but when I try to use it on the CPU, it gives this error:

    'NoneType' object has no attribute 'cdequantize_blockwise_cpu_fp32'

    I have used dequantize_blockwise from bitsandbytes.functional. The following is the class in which it's used:

    import torch
    import torch.nn.functional as F
    from bitsandbytes.functional import dequantize_blockwise

    class DequantizeAndLinear(torch.autograd.Function):

        @staticmethod
        def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
                    absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
            # dequantize the 8-bit weights block-wise, then apply a regular linear layer
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            ctx.save_for_backward(input, weights_quantized, absmax, code)
            ctx._has_bias = bias is not None
            return F.linear(input, weights_deq, bias)

        @staticmethod
        def backward(ctx, grad_output: torch.Tensor):
            assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
            input, weights_quantized, absmax, code = ctx.saved_tensors
            # grad_output: [*batch, out_features]
            weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
            grad_input = grad_output @ weights_deq
            grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
            return grad_input, None, None, None, grad_bias
    
    

    Is it possible to run it on the CPU, or do I have to run it only on the GPU?

    opened by HumzaSami00 0
  • Adding Code of Conduct file

    Adding Code of Conduct file

    This pull request was created automatically because we noticed your project was missing a Code of Conduct file.

    Code of Conduct files facilitate respectful and constructive communities by establishing expected behaviors for project contributors.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • Adding Contributing file

    Adding Contributing file

    This pull request was created automatically because we noticed your project was missing a Contributing file.

    CONTRIBUTING files explain how a developer can contribute to the project - which you should actively encourage.

    This PR was crafted with love by Facebook's Open Source Team.

    CLA Signed 
    opened by facebook-github-bot 0
  • 8-bit optimizer crashes when fine-tuning gpt2-large

    8-bit optimizer crashes when fine-tuning gpt2-large

    Using the bnb.optim.Adam8bit optimizer in place of torch.optim.Adam causes a crash after a handful of batches:

    12it [00:22, 1.82s/it]Error an illegal memory access was encountered at line 198 in file /home/alyssa/gpt_math/bitsandbytes/csrc/ops.cu

    I am fine-tuning Huggingface's version of the gpt2-large model on an Ampere 3090 GPU with CUDA version 11.6 and nVidia driver version 510.73.05. I have tried compiling bitsandbytes on my machine from source, and the set_optim_to_run_embedding_in_fp32 trick from https://github.com/huggingface/transformers/issues/14819; neither of them affected the behavior. Running with the standard pytorch Adam optimizer works fine. nvidia-smi shows 16 GB of memory used on a GPU with 24 GB, so it shouldn't be running out of RAM or anywhere close to that.

    opened by rationalism 0
  • bfloat16 grads are not supported

    bfloat16 grads are not supported

    Are there any plans to support models/grads with the bfloat16 type? Bfloat16 has gained quite a bit of popularity lately, as every Ampere GPU supports the type and it eliminates the need for loss scaling compared to float16. This is what I get when I try to initialize bnb.AdamW with a bfloat16-cast model: ValueError: Gradient+optimizer bit data type combination not supported: grad torch.bfloat16, optimizer torch.uint8

    opened by kurumuz 0
  • Check dtype of input tensors is correct

    Check dtype of input tensors is correct

    If a 16-bit float tensor on the CPU was passed as the input to quantize_blockwise or the output buffer for dequantize_blockwise, the code was previously passing its address to the c[de]quantize_blockwise_cpu_fp32 method, silently casting it to a 32-bit float* and resulting in segfaults.

    A similar issue occurs if the absmax/code arguments to dequantize_blockwise are (somehow) 16-bit, resulting in illegal memory accesses on the GPU.

    It took me a little while to track down the causes because of the cryptic errors; so I figured it was worth suggesting these changes. I've only been using the blockwise methods, so it's possible there are similar issues in other parts of the code - might be worth checking :)

    This PR also includes a couple unrelated typo fixes.

    Thanks for your work on this library, it's nice to squeeze the most I can out of my paltry GPU memory :)

    CLA Signed 
    opened by acarapetis 3
Releases (0.26.0)
  • 0.26.0(Nov 29, 2021)

    This release has important bug fixes for the StableEmbedding layer and introduces the new optimizers AdaGrad and AdamW. The 0.26.0 release also features a new, lightweight embedding class, bnb.nn.Embedding, which uses 32-bit optimizers but no layer norm. This layer allows for the easy use of pretrained models that do not use embedding layer norms. Now available on pip.

    Changelog

    Features:

    • Added Adagrad (without grad clipping) as 32-bit and 8-bit block-wise optimizer.
    • Added AdamW (copy of Adam with weight decay init 1e-2). #10
    • Introduced ModuleConfig overrides which can seamlessly be used at initialization time of a module.
    • Added the bnb.nn.Embedding layer, which runs at 32-bit but without the layer norm. This works well if you need to fine-tune pretrained models that do not have an embedding layer norm. #19

    Bug fixes:

    • Fixed a bug where weight decay was incorrectly applied to 32-bit Adam. #13
    • Fixed an unsafe use of eval. #8
    • Fixed a bug where the StableEmbedding layer 32-bit optimizer override would not work without registering the whole model first (bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())). #13 #15

    Docs:

    • Added instructions on how to solve "__fatbinwrap_" errors.
  • 0.25.0(Oct 22, 2021)

    This release offers full support for all GPUs from Kepler (GTX 600) or newer. It introduces skip_zeros, which ensures correct optimizer updates for sparse gradients. While some pieces are still missing, some features for 8-bit optimizer research have been added: AnalysisAdam allows the tracking and analysis of 8-bit vs 32-bit Adam quantization errors.

    Features:

    • Added skip_zeros for block-wise and 32-bit optimizers. This ensures correct updates for sparse gradients and sparse models.
    • Added support for Kepler GPUs. (#4)
    • Added Analysis Adam to track 8-bit vs 32-bit quantization errors over time.
    • Made compilation more user-friendly.

    Bug fixes:

    • fixed "undefined symbol: __fatbinwrap_38" error for P100 GPUs on CUDA 10.1 (#5)

    Docs:

    • Added docs with instructions to compile from source.
Owner: Facebook Research