A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Overview

Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX

Foolbox is a Python library that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in PyTorch, TensorFlow, and JAX.

πŸ”₯ Design

Foolbox 3 a.k.a. Foolbox Native has been rewritten from scratch using EagerPy instead of NumPy to achieve native performance on models developed in PyTorch, TensorFlow and JAX, all with one code base without code duplication.

  • Native Performance: Foolbox 3 is built on top of EagerPy and runs natively in PyTorch, TensorFlow, and JAX and comes with real batch support.
  • State-of-the-art attacks: Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks.
  • Type Checking: Catch bugs before running your code thanks to extensive type annotations in Foolbox.

πŸ“– Documentation

  • Guide: The best place to get started with Foolbox is the official guide.
  • Tutorial: If you are looking for a tutorial, check out this Jupyter notebook on Colab.
  • Documentation: The API documentation can be found on ReadTheDocs.

πŸš€ Quickstart

pip install foolbox

Foolbox requires Python 3.6 or newer. To use it with PyTorch, TensorFlow, or JAX, the respective framework needs to be installed separately. These frameworks are not declared as dependencies because not everyone wants to install all of them, and because some of these packages have different builds for different architectures and CUDA versions. Besides that, all essential dependencies are installed automatically.

You can see the versions we currently use for testing in the Compatibility section below, but newer versions are in general expected to work.

πŸŽ‰ Example

import foolbox as fb

model = ...
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

attack = fb.attacks.LinfPGD()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(fmodel, images, labels, epsilons=epsilons)
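
The last return value, success, is a boolean tensor with one row per epsilon and one column per input sample. A typical follow-up (a small sketch, assuming images and labels above are ordinary PyTorch tensors) is to turn it into the robust accuracy per epsilon:

# success has shape (len(epsilons), len(images)); averaging over the batch
# gives the attack success rate, and one minus that is the robust accuracy
robust_accuracy = 1 - success.float().mean(dim=-1)
for eps, acc in zip(epsilons, robust_accuracy):
    print(f"Linf eps = {eps}: robust accuracy = {acc.item():.3f}")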

More examples can be found in the examples folder, e.g. a full ResNet-18 example.
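
For reference, a minimal end-to-end setup along the lines of that ResNet-18 example could look roughly like this (a sketch only: the torchvision model, the ImageNet normalization constants, and fb.utils.samples are used purely for illustration):

import torchvision.models as models
import foolbox as fb

# any trained PyTorch classifier works; a pretrained torchvision ResNet-18 is used here
model = models.resnet18(pretrained=True).eval()

# keep inputs in [0, 1] and let Foolbox apply the usual ImageNet normalization
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# a small batch of sample images shipped with Foolbox, plus the clean accuracy
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=16)
print("clean accuracy:", fb.utils.accuracy(fmodel, images, labels))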

πŸ“„ Citation

If you use Foolbox for your work, please cite our JOSS paper on Foolbox Native and our ICML workshop paper on Foolbox using the following BibTeX entries:

@article{rauber2017foolboxnative,
  doi = {10.21105/joss.02607},
  url = {https://doi.org/10.21105/joss.02607},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {53},
  pages = {2607},
  author = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},
  title = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},
  journal = {Journal of Open Source Software}
}
@inproceedings{rauber2017foolbox,
  title={Foolbox: A Python toolbox to benchmark the robustness of machine learning models},
  author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},
  booktitle={Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},
  year={2017},
  url={http://arxiv.org/abs/1707.04131},
}

πŸ‘ Contributions

We welcome contributions of all kinds; please have a look at our development guidelines. In particular, you are invited to contribute new adversarial attacks. If you would like to help, you can also have a look at the issues that are marked with contributions welcome.

πŸ’‘ Questions?

If you have a question or need help, feel free to open an issue on GitHub. Once GitHub Discussions becomes publicly available, we will switch to that.

πŸ’¨ Performance

Foolbox Native is much faster than Foolbox 1 and 2. A basic performance comparison can be found in the performance folder.

🐍 Compatibility

We currently test with the following versions:

  • PyTorch 1.4.0
  • TensorFlow 2.1.0
  • JAX 0.1.57
  • NumPy 1.18.1
Comments
  • boundary attack not finding adversarials, and not returning null

    Hello,

    Note: I've updated this issue to reflect new testing I've done.

    I'm using pytorch, a simple MLP model pre-trained on MNIST, and the foolbox boundary attack.

    The Boundary attack often returns a result that is not adversarial, without raising any error or warning.

    Here is the relevant portion of my code

    
            adversarial = attack(image, label)
            classification_label = int(np.argmax(fmodel.predictions(image)))
            adversarial_label = int(np.argmax(fmodel.predictions(adversarial)))
    
            print("source label: " + str(label) + ", adversarial_label: " + str(adversarial_label) + ", classification_label: " + str(classification_label))
    
            if np.array_equal(adversarial, image):
                # this branch is never reached, as expected
                print("Boundary attack did not find adversarial!")
    

    This code is run in a loop.

    Here is a sample of the output

    source label: 9, adversarial_label: 8, classification_label: 9
    source label: 8, adversarial_label: 8, classification_label: 8  # THIS SHOULDN'T BE POSSIBLE
    source label: 6, adversarial_label: 6, classification_label: 6  # THIS SHOULDN'T BE POSSIBLE
    source label: 9, adversarial_label: 9, classification_label: 9
    source label: 3, adversarial_label: 3, classification_label: 3  # THIS SHOULDN'T BE POSSIBLE
    source label: 9, adversarial_label: 1, classification_label: 9
    source label: 4, adversarial_label: 8, classification_label: 4

    Notice that the classification label is always equal to the source label, meaning the classifier never misclassifies in this sample output.

    And yet, the adversarial label is sometimes equal to the source label, meaning an adversarial was not found.

    As well, the fact that the if np.array_equal(adversarial, image): condition is never met suggests that the Boundary attack does do something, but simply returns an output that in reality is not adversarial.

    This seems like a bug, but maybe I'm missing something? Was the boundary attack tested with PyTorch? (Although I don't see why PyTorch would be relevant.)

    Thank you!

    opened by gobbedy 32
  • Enabled module selection

    This pull request contains a very minor change to the zoo module.

    In the current version of the zoo it is not possible to use the module_name parameter of the model loader. Because of this, it is only possible to have one model per zoo-enabled git repository. By enabling the user to specify the module name in the get_model() function, this restriction is lifted.

    With this change it is easier for developers to contribute models to the zoo.

    opened by LarsHoldijk 22
  • How to create a model with external predictions?

    The scenario is the following: there is a website to which I can send a 128x128 color PNG, and it gets classified either as a cat (confidence > 0) or as not-a-cat (confidence = 0). (I have full authorization to use that website however I want, and I can send as many requests as I want, to prevent strange questions.) I would like to create an adversarial example with Foolbox (it is a car, but gets classified as a cat with >90% confidence), but I can't get my head around how to create a model for this so that I can attack it with the Foolbox boundary attack. Here's a template of what I have:

    images = tf.placeholder(tf.int8, (None, 128, 128, 3))
    label = ["cat", "not-a-cat"]
    
    def get_prediction(image):
        #gets the confidence for the target, between 0 and 1 from the website
        return confidence #float32
    

    Any help/hints would be appreciated, thank you in advance.

    opened by roughentomologyx 17
  • Example doesn't seem to work

    Hi, I tried following the instructions mentioned in the tutorial and examples sections and created an attack for a VGG19 model. I downloaded the model's checkpoint from here.

    The code "runs" however it doesn't seem to be able to generate an adversarial example - the process never ends... What am I doing wrong?

    Please advise

    
    import tensorflow as tf
    from tensorflow.contrib.slim.nets import vgg
    import numpy as np
    import foolbox
    import matplotlib.pyplot as plt
    from foolbox.attacks import LBFGSAttack
    from foolbox.criteria import TargetClassProbability
    
    images = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))
    preprocessed = images - [123.68, 116.78, 103.94]
    logits, _ = vgg.vgg_19(images, is_training=False)
    restorer = tf.train.Saver(tf.trainable_variables())
    
    image, _ = foolbox.utils.imagenet_example()
    
    with foolbox.models.TensorFlowModel(images, logits, (0, 255)) as model:
        restorer.restore(model.session, "./vgg_19.ckpt")
        print(np.argmax(model.predictions(image)))
        target_class = 22
        criterion = TargetClassProbability(target_class, p=0.01)
    
        attack = LBFGSAttack(model, criterion)
        label = np.argmax(model.predictions(image))
    
        adversarial = attack(image=image, label=label)
    
        plt.subplot(1, 3, 1)
        plt.imshow(image)
    
        plt.subplot(1, 3, 2)
        plt.imshow(adversarial)
    
        plt.subplot(1, 3, 3)
        plt.imshow(adversarial - image)
    
    opened by dkarmon 17
  • Foolbox for non-image inputs?

    Hi, I read through the docs and issues, and couldn't find any information about this.

    I want to generate adversarial examples for some non-image problems. In my specific case, the inputs are fixed-length sequences of integers which go into an embedding layer and then into the network.

    Is there a way to use foolbox in this scenario at the moment?

    Thanks for your time!

    opened by EdwardRaff 16
  • Attacking mnist using foolbox (FGSM)

    I am trying to implement gradient-based attacks such as FGSM, BIM, and JSMA using Foolbox. I took a look at the example in the documentation and implemented the attack on my own model accordingly. My model is defined in Keras, so I used the Foolbox Keras wrapper for the attack. At the moment I can only generate an adversary for a single example; if I try multiple examples, I get an error saying: ValueError: Cannot feed value of shape (1, 10000, 28, 28, 1) for Tensor 'conv2d_5_input:0', which has shape '(?, 28, 28, 1)'. Now I understand the error is about the input shape, but my variable explorer displays the shape as (10000, 28, 28, 1), which is the correct shape, and still I get the error. I am attaching my code below:

    import foolbox
    import keras
    import numpy as np
    from keras import backend
    from keras.models import load_model
    from keras.datasets import mnist
    from keras.utils import np_utils
    from foolbox.attacks import SaliencyMapAttack
    from foolbox.criteria import Misclassification
    import matplotlib.pyplot as plt
    
    ########################################### Loading the model and preprocessing ###############################
    backend.set_learning_phase(False)
    model = keras.models.load_model('/home/labadmin/Mnist_Digits_Model_CNN.h5')
    fmodel = foolbox.models.KerasModel(model, bounds=(0,1))
    _,(images, labels) = mnist.load_data()
    images = images.reshape(10000,28,28,1)
    images= images.astype('float32')
    images /= 255
    
    ######################################### Attacking the model ################################################
    attack=foolbox.attacks.SaliencyMapAttack(fmodel, criterion=Misclassification())
    adversarial=attack(images[12],labels[12]) # for single image
    adversarial_all=attack(images,labels) # for all the images
    adversarial =adversarial.reshape(1,28,28,1) #reshaping it for model prediction
    model_predictions = model.predict(adversarial)
    print(model_predictions)
    ######################################## Visualization #########################################################
    images=images.reshape(10000,28,28)
    adversarial =adversarial.reshape(28,28)
    
    plt.figure()
    plt.subplot(1,3,1)
    plt.title('Original')
    plt.imshow(images[12])
    plt.axis('off')
    
    plt.subplot(1, 3, 2)
    plt.title('Adversarial')
    plt.imshow(adversarial)
    plt.axis('off')
    
    plt.subplot(1, 3, 3)
    plt.title('Difference')
    difference = adversarial - images[124]
    plt.imshow(difference / abs(difference).max() * 0.2 + 0.5)
    plt.axis('off')
    plt.show()
    
    waiting for reply 
    opened by SaiRaj07 14
  • Add SparseFool attack

    Hello,

    I implemented the code of our recent sparse attack, SparseFool (CVPR 2019). More details about the method can be found here: https://arxiv.org/abs/1811.02248.

    Looking forward to your feedback!

    opened by amodas 13
  • batch support (prototype)

    This is a prototype of our upcoming batch support, bringing a massive speed-up to Foolbox without the need to rewrite attacks or even change the attack logic to support batches (which can be very difficult for certain attacks and comes with other disadvantages).

    So far, this works with TensorFlowModel and PyTorchModel. You can try it now using CarliniWagnerAttack, GradientAttack, and similar ones, as well as all PGD-like attacks (BasicIterativeMethod, RandomStartProjectedGradientDescentAttack, etc.).

    The current API to try this feature looks like this:

    import foolbox
    from foolbox.batching import run_parallel_attack
    
    model = ...
    images = ...
    labels = ...
    
    attack_create_fn = foolbox.attacks.CarliniWagnerL2Attack  # for example
    criterion = foolbox.criteria.Misclassification()  # for example
    
    advs = run_parallel_attack(attack_create_fn, model, criterion, images, labels)
    # advs will be a list of Adversarial objects
    

    This Jupyter notebook contains a complete example: https://gist.github.com/jonasrauber/140ada5a352cabb3dd0e91b9cd3adf03

    opened by jonasrauber 13
  • "nll_loss_forward_no_reduce_cuda_kernel_index" not implemented for 'Int'

    Hi folks,

    I am using Foolbox 3.3.1 to perform some adversarial attacks on a ResNet-50 network. The code is as follows:


    import torch
    from torchvision import models
    import foolbox as fb

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = models.resnet50(pretrained=True).to(device)
    model.eval()
    
    mean = [0.485, 0.456, 0.406]
    std=[0.229, 0.224, 0.225]
    preprocessing = dict(mean=mean, std=std, axis=-3)
    bounds = (0, 1)
    fmodel = fb.models.PyTorchModel(model, bounds=bounds, preprocessing=preprocessing)
    
    images, labels = fb.utils.samples(fmodel, dataset='imagenet', batchsize=8)
    labels_float = labels.to(torch.float32)
    
    
    def perform_attack(attack, fmodel, images, labels, predicted_labels_before_attack):
        print(f'Performing attack with {type(attack).__name__}...', end='')
        raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
        print('done')
        logits_after_attacks = fmodel(clipped)
        labels_after_attack = logits_after_attacks.max(dim=1)[1].cpu().numpy()
        for image, predicted_label_before_attack, label, label_after_attack in zip(images, predicted_labels_before_attack, labels.cpu().numpy(), labels_after_attack):
            label_imshow = type(attack).__name__
            if predicted_label_before_attack == label and label != label_after_attack:
                label_imshow += '; successful attack'
            label_imshow += f'\nTrue class: {lab_dict[label]}\nClassified before attack as: {lab_dict[predicted_label_before_attack]}\nClassified after attack as: {lab_dict[label_after_attack]}'
            imshow(image, label_imshow)
    		
    for attack in (
                    fb.attacks.FGSM(), # "nll_loss_forward_no_reduce_cuda_kernel_index" not implemented for 'Int'
                  ):
        perform_attack(attack, fmodel, images, labels, predicted_labels_before_attack)
    
    
    I get the error: 
    
    RuntimeError: "nll_loss_forward_no_reduce_cuda_kernel_index" not implemented for 'Int'
    
    with full stack:
    
    Performing attack with LinfFastGradientAttack...
        ---------------------------------------------------------------------------
        RuntimeError                              Traceback (most recent call last)
        ~\AppData\Local\Temp/ipykernel_1736/3238714708.py in <module>
             28 #                 fb.attacks.BoundaryAttack(),  # very slow
             29               ):
        ---> 30     perform_attack(attack, fmodel, images, labels, predicted_labels_before_attack)
        
        ~\AppData\Local\Temp/ipykernel_1736/3978727835.py in perform_attack(attack, fmodel, images, labels, predicted_labels_before_attack)
              1 def perform_attack(attack, fmodel, images, labels, predicted_labels_before_attack):
              2     print(f'Performing attack with {type(attack).__name__}...', end='')
        ----> 3     raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
              4     print('done')
              5     logits_after_attacks = fmodel(clipped)
        
        ~\anaconda3\envs\adversarial\lib\site-packages\foolbox\attacks\base.py in __call__(***failed resolving arguments***)
            277         success = []
            278         for epsilon in real_epsilons:
        --> 279             xp = self.run(model, x, criterion, epsilon=epsilon, **kwargs)
            280 
            281             # clip to epsilon because we don't really know what the attack returns;
        
        ~\anaconda3\envs\adversarial\lib\site-packages\foolbox\attacks\fast_gradient_method.py in run(self, model, inputs, criterion, epsilon, **kwargs)
             90             raise ValueError("unsupported criterion")
             91 
        ---> 92         return super().run(
             93             model=model, inputs=inputs, criterion=criterion, epsilon=epsilon, **kwargs
             94         )
        
        ~\anaconda3\envs\adversarial\lib\site-packages\foolbox\attacks\gradient_descent_base.py in run(***failed resolving arguments***)
             90 
             91         for _ in range(self.steps):
        ---> 92             _, gradients = self.value_and_grad(loss_fn, x)
             93             gradients = self.normalize(gradients, x=x, bounds=model.bounds)
             94             x = x + gradient_step_sign * stepsize * gradients
        
        ~\anaconda3\envs\adversarial\lib\site-packages\foolbox\attacks\gradient_descent_base.py in value_and_grad(self, loss_fn, x)
             50         x: ep.Tensor,
             51     ) -> Tuple[ep.Tensor, ep.Tensor]:
        ---> 52         return ep.value_and_grad(loss_fn, x)
             53 
             54     def run(
        
        ~\anaconda3\envs\adversarial\lib\site-packages\eagerpy\framework.py in value_and_grad(f, t, *args, **kwargs)
            350     f: Callable[..., TensorType], t: TensorType, *args: Any, **kwargs: Any
            351 ) -> Tuple[TensorType, TensorType]:
        --> 352     return t.value_and_grad(f, *args, **kwargs)
            353 
            354 
        
        ~\anaconda3\envs\adversarial\lib\site-packages\eagerpy\tensor\tensor.py in value_and_grad(self, f, *args, **kwargs)
            541         self: TensorType, f: Callable[..., TensorType], *args: Any, **kwargs: Any
            542     ) -> Tuple[TensorType, TensorType]:
        --> 543         return self._value_and_grad_fn(f, has_aux=False)(self, *args, **kwargs)
            544 
            545     @final
        
        ~\anaconda3\envs\adversarial\lib\site-packages\eagerpy\tensor\pytorch.py in value_and_grad(x, *args, **kwargs)
            493                 loss, aux = f(x, *args, **kwargs)
            494             else:
        --> 495                 loss = f(x, *args, **kwargs)
            496             loss = loss.raw
            497             loss.backward()
        
        ~\anaconda3\envs\adversarial\lib\site-packages\foolbox\attacks\gradient_descent_base.py in loss_fn(inputs)
             40         def loss_fn(inputs: ep.Tensor) -> ep.Tensor:
             41             logits = model(inputs)
        ---> 42             return ep.crossentropy(logits, labels).sum()
             43 
             44         return loss_fn
        
        ~\anaconda3\envs\adversarial\lib\site-packages\eagerpy\framework.py in crossentropy(logits, labels)
            319 
            320 def crossentropy(logits: TensorType, labels: TensorType) -> TensorType:
        --> 321     return logits.crossentropy(labels)
            322 
            323 
        
        ~\anaconda3\envs\adversarial\lib\site-packages\eagerpy\tensor\pytorch.py in crossentropy(self, labels)
            462             raise ValueError("labels must be 1D and must match the length of logits")
            463         return type(self)(
        --> 464             torch.nn.functional.cross_entropy(self.raw, labels.raw, reduction="none")
            465         )
            466 
        
        ~\anaconda3\envs\adversarial\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
           2844     if size_average is not None or reduce is not None:
           2845         reduction = _Reduction.legacy_get_string(size_average, reduce)
        -> 2846     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
           2847 
           2848 
        
        RuntimeError: "nll_loss_forward_no_reduce_cuda_kernel_index" not implemented for 'Int'
    

    What should I do?

    Note: cross posted at https://stackoverflow.com/questions/71291544/fgsm-attack-in-foolbox

    bug 
    opened by lmsasu 12
  • DeepFool doesn't exactly match the latest reference implementation

    This was reported to me by @max-andr. Most of the differences are actually explicitly mentioned in comments in our implementation, but we should check again whether we can match the reference implementation more closely and possibly mention deviations in the docs, not just in comments.

    @max-andr might create a PR to fix this

    enhancement 
    opened by jonasrauber 12
  • Why doing Normalization before attack? (preprocessing)

    As far as I know, we should not normalize before attacks. Does Foolbox also follow this principle? 1. The Foolbox explanation says: bounds [0, 1] -> preprocessing (normalization). However, the image tensor is already in [0, 1], which doesn't match that explanation.

    Using transforms.ToTensor() already puts the image in the [0, 1] value range, so we don't need to normalize in this case?

    e.g.:

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    2. Clipped_Adv has values outside [0, 1] but the original _adv is in [0, 1]. Is something wrong here?

    ex. Clipped_Adv: [ 0.5029, 0.4851, 0.0167, ..., -1.1999, -1.1302, -0.9559]
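
    For context, the ResNet-50 snippet further up this page keeps the tensors in [0, 1] and hands the normalization to Foolbox via the preprocessing argument instead of transforms.Normalize. A rough sketch of that setup (model here stands in for any pretrained classifier):

    import torchvision.transforms as transforms
    import foolbox as fb

    # keep the data in [0, 1]; no transforms.Normalize here
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # let Foolbox apply the normalization internally, so bounds=(0, 1) stays correct;
    # model stands in for any pretrained PyTorch classifier
    preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
    fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)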

    question 
    opened by Changgun-Choi 11
  • About the pgd attacks

    Hello,

    First of all, thank you very much for publishing this package; it is really helpful. I am getting some inconsistent results between Foolbox and my own code. When I test the robustness of my model to PGD Linf attacks bounded by epsilon = 0.3 on the MNIST dataset, I get 89% accuracy with the following PyTorch code:

    def pgd_linf(model, x, y, epsilon, alpha = 0.01, number_iter = 40, random_restart = True):
        model.eval()
        if random_restart:
            delta = torch.zeros_like(x).uniform_(-epsilon, epsilon)
            delta.requires_grad = True
        else:
            delta = torch.zeros_like(x,requires_grad=True)
        for _ in range(number_iter):
            loss = nn.CrossEntropyLoss()(model((x + delta).clamp(0,1)), y)
            loss.backward()
            delta.data = (delta.data + (epsilon/0.3)*alpha*delta.grad.detach().sign()).clamp(-epsilon,epsilon)
            delta.grad.zero_()
        return delta.detach()
    

    However, I get only 81% accuracy when I test the robustness of the same model to PGD Linf attacks using Foolbox. Here is the code I use:

     model.eval()
     fmodel = fb.PyTorchModel(model, bounds=(0,1))
     total_err = 0
     with torch.no_grad():
            for X,y in test_loader:
                    X,y = X.to(device), y.to(device)
                    with torch.enable_grad():
                            raw, clipped, is_adv = attack(fmodel, X, y, epsilons = 0.3)
                    total_err += torch.sum(is_adv.float())
    print((total_err / len(test_loader.dataset)).cpu())
    

    Actually, I have tested the robustness of the model against Foolbox PGD and my own code multiple times; every time I get around 10% lower accuracy with Foolbox PGD, so I think this is not an issue of randomness. Can you please help me figure out why the difference happens? Is there anything wrong with my code or the way I use Foolbox? Thanks.

    opened by caoyingnwpu 0
  • how to define the bounds

    I trained the model with normalized images. When attacking the model, should I use the training dataset and normalize the images in the same way? Should the bounds be (0, 1)? Actually, after normalization the pixel values are between -3 and 3. Should the bounds be (-3, 3)?

    opened by guomanshan 0
  • Deprecation warning using old scipy namespace for gaussian_filter

    Describe the bug: Foolbox imports gaussian_filter from the namespace scipy.ndimage.filters, which is deprecated in more recent versions of scipy and therefore yields a deprecation warning. The correct namespace, scipy.ndimage, has been available since at least v1.2, so importing from there should support all target versions of Python (3.6 - 3.8).

    To Reproduce: Install the latest stable version of scipy (1.9.3) and run a Gaussian blur attack.

    Expected behavior: No deprecation warning when the newer namespace is employed.

    Software (please complete the following information):

    • Foolbox version: 3.3.3
    opened by JamesRamsden-Naimuri 0
  • "nll_loss_forward_no_reduce_cuda_kernel_index" not implemented for 'Float'

    I've had a similar issue to one that was closed, but even after using the latest version I still get an error like this: "nll_loss_forward_no_reduce_cuda_kernel_index" not implemented for 'Float'. Please help me.

    opened by Rivendellad 3
  • Bump pillow from 9.0.1 to 9.3.0 in /tests

    Bumps pillow from 9.0.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • Bump tensorflow from 2.6.4 to 2.9.3 in /tests

    Bumps tensorflow from 2.6.4 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This releases introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
Releases (v3.3.3)
  • v3.3.3(Apr 2, 2022)

    New Features and Improvements

    • Fixed a bug where AdamPGD attacks didn't actually use the Adam optimizer
    • Attacks now verify that the input to them lies within the bounds of the model
    Source code(tar.gz)
    Source code(zip)
  • v3.3.2(Mar 8, 2022)

    New Features and Improvements

    • Added AdamPGD attack
    • Added pointwise attack
    • Hop-skip-jump attack bug fix (thanks @zhuangzi926)
    • other improvements and bug fixes
    Source code(tar.gz)
    Source code(zip)
  • v3.3.1(Feb 23, 2021)

  • v3.3.0(Feb 10, 2021)

    New Features and Improvements

    • PGD now supports targeted attacks (thanks @zimmerrol)
    • DDN attack bug fixes (thanks @maurapintor)
    • Brendel Bethge attack bug fixes (thanks @wielandbrendel)
    • other improvements and bug fixes
    Source code(tar.gz)
    Source code(zip)
  • v3.2.1(Sep 26, 2020)

  • v3.2.0(Sep 26, 2020)

    • added our JOSS paper
    • added a performance comparison between Foolbox 1, 2, and 3
    • improved tests
    • fixed the TensorFlow example code
    • improved examples
    • improved tutorial
    • updated dependencies
    Source code(tar.gz)
    Source code(zip)
  • v3.1.1(Aug 29, 2020)

  • v3.1.0(Aug 29, 2020)

    New Features

    • ported HopSkipJump attack to v3
    • added clipping-aware noise attacks
    • model wrappers now support data_format
    • JAXModel now supports data_format
    • improved documentation

    Bug Fixes

    • EADAttack bug fixes
    • GenAttack bug fixes
    • Other bug fixes and improvements
    Source code(tar.gz)
    Source code(zip)
  • v3.0.4(Jul 3, 2020)

  • v3.0.3(Jul 3, 2020)

  • v3.0.2(May 23, 2020)

  • v3.0.1(May 23, 2020)

    Bug fixes

    • type annotations are now correctly exposed using py.typed (file was missing in MANIFEST)
    • TransformBoundsWrapper now correctly handles data_format (thanks @zimmerrol)
    Source code(tar.gz)
    Source code(zip)
  • v3.0.0(Mar 22, 2020)

    New Features

    Foolbox 3 aka Foolbox Native has been rewritten from scratch with performance in mind. All code is running natively in PyTorch, TensorFlow and JAX, and all attacks have been rewritten with real batch support.

    Source code(tar.gz)
    Source code(zip)
  • v3.0.0b1(Feb 16, 2020)

  • v3.0.0b0(Feb 15, 2020)

    Foolbox 3 aka Foolbox Native has been rewritten from scratch with performance in mind. All code is running natively in PyTorch, TensorFlow and JAX, and all attacks have been rewritten with real batch support.

    Warning: This is a pre-release beta version. Expect breaking changes.

    Source code(tar.gz)
    Source code(zip)
  • v2.4.0(Feb 7, 2020)

    New Features

    • fixed PyTorch model gradients (fixes DeepFool with batch size > 1)
    • added support for TensorFlow 2.0 and newer (Graph and Eager mode)
    • refactored the tests
    • support for the latest randomgen version
    Source code(tar.gz)
    Source code(zip)
  • v2.3.0(Nov 4, 2019)

    New Features

    • new EnsembleAveragedModel (thanks to @zimmerrol)
    • new foolbox.utils.flatten
    • new foolbox.utils.atleast_kd
    • new foolbox.utils.accuracy
    • PyTorchModel now always warns if model is in train mode, not just once
    • batch support for ModelWithEstimatedGradients

    Bug fixes

    • fixed dtype when using Adam PGD with a PyTorch model
    • fixed CW attack hyperparameters
    Source code(tar.gz)
    Source code(zip)
  • v2.2.0(Oct 28, 2019)

  • v2.1.0(Oct 27, 2019)

    New Features

    • New foolbox.models.JAXModel class to support JAX models (https://github.com/google/jax)
    • The preprocessing argument of models now supports a flip_axis key to support common preprocessing operations like RGB to BGR in a nice way. This builds on the ability to pass dicts to preprocessing introduced in Foolbox 2.0.

    Bug fixes and improvements

    • Fixed a serious bug in the LocalSearchAttack (thanks to @duoergun0729)
    • foolbox.utils.samples now warns if samples are repeated
    • foolbox.utils.samples now uses PNGs instead of JPGs (except for ImageNet)
    • Other bug fixes
    • Improved docstrings
    • Improved docs
    Source code(tar.gz)
    Source code(zip)
  • v2.0.0(Oct 23, 2019)

    • batch support: check out the new example in the README
    • model and defense zoo: https://foolbox.readthedocs.io/en/latest/user/zoo.html
    • attacks take an optional threshold argument to stop attacks once that threshold is reached

    foolbox.attacks now refers to the attacks with batch support. The old attacks can still be accessed under foolbox.v1.attacks. Batch support has been added to almost all attacks and new attacks will only be implemented with batch support. If you need batch support for an old attack that has not yet been adapted, please open an issue.

    Source code(tar.gz)
    Source code(zip)
  • v2.0.0rc0(Oct 18, 2019)

  • v2.0.0b0(May 21, 2019)

    Batch-support is finally here!

    See #316 for details until we have updated the documentation. Right now it's still limited to a few attacks, but feel free to open an issue for any attack that you need. It's easy to extend to new attacks, we just haven't done it yet and will prioritize based on requests.

    Source code(tar.gz)
    Source code(zip)
  • v1.8.0(Nov 16, 2018)

    Foolbox Model Zoo

    Foolbox now has an easy way to load models or defenses from Git repos: https://foolbox.readthedocs.io/en/latest/user/zoo.html

    Source code(tar.gz)
    Source code(zip)
  • v1.7.0(Oct 24, 2018)

    New Features

    • Foolbox now has support for the Spatial Attack (https://arxiv.org/abs/1712.02779)

    Bug Fixes

    • Foolbox now uses its own random number generators to be independent of seeds set inside models.
    Source code(tar.gz)
    Source code(zip)
  • v1.6.2(Oct 12, 2018)

  • v1.6.1(Oct 8, 2018)

    The foolbox.models.TensorFlowModel.from_keras constructor now automatically uses the session used by tf.keras instead of TensorFlow's default session.

    Source code(tar.gz)
    Source code(zip)
  • v1.6.0(Oct 5, 2018)

  • v1.5.0(Sep 27, 2018)

    New features

    • all Foolbox attacks now support early stopping when reaching a certain perturbation size
      • just pass a threshold to the attack or Adversarial instance during initialization
    • the distance metric can now be passed to the attack during initialization (no need to manually create an Adversarial instance anymore)
    Source code(tar.gz)
    Source code(zip)
  • v1.4.0(Sep 18, 2018)

    • The Adversarial class now remembers the model output for the best adversarial so far. For deterministic models this is the same as fmodel.predictions(adversarial.image), but it can be useful for non-deterministic models. Note that very close to the decision boundary, even otherwise deterministic models can become stochastic because of non-deterministic floating point operations such as reduce_sum. In addition to the new output attribute, there is also a new adversarial_class attribute for convenience; it just takes the argmax of the output.
    • new ADefAttack thanks to @EvgeniaAR
    • new NewtonFoolAttack thanks to @bveliqi
    • new FAQ section in the docs: https://foolbox.readthedocs.io/en/latest/user/faq.html
    Source code(tar.gz)
    Source code(zip)
  • v1.3.2(Aug 6, 2018)

Owner
Bethge Lab
Perceiving Neural Networks