Neural Network Libraries

Overview

Neural Network Libraries is a deep learning framework that is intended to be used for research, development and production. We aim to have it running everywhere: desktop PCs, HPC clusters, embedded devices and production servers.

Installation

Installing Neural Network Libraries is easy:

pip install nnabla

This installs the CPU version of Neural Network Libraries. GPU acceleration can be added by installing the CUDA extension with the following command.

pip install nnabla-ext-cuda101

The above command is for CUDA Toolkit 10.1.

For other CUDA versions:

  • pip install nnabla-ext-cuda100 for CUDA 10.0
  • pip install nnabla-ext-cuda90 for CUDA 9.0
  • pip install nnabla-ext-cuda80 for CUDA 8.0

CUDA 9.1 and 9.2 are not currently supported.
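
To verify the setup, importing the packages should initialize the corresponding extensions. This is a minimal sketch; the nnabla_ext.cuda import applies only if the CUDA extension is installed:

import nnabla            # logs: [nnabla][INFO]: Initializing CPU extension...
import nnabla_ext.cuda   # logs: [nnabla][INFO]: Initializing CUDA extension...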

For more details, see the installation section of the documentation.

Building from Source

See Build Manuals.

Running on Docker

For details on running on Docker, see the installation section of the documentation.

Features

Easy, flexible and expressive

The Python API built on the Neural Network Libraries C++11 core gives you flexibility and productivity. For example, a two-layer neural network with a classification loss can be defined in the following five lines of code (hyperparameters are enclosed in <>).

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable(<input_shape>)
t = nn.Variable(<target_shape>)
h = F.tanh(PF.affine(x, <hidden_size>, name='affine1'))
y = PF.affine(h, <target_size>, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))
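
For concreteness, here is a minimal sketch with the placeholders filled in; the shapes below (flattened 28x28 inputs, 10 classes, batch size 32) are illustrative assumptions, not requirements:

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable((32, 784))                     # batch of flattened 28x28 images
t = nn.Variable((32, 1))                       # integer class labels
h = F.tanh(PF.affine(x, 256, name='affine1'))  # hidden layer of 256 units
y = PF.affine(h, 10, name='affine2')           # 10 output classes
loss = F.mean(F.softmax_cross_entropy(y, t))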

Training can be done by:

import nnabla.solvers as S

# Create a solver (parameter updater)
solver = S.Adam(<solver_params>)
solver.set_parameters(nn.get_parameters())

# Training iteration
for n in range(<num_training_iterations>):
    # Setting data from any data source
    x.d = <set data>
    t.d = <set label>
    # Initialize gradients
    solver.zero_grad()
    # Forward and backward execution
    loss.forward()
    loss.backward()
    # Update parameters by computed gradients
    solver.update()
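
A runnable version of the same loop, continuing the concrete sketch above; random data stands in for a real data source here, and the Adam setting is an illustrative assumption:

import numpy as np
import nnabla.solvers as S

solver = S.Adam(alpha=0.001)
solver.set_parameters(nn.get_parameters())

for n in range(100):
    # Feed random stand-in data; replace with a real data source
    x.d = np.random.randn(32, 784)
    t.d = np.random.randint(0, 10, size=(32, 1))
    solver.zero_grad()
    loss.forward()
    loss.backward()
    solver.update()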

The dynamic computation graph enables flexible runtime network construction. Neural Network Libraries supports both static and dynamic graph paradigms through the same API.

import numpy as np

x.d = <set data>
t.d = <set label>
drop_depth = np.random.rand(<num_stochastic_layers>) < <layer_drop_ratio>
with nn.auto_forward():
    h = F.relu(PF.convolution(x, <hidden_size>, (3, 3), pad=(1, 1), name='conv0'))
    for i in range(<num_stochastic_layers>):
        if drop_depth[i]:
            continue  # Stochastically drop a layer
        h2 = F.relu(PF.convolution(h, <hidden_size>, (3, 3), pad=(1, 1),
                                   name='conv%d' % (i + 1)))
        h = F.add2(h, h2)
    y = PF.affine(h, <target_size>, name='classification')
    loss = F.mean(F.softmax_cross_entropy(y, t))
# Backward computation (can also be done in dynamically executed graph)
loss.backward()
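
As a smaller sketch of the dynamic mode: under nn.auto_forward(), each function executes as soon as its graph node is created, so intermediate results can be inspected immediately (the input values here are illustrative):

import numpy as np
import nnabla as nn
import nnabla.functions as F

a = nn.Variable.from_numpy_array(np.array([1.0, -2.0, 3.0]))
with nn.auto_forward():
    b = F.relu(a)  # executed immediately, no explicit forward() needed
print(b.d)  # [1. 0. 3.]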

Command line utility

Neural Network Libraries provides a command line utility, nnabla_cli, for easier use.

nnabla_cli provides the following functionality:

  • Training, evaluation, and inference with NNP files.
  • Dataset and parameter manipulation.
  • File format conversion (see the sketch below):
    • from ONNX to NNP and from NNP to ONNX;
    • from ONNX or NNP to NNB or C source code.
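
For example, file format conversion is invoked roughly as follows; the file names are placeholders, and the exact subcommands and options are listed in the converter documentation (treat this as an illustrative sketch):

nnabla_cli convert input.onnx output.nnp
nnabla_cli convert input.nnp output.onnx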

For more details, see the documentation.

Portable and multi-platform

  • The Python API can be used on Linux and Windows
  • Most of the library code is written in C++11, making it deployable to embedded devices

Extensible

  • Easy to add new modules such as neural network operators and optimizers
  • The library allows developers to add specialized implementations (e.g., for FPGA, ...). For example, we provide the CUDA backend as an extension, which speeds up computation through GPU acceleration, as sketched below.
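
For example, moving computation onto the CUDA/cuDNN extension is done by setting the default context; this mirrors the usage shown in the comments further below, with device_id as an illustrative value:

import nnabla as nn
from nnabla.ext_utils import get_extension_context

# Select the cuDNN backend on GPU 0 (requires the nnabla-ext-cuda package)
ctx = get_extension_context('cudnn', device_id='0')
nn.set_default_context(ctx)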

Efficient

  • High speed on a single CUDA GPU
  • Memory optimization engine
  • Multiple GPU support

Documentation

https://nnabla.readthedocs.org

Getting started

  • A number of Jupyter notebook tutorials can be found in the tutorial folder. We recommend starting with by_examples.ipynb for a first working example in Neural Network Libraries and python_api.ipynb for an introduction to the Neural Network Libraries API.

  • We also provide more sophisticated examples in the nnabla-examples repository.

  • C++ API examples are available in examples/cpp.

Contribution guide

Deep learning technology is progressing rapidly, and researchers and developers often want to add their own custom features to a framework. This is a point where NNabla shines: the architecture of Neural Network Libraries is clean and quite simple, and new features can be added very easily with the help of our code-template generation system. See the following link for details.

License & Notice

Neural Network Libraries is provided under the Apache License, Version 2.0.

It also depends on some open source software packages. For more information, see LICENSES.

Comments
  • import nnabla_ext.cuda - ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory

    Forum,

    I am getting the following error when checking my nnabla setup:

    # check nnabla
    import nnabla
    import nnabla_ext.cuda
    import nnabla.ext_utils as nneu
    import nnabla_ext.cudnn

    Error: ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory

    I used pip install to set up the framework. I also have Anaconda installed.

    Bests, Philip

    opened by philtsmith570 35
  • Implement assign function

    Hi, @TE-TakuyaNarihira san.

    I added an F.assign function that behaves like tf.assign.

    This is used for manual assignment or manual variable update.

    Use case 1: manual assignment

    x = nn.Variable((3, 3))
    y = nn.Variable.from_numpy_array(np.random.random((3, 3)))
    assign_op = F.assign(y, x)
    
    x.d = np.random.random((3, 3))
    assign_op.forward()
    print((x.d == y.d).all()) # True
    print((assign_op.d == x.d).all()) # True
    

    Use case 2: manual update

    lr = 1.0
    original = np.random.random((3, 3))
    
    grad = F.constant(1.0, (3, 3))
    y = nn.Variable.from_numpy_array(original)
    train_op = F.assign(y, y + lr * grad)
    
    train_op.forward()
    print((y.d == (original + 1.0)).all()) # True, or possibly False due to floating-point rounding
    

    F.assign will be useful for syncing multiple parameters or implementing manual SGD updates in a static-graph style. During backward, F.assign propagates gradients to the destination Variable.

    TODO

    • [x] add unit testing
    • [x] backward implementation
    release-note-op-layer 
    opened by takuseno 11
  • Implement forward_all function that performs forwarding on multiple variables at once

    Hi, @TE-TakuyaNarihira san.

    I added new function forward_all.

    Formerly, forwarding multiple outputs that share hidden layers (just like the policy and value of an actor-critic) required multiple forward passes in a static graph. See the example below.

    x = nn.Variable((3, 3))
    h = PF.affine(x, 3, name='hidden')
    y1 = PF.affine(h, 3, name='y1')
    y2 = PF.affine(h, 3, name='y2')
    
    y1.forward()
    y2.forward() # hidden layer was computed twice!!
    

    This is critical for huge architectures.

    Thus, I added forward_all function.

    x = nn.Variable((3, 3))
    h = PF.affine(x, 3, name='hidden')
    y1 = PF.affine(h, 3, name='y1')
    y2 = PF.affine(h, 3, name='y2')
    
    # compute h only once!
    nn.forward_all([y1, y2])
    

    This function performs forwarding with a shared fclosed flag, which prevents shared layers from being revisited.

    What do you think of this?

    opened by takuseno 10
  • Implement batch_det function

    Hi, @TE-AkioHayakawa san, @TE-TakuyaNarihira san.

    I've implemented a batch_det function that computes the determinant of the input array.

    a = nn.Variable((2, 13, 13))
    det = F.batch_det(a) # det.shape == (2,)
    
    nd = np.random.random((2, 13, 13))
    a.d = nd
    det.forward()
    
    assert np.allclose(det.d, np.array(list(map(np.linalg.det, nd))))
    

    batch_det now works correctly with forward and backward. The CUDA extension is implemented.

    release-note-op-layer 
    opened by takuseno 8
  • Implement OrthogonalInitializer

    Hi, @TE-TakuyaNarihira san, @TE-AkioHayakawa san! #243 is taking a bit of time because my school started 😢

    Anyway, I implemented OrthogonalInitializer, which is widely available in other DNN libraries and is still an effective method in many domains.

    If you think it would be a good addition to nnabla, I will test this more and remove the WIP sign.

    Thank you!

    release-note-utility 
    opened by takuseno 8
  • Error in running: mpirun -n 4 python multi_device_multi_process_classification.py

    When running the multi-GPU example as suggested, the following error occurs:

    comm = C.MultiProcessDataParalellCommunicator(ctx)
      File "communicator.pyx", line 653, in nnabla.communicator.MultiProcessDataParallelCommunicator
    RuntimeError: value error in query
    Failed `it != items_.end()`: Any of [cudnn:float, cuda:float, cpu:float] could not be found in []

    How to fix it?

    opened by MiZhangWhuer 5
  • Reinforcement learning examples

    Could anybody please post reinforcement learning models such as DQN and A3C? I need some hints and guidance on how to build reinforcement learning RNN models.

    opened by gusdoe 5
  • Implement inverse function

    Hi, @TE-TakuyaNarihira san, @TE-AkioHayakawa san!

    I've implemented an inverse matrix function. It uses Eigen's inverse function on CPU. In a GPU context, cublas<t>getrfBatched() is planned to be used.

    x = nn.Variable((1, 10, 10))
    inverse = F.batch_inv(x)
    
    x.d = np.random.random((1, 10, 10))
    inverse.forward()
    
    assert np.allclose(inverse.d, np.linalg.inv(x.d)) # True
    

    CUDA implementation https://github.com/sony/nnabla-ext-cuda/pull/130

    release-note-op-layer 
    opened by takuseno 4
  • Example DeepLabV3 does not work

    Hi,

    the example on the site https://nnabla.readthedocs.io/en/latest/python/api/models/semantic_segmentation.html does not work.

    I always get the error: AttributeError: 'NoneType' object has no attribute 'shape'

    2022-04-02 19:54:02,584 [nnabla][INFO]: Initializing CPU extension...
    2022-04-02 19:54:02,956 [nnabla][INFO]: Initializing CUDA extension...
    2022-04-02 19:54:02,987 [nnabla][INFO]: Initializing cuDNN extension...
    Loading C:\Users\Armin/nnabla_data\nnp_models\semantic_segmentation/DeepLabV3-voc-coco-os-8.nnp.
    2022-04-02 19:54:03,093 [nnabla][INFO]: Downloading DeepLabV3-voc-coco-os-8.nnp from https://nnabla.org/pretrained-models/nnp_models/semantic_segmentation/DeepLabV3-voc-coco-os-8.nnp
    2022-04-02 19:54:03,093 [nnabla][INFO]: > C:\Users\Armin/nnabla_data\nnp_models\semantic_segmentation/DeepLabV3-voc-coco-os-8.nnp already exists.
    2022-04-02 19:54:03,093 [nnabla][INFO]: > If you have any issue when using this file,
    2022-04-02 19:54:03,093 [nnabla][INFO]: > manually remove the file and try download again.
    Traceback (most recent call last):
      File "C:/Users/Armin/Desktop/Python/DeepLabv3/main.py", line 21, in <module>
        y = deeplabv3(x)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\models\semantic_segmentation\deeplabv3plus.py", line 138, in __call__
        net = self.nnp.get_network(
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 133, in get_network
        return NnpNetwork(self.network_dict[name], batch_size, callback=callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 45, in __init__
        self.proto_network = proto_network.promote(callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\core\graph_def.py", line 1184, in promote
        return self._patch_by_network_pass(callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\core\graph_def.py", line 1081, in _patch_by_network_pass
        functions = filter_function_by_callback(functions, callback)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\core\graph_def.py", line 1068, in filter_function_by_callback
        pf = callback._apply_generate_function_by_name(pf)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 295, in _apply_generate_function_by_name
        return self._function_callbacks_by_name[f.name](pf)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\utils\nnp_graph.py", line 187, in _callback
        return callback(v)
      File "C:\Users\Armin\Desktop\Python\DeepLabv3\venv\lib\site-packages\nnabla\models\semantic_segmentation\deeplabv3plus.py", line 95, in average_pooling_shape
        s = f.inputs[0].variable.shape
    AttributeError: 'NoneType' object has no attribute 'shape'

    Process finished with exit code 1

    opened by Armin234 3
  • Implement clip grad by norm at solver

    Hi, @TE-TakuyaNarihira @TE-AkioHayakawa san.

    I've implemented clip_grad_by_norm in the Solver class, just like weight_decay.

    loss.forward()
    solver.zero_grad()
    loss.backward()
    solver.clip_grad_by_norm(10.0)
    solver.update()
    

    The mathematical formulation is as follows.

    grad = clip_norm * grad / max(clip_norm, l2norm(grad))
    

    If l2norm(grad) is less than the given norm, this function does nothing.
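
    (A numpy sketch of this rule, for illustration only:)

    import numpy as np

    def clip_grad_by_norm(grad, clip_norm):
        # The gradient is rescaled only when its L2 norm exceeds clip_norm
        return clip_norm * grad / max(clip_norm, np.linalg.norm(grad))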

    Gradient clipping often appears in deep reinforcement learning implementations, so this will be very useful.

    Before working on the CUDA implementation, please give me feedback on the implementation design and its necessity.

    Thank you.

    opened by takuseno 3
  • How to get or use network from nnp

    Hi.

    The Windows version of Neural Network Console provides an h5 file and Python code, while the Cloud version provides just an nnp file. I know the nnp file contains the h5 parameters and the network as protocol buffers, but I don't know how to use the network in the nnp file.

    Usually we can get Python code like the following.

    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF
    
    def network(x, y, test=False):
        # Input:x -> 1,28,28
        # MaxPooling -> 1,14,14
        h = F.max_pooling(x, (2,2), (2,2))
        # Affine -> 100
        h = PF.affine(h, (100,), name='Affine')
        # ReLU
        h = F.relu(h, True)
        # Affine_2 -> 1
        h = PF.affine(h, (1,), name='Affine_2')
        # Sigmoid
        h = F.sigmoid(h)
        # BinaryCrossEntropy
        h = F.binary_cross_entropy(h, y)
        return h
    

    We don't need the BinaryCrossEntropy, so we delete it; however, the network inside the nnp file still has it.

    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF
    from nnabla.utils import nnp_graph
    
    nnp = nnp_graph.NnpLoader('./result.nnp')
    graph = nnp.get_network('MainValidation')
    y = graph.outputs
    print(y)
    

    This code outputs the following.

    {'BinaryCrossEntropy': <Variable((64, 1), need_grad=True) at 0x114f926d8>}
    

    I would like to use the nnp in Python like this:

    import nnabla as nn
    
    nn.load_parameters('./result.nnp')
    graph = nn.get_network('MainValidation')
    x = graph.inputs['x']
    y = graph.outputs['y']
    y.forward(x, test=True)
    y.d
    

    How can I use the network in the nnp file?

    I know nnabla_cli can do it, but I don't know how to do it in Python. Something like the following sketch is what I am after; this is just a guess, assuming the loaded network exposes its variables through the inputs and outputs dicts:
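
    import numpy as np
    from nnabla.utils import nnp_graph

    nnp = nnp_graph.NnpLoader('./result.nnp')
    graph = nnp.get_network('MainValidation', batch_size=1)
    x = list(graph.inputs.values())[0]   # input Variable
    y = list(graph.outputs.values())[0]  # output Variable
    x.d = np.random.random(x.shape)      # stand-in input data
    y.forward()
    print(y.d)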

    Thanks

    opened by goofmint 3
  • Bump pillow from 5.4.1 to 9.3.0 in /doc

    Bumps pillow from 5.4.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



    dependencies 
    opened by dependabot[bot] 0
  • cannot import nnabla in `numpy==1.21.4` and `nnabla>=1.29.0`

    • python: 3.8.10
      • installed by pyenv: 2.3.4-14-g093d0b3a
    • I found that I cannot import nnabla in an environment with numpy==1.21.4 and nnabla==1.29.0 or 1.30.0.
    Experiment log
    $ pip list
    Package    Version
    ---------- -------
    pip        22.2.2
    setuptools 65.3.0
    $ pip install numpy==1.21.4
    ...
    ...
    $ pip install nnabla==1.29.0
    ...
    ...
    $ pip list
    Package         Version
    --------------- -------
    boto3           1.24.76
    botocore        1.27.76
    configparser    5.3.0
    contextlib2     21.6.0
    Cython          0.29.32
    h5py            3.7.0
    imageio         2.22.0
    jmespath        1.0.1
    nnabla          1.29.0
    numpy           1.21.4
    Pillow          9.2.0
    pip             22.2.2
    protobuf        3.19.4
    python-dateutil 2.8.2
    PyYAML          6.0
    s3transfer      0.6.0
    scipy           1.9.1
    setuptools      65.3.0
    six             1.16.0
    tqdm            4.64.1
    urllib3         1.26.12
    $ python
    Python 3.8.10 (default, Sep 20 2022, 10:17:21)
    [GCC 9.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import nnabla
    2022-09-20 10:53:14,554 [nnabla][INFO]: Initializing CPU extension...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/__init__.py", line 32, in <module>
        from .variable import Variable, Context
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/variable.py", line 17, in <module>
        from ._variable import Context
      File "_variable.pyx", line 1, in init nnabla._variable
      File "_nd_array.pyx", line 1, in init nnabla._nd_array
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
    >>> exit()
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "<frozen importlib._bootstrap>", line 991, in _find_and_load
      File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/__init__.py", line 32, in <module>
        from .variable import Variable, Context
      File "/home/isara/.pyenv/versions/3.8.10/lib/python3.8/site-packages/nnabla/variable.py", line 17, in <module>
        from ._variable import Context
      File "_variable.pyx", line 1, in init nnabla._variable
      File "_nd_array.pyx", line 1, in init nnabla._nd_array
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
    $
    
    • I got the same error in nnabla==1.30.0.

    • After that, if numpy's version is upgraded to 1.22.4, I can import nnabla.
    Experiment log
    $ pip list
    Package    Version
    ---------- -------
    pip        22.2.2
    setuptools 65.3.0
    $ pip install numpy==1.22.4
    ...
    ...
    $ pip install nnabla==1.29.0
    ...
    ...
    $ pip list
    Package         Version
    --------------- -------
    boto3           1.24.76
    botocore        1.27.76
    configparser    5.3.0
    contextlib2     21.6.0
    Cython          0.29.32
    h5py            3.7.0
    imageio         2.22.0
    jmespath        1.0.1
    nnabla          1.29.0
    numpy           1.22.4
    Pillow          9.2.0
    pip             22.2.2
    protobuf        3.19.4
    python-dateutil 2.8.2
    PyYAML          6.0
    s3transfer      0.6.0
    scipy           1.9.1
    setuptools      65.3.0
    six             1.16.0
    tqdm            4.64.1
    urllib3         1.26.12
    $ python
    Python 3.8.10 (default, Sep 20 2022, 10:17:21)
    [GCC 9.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import nnabla
    2022-09-20 10:59:24,297 [nnabla][INFO]: Initializing CPU extension...
    >>> exit()
    $
    
    • I got the same result in nnabla==1.30.0.
    • However, I have not tried to find the smallest numpy version that eliminates the error.

    • I would like you to try to reproduce this problem.
    • Then, if there is a problem, please update the required version of numpy.
      • numpy [required: >=1.20.0, installed: 1.22.4] according to pipdeptree.
    opened by IsaraAomi 3
  • Partially initialized module

    Not sure how to approach a fix.

    I'm using a pip install for a Django backend, and I get this message:

    File "C:.venv\lib\site-packages\nnabla_init_.py", line 18, in from . import _init # Must be imported first ImportError: cannot import name 'init' from partially initialized module 'nnabla' (most likely due to a circular import) (C:.venv\lib\site-packages\nnabla_init.py)

    opened by ergodaveh 2
  • segmentation fault occurs in convolution (group=in_channel) during mixed precision training

    When performing mixed precision training, I got a segmentation fault when I set the convolution group equal to the number of input channels.

    import numpy as np
    import nnabla as nn
    import nnabla.functions as F  
    import nnabla.parametric_functions as PF
    from nnabla.ext_utils import get_extension_context
    
    
    ctx = get_extension_context(
            "cudnn",
            device_id='0',
            type_config='half'
        )
    
    nn.set_default_context(ctx)
    x = nn.Variable((8, 32, 32, 32), need_grad=True)
    x.d = np.random.random(x.shape)
    
    h = PF.convolution(x, 32, (3,3), group=32)
    loss = F.sum(h)
    loss.forward(function_post_hook=lambda f: print(f'forward {f}'))
    loss.backward(function_post_hook=lambda f: print(f'backward {f}'))
    

    The above sample code outputs the following:

    2022-04-20 16:49:26,777 [nnabla][INFO]: Initializing CPU extension...
    2022-04-20 16:49:26,945 [nnabla][INFO]: Initializing CUDA extension...
    2022-04-20 16:49:26,971 [nnabla][INFO]: Initializing cuDNN extension...
    forward ConvolutionCudaCudnn
    forward SumCuda
    backward SumCuda
    Segmentation fault (core dumped)
    

    As an additional insight, if I set the channel_last option, as in h = PF.convolution(x, 32, (3,3), group=32, channel_last=True), this segmentation fault does not occur.

    opened by HiromichiKamata 0