Overview

ReceptiveFieldAnalysisToolbox


This is RFA-Toolbox, a simple and easy-to-use library that allows you to optimize your neural network architectures using receptive field analysis (RFA) and create graph visualizations of your architecture.

Installation

Install this via pip:

pip install rfa_toolbox
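
Note that rendering the visualizations also requires the Graphviz system binaries (the dot executable) to be on your PATH; the graphviz Python package alone is not enough, and rendering otherwise fails with an ExecutableNotFound error (see the Comments below). On Debian/Ubuntu, for example:

sudo apt-get install graphviz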

What is Receptive Field Analysis?

Receptive Field Analysis (RFA) is a simple yet effective way to optimize the efficiency of any neural architecture without training it.

Usage

This library allows you to look for certain inefficiencies within your convolutional neural network setup without ever training the model. You do this by importing your architecture into the graph format of RFA-Toolbox and then using the built-in functions to visualize it with GraphViz. The visualization automatically marks layers predicted to be unproductive in red, and critical layers, which are potentially unproductive, in orange. In edge cases where the receptive field expands beyond the boundaries of the image on some but not all tensor axes, the layer is marked yellow, since such a layer is probably not operating at maximum efficiency.

Being able to detect these types of inefficiencies is especially useful if you plan to train your model on resolutions that are substantially lower than the design resolution of most models. Alternatively, you can use the graph representation to hook RFA-Toolbox more directly into your own program.

Examples

There are multiple ways to import your model into RFA-Toolbox for analysis, with additional ways being added in future releases.

PyTorch

The simplest way of importing a model is by directly extracting the compute-graph from the PyTorch-implementation of your model. Here is a simple example:

import torchvision
from rfa_toolbox import create_graph_from_pytorch_model, visualize_architecture
model = torchvision.models.alexnet()
graph = create_graph_from_pytorch_model(model)
visualize_architecture(
    graph, f"alexnet_32_pixel", input_res=32
).view()

This will create a graph of your model, visualize it using GraphViz, and color all layers that are predicted to be unproductive for an input resolution of 32x32 pixels:

rf_stides.PNG
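
The object returned by visualize_architecture behaves like a graphviz Digraph (note the .view() call above), so, assuming the standard graphviz Python API, you can presumably also write the rendered image to disk instead of opening a viewer:

visualize_architecture(graph, "alexnet_32_pixel", input_res=32).render(
    "alexnet_32_pixel", format="png", cleanup=True  # writes alexnet_32_pixel.png
)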

Keep in mind that the graph is reverse-engineered from the PyTorch JIT compiler; therefore, no looping logic is allowed within the forward pass of the model.

Custom

If you cannot import your model automatically from PyTorch, or you just want a quick visualization, you can also implement the model directly in the proprietary graph format of RFA-Toolbox. This is similar to coding a compute graph in a declarative style, as in TensorFlow 1.x.

from rfa_toolbox import visualize_architecture
from rfa_toolbox.graphs import EnrichedNetworkNode, LayerDefinition


conv1 = EnrichedNetworkNode(
    name="Conv1",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=64
    ),
    predecessors=[]
)
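# conv2 and conv3 both take conv1 as their predecessor, forming two parallel branches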
conv2 = EnrichedNetworkNode(
    name="Conv2",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=128
    ),
    predecessors=[conv1]
)

conv3 = EnrichedNetworkNode(
    name="Conv3",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=256
    ),
    predecessors=[conv1]
)

conv4 = EnrichedNetworkNode(
    name="Conv4",
    layer_info=LayerDefinition(
        name="Conv3x3",
        kernel_size=3, stride_size=1,
        filters=256
    ),
    predecessors=[conv2, conv3]
)

out = EnrichedNetworkNode(
    name="Softmax",
    layer_info=LayerDefinition(
        name="Fully Connected",
        units=1000
    ),
    predecessors=[conv4]
)
visualize_architecture(
    out, f"example_model", input_res=32
).view()

This will produce the following graph:

simple_conv.png
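
Because every EnrichedNetworkNode stores its predecessors, you can also work with the graph programmatically instead of (or in addition to) rendering it. Here is a minimal sketch that assumes only the attributes used in the example above (name and predecessors):

def collect_layers(node, visited=None):
    # walk backwards from the output node through all predecessors
    if visited is None:
        visited = []
    if any(n is node for n in visited):
        return visited  # node already collected (branches may share predecessors)
    visited.append(node)
    for predecessor in node.predecessors:
        collect_layers(predecessor, visited)
    return visited

for layer in collect_layers(out):
    print(layer.name)  # prints every layer exactly once, starting from the output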

A quick primer on the Receptive Field

To understand how RFA works, we first need to understand what a receptive field is and how it affects what the network learns to detect. Every layer in a (convolutional) neural network has a receptive field, which can be considered the "field of view" of that layer. More precisely, we define the receptive field as the area of the input influencing the output of a single position of the convolutional kernel. Here is a simple, 1-dimensional example:

rf.PNG

The first layer of this simple architecture can only ever "see" the information in the input pixels directly under its kernel, in this scenario 3 pixels. Another observation we can make from this example is that the receptive field size expands from layer to layer. This happens because consecutive layers also have kernel sizes greater than 1 pixel, which means that they combine multiple adjacent positions on the feature map into a single position in their output. In other words, every consecutive layer adds additional context to each feature map position by expanding the receptive field. This ultimately allows networks to go from detecting small, simple patterns to large, very complicated ones.

The effective size of the kernel is not the only factor influencing the growth of the receptive field; another important factor is the stride size:

rf_stides.PNG

The stride size is the size of the step between the individual kernel positions. Commonly, every possible position is evaluated, which does not affect the receptive field size in any way. When the stride size is greater than one, however, valid positions of the kernel are skipped, which reduces the size of the feature map. Since the information on the feature map is now condensed into fewer positions, the growth of the receptive field is multiplied for all subsequent layers. In real-world architectures, this is typically the case when downsampling layers, like convolutions with a stride size of 2, are used.
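
The following is a minimal sketch of this receptive-field arithmetic (illustrative only, not the library's internal implementation): each layer adds (kernel_size - 1) times the product of all previous strides to the receptive field.

def receptive_field_sizes(layers):
    """layers: list of (kernel_size, stride_size) tuples, in order."""
    rf, jump = 1, 1  # receptive field and cumulative stride ("jump")
    sizes = []
    for kernel_size, stride_size in layers:
        rf += (kernel_size - 1) * jump  # context added by this layer
        jump *= stride_size             # later layers skip positions
        sizes.append(rf)
    return sizes

print(receptive_field_sizes([(3, 1), (3, 1), (3, 1)]))  # [3, 5, 7]
print(receptive_field_sizes([(3, 2), (3, 1), (3, 1)]))  # [3, 7, 11]

With stride 1 everywhere, the receptive field grows linearly; a single stride-2 layer doubles the growth contributed by every layer after it.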

Why does the Receptive Field Matter?

At this point you may be wondering why the receptive field, of all things, is useful for optimizing an architecture. The short answer: because it determines where in the network patterns of a certain size can be processed. Simply speaking, each convolutional layer can only detect patterns up to a certain size because of its receptive field. Interestingly, this also means that there is an upper limit to the usefulness of expanding the receptive field. At the latest, this is the case when the receptive field of a layer is BIGGER than the input image, since no novel context can be added at this point. For convolutional layers this is a problem, because layers past this "border layer" now lack the primary mechanism convolutional layers use to improve the intermediate representation of the data, making these layers unproductive. If you are interested in the details of this phenomenon, I recommend reading the papers linked in the original repository.
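
Continuing the sketch above, here is a toy illustration of this border effect for a 32-pixel input (the "critical" and "unproductive" labels follow the definitions in the next section):

input_res = 32
layers = [(3, 2)] * 6  # six 3x3 convolutions, each with stride 2
sizes = receptive_field_sizes(layers)
for i, rf in enumerate(sizes):
    rf_of_input = sizes[i - 1] if i > 0 else 1
    if rf_of_input >= input_res:
        status = "unproductive"  # the input already covers the whole image
    elif rf >= input_res:
        status = "critical"      # pushes the field past the image border
    else:
        status = "ok"
    print(f"layer {i}: receptive field {rf} -> {status}")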

Optimizing Architectures using Receptive Field Analysis

So far, we have learned that expanding the receptive field is the primary mechanism convolutional layers use to improve the intermediate solution. At the point where this is no longer possible, layers can no longer contribute to the quality of the model's output; we refer to these as unproductive layers. Layers that push the receptive field size beyond the input resolution are referred to as critical layers. Critical layers are not necessarily unproductive, since they may still incorporate some novel context into the data, depending on how large the receptive field of their input is.

Of course, being able to predict why and which layers will become dead weight during training is highly useful, since we can adjust the design of the architecture to better fit our input resolution without spending any time training models. Depending on the requirements, we may choose to emphasize efficiency by primarily removing unproductive layers, or to focus on predictive performance by making the unproductive layers productive again.

We now illustrate how you might optimize an architecture using a simple example:

Let's take the ResNet architecture, a very popular CNN model. We want to train ResNet18 on ResizedImageNet16, which has a 16-pixel input resolution. When we apply Receptive Field Analysis, we can see that most convolutional layers will in fact not contribute to the inference process (unproductive layers are marked red, potentially unproductive layers orange):

resnet18.PNG

We can clearly see that most of the network's layers will not contribute anything useful to the quality of the output, since their receptive field sizes are too large.

From here we have multiple ways of optimizing the setup. Of course, we could simply increase the resolution to involve more layers in the inference process, but that is usually very expensive computationally. In the first scenario, we are not interested in increasing the predictive performance of the model; we simply want to save computational resources. We reduce the kernel size of the first layer from 7x7 to 3x3. This change allows the first three building blocks to contribute more to the quality of the prediction, since no layer in them is predicted to be unproductive anymore. We then simply replace the remaining building blocks with a simple output head. The new architecture looks like this:

resnet18eff.PNG

Note that all previously unproductive layers are now either removed or only marked as "critical", which is generally not a big problem, since the minimum receptive field size is "reset" by the skip connection after each building block. Also note that fully connected layers are always marked as critical or unproductive, since they technically have an infinite receptive field size.

The resulting architecture achieves slightly better predictive performance than the original, at substantially lower computational cost: in this case we save approx. 80% of the computations and improve the predictive performance slightly, from 17% to 18%.

In another scenario, we may not be satisfied with the predictive performance. In other words, we want to make use of the underutilized parameters of the network by turning all unproductive layers into productive ones. We achieve this by changing their receptive field sizes. The biggest lever for changing the receptive field size is the number of downsampling layers, since downsampling layers have a multiplicative effect on the growth of the receptive field for all consecutive layers. We can exploit this by simply removing the MaxPooling layer, the second layer of the original architecture. We also reduce the kernel size of the first layer from 7x7 to 3x3, and its stride size to 1. This drastically reduces the receptive field sizes across the entire architecture, making most layers productive again (the sketch below illustrates the effect). We address the remaining unproductive layers by removing the final downsampling layer and distributing the building blocks as evenly as possible among the three stages between the remaining downsampling layers.
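
As a back-of-the-envelope illustration of why this works, we can reuse the receptive-field arithmetic from the primer above (the layer lists below are simplified stand-ins for the ResNet18 stem, not an exact trace):

def final_receptive_field(layers):
    rf, jump = 1, 1
    for kernel_size, stride_size in layers:
        rf += (kernel_size - 1) * jump
        jump *= stride_size
    return rf

# original stem: 7x7/2 convolution, 3x3/2 max-pooling, then four 3x3 convolutions
print(final_receptive_field([(7, 2), (3, 2)] + [(3, 1)] * 4))  # 43
# modified stem: 3x3/1 convolution, pooling removed, then four 3x3 convolutions
print(final_receptive_field([(3, 1)] + [(3, 1)] * 4))          # 11

With a 16-pixel input, the original stem pushes the receptive field far past the image border after only a handful of layers, while the modified stem stays well below it.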

The resulting architecture now looks like this:

resnet18perf.PNG

The architecture now no longer has unproductive layers in its building blocks and has only 2 critical layers. This improved architecture achieves 34% Top1 accuracy on ResizedImageNet16, instead of the original architecture's 17%. However, this improvement comes at a price: the removed downsampling layers increase the computation required to process an image by roughly a factor of 8.

In any case, RFA-Toolbox allows you to optimize your convolutional neural network architectures for efficiency, predictive performance, or a sweet spot between the two, without the need for long-running trial-and-error sessions.

Credits

This package was created with Cookiecutter and the browniebroke/cookiecutter-pypackage project template.

Comments
  • required keyword attribute 'name' is undefined

    This layer uses a custom function in forward and yields:

        graph = create_graph_from_pytorch_model(m, input_res=in_shape)
    
      File "D:\Anaconda\envs\pyt\lib\site-packages\rfa_toolbox\encodings\pytorch\ingest_architecture.py", line 291, in create_graph_from_model
        return make_graph(tm, ref_mod=model).to_graph()
    
      File "D:\Anaconda\envs\pyt\lib\site-packages\rfa_toolbox\encodings\pytorch\ingest_architecture.py", line 172, in make_graph
        make_graph(
    
      File "D:\Anaconda\envs\pyt\lib\site-packages\rfa_toolbox\encodings\pytorch\ingest_architecture.py", line 172, in make_graph
        make_graph(
    
      File "D:\Anaconda\envs\pyt\lib\site-packages\rfa_toolbox\encodings\pytorch\ingest_architecture.py", line 172, in make_graph
        make_graph(
    
      File "D:\Anaconda\envs\pyt\lib\site-packages\rfa_toolbox\encodings\pytorch\ingest_architecture.py", line 132, in make_graph
        submodule_name = find_name(list(n.inputs())[0], self_input)
    
      File "D:\Anaconda\envs\pyt\lib\site-packages\rfa_toolbox\encodings\pytorch\ingest_architecture.py", line 34, in find_name
        cur = i.node().s("name")
    
    RuntimeError: required keyword attribute 'name' is undefined
    

    Perhaps default to a generic name if it can't be extracted

    bug 
    opened by OverLordGoldDragon 11
  • Is GraphViz an essential program of rfa_toolbox?

    Describe the bug: I met this error:

    Traceback (most recent call last):
      ...
      File "...\lib\site-packages\graphviz\_tools.py", line 172, in wrapper
        return func(*args, **kwargs)
      File "...\lib\site-packages\graphviz\backend\rendering.py", line 317, in render
        execute.run_check(cmd,
      File "...\lib\site-packages\graphviz\backend\execute.py", line 88, in run_check
        raise ExecutableNotFound(cmd) from e
    graphviz.backend.execute.ExecutableNotFound: failed to execute WindowsPath('dot'), make sure the Graphviz executables are on your systems' PATH

    Additional context: I am wondering whether GraphViz is an essential program of rfa_toolbox.

    Your answer and guide will be appreciated!

    bug 
    opened by songyuc 5
  • Support for loading tensorflow model

    Hi Team,

    Great work! It would be great if it were possible to load TensorFlow models as well. Hoping to see the feature soon.

    Thanks and Regards, Ramson Jehu K

    enhancement 
    opened by Ramsonjehu 4
  • Sizes of tensors must match except in dimension 1. Expected size 26 but got size 25 for tensor number 1 in the list.

    When loading a model from torch.hub I am getting the following error: Sizes of tensors must match except in dimension 1. Expected size 26 but got size 25 for tensor number 1 in the list

    Minimal working example:

    import torch
    from rfa_toolbox import create_graph_from_pytorch_model, visualize_architecture
    
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
    graph = create_graph_from_pytorch_model(model)
    

    Please see for more info: issues/6455

    bug 
    opened by Michelvl92 3
  • `width != height` support [FR]

    Thanks for your work.

    It'd be helpful for the receptive field `r` to take on the input's dimensionality - i.e. measure the receptive field of height and width separately, in case strides and kernel sizes aren't equal throughout the network. The current workaround is to move the dimension of interest to the front - so if (width, height) = (100, 200), we do (200, 100) and swap all network parameters accordingly.

    enhancement 
    opened by OverLordGoldDragon 3
  • Pooling Layer of PyTorch functional result in wrong graph

    Describe the bug

    The issue arises in complex architectures like InceptionV3 when functional pooling layers are used in a module that has multiple layers processed in parallel. In this case, the graph representation is incorrect.

    To reproduce:

    import torchvision
    from rfa_toolbox import (
        create_graph_from_pytorch_model,
        toggle_coerce_torch_functional,
        visualize_architecture,
    )

    # disable the raise condition and treat all functional layers as
    # convolutional layers with kernel_size=3 and stride_size=2
    toggle_coerce_torch_functional(True, kernel_size=3, stride_size=2)
    model = torchvision.models.inception_v3()
    graph = create_graph_from_pytorch_model(model)
    visualize_architecture(graph, "inceptionv3", input_res=32).view()
    

    Additional context: this case is currently classified as a raise condition and will crash the graph creation if not actively disabled, as in the example code, to avoid people drawing false conclusions due to this bug.

    This bug can easily be avoided by not using pooling layers from torch.nn.functional and instead using their object equivalents from torch.nn.

    bug 
    opened by MLRichter 2
Releases: v1.7.0