The CLRS Algorithmic Reasoning Benchmark

Overview

Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms. The CLRS Algorithmic Reasoning Benchmark (CLRS) consolidates and extends previous work toward evaluating algorithmic reasoning by providing a suite of implementations of classical algorithms. These algorithms have been selected from the third edition of the standard Introduction to Algorithms textbook by Cormen, Leiserson, Rivest and Stein.

Installation

The CLRS Algorithmic Reasoning Benchmark can be installed with pip directly from GitHub, with the following command:

pip install git+https://github.com/deepmind/clrs.git

or from PyPI:

pip install dm-clrs

Getting started

To set up a Python virtual environment with the required dependencies, run:

python3 -m venv clrs_env
source clrs_env/bin/activate
python setup.py install

and to run our example baseline model:

python -m clrs.examples.run

Algorithms as graphs

CLRS implements the selected algorithms in an idiomatic way, aligning as closely as possible with the original CLRS third-edition pseudocode. By controlling the input data distribution to conform to the preconditions, we are able to automatically generate input/output pairs. We additionally provide trajectories of "hints" that expose the internal state of each algorithm, both to optionally simplify the learning challenge and to distinguish between different algorithms that solve the same overall task (e.g. sorting).

In the most generic sense, algorithms can be seen as manipulating sets of objects, along with any relations between them (which can themselves be decomposed into binary relations). Accordingly, we study all of the algorithms in this benchmark using a graph representation. In the event that objects obey a more strict ordered structure (e.g. arrays or rooted trees), we impose this ordering through inclusion of predecessor links.
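
To make this concrete, here is a minimal, hypothetical sketch (not part of the CLRS API) of how an ordered structure such as an array can be cast as a graph by adding predecessor links:

import numpy as np

def array_to_graph(values):
  # Illustrative only: node i holds values[i]; a predecessor link
  # from node i to node i - 1 imposes the array ordering.
  n = len(values)
  adj = np.zeros((n, n), dtype=np.float32)
  for i in range(1, n):
    adj[i, i - 1] = 1.0
  return adj

adj = array_to_graph([5, 2, 7, 1])  # 4 nodes, 3 predecessor links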

How it works

For each algorithm, we provide a canonical set of train, eval and test trajectories for benchmarking out-of-distribution generalization.

Split   Trajectories  Problem Size
Train   1000          16
Eval    32            16
Test    32            64

where "problem size" refers to e.g. the length of an array or number of nodes in a graph, depending on the algorithm. These trajectories can be used like so:

train_sampler, spec = clrs.clrs21_train("bfs")

for step in range(num_train_steps):
  feedback = train_sampler.next(batch_size)  # sample a batch of trajectories
  model.train(feedback.features)             # outputs are reserved for evaluation

Here, feedback is a namedtuple with the following structure:

Feedback = collections.namedtuple('Feedback', ['features', 'outputs'])
Features = collections.namedtuple('Features', ['inputs', 'hints', 'lengths'])

where the content of Features can be used for training and outputs is reserved for evaluation. Each field of the tuple is an ndarray with a leading batch dimension. Because hints are provided for the full algorithm trajectory, these contain an additional time dimension padded up to the maximum length max(T) of any trajectory within the dataset. The lengths field specifies the true length t <= max(T) for each trajectory, which can be used e.g. for loss masking.
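
For instance, the lengths field can be turned into a binary mask over the padded time dimension. The sketch below is illustrative only; the names and shapes are assumptions for the example, not part of the CLRS API:

import numpy as np

def time_mask(lengths, max_t):
  # mask[b, t] == 1.0 exactly when t < lengths[b]
  steps = np.arange(max_t)[None, :]                # [1, max_T]
  return (steps < lengths[:, None]).astype(np.float32)

lengths = np.array([3, 5])                         # true trajectory lengths
mask = time_mask(lengths, max_t=5)                 # [batch, max_T]
per_step_loss = np.ones((2, 5))                    # some per-step hint loss
loss = (per_step_loss * mask).sum() / mask.sum()   # average over valid steps only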

Please see the examples directory for full working Graph Neural Network (GNN) examples using JAX and the DeepMind JAX Ecosystem of libraries.

What we provide

Algorithms

Our initial CLRS-21 benchmark includes the following 21 algorithms. More algorithms will be supported in the near future.

  • Divide and conquer
    • Maximum subarray (Kadane)
  • Dynamic programming
    • Matrix chain order
    • Optimal binary search tree
  • Graphs
    • Depth-first search
    • Breadth-first search
    • Topological sort
    • Minimum spanning tree (Prim)
    • Single-source shortest-path (Bellman-Ford)
    • Single-source shortest-path (Dijkstra)
    • DAG shortest paths
    • All-pairs shortest-path (Floyd-Warshall)
  • Greedy
    • Task scheduling
  • Searching
    • Minimum
    • Binary search
    • Quickselect
  • Sorting
    • Insertion sort
    • Bubble sort
    • Heapsort
    • Quicksort
  • Strings
    • String matcher (naive)
    • String matcher (KMP)

Baselines

We additionally provide JAX implementations of the following GNN baselines:

  • Graph Attention Networks (Veličković et al., ICLR 2018)
  • Message-Passing Neural Networks (Gilmer et al., ICML 2017)
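
As a rough illustration of what such processors compute, the following sketch implements a single max-aggregation message-passing step over an adjacency matrix. It is a simplified stand-in, not the repository's implementation, and the weight names are invented for the example:

import numpy as np

def mpnn_step(h, adj, w_msg, w_upd):
  # h: [n, d] node features; adj: [n, n] adjacency (1 = edge present).
  msgs = h @ w_msg                                          # per-node messages
  masked = np.where(adj[:, :, None] > 0, msgs[None, :, :], -np.inf)
  agg = masked.max(axis=1)                                  # max over neighbours
  agg = np.where(np.isfinite(agg), agg, 0.0)                # isolated nodes get 0
  return np.maximum(0.0, h @ w_upd + agg)                   # ReLU node update

h = np.random.randn(4, 8)
adj = np.ones((4, 4))
h_next = mpnn_step(h, adj, np.random.randn(8, 8), np.random.randn(8, 8))

In the full MPNN of Gilmer et al., messages also condition on the receiving node and on edge features; this sketch omits that for brevity.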

Citation

To cite the CLRS Algorithmic Reasoning Benchmark:

@article{deepmind2021clrs,
  author = {Petar Veli\v{c}kovi\'{c} and Adri\`{a} Puigdom\`{e}nech Badia and
    David Budden and Razvan Pascanu and Andrea Banino and Misha Dashevskiy and
    Raia Hadsell and Charles Blundell},
  title = {The CLRS Algorithmic Reasoning Benchmark},
  year = {2021},
}
Comments
  • More input signals for evaluation

    More input signals for evaluation

    Hi, in some parts of the code you increase the number of input signals for a few algorithms for evaluation. But the generated dataset on Google storage does not seem to contain them (i.e., it contains only 32 trajectories of each). Will this change be reflected in future versions?

    opened by smahdavi4 9
  • Inability to reproduce paper results

    Inability to reproduce paper results

    Thanks to the authors for constructing this benchmark.

    I'm having trouble reproducing some of the test scores reported in the paper, in Table 2. Comparing my runs against the paper results (averaging across 3 seeds: 42, 43, and 44):

    • Graham Scan task: MPNN 0.6355 vs. 0.9104 published; PGN 0.3622 vs. 0.5687 published
    • Binary Search task: MPNN 0.2026 vs. 0.3683 published; PGN 0.4390 vs. 0.7695 published

    Here are the values I used for my reproduction experiments: [screenshot of hyperparameter values]

    Values for batch size, train items, learning rate, and hint teacher forcing noise were obtained from sections 4.1 and 4.2 of the paper. Values for eval_every, dropout, use_ln, and use_lstm (which were not found in the paper) were the default values in the provided run file. Additionally, I used processor type "pgn_mask" for the PGN experiments.

    What setting should I use to more accurately reproduce the paper results? Were there hyperparameter settings unspecified in the paper (or specified) that I am getting wrong?

    Finally, I noticed the most recent commit, fixing the axis for mean reduction in PGN. Would that cause PGN to perform differently than reported in the paper, and perhaps explain the discrepancy in the results I obtained?

    opened by CameronDiao 5
  • Is the paper still available?

    Is the paper still available?

    Thank you for the benchmark on neural algorithmic reasoning. I love the throwback to the classical CLRS textbook!

    I remember bumping into the PDF of the paper online, but cannot seem to access it anymore. Will the paper associated with this repo be available again soon?

    opened by chaitjo 3
  • Why no directed graph for FloydWarshall, Dijkstra, BFS and BellmanFord

    Why no directed graph for FloydWarshall, Dijkstra, BFS and BellmanFord

    Hi,

    What is the reason that you chose to use undirected graphs for the algorithms mentioned in the title? As far as I see, they should all be able to support directed graphs as well.

    Thanks!

    opened by sigeisler 2
  • Hint `A_t` in SCC

    Hint `A_t` in SCC

    Hi,

    thank you for the quite extensive work in putting CLRS together.

    Do you have a reference for, or reasoning behind, which hints you included? For example, can you elaborate on why you included hint A_t in strongly connected components?

    Thanks!

    opened by sigeisler 2
  • Sampling bug on undirected weighted graphs

    Sampling bug on undirected weighted graphs

    Hi, I think there is an issue in sampling undirected weighted graphs. The sampled graph is first symmetrized and then weights are sampled, which makes the weights of the two directions of an edge differ. For algorithms capable of handling directed graphs, this might not disrupt algorithm behavior. But for the ones that output undirected edges (e.g., MST Kruskal), this would not be the true behavior of the algorithm.

    opened by smahdavi4 2
  • Update CLRS models with multi-algorithm options:

    Update CLRS models with multi-algorithm options:

    • Example of multi-algorithm training with on-the-fly samples of different lengths and parameters.
    • Option to randomize positional input.
    • Option to move "pred_h" constant hints to inputs.
    • Option to enforce permutation constraints on the outputs of sorting algorithms.
    • Option to do soft or hard rematerialization of hints during training.
    • Option for gradient norm clipping.
    • Option to initialize scalar encoders with Xavier weights.
    • New processor types: with triplets and with gating.
    opened by copybara-service[bot] 1
  • Faster batching. Previous version resized batch for each sample, resulting in very long sampler creation times for algorithms like quickselect with big test batches.

    Faster batching. Previous version resized batch for each sample, resulting in very long sampler creation times for algorithms like quickselect with big test batches.

    opened by copybara-service[bot] 1
  • Pass processor factory instead of processor string when creating model. This makes it easier to add new processors as processor parameters don't need to be passed down to model and net.

    Pass processor factory instead of processor string when creating model. This makes it easier to add new processors as processor parameters don't need to be passed down to model and net.

    opened by copybara-service[bot] 1
  • Problems with jax

    Problems with jax

    I installed the required libraries with pip install -r requirements.txt. CUDA works and the GPU is visible to TensorFlow. However, when I try to run the code, an error occurs.

    "Unable to initialize backend 'cuda'": module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'

    I searched the web and found it might be caused by mismatched library versions.

    Could you please share the versions of those packages you used with me? Thank you very much.

    opened by WilliamLi0623 0
  • Added subgraph_mode

    Added subgraph_mode

    Use as the adjacency matrix the subgraph around nodes whose ground-truth hints changed from the previous iteration.

    Run with star subgraphs by using python3 -m clrs.examples.run --algorithm dfs --processor_type gatv2 --hint_mode encoded_decoded_nodiff --hint_teacher_forcing_noise 0.5 --subgraph_mode stars

    opened by beabevi 1
  • tensorflow-macos and tensorflow-metal

    tensorflow-macos and tensorflow-metal

    Any comments on using 'tensorflow-macos' and 'tensorflow-metal' in the 'clrs' ecosystem?

    I was able to install tensorflow-macos and tensorflow-metal in the clrs virtual environment. My AMD GPU is being recognized ... but 'clrs' is looking for: 'tpu_driver', 'cuda', 'tpu'. Any ideas?

    % python3 -m clrs.examples.run                                  
    I0605 17:03:39.042836 4560762368 run.py:196] Using CLRS30 spec: {'train': {'num_samples': 1000, 'length': 16, 'seed': 1}, 'val': {'num_samples': 32, 'length': 16, 'seed': 2}, 'test': {'num_samples': 32, 'length': 64, 'seed': 3}}
    I0605 17:03:39.044355 4560762368 run.py:180] Dataset found at /tmp/CLRS30/CLRS30_v1.0.0. Skipping download.
    Metal device set to: AMD Radeon Pro 5700 XT
    
    systemMemory: 128.00 GB
    maxCacheSize: 7.99 GB
    ...
      devices = jax.devices()
      logging.info(devices)
    
    I0605 17:03:40.532486 4560762368 xla_bridge.py:330] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker: 
    I0605 17:03:40.534332 4560762368 xla_bridge.py:330] Unable to initialize backend 'gpu': NOT_FOUND: Could not find registered platform with name: "cuda". Available platform names are: Interpreter Host
    I0605 17:03:40.536365 4560762368 xla_bridge.py:330] Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available.
    W0605 17:03:40.536445 4560762368 xla_bridge.py:335] No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
    
    

    https://github.com/google/jax/issues/7163

    opened by dbl001 0
Releases(v1.0.0)
  • v1.0.0(Jun 1, 2022)

    Main changes

    • Extended the benchmark from 21 to 30 tasks by adding the following:
      • Activity selection (Gavril, 1972)
      • Longest common subsequence
      • Articulation points
      • Bridges
      • Kosaraju's strongly connected components algorithm (Aho et al., 1974)
      • Kruskal's minimum spanning tree algorithm (Kruskal, 1956)
      • Segment intersection
      • Graham scan convex hull algorithm (Graham, 1972)
      • Jarvis' march convex hull algorithm (Jarvis, 1973)
    • Added new baseline processors:
      • Deep Sets (Zaheer et al., NIPS 2017) and Pointer Graph Networks (Veličković et al., NeurIPS 2020) as particularisations of the existing Message-Passing Neural Network processor.
      • End-to-End Memory Networks (Sukhbaatar et al., NIPS 2015)
      • Graph Attention Networks v2 (Brody et al., ICLR 2022)

    Detailed changes

    • Add PyPI installation instructions. by @copybara-service in https://github.com/deepmind/clrs/pull/6
    • Fix README typo. by @copybara-service in https://github.com/deepmind/clrs/pull/7
    • Expose Sampler base class in public API. by @copybara-service in https://github.com/deepmind/clrs/pull/8
    • Add dataset reader. by @copybara-service in https://github.com/deepmind/clrs/pull/12
    • Patch imbalanced samplers for DFS-based algorithms. by @copybara-service in https://github.com/deepmind/clrs/pull/15
    • Disk-based samplers for convex hull algorithms. by @copybara-service in https://github.com/deepmind/clrs/pull/16
    • Avoid dividing by zero in F_1 score computation. by @copybara-service in https://github.com/deepmind/clrs/pull/18
    • Sparsify the graphs generated for Kruskal. by @copybara-service in https://github.com/deepmind/clrs/pull/20
    • Option to add an lstm after the processor. by @copybara-service in https://github.com/deepmind/clrs/pull/19
    • Include dataset class and creation using tensorflow_datasets format. by @copybara-service in https://github.com/deepmind/clrs/pull/23
    • Change types of DataPoint and DataPoint members. by @copybara-service in https://github.com/deepmind/clrs/pull/22
    • Remove unnecessary data loading procedures. by @copybara-service in https://github.com/deepmind/clrs/pull/24
    • Modify example to run with the tf.data.Datasets dataset. by @copybara-service in https://github.com/deepmind/clrs/pull/25
    • Expose processors in CLRS by @copybara-service in https://github.com/deepmind/clrs/pull/21
    • Update CLRS-21 to CLRS-30. by @copybara-service in https://github.com/deepmind/clrs/pull/26
    • Update README with new algorithms. by @copybara-service in https://github.com/deepmind/clrs/pull/27
    • Add dropout to example. by @copybara-service in https://github.com/deepmind/clrs/pull/28
    • Make example download dataset. by @copybara-service in https://github.com/deepmind/clrs/pull/30
    • Force full dataset pipeline to be on the CPU. by @copybara-service in https://github.com/deepmind/clrs/pull/31
    • Set default dropout to 0.0 for now. by @copybara-service in https://github.com/deepmind/clrs/pull/32
    • Added support for GATv2 and masked GATs. by @copybara-service in https://github.com/deepmind/clrs/pull/33
    • Pad memory in MemNets and disable embeddings. by @copybara-service in https://github.com/deepmind/clrs/pull/34
    • baselines.py refactoring (2/N) by @copybara-service in https://github.com/deepmind/clrs/pull/36
    • baselines.py refactoring (3/N). by @copybara-service in https://github.com/deepmind/clrs/pull/38
    • Update readme. by @copybara-service in https://github.com/deepmind/clrs/pull/37
    • Generate more samples in tasks where the number of signals is small. by @copybara-service in https://github.com/deepmind/clrs/pull/40
    • Fix MemNet embeddings by @copybara-service in https://github.com/deepmind/clrs/pull/41
    • Supporting multiple attention heads in GAT and GATv2. by @copybara-service in https://github.com/deepmind/clrs/pull/42
    • Use GATv2 + add option to use different number of heads. by @copybara-service in https://github.com/deepmind/clrs/pull/43
    • Fix GAT processors. by @copybara-service in https://github.com/deepmind/clrs/pull/44
    • Fix samplers_test by @copybara-service in https://github.com/deepmind/clrs/pull/47
    • Update requirements.txt by @copybara-service in https://github.com/deepmind/clrs/pull/45
    • Bug in hint loss for CATEGORICAL type. The number of unmasked datapoints (jnp.sum(unmasked_data)) was computed over the whole time sequence instead of the pertinent time slice. by @copybara-service in https://github.com/deepmind/clrs/pull/53
    • Use internal rng for batch selection. Makes batch sampling deterministic given seed. by @copybara-service in https://github.com/deepmind/clrs/pull/49
    • baselines.py refactoring (6/N) by @copybara-service in https://github.com/deepmind/clrs/pull/52
    • Time-chunked datasets. by @copybara-service in https://github.com/deepmind/clrs/pull/48
    • Potential bug in edge diff decoding. by @copybara-service in https://github.com/deepmind/clrs/pull/54
    • Losses for chunked data. by @copybara-service in https://github.com/deepmind/clrs/pull/55
    • Changes to hint losses, mostly for decode_diffs=True. Before, only one of the terms of the MASK type loss was masked by gt_diff. Also, the loss was averaged over all time steps, including steps without diffs and therefore contributing 0 to the loss. Now we average only over the non-zero-diff steps. by @copybara-service in https://github.com/deepmind/clrs/pull/57
    • Adapt baseline model to process multiple algorithms with a single processor. by @copybara-service in https://github.com/deepmind/clrs/pull/59
    • Explicitly denote a hint learning mode, to delimit the tasks of interest to CLRS. by @copybara-service in https://github.com/deepmind/clrs/pull/60
    • Give names to encoder and decoder params. This facilitates analysis, especially in multi-algorithm training. by @copybara-service in https://github.com/deepmind/clrs/pull/63
    • Symmetrise the weights of sampled weighted undirected Erdos-Renyi graphs. by @copybara-service in https://github.com/deepmind/clrs/pull/62
    • Fix dataset size for augmented validation + test sets. by @copybara-service in https://github.com/deepmind/clrs/pull/65
    • Bug when hint mode is 'none': the multi-algorithm version needs something in the list diff decoders. by @copybara-service in https://github.com/deepmind/clrs/pull/66
    • Change requirements to a fixed tensorflow datasets nightly build. by @copybara-service in https://github.com/deepmind/clrs/pull/68
    • Patch KMP algorithm to incorporate the "reset" node. by @copybara-service in https://github.com/deepmind/clrs/pull/69
    • Allow for multiple-batch evaluation in example run script. by @copybara-service in https://github.com/deepmind/clrs/pull/70
    • Bug in SearchSampler: arrays should be sorted. by @copybara-service in https://github.com/deepmind/clrs/pull/71
    • Record separate hint eval scores for analysis. by @copybara-service in https://github.com/deepmind/clrs/pull/72
    • Symmetrised edges for PGN. by @copybara-service in https://github.com/deepmind/clrs/pull/73
    • Option for noise in teacher forcing by @copybara-service in https://github.com/deepmind/clrs/pull/74
    • Regularize PGN_MASK losses by predicting min_value-1 at missing edges instead of -10^5 by @copybara-service in https://github.com/deepmind/clrs/pull/75
    • Make encoded_decoded_nodiff default mode, and add flag to control teacher forcing noise. by @copybara-service in https://github.com/deepmind/clrs/pull/76
    • Detailed evaluation of hints in verbose mode. by @copybara-service in https://github.com/deepmind/clrs/pull/79
    • Pass processor factory instead of processor string when creating model. This makes it easier to add new processors as processor parameters don't need to be passed down to model and net. by @copybara-service in https://github.com/deepmind/clrs/pull/81
    • Update README. by @copybara-service in https://github.com/deepmind/clrs/pull/82
    • Use large negative number instead of 0 to discard non-connected edges for max aggregation in PGN processor. by @copybara-service in https://github.com/deepmind/clrs/pull/83
    • Add tensorflow requirement. by @copybara-service in https://github.com/deepmind/clrs/pull/84
    • Change deprecated tree_multimap to tree_map by @copybara-service in https://github.com/deepmind/clrs/pull/85
    • Increase version number for PyPI release. by @copybara-service in https://github.com/deepmind/clrs/pull/87

    Full Changelog: https://github.com/deepmind/clrs/compare/v0.0.2...v1.0.0

  • v0.0.2(Aug 26, 2021)

  • v0.0.1(Aug 26, 2021)

Owner
DeepMind