PaddleHelix (“螺旋桨” bio-computing toolkit): A Bio-Computing Platform Featuring Large-Scale Representation Learning and Multi-Task Deep Learning

Overview

English | 简体中文


Latest News

2021.10.25 The paper "Docking-based Virtual Screening with Multi-Task Learning" was accepted by BIBM 2021.

2021.07.29 PaddleHelix released a novel geometry-level molecular pre-training model that takes advantage of the 3D spatial structures of molecules. Please refer to GEM for more details.

2021.06.17 The PaddleHelix team won 2nd place in the OGB-LSC KDD Cup 2021 PCQM4M-LSC track, predicting the DFT-calculated HOMO-LUMO energy gap of molecules. Please refer to the solution for more details.

2021.05.20 PaddleHelix v1.0 released. 1) Updated from the static graph framework to the dynamic graph framework; 2) Added new applications: molecular generation and drug-drug synergy.

2021.05.18 The paper "Structure-aware Interactive Graph Neural Networks for the Prediction of Protein-Ligand Binding Affinity" was accepted by KDD 2021. The code is available here.

2021.03.15 The PaddleHelix team ranked 1st on the ogbg-molhiv and ogbg-molpcba leaderboards of OGB for molecular property prediction.


Introduction

PaddleHelix is a bio-computing toolkit that leverages machine learning approaches, especially deep neural networks, to facilitate development in the following areas:

  • Drug Discovery. Provides 1) large-scale pre-training models for compounds and proteins; 2) various applications: molecular property prediction, drug-target affinity prediction, and molecular generation (see the minimal sketch after this list).
  • Vaccine Design. Provides RNA design algorithms, including LinearFold and LinearPartition.
  • Precision Medicine. Provides a drug-drug synergy application.
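
As a quick illustration of how the pre-trained compound models are typically used, here is a minimal sketch based on the GEM snippets that appear later on this page; the config path and checkpoint name are assumptions about your local layout, not fixed APIs.

    import json
    import paddle
    from pahelix.model_zoo.gem_model import GeoGNNModel  # geometry-based compound encoder (GEM)

    # Assumed paths: a GeoGNN config JSON from the GEM app and a downloaded
    # pre-trained checkpoint (*.pdparams); adjust to your local setup.
    with open('model_configs/geognn_l8.json') as f:
        compound_encoder_config = json.load(f)

    compound_encoder = GeoGNNModel(compound_encoder_config)
    compound_encoder.set_state_dict(paddle.load('pretrain_models-chemrl_gem/regr.pdparams'))
    # The restored encoder can then be combined with a task-specific head for
    # molecular property prediction or other downstream applications.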

Resources

Application Platform

The PaddleHelix platform provides AI + biochemistry capabilities for drug discovery, vaccine design, and precision medicine scenarios.

Installation Guide

PaddleHelix is a bio-computing repository based on PaddlePaddle, a high-performance Parallelized Deep Learning Platform. The installation prerequisites and guide can be found here.

Tutorials

We provide abundant tutorials to help you navigate the repository and get started quickly.

Examples

We also provide examples that implement various algorithms and show how to run them.

Competition Solutions

The PaddleHelix team has participated in multiple bio-computing competitions. The solutions can be found here.

Guide for Developers

  • To develop new functions based on the source code of PaddleHelix, please refer to the guide for developers.
  • For more details of the APIs, please refer to the documentation.

Welcome to Join Us

We are looking for machine learning researchers/engineers and bioinformatics/computational chemistry researchers interested in AI-driven drug design. The positions are based in Shenzhen or Shanghai, China. Please send your resume to [email protected] or [email protected].

Comments
  • Every amino acid value in the protein predicted by the model is extremely large, e.g. 1028164807, so converting it to the corresponding letter raises list index out of range

    The model used is helixfold-single/user_data/model_data/helixfold-single.pdparams

    The failing code is in data_utils.py:

        def aatype_to_sequence(aatype):
            return ''.join([
                residue_constants.restypes_with_x[aatype[i]]
                for i in range(len(aatype))
            ])
    

    Traceback (most recent call last):
      File "/mnt/workspace/helixfold-single_original/helixfold_single_inference.py", line 121, in <module>
        main(args)
      File "/mnt/workspace/helixfold-single_original/helixfold_single_inference.py", line 103, in main
        args.fasta_file, af2_model_config)
      File "/mnt/workspace/helixfold-single_original/helixfold_single_inference.py", line 56, in sequence_to_batch
        sequence, description = read_fasta_file(fasta_file)
      File "/mnt/workspace/helixfold-single_original/helixfold_single_inference.py", line 42, in read_fasta_file
        with open(fasta_file, 'r') as f:
    TypeError: expected str, bytes or os.PathLike object, not NoneType

    (base) /mnt/workspace> /home/pai/bin/python /mnt/workspace/helixfold-single_original/helixfold_single_inference.py
    /home/pai/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
      import imp
    /home/pai/lib/python3.6/site-packages/OpenSSL/crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
      from cryptography import utils, x509
    W1202 15:20:51.727890 26523 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.0, Driver API Version: 11.4, Runtime API Version: 10.2
    W1202 15:20:51.730804 26523 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
    [RunTapeModel] freeze_tape: False
    model size: 1187148024
    Load model from helixfold-single/user_data/model_data/helixfold-single.pdparams
    2022-12-02 15:21:01.499896: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    Traceback (most recent call last):
      File "/mnt/workspace/helixfold-single_original/helixfold_single_inference.py", line 121, in <module>
        main(args)
      File "/mnt/workspace/helixfold-single_original/helixfold_single_inference.py", line 106, in main
        results = model(batch, compute_loss=False)
      File "/home/pai/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
        return self.forward(*inputs, **kwargs)
      File "/mnt/workspace/helixfold-single_original/utils/model_tape.py", line 115, in forward
        batch = self._forward_tape(batch)
      File "/mnt/workspace/helixfold-single_original/utils/model_tape.py", line 95, in _forward_tape
        tape_input = self._create_tape_input(batch)
      File "/mnt/workspace/helixfold-single_original/utils/model_tape.py", line 80, in _create_tape_input
        text = aatype_to_sequence(aatype[:seq_len])
      File "/mnt/workspace/helixfold-single_original/alphafold_paddle/data/data_utils.py", line 96, in aatype_to_sequence
        for i in range(len(aatype))
      File "/mnt/workspace/helixfold-single_original/alphafold_paddle/data/data_utils.py", line 96, in <listcomp>
        for i in range(len(aatype))
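
    A minimal diagnostic sketch (my own addition, not code from the repository): validating the predicted aatype indices before mapping them to letters turns the bare IndexError into an explicit message. The import path is assumed from the traceback above; restypes_with_x is the residue alphabet used by the snippet.

        from alphafold_paddle.common import residue_constants  # import path assumed from the traceback

        def aatype_to_sequence_checked(aatype):
            """Like aatype_to_sequence, but fails loudly on out-of-range indices."""
            n_types = len(residue_constants.restypes_with_x)
            bad = [int(a) for a in aatype if not 0 <= int(a) < n_types]
            if bad:
                raise ValueError(
                    'aatype contains out-of-range indices (e.g. %s); expected values in [0, %d]. '
                    'This usually means the loaded checkpoint does not match the inference code.'
                    % (bad[:3], n_types - 1))
            return ''.join(residue_constants.restypes_with_x[int(a)] for a in aatype)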

    opened by wcf653422590 10
  • Get a ValueError when I try to run helixfold-single.

    I use the Paddle 2.3, CUDA 11.2 Linux docker. I installed the dependencies according to the README and downloaded the official init model. But when I run the code, I get the ValueError. The code is PaddleHelix/apps/protein_folding/helixfold-single/helixfold_single_inference.py

    Like robin said, what's the problem?

    2022-09-26 09:24:25.062647: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2500000000 Hz
    /usr/local/lib/python3.7/dist-packages/paddle/fluid/framework.py:3623: DeprecationWarning: Op `slice` is executed through `append_op` under the dynamic mode, the corresponding API implementation needs to be upgraded to using `_C_ops` method.
      "using `_C_ops` method." % type, DeprecationWarning)
    Traceback (most recent call last):
      File "helixfold_single_inference.py", line 121, in <module>
        main(args)
      File "helixfold_single_inference.py", line 106, in main
        results = model(batch, compute_loss=False)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
        return self._dygraph_call_func(*inputs, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
        outputs = self.forward(*inputs, **kwargs)
      File "/tmp/helix/utils/model_tape.py", line 115, in forward
        batch = self._forward_tape(batch)
      File "/tmp/helix/utils/model_tape.py", line 98, in _forward_tape
        return_representations=True, return_last_n_weight=self.model_config.last_n_weight)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
        return self._dygraph_call_func(*inputs, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
        outputs = self.forward(*inputs, **kwargs)
      File "/tmp/helix/tape/others/protein_sequence_model_dynamic.py", line 218, in forward
        return_last_n_weight=return_last_n_weight)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
        return self._dygraph_call_func(*inputs, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
        outputs = self.forward(*inputs, **kwargs)
      File "/tmp/helix/tape/others/transformer_block.py", line 530, in forward
        is_recompute=self.training)
      File "/tmp/helix/tape/others/transformer_block.py", line 26, in recompute_wrapper
        return func(*args)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
        return self._dygraph_call_func(*inputs, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
        outputs = self.forward(*inputs, **kwargs)
      File "/tmp/helix/tape/others/transformer_block.py", line 480, in forward
        attn_results = self.self_attn(src, src, src, src_mask, relative_pos, rel_embeddings)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
        return self._dygraph_call_func(*inputs, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
        outputs = self.forward(*inputs, **kwargs)
      File "/tmp/helix/tape/others/transformer_block.py", line 398, in forward
        rel_att = self.disentangled_attention_bias(query_layer, key_layer, relative_pos, rel_embeddings, scale_factor)
      File "/tmp/helix/tape/others/transformer_block.py", line 367, in disentangled_attention_bias
        c2p_att = self.gather_4d(c2p_att, index=c2p_gather_idx)
      File "/tmp/helix/tape/others/transformer_block.py", line 343, in gather_4d
        stack_0 = paddle.tile(paddle.arange(start=0, end=a, step=1, dtype="float32").reshape([a, 1]), [b * c * d]).reshape([a, b, c, d]).cast(index.dtype)
      File "/usr/local/lib/python3.7/dist-packages/paddle/tensor/manipulation.py", line 3243, in reshape
        out, _ = _C_ops.reshape2(x, None, 'shape', shape)
    ValueError: (InvalidArgument) The 'shape' in ReshapeOp is invalid. The input tensor X'size must be equal to the capacity of 'shape'. But received X's shape = [1, 1067237297], X's size = 1067237297, 'shape' is [1, 16, 2, 2], the capacity of 'shape' is 64.
      [Hint: Expected capacity == in_size, but received capacity:64 != in_size:1067237297.] (at /root/paddlejob/workspace/env_run/Paddle/paddle/fluid/operators/reshape_op.cc:204)
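
    For context, the ValueError above is Paddle's generic element-count check for reshape (the product of the target shape must equal the input's number of elements); the huge in_size of 1067237297 is what makes the check fail here. A tiny standalone sketch that trips the same check (my own example, unrelated to the HelixFold code itself):

        import paddle

        x = paddle.arange(12)
        y = x.reshape([3, 4])         # OK: 12 elements in, 12 elements out
        z = x.reshape([1, 16, 2, 2])  # raises: capacity 64 != in_size 12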
    
    
    opened by yangjinhaoo 10
  • Run error in GEM

    An error is raised when I run GEM. I am using a single GPU (GeForce RTX 2080 Ti with 11 GB memory). My code is the same as that on GitHub:

        ### build model
        init_model = '/home/outdo/PaddleHelix/apps/pretrained_compound/ChemRL/GEM/pretrain_models-chemrl_gem/regr.pdparams'

        compound_encoder = GeoGNNModel(compound_encoder_config)
        model = DownstreamModel(model_config, compound_encoder)
        if metric == 'square':
            criterion = nn.MSELoss()
        else:
            criterion = nn.L1Loss()
        encoder_params = compound_encoder.parameters()
        head_params = exempt_parameters(model.parameters(), encoder_params)
        encoder_opt = paddle.optimizer.Adam(args.encoder_lr, parameters=encoder_params)
        head_opt = paddle.optimizer.Adam(args.head_lr, parameters=head_params)
        print('Total param num: %s' % (len(model.parameters())))
        print('Encoder param num: %s' % (len(encoder_params)))
        print('Head param num: %s' % (len(head_params)))
        for i, param in enumerate(model.named_parameters()):
            print(i, param[0], param[1].name)

        if not init_model is None and not args.init_model == "":
            compound_encoder.set_state_dict(paddle.load(args.init_model))
            print('Load state_dict from %s' % args.init_model)

    error information:

        ---------------------------------------------------------------------------
        OSError                                   Traceback (most recent call last)
        Input In [25], in <cell line: 5>()
              4 import paddle.fluid as fluid
              5 with fluid.device_guard("cpu"):
        ----> 6     compound_encoder = GeoGNNModel(compound_encoder_config)
              7     model = DownstreamModel(model_config, compound_encoder)
              8     if metric == 'square':

        File ~/PaddleHelix/pahelix/model_zoo/gem_model.py:81, in GeoGNNModel.__init__(self, model_config)
             78 self.bond_float_names = model_config['bond_float_names']
             79 self.bond_angle_float_names = model_config['bond_angle_float_names']
        ---> 81 self.init_atom_embedding = AtomEmbedding(self.atom_names, self.embed_dim)
             82 self.init_bond_embedding = BondEmbedding(self.bond_names, self.embed_dim)
             83 self.init_bond_float_rbf = BondFloatRBF(self.bond_float_names, self.embed_dim)

        File ~/PaddleHelix/pahelix/networks/compound_encoder.py:38, in AtomEmbedding.__init__(self, atom_names, embed_dim)
             36 self.embed_list = nn.LayerList()
             37 for name in self.atom_names:
        ---> 38     embed = nn.Embedding(
             39         CompoundKit.get_atom_feature_size(name) + 5,
             40         embed_dim,
             41         weight_attr=nn.initializer.XavierUniform())
             42     self.embed_list.append(embed)

        File ~/anaconda3/envs/paddlehelix/lib/python3.8/site-packages/paddle/nn/layer/common.py:1453, in Embedding.__init__(self, num_embeddings, embedding_dim, padding_idx, sparse, weight_attr, name)
           1451 self._remote_prefetch = False
           1452 self._name = name
        -> 1453 self.weight = self.create_parameter(
           1454     attr=self._weight_attr,
           1455     shape=self._size,
           1456     dtype=self._dtype,
           1457     is_bias=False)
           1459 if in_dynamic_mode() and padding_idx != -1:
           1460     with paddle.no_grad():

        File ~/anaconda3/envs/paddlehelix/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py:423, in Layer.create_parameter(self, shape, attr, dtype, is_bias, default_initializer)
            421 if isinstance(temp_attr, six.string_types) and temp_attr == "":
            422     temp_attr = None
        --> 423 return self._helper.create_parameter(temp_attr, shape, dtype, is_bias,
            424                                      default_initializer)

        File ~/anaconda3/envs/paddlehelix/lib/python3.8/site-packages/paddle/fluid/layer_helper_base.py:376, in LayerHelperBase.create_parameter(self, attr, shape, dtype, is_bias, default_initializer, stop_gradient, type)
            370     if is_used:
            371         raise ValueError(
            372             "parameter name [{}] have be been used. "
            373             "In dygraph mode, the name of parameter can't be same."
            374             "Please check the parameter attr value passed to self.create_parameter or "
            375             "constructor of dygraph Layers".format(attr.name))
        --> 376     return self.main_program.global_block().create_parameter(
            377         dtype=dtype,
            378         shape=shape,
            379         type=type,
            380         stop_gradient=stop_gradient,
            381         **attr._to_kwargs(with_initializer=True))
            382 else:
            383     self.startup_program.global_block().create_parameter(
            384         dtype=dtype,
            385         shape=shape,
            386         type=type,
            387         **attr._to_kwargs(with_initializer=True))

        File ~/anaconda3/envs/paddlehelix/lib/python3.8/site-packages/paddle/fluid/framework.py:3572, in Block.create_parameter(self, *args, **kwargs)
           3570     pass
           3571 else:
        -> 3572     initializer(param, self)
           3573 return param

        File ~/anaconda3/envs/paddlehelix/lib/python3.8/site-packages/paddle/fluid/initializer.py:605, in XavierInitializer.__call__(self, var, block)
            603 if self._uniform:
            604     limit = np.sqrt(6.0 / float(fan_in + fan_out))
        --> 605     out_var = _C_ops.uniform_random('shape', out_var.shape, 'min',
            606                                     -limit, 'max', limit, 'seed',
            607                                     self._seed, 'dtype', out_dtype)
            608 else:
            609     std = math.sqrt(2.0 / float(fan_in + fan_out))

        OSError: [operator < uniform_random > error]

    Could anyone help me? Thank you so much!

    opened by kaisermoon 5
  • Data preprocessing errors in the SIGN algorithm

    1. setxor error: for example, the input setxor(a=[1, 0], b=[0, 2]) yields [0, 1, 0, 2], []; in fact, given how bond_graph_base is generated, it is enough to take a[0] and b[1]. https://github.com/PaddlePaddle/PaddleHelix/blob/e5578f72c2a203a27d9df7da111f1ced826c1429/apps/drug_target_interaction/sign/dataset.py#L149

    2. The atoms output here use the atom's index in the feature matrix, which does not match the later atom_type; the provided preprocessed data (https://www.dropbox.com/sh/68vc7j5cvqo4p39/AAB_96TpzJWXw6N0zxHdsppEa) is fine, though. https://github.com/PaddlePaddle/PaddleHelix/blob/e5578f72c2a203a27d9df7da111f1ced826c1429/apps/drug_target_interaction/sign/preprocess_pdbbind.py#L280

    A point I am unsure about: 3. If edge a is [0, 1] and edge b is [1, 0], then edge c is [0, 0]; taking dist_mat[0, 0] gives edge c a length of inf, and the computed angle is 180 degrees (encoded as 5), but following the way angles are constructed for the other edges, the angle should be 0 degrees (encoded as 0). https://github.com/PaddlePaddle/PaddleHelix/blob/e5578f72c2a203a27d9df7da111f1ced826c1429/apps/drug_target_interaction/sign/dataset.py#L152

    opened by chrisxu2016 4
  • How does GEM-2 use 3D information?

    Hi, I recently read your paper "GEM-2: Next Generation Molecular Property Prediction Network by Modeling Full-range Many-body Interactions" and I'm quite impressed by the performance of GEM-2 on PCQM4Mv2. However, I have some difficulties in understanding its implementation. In the __call__ method of the class OptimusTransformerFn from https://github.com/PaddlePaddle/PaddleHelix/blob/dev/apps/pretrained_compound/ChemRL/GEM-2/src/featurizer.py I see two methods to compute 3D coordinates for each molecule: (1) raw3d and (2) rdkit3d. The first method seems to load the 3D information provided by the PCQM4Mv2 dataset, so it only applies to the training set. The second method seems to use some built-in algorithm of RDKit to compute 3D information and can be applied to the training, validation, and test sets. So here are my questions: (1) For the results reported in your paper, which method did you use to compute 3D information, raw3d or rdkit3d? (2) Does GEM-2 use 3D information during inference on the validation and test sets, or does it just turn off 3D information? (3) If possible, can you provide the pretrained weights for the GEM-2 model reported in the paper? Thank you!
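
    A hedged sketch of how "rdkit3d"-style coordinates are typically generated (an assumption about what the rdkit3d option likely does, not necessarily the exact code in featurizer.py):

        from rdkit import Chem
        from rdkit.Chem import AllChem

        def rdkit_3d_coords(smiles):
            """Embed a molecule in 3D with RDKit's distance-geometry method (ETKDG)."""
            mol = Chem.MolFromSmiles(smiles)
            mol = Chem.AddHs(mol)
            params = AllChem.ETKDGv3()
            params.randomSeed = 0
            if AllChem.EmbedMolecule(mol, params) == -1:
                raise RuntimeError('3D embedding failed for %s' % smiles)
            AllChem.MMFFOptimizeMolecule(mol)   # quick force-field relaxation
            mol = Chem.RemoveHs(mol)
            return mol.GetConformer().GetPositions()  # (num_atoms, 3) array of coordinates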

    opened by tiendatnguyen-vision 3
  • Enrich the installation part of the README and modify setup.py slightly

    Add instructions on how to create a new conda environment, and change the paddlepaddle version requirement from exactly 2.0.0rc0 to 2.0.0rc0 or higher.

    opened by Noisyntrain 3
  • `Model` object has no attribute decode in HelixFold

    Stepped through this issue and I've found that <simtk.openmm.app.internal.pdbstructure.Model object at 0x7f424e6f6750> is passed in to this method within openmm that expects a file object. It is possible this is an openmm issue, but it is currently blocking my usage of HelixFold. I have verified that I have the latest versions of both openmm and pdbfixer, and I have also recently pulled the updated setup_env file that changed the linking of openmm into simtk.

    Traceback (most recent call last):
      File "run_helixfold.py", line 375, in <module>
        main(args)
      File "run_helixfold.py", line 280, in main
        random_seed=random_seed)
      File "run_helixfold.py", line 160, in predict_structure
        output_dir, 0, timings)
      File "/home/common/proj/FoldingBenchMarks/HelixFold/apps/protein_folding/helixfold/alphafold_paddle/model/model.py", line 283, in postprocess
        relaxed_pdb_str = relaxer.process(prot=prot)[0]
      File "/home/common/proj/FoldingBenchMarks/HelixFold/apps/protein_folding/helixfold/alphafold_paddle/relax/relax.py", line 63, in process
        max_outer_iterations=self._max_outer_iterations)
      File "/home/common/proj/FoldingBenchMarks/HelixFold/apps/protein_folding/helixfold/alphafold_paddle/relax/amber_minimize.py", line 939, in run_pipeline
        pdb_string = clean_protein(prot, checks=checks)
      File "/home/common/proj/FoldingBenchMarks/HelixFold/apps/protein_folding/helixfold/alphafold_paddle/relax/amber_minimize.py", line 187, in clean_protein
        as_file = openmm_app.PDBFile(pdb_structure)
      File "/home/grads/bernardm/.conda/envs/helixfold/lib/python3.7/site-packages/simtk/openmm/app/pdbfile.py", line 96, in __init__
        pdb = PdbStructure(inputfile, load_all_models=True, extraParticleIdentifier=extraParticleIdentifier)
      File "/home/grads/bernardm/.conda/envs/helixfold/lib/python3.7/site-packages/openmm/app/internal/pdbstructure.py", line 153, in __init__
        self._load(input_stream)
      File "/home/grads/bernardm/.conda/envs/helixfold/lib/python3.7/site-packages/openmm/app/internal/pdbstructure.py", line 161, in _load
        if not isinstance(pdb_line, str):
    AttributeError: 'Model' object has no attribute 'decode'

    opened by bernym12 2
  • Question about the Branch Parallelism in Evoformer

    Hi

    I noticed that you introduce branch parallelism (BP) in your arXiv paper. I wonder whether the model structure implemented with BP is identical to the one in the AlphaFold2 paper; it appears to me that the computations are sequential in that paper.

    Thanks!

    opened by zyeric 2
  • Use newer OpenMM

    The setup_env script pins OpenMM to 7.5.1, which is an old release that isn't supported anymore. Could that be updated to the current release, or alternatively could the pin just be removed? As far as I can tell nothing in the code requires the old version.

    opened by peastman 2
  • How to control GPU memory usage when running the HelixFold model

    Following HelixFold's README_inference.md, I ran run_helixfold.py and ran out of GPU memory; I am using a single 12 GB RTX 3080 Ti. I tried reducing the batch size, but in the code the batch seems to be the feature file of the protein FASTA to be predicted. Is there a good way to reduce the model's GPU memory usage, or any other suggestions that would let the model run within 12 GB of GPU memory? Thanks a lot for your help!

    opened by TNTSAYou 2
  • Data preprocessing issues in the PaddleHelix/apps/drug_target_interaction/sign/ project

    When running the data preprocessing command from the code of the KDD 2021 paper "Structure-aware Interactive Graph Neural Networks for the Prediction of Protein-Ligand Binding Affinity", python preprocess_pdbbind.py --data_path_core YOUR_DATASET_PATH --data_path_refined YOUR_DATASET_PATH --dataset_name pdbbind2016 --output_path YOUR_OUTPUT_PATH --cutoff 5, I got the error shown in the screenshot attached to the original issue. How can I solve it? Any help would be appreciated.

    Versions used: Python 3.8.13, paddlepaddle-gpu 2.3.1.post112

    opened by tiezhuge 2
  • I have a question regarding the GEM-2 model and the PCQM4Mv2 dataset

    Hi, I have a question regarding the GEM-2 model.

    How did you measure the performance on the validation and test sets of the PCQM4Mv2 dataset?

    Because, as far as I know, they do not provide 3D coordinates.

    Thank you.

    opened by Sangyeup 3
  • Optimize the implementation of StructureModule.

    For the code changes in this PR, I wrote a unit test to compare numerical accuracy: https://gist.github.com/Xreki/f451fcb6c3dfe7d83d137b3f7c0ca3f1

    I collected the input/output shapes of rots_mul_rots and rots_mul_vecs in the model and found that there are mainly two configurations.

    • rots_mul_rots

    |  | a.shape | b.shape | out.shape | Notes |
    |---|---|---|---|---|
    | No broadcast | [2, 256, 8, 3, 3] | [2, 256, 8, 3, 3] | [2, 256, 8, 3, 3] | The original implementation needs 107 operators; after this PR, only 1 |
    | Broadcast | [2, 256, 1, 3, 3] | [2, 256, 8, 3, 3] | [2, 256, 8, 3, 3] | The original implementation needs 107 operators; after this PR, only 3 |

    • rots_mul_vecs

    |  | m.shape | v.shape | out.shape | Notes |
    |---|---|---|---|---|
    | No broadcast | [2, 256, 14, 3, 3] | [2, 256, 14, 3] | [2, 256, 14, 3] | After this PR, only 3 operators are needed |
    | Broadcast | [2, 256, 1, 3, 3] | [2, 256, 8, 3] | [2, 256, 8, 3] | After this PR, only 5 operators are needed |
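
    For intuition about why the operator count drops: batched rotation-times-rotation and rotation-times-vector can each be written as a single broadcasted matmul. A rough sketch of that idea (my own illustration, not necessarily the exact implementation in this PR):

        import paddle

        def rots_mul_rots(a, b):
            # a, b: [..., 3, 3] stacks of rotation matrices; broadcasting over the batch
            # dims covers both table rows (e.g. [2, 256, 1, 3, 3] x [2, 256, 8, 3, 3]).
            return paddle.matmul(a, b)

        def rots_mul_vecs(m, v):
            # m: [..., 3, 3], v: [..., 3]; add a trailing axis so the product is one
            # batched matmul, then drop it again.
            return paddle.matmul(m, v.unsqueeze(-1)).squeeze(-1)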

    opened by Xreki 0
Releases (latest: v1.1.0)
  • v1.1.0(Dec 15, 2021)

  • v1.0(Jul 9, 2021)

  • v1.0b(Dec 22, 2020)

    The first version of PaddleHelix. PaddleHelix is a machine-learning-based bio-computing framework aiming at facilitating the development of the following areas: vaccine design, drug discovery, and precision medicine. PaddleHelix provides examples of representation learning of compounds, representation learning of proteins, drug-target interaction, and RNA folding.
