MIM: MIM Installs OpenMMLab Packages

Overview

MIM provides a unified API for launching and installing OpenMMLab projects and their extensions, and managing the OpenMMLab model zoo.

Installation

  1. Create a conda virtual environment and activate it.

    conda create -n open-mmlab python=3.7 -y
    conda activate open-mmlab
  2. Install PyTorch and torchvision following the official instructions, e.g.,

    conda install pytorch torchvision -c pytorch

    Note: Make sure that your compilation CUDA version and runtime CUDA version match. You can check the supported CUDA version for precompiled packages on the PyTorch website.

  3. Install MIM

    • from PyPI

      python -m pip install openmim
    • from source

      git clone https://github.com/open-mmlab/mim.git
      cd mim
      pip install -e .
      # python setup.py develop or python setup.py install
  4. Auto completion (Optional)

    In order to activate shell completion, you need to inform your shell that completion is available for the mim command.

    • For Bash, add this to ~/.bashrc:

      eval "$(_MIM_COMPLETE=source mim)"
    • For Zsh, add this to ~/.zshrc:

      eval "$(_MIM_COMPLETE=source_zsh mim)"
    • For Fish, add this to ~/.config/fish/completions/mim.fish:

      eval (env _MIM_COMPLETE=source_fish mim)

    Open a new shell to enable completion, or run the eval command directly in your current shell to enable it temporarily.

    The above eval command invokes mim every time a new shell starts, which may slow down shell startup noticeably.

    Alternatively, you can generate the completion script once and activate it from a file; see the activation-script section of the Click documentation.

Command

1. install

asciicast

  • command

    # install latest version of mmcv-full
    > mim install mmcv-full  # wheel
    # install 1.3.1
    > mim install mmcv-full==1.3.1
    # install master branch
    > mim install mmcv-full -f https://github.com/open-mmlab/mmcv.git
    
    # install latest version of mmcls
    > mim install mmcls
    # install 0.11.0
    > mim install mmcls==0.11.0  # v0.11.0
    # install master branch
    > mim install mmcls -f https://github.com/open-mmlab/mmclassification.git
    # install local repo
    > git clone https://github.com/open-mmlab/mmclassification.git
    > cd mmclassification
    > mim install .
    
    # install an extension based on OpenMMLab
    > mim install mmcls-project -f https://github.com/xxx/mmcls-project.git
  • api

    from mim import install
    
    # install the latest version of mmcv-full
    install('mmcv-full')
    
    # install mmcv-full from a custom find link, e.g. the master branch
    install('mmcv-full', find_url='https://github.com/open-mmlab/mmcv.git')
    install('mmcv-full==1.3.1', find_url='https://github.com/open-mmlab/mmcv.git')
    
    # install mmcls; mmcv is installed automatically if it is missing
    install('mmcls')
    
    # install an extension based on OpenMMLab
    install('mmcls-project', find_url='https://github.com/xxx/mmcls-project.git')
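
  The API form makes it easy to verify an installation from the same script. A minimal sketch, assuming only that the installed mmcv package exposes a standard __version__ attribute:

    import importlib

    from mim import install

    # Install mmcv-full via the API, then confirm the package imports
    # and report which version ended up in the environment.
    install('mmcv-full')
    mmcv = importlib.import_module('mmcv')
    print(mmcv.__version__)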
2. uninstall

asciicast

  • command

    # uninstall mmcv
    > mim uninstall mmcv-full
    
    # uninstall mmcls
    > mim uninstall mmcls
  • api

    from mim import uninstall
    
    # uninstall mmcv
    uninstall('mmcv-full')
    
    # uninstall mmcls
    uninstall('mmcls')
3. list

asciicast

  • command

    > mim list
    > mim list --all
  • api

    from mim import list_package
    
    list_package()
    list_package(True)
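
  Because list_package and install are plain Python functions, the listing can drive installs. A minimal sketch, assuming list_package() returns (name, version, source) tuples matching the three columns printed by mim list:

    from mim import install, list_package

    # Install mmcls only if it is not already present.
    # Assumption: each entry of list_package() starts with the package name,
    # mirroring the Package / Version / Source columns of `mim list`.
    installed = {entry[0] for entry in list_package()}
    if 'mmcls' not in installed:
        install('mmcls')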
4. search

asciicast

  • command

    > mim search mmcls
    > mim search mmcls==0.11.0 --remote
    > mim search mmcls --config resnet18_b16x8_cifar10
    > mim search mmcls --model resnet
    > mim search mmcls --dataset cifar-10
    > mim search mmcls --valid-field
    > mim search mmcls --condition 'bs>45,epoch>100'
    > mim search mmcls --condition 'bs>45 epoch>100'
    > mim search mmcls --condition '128'
    > mim search mmcls --sort bs epoch
    > mim search mmcls --field epoch bs weight
    > mim search mmcls --exclude-field weight paper
  • api

    from mim import get_model_info
    
    get_model_info('mmcls')
    get_model_info('mmcls==0.11.0', local=False)
    get_model_info('mmcls', models=['resnet'])
    get_model_info('mmcls', training_datasets=['cifar-10'])
    get_model_info('mmcls', filter_conditions='bs>45,epoch>100')
    get_model_info('mmcls', filter_conditions='bs>45 epoch>100')
    get_model_info('mmcls', filter_conditions='128')
    get_model_info('mmcls', sorted_fields=['bs', 'epoch'])
    get_model_info('mmcls', shown_fields=['epoch', 'bs', 'weight'])
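
  The search result can also be inspected programmatically. A minimal sketch, assuming get_model_info() returns a pandas DataFrame built from the package's metafiles:

    from mim import get_model_info

    # Fetch the model table for mmcls, restricted to ResNet models trained
    # on CIFAR-10, and look at the available metadata fields.
    info = get_model_info('mmcls', models=['resnet'],
                          training_datasets=['cifar-10'])
    print(info.columns)  # fields such as epoch, bs, weight, ...
    print(info.head())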
5. download

asciicast

  • command

    > mim download mmcls --config resnet18_b16x8_cifar10
    > mim download mmcls --config resnet18_b16x8_cifar10 --dest .
  • api

    from mim import download
    
    download('mmcls', ['resnet18_b16x8_cifar10'])
    download('mmcls', ['resnet18_b16x8_cifar10'], dest_root='.')
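
  The return value can be captured to locate the files that were fetched. A minimal sketch, assuming download() returns the filenames of the downloaded checkpoints and that both config names exist in the installed mmcls:

    from mim import download

    # Fetch two configs and their pretrained checkpoints into ./checkpoints.
    # Assumption: download() returns the checkpoint filenames it fetched.
    files = download('mmcls',
                     ['resnet18_b16x8_cifar10', 'resnet34_b16x8_cifar10'],
                     dest_root='./checkpoints')
    print(files)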
6. train

asciicast

  • command

    # Train models on a single server with one GPU
    > mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1
    # Train models on a single server with 4 GPUs and pytorch distributed
    > mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 4 \
        --launcher pytorch
    # Train models on a slurm HPC with one 8-GPU node
    > mim train mmcls resnet101_b16x8_cifar10.py --launcher slurm --gpus 8 \
        --gpus-per-node 8 --partition partition_name --work-dir tmp
    # Print help messages of sub-command train
    > mim train -h
    # Print help messages of sub-command train and the training script of mmcls
    > mim train mmcls -h
  • api

    from mim import train
    
    train(repo='mmcls', config='resnet18_b16x8_cifar10.py', gpus=1,
          other_args='--work-dir tmp')
    train(repo='mmcls', config='resnet18_b16x8_cifar10.py', gpus=4,
          launcher='pytorch', other_args='--work-dir tmp')
    train(repo='mmcls', config='resnet18_b16x8_cifar10.py', gpus=8,
          launcher='slurm', gpus_per_node=8, partition='partition_name',
          other_args='--work-dir tmp')
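
  The download and train APIs compose into a small end-to-end script. A minimal sketch using only the arguments documented above; the work directory name is arbitrary:

    from mim import download, train

    # Fetch the config (and checkpoint) into the current directory, then
    # launch single-GPU training on it.
    download('mmcls', ['resnet18_b16x8_cifar10'], dest_root='.')
    train(repo='mmcls', config='resnet18_b16x8_cifar10.py', gpus=1,
          other_args='--work-dir tmp')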
7. test

asciicast

  • command

    # Test models on a single server with 1 GPU, report accuracy
    > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
        tmp/epoch_3.pth --gpus 1 --metrics accuracy
    # Test models on a single server with 1 GPU, save predictions
    > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
        tmp/epoch_3.pth --gpus 1 --out tmp.pkl
    # Test models on a single server with 4 GPUs, pytorch distributed,
    # report accuracy
    > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
        tmp/epoch_3.pth --gpus 4 --launcher pytorch --metrics accuracy
    # Test models on a slurm HPC with one 8-GPU node, report accuracy
    > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
        tmp/epoch_3.pth --gpus 8 --metrics accuracy --partition \
        partition_name --gpus-per-node 8 --launcher slurm
    # Print help messages of sub-command test
    > mim test -h
    # Print help messages of sub-command test and the testing script of mmcls
    > mim test mmcls -h
  • api

    from mim import test
    test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
         checkpoint='tmp/epoch_3.pth', gpus=1, other_args='--metrics accuracy')
    test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
         checkpoint='tmp/epoch_3.pth', gpus=1, other_args='--out tmp.pkl')
    test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
         checkpoint='tmp/epoch_3.pth', gpus=4, launcher='pytorch',
         other_args='--metrics accuracy')
    test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
         checkpoint='tmp/epoch_3.pth', gpus=8, partition='partition_name',
         launcher='slurm', gpus_per_node=8, other_args='--metrics accuracy')
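
  A typical follow-up is to evaluate the newest checkpoint written by a training run. A minimal sketch, assuming checkpoints land in tmp/epoch_*.pth as in the command examples above:

    import glob
    import os

    from mim import test

    # Pick the most recently written checkpoint and evaluate it on one GPU.
    checkpoints = glob.glob('tmp/epoch_*.pth')
    latest = max(checkpoints, key=os.path.getmtime)
    test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
         checkpoint=latest, gpus=1, other_args='--metrics accuracy')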
8. run

asciicast

  • command

    # Get the Flops of a model
    > mim run mmcls get_flops resnet101_b16x8_cifar10.py
    # Publish a model
    > mim run mmcls publish_model input.pth output.pth
    # Train models on a slurm HPC with one GPU
    > srun -p partition --gres=gpu:1 mim run mmcls train \
        resnet101_b16x8_cifar10.py --work-dir tmp
    # Test models on a slurm HPC with one GPU, report accuracy
    > srun -p partition --gres=gpu:1 mim run mmcls test \
        resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy
    # Print help messages of sub-command run
    > mim run -h
    # Print help messages of sub-command run, list all available scripts in
    # codebase mmcls
    > mim run mmcls -h
    # Print help messages of sub-command run, print the help message of
    # training script in mmcls
    > mim run mmcls train -h
  • api

    from mim import run
    
    run(repo='mmcls', command='get_flops',
        other_args='resnet101_b16x8_cifar10.py')
    run(repo='mmcls', command='publish_model',
        other_args='input.pth output.pth')
    run(repo='mmcls', command='train',
        other_args='resnet101_b16x8_cifar10.py --work-dir tmp')
    run(repo='mmcls', command='test',
        other_args='resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy')
9. gridsearch

asciicast

  • command

    # Parameter search on a single server with one GPU, search learning
    # rate
    > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
        --search-args '--optimizer.lr 1e-2 1e-3'
    # Parameter search on a single server with one GPU, search
    # weight_decay
    > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
        --search-args '--optimizer.weight_decay 1e-3 1e-4'
    # Parameter search on a single server with one GPU, search learning
    # rate and weight_decay
    > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
        --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 \
        1e-4'
    # Parameter search on a slurm HPC with one 8-GPU node, search learning
    # rate and weight_decay
    > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
        --partition partition_name --gpus-per-node 8 --launcher slurm \
        --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 \
        1e-4'
    # Parameter search on a slurm HPC with one 8-GPU node, search learning
    # rate and weight_decay, max parallel jobs is 2
    > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
        --partition partition_name --gpus-per-node 8 --launcher slurm \
        --max-workers 2 --search-args '--optimizer.lr 1e-2 1e-3 \
        --optimizer.weight_decay 1e-3 1e-4'
    # Print the help message of sub-command gridsearch
    > mim gridsearch -h
    # Print the help message of sub-command gridsearch and the help message of the
    # training script of codebase mmcls
    > mim gridsearch mmcls -h
  • api

    from mim import gridsearch
    
    gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
               search_args='--optimizer.lr 1e-2 1e-3',
               other_args='--work-dir tmp')
    gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
               search_args='--optimizer.weight_decay 1e-3 1e-4',
               other_args='--work-dir tmp')
    gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
               search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                            ' 1e-3 1e-4',
               other_args='--work-dir tmp')
    gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
               partition='partition_name', gpus_per_node=8, launcher='slurm',
               search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                           ' 1e-3 1e-4',
               other_args='--work-dir tmp')
    gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
               partition='partition_name', gpus_per_node=8, launcher='slurm',
               max_workers=2,
               search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                           ' 1e-3 1e-4',
               other_args='--work-dir tmp')
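
  The search_args string can be assembled from a dict of candidate values, which keeps larger sweeps readable. A minimal sketch; gridsearch is expected to launch one run per combination, four in this case:

    from mim import gridsearch

    # Hyper-parameter names follow the config's dotted paths, as in the
    # command examples above.
    space = {
        '--optimizer.lr': [1e-2, 1e-3],
        '--optimizer.weight_decay': [1e-3, 1e-4],
    }
    search_args = ' '.join(
        '{} {}'.format(key, ' '.join(str(v) for v in values))
        for key, values in space.items())
    gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
               search_args=search_args, other_args='--work-dir tmp')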

Build custom projects with MIM

We provide some examples of how to build custom projects based on OpenMMLab codebases and MIM in MIM-Example. In mmcls_custom_backbone, we define a custom backbone and a classification config file that uses it. To train this model, you can use the following command (a rough sketch of such a backbone follows it):

# The working directory is `mim-example/mmcls_custom_backbone`
PYTHONPATH=$PWD:$PYTHONPATH mim train mmcls custom_net_config.py --work-dir tmp --gpus 1
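
As a rough illustration of what such a project contains, the custom backbone is typically registered into the mmcls model registry so that the config can refer to it by name. The snippet below is a minimal sketch under that assumption; the class name and file are placeholders, not the actual contents of MIM-Example:

# custom_net.py -- a placeholder backbone registered with the mmcls (0.x) registry
import torch.nn as nn

from mmcls.models.builder import BACKBONES


@BACKBONES.register_module()
class CustomNet(nn.Module):
    """A toy backbone: one conv stage, returning a tuple of feature maps."""

    def __init__(self, in_channels=3, out_channels=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # mmcls backbones conventionally return a tuple of feature maps.
        return (self.stem(x),)

The config then selects it with model = dict(backbone=dict(type='CustomNet', ...)), and the PYTHONPATH prefix in the command above makes the module importable when mim launches the packaged training script.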

Contributing

We appreciate all contributions to improve mim. Please refer to CONTRIBUTING.md for the contributing guideline.

Comments
  • :books: Refine error message to show how to install mmengine.

    :books: Refine error message to show how to install mmengine.

    Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

    Motivation

    Refine error message to tell the user how to install mmengine.

    Modification

    Modify the error message in mim to:

    Please install mmengine to use the download command: mim install mmengine.
    

    BC-breaking (Optional)

    Does the modification introduce changes that break the back-compatibility of the downstream repos? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

    Use cases (Optional)

    If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

    Checklist

    1. Pre-commit or other linting tools are used to fix the potential lint issues.
    2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls.
    4. The documentation has been modified accordingly, like docstring or example tutorials.
    opened by saratrajput 7
  • KeyError: 'Cascade Mask R-CNN'

    KeyError: 'Cascade Mask R-CNN'

    1. I've followed the installation steps from https://mmdetection.readthedocs.io/en/v2.25.0/get_started.html
    2. Then I try to execute the following command from the manual:
    $ mim download mmdet --config yolov3_mobilenetv2_320_300e_coco --dest .
    ~/miniconda3/envs/openmmlab/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
      warnings.warn("Setuptools is replacing distutils.")
    Traceback (most recent call last):
      File "~/miniconda3/envs/openmmlab/bin/mim", line 33, in <module>
        sys.exit(load_entry_point('openmim', 'console_scripts', 'mim')())
      File "~/miniconda3/envs/openmmlab/lib/python3.8/site-packages/click/core.py", line 829, in __call__
        return self.main(*args, **kwargs)
      File "~/miniconda3/envs/openmmlab/lib/python3.8/site-packages/click/core.py", line 782, in main
        rv = self.invoke(ctx)
      File "~/miniconda3/envs/openmmlab/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "~/miniconda3/envs/openmmlab/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "~/miniconda3/envs/openmmlab/lib/python3.8/site-packages/click/core.py", line 610, in invoke
        return callback(*args, **kwargs)
      File "~/repositories/mim/mim/commands/download.py", line 44, in cli
        download(package, configs, dest_root)
      File "~/repositories/mim/mim/commands/download.py", line 75, in download
        model_info = get_model_info(
      File "~/repositories/mim/mim/commands/search.py", line 170, in get_model_info
        dataframe = convert2df(metadata)
      File "~/repositories/mim/mim/commands/search.py", line 396, in convert2df
        for key, value in name2collection[collection_name].items():
    KeyError: 'Cascade Mask R-CNN'
    

    I've also tried to install mim directly from master, but the same error appears.

    opened by tik0 7
  • [Fix] Fix the punctuation display problem and the path separator problem on Windows

    [Fix] Fix the punctuation display problem and the path separator problem on Windows

    Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

    Motivation

    1. Fix the punctuation display problem on Windows when running the command mim run mmdet -h.

    2. Fix the path separator problem on Windows when running the command mim run mmdet -h.

    Modification

    Replace all Don’t -> Don\'t for problem 1. In mim/click/customcommand.py, '/' -> os.sep for problem 2.

    opened by RangeKing 6
  • Bypass SSL check when installing packages

    Bypass SSL check when installing packages

    I'm working in an environment with a strict custom proxy enforcement which blocks ssl verification to many sites including package hosts. Is there a way to bypass this verification similar to how pip allows the following?

    --trusted-host {host}
    
    opened by Xylobyte 6
  • Specifying gpu-ids with mim train

    Specifying gpu-ids with mim train

    Describe the feature

    I would like to specify the gpu-ids when I run mim train.

    mim train mmcls config.py --gpu-ids 1
    

    Currently, I think I can only specify the gpus as shown below.

    mim train mmcls config.py --gpus 1
    

    Motivation

    It is inconvenient not to be able to specify the GPU IDs.

    Related resources

    https://github.com/open-mmlab/mmclassification/blob/master/tools/train.py#L38-L43

    opened by okotaku 6
  • use site-packages/${project}/.mim/config as the base config

    use site-packages/${project}/.mim/config as the base config

    Describe the feature

    I would like to be able to specify the config under .mim as the base config and run it as shown below.

    _base_ = [
        'site-packages/mmcls/.mim/config/_base_/models/resnet18_cifar.py',
        'site-packages/mmcls/.mim/config/_base_/datasets/cifar10_bs16.py',
        'site-packages/mmcls/.mim/config/_base_/schedules/cifar10_bs128.py',
        'site-packages/mmcls/.mim/config/_base_/default_runtime.py'
    ]
    

    Motivation

    When specifying a config in the repo as the base config, it is inconvenient to have to either clone the repo locally or copy the needed config under .mim.

    Additional context

    The following part of mmcv loads the base config. I think it is possible to load the config from .mim by changing this part; is this the proper way? https://github.com/open-mmlab/mmcv/blob/76cfd77b3a88ce17508471bf335829eb0628abcf/mmcv/utils/config.py#L239-L269

    Is this the proper way to do it, or is there a way to complete it with a .mim repo?

    opened by okotaku 6
  • Installation of rc versions does not work correctly

    Installation of rc versions does not work correctly

    Description

    If I run this in the command line:

    pip install openmim
    mim install mmdet>=3.0.0rc0

    then mim list prints

    Package    Version    Source
    ---------  ---------  -----------------------------------------
    mmdet      2.25.3     https://github.com/open-mmlab/mmdetection
    

    Expected behaviour

    mim install mmdet>=3.0.0rc0 should install mmdet v3.0.0rc2 (the latest rc at the time of writing).

    Probable cause

    Maybe mim installs the newest package released after 3.0.0rc0 by date, which can lead to the behaviour above.

    Perhaps the version numbers need to be parsed instead of just installing by date of release?

    opened by vavanade 5
  • [Chore] Fix the issue that the click version is locked at 7.x

    [Chore] Fix the issue that the click version is locked at 7.x

    Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

    Motivation

    Fix the issue that the click version is locked at 7.x (fixes #123). In addition, add .vscode to .gitignore to make VS Code users happy.

    Modification

    • .gitignore
    • setup.py
    • requirements/install.txt
    • mim/click/__init__.py
    • mim/click/compat.py
    • mim/click/option.py
    • mim/commands/download.py
    • mim/commands/install.py
    • mim/commands/search.py
    • mim/commands/uninstall.py

    BC-breaking (Optional)

    No

    Use cases (Optional)

    No

    Checklist

    1. Pre-commit or other linting tools are used to fix the potential lint issues.
    2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls.
    4. The documentation has been modified accordingly, like docstring or example tutorials.
    opened by ice-tong 5
  • Multiple GPU training failed

    Multiple GPU training failed

    I tried to use 2 GPUs for training, but it raised an error:

    PYTHONPATH=$PWD:$PYTHONPATH mim train mmaction configs/localization/apn_coralrandom_r3dsony_32x4_10e_thumos14_flow.py --gpus 2 --validate
    
    2021-06-13 16:38:48,309 - mmaction - INFO - workflow: [('train', 1)], max: 10 epochs
    Traceback (most recent call last):
      File "/home/louis/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmaction/tools/train.py", line 199, in <module>
        main()
      File "/home/louis/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmaction/tools/train.py", line 195, in main
        meta=meta)
      File "/home/louis/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmaction/apis/train.py", line 163, in train_model
        runner.run(data_loaders, cfg.workflow, cfg.total_epochs, **runner_kwargs)
      File "/home/louis/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/home/louis/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
        self.run_iter(data_batch, train_mode=True, **kwargs)
      File "/home/louis/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
        **kwargs)
      File "/home/louis/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 55, in train_step
        ('MMDataParallel only supports single GPU training, if you need to'
    AssertionError: MMDataParallel only supports single GPU training, if you need to train with multiple GPUs, please use MMDistributedDataParallelinstead.
    

    However, I didn't manually set the distributed method in my own code. It seems that mim uses train.py instead of dist_train.sh. How can I fix this?

    opened by makecent 4
  • subprocess.CalledProcessError: ...  returned non-zero exit status 1

    subprocess.CalledProcessError: ... returned non-zero exit status 1

    When I run "mim run mmcls get_flops resnet101_b16x8_cifar10.py" or "mim run mmcls publish_model", it reports the error "subprocess.CalledProcessError: ... returned non-zero exit status 1". How can I solve this?

    opened by wyhhyw123 4
  • error with mmyolo

    error with mmyolo

    Please help me: I am trying to test downloading an mmyolo config via Python.

    env:

    openmim==0.3.2
    mmyolo==0.1.1
    mmengine==0.1.0
    mmcv==2.0.0rc2
    mmdet==3.0.0rc2
    

    command:

    from mim import download
    
    filepath = download("mmyolo", ["yolov5_n-v61_syncbn_fast_8xb16-300e_coco"], dest_root=".")
    

    error:

    [.../lib/python3.8/site-packages/mmyolo/model-index.yml or .../lib/python3.8/site-packages/mmyolo/model_zoo.yml is not found, please upgrade your mmyolo to support search command
    
    opened by fcakyon 3
  • [Fix] decode URL encoded string in config path

    [Fix] decode URL encoded string in config path

    Motivation

    Some packages have config paths that are URL encoded (e.g. %2B for +): https://github.com/open-mmlab/mmdetection/blob/db85fd12afce2fe89d1c7b870874c02a90018a16/configs/gn%2Bws/metafile.yml#L17

    Currently, downloading those models triggers an "is not found" error because mim does not support URL-encoded paths.

    Modification

    While I am not sure whether this should be fixed in mim or in the package's metafile.yml, I first attempted to fix it in the mim package.

    BC-breaking (Optional)

    Use cases (Optional)

    Checklist

    1. Pre-commit or other linting tools are used to fix the potential lint issues.
    2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls.
    4. The documentation has been modified accordingly, like docstring or example tutorials.
    opened by kbumsik 0
  • [Enhance] Support CPU-only train and test in slurm cluster (#189)

    [Enhance] Support CPU-only train and test in slurm cluster (#189)

    Motivation

    Support CPU-only train and test in slurm cluster. (#189)

    Modification

    Modify slurm check conditions to "flag = partition is not None"

    Checklist

    1. Pre-commit or other linting tools are used to fix the potential lint issues.
    2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
    3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls.
    4. The documentation has been modified accordingly, like docstring or example tutorials.
    opened by GhaSiKey 2
  • Can we submit the mim command asynchronously?

    Can we submit the mim command asynchronously?

    Could mim support asynchronous submission via an --async option, or alternatively log-output redirection via --output, so that the generated logs can be redirected to another file?

    opened by GhaSiKey 0
  • How to start a task on a slurm cluster without calling the gpu

    How to start a task on a slurm cluster without calling the gpu

    I tried to use mim to start a task on a slurm cluster with --launcher slurm without using a GPU. I set --gpus-per-node 0, but it does not seem to work.

    opened by GhaSiKey 3
  • Does this mim package manager support a download-only command like pip's?

    Does this mim package manager support a download-only command like pip's?

    Hello, I would like to ask whether this mim package manager supports a download-only command like pip's: downloading the packages to be installed (including their dependencies) via a mim command rather than installing them. The use case is to download the packages first and then deploy them to a production environment on an internal network; many industrial deployments work this way, with the GPU machines on an intranet that cannot access the public internet.

    opened by dongfangduoshou123 2
  • [Feature Request] Support returning the `main` function of 'train', and 'test' scripts

    [Feature Request] Support returning the `main` function of 'train', and 'test' scripts

    Describe the feature

    Support returning the main function of the 'train' and 'test' scripts, e.g. mim.train(return_function=True)(args).

    Motivation

    Currently, mim.train and mim.test run the corresponding script as a subprocess. If, in addition, they could return the main function of these scripts, it would be convenient for third-party frameworks to use the scripts at runtime. For example, our work had to copy-and-paste train scripts to use them as functions.

    One of the possible ways to support this feature is described in https://stackoverflow.com/a/67692.

    import importlib.util
    import os.path as osp
    import sys
    
    from mim.utils import get_installed_path
    
    package = 'mmcls'  # any installed OpenMMLab package
    pkg_root = get_installed_path(package)
    train_script = osp.join(pkg_root, '.mim', 'tools', 'train.py')
    
    spec = importlib.util.spec_from_file_location("mm.train", train_script)
    foo = importlib.util.module_from_spec(spec)
    sys.modules["mm.train"] = foo
    spec.loader.exec_module(foo)
    foo.main()
    

    Although this can be done outside mim, it seems safer for it to be supported by mim.

    In addition, as in the example linked above, it would be helpful to be able to pass custom args to parse_args(args) in the scripts to customize their behaviour.

    Although it affects many other frameworks and packages, I think it would benefit other frameworks in the future! :)

    opened by nijkah 0
Releases(v0.3.4)
  • v0.3.4(Dec 27, 2022)

    Features

    • Add ignore-ssl option to disable the check certificate for mim download (https://github.com/open-mmlab/mim/pull/179)
    • Support running scripts in demo dir (https://github.com/open-mmlab/mim/pull/178)

    Bug fixes

    • Fix the path separator problem of mim run on Windows (https://github.com/open-mmlab/mim/pull/177)
    • Deprecate distutils.version for removing warning info (https://github.com/open-mmlab/mim/pull/185)

    Improvements

    • Use sys.executable for calling Python (https://github.com/open-mmlab/mim/pull/181)

    Contributors

    A total of 5 developers contributed to this release. @RangeKing @nijkah @kim3321 @ice-tong @zhouzaida

    New Contributors

    • @nijkah made their first contribution in https://github.com/open-mmlab/mim/pull/181
    • @kim3321 made their first contribution in https://github.com/open-mmlab/mim/pull/179

    Full Changelog: https://github.com/open-mmlab/mim/compare/v0.3.3...v0.3.4

    Source code(tar.gz)
    Source code(zip)
  • v0.3.3(Nov 9, 2022)

    Features

    • Support mim search mmyolo --remote (https://github.com/open-mmlab/mim/pull/172)

    Bug fixes

    • Fix the display problem and path separator on Windows (https://github.com/open-mmlab/mim/pull/166)

    Improvements

    • Refine the error message to show how to install mmengine. (https://github.com/open-mmlab/mim/pull/161)

    Infra

    • Update pre-commit config (https://github.com/open-mmlab/mim/pull/167)

    Contributors

    A total of 3 developers contributed to this release. @saratrajput @RangeKing @ice-tong

    New Contributors

    • @saratrajput made their first contribution in https://github.com/open-mmlab/mim/pull/161
    • @RangeKing made their first contribution in https://github.com/open-mmlab/mim/pull/166

    Full Changelog: https://github.com/open-mmlab/mim/compare/v0.3.2...v0.3.3

    Source code(tar.gz)
    Source code(zip)
  • v0.3.2(Sep 23, 2022)

    Features

    • Support OpenMMLab 2.0 in the train / test / gridsearch commands.
    • Create the destination directory if it does not exist when using the download command.

    Contributors

    A total of 2 developers contributed to this release. @zhouzaida @ice-tong

    Source code(tar.gz)
    Source code(zip)
  • v0.3.1(Sep 5, 2022)

  • v0.3.0(Sep 1, 2022)

  • v0.2.1(Aug 2, 2022)

    Features

    • Add MMCV_BASE_URL environment variable to support customizing mmcv find link (#148)

    Bug fixes

    • Fix the mim --version command crash in click>=8.x (#152)

    Documentations

    • Fix a wrong .mimrc example (#147)

    Infra

    • Build and upload wheel distribution when packaging in CI (#145)

    Contributors

    A total of 2 developers contributed to this release. @zhouzaida @ice-tong

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Jun 27, 2022)

    Highlights

    We are excited to introduce mim 0.2.0! This release refactors the mim install command and brings a great user-experience improvement.

    • You can use mim install in the same way you use pip install!
    • Download and install the OpenMMLab packages from PyPI instead of downloading from GitHub, for higher performance.
    • Automatically resolve and install OpenMMLab-related dependencies via the pip resolver.

    Improvements

    • Refactor mim install and uninstall for better user experience (#132, #135)
    • Fix the issue that the click version is locked at 7.x (#127)
    • Add a beautiful progress bar (#139)
    • Add more packages that support mim (#134)

    Bug fixes

    • Fix mmaction2 package name and run script (#113)

    Infra

    • Add Windows and macOS tests in CI (#106, #138)
    • Add circleCI (#131)
    • Add pyupgrade and codespell pre-commit hooks (#130)
    • Add extra dependencies to extra_require (#133)

    Contributors

    A total of 4 developers contributed to this release. @zhouzaida @yeliudev @teamwong111 @ice-tong

    Source code(tar.gz)
    Source code(zip)
  • v0.1.6(Jun 14, 2022)

    Improvements

    • Get the version of CUDA properly (#96)

    Bug fixes

    • Fix search error when models miss the results field (#104)
    • Fix mim install not working with pip >= 22.1 (#117)

    Documentations

    • Use pytorch sphinx theme (#89)
    • Refactor the structure of the documentation (#93, #94)
    • Reorganizing OpenMMLab projects in readme (#98, #99, #107)

    Infra

    • Refine CI (#90, #91, #128)
    • Update test cases (#84, #85, #95)
    • Update pre-commit hooks (#103, #105, #126)

    Contributors

    A total of 5 developers contributed to this release.

    @zhouzaida @kennymckormick @HAOCHENYE @ice-tong @Anton-Cherepkov

    Source code(tar.gz)
    Source code(zip)
  • v0.1.5(Sep 7, 2021)

  • v0.1.4(Aug 4, 2021)

  • v0.1.3(Jul 24, 2021)

  • v0.1.0(May 25, 2021)
