GUI for a Vocal Remover that uses Deep Neural Networks.

Overview

Ultimate Vocal Remover GUI v4.0.1

Release Downloads

About

Update April 9th, 2021: The v5 beta, along with 11 beta models, has been released! You can read more about it here!

This application is a GUI version of the vocal remover AI created and posted by GitHub user tsurumeso. This version also comes with a total of 11 high performance models trained by me. You can find tsurumeso's original command line version here.

  • The Developers
    • Anjok07 - Model collaborator & UVR developer.
    • aufr33 - Model collaborator & fellow UVR developer. This project wouldn't be what it is without your help, thank you for your continued support!
    • DilanBoskan - The main UVR GUI developer. Thank you for helping bring the GUI to life! Your hard work and continued support is greatly appreciated.
    • tsurumeso - The engineer who authored the original AI code. Thank you for the hard work and dedication you put into the AI code UVR is built on!

Installation

The application was made with Tkinter for cross-platform compatibility, so it should work with Windows, Mac, and Linux systems. However, this application has only been tested on Windows 10 & Linux Ubuntu.

Install Required Applications & Packages

  1. Download & install Python 3.7 here (Windows link)
    • Note: Ensure the "Add Python 3.7 to PATH" box is checked
  2. Once Python has been installed, download Ultimate Vocal Remover GUI Version 4.0.1 here
  3. Place the UVR-V4GUI folder contained within the .zip file wherever you wish.
    • Your documents folder or home directory is recommended for easy access.
  4. From the UVR-V4GUI directory, open the Windows Command Prompt and run the following installs -
pip install --no-cache-dir -r requirements.txt
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
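
Once the installs finish, you can quickly confirm that PyTorch was installed and that CUDA support is available (required later for the 'GPU Conversion' option). The short check below is only a sketch using the standard torch API; the version numbers you see will depend on the packages you installed.

# check_torch.py - sanity check after installing the requirements (illustrative sketch)
import torch

print("PyTorch version:", torch.__version__)          # e.g. 1.8.1+cu111
print("CUDA available: ", torch.cuda.is_available())  # False means only CPU conversions will work
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

Run it from the UVR-V4GUI directory with 'python check_torch.py'.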

FFmpeg

FFmpeg must be installed and configured so the application can process any track that isn't a .wav file. Instructions for installing FFmpeg can be found on YouTube, WikiHow, Reddit, GitHub, and many other sources around the web. A quick way to confirm FFmpeg is reachable is sketched after the note below.

  • Note: If you are experiencing any errors when attempting to process any media files that are not in the .wav format, please ensure FFmpeg is installed & configured correctly.
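
If you are unsure whether FFmpeg is reachable, the small check below (an illustrative sketch, not part of UVR itself) confirms that the 'ffmpeg' executable can be found on your PATH before you try converting non-.wav files.

# check_ffmpeg.py - confirm FFmpeg is installed and on PATH (illustrative sketch)
import shutil
import subprocess

ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path is None:
    print("FFmpeg was not found on PATH - non-.wav files cannot be processed.")
else:
    print("FFmpeg found at:", ffmpeg_path)
    # Print the first line of 'ffmpeg -version' to confirm it runs
    version = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True)
    print(version.stdout.splitlines()[0])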

Running the Vocal Remover GUI & Models

  • Open the file labeled 'VocalRemover.py'.
    • It's recommended that you create a desktop shortcut to the 'VocalRemover.py' file for easy access.
      • Note: If you are unable to open the 'VocalRemover.py' file, please go to the troubleshooting section below.
  • Note: All output audio files will be in the '.wav' format.

Option Guide

Choose AI Engine:

  • This option allows you to toggle between tsurumeso's v2 & v4 AI engines.
    • Note: Each engine comes with its own set of models.
    • Note: The TTA option and the ability to set the N_FFT value are limited to the v4 engine.

Model Selections:

The v2 & v4 AI engines use different sets of models. When selected, the models available for v2 or v4 will automatically populate within the model selection dropdowns.

  • Choose Main Model - Here is where you choose the main model to perform a deep vocal removal.
    • Each of the models provided was trained with different parameters, though all of them can convert tracks of any genre.
    • Each model differs in the way it processes a given track.
      • The 'Model Test Mode' option makes it easier for the user to test different models on given tracks.
  • Choose Stacked Model - These models are meant to clean up vocal artifacts from instrumental outputs.
    • The stacked models provided are only meant to process instrumental outputs created by a main model.
    • Selecting the 'Stack Passes' option will enable you to select a stacked model to run with a main model.
    • The wide range of main model/stacked model combinations gives the user more flexibility in discovering which model blend works best for the track(s) they are processing.
      • To reiterate, the 'Model Test Mode' option streamlines the process of testing different main model/stacked model combinations on a given track. More information on this option can be found in the next section.

Checkboxes

  • GPU Conversion - Selecting this option ensures the GPU is used to process conversions.
    • Note: This option will not work if you don't have a CUDA-compatible GPU.
      • Nvidia GPUs are the most compatible with CUDA.
    • Note: CPU conversions are much slower compared to those processed through the GPU.
  • Post-process - This option can potentially identify leftover instrumental artifacts within the vocal outputs. This option may improve the separation on some songs.
    • Note: Having this option selected can potentially have an adverse effect on the conversion process, depending on the track. Because of this, it's only recommended as a last resort.
  • TTA - This option performs Test-Time-Augmentation to improve the separation quality.
    • Note: Having this selected will increase the time it takes to complete a conversion.
    • Note: This option is not compatible with the v2 AI engine.
  • Output Image - Selecting this option will include the spectrograms in .jpg format for the instrumental & vocal audio outputs.
  • Stack Passes - This option activates the stacked model conversion process and allows the user to set the number of times a track runs through a stacked model.
    • Note: Unless you have the 'Save All Stacked Outputs' option selected, the following outputs will be saved -
      • Instrumental generated after the last stack pass
      • The vocal track generated by the main model
    • Note: The best range is 3-7 passes. 8 or more passes can result in degraded sound quality for the track.
  • Stack Conversion Only - Selecting this option allows the user to bypass the main model and run a track through a stacked model only.
  • Save All Stacked Outputs - Having this option selected will auto-generate a new folder named after the track being processed to your 'Save to' path. The new folder will contain all of the outputs that were generated after each stack pass. The amount of audio outputs will depend on the number of stack passes chosen.
    • Note: Each output audio file will be appended with the number of passes it has had.
      • Example: If 5 stack passes are chosen, the application will provide you with all 5 pairs of audio outputs generated after each pass, if this option is enabled.
    • This option can be very useful in determining the optimal number of passes needed to clean a track.
    • The 'stacked vocal' tracks will contain the audio of the vocal artifacts that were removed from the instrumental.
      • These files can be used to verify artifact removal.
  • Model Test Mode - This option makes it easier for users to test the results of different models, and model combinations, by eliminating the hassle of having to manually change filenames and/or create new folders when processing the same track through multiple models. This option structures the model testing process (a sketch of the resulting output layout appears after this list).
    • When 'Model Test Mode' is selected, the application will auto-generate a new folder in the 'Save to' path you have chosen.
      • The new auto-generated folder will be named after the model(s) selected.
      • The output audio files will be saved to the auto-generated directory.
      • The filenames for the instrumental & vocal outputs will have the selected model(s) name(s) appended to them.
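
As mentioned under 'Model Test Mode', the snippet below sketches the kind of folder and filename layout that option produces. It is only an illustration of the convention described above; the exact names UVR generates may differ.

# Illustration of the 'Model Test Mode' output layout (assumed names, not UVR's exact format)
import os

save_to = "C:/Users/you/Music/UVR Output"   # the chosen 'Save to' path
track = "my_song"
main_model = "MGM_MAIN_v4_sr44100_hl512_nf2048"
stacked_model = "StackedMGM_MM_v4_sr44100_hl512_nf2048"

# A new folder named after the selected model(s) is auto-generated under the 'Save to' path
model_folder = os.path.join(save_to, f"{main_model}_{stacked_model}")

# The selected model name(s) are appended to the instrumental & vocal filenames
instrumental_path = os.path.join(model_folder, f"{track}_Instrumental_{main_model}.wav")
vocal_path = os.path.join(model_folder, f"{track}_Vocals_{main_model}.wav")

print(model_folder)
print(instrumental_path)
print(vocal_path)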

Parameter Values

All models released here will have the values they were trained with appended to the end of their filenames like so, 'MGM-HIGHEND_sr44100_hl512_w512_nf2048.pth'. The '_sr44100_hl512_w512_nf2048' portion automatically sets the SR, HOP LENGTH, WINDOW SIZE, & N_FFT values within the application. If there are no values appended to the end of a selected model filename, the SR, HOP LENGTH, WINDOW SIZE, & N_FFT fields will be editable and auto-populate with default values (a small parsing sketch follows the default values below).

  • Note - The WINDOW_SIZE value is universal. The smaller your window size, the better your conversions will be. However, a smaller window size means longer conversion times and heavier resource usage.

    • Here are the recommended window size values -
      • 1024 - Low conversion quality, shortest conversion time, low resource usage
      • 512 - Average conversion quality, average conversion time, normal resource usage
      • 320 - Better conversion quality, long conversion time, high resource usage
      • 272 - Best conversion quality, longest conversion time, heavy resource usage
        • 272 is the lowest window size value possible.
  • Default Values:

    • SR - 44100
    • HOP LENGTH - 1024
    • WINDOW SIZE - 320
    • N_FFT - 2048
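
As a rough illustration of how the appended values map to the fields above, the snippet below pulls the SR, HOP LENGTH, WINDOW SIZE, & N_FFT values out of a model filename and falls back to the defaults when a value is missing. This is only a sketch of the naming convention described in this section, not UVR's internal parser.

# Parse parameter values from a model filename (illustrative sketch of the naming convention)
import re

DEFAULTS = {"sr": 44100, "hl": 1024, "w": 320, "nf": 2048}

def parse_model_params(filename):
    params = dict(DEFAULTS)
    # Suffix pieces look like _sr44100, _hl512, _w512, _nf2048
    for key, value in re.findall(r"_(sr|hl|w|nf)(\d+)", filename):
        params[key] = int(value)
    return params

print(parse_model_params("MGM-HIGHEND_sr44100_hl512_w512_nf2048.pth"))
# {'sr': 44100, 'hl': 512, 'w': 512, 'nf': 2048}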

Other Buttons:

  • Add New Model - This button will automatically open the models folder.
    • Note: If you are adding a new model, make sure to add it accordingly based on the AI engine it was trained on.
      • Example: If you wish to add a model trained on the v4 engine, add it to the correct folder located in the 'models/v4/' directory.
    • Note: The application will automatically detect any models added to the correct directories without needing a restart.
  • Restart Button - If the application hangs for any reason, you can hit the circular arrow button immediately to the right of the 'Start Conversion' button.

Models Included

All of the models included in the release were trained on large datasets containing diverse sets of music genres.

PLEASE NOTE: Do not change the name of the models provided! The required parameters are specified and appended to the end of the filenames.

Here's a list of the models included within the package -

  • v4 AI Engine

    • Main Models
      • MGM_MAIN_v4_sr44100_hl512_nf2048.pth - This is the main model that does an excellent job removing vocals from most tracks.
      • MGM_LOWEND_A_v4_sr32000_hl512_nf2048.pth - This model focuses a bit more on removing vocals from lower frequencies.
      • MGM_LOWEND_B_v4_sr33075_hl384_nf2048.pth - This is also a model that focuses on lower end frequencies, but trained with different parameters.
      • MGM_LOWEND_C_v4_sr16000_hl512_nf2048.pth - This is also a model that focuses on lower end frequencies, but trained on a very low sample rate.
      • MGM_HIGHEND_v4_sr44100_hl1024_nf2048.pth - This model focuses slightly more on higher-end frequencies.
      • MODEL_BVKARAOKE_by_aufr33_v4_sr33075_hl384_nf1536.pth - This is a beta model that removes main vocals while leaving background vocals intact.
    • Stacked Models
      • StackedMGM_MM_v4_sr44100_hl512_nf2048.pth - This is a strong vocal artifact removal model. This model was made to run with 'MGM_MAIN_v4_sr44100_hl512_nf2048.pth'. However, any combination may yield a desired result.
      • StackedMGM_MLA_v4_sr32000_hl512_nf2048.pth - This is a strong vocal artifact removal model. This model was made to run with 'MGM_MAIN_v4_sr44100_hl512_nf2048.pth'. However, any combination may yield a desired result.
      • StackedMGM_LL_v4_sr32000_hl512_nf2048.pth - This is a strong vocal artifact removal model. This model was made to run with 'MGM_LOWEND_A_v4_sr32000_hl512_nf2048.pth'. However, any combination may yield a desired result.
  • v2 AI Engine

    • Main Models
      • Multi_Genre_Model_v2_sr44100_hl1024.pth - This model yields excellent results for most tracks processed through it.
    • Stacked Models
      • StackedRegA_v2_sr44100_hl1024.pth - This is a standard vocal artifact removal model.
      • StackedArg_v2_sr44100_hl1024.pth - This model removes vocal artifacts a bit more aggressively, but may greatly degrade the audio quality of the output audio.

A special thank you to aufr33 for helping me expand the dataset used to train some of these models and for the helpful training tips.

Other GUI Notes

  • The application will automatically remember your 'save to' path upon closing and reopening until it's changed.
    • Note: The last directory accessed within the application will also be remembered.
  • Multiple conversions are supported.
  • The ability to drag & drop audio files to convert has also been added.
  • Conversion times will greatly depend on your hardware.
    • Note: This application will not be friendly to older or budget hardware. Please proceed with caution! Pay attention to your PC and make sure it doesn't overheat. We are not responsible for any hardware damage.

Troubleshooting

Common Issues

  • This application is not compatible with 32-bit versions of Python. Please make sure your version of Python is 64-bit (a quick check is shown after this list).
  • If FFmpeg is not installed, the application will throw an error if the user attempts to convert a non-WAV file.
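
A quick way to run the 64-bit check mentioned above is to ask the interpreter for its pointer size; this one-line sketch prints 64 on a 64-bit Python and 32 on a 32-bit one:

# Prints 64 for a 64-bit Python interpreter, 32 for a 32-bit one
import struct
print(struct.calcsize("P") * 8)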

Issue Reporting

Please be as detailed as possible when posting a new issue. Make sure to provide any error outputs and/or screenshots/GIFs to give us a clearer understanding of the issue you are experiencing.

If the 'VocalRemover.py' file won't open under any circumstances and all other resources have been exhausted, please do the following -

  1. Open the cmd prompt from the UVR-V4GUI directory
  2. Run the following command -
python VocalRemover.py
  3. Copy and paste the error output shown in the cmd prompt to the issues center on the GitHub repository.

License

The Ultimate Vocal Remover GUI code is MIT-licensed.

  • PLEASE NOTE: For all third-party application developers who wish to use our models, please honor the MIT license by providing credit to UVR and its developers Anjok07, aufr33, & tsurumeso.

Contributing

  • For anyone interested in the ongoing development of Ultimate Vocal Remover GUI, please send us a pull request and we will review it. This project is 100% open-source and free for anyone to use and/or modify as they wish.
  • Please note that we do not maintain or directly support any of tsurumeso's AI application code. We only maintain the development and support for the Ultimate Vocal Remover GUI and the models provided.

Comments
  • Crashing at "inverse stft of instruments and vocals"

    Using GPU or CPU, it gets to "inverse stft of instruments and vocals", hangs for about 30 seconds, then closes. This is on v2 and v4. I tried with the latest version of tsurumeso's command line vocal remover and it works fine.

    I'm using a GTX 1080ti, 32GB of system RAM, Windows 10. I tried with a clean install of both Python 3.8 and 3.7.

    The console output: 100%|██████████████████████████████████████████████████████████████████████████████████| 42/42 [00:03<00:00, 11.34it/s]

    Installed packages:

    audioread==2.1.9 cffi==1.14.3 dataclasses==0.6 decorator==4.4.2 future==0.18.2 joblib==0.17.0 librosa==0.6.3 llvmlite==0.31.0 numba==0.48.0 numpy==1.19.4 opencv-python==4.4.0.46 Pillow==8.0.1 pycparser==2.20 resampy==0.2.2 scikit-learn==0.23.2 scipy==1.5.4 six==1.15.0 SoundFile==0.10.3.post1 soundstretch==1.2 threadpoolctl==2.1.0 torch==1.7.0+cu110 torchaudio==0.7.0 torchvision==0.8.1+cu110 tqdm==4.30.0 typing-extensions==3.7.4.3

    Status: Completed Priority: Medium 
    opened by ManOrMonster 34
  • Mac exporting time

    Hey so I just got uvr 5 for Mac. I have an m1 MacBook Air, 256 gb ssd and 8 gb ram. Whenever I process a song using ensemble, it takes literally like over 2 hours to finish. Even now it's still exporting as I'm typing this. Will this be fixed in future updates?

    opened by codplay89 22
  • [INSTALLATION PROBLEM]

    To Reproduce On which installation step did you encounter the issue.

    Screenshots Add screenshots to help explain your problem.

    Desktop (please complete the following information):

    • OS: [e.g. iOS]

    Additional context Add any other context about the problem here.

    opened by Mixerrog 14
  • There is a issues with the new lib for windows

    The new update you all did two days ago: when you put the audio in the new batch file and choose the number, it works well until the end, then it gives an error. If I put the old lib back it works, but it comes up with an error about reverse.

    opened by jobason93 11
  • Error

    I use the UVR installer.

    Process Method: VR Architecture

    If this error persists, please contact the developers with the error details.

    Traceback Error:
      File "inference_v5.py", line 739, in main
      File "inference_v5.py", line 711, in inference
      File "inference_v5.py", line 683, in _execute
      File "C:\Users\Admin\AppData\Local\Programs\Ultimate Vocal Remover\lib_v5\nets_123812KB.py", line 106, in predict
        h = self.forward(x_mag, aggressiveness)
      File "C:\Users\Admin\AppData\Local\Programs\Ultimate Vocal Remover\lib_v5\nets_123812KB.py", line 75, in forward
        aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
      File "C:\Users\Admin\AppData\Local\Programs\Ultimate Vocal Remover\lib_v5\nets_123812KB.py", line 34, in __call__
        h = self.dec2(h, e2)
      File "C:\Users\Admin\AppData\Local\Programs\Ultimate Vocal Remover\lib_v5\layers_123821KB.py", line 79, in __call__
        h = self.conv(x)
      File "C:\Users\Admin\AppData\Local\Programs\Ultimate Vocal Remover\lib_v5\layers_123821KB.py", line 25, in __call__
        return self.conv(x)
      File "torch\nn\modules\module.py", line 1051, in _call_impl
      File "torch\nn\modules\container.py", line 139, in forward
      File "torch\nn\modules\module.py", line 1051, in _call_impl
      File "torch\nn\modules\conv.py", line 443, in forward
      File "torch\nn\modules\conv.py", line 439, in _conv_forward

    RuntimeError: "[enforce fail at ..\c10\core\CPUAllocator.cpp:79] data. DefaultCPUAllocator: not enough memory: you tried to allocate 13762560 bytes."

    opened by 404000 10
  • ImportError: numpy.core.multiarray failed to import

    When attempting to open the GUI I receive an error;

    C:\Other\Ultimate Vocal Remover>vocalremover.py
     ** On entry to DGEBAL parameter number  3 had an illegal value
     ** On entry to DGEHRD  parameter number  2 had an illegal value
     ** On entry to DORGHR DORGQR parameter number  2 had an illegal value
     ** On entry to DHSEQR parameter number  4 had an illegal value
    ImportError: numpy.core.multiarray failed to import
    Traceback (most recent call last):
      File "C:\Other\Ultimate Vocal Remover\VocalRemover.py", line 24, in <module>
        import inference_v2
      File "C:\Other\Ultimate Vocal Remover\inference_v2.py", line 4, in <module>
        import cv2
      File "C:\Users\Username\AppData\Local\Programs\Python\Python37\lib\site-packages\cv2\__init__.py", line 5, in <module>
        from .cv2 import *
    ImportError: numpy.core.multiarray failed to import
    

    Running Python 3.7.0 and have all the required applications and packages listed in the readme installed.

    opened by JomSpoons 10
  • VR Architecture Error

    An error is reported every time I run it on my PC :(

    GPU: RTX2060 6G RAM: 16G

    I am sure that my card was working using CUDA, I could hear the fan noises and see the GPU utilization rate was almost full. If it just had something to do with my settings, you can check my screenshot below:

    [Screenshot 2022-07-31 140506]

    And Error Log:

    ========== Last Error Received:

    Error Received while processing "4175640287.flac": Process Method: VR Architecture

    If this error persists, please contact the developers with the error details.

    Traceback Error:
      File "inference_v5.py", line 842, in main
      File "lib_v5\spec_utils.py", line 365, in cmb_spectrogram_to_wave_d
      File "lib_v5\spec_utils.py", line 263, in spectrogram_to_wave

    MemoryError: "Unable to allocate 177. MiB for an array with shape (513, 22589) and data type complex128"

    Error Time Stamp [2022-07-31 14:02:05]

    ==========

    opened by Cowared 9
  • Import Error: cannot import name 'bitpack'

    Traceback (most recent call last):
      File "UVR.py", line 38, in <module>
        import inference_MDX
      File "/home/cole/github/UVR-V5.21/inference_MDX.py", line 17, in <module>
        from demucs.pretrained import get_model as _gm
      File "/home/cole/github/UVR-V5.21/demucs/pretrained.py", line 15, in <module>
        from .hdemucs import HDemucs
      File "/home/cole/github/UVR-V5.21/demucs/hdemucs.py", line 17, in <module>
        from .demucs import DConv, rescale_module
      File "/home/cole/github/UVR-V5.21/demucs/demucs.py", line 15, in <module>
        from .states import capture_init
      File "/home/cole/github/UVR-V5.21/demucs/states.py", line 19, in <module>
        from diffq import DiffQuantizer, UniformQuantizer, restore_quantized_state
      File "/home/cole/github/UVR-V5.21/diffq/__init__.py", line 22, in <module>
        from .uniform import UniformQuantizer
      File "/home/cole/github/UVR-V5.21/diffq/uniform.py", line 13, in <module>
        from .base import BaseQuantizer
      File "/home/cole/github/UVR-V5.21/diffq/base.py", line 21, in <module>
        from . import bitpack
    ImportError: cannot import name 'bitpack' from partially initialized module 'diffq' (most likely due to a circular import) (/home/cole/github/UVR-V5.21/diffq/__init__.py)

    got this after I updated to 5.3.0. On Linux Mint 20.2

    opened by AwesomeGamer89 9
  • Extremely slow to convert

    Describe the bug: Why does it take approx. 20 minutes to convert a single flac file?

    To Reproduce Steps to reproduce the behavior:

    1. Go to '...'
    2. Click on '....'
    3. Scroll down to '....'
    4. See error

    Expected behavior A clear and concise description of what you expected to happen.

    Screenshots If applicable, add screenshots to help explain your problem.

    Desktop (please complete the following information):

    • OS: [e.g. iOS]
    • Browser [e.g. chrome, safari]
    • Version [e.g. 22]

    Smartphone (please complete the following information):

    • Device: [e.g. iPhone6]
    • OS: [e.g. iOS8.1]
    • Browser [e.g. stock browser, safari]
    • Version [e.g. 22]

    Additional context Add any other context about the problem here.

    Abandoned 
    opened by Aeiron2 9
  • macos m1 mac cannot run python script

    ❯ python uvr.py
    Traceback (most recent call last):
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/soundfile.py", line 267, in <module>
        _snd = _ffi.dlopen('sndfile')
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/cffi/api.py", line 150, in dlopen
        lib, function_cache = _make_ffi_library(self, name, flags)
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/cffi/api.py", line 832, in _make_ffi_library
        backendlib = _load_backend_lib(backend, libname, flags)
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/cffi/api.py", line 827, in _load_backend_lib
        raise OSError(msg)
    OSError: ctypes.util.find_library() did not manage to locate a library called 'sndfile'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/Users/xxx/Downloads/ultimatevocalremovergui-master/uvr.py", line 7, in <module>
        import librosa
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/librosa/__init__.py", line 209, in <module>
        from . import core
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/librosa/core/__init__.py", line 6, in <module>
        from .audio import *  # pylint: disable=wildcard-import
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/librosa/core/audio.py", line 8, in <module>
        import soundfile as sf
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/soundfile.py", line 276, in <module>
        _snd = _ffi.dlopen(_os.path.join(
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/cffi/api.py", line 150, in dlopen
        lib, function_cache = _make_ffi_library(self, name, flags)
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/cffi/api.py", line 832, in _make_ffi_library
        backendlib = _load_backend_lib(backend, libname, flags)
      File "/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/cffi/api.py", line 827, in _load_backend_lib
        raise OSError(msg)
    OSError: cannot load library '/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib': dlopen(/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib, 0x0002): tried: '/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib' (no such file), '/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib' (no such file).  Additionally, ctypes.util.find_library() did not manage to locate a library called '/Users/xxx/.virtualenvs/22-ballontranslator-3.9.13shared/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib'
    
    opened by blacklein 8
  • Unable to select the main model [Mac OS]

    I can't select the main model on the "Choose Main Model" field. It doesn't open any options that I can click. I get this error on the command line:

    Exception in Tkinter callback
    Traceback (most recent call last):
      File "/opt/anaconda3/lib/python3.7/tkinter/__init__.py", line 1705, in __call__
        return self.func(*args)
      File "VocalRemover.py", line 620, in open_newModel_filedialog
        os.startfile(models)
    AttributeError: module 'os' has no attribute 'startfile'

    Priority: Medium Type: Bug 
    opened by Diego92Ita 8
  • This error occurred as I was using ensemble mode with a wav file I am sorry I am not a developer and not really any use in providing more info

    What I can say is that I was using a Mac Pro 5,1 with 40 GB RAM, a dual Xeon X5670, and a Sapphire RX 580 Nitro+.

    Last Error Received:

    Process: Ensemble Mode

    If this error persists, please contact the developers with the error details.

    Raw Error Details:

    ParameterError: "Audio buffer is not finite everywhere"

    Traceback Error:
      File "UVR.py", line 4578, in process_start
      File "separate.py", line 634, in seperate
      File "separate.py", line 830, in spec_to_wav
      File "lib_v5/spec_utils.py", line 352, in cmb_spectrogram_to_wave
      File "librosa/util/decorators.py", line 104, in inner_f
      File "librosa/core/audio.py", line 606, in resample
      File "librosa/util/decorators.py", line 88, in inner_f
      File "librosa/util/utils.py", line 294, in valid_audio

    Error Time Stamp [2023-01-07 22:30:33]

    Full Application Settings:

    vr_model: Choose Model aggression_setting: 10 window_size: 320 batch_size: 4 crop_size: 256 is_tta: True is_output_image: False is_post_process: False is_high_end_process: True post_process_threshold: 0.2 vr_voc_inst_secondary_model: No Model Selected vr_other_secondary_model: No Model Selected vr_bass_secondary_model: No Model Selected vr_drums_secondary_model: No Model Selected vr_is_secondary_model_activate: False vr_voc_inst_secondary_model_scale: 0.9 vr_other_secondary_model_scale: 0.7 vr_bass_secondary_model_scale: 0.5 vr_drums_secondary_model_scale: 0.5 demucs_model: v4 | htdemucs_6s segment: Default overlap: 0.25 shifts: 2 chunks_demucs: Auto margin_demucs: 44100 is_chunk_demucs: False is_primary_stem_only_Demucs: True is_secondary_stem_only_Demucs: False is_split_mode: True is_demucs_combine_stems: False demucs_voc_inst_secondary_model: No Model Selected demucs_other_secondary_model: No Model Selected demucs_bass_secondary_model: No Model Selected demucs_drums_secondary_model: No Model Selected demucs_is_secondary_model_activate: False demucs_voc_inst_secondary_model_scale: 0.9 demucs_other_secondary_model_scale: 0.7 demucs_bass_secondary_model_scale: 0.5 demucs_drums_secondary_model_scale: 0.5 demucs_pre_proc_model: No Model Selected is_demucs_pre_proc_model_activate: False is_demucs_pre_proc_model_inst_mix: False mdx_net_model: Choose Model chunks: Auto margin: 44100 compensate: Auto is_denoise: True is_invert_spec: True mdx_voc_inst_secondary_model: No Model Selected mdx_other_secondary_model: No Model Selected mdx_bass_secondary_model: No Model Selected mdx_drums_secondary_model: No Model Selected mdx_is_secondary_model_activate: False mdx_voc_inst_secondary_model_scale: 0.9 mdx_other_secondary_model_scale: 0.7 mdx_bass_secondary_model_scale: 0.5 mdx_drums_secondary_model_scale: 0.5 is_save_all_outputs_ensemble: True is_append_ensemble_name: True chosen_audio_tool: Manual Ensemble choose_algorithm: Min Spec time_stretch_rate: 2.0 pitch_rate: 2.0 is_gpu_conversion: True is_primary_stem_only: False is_secondary_stem_only: False is_testing_audio: True is_add_model_name: True is_accept_any_input: True is_task_complete: False is_normalization: False is_create_model_folder: True mp3_bit_set: 320k save_format: WAV wav_type_set: 32-bit Float help_hints_var: True model_sample_mode: False model_sample_mode_duration: 30 demucs_stems: Piano

    opened by latextor 1
  • Add a readme file(in model_repo) to help manual download

    In UVR 5.5, the models folder has three sub-folders: Demucs_Models, MDX_Net_Models, VR_Models, but no guidance is provided to help users put manually downloaded models into the right folder. So I think a simple guide should be added to https://github.com/TRvlvr/model_repo, in order to help manual downloads and avoid unnecessary mess.

    Just for example, the guidance should say: put {1-16}_xx_yy_zz.pth in the sub-folder VR_Models

    opened by unbadfish 4
  • OSX standalone - upgrading removes downloaded presets and models

    As in title. OSX standalone - upgrading removes presets, settings, and downloaded models (because it overwrites the whole .app file where the data is stored). It's not a big issue now, but it could become one depending on the number of future upgrades. If possible, please use a subfolder of the Application Support folder for it.

    Also, if only for backup purposes: where are the settings stored?

    opened by Mr-Negative 3
  • MAC USERS PLEASE READ BEFORE SUBMITTING ISSUE

    A few important notes:

    • If you downloaded the latest patch and found that it wasn't compatible or could not open on your Mac, please download the x86_64 version of the application.

    • You may come across an ominous error when trying to run some mp3s. I've traced this to an issue with how specific Mac libraries read tag data that might be partially corrupt. I'm working on a workaround for this. But in the meantime, please convert the troublesome file to another format and try again.

      • Here are the likely errors you will see if this is the case:
        • ValueError: range() arg 3 must not be zero
        • ValueError: zero-size array to reduction operation maximum which has no identity
    opened by Anjok07 6
Releases(v5.5.0)
  • v5.5.0(Dec 19, 2022)

    General Release Information

    • UVR Version 5.5.0 includes the following:
      • Brand new MDX-Net models available via the Download Center.
      • Full Demucs v1, v2, v3, & v4 compatibility.
      • Additional models and application patches can be downloaded via the "Settings" menu within the application.
      • Much more! Please see the change log here for more information.

    Installations

    These bundles contain the UVR interface, Python, PyTorch, and other dependencies needed to run the application effectively. No prerequisites are required.

    Windows Installation

    • Please Note:

      • This installer is intended for those running Windows 10 or higher.
      • Application functionality for systems running Windows 7 or lower is not guaranteed.
      • Application functionality for Intel Pentium & Celeron CPUs systems is not guaranteed.
      • You must install UVR to the main C:\ drive. Installing UVR to a secondary drive will cause instability.
    • Download the UVR installer for Windows via the link below:

    • Update Package instructions for those who have UVR already installed:

      • If you already have UVR installed you can install this package over it or download it straight from the application.

    MacOS Installation

    • Please Note:

      • This bundle is intended for those running macOS Catalina and above.
      • Application functionality for systems running macOS Mojave or lower is not guaranteed.
      • Application functionality for older or budget Mac systems is not guaranteed.
      • Once everything is installed, the application may take up to 5-10 minutes to start for the first time (depending on your Macbook).
    • Download the UVR dmg for MacOS via one of the links below:

    MacOS Users: Having Trouble Opening UVR?

    Due to Apple's strict application security, you may need to follow these steps to open UVR.

    First, run the following command via Terminal.app to allow applications from all sources to run (it's recommended that you re-enable this setting once UVR opens properly):

    sudo spctl --master-disable
    

    Second, run the following command to bypass Notarization:

    sudo xattr -rd com.apple.quarantine /Applications/Ultimate\ Vocal\ Remover.app
    
    Source code(tar.gz)
    Source code(zip)
    Ultimate_Vocal_Remover_v5_5_MacOS_arm64.dmg(434.40 MB)
    Ultimate_Vocal_Remover_v5_5_MacOS_x86_64.dmg(482.00 MB)
    UVR_v5.5.0_setup.exe(1598.67 MB)
    UVR_v5.5.0_setup_12_18_22_6_41.exe(1598.98 MB)
    UVR_v5.5.0_setup_12_22_22_23_44.exe(1598.89 MB)
  • v5.4.0(Jul 23, 2022)

    General Release Information

    • UVR Version 5.4.0 includes the following:
      • A powerful brand new MDX-Net model (included in the new package)
      • Full Demucs v1 & v2 backward compatibility
      • Ability to download additional models and application patches straight from the application
      • New ensembling options
      • Various bug fixes

    Windows Installation

    This installation bundle contains the UVR interface, Python, PyTorch, and other dependencies needed to run the application effectively. No prerequisites are required.

    • Please Note:

      • This installer is intended for those running Windows 10 or higher.
      • Application functionality for systems running Windows 7 or lower is not guaranteed.
      • Application functionality for Intel Pentium & Celeron CPUs systems is not guaranteed.
    • Download the UVR installer via the link below:

    • Update Package instructions for those who have UVR already installed:

    • Optional

      • Additional models and application patches can be downloaded via the "Settings" menu within the application.
    Source code(tar.gz)
    Source code(zip)
    UVR_v5.4_Update_Package.exe(93.31 MB)
  • v5.3.0(May 11, 2022)

    General Release Information

    Windows Installation

    This installation bundle contains the UVR interface, Python, PyTorch, and other dependencies needed to run the application effectively. No prerequisites are required.

    • Please Note:

      • This installer is intended for those running Windows 10 or higher.
      • Application functionality for systems running Windows 7 or lower is not guaranteed.
      • Application functionality for Intel Pentium & Celeron CPUs systems is not guaranteed.
    • Download the UVR installer via the link below:

    • Included below:

      • v5_model_expansion_pack.zip: This archive contains 11 additional v5 models and 4 v4 models.
      • Source code.zip: This archive contains the GUI, excluding any models.
      • models.zip: This archive contains all of the base v5 models.
      • UVR_v5.3_automatic_update_package_setup.exe: This will automatically install the update if you currently have UVR v5 already installed on your system.
      • UVR_v5.3_update_patch.zip: This archive contains update files to do a manual upgrade.
        • The following has been added:
          • Demucs v3 has been fully implemented into the GUI. This includes 2 new UVR-trained Demucs v3 models.
          • Additional advanced options.
        • Update installation instructions:
        • Download the automatic installer here
          • If you run into any issues installing the automatic update installer, follow the directions below to install the upgrade manually.
            1. Download the UVR_v5.3_update_patch file here
            2. Navigate to the application directory
            3. Close UVR if you have it open.
            4. Extract all of the contents within the UVR_Patch_v5.3.zip archive exactly as they are to the application directory and overwrite any existing files.
            5. Open the application to ensure workability.
    Source code(tar.gz)
    Source code(zip)
    models.zip(1654.05 MB)
    UVR_v5.3_automatic_update_package_setup.exe(989.14 MB)
    UVR_v5.3_update_patch.zip(1018.83 MB)
    v5_model_expansion_pack.zip(1885.81 MB)
  • v5.1.0(Apr 7, 2022)

    General Release Information

    • Main Downloads:
      • models.zip: This archive contains all 8 compatible v5 models.
      • Source code.zip: This archive contains the GUI, excluding any models.

    Changelog

    v.5.1.0 (04.14.22):

    • Added 'Ensemble Mode'
    • Updated requirements.txt
    • Fixed progress bar
    • Fixed start conversion button
    Source code(tar.gz)
    Source code(zip)
    models.zip(2024.63 MB)
  • 5.0.2(Jul 5, 2021)

    General Release Information

    • This pack includes an additional 5 high-performance models referenced below:
      • HP_4BAND_3090 - This model was trained with a bigger dataset and optimized training parameters. The model weighs 121 MB due to its increased capacity.
      • HP2-4BAND-3090_4band_1 - This model was trained with a bigger dataset and optimized training parameters. The model weighs 524 MB due to its increased capacity. Conversions will take longer with models of this size.
      • HP2-4BAND-3090_4band_2 - A lightly fine-tuned version of "HP2-4BAND-3090_4band_1". The model weighs 524 MB due to its increased capacity. Conversions will take longer with models of this size.
      • Vocal_HP_4BAND_3090 - This is a new HP vocal model. The model weighs 121 MB due to its increased capacity.
      • Vocal_HP_4BAND_3090_AGG - This is a more aggressive version of the "Vocal_HP_4BAND_3090" vocal model. The model weighs 121 MB due to its increased capacity.
    • Instructions for running this version can be found here.
    • These models are not compatible with the v4 GUI!
    • This version is only available via command-line at this time. The v5 GUI is still in development.
      • I included a Windows batch file specifically for the model included in this package for ease of use.
    • For more information on each model, please see the model section here.
    Source code(tar.gz)
    Source code(zip)
    v5_July_2021_5_Models.zip(1312.31 MB)
  • v4.0.1(Nov 13, 2020)

    General Release Information

    • Complete GUI Overhaul
    • 8 Brand New Models
    • 11 Models Total:
      • v2: 1 main model/2 stacked models
      • v4: 5 main models/3 stacked models
    • Main Downloads:
      • models.zip: This archive contains all 12 models only.
      • UVR_V4GUI_All_IN_ONE_12_06.zip: This archive contains the full GUI code, along with all 12 models.
      • Source code.zip: This archive contains the GUI, excluding any models.

    Changelog

    v.4.0.1 (04.12):

    • Fixed VRAM not clearing after audio split #34
    • Saving last used music files
    • Saving last used models #36
    • Added icon to window

    v.4.0.0 (23.11):

    • Initial Release

    Model MD5 Hashes:

    • MODEL_BVKARAOKE_by_aufr33_v4_sr33075_hl384_nf1536.pth - 28063E9F6AB5B341C5F6D3C67F2045B7
    • MGM_MAIN_v4_sr44100_hl512_nf2048.pth - B58090534C52CBC3E9B5104BAD666EF2
    • MGM_LOWEND_B_v4_sr33075_hl384_nf2048.pth - EDC115E7FC523245062200C00CAA847F
    • MGM_LOWEND_A_v4_sr32000_hl512_nf2048.pth - 0AB504864D20F1BD378FE9C81EF37140
    • MGM_LOWEND_C_v4_sr16000_hl512_nf2048.pth - 6A00461C51C2920FD68937D4609ED6C8
    • MGM_HIGHEND_v4_sr44100_hl1024_nf2048.pth - AE702FED0238AFB5346DB8356FE25F13
    • StackedMGM_MM_v4_sr44100_hl512_nf2048.pth - 0CDAB9947F1B0928705F518F3C78EA8F
    • StackedMGM_MLA_v4_sr32000_hl512_nf2048.pth - 80AB74D65E515CAA3622728D2DE07D23
    • StackedMGM_LL_v4_sr32000_hl512_nf2048.pth - 7DD21065BF91C10F7FCCB57D7D83B07F
    • Multi_Genre_Model_v2_sr44100_hl1024.pth - 2EEAC892AC2579161C43D152EF111E69
    • StackedRegA_v2_sr44100_hl1024.pth - 04C085C0D3E4DB77021BB838E6F81736
    • StackedArg_v2_sr44100_hl1024.pth - 046884A393EA185C1E1EC03EFCC58130
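
    If you want to confirm that a downloaded model matches the hashes above, a minimal check with Python's standard hashlib module looks like the sketch below (the model path is an assumption; point it at wherever the file actually lives):

    # Compute the MD5 hash of a model file and compare it against the list above (sketch)
    import hashlib

    def md5_of(path, chunk_size=1024 * 1024):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest().upper()

    # Hypothetical location - adjust to your models directory
    print(md5_of("models/v4/MGM_MAIN_v4_sr44100_hl512_nf2048.pth"))
    # Should print B58090534C52CBC3E9B5104BAD666EF2 for an intact download
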
    Source code(tar.gz)
    Source code(zip)
    models.zip(369.68 MB)
    UVR_V4GUI_All_IN_ONE_11_23.zip(369.98 MB)
    UVR_V4GUI_All_IN_ONE_12_10.zip(398.20 MB)
  • v2.2.0-GUI(Jul 20, 2020)
