Overview


VoiceFixer

VoiceFixer aims at restoring human speech regardless of how seriously it is degraded. It can handle noise, reverberation, low resolution (2 kHz to 44.1 kHz), and clipping (0.1 to 1.0 threshold) effects within one model.


Demo

Please visit the demo page to hear what VoiceFixer can do.

Usage

from voicefixer import VoiceFixer
voicefixer = VoiceFixer()
voicefixer.restore(input="",    # input wav file path
                   output="",   # output wav file path
                   cuda=False,  # whether to use GPU acceleration
                   mode=0)      # try out modes 0 and 1 to find the best result

from voicefixer import Vocoder
# Universal Speaker Independent Vocoder
vocoder = Vocoder(sample_rate=44100) # only a 44100 Hz sample rate is supported
vocoder.oracle(fpath="", # input wav file path
               out_path="") # output wav file path
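Since the best-sounding mode depends on the recording, it can help to render every mode and compare by ear. A minimal sketch (the `mode_output_path` helper is hypothetical, not part of the voicefixer API; mode 2 is assumed to be accepted as reported in the issues below):

```python
import os

def mode_output_path(input_path, mode):
    # Hypothetical helper: derive one output path per mode,
    # e.g. "speech.wav" -> "speech.mode1.wav".
    root, ext = os.path.splitext(input_path)
    return f"{root}.mode{mode}{ext}"

if __name__ == "__main__":
    from voicefixer import VoiceFixer  # requires `pip install voicefixer`
    vf = VoiceFixer()
    for mode in (0, 1, 2):  # listen to all three outputs and keep the best
        vf.restore(input="speech.wav",
                   output=mode_output_path("speech.wav", mode),
                   cuda=False,
                   mode=mode)
```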


Comments
  • Issue with defining Module

    I'm trying to make a Google Colab notebook with the code from this repo, but it returned this error: NameError: name 'VoiceFixer' is not defined. I even defined VoiceFixer myself using one of the definitions from line 9 of base.py, then changed the definition to the one from line 93 of model.py, and still got the same error. Do you know any fixes? If yes, please reply.

    opened by YTR76 9
  • Inconsistency in the generator architecture

    Thanks for releasing the code publicly. I have a small confusion about the implementation of the generator mentioned here. As per Fig. 3(a) in the paper, a mask is predicted from the input noisy audio and multiplied with the input to get the clean audio, but in the implementation it seems that after the masking operation the result is further passed through a UNet, and the loss is calculated for both outputs. Can you please clarify the inconsistency? Thanks in advance.

    opened by krantiparida 5
  • Add command line script

    This update adds a script for processing files directly from the command line. You can test locally by switching to the command-line branch, navigating to the repo folder, and running pip3 install -e . You should then be able to run the command voicefixer from any directory.
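A minimal sketch of what such a command-line wrapper could look like (the flag names here are hypothetical and may differ from the merged script):

```python
import argparse

def build_parser():
    # Hypothetical interface sketch; the actual merged script may differ.
    p = argparse.ArgumentParser(prog="voicefixer",
                                description="Restore a degraded speech recording.")
    p.add_argument("infile", help="input wav file path")
    p.add_argument("outfile", help="output wav file path")
    p.add_argument("--mode", type=int, default=0, choices=[0, 1, 2],
                   help="restoration mode to try")
    p.add_argument("--cuda", action="store_true",
                   help="use GPU acceleration")
    return p

def main(argv=None):
    args = build_parser().parse_args(argv)
    # Deferred import so `voicefixer --help` works even before the
    # package and its model weights are installed.
    from voicefixer import VoiceFixer
    VoiceFixer().restore(input=args.infile, output=args.outfile,
                         cuda=args.cuda, mode=args.mode)

if __name__ == "__main__":
    main()
```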

    opened by chrisbaume 4
  • Possibility of running on Windows?

    Hello, I stumbled on this repo and found it really interesting. The demos in particular impressed me. I have some old/bad quality speech recordings I'd like to try and enhance, but I'm having trouble running any of the code.

    I am running Windows 10 home, Python 3.9.12 at the moment. No GPU present right now, so that may be a problem? I understand that the code is not well tested on Windows yet. Nevertheless, I am completely ignorant when it comes to getting these sorts of things to run; without clear steps to follow, I am lost.

    If there are legitimate issues running on Windows, I'd like to do my part in making them known, but I'm taking a shot in the dark here. I still hope I can be helpful though!

    I assume that the intended workflow for testing is to read an audio file (e.g. wav, aiff, raw PCM data) and process it, creating a new output file? Please correct me if I'm wrong.

    I followed the instructions in readme.md to try the Streamlit app. Specifically, I ran these commands:

    pip install voicefixer==0.0.17
    git clone https://github.com/haoheliu/voicefixer.git
    cd voicefixer
    pip install streamlit
    streamlit run test/streamlit.py

    At this point a Windows firewall dialog comes up and I click allow. Throughout this process, no errors seem to show up, but the models do not appear to download (no terminal updates, and I let it sit for about a day with no changes). The Streamlit page remains blank. The last thing I see in the terminal is:

    You can now view your Streamlit app in your browser.
    Local URL: http://localhost:8501
    Network URL: http://10.0.0.37:8501

    That local URL is the one shown in the address bar.

    So yeah I'm quite lost. What do you advise? Thanks in advance!

    opened by musicalman 4
  • How to test the model for a single task?

    I ran test/reference.py to test my distorted speech, and the result was GSR. How can I test the model for a single task, such as audio super-resolution only? In addition, what is the latency of voicefixer?

    opened by litong123 4
  • Add streamlit inference demo page

    Hi!

    I'm very impressed with your research result, and also I want to test my samples as easily as possible.

    So, I made a simple web-based demo using streamlit.

    opened by AppleHolic 3
  • some questions

    Hi, thanks for your great work.
    After reading your paper, I have a question here.

    1. Why use a two-stage algorithm? Is it to facilitate more types of speech restoration?
    2. Since there is no information about the speed of the model in the paper, what are the training and inference speeds of the model?
    opened by LqNoob 2
  • Can the pretrained model support waveforms where the target sound is far-field?

    I tried to use the test script to restore my audio, but I obtained worse performance. I suspect the model only supports target sounds from the near field.

    opened by NewEricWang 2
  • Where to find the model (*.pth) to test the effect with my own input wav?

    Hi, I just want to test the powerful effect of VoiceFixer on my own distorted wav, so I followed your instructions under Python Examples, but running python3 test/test.py failed. The error information is as follows:

    Initializing VoiceFixer...
    Traceback (most recent call last):
      File "test/test.py", line 39, in <module>
        voicefixer = VoiceFixer()
      File "/root/anaconda3.8/lib/python3.8/site-packages/voicefixer/base.py", line 12, in __init__
        self._model = voicefixer_fe(channels=2, sample_rate=44100)
      File "/root/anaconda3.8/lib/python3.8/site-packages/voicefixer/restorer/model.py", line 140, in __init__
        self.vocoder = Vocoder(sample_rate=44100)
      File "/root/anaconda3.8/lib/python3.8/site-packages/voicefixer/vocoder/base.py", line 14, in __init__
        self._load_pretrain(Config.ckpt)
      File "/root/anaconda3.8/lib/python3.8/site-packages/voicefixer/vocoder/base.py", line 19, in _load_pretrain
        checkpoint = load_checkpoint(pth, torch.device("cpu"))
      File "/root/anaconda3.8/lib/python3.8/site-packages/voicefixer/vocoder/model/util.py", line 92, in load_checkpoint
        checkpoint = torch.load(checkpoint_path, map_location=device)
      File "/root/anaconda3.8/lib/python3.8/site-packages/torch/serialization.py", line 600, in load
        with _open_zipfile_reader(opened_file) as opened_zipfile:
      File "/root/anaconda3.8/lib/python3.8/site-packages/torch/serialization.py", line 242, in __init__
        super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
    RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

    It seems that the pretrained model file cannot be found or its download is corrupted. I manually searched for the *.pth files but did not find them, so I am seeking your help. Thank you!

    opened by yihe1003 2
  • Unable to test, error in state_dict

    Hello,

    I am trying to test the code on a wav file. But I receive the following message:

    RuntimeError: Error(s) in loading state_dict for VoiceFixer: Missing key(s) in state_dict: "f_helper.istft.ola_window". Unexpected key(s) in state_dict: "f_helper.istft.reverse.weight", "f_helper.istft.overlap_add.weight".

    This seems to be caused by the following line in the code: self._model = self._model.load_from_checkpoint(os.path.join(os.path.expanduser('~'), ".cache/voicefixer/analysis_module/checkpoints/epoch=15_trimed_bn.ckpt"))

    Do you have an idea on how to resolve this issue?
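One common workaround for this class of mismatch (a sketch, not a confirmed fix for this particular checkpoint) is to keep only the keys the current model declares and then load with strict=False; whether the result behaves correctly depends on the dropped parameters actually being redundant. Illustrated with plain dicts so it stands alone:

```python
def filter_state_dict(state_dict, expected_keys):
    # Drop entries the current model does not declare (e.g. the stale
    # "f_helper.istft.reverse.weight" key); missing keys can then be
    # tolerated by loading with strict=False.
    expected = set(expected_keys)
    return {k: v for k, v in state_dict.items() if k in expected}

# With torch, the checkpoint would then be loaded roughly like:
#   sd = torch.load(ckpt_path, map_location="cpu")
#   model.load_state_dict(filter_state_dict(sd, model.state_dict().keys()),
#                         strict=False)
```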

    opened by yalharbi 2
  • Some problems and questions.

    Hello! I installed your neural network and ran it in Desktop App mode, but I don't see the "Turn on GPU" switch here. This is the first question. Second question: How do I use the models from the demo page? GSR_UNet, VF_Unet, Oracle?

    Thanks in advance for the answer!

    opened by Aspector1 1
  • Artifacts on 's' sounds

    Hello! Awesome project, and I totally understand that this isn't your main focus anymore, but I just love the results this gives over almost everything else I've tried for speech restoration.

    However, some 's' sounds are occasionally being dropped, and I was wondering whether you knew of a way to avoid that?

    UnvoiceFixed Voicefixed

    Any ideas would be great, thanks!

    opened by JakeCasey 0
  • Lots of noise is added to the unspoken parts and overall quality is not better - files provided

    My audio is from my lecture video: https://www.youtube.com/watch?v=2zY1dQDGl3o

    I want to improve the overall quality to make it easier to understand.

    Here is my raw audio: https://drive.google.com/file/d/1gGxH1J3Z_I8NNjqBvbrVB5MA0gh4qCD7/view?usp=share_link

    mode 0 output: https://drive.google.com/file/d/1MRFQecxx9Ikevnsyk9Ivx6Ofr_dqdwFi/view?usp=share_link

    mode 1 output: https://drive.google.com/file/d/1sva-o7Py6beEIWbcA4f0LS1-ikGmvlUC/view?usp=share_link

    mode 2 output: https://drive.google.com/file/d/1sva-o7Py6beEIWbcA4f0LS1-ikGmvlUC/view?usp=share_link

    For example, open 1:00:40 and you will hear noise.

    Also, the improvement is not very good in the parts of the video where I am not talking much.

    Check the later parts of the sound files and you will hear that it is actually worse in mode 1 and mode 2. For example, check 1:02:40 in mode 1 for noise and bad sound quality, and 1:32:55 in mode 2 for bad quality and noise glitches.

    Maybe you can test and experiment with my speech to improve the model even further.

    Thank you very much, keep up the good work!

    opened by FurkanGozukara 2
  • VoiceFixer 8000 Hz to 16000 Hz: how to upsample a wav to 16000 Hz using VoiceFixer

    Every time I try to upsample a telephonic wav (8 kHz) to 44 kHz, it removes clarity from the audio. How can I give a custom upsample rate? Also, there are two people's voices in my wav file, but when I upsample with VoiceFixer, the output wav has only one person's voice.
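If only a plain 8 kHz to 16 kHz upsample is needed, without restoration (the pretrained vocoder only outputs 44.1 kHz), an ordinary resampler outside VoiceFixer is enough. A minimal linear-interpolation sketch, for illustration only; a windowed-sinc resampler (e.g. soxr or librosa) will sound better:

```python
def resample_linear(samples, sr_in, sr_out):
    # Naive linear-interpolation resampler over a 1-D sequence of samples.
    if sr_in == sr_out:
        return list(samples)
    n_out = int(len(samples) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        pos = i * sr_in / sr_out          # fractional position in the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```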

    opened by PHVK1611 1
  • How to use my own trained model from voicefixer_main?

    Hello.

    I am having an issue running your code for inference with a model trained from voicefixer_main, rather than the pretrained model. Is it possible to use such a trained model with test.py?

    I tried to replace vf.ckpt with my trained model at the original directory, but it did not work; it produced the following error:

    (error screenshot attached)

    It seems the pretrained voicefixer model and the model trained from voicefixer_main differ in size: the pretrained model is about 489.3 MB, while the one from voicefixer_main is about 1.3 GB.

    opened by utahboy3502 1
  • Padding error with certain input lengths

    Hello everyone, first of all nice work on the library! Very cool stuff and good out-of-the-box results.

    I've run into a bug though (or at least it looks a lot like one). Certain input lengths trigger padding errors, probably due to how the split-and-concat strategy for larger inputs works in restore_inmem:

    import voicefixer
    import numpy as np
    
    model = voicefixer.VoiceFixer()
    model.restore_inmem(np.random.random(44100*30 + 1))
    
    >>>
    RuntimeError: Argument #4: Padding size should be less than the corresponding input dimension, but got: padding (1024, 1024) at dimension 2 of input [1, 1, 2]
    

    I have a rough idea on how to patch it, so let me know if you'd like a PR.
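One caller-side workaround (a sketch only, under the assumption suggested by the repro above that restore_inmem splits the input into fixed 30-second chunks at 44.1 kHz) is to zero-pad the signal so its length divides evenly:

```python
def pad_to_chunk_multiple(samples, chunk=44100 * 30):
    # Zero-pad a 1-D sequence of samples so its length is an exact
    # multiple of the assumed chunk size, avoiding a tiny trailing
    # segment that is shorter than the model's padding window.
    remainder = len(samples) % chunk
    if remainder == 0:
        return list(samples)
    return list(samples) + [0.0] * (chunk - remainder)
```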

    Thanks,

    opened by amiasato 1
Releases (v0.0.12)