GVT is a generic translation tool for parts of text on the PC screen with Text to Speech functionality.

Overview

🎮 🎧 🚀 Generic Visual Translator 🚀 🎧 🎮

GVT is a generic translation tool for parts of text on the PC screen with Text to Speech functionality. I wanted to create it because the existing tools I experimented with did not satisfy me in terms of ease of use and configuration. Personally, I used it with Lost Ark (the included example was generated on a 2K monitor) to translate simple quest dialogues into Italian.


📝 Requirements

Tested Operating Systems: Windows 10/11
Python Version: 3.9.6

  • EasyNMT
  • OpenCV
  • EasyOCR
  • NumPy
  • deepl (unofficial API)
  • pyttsx3
  • pywin32
  • wxPython
  • pygame
  • keyboard

The requirements.txt file was created with the versions currently installed on my PC, but GVT may also work with newer or older versions of the same libraries.

Install the requirements with `pip install -r requirements.txt`.

💪 How it works

GVT translates a user-defined region of the screen and then reads it aloud using Windows 10/11 TTS (not tested on Windows 7), showing the translated text in place of the original text on screen.
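
The core flow is capture → OCR → translate → speak. The sketch below illustrates that loop under some assumptions: the coordinates are the example values from later in this README, the translation step is stubbed out (GVT chooses the unofficial DeepL API or Opus-MT at runtime), and this is not GVT's actual code.

```python
# Hedged sketch of the capture -> OCR -> (translate) -> speak loop.
# The bbox values and the 'translate' stub are illustrative, not GVT's real code.
import time
import numpy as np
import easyocr
import pyttsx3
from PIL import ImageGrab  # used here for screen capture

reader = easyocr.Reader(["en"])   # OCR model for the source language
engine = pyttsx3.init()           # Windows SAPI5 text-to-speech

def translate(text: str) -> str:
    # Placeholder: GVT would call DeepL (unofficial API) or an Opus-MT model here.
    return text

bbox = (567, 1304, 2068, 1439)    # left, top, right, bottom (example coordinates)

while True:
    frame = np.array(ImageGrab.grab(bbox=bbox))   # grab the screen region
    lines = reader.readtext(frame, detail=0)      # extract raw text lines
    text = " ".join(lines).strip()
    if text:
        translated = translate(text)
        engine.say(translated)                    # read the translated text aloud
        engine.runAndWait()
    time.sleep(1)                                 # time_between_captures
```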

Before using it, you need to configure the config.yaml file in the same folder.

Then you can run GVT using run.bat or with the command python main.py.

👀 File description: config.yaml

| Variable Name | Type of variable | Description | Recommended |
| --- | --- | --- | --- |
| game_name | string between " | Application name. | |
| source_language | acronym of the application language (e.g. en, de, ch, jp) | Language of the application. | |
| target_language | acronym of the chosen language (e.g. en, de, ch, jp) | Language in which to translate. | |
| translation_method | deepl \| opus | Translation engine. deepl uses the unofficial DeepL API. | deepl |
| translation_internal_method | offline \| online | Used only when the internal (opus) engine is selected in translation_method. offline: uses the model downloaded in the models\opus-mt folder (you can download the entire model from https://huggingface.co/Helsinki-NLP). online: downloads the model you need automatically. | |
| gpu_enabled | True \| False | With True and a supported GPU, text recognition is much faster. | True |
| time_between_captures | integer | Time that passes before GVT checks for a new element on the screen. | 1 |
| skip_key | string between " \| "None" | If the text can be advanced with a key once read, GVT can advance it automatically by telling it which key to press. If set to None it does nothing. | |
| show_text | True \| False | If set to True, an overlay containing the translated text is shown over the application text. | |
| time_to_wait_for_word | float | If tts_enabled is set to False and show_text is set to True, GVT uses this parameter to figure out how long to show the overlay text. If tts_enabled is set to True, this parameter is ignored and the overlay lasts as long as it takes to play the audio of the text. | 0.3 |
| tts_enabled | True \| False | If enabled, GVT uses Windows Text to Speech to read the translated phrase. | |
| tts_voice_number | integer | Use voice_list.py to list all the voices on your system and see which number corresponds to the one you want to choose. | |
| main_region | | Coordinates of the screen region where the text to be translated will appear. Use GetCoords.py to make your job easier. | |
| main_region > X | integer | Starting point of the region on the X axis. | |
| main_region > Y | integer | Starting point of the region on the Y axis. | |
| main_region > extensionOfX | integer | Number of pixels required to reach the end point of the frame on the X axis. | |
| main_region > extensionOfY | integer | Number of pixels required to reach the end point of the frame on the Y axis. | |
| activator_region | | Coordinates of the region where GVT looks for the activation image that signals text to translate. Once found, GVT proceeds with the translation; once it disappears, GVT returns to the idle state (see the sketch after this table). | |
| activator_region > name | string \| "None" | Name of the image that you cut from a screenshot of your screen and that identifies the appearance of text to be translated in the application. It needs to be placed in the activators folder. | |
| activator_region > X | integer | Starting point of the region on the X axis. | |
| activator_region > Y | integer | Starting point of the region on the Y axis. | |
| activator_region > extensionOfX | integer | Number of pixels required to reach the end point of the frame on the X axis. | |
| activator_region > extensionOfY | integer | Number of pixels required to reach the end point of the frame on the Y axis. | |
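
The activator mechanism is essentially template matching: GVT waits until the image named in activator_region appears inside that region, then captures and translates main_region. The sketch below illustrates the idea with OpenCV; the file name, threshold, and coordinates come from the example later in this README and are assumptions about how the check might look, not GVT's internal code.

```python
# Hedged sketch of activator detection via template matching with OpenCV.
# Threshold, coordinates, and file name are illustrative assumptions.
import cv2
import numpy as np
from PIL import ImageGrab

activator = cv2.imread("activators/lost_ark.png", cv2.IMREAD_GRAYSCALE)

def activator_visible(bbox, threshold=0.8):
    """Return True if the activator image is found inside the given screen box."""
    region = np.array(ImageGrab.grab(bbox=bbox).convert("L"))
    result = cv2.matchTemplate(region, activator, cv2.TM_CCOEFF_NORMED)
    return result.max() >= threshold

# activator_region from the example config: left, top, right, bottom
if activator_visible((2, 1308, 2559, 1439)):
    print("Activator found: capture and translate main_region")
```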

🚀 Getting started

This is an example based on the Lost Ark video game.

  • Clone this repository to your PC or download it, then enter the folder
  • Launch Lost Ark and reach a dialogue scene
  • Run runCoordHelper.bat or the command python GetCoords.py
  • Press Z on the upper left point of the text box
  • Press Z on the lower right point of the text box
  • Copy the coordinates from the console into the empty fields under main_region in the config.yaml file, then close the console
  • Find the dot or icon that appears whenever the text to be translated appears; in the case of Lost Ark it is the Leave button at the bottom right
  • Press Shift + Win + S on Windows 10 or 11, select this image, and save it in the activators folder with a recognizable name
  • Run runCoordHelper.bat again or the command python GetCoords.py
  • Use the same method as above to get the coordinates of a not-too-narrow box surrounding the activator in-game image
  • Copy the coordinates from the console into the empty fields under activator_region in the config.yaml file, then close the console
  • Set source_language to the acronym of the language you want to translate from, and target_language to the language you want to translate the game into (use https://github.com/ptrstn/deepl-translate for the reference table of languages supported by deepl, or https://huggingface.co/Helsinki-NLP for opus models)
  • Set the dialogue progress key (skip_key) if desired, otherwise leave it at None. Note: leave it at None if your game has a heavy anti-cheat system that does not allow anything other than you pressing the keys of your keyboard
  • Set show_text and tts_enabled according to what you want enabled/disabled
  • If you have set tts_enabled to True, run runVoiceList.bat or python voice_list.py to find out the number associated with each voice installed on your Windows system (it is the one in square brackets), and set tts_voice_number to the desired number, as shown in the sketch after this list.
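
voice_list.py enumerates the TTS voices that Windows exposes, and the number in square brackets is what tts_voice_number expects. Here is a hedged sketch of such a listing with pyttsx3; the exact output format of voice_list.py may differ.

```python
# Hedged sketch: list the voices pyttsx3 can see on Windows, with their index.
# voice_list.py's actual output format may differ.
import pyttsx3

engine = pyttsx3.init()
for index, voice in enumerate(engine.getProperty("voices")):
    print(f"[{index}] {voice.name} ({voice.id})")
```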

Here is an example of the complete file 📋

game_name:  Lost_Ark
source_language: en
target_language: it
translation_method: deepl
translation_internal_method: offline
gpu_enabled: True
time_between_captures: 1
skip_key: "g"
show_text: False
time_to_wait_for_word: 0.3
tts_enabled: True
tts_voice_number: 0

main_region: 
  X: 567
  Y: 1304
  extensionOfX: 2068
  extensionOfY: 1439
activator_region:
  name: "lost_ark.png"
  X: 2
  Y: 1308
  extensionOfX: 2559
  extensionOfY: 1439

  • Execute run.bat
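
With skip_key set to "g" as in the example, GVT advances the dialogue automatically once the text has been read. Below is a hedged sketch of how such a key press can be sent with the keyboard library; GVT's own implementation may differ (for example via pywin32), and the anti-cheat caveat from the steps above still applies.

```python
# Hedged sketch: send the configured skip key after the text has been read.
# GVT's actual key-sending mechanism may differ.
import keyboard

skip_key = "g"               # value of skip_key in config.yaml
if skip_key != "None":
    keyboard.send(skip_key)  # press and release the key
```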

💭 To Do

  • Add the capability to define multiple regions and activators at once
  • Add the capability to support multiple games, choosing one from a menu