Overview

PythonTextObfuscator

Takes a string and puts it through different languages in Google Translate a requested number of times, returning nonsense. Example

Requirements:

python3

For the Selenium Obfuscator:

    -Selenium

    -Firefox

    -Geckodriver

In the Selenium Obfuscator:

-The major benefit is that you can translate Excel documents; the downside is that after 10 or so document translations, Google blocks your IP for a while.

-Translation is generally slower and more limited using Selenium, as a browser tab is used to scrape the data (see the sketch after this list). Also beware of RAM usage.

-May no longer be supported in the future due to its drawbacks.
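
For context, here is a minimal sketch of this kind of Selenium flow (not code from this project). It assumes Selenium 4 with Firefox and geckodriver installed, and leaves the actual scraping step as a comment since Google Translate's page structure changes often:

    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    options = Options()
    options.add_argument("-headless")  # keep the browser window hidden to reduce RAM usage

    driver = webdriver.Firefox(options=options)  # requires Firefox + geckodriver
    try:
        # sl/tl/text are the source language, target language and input text.
        driver.get("https://translate.google.com/?sl=auto&tl=fr&text=Hello%20world")
        # The obfuscator would wait for the page to render, scrape the translated
        # text from the DOM (selectors change often, so none is shown here), and
        # repeat with another randomly chosen target language.
    finally:
        driver.quit()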

In the Urllib Obfuscator:

-Translation is generally faster and uses very few resources, as only HTML is downloaded through a request (see the sketch after this list). Multiprocessing also allows simultaneous requests and can be used to the full extent without worrying about RAM usage.

—Split by length is faster and uses fewer requests (better for longer texts).

—Split by newline is slower and uses more requests but adds much more translation variety.

-Reminder: Since Google has a URL request limit, you'll need to switch VPN locations when the request limit is hit.

    ——Don't worry too much though, as it takes quite a few requests to get to that point, and the block only lasts for around an hour.
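
For illustration only, a rough sketch of a urllib-based translation hop like the one described above. It assumes Google Translate's mobile endpoint (translate.google.com/m) and its "result-container" markup, which may not match exactly what this project uses, and a real run would add multiprocessing and error handling:

    import html
    import random
    import re
    import urllib.parse
    import urllib.request

    def translate(text, source, target):
        # Fetch the lightweight mobile page, which returns plain HTML.
        query = urllib.parse.urlencode({"sl": source, "tl": target, "q": text})
        req = urllib.request.Request(
            f"https://translate.google.com/m?{query}",
            headers={"User-Agent": "Mozilla/5.0"},
        )
        with urllib.request.urlopen(req) as resp:
            page = resp.read().decode("utf-8")
        # The mobile page wraps the translation in a single "result-container" div.
        match = re.search(r'class="result-container">(.*?)<', page)
        return html.unescape(match.group(1)) if match else text

    text = "Hello world"
    for lang in random.sample(["fr", "ja", "zu", "fi", "ko"], 3):
        text = translate(text, "auto", lang)      # hop through a few random languages
    print(translate(text, "auto", "en"))          # translate the nonsense back to English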

Comments
  • Attempt to decode JSON with unexpected mimetype: text/plain

    I'm not sure what's causing this, as the last time I tried this release, this issue was not present. If it's accessing content server-side, then it might be that the server has had a config change resulting in it returning a different mimetype?

    I get the error message below consistently in the console, with %2E being added to the end of the URL each time. It does seem like some translation does happen; in this case, I inputted "Test", and the URL ended with "Hlola".

    https://translate.alefvanoon.xyz/api/v1/zu/mi/Hlola%2E 0, message='Attempt to decode JSON with unexpected mimetype: text/plain; charset=utf-8', url=URL('https://translate.alefvanoon.xyz/api/v1/zu/mi/Hlola')

    From what I've gathered looking online, the issue lies in either line 13, line 469, or both.

    return (await response.json())['translation'].replace('/','⁄')

    text = (await response.json())['translation'].replace('/','⁄')

    Some of the solutions online referred to adding "content_type=None" or "content_type='text/plain'" into the brackets after "json", but this only seemed to cause further issues for me.
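
    For reference, a small sketch of what that parameter does (as noted above, it did not fully solve the problem for the reporter): passing content_type=None to aiohttp's response.json() skips the mimetype check, so a text/plain body that is still valid JSON can be decoded. The URL is the one from the error message above.

        import asyncio
        import aiohttp

        async def fetch_translation(url):
            async with aiohttp.ClientSession() as session:
                async with session.get(url) as response:
                    # content_type=None disables aiohttp's "unexpected mimetype" check.
                    data = await response.json(content_type=None)
                    return data["translation"].replace("/", "⁄")

        print(asyncio.run(fetch_translation(
            "https://translate.alefvanoon.xyz/api/v1/zu/mi/Hlola")))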

    opened by UltraHylia 2
  • Program Freezes Up and Looping Error

    When you have Chinese (Simplified) and/or Chinese (Traditional) enabled in the language selector, the program can freeze and an error loops in the console. It happens no matter what other languages are enabled.

    https://user-images.githubusercontent.com/60769253/197659506-38871035-e311-4710-9eb9-ac2d7387841f.mp4

    opened by DerpTaco99921 0
Releases(v0.4)
  • v0.4(Feb 2, 2022)

    Rebuilt from the ground up with a new GUI and translation method.

    Changes:

    -Improved GUI.

    -Translations are retrieved from a front end to Google Translate called Lingva, which removes the issue of being blocked for making too many requests.

    -Translations are done in an asynchronous function using aiohttp instead of a process pool, which is optimal for large bulk translations (see the sketch after these notes).

    -Removed Selenium obfuscation.

    Additions:

    -Importing and saving text files.

    -Language Selector to activate or deactivate any individual language.

    -Language setting for the result.

    -Three different split methods:

        -Initial
            -Text is split by length before being passed into the obfuscate function.
            -Faster, as fewer requests are made.
            -Different languages for each piece.
            -Tabs not preserved.
        -Continuous
            -Text is split by length inside the obfuscate function.
            -Faster, as fewer requests are made.
            -Same languages for each piece.
            -Tabs not preserved.
        -Newline
            -Text is split by newlines and tabs.
            -Slower, as more requests are made.
            -Every single line is translated with different languages.
            -Tabs preserved.

    -Translation Generator, which creates a .csv file containing multiple translations of the same text:

        -Repeat mode obfuscates the original text each time, adding the result in each new column.

        -Continue mode obfuscates the results from each subsequent obfuscation, adding the result in each new column.
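
    For illustration, a simplified sketch of this translation flow (not the project's actual code): each piece of text hops through a chain of languages via a Lingva instance's /api/v1/{source}/{target}/{text} endpoint, and the pieces are translated concurrently with asyncio.gather. The instance URL and language list below are placeholders.

        import asyncio
        import urllib.parse
        import aiohttp

        LINGVA = "https://lingva.ml"  # placeholder instance URL

        async def translate(session, text, source, target):
            url = f"{LINGVA}/api/v1/{source}/{target}/{urllib.parse.quote(text, safe='')}"
            async with session.get(url) as resp:
                return (await resp.json())["translation"]

        async def obfuscate(session, text, langs):
            source = "auto"
            for lang in langs:                 # hop through each language in turn
                text = await translate(session, text, source, lang)
                source = lang
            return await translate(session, text, source, "en")

        async def main(pieces, langs):
            async with aiohttp.ClientSession() as session:
                # Translate every piece concurrently instead of one after another.
                return await asyncio.gather(*(obfuscate(session, p, langs) for p in pieces))

        print(asyncio.run(main(["Hello world", "How are you?"], ["fi", "ja", "zu"])))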

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.4.zip(15.75 KB)
  • v0.3.1c-r2(Dec 23, 2021)

  • v0.3.1c(Dec 23, 2021)

    Newlines no longer get messed up in the Urllib Obfuscator. Added a choice to split by length or by newlines.

    —Split by length is faster and uses fewer requests (better for longer texts).

    —Split by newline is slower and uses more requests but adds much more translation variety.

    Reminder: Since Google has a URL request limit, you'll need to switch VPN locations when the request limit is hit.
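
    Purely illustrative, one possible way the two split modes could behave (hypothetical helpers, not taken from the project):

        def split_by_length(text, limit=2000):
            # Fewer, larger pieces -> fewer requests (better for long texts).
            return [text[i:i + limit] for i in range(0, len(text), limit)]

        def split_by_newline(text):
            # One piece per line -> more requests, but each line can get its own
            # chain of random languages, adding variety.
            return [line for line in text.split("\n") if line]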

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.3.1c.zip(51.63 KB)
  • v0.3.1b(Dec 23, 2021)

  • v0.3.1a(Dec 23, 2021)

  • v0.3(Dec 23, 2021)

    I made massive improvements to the speed of the obfuscation thanks to learning about urllib.

    For example, I translated the same ~2300-character string of text 10 times in both the old and new versions; the old one took 38.8 seconds, while the new one took only 6.8 seconds.

    In addition, the character capacity is far greater, as it doesn't require Firefox tabs to be open and eating up RAM.

    As a test I translated the entire Among Us Wikipedia page 50 times (a character count of over 60 thousand!), and it took only 114 seconds to finish translating. Using the old obfuscator I wouldn't be able to translate more than half that amount, and it would take ages to complete (like 10 minutes or more).

    Unfortunately, the Excel Obfuscator is removed in this version until I can figure out how to get it to work with urllib; if I can't, then I'll probably add it back in with Selenium.

    At least if you couldn't get Selenium to work on your computer for the previous versions, you don't have to worry about getting it for this one.

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.3.zip(5.73 KB)
  • v0.2.2(Dec 23, 2021)

  • v0.2.1b(Dec 23, 2021)

  • v0.2.1a(Dec 23, 2021)

    Fixed TimeoutExceptions for the string translation (textbox input) obfuscation. You can now do as many translations as you want without worrying about encountering an error. The same goes for the number of characters (as long as your PC can handle it, of course). Excel translations remain unchanged, since I can't do anything about Google's document translation limit, so just switch VPN locations as usual after 10 translations with the Excel Obfuscator.

    Source code(tar.gz)
    Source code(zip)
    Python.Text.Obfuscator.v0.2.1.zip(5.88 KB)
  • v0.2(Dec 23, 2021)

  • v0.1b(Dec 23, 2021)

  • v0.1a(Dec 23, 2021)
