Natural language detection

Overview

franc


Detect the language of text.

What’s so cool about franc?

  1. franc supports more languages† than any other library
  2. franc is packaged with support for 82, 187, or 406 languages
  3. franc has a CLI

† - Based on the UDHR, the most translated document in the world.

What’s not so cool about franc?

franc supports many languages, which means it’s easily confused on small samples. Make sure to pass it big documents to get reliable results.

Install

npm:

npm install franc

This installs the franc package, with support for 187 languages (languages which have 1 million or more speakers). franc-min (82 languages, 8m or more speakers) and franc-all (all 406 possible languages) are also available. Finally, use franc-cli to install the CLI.

Browser builds for franc-min, franc, and franc-all are available on GitHub Releases.

Use

var franc = require('franc')

franc('Alle menslike wesens word vry') // => 'afr'
franc('এটি একটি ভাষা একক IBM স্ক্রিপ্ট') // => 'ben'
franc('Alle menneske er fødde til fridom') // => 'nno'

franc('') // => 'und' (language code that stands for undetermined)

// You can change what’s too short (default: 10):
franc('the') // => 'und'
franc('the', {minLength: 3}) // => 'sco'

.all

console.log(franc.all('O Brasil caiu 26 posições'))

Yields:

[ [ 'por', 1 ],
  [ 'src', 0.8797557538750587 ],
  [ 'glg', 0.8708313762329732 ],
  [ 'snn', 0.8633161108501644 ],
  [ 'bos', 0.8172851103804604 ],
  ... 116 more items ]

only

console.log(franc.all('O Brasil caiu 26 posições', {only: ['por', 'spa']}))

Yields:

[ [ 'por', 1 ], [ 'spa', 0.799906059182715 ] ]

ignore

console.log(franc.all('O Brasil caiu 26 posições', {ignore: ['src', 'glg']}))

Yields:

[ [ 'por', 1 ],
  [ 'snn', 0.8633161108501644 ],
  [ 'bos', 0.8172851103804604 ],
  [ 'hrv', 0.8107092531705026 ],
  [ 'lav', 0.810239549084077 ],
  ... 114 more items ]

CLI

Install:

npm install franc-cli --global

Use:

CLI to detect the language of text

Usage: franc [options] <string>

Options:

  -h, --help                    output usage information
  -v, --version                 output version number
  -m, --min-length <number>     minimum length to accept
  -o, --only <string>           allow languages
  -i, --ignore <string>         disallow languages
  -a, --all                     display all guesses

Usage:

# output language
$ franc "Alle menslike wesens word vry"
# afr

# output language from stdin (expects utf8)
$ echo "এটি একটি ভাষা একক IBM স্ক্রিপ্ট" | franc
# ben

# ignore certain languages
$ franc --ignore por,glg "O Brasil caiu 26 posições"
# src

# output language from stdin with only
$ echo "Alle mennesker er født frie og" | franc --only nob,dan
# nob

Supported languages

Package     Languages   Speakers
franc-min   82          8M or more
franc       187         1M or more
franc-all   406         -

Language code

Note that franc returns ISO 639-3 codes (three-letter codes), not ISO 639-1 or ISO 639-2 codes. See also GH-10 and GH-30.

To get more info about the languages represented by ISO 639-3, use iso-639-3. There is also an index available to map ISO 639-3 to ISO 639-1 codes, iso-639-3/to-1.json, but note that not all 639-3 codes can be represented in 639-1.
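Applying that mapping is a small lookup. A minimal sketch; the inline table here is an illustrative excerpt only, and in practice you would load the full iso-639-3/to-1.json index:

```javascript
// Map franc's ISO 639-3 output to ISO 639-1 where an equivalent exists.
// The table below is a tiny illustrative excerpt, not the full index.
const to1 = {afr: 'af', ben: 'bn', nno: 'nn', por: 'pt', spa: 'es'}

function toIso6391(code) {
  // Fall back to the ISO 639-3 code when there is no 639-1 equivalent,
  // since not all 639-3 codes can be represented in 639-1.
  return to1[code] || code
}

toIso6391('por') // => 'pt'
toIso6391('und') // => 'und'
```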

Ports

Franc has been ported to several other programming languages.

The works franc is derived from have themselves also been ported to other languages.

Derivation

Franc is a derivative work from guess-language (Python, LGPL), guesslanguage (C++, LGPL), and Language::Guess (Perl, GPL). Their creators granted me the rights to distribute franc under the MIT license: respectively, Kent S. Johnson, Jacob R. Rideout, and Maciej Ceglowski.

License

MIT © Titus Wormer

Comments
  • Add support for BCP 47 and output IANA language subtags

    By default, Franc returns ISO-639-3 three-letter language tags, as listed in the Supported Languages table.

    We would like Franc to alternatively support outputting IANA language subtags as an option, in compliance with the W3C recommendation for specifying the value of the lang attribute in HTML (and the xml:lang attribute in XML) documents.

    (Two- and three-letter) IANA language codes are used as the primary language subtags in the language-tag syntax defined by the IETF’s BCP 47, and may be further specified by adding subtags for “extended language”, script, region, dialect variants, etc. (RFC 5646 describes the syntax in full). Adding such fine-grained secondary qualifiers is, I guess, out of Franc’s scope, but it would nevertheless be very helpful if Franc could at least return the IANA primary language tags, which, used stand-alone, still comply with the spec.

    On the Web, as the IETF and W3C agree, IANA language subtags and BCP 47 seem to be the de facto industry standard (at least more so than ISO 639-3). Moreover, the naming convention for TeX hyphenation pattern files (such as those used by, i.a., OpenOffice) uses ISO 639-2 codes, which overlap better with IANA language subtags, too.

    If Franc would output IANA language subtags, then the return values could be used as-is, without any further post-processing or re-mapping, in, for example, CSS rules specifying hyphenation:

    @media print {
      :lang(nl) { hyphenate-patterns: url(hyphenation/hyph-nl.pat); }
    }
    

    @wooorm :

    1. What is the rationale for Franc to default to ISO 639-3 (only)? Is it a “better” standard, and, if so, why?
    2. If you agree it would be a good idea for Franc to support BCP 47 and output IANA language subtags as an option, how would you prefer it to be implemented, and would you accept a PR? (We’d happily contribute.) Would it suffice to add and map them in data/support.json?
    opened by rhythmus 12
  • Reference of source document

    It seems that NONE of the languages have sources for the data.json 3-gram model. Is it possible to provide the source document for each language, so that we can review the material and possibly generate 2-gram and 4-gram (or 2/3, 3/4, or 2/3/4-gram combination) models?

    opened by DonaldTsang 10
  • Problems with franc and Uzbek (uzb, uzn, uzs)

    I have tested franc and found that Uzbek (my native language) is not detected properly. I tested with large data sets. Can I make a contribution? Also, there is an issue with the naming convention of the language code here: 'uzn' (Northern Uzbek) has never existed in linguistics, but I wonder how it became an ISO 639 identifier.

    opened by muminoff 10
  • BUG: Basic tests show that franc is extremely inaccurate

    > franc.all('Hola amiga', { only: [ 'eng', 'spa', 'por', 'ita', 'fra' ] })
    [
      [ 'spa', 1 ],
      [ 'ita', 0.9323770491803278 ],
      [ 'fra', 0.5942622950819672 ],
      [ 'por', 0.5368852459016393 ],
      [ 'eng', 0 ]
    ]
    > franc.all('Hola mi amiga', { only: [ 'eng', 'spa', 'por', 'ita', 'fra' ] })
    [
      [ 'ita', 1 ],
      [ 'spa', 0.6840958605664488 ],
      [ 'fra', 0.6318082788671024 ],
      [ 'por', 0.08714596949891062 ],
      [ 'eng', 0 ]
    ]
    > franc.all('Ciao amico!', { only: [ 'eng', 'spa', 'por', 'ita', 'fra' ] })
    [
      [ 'spa', 1 ],
      [ 'por', 0.9940758293838863 ],
      [ 'ita', 0.9170616113744076 ],
      [ 'eng', 0.6232227488151658 ],
      [ 'fra', 0.46563981042654023 ]
    ]
    

    These are all completely incorrect accuracies.

    opened by niftylettuce 8
  • Make MAX_LENGTH an options parameter

    Hello!

    First of all, thank you for this wonderful project.

    It seems that franc limits the text sample to analyse to a hard-coded 2048 chars in these lines

    https://github.com/wooorm/franc/blob/5842af9c1a74ffb47ebe3307bfc61cf29b6e842e/packages/franc/index.js#L21 https://github.com/wooorm/franc/blob/5842af9c1a74ffb47ebe3307bfc61cf29b6e842e/packages/franc/index.js#L93

    Could this MAX_LENGTH const be part of options? It seems to me this is due to speed reasons, but I care more about accuracy than speed.

    I am reading web pages that have parts in more than one language, and need to detect the most used language, but maybe the first 2048 characters are in the less used language.

    Sorry if I misinterpreted the code and it is not doing what I thought.
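One possible workaround while the limit stays hard-coded, sketched here: split the document into windows, classify each window, and sum the scores per language. `detectChunks` is a hypothetical helper, and `detectAll` is a stand-in for `franc.all` so the sketch stays self-contained:

```javascript
// Classify a long text in fixed-size windows and aggregate the scores,
// so languages appearing after the first 2048 characters still count.
function detectChunks(text, detectAll, size = 2048) {
  const totals = {}
  for (let i = 0; i < text.length; i += size) {
    // detectAll returns [[code, score], ...] pairs, like franc.all.
    for (const [code, score] of detectAll(text.slice(i, i + size))) {
      totals[code] = (totals[code] || 0) + score
    }
  }
  // Sort languages by their summed score, highest first.
  return Object.entries(totals).sort((a, b) => b[1] - a[1])
}
```

The most-used language across the whole page then ranks first, even when the opening 2048 characters are in the less-used language.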

    opened by porkopek 8
  • Explain the output of 'all'

    The results of 'all' consist of the language code and a score number. I've guessed that the lowest number is the detected language, but what can be learned from the score number? Doesn't seem to be documented.

    I'm looking to detect the language of job titles in English and French only (because Canada), and I was getting results all over the place using just franc(jobTitle). By whitelisting English and French and then applying a threshold to the score, I was able to tune in a much more accurate result (still a 3.92% error rate over 1020 job titles, but it was in the 25% range before the threshold). Is this a good use for the score, or am I just getting lucky?
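The whitelist-plus-threshold approach described above can be sketched as follows. `pickLanguage` and the 0.9 cutoff are illustrative assumptions, not part of franc's API; `results` is the shape `franc.all(text, {only: ['eng', 'fra']})` returns:

```javascript
// Accept the top guess only when the runner-up is clearly behind it;
// otherwise report 'und' (undetermined), as franc itself does.
function pickLanguage(results, threshold = 0.9) {
  const [best, runnerUp] = results
  if (!runnerUp || runnerUp[1] < threshold * best[1]) return best[0]
  return 'und'
}

pickLanguage([['eng', 1], ['fra', 0.62]]) // => 'eng'
pickLanguage([['eng', 1], ['fra', 0.97]]) // => 'und'
```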

    opened by stockholmux 8
  • Problems with latin alphabet languages

    A term like “yellow flicker beat” suggests German, with English (the correct answer) quite far below.

    Can you explain how this would work?

    I would like to use franc in combination with a spell checker, first detecting the language and then looking up correct words with a spell checker using the identified language.

    opened by djui 8
  • Some Japanese are detected as Chinese mandarin

    Hi, I see something strange about Japanese detection,

    if I put a translated text from google translate to Japanese: 裁判の周辺のラオスにUターンした元元兵士

    the lib detects it and returns 'jpn', but if I put a Japanese text from yahoo japan or amazon japan: ここ最近、よく拡散されたつぶやきや画像をまとめてご紹介。気になるも

    it returns 'cmn'. Does anyone know why?

    opened by ThisIsRoy1 7
  • Consistency on ISO standards for easier integration.

    Revisiting #10, I think it's great that you support other languages not found in any of the ISO standards.

    But for those that can be found, the fact that Franc sometimes returns the 2T code and other times the 2B makes it really hard to map without huge lists.

    For instance:

    • arm matches 2B for Armenian but not 2T nor 3 which are 'hye'
    • ces, on the other hand, matches 2T and 3 while 2B is 'cze'

    Returning one or the other inconsistently makes integration with those standards difficult.

    I agree that for languages not found in those standards we must find a solution, and that is great! But for those that match, adhering to one or the other would be very helpful.

    Thanks, best regards, Rafa.

    opened by RafaPolit 6
  • Getting weird results

    Hey @wooorm am I doing something wrong here?

    > apps.forEach(app => console.log(franc(app.description), app.description))
    
    eng A universal clipboard managing app that makes it easy to access your clipboard from anywhere on any device
    fra 5EPlay CSGO Client
    nob Open-source Markdown editor built for desktop
    eng Communication tool to optimize the connection between people
    vmw Wireless HDMI
    eng An RSS and Atom feed aggregator
    eng A work collaboration product that brings conversation to your files.
    src Pristine Twitter app
    dan A Simple Friendly Markdown Note.
    nno An open source trading platform
    eng A hackable text editor for the 21 st Century
    eng One workspace open to all designers and developers
    nya A place to work + a way to work
    cat An experimental P2P browser
    sco Focused team communications
    sco Bitbloq is a tool to help children to learn and create programs for a microcontroller or robot, and to load them easily.
    eng A simple File Encryption application for Windows. Encrypt your bits.
    eng Markdown editor witch clarity +1
    eng Text editor with the power or Markdown
    eng Open-sourced note app for programmers
    sco Web browser that automatically blocks ads and trackers
    bug Facebook Messenger app
    dan Markdown editor for Mac / Windows / Linux
    fra Desktop build status notifications
    sco Group chat for global teams
    src Your rubik's cube solves
    sco Orthodox web file manager with console and editor
    cat Game development tools
    sco RPG style coding application
    deu Modern browser without tabs
    eng Your personal galaxy of inspiration
    sco A menubar/taskbar Gmail App for Windows, macOS and Linux.
    
    opened by zeke 6
  • Inaccurate detection examples

    Here are just a few inaccuracies I've come across testing this package:

    franc('iphone unlocked') // returns 'ibb' instead of 'eng'
    franc('new refrigerator') // returns 'dan' instead of 'eng'
    franc('макбук копмьютер очень хороший') // returns 'kir' instead of 'rus'
    
    opened by demisx 6
  • Improved accuracy for small documents

    I'd like to play with patching franc, or making some alternative to it, that can detect the language of small documents much more accurately.

    First of all is this something that could be interesting to merge into franc itself?

    Secondly I'm almost clueless about language classification, could trying the following things make sense?

    1. Storing more than 300 trigrams, maybe 400 or so.
    2. Using quadgrams or bigrams rather than trigrams.
    3. Extracting the trigrams from a longer and more diverse document than the UDHR.
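Point 2 could be explored with a generic n-gram extractor. A sketch; the padding and whitespace collapsing mirror common trigram-model preprocessing, not necessarily franc's exact pipeline:

```javascript
// Count n-grams of any size from lower-cased text, padded with spaces
// so word boundaries produce their own grams.
function ngrams(text, n = 3) {
  const value = ' ' + text.toLowerCase().replace(/\s+/g, ' ').trim() + ' '
  const counts = {}
  for (let i = 0; i <= value.length - n; i++) {
    const gram = value.slice(i, i + n)
    counts[gram] = (counts[gram] || 0) + 1
  }
  return counts
}

ngrams('aba', 2) // => {' a': 1, 'ab': 1, 'ba': 1, 'a ': 1}
```

Comparing the top-k tables produced by `ngrams(text, 2)`, `ngrams(text, 3)`, and `ngrams(text, 4)` on short inputs would be a cheap first experiment.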

    From a shallow reading of this paper on n-grams it sounds to me like ngrams may be fundamentally not well suited for short documents because there just isn't enough data to reconstruct the top 300 or whatever ngrams reliably from that, maybe 🤔.

    CLD3 seems to feed unigrams bigrams and trigrams to some neural network and that seems to work much better for smaller texts somehow, I'm not sure how or why, maybe that's the way to go.

    Any other ideas that I should try?

    opened by fabiospampinato 19
  • Probability normalization

    Currently franc often returns a probability close to 1 for many languages; IMO all these probabilities should be normalized to add up to 1.

    Also, there always seems to be a language at the top with probability 1. This makes it difficult to judge how sure the “model” is about the detection, which would be another interesting data point to have.
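Normalization can be done in user land today. A minimal sketch; note that the underlying scores are trigram distances, so the result is only a rough proxy for a probability distribution:

```javascript
// Rescale franc.all-style [[code, score], ...] pairs to sum to 1.
function normalize(results) {
  const total = results.reduce((sum, [, score]) => sum + score, 0)
  return results.map(([code, score]) => [code, score / total])
}

normalize([['por', 1], ['glg', 0.5], ['spa', 0.5]])
// => [['por', 0.5], ['glg', 0.25], ['spa', 0.25]]
```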

    opened by fabiospampinato 3
  • Some Chinese sentences are detected as Japanese

    sentence 1

    特別推薦的必訪店家「ヤマシロヤ」,雖然不在阿美橫町上,但就位於JR上野站廣小路口對面

    Result: jpn 1 (Google Translate correctly detects it as Chinese)
    

    sentence 2

    特別推薦的必訪店家,雖然不在阿美橫町上,但就位於JR上野站廣小路口對面

    Result: cmn 1 (Google Translate correctly detects it as Chinese)
    

    Sentence 1 is almost entirely Chinese characters and contains 5 Katakana characters, but its result is jpn, which is incorrect.

    Sentence 2 is entirely Chinese characters, and its result is cmn, which is correct.

    Maybe the result is related to #77

    opened by kewang 3
  • Use languages' alphabets to make detection more accurate

    Что это за язык? is a Russian sentence, which is detected as Bulgarian (bul 1, rus 0.938953488372093, mkd 0.9353197674418605). However, neither Bulgarian nor Macedonian has the letters э and ы in its alphabet.

    Same with Чекаю цієї хвилини., which is Ukrainian but is detected as Northern Uzbek with probability 1, whereas Ukrainian gets only 0.33999999999999997. However, the letters є and ї are used only in Ukrainian, whereas the Uzbek Cyrillic alphabet lacks five of the letters in this sentence, namely ю, ц, і, є, and ї.

    I know that Franc is supposed to be not good with short input strings, but taking alphabets into account seems to be a promising way to improve the accuracy.
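A sketch of that idea: keep a table of letters distinctive to each language and move matching candidates ahead of the rest. The letter sets here are illustrative excerpts, not complete alphabets, and `boost` is a hypothetical helper, not part of franc:

```javascript
// Letters that, when present, strongly indicate a specific language
// (excerpts only; a real table would cover many more languages).
const distinctive = {
  rus: /[эыё]/i, // absent from the Bulgarian and Macedonian alphabets
  ukr: /[єї]/i   // absent from the Uzbek Cyrillic alphabet
}

// Reorder franc.all-style results: languages whose distinctive
// letters appear in the text come first, original order otherwise.
function boost(results, text) {
  const hit = ([code]) => Boolean(distinctive[code] && distinctive[code].test(text))
  return results.filter(hit).concat(results.filter((pair) => !hit(pair)))
}

boost([['bul', 1], ['rus', 0.94]], 'Что это за язык?')[0][0] // => 'rus'
```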

    opened by thorn0 15