WordleSolver

An algorithm that can solve the word puzzle Wordle with an optimal number of guesses on HARD mode.

How to use the program


Clone this project with git clone and run python3 solver.py in the terminal.

When you run the program, the algorithm will provide you with an educated guess. You type that guess into Wordle, and once Wordle shows you the result, you input it back into the program and get another guess back. This process continues until you have solved the puzzle!

Inputting the result of your guesses is easy. If a character is gray, enter '_', if a character is yellow, enter the lowercase letter, and if a character is green, enter the uppercase letter. For example, if the program told you to guess "aeros" and the result of the guess was:

[Image: Wordle result for the guess "aeros", where only the "r" tile is yellow]

You would enter the result as: __r__

Here is another example:

[Image: Wordle result for another guess, where the first two tiles are green, the fourth is yellow, and the rest are gray]

You would enter the result as: DR_k_
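
In code, the mapping from a result string to clues might look something like the sketch below. This is just an illustration of the input format; the parse_result helper is my own and is not necessarily how solver.py reads its input.

```python
# A minimal sketch (illustrative only) of how a result string like "__r__" or
# "DR_k_" maps onto gray/yellow/green clues.

def parse_result(guess, result):
    """Return (greens, yellows, grays) as lists of (position, letter)."""
    greens, yellows, grays = [], [], []
    for pos, (g, r) in enumerate(zip(guess, result)):
        if r == "_":
            grays.append((pos, g))        # gray: letter not in the word
        elif r.isupper():
            greens.append((pos, r.lower()))  # green: letter in this exact spot
        else:
            yellows.append((pos, r))      # yellow: letter in the word, wrong spot
    return greens, yellows, grays

print(parse_result("aeros", "__r__"))
# ([], [(2, 'r')], [(0, 'a'), (1, 'e'), (3, 'o'), (4, 's')])
```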

How the algorithm works

Here's a quick run-down of how the algorithm works. We keep a list of words that could still be the answer and remove words from it until only one remains or we guess the right answer. Each word has a unique number associated with it. We can use this number to quickly determine whether a word can still be the answer based on the results of earlier guesses; if it cannot, it is removed from our list. The key to the accuracy and efficiency of this algorithm is how this unique number is generated.

The number is the product of a few prime numbers, which lets us use modular arithmetic in a clever way! Each letter has 6 prime numbers associated with it: one "yellow" number and five "green" numbers. We use the yellow number when we know a letter is in the word but we don't know where. We use one of the green numbers when we know that a letter is in a specific spot. You can see these prime numbers in charDict.json.

To actually calculate the number of a word, we multiply together the yellow numbers of all the characters that make up the word, as well as certain green numbers. Which green number we multiply by depends on the position the letter appears in. If the letter D appears in the first spot, we multiply by its 1st green number. If it was instead in the last spot of the word, we would multiply by its 5th green number. The reason we do this is that we can use modulo to check whether a word can still be the answer based on the result of another guess. For example, if we guessed "aeros" and the word we were trying to find was "drink", we learn that r is somewhere in the word but not in the third spot. Say a candidate word has number n. If n modulo r's yellow number does not equal 0, then that word does not contain r and cannot be the answer, so we can remove it from the list. Likewise, if n modulo r's third green number equals 0, that word has r in the third spot and cannot be the answer either. Similar logic applies when multiple letters come up yellow or some letters come up green.

The value of each word never changes, so we can process this information once and store it in a text file to be used later, which is what I did in wordList.txt! If you would like to use a different set of words than the one I used, feel free to change the words.txt file and run process.py to generate a new wordList file.
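Here is a small, self-contained sketch of the encoding and the modulo check described above. The primes it generates and the helper names (first_primes, word_number) are illustrative only; the project's actual primes live in charDict.json.

```python
# Illustrative sketch of the prime encoding and modulo checks. The real primes
# come from charDict.json; the ones generated here are just for demonstration.

def first_primes(n):
    """Return the first n prime numbers."""
    primes, candidate = [], 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
primes = first_primes(len(ALPHABET) * 6)

# Each letter gets one "yellow" prime and five positional "green" primes.
char_primes = {
    c: {"yellow": primes[i * 6], "green": primes[i * 6 + 1 : i * 6 + 6]}
    for i, c in enumerate(ALPHABET)
}

def word_number(word):
    """Multiply each letter's yellow prime and the green prime for the
    position the letter occupies."""
    n = 1
    for pos, c in enumerate(word):
        n *= char_primes[c]["yellow"] * char_primes[c]["green"][pos]
    return n

# Guessing "aeros" against the hidden word "drink" tells us "r" is in the word
# (yellow) but not in the third spot. A candidate with number n survives only if:
#   n % yellow("r") == 0    -> "r" appears somewhere in the candidate
#   n % green("r", 3) != 0  -> "r" is not in the candidate's third spot
n = word_number("drink")
assert n % char_primes["r"]["yellow"] == 0
assert n % char_primes["r"]["green"][2] != 0   # index 2 = third spot
print("'drink' survives the filter for result '__r__' on guess 'aeros'")
```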

Optimizations

One way to make the algorithm take fewer guesses is to make smarter guesses. The optimization I made here is to take letter frequency into account: letters that appear more often are assigned smaller prime numbers, and the word that is guessed is always the one with the smallest number. The primes associated with each letter aren't chosen arbitrarily; they actually tell us some information. "e" is the most common letter and as such has the six smallest prime numbers. By sorting the word list by each word's number and always guessing the word with the smallest number, the algorithm is more likely to guess a word with "e" in it than one with "q", since words containing "e" will tend to have smaller numbers. This is good because "e" is much more likely to be in the answer than "q". Also, I only need to sort the list once in process.py, so there is no significant performance hit!

A drawback of this approach is that words made up of repeated common letters have very low values and get guessed much more often. This is not good, because words with repeating letters make it harder to narrow down our potential answers! For example, consider the word "esses", which is made up of only the two most common letters. It's good that our guesses consist of common letters, but it is bad that we only get information about two different letters. The way I fixed this is by multiplying the value of words that have a character repeated two or three times by a much bigger prime number, so they are weighed down and guessed less often.
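
As a rough illustration of the penalty, the sketch below multiplies made-up base values by extra primes when a letter repeats. The penalty primes, values, and the penalised_value helper are invented for illustration, not the ones process.py actually uses.

```python
# Illustrative sketch of the repeated-letter penalty and the resulting guess order.

from collections import Counter

DOUBLE_PENALTY = 7919     # assumption: some large prime
TRIPLE_PENALTY = 104729   # assumption: an even larger prime

def penalised_value(word, base_value):
    """Weigh down words that repeat a letter two or three times."""
    value = base_value
    for count in Counter(word).values():
        if count == 2:
            value *= DOUBLE_PENALTY
        elif count >= 3:
            value *= TRIPLE_PENALTY
    return value

# Made-up base values: "esses" would otherwise look attractive because it is
# built from very common letters.
base_values = {"esses": 9_000, "arose": 12_000, "slate": 13_000}
ranked = sorted(base_values, key=lambda w: penalised_value(w, base_values[w]))
print(ranked)   # ['arose', 'slate', 'esses']: repeated letters push "esses" back
```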

Another optimization I made is taking into account how common a word is. There are a lot of niche words in the list that are very rarely used and are unlikely to be the answer to the puzzle. So, once I've narrowed down the possible words to fewer than a hundred, it makes sense to guess the more common words first. This is why I introduced a second number associated with each word: the frequency of the word in Wikipedia articles. Once there are fewer than 100 words in the list, the list is re-sorted by this second number rather than the first, so each guess is the most common word remaining!
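
A sketch of that switch might look like the following; the (word, value, frequency) tuple layout and the pick_guess helper are assumptions for illustration, not the project's actual data structures.

```python
# Sketch of the endgame switch: once fewer than 100 candidates remain, rank by
# word frequency instead of by the prime-product value.

def pick_guess(candidates):
    """candidates: list of (word, prime_product_value, wiki_frequency)."""
    if len(candidates) < 100:
        # Few candidates left: guess the most common word remaining.
        return max(candidates, key=lambda c: c[2])[0]
    # Otherwise guess the word with the smallest prime-product value.
    return min(candidates, key=lambda c: c[1])[0]

print(pick_guess([("drink", 1_234_567, 5200), ("drunk", 1_334_567, 4100)]))  # drink
```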

Further Optimizations

As I mentioned before, one of the optimizations I made was having more common letters correspond to smaller prime numbers and sorting the list of words by the number associated with each word. This is all done just once for each set of words in process.py and is very computationally efficient. However, if more accuracy is desired, the prime numbers associated with each letter could be re-generated after each guess, because the letter frequencies among the remaining candidates change as words are eliminated. This might increase accuracy slightly but would take much longer to process, which is why I opted against it: after each guess, I would have to re-count the frequency of each letter, recalculate the value of each word, and then re-sort the entire list based on the new values.
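
For what it's worth, the re-counting step could look something like the sketch below. This is not implemented in the project; the letter_counts helper and the choice to count each letter once per word are my own assumptions.

```python
# Sketch of per-guess letter re-ranking: count how many remaining candidates
# contain each letter, then give the smallest primes to the most frequent letters.

from collections import Counter

def letter_counts(candidates):
    """Count how many remaining candidate words contain each letter."""
    counts = Counter()
    for word in candidates:
        counts.update(set(word))   # count each letter once per word
    return counts

remaining = ["drink", "brink", "prink", "drunk"]
order = [letter for letter, _ in letter_counts(remaining).most_common()]
print(order)   # letters like 'r', 'n', 'k' rank highest and would get the smallest primes
```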

Sources

  • Wordle is by PowerLanguage
  • List of 5-letter words is based on SOWPODS and was taken from Word Game Dictionary. I suspect that PowerLanguage used the same source for Wordle, as he used a similar source for another project.
  • The frequency of words was taken from lexepedia with a minimum frequency of 1, length of 5, and only includes Wiktionary Words.