Sonnet finder

Finds snippets in iambic pentameter in English-language text and tries to combine them into a rhyming sonnet.

Usage

This is a Python script that runs without a GPU or any other special hardware.

  1. Install the required packages, e.g. via: pip install -r requirements.txt

  2. Prepare a plain text file, say input.txt, with text you want to make a sonnet out of (sonnet-ize? sonnet-ify?). It can have multiple sentences on the same line, but a sentence should not be split across multiple lines.

    For example, I used pandoc --to=plain --wrap=none to generate a text file from my LaTeX papers. You could also start by grabbing some text files from Project Gutenberg.

  3. Run sonnet finder: python sonnet_finder.py input.txt -o output.tsv

    Using -o will save a list of all extracted candidate phrases, sorted by rhyme pattern, so you can generate new sonnets more quickly (see below) or browse the candidates and cherry-pick lines to make your own sonnet.

    Either way, the script will output a full example sonnet to STDOUT (provided enough rhyming pairs in iambic pentameter were found).

  4. If you've saved an output.tsv file before, you can quickly generate new sonnets via python sonnet_remix.py output.tsv. Since the stress and pronunciation prediction can be slow on larger files, this is much better than re-running sonnet_finder.py if you want more automatically generated suggestions.

Examples

This is a sonnet (with cherry-picked lines) made out of my PhD thesis:

the application of existing tools
describe a mapping to a modern form
applying similar replacement rules
the base ensembles slightly outperform

hungarian, icelandic, portuguese
perform a similar evaluation
contemporary lexemes or morphemes
a single dataset in isolation

historical and modern language stages
the weighted combination of encoder
the german dative ending -e in phrases
predictions fed into the next decoder

in this example from the innsbruck letter
machine translation still remains the better

These stanzas are compiled from a couple of automatically generated suggestions based on the abstracts of all papers published in 2021 in the ACL Anthology:

effective algorithm that enables
improvements on a wide variety
and training with adjudicated labels
anxiety and test anxiety

obtain remarkable improvements on
decoder architecture, which equips
associated with the lexicon
surprising personal relationships

the impact of the anaphoric one
complexity prediction competition
developed for a laboratory run
existing parsers typically condition

examples, while in practice, most unseen
evaluate translation tasks between

Here's the same using Moby Dick:

among the marble senate of the dead
offensive matters consequent upon
a crawling reptile of the land, instead
fifteen, eighteen, and twenty hours on

the lakeman now patrolled the barricade
egyptian tablets, whose antiquity
the waters seemed a golden finger laid
maintains a permanent obliquity

the pequod with the little negro pippin
and with a frightful roll and vomit, he
increased, besides perhaps improving it in
transparent air into the summer sea

the traces of a simple honest heart
the fishery, and not the thousandth part

(The enjambment in the third stanza here is a lucky coincidence; the script currently doesn't do any kind of syntactic analysis or attempt coherence between lines.)

How it works

This script relies on the grapheme-to-phoneme library g2p_en by Park & Kim to convert the English input text to phoneme sequences (i.e., how the text would be pronounced). I chose this because it's a pip-installable Python library that fulfills two important criteria:

  1. it's not restricted to looking up pronunciations in a dictionary, but can handle arbitrary words through the use of a neural model (although, obviously, this will not always be accurate);

  2. it provides stress information for each vowel (i.e., whether any given vowel should be stressed or unstressed, which is important for determining the poetic meter).
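For illustration, here is a minimal sketch (not the actual code from this script) of the kind of information g2p_en provides to work with:

    from g2p_en import G2p

    g2p = G2p()
    phonemes = g2p("the application of existing tools")
    # ARPABET tokens, e.g. ['DH', 'AH0', ' ', 'AE2', 'P', 'L', 'AH0', ...];
    # each vowel carries a stress digit: 0 = none, 1 = primary, 2 = secondary.
    stress = "".join(ph[-1] for ph in phonemes if ph[-1] in "012")
    print(stress)  # a string like "0201000101" (exact digits depend on the model)

A stress string like this is what the scanning step below works with.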

The script then scans the g2p output for occurrences of iambic pentameter, i.e., a 0101010101(0) stress pattern, additionally checking whether they coincide with word boundaries.
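In simplified form, that scan could look like the following sketch (assuming secondary stress has already been normalized to 0 or 1, and leaving out the word-boundary checks):

    import re

    # 0101010101 with an optional trailing unstressed syllable, i.e. 0101010101(0)
    iambic_pentameter = re.compile(r"(?:01){5}0?")

    def find_iambic_spans(stress_string):
        # Return (start, end) spans of pentameter candidates in a stress string.
        return [m.span() for m in iambic_pentameter.finditer(stress_string)]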

For finding snippets that rhyme, I rely mostly on Ghazvininejad et al. (2016), particularly §3 (relaxing the iambic pentameter a bit by allowing words that end in 100) and §5.2 (giving an operational definition of "slant rhyme" that I mostly try to follow).
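As a rough illustration, the strict "perfect rhyme" baseline that slant rhyme relaxes compares everything from the last stressed vowel onward; a hypothetical helper might look like this:

    def rhyme_key(phonemes):
        # Phonemes from the last stressed vowel to the end; two phrases rhyme
        # perfectly if these match while the sounds before them differ.
        for i in range(len(phonemes) - 1, -1, -1):
            if phonemes[i][-1] in "12":  # vowel with primary/secondary stress
                return tuple(phonemes[i:])
        return None

The actual script mostly follows the looser slant-rhyme definition from §5.2 instead.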

QNA (Questions Nobody Asked)

  • Why does the script sometimes output lines that don't rhyme or don't fit the iambic meter? This script can only be as good as the grapheme-to-phoneme algorithm that's used. It frequently fails on words it doesn't know (for example, it tries to rhyme datasets with Portuguese?!) and also usually fails on abbreviations. Maybe there's a better g2p library that could be used, or the existing g2p_en could be modified to accept a custom dictionary, so you could manually define pronunciations for commonly used words.

  • Could this script also generate other types of poems? Sure. You could start by changing the regex iambic_pentameter to something else; maybe a sequence of dactyls? (See the sketch after this list.) There are some further hardcoded assumptions in the code about iambic pentameter in the function get_stress_and_boundaries() that might have to be modified.

  • Could this script generate poems in languages other than English? This would require a suitable replacement for g2p_en that predicts pronunciations and stress patterns for the desired language, as well as re-writing the code that determines whether two phrases can rhyme; see the comments in the script for details. In particular, the code for English uses ARPABET notation for the pronunciation, which won't be suitable for other languages.

  • Can this script generate completely novel phrases in the style of an input text? This script does not "hallucinate" any text or generate anything that wasn't already there in the input; if you want to do that, take a look at Deep-speare maybe.
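For the dactyl idea above, a hypothetical (untested) replacement for the pentameter regex might be:

    import re

    # a dactyl is stressed-unstressed-unstressed (100); e.g. dactylic tetrameter
    dactylic_tetrameter = re.compile(r"(?:100){4}")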

etc.

Written by Marcel Bollmann, inspired by a tweet, licensed under the MIT License.

I'm not the first one to write a script like this, but it was a fun exercise!
