Machine Psychology: Python Generated Art

Overview

A limited collection of 64 algorithmically generated artworks. Each unique piece is then given a title by the OpenAI GPT-3 language model.

[Header artwork]

This repository contains the code used to generate the artwork. You can check out the full gallery at https://www.mach-psy.com/ and view the NFTs on OpenSea.

Usage

If you want to use this from the root directory, first make sure ./src is on the PYTHONPATH.

export PYTHONPATH=$PYTHONPATH:"./src"  

(Optional) If you want to use OpenAI name generation, you also need to set the API key; otherwise the name generator will label every piece Untitled.

export OPENAI_API_KEY=<YOUR_API_KEY>
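
As a rough illustration of that fallback (not the repository's exact code; the function and helper names here are assumptions):

import os

def get_title(prompt: str) -> str:
    # Skip the OpenAI call entirely when no key is configured.
    if not os.environ.get("OPENAI_API_KEY"):
        return "Untitled"
    return request_title_from_openai(prompt)  # hypothetical helper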

Now you can generate the collection.

python src/cmd_generate.py --collection "myCoolCollection" -n 100 -i 1

This will generate a set of artwork called myCoolCollection with 100 pieces, starting at index 1.

Output

The output includes the artwork itself, like this:

[Generated artwork]

There will also be a meta-data file.

{
    "item_id": "001",
    "title": "VANISHED DREAMS",
    "start_color_name": "Rangitoto",
    "end_color_name": "Bright Turquoise",
    "code": "A:512:2b3323:00ffe1:314.272.12:393.276.16:369.218.20:345.311.24:414.391.28:97.277.32:362.121.36:314.272.12:182.251.40:161.335.36:314.272.12"
}

And there will be a preview image that combines the artwork and its meta-data.

[Preview image]

Creation Process

Each item is generated by an algorithm. The first step is to pick the primary colors for the artwork. I pick a random HSV value within a range, then derive a secondary color from it.

import random

def generate_starting_color():
    # Choose starting HSV values.
    h = random.random()
    s = random.choice([0.3, 0.5, 1, 1])  # Favor saturated colors.
    v = random.choice([0.2, 0.8])  # Either dark or bright.

    # Color is the project's color helper class (under ./src).
    return Color.hsv_float_to_rgb_int((h, s, v))
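
The secondary (end) color is then derived from that starting color. The exact rule isn't shown in this snippet, so the offsets below are assumptions; this is only a sketch of the idea (rotate the hue, keep the saturation and value):

import random

def generate_secondary_color(start_hsv):
    # start_hsv is the (h, s, v) tuple chosen above; the offset range is an assumption.
    h, s, v = start_hsv
    h = (h + random.uniform(0.3, 0.7)) % 1.0  # Rotate the hue for contrast.
    return Color.hsv_float_to_rgb_int((h, s, v))  # Same project helper as above.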

I also name the colors (this is important later) using a color-lookup table that picks the closest match (by Euclidean distance) on the HSV value.

The color-to-name mapping logic was ported from an open-source JS script by Chirag Mehta.

class ColorNameMapper:
    def __init__(self, hex_color_map: str) -> None:
        self.color_names: List[Color] = []
        ...
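
A minimal sketch of the lookup itself, written as a standalone function for brevity (the .hsv and .name attributes and the function name are assumptions, not the repository's exact API):

import math

def closest_color_name(target_hsv, named_colors):
    # named_colors is the parsed lookup table; pick the entry with the
    # smallest Euclidean distance to the target in HSV space.
    def distance(named):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(target_hsv, named.hsv)))
    return min(named_colors, key=distance).name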

The art itself is then generated by drawing a series of connected lines with variable thickness. The color and thickness change between each point. These colors, points, and thicknesses are then serialized into a code like this:

A:512:332823:29ff00:392.293.12:341.337.16:208.141.20:294.207.24:392.293.12:196.286.28:119.371.32:139.350.36:137.330.40:392.293.12

...which is then used to render the image. In this way, the meta-data also contains a redundant backup of the image itself.
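
As a rough guide to reading that code (the field layout is inferred from the metadata example above, so treat it as an assumption): a marker, the canvas size, the start and end colors as hex, then one x.y.thickness segment per point. A hypothetical parser:

def parse_art_code(code: str):
    parts = code.split(":")
    marker, size = parts[0], int(parts[1])
    start_hex, end_hex = parts[2], parts[3]
    # Each remaining segment encodes one point as "x.y.thickness".
    points = [tuple(int(n) for n in p.split(".")) for p in parts[4:]]
    return marker, size, start_hex, end_hex, points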

Name Generation

Finally, this is the most interesting part for me: the title of each piece is machine generated as well.

The color names (e.g. Rangitoto, Bright Turquoise) are used as part of a prompt to the OpenAI GPT-3 language model.
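
A minimal sketch of that call using the legacy OpenAI completions API; the prompt wording, engine, and sampling parameters are assumptions, not the exact prompt used for the collection:

import openai  # Reads OPENAI_API_KEY from the environment.

def generate_title(start_color_name: str, end_color_name: str) -> str:
    prompt = (
        f"Suggest a short, evocative title for an abstract artwork "
        f"painted in {start_color_name} and {end_color_name}:"
    )
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=8,
        temperature=0.9,
    )
    return response.choices[0].text.strip().upper()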

It comes up with some very interesting stories for each image. For example:

  • VANISHED DREAMS
  • LET’S BURN THE CROWS
  • FROZEN OCEAN

The names and the images are both machine generated, and together they evoke some story or emotion (at least to me), which is why I called this collection "Machine Psychology."
