Galois Autocompleter

An autocompleter for code editors based on OpenAI GPT-2.

🏠 Homepage

Galois is an auto code completer for code editors (or any text editor) based on OpenAI GPT-2. It is trained (fine-tuned) on a curated set of approximately 45K Python files (~470 MB) gathered from GitHub. It currently works best on Python, but thanks to GPT-2's general language modeling it also produces reasonable completions for other languages.

This repository contains the very first release of the Galois Project. The aim of the project is a deep-learning-based autocompleter that anyone can run easily on their own computer, making coding easier and more fun!

Galois demo GIF

Installation

With Docker

Either clone the repository and build the image from the Dockerfile, or run the prebuilt image directly with the following command:

docker run --rm -dit -p 3030:3030 iedmrc/galois-autocompleter:latest-gpu

P.S.: A CPU image is not available on Docker Hub at the moment, so if you want to run it on a CPU rather than a GPU, clone the repository and build the image as follows:

docker build --build-arg TENSORFLOW_VERSION=1.14.0-py3 -t iedmrc/galois-autocompleter:latest .

Without Docker

Clone the repository:

git clone https://github.com/iedmrc/galois-autocompleter

Download the latest model from the releases page and uncompress it into the repository directory:

curl -SL https://github.com/iedmrc/galois-autocompleter/releases/latest/download/model.tar.xz | tar -xJC ./galois-autocompleter

Install dependencies:

pip3 install -r requirements.txt

P.S.: Make sure you have TensorFlow version >= 1.13 installed.
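To double-check which version your Python environment actually picks up, a quick sanity check (just a sketch) is:

import tensorflow as tf
print(tf.__version__)  # should print 1.13 or newer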

Run the autocompleter:

python3 main.py

Usage

Currently, there are no extensions for code editors; you can use Galois over HTTP. When you run main.py, it starts an HTTP (Flask) server. You can then make a POST request to http://localhost:3030/autocomplete (as in the curl example below) with a JSON body like the following:

{"text": "your Python code goes here"}

An example curl command:

curl -X POST \
  http://localhost:3030/autocomplete \
  -H 'Content-Type: application/json' \
  -d '{"text":"import os\nimport sys\n# Count lines of codes in the given directory, separated by file extension.\ndef main(directory):\n  line_count = {}\n  for filename in os.listdir(directory):\n    _, ext = os.path.splitext(filename)\n    if ext not"}'

Check out the gist here for a docker-compose file.

Finetuning The Model

You can even fine-tune (re-train) the model on your own code files. Just follow Max Woolf's gpt-2-simple or Neil Shepperd's gpt-2 repository, using the 345M model, but don't forget to replace the checkpoint (model) with the one released in this repository.

You can train it on Google Colaboratory for free, but if you need a production-grade (i.e., more accurate) model, you may need to train it for longer. In my case, it took ~48 hours on a P100 GPU.
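As a rough illustration of such a fine-tuning run with gpt-2-simple (a sketch only: "corpus.txt" and "run1" are placeholder names, and the Galois checkpoint is assumed to be unpacked into checkpoint/run1):

import gpt_2_simple as gpt2

# Fetch the base 345M files (encoder, vocab) if they are not already present locally.
gpt2.download_gpt2(model_name="345M")

sess = gpt2.start_tf_sess()
# restore_from="latest" continues from the checkpoint already sitting in checkpoint/run1,
# which is where the Galois model should be placed instead of the stock 345M weights.
gpt2.finetune(
    sess,
    dataset="corpus.txt",
    model_name="345M",
    run_name="run1",
    restore_from="latest",
    steps=1000,
    save_every=500,
    sample_every=500,
)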

Planned Works

  • Train the model to predict completions in the most common programming languages.
  • Create extensions for the most common code editors so Galois can be used as an autocompleter.
  • Create a new, more lightweight yet powerful model that anyone can run easily on their own computer.

Contribution

Contributions are welcome. Feel free to create an issue or a pull request.

Author

👤 Ibrahim Ethem DEMIRCI

Twitter: @iedmrc | Github: @iedmrc | Patreon: @iedmrc

Ibrahim's open-source projects are supported by his Patreon. If you found this project helpful, any monetary contributions to the Patreon are appreciated and will be put to good creative use.

License

This project is licensed under the MIT License, as found in the LICENSE file.

Disclaimer

This repo has no affiliation or relationship with OpenAI.

Comments
  • train code complete from zero

    Hi, I tried to train code completion from scratch with two settings, and the performance is bad: 1) training from UTF-8 text with the default BPE encoding, batch size 1, 3 GPU cards, 500K iterations; 2) training from an ASCII encoding (just mapping ASCII characters to 1..N, which gives a much smaller vocabulary), batch size 1, 3 GPU cards, 500K iterations. The results of both settings are bad. Did you train by fine-tuning the released GPT-2 model? Can you share your training settings?

    opened by yuandaxing 2
  • About training ..

    When you specified the training directory to the model, did you extract all .py files and delete the other file types, or does the model parse all the project directories that you cloned from GitHub one by one?

    opened by dimwael 1
  • Text size limit for the Galois Autocompleter API

    Hi!

    While testing Galois, I discovered what seems to be a limit on the size of the text you can send to the Autocompleter API without getting a 500 error. I could not find the exact number, but at around 2180 characters of code it starts crashing.

    Here are the error logs I got:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1356, in _do_call
        return fn(*args)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
      [[{{node sample_sequence/while/model/GatherV2_1}}]]

    bug
    opened by GabrielTamujo 1
  • Finetuning the Galois model

    Hi, @iedmrc. I'm fine-tuning the Galois model with the gpt-2-simple command, aiming to adapt it to our team's programming standards (well, at least we hope so!). I'm running the finetune with "steps=-1" (that is, an endless run). I'd like to hear from you about when I should stop the process. These are the last 4 lines of the current history of the process:

    [310 | 23899.61] loss=0.09 avg=0.36
    [320 | 24645.14] loss=0.06 avg=0.35
    [330 | 25398.12] loss=0.09 avg=0.34
    [340 | 26155.55] loss=0.05 avg=0.33

    Best regards!

    question
    opened by DenisAraujo68 1