An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers"


RASP

Setup

Mac or Linux

Run ./setup.sh. It will create a python3 virtual environment and install the dependencies for RASP. It will also try to install graphviz (the non-python part) and rlwrap on your machine. If these fail, you will still be able to use RASP, but the interface will be less pleasant without rlwrap, and drawing s-op computation flows will not be possible without graphviz. After having set up, you can run ./rasp.sh to start the RASP read-evaluate-print-loop.

Windows

Follow the instructions given in windows instructions.txt

The REPL

After having set up, if you are on Mac or Linux, you can run ./rasp.sh to start the RASP REPL. Otherwise, run python3 RASP_support/REPL.py. Use Ctrl+C to quit a partially entered command, and Ctrl+D to exit the REPL.

Initial Environment

RASP starts with the base s-ops: tokens, indices, and length. It also has the base functions select, aggregate, and selector_width as described in the paper, a selector full_s created through select(1,1,==) that creates a "full" attention pattern, and several other library functions (check out RASP_support/rasplib.rasp to see them).

Additionally, the REPL begins with a base example, "hello", on which it shows the output for each created s-op or selector. This example can be changed, and toggled on and off, through commands to the REPL.
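For instance, re-creating the full selector by hand (my_full is just a throwaway name here) shows an all-ones attention pattern on the base example:

>> my_full=select(1,1,==);
     selector: my_full
 	 Example:
 			     h e l l o
 			 h | 1 1 1 1 1
 			 e | 1 1 1 1 1
 			 l | 1 1 1 1 1
 			 l | 1 1 1 1 1
 			 o | 1 1 1 1 1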

All RASP commands end with a semicolon. Commands to the REPL -- such as changing the base example -- do not.
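For instance (a minimal illustration using the built-in length s-op): re-setting the base example is a REPL command and takes no semicolon, while evaluating an s-op is a RASP command and requires one:

>> set example "hello"
>> length;
     s-op: length
 	 Example: length("hello") = [5]*5 (ints)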

Start by following along with the examples -- they are kept at the bottom of this readme.

Note on input types:

RASP expects inputs in one of four forms: strings, integers, floats, or booleans, handled respectively by tokens_str, tokens_int, tokens_float, and tokens_bool. Initially, RASP loads with tokens set to tokens_str; this can be changed by assignment, e.g.: tokens=tokens_int;. When changing the input type, you will also want to change the base example, e.g.: set example [0,1,2].

Note that assignments do not retroactively change the computation trees of existing s-ops!
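For instance, a session switching to integer inputs might look like this (a sketch; the exact printout may differ slightly):

>> set example [0,1,2]
>> tokens=tokens_int;
     s-op: tokens
 	 Example: tokens([0, 1, 2]) = [0, 1, 2] (ints)
>> tokens+1;
     s-op: out
 	 Example: out([0, 1, 2]) = [1, 2, 3] (ints)

If you try this and then want to follow the string examples below, switch back with tokens=tokens_str; and set example "hello".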

Writing and Loading RASP files

To keep and load RASP code from files, save them with the .rasp extension, and use the 'load' command without the extension. For example, you can load the examples file paper_examples.rasp in this repository to the REPL as follows:

>> load "paper_examples";

This will make (almost) all values in the file available in the loading environment (whether the REPL, or a different .rasp file): values whose names begin with an underscore remain private to the file they are written in. Loading files in the REPL will also print a list of all loaded values.
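For example, a hypothetical file my_utils.rasp containing:

_two = 2;
doubled_indices = _two * indices;

would, after load "my_utils";, make doubled_indices available in the loading environment, while _two remains private to the file.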

Syntax Highlighting

For the Sublime Text editor, you can get syntax highlighting for .rasp files as follows:

  1. Install package control for sublime (you might already have it: look in the menu [Sublime Text]->[Preferences] and see if it's there. If not, follow the instructions at https://packagecontrol.io/installation).
  2. Install the 'PackageDev' package through Package Control ([Sublime Text]->[Preferences]->[Package Control], then type [install package], then [packagedev]).
  3. After installing PackageDev, create a new syntax definition file through [Tools]->[Packages]->[Package Development]->[New Syntax Definition].
  4. Copy the contents of RASP_support/RASP.sublime-syntax into the new syntax definition file, and save it as RASP.sublime-syntax.

[The above basically follows the instructions at http://ilkinulas.github.io/programming/2016/02/05/sublime-text-syntax-highlighting.html, and then copies in the contents of the provided RASP.sublime-syntax file.]

Examples

Play around in the REPL!

Try simple elementwise manipulations of s-ops:

>> threexindices = 3 * indices;
     s-op: threexindices
 	 Example: threexindices("hello") = [0, 3, 6, 9, 12] (ints)
>> indices+indices;
     s-op: out
 	 Example: out("hello") = [0, 2, 4, 6, 8] (ints)

Change the base example, and create a selector that focuses each position on all positions before it:

>> set example "hey"
>> prevs=select(indices,indices,<);
     selector: prevs
 	 Example:
 			     h e y
 			 h |      
 			 e | 1    
 			 y | 1 1  

Check the output of an s-op on your new base example:

>> threexindices;
     s-op: threexindices
 	 Example: threexindices("hey") = [0, 3, 6] (ints)

Or on specific inputs:

>> threexindices(["hi","there"]);
	 =  [0, 3] (ints)
>> threexindices("hiya");
	 =  [0, 3, 6, 9] (ints)

Aggregate with the full selection pattern (loaded automatically with the REPL) to compute the proportion of a letter in your input:

>> full_s;
     selector: full_s
 	 Example:
 			     h e y
 			 h | 1 1 1
 			 e | 1 1 1
 			 y | 1 1 1
>> my_frac=aggregate(full_s,indicator(tokens=="e"));
     s-op: my_frac
 	 Example: my_frac("hey") = [0.333]*3 (floats)

Note: when an s-op's output is identical in all positions, RASP simply prints the output of one position, followed by "*X" (where X is the sequence length) to mark the repetition.

Check if a letter is in your input at all:

>> "e" in tokens;
     s-op: out
 	 Example: out("hey") = [T]*3 (bools)

Alternately, in an elementwise fashion, check if each of your input tokens belongs to some group:

>> vowels = ["a","e","i","o","u"];
     list: vowels = ['a', 'e', 'i', 'o', 'u']
>> tokens in vowels;
     s-op: out
 	 Example: out("hey") = [F, T, F] (bools)

Draw the computation flow for an s-op you have created, on an input of your choice (this will create a pdf in a subfolder comp_flows of the current directory):

>> draw(my_frac,"abcdeeee");
	 =  [0.5]*8 (floats)

Or simply on the base example:

>> draw(my_frac);
	 =  [0.333]*3 (floats)

If they bother you, turn the examples off, and bring them back when you need them:

>> examples off
>> indices;
     s-op: indices
>> full_s;
     selector: full_s
>> examples on
>> indices;
     s-op: indices
 	 Example: indices("hey") = [0, 1, 2] (ints)

You can also do this selectively, turning only selector or s-op examples on and off, e.g.: selector examples off.
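For instance, with only selector examples turned off:

>> selector examples off
>> full_s;
     selector: full_s
>> indices;
     s-op: indices
 	 Example: indices("hey") = [0, 1, 2] (ints)
>> selector examples on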

Create a selector that focuses each position on all other positions containing the same token. But first, set the base example to "hello" for a better idea of what's happening:

>> set example "hello"
>> same_token=select(tokens,tokens,==);
     selector: same_token
 	 Example:
 			     h e l l o
 			 h | 1        
 			 e |   1      
 			 l |     1 1  
 			 l |     1 1  
 			 o |         1

Then, use selector_width to compute, for each position, how many other positions the selector same_token focuses it on. This effectively computes an in-place histogram over the input:

>> histogram=selector_width(same_token);
     s-op: histogram
 	 Example: histogram("hello") = [1, 1, 2, 2, 1] (ints)

For more complicated examples, check out paper_examples.rasp!

Experiments on Transformers

The transformers in the paper were trained, and their attention heatmaps visualised, using the code in this repository: https://github.com/tech-srl/RASP-exps
