Jupyter Ascending

Sync Jupyter Notebooks from any editor

About

Jupyter Ascending lets you edit Jupyter notebooks from your favorite editor, then instantly sync and execute that code in the Jupyter notebook running in your browser.

It's the best of both worlds: the autocomplete, keybindings, and refactoring tools you love in your favorite editor, plus the great visualization abilities of a Jupyter notebook.

Combined with basic syncing of your code to a remote server, you can have all the power of a beefy dev-server with all the convenience of editing code locally.

Installation

$ pip install jupyter_ascending && \
jupyter nbextension    install jupyter_ascending --sys-prefix --py && \
jupyter nbextension     enable jupyter_ascending --sys-prefix --py && \
jupyter serverextension enable jupyter_ascending --sys-prefix --py

You can confirm it's installed by checking for jupyter_ascending in:

$ jupyter nbextension     list
$ jupyter serverextension list

Usage

Quickstart

  1. python -m jupyter_ascending.scripts.make_pair --base example

    This makes a pair of synced .py and .ipynb files: example.sync.py and example.sync.ipynb.

  2. Start jupyter and open the notebook:

    jupyter notebook example.sync.ipynb

  3. Add some code to the .sync.py file, e.g.

    echo 'print("Hello World!")' >> example.sync.py

  4. Sync the code into the jupyter notebook:

    python -m jupyter_ascending.requests.sync --filename example.sync.py

  5. Run that cell of code (the cell containing the given line):

    python -m jupyter_ascending.requests.execute --filename example.sync.py --line 16

Set up one of the editor integrations to do all of this from within your favorite editor!

Working with multiple jupyter servers or alternate ports

Currently Jupyter Ascending expects the Jupyter server to be running at localhost:8888. If it's running elsewhere (e.g. because you have multiple Jupyter servers open), you'll need to set the environment variables JUPYTER_ASCENDING_EXECUTE_HOST and JUPYTER_ASCENDING_EXECUTE_PORT appropriately, both where you use the client (i.e. in your editor) and where you start the server.

By default the Jupyter server searches for a free port starting at 8888. If 8888 is unavailable and it picks, say, 8889, Jupyter Ascending won't work, because it still expects to connect to 8888. To force Jupyter to use a specific port, start your notebook server with JUPYTER_PORT=8888 JUPYTER_PORT_RETRIES=0 jupyter notebook (or whatever port you want, also setting JUPYTER_ASCENDING_EXECUTE_PORT to match).
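
For example, here's a hedged sketch of running everything on an alternate port (9000 is an arbitrary choice; the file names are the ones from the Quickstart):

# Start the Jupyter server pinned to port 9000, failing instead of retrying other ports
$ JUPYTER_ASCENDING_EXECUTE_PORT=9000 JUPYTER_PORT=9000 JUPYTER_PORT_RETRIES=0 jupyter notebook example.sync.ipynb

# In the shell (or editor environment) where the client runs, point it at the same port
$ export JUPYTER_ASCENDING_EXECUTE_HOST=localhost
$ export JUPYTER_ASCENDING_EXECUTE_PORT=9000
$ python -m jupyter_ascending.requests.sync --filename example.sync.py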

Working on a remote server

Jupyter Ascending doesn't know or care whether the editor and the Jupyter server are on the same machine. The client just sends requests to http://[jupyter_server_url]:[jupyter_server_port]/jupyter_ascending, with the default set to http://localhost:8888/jupyter_ascending. We typically use SSH to forward the remote server's Jupyter port to the local machine, but you can set up the networking however you like and use the environment variables to tell the client where to look for the Jupyter server.

There's fuzzy-matching logic to match the locally edited file path with the remote notebook file path (e.g. if the two machines keep the code in different directories), so everything should just work!

Here's an example of how you could set this up:

  1. install jupyter-ascending on both the client and the server

  2. put a copy of your project code on both the client and the server

  3. start a jupyter notebook on the server, and open a .sync.ipynb notebook

  4. set up port forwarding, e.g. with something like this (forwards local port 8888 to the remote port 8888)

    ssh -L 8888:127.0.0.1:8888 user@remote_hostname

  5. use Jupyter Ascending clients as normal on the corresponding .sync.py file
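
If local port 8888 is already taken (for example by a local Jupyter), you can forward a different local port and point the client at it instead. A hedged sketch, where 9001 is an arbitrary local port and user@remote_hostname is a placeholder for your own SSH destination:

# Forward local port 9001 to the remote server's Jupyter port 8888
$ ssh -L 9001:127.0.0.1:8888 user@remote_hostname

# Tell the local client where the forwarded Jupyter server is reachable
$ export JUPYTER_ASCENDING_EXECUTE_HOST=localhost
$ export JUPYTER_ASCENDING_EXECUTE_PORT=9001
$ python -m jupyter_ascending.requests.sync --filename example.sync.py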

Security Warning

The jupyter-ascending client-server connection is currently completely unauthenticated, even if you have auth enabled on the Jupyter server. This means that, if your jupyter server port is open to the internet, someone could detect that you have jupyter-ascending running, then sync and run arbitrary code on your machine. That's bad!

For the moment, we recommend only running jupyter-ascending when you're using jupyter locally, or when your jupyter server isn't open to the public internet. For example, we run Jupyter on remote servers, but keep Jupyter accessible only to localhost. Then we use a secure SSH tunnel to do port-forwarding.

Hopefully we can add proper authentication in the future. Contributions are welcome here!

How it works

  • your editor calls the jupyter ascending client library with one of a few commands:
    • sync the code to the notebook (typically on save)
    • run a cell / run all cells / other commands that should be mapped to a keyboard shortcut
  • the client library assembles an HTTP POST request and sends it to the jupyter server
  • there is a jupyter server extension which accepts HTTP POST requests at http://[jupyter_server_url]:[jupyter_server_port]/jupyter_ascending.
  • the server extension matches the request filename to the proper running notebooks and forwards the command along to the notebook plugin
  • a notebook plugin receives the command, and updates the contents of the notebook or executes the requested command.
  • the notebook plugin consists of two parts: one part runs inside the Python process of the notebook kernel, and the other runs as JavaScript in the notebook's browser window. the Python part launches a small webserver in a thread, which is how it receives messages from the server extension. when the webserver thread starts up, it sends a message to the server extension to "register" itself, so the server extension knows where to send commands for that notebook.
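
For the curious, you can poke at the endpoint from the command line. This is only a hedged sketch: the URL comes from the description above, but the JSON fields shown are hypothetical placeholders, not the library's documented wire format (in practice the client modules from the Quickstart build these requests for you).

# Hypothetical payload; the real field names are internal to jupyter_ascending
$ curl -X POST http://localhost:8888/jupyter_ascending -H "Content-Type: application/json" -d '{"command": "sync", "file_name": "example.sync.py"}'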

Local development

To do local development (only needed if you're modifying the jupyter-ascending code):

# install dependencies
$ poetry install

# Activate the poetry env
$ poetry shell

# Installs the extension, using symlinks
$ jupyter nbextension install --py --sys-prefix --symlink jupyter_ascending

# Enables the extensions so they load automatically
$ jupyter nbextension enable jupyter_ascending --py --sys-prefix
$ jupyter serverextension enable jupyter_ascending --sys-prefix --py

To check that they are enabled, do something like this:

$ jupyter nbextension list
Known nbextensions:
  config dir: /home/tj/.pyenv/versions/3.8.1/envs/general/etc/jupyter/nbconfig
    notebook section
      jupytext/index  enabled
      - Validating: OK
      jupyter-js-widgets/extension  enabled
      - Validating: OK
      jupyter_ascending/extension  enabled
      - Validating: OK

$ jupyter serverextension list
config dir: /home/tj/.pyenv/versions/3.8.1/envs/general/etc/jupyter
    jupytext  enabled
    - Validating...
      jupytext 1.8.0 OK
    jupyter_ascending  enabled
    - Validating...
      jupyter_ascending 0.1.13 OK

Run the tests from the root directory of this repository:

$ python -m pytest .

Format files with pyfixfmt. In a PyCharm file watcher, use something like:

python -m pyfixfmt --file-glob $FilePathRelativeToProjectRoot$ --verbose
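
Outside of a PyCharm file watcher, the same module can be run from a terminal; the glob below is only an illustrative assumption about which files you want formatted:

# Format the Python files in the package directory
$ python -m pyfixfmt --file-glob "jupyter_ascending/*.py" --verbose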

Pushing a new version to PyPI:

  • Bump the version number in pyproject.toml and _version.py.
  • poetry build
  • poetry publish
  • git tag VERSION and git push origin VERSION

Updating dependencies:

  • Dependency constraints are in pyproject.toml. These are the constraints that will be enforced when distributing the package to end users.
  • These get locked down to specific versions of each package in poetry.lock when you run poetry lock or poetry install for the first time. poetry.lock is only used by developers running poetry install; the goal is a consistent development environment for all developers.
  • If you make a change to the dependencies in pyproject.toml, you'll want to update the lock file with poetry lock. To get only the minimal required changes, use poetry lock --no-update.
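
For example, a typical sequence after editing the constraints in pyproject.toml (a sketch using the commands described above):

# Refresh poetry.lock with only the minimal required changes
$ poetry lock --no-update

# Install the newly locked versions into your development environment
$ poetry install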