Jupyter Ascending

Ascend your Jupyter Notebook usage: sync Jupyter Notebooks from any editor.

About

Jupyter Ascending lets you edit Jupyter notebooks from your favorite editor, then instantly sync and execute that code in the Jupyter notebook running in your browser.

It's the best of both worlds--the autocomplete, keybindings, and refactoring tools you love in your favorite editor, plus the great visualization abilities of a Jupyter notebook.

Combined with basic syncing of your code to a remote server, you can have all the power of a beefy dev-server with all the convenience of editing code locally.

Installation

$ pip install jupyter_ascending && \
jupyter nbextension    install jupyter_ascending --sys-prefix --py && \
jupyter nbextension     enable jupyter_ascending --sys-prefix --py && \
jupyter serverextension enable jupyter_ascending --sys-prefix --py

You can confirm it's installed by checking for jupyter_ascending in:

$ jupyter nbextension     list
$ jupyter serverextension list

Usage

Quickstart

  1. python -m jupyter_ascending.scripts.make_pair --base example

    This makes a pair of synced .py and .ipynb files: example.sync.py and example.sync.ipynb.

  2. Start jupyter and open the notebook:

    jupyter notebook example.sync.ipynb

  3. Add some code to the .sync.py file, e.g.

    echo 'print("Hello World!")' >> example.sync.py

  4. Sync the code into the jupyter notebook:

    python -m jupyter_ascending.requests.sync --filename example.sync.py

  5. Run that cell of code:

    python -m jupyter_ascending.requests.execute --filename example.sync.py --line 16

Set up one of the editor integrations to do all of this from within your favorite editor!
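
For reference, the paired .sync.py file is just a Python script whose cells are delimited by jupytext-style # %% markers (this assumes the default jupytext percent format; the file that make_pair generates also contains some starter content, which is presumably why the Quickstart's execute command targets line 16). A small hand-written sketch of such a file might look like:

# %%
print("Hello World!")

# %%
print("This is a second cell")

Passing --line with any line number inside a cell should execute that whole cell in the browser notebook.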

Working with multiple jupyter servers or alternate ports

Currently Jupyter Ascending expects the jupyter server to be running at localhost:8888. If it's running elsewhere (e.g. because you have multiple jupyter notebooks open), you'll need to set the environment variables JUPYTER_ASCENDING_EXECUTE_HOST and JUPYTER_ASCENDING_EXECUTE_PORT appropriately, both where you use the client (i.e. in your editor) and where you start the server.

By default the Jupyter server searches for a free port starting at 8888. If 8888 is unavailable and it picks, say, 8889, Jupyter Ascending won't work, since it still expects to connect to 8888. To force Jupyter to use a specific port, start your notebook server with JUPYTER_PORT=8888 JUPYTER_PORT_RETRIES=0 jupyter notebook (or whatever port you want, also setting JUPYTER_ASCENDING_EXECUTE_PORT appropriately).
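
For example, to run everything on port 8890 instead (the port number here is arbitrary), you could start the server and invoke the client like this:

# start the notebook server pinned to port 8890, with jupyter-ascending pointed at the same port
$ JUPYTER_ASCENDING_EXECUTE_PORT=8890 JUPYTER_PORT=8890 JUPYTER_PORT_RETRIES=0 jupyter notebook example.sync.ipynb

# in the environment where the client runs (i.e. your editor or shell), use the same port
$ JUPYTER_ASCENDING_EXECUTE_PORT=8890 python -m jupyter_ascending.requests.sync --filename example.sync.py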

Working on a remote server

Jupyter Ascending doesn't know or care if the editor and the jupyter server are on the same machine. The client is just sending requests to http://[jupyter_server_url]:[jupyter_server_port]/jupyter_ascending, with the default set to http://localhost:8888/jupyter_ascending. We typically use SSH to forward the local jupyter port into the remote server, but you can set up the networking however you like, and use the environment variables to tell the client where to look for the Jupyter server.

There's fuzzy-matching logic to match the locally edited file path with the remote notebook file path (e.g. if the two machines keep the code in different directories), so everything should just work!

Here's an example of how you could set this up:

  1. install jupyter-ascending on both the client and the server

  2. put a copy of your project code on both the client and the server

  3. start a jupyter notebook on the server, and open a .sync.ipynb notebook

  4. set up port forwarding, e.g. with something like this (forwards local port 8888 to the remote port 8888)

    ssh -L 8888:127.0.0.1:8888 username@remote_hostname

  5. use Jupyter Ascending clients as normal on the corresponding .sync.py file
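
If local port 8888 is already in use (for example by a local Jupyter instance), a variation on steps 4 and 5 is to forward a different local port and point the client at it; port 9999 below is arbitrary, and the remote server can stay on its default 8888:

$ ssh -L 9999:127.0.0.1:8888 username@remote_hostname
$ JUPYTER_ASCENDING_EXECUTE_PORT=9999 python -m jupyter_ascending.requests.sync --filename example.sync.py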

Security Warning

The jupyter-ascending client-server connection is currently completely unauthenticated, even if you have auth enabled on the Jupyter server. This means that, if your jupyter server port is open to the internet, someone could detect that you have jupyter-ascending running, then sync and run arbitrary code on your machine. That's bad!

For the moment, we recommend only running jupyter-ascending when you're using jupyter locally, or when your jupyter server isn't open to the public internet. For example, we run Jupyter on remote servers, but keep Jupyter accessible only to localhost. Then we use a secure SSH tunnel to do port-forwarding.
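
A minimal way to follow that recommendation, using only standard Jupyter and OpenSSH options (nothing here is jupyter-ascending specific):

# on the remote machine: keep the notebook server bound to localhost (Jupyter's default)
$ jupyter notebook --ip=127.0.0.1 --port=8888

# on your local machine: reach it through an SSH tunnel rather than an exposed port
$ ssh -N -L 8888:127.0.0.1:8888 username@remote_hostname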

Hopefully we can add proper authentication in the future. Contributions are welcome here!

How it works

  • your editor calls the jupyter ascending client library with one of a few commands:
    • sync the code to the notebook (typically on save)
    • run a cell / run all cells / other commands that should be mapped to a keyboard shortcut
  • the client library assembles an HTTP POST request and sends it to the jupyter server
  • there is a jupyter server extension which accepts HTTP POST requests at http://[jupyter_server_url]:[jupyter_server_port]/jupyter_ascending.
  • the server extension matches the request's filename to the corresponding running notebook and forwards the command along to the notebook plugin
  • a notebook plugin receives the command, and updates the contents of the notebook or executes the requested command.
  • the notebook plugin consists of two parts: one part executes within the python process of the notebook kernel, and the other executes as javascript in the notebook's browser window. the python part launches a small webserver in a thread, which is how it receives messages from the server extension. when the webserver thread starts up, it sends a message to the server extension to "register" itself, so the server extension knows where to send commands for that notebook.

Local development

To do local development (only needed if you're modifying the jupyter-ascending code):

# install dependencies
$ poetry install

# Activate the poetry env
$ poetry shell

# Installs the extension, using symlinks
$ jupyter nbextension install --py --sys-prefix --symlink jupyter_ascending

# Enables the extensions so they load automatically
$ jupyter nbextension enable jupyter_ascending --py --sys-prefix
$ jupyter serverextension enable jupyter_ascending --sys-prefix --py

To check that they are enabled, do something like this:

$ jupyter nbextension list
Known nbextensions:
  config dir: /home/tj/.pyenv/versions/3.8.1/envs/general/etc/jupyter/nbconfig
    notebook section
      jupytext/index  enabled
      - Validating: OK
      jupyter-js-widgets/extension  enabled
      - Validating: OK
      jupyter_ascending/extension  enabled
      - Validating: OK

$ jupyter serverextension list
config dir: /home/tj/.pyenv/versions/3.8.1/envs/general/etc/jupyter
    jupytext  enabled
    - Validating...
      jupytext 1.8.0 OK
    jupyter_ascending  enabled
    - Validating...
      jupyter_ascending 0.1.13 OK

Run tests from the root directory of this repository:

$ python -m pytest .

Format files with pyfixfmt. In a PyCharm file watcher, use something like:

python -m pyfixfmt --file-glob $FilePathRelativeToProjectRoot$ --verbose

Pushing a new version to PyPI:

  • Bump the version number in pyproject.toml and _version.py.
  • poetry build
  • poetry publish
  • git tag VERSION and git push origin VERSION
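
As a single command sequence (VERSION is a placeholder for the new version number):

$ poetry build
$ poetry publish
$ git tag VERSION
$ git push origin VERSION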

Updating dependencies:

  • Dependency constraints are in pyproject.toml. These are the constraints that will be enforced when distributing the package to end users.
  • These get locked down to specific versions of each package in poetry.lock when you run poetry lock or poetry install for the first time. poetry.lock is only used by developers running poetry install; the goal is a consistent development environment for all developers.
  • If you make a change to the dependencies in pyproject.toml, you'll want to update the lock file with poetry lock. To get only the minimal required changes, use poetry lock --no-update.
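
For example, after editing the dependency constraints:

# refresh poetry.lock with only the minimal required changes
$ poetry lock --no-update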