Official repository of my book: "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide"

Overview

Deep Learning with PyTorch Step-by-Step

This is the official repository of my book "Deep Learning with PyTorch Step-by-Step". Here you will find one Jupyter notebook for every chapter in the book.

Each notebook contains all the code shown in its corresponding chapter, and you should be able to run its cells in sequence to get the same outputs as shown in the book. I strongly believe that being able to reproduce the results brings confidence to the reader.

There are three options for you to run the Jupyter notebooks:

Google Colab

You can easily load the notebooks directly from GitHub using Colab and run them using a GPU provided by Google. You need to be logged in to a Google Account of your own.

You can go through the chapters right away using the links below:

Part I - Fundamentals

Part II - Computer Vision

Part III - Sequences

Part IV - Natural Language Processing

Binder

You can also load the notebooks directly from GitHub using Binder, but the process is slightly different. It will create an environment in the cloud and allow you to access Jupyter's Home Page in your browser, listing all available notebooks, just like on your own computer.

If you make changes to the notebooks, make sure to download them, since Binder does not keep the changes once you close it.

You can start your environment in the cloud right now using the button below:

Binder

Local Installation

This option will give you more flexibility, but it will require more effort to set up. I encourage you to try setting up your own environment. It may seem daunting at first, but you can surely accomplish it by following seven easy steps:

1 - Anaconda

If you don’t have Anaconda’s Individual Edition installed yet, now would be a good time to do it. It is a very handy way to start, since it contains most of the Python libraries a data scientist will ever need to develop and train models.

Please follow the installation instructions for your OS.

Make sure you choose a Python 3.X version, since Python 2 was discontinued in January 2020.

2 - Conda (Virtual) Environments

Virtual environments are a convenient way to isolate Python installations associated with different projects.

First, you need to choose a name for your environment :-) Let’s call ours pytorchbook (or anything else you find easier to remember). Then, you need to open a terminal (in Ubuntu) or Anaconda Prompt (in Windows or macOS) and type the following command:

conda create -n pytorchbook anaconda

The command above creates a conda environment named pytorchbook and includes all anaconda packages in it (time to get a coffee, it will take a while...). If you want to learn more about creating and using conda environments, please check Anaconda’s Managing Environments user guide.

Did it finish creating the environment? Good! It is time to activate it, that is, to make that Python installation the one to be used from now on. In the same terminal (or Anaconda Prompt), just type:

conda activate pytorchbook

Your prompt should look like this (if you’re using Linux)...

(pytorchbook)$

or like this (if you’re using Windows):

(pytorchbook)C:\>

Done! You are using a brand new conda environment now. You’ll need to activate it every time you open a new terminal or, if you’re a Windows or macOS user, you can open the corresponding Anaconda Prompt (it will show up as Anaconda Prompt (pytorchbook), in our case), which will have it activated from the start.

IMPORTANT: From now on, I am assuming you’ll activate the pytorchbook environment every time you open a terminal / Anaconda Prompt. Further installation steps must be executed inside the environment.
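
If you want to double-check which Python installation is active, one quick (and entirely optional) way is to run the two lines below in a Python session started inside the activated environment; the printed path should point inside the pytorchbook environment:

import sys
print(sys.executable)  # the path should contain "envs/pytorchbook" (or "envs\pytorchbook" on Windows)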

3 - PyTorch

It is time to install the star of the show :-) We can go straight to the Start Locally section of its website; it will automatically select the options that best suit your local environment and show you the command to run.

Your choices should look like:

  • PyTorch Build: "Stable"
  • Your OS: your operating system
  • Package: "Conda"
  • Language: "Python"
  • CUDA: "None" if you don't have a GPU, or the latest version (e.g. "10.1") if you have one.

The installation command will be shown right below your choices, so you can copy it. For instance, if you have a Windows computer and no GPU, you'd run the following command in your Anaconda Prompt (pytorchbook):

(pytorchbook) C:\> conda install pytorch torchvision cpuonly -c pytorch
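
Once the installation finishes, you may want to run a quick sanity check. The snippet below is just a suggestion (it is not part of the book's code): run it in a Python session or notebook cell inside the activated environment to print the installed PyTorch version and whether PyTorch can see a GPU.

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True only if you installed a CUDA build and have a compatible GPU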

4 - TensorBoard

TensorBoard is a powerful tool and we can use it even if we are developing models in PyTorch. Luckily, you don’t need to install the whole TensorFlow to get it; you can easily install TensorBoard alone using conda. You just need to run this command in your terminal or Anaconda Prompt (again, after activating the environment):

(pytorchbook)C:\> conda install -c conda-forge tensorboard
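
If you'd like to confirm that TensorBoard plays nicely with PyTorch, the short snippet below (a suggestion of mine, not code from the book) writes a single scalar to a runs/sanity_check folder, a name chosen just for this example, using PyTorch's SummaryWriter. Afterwards, you can run tensorboard --logdir runs in the same folder and open the address it prints to see the logged value.

from torch.utils.tensorboard import SummaryWriter

# write one scalar value to ./runs/sanity_check so TensorBoard has something to display
writer = SummaryWriter('runs/sanity_check')
writer.add_scalar('loss', 0.5, 0)
writer.close()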

5 - GraphViz and TorchViz (optional)

This step is optional, mostly because the installation of GraphViz can be challenging sometimes (especially on Windows). If, for any reason, you do not succeed in installing it correctly, or if you decide to skip this installation step, you will still be able to execute the code in this book (except for a couple of cells that generate images of a model’s structure in the Dynamic Computation Graph section of Chapter 1).

We need to install GraphViz to be able to use TorchViz, a neat package that allows us to visualize a model’s structure. Please check the installation instructions for your OS.

If you are using Windows, please use the installer at GraphViz's Windows Package. You also need to add GraphViz to the PATH (environment variable) in Windows. Most likely, you can find the GraphViz executable file at C:\Program Files (x86)\Graphviz2.38\bin. Once you find it, you need to set or change the PATH accordingly, adding GraphViz's location to it. For more details on how to do that, please refer to How to Add to Windows PATH Environment Variable.

For additional information, you can also check the How to Install Graphviz Software guide.

If you installed GraphViz successfully, you can install the torchviz package. This package is not part of the Anaconda Distribution Repository and is only available at PyPI, the Python Package Index, so we need to pip install it.

Once again, open a terminal or Anaconda Prompt and run this command (just once more: after activating the environment):

(pytorchbook)C:\> pip install torchviz
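
To make sure the whole GraphViz + TorchViz chain works, you can try the minimal example below, which is my own suggestion rather than code taken from the book: it builds a tiny linear model, runs a forward pass, and asks torchviz's make_dot to render the resulting computation graph to a PNG file (the model and the output file name are arbitrary). If GraphViz is correctly installed and on the PATH, a graph.png file should appear in the current folder.

import torch
import torch.nn as nn
from torchviz import make_dot

# a tiny model and a forward pass, just to have a computation graph to draw
model = nn.Linear(2, 1)
x = torch.randn(1, 2)
y = model(x)

# render the graph of y; this step fails if GraphViz is not properly installed
make_dot(y, params=dict(model.named_parameters())).render('graph', format='png')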

6 - Git

It is way beyond the scope of this guide to introduce you to version control and its most popular tool: git. If you are familiar with it already, great, you can skip this section altogether!

Otherwise, I’d recommend you learn more about it; it will definitely be useful for you later down the line. In the meantime, I will show you the bare minimum, so you can use git to clone this repository containing all the code used in this book and have your own local copy to modify and experiment with as you please.

First, you need to install it. So, head to its downloads page and follow the instructions for your OS. Once the installation is complete, please open a new terminal or Anaconda Prompt (it's OK to close the previous one). In the new terminal or Anaconda Prompt, you should be able to run git commands. To clone this repository, you only need to run:

(pytorchbook)C:\> git clone https://github.com/dvgodoy/PyTorchStepByStep.git

The command above will create a PyTorchStepByStep folder containing a local copy of everything available in this GitHub repository.

7 - Jupyter

After cloning the repository, navigate to the PyTorchStepByStep folder and, once inside it, simply start Jupyter in your terminal or Anaconda Prompt:

(pytorchbook)C:\> jupyter notebook

This will open your browser up and you will see Jupyter's Home Page containing this repository's notebooks and code.

Congratulations! You are ready to go through the chapters' notebooks!
