Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 200 universities.

Overview

D2L.ai: Interactive Deep Learning Book with Multi-Framework Code, Math, and Discussions


Book website | STAT 157 Course at UC Berkeley | Latest version: v0.17.0

The best way to understand deep learning is learning by doing.

This open-source book represents our attempt to make deep learning approachable, teaching you the concepts, the context, and the code. The entire book is drafted in Jupyter notebooks, seamlessly integrating exposition, figures, math, and interactive examples with self-contained code.

Our goal is to offer a resource that could

  1. be freely available for everyone;
  2. offer sufficient technical depth to provide a starting point on the path to actually becoming an applied machine learning scientist;
  3. include runnable code, showing readers how to solve problems in practice;
  4. allow for rapid updates, both by us and also by the community at large;
  5. be complemented by a forum for interactive discussion of technical details and for answering questions.

Universities Using D2L

Cool Papers Using D2L

  1. Descending through a Crowded Valley--Benchmarking Deep Learning Optimizers. R. Schmidt, F. Schneider, P. Hennig. International Conference on Machine Learning, 2021

  2. Universal Average-Case Optimality of Polyak Momentum. D. Scieur, F. Pedregosa. International Conference on Machine Learning, 2020

  3. 2D Digital Image Correlation and Region-Based Convolutional Neural Network in Monitoring and Evaluation of Surface Cracks in Concrete Structural Elements. M. Słoński, M. Tekieli. Materials, 2020

  4. GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing. J. Guo, H. He, T. He, L. Lausen, M. Li, H. Lin, X. Shi, C. Wang, J. Xie, S. Zha, A. Zhang, H. Zhang, Z. Zhang, Z. Zhang, S. Zheng, and Y. Zhu. Journal of Machine Learning Research, 2020

  5. Detecting Human Driver Inattentive and Aggressive Driving Behavior Using Deep Learning: Recent Advances, Requirements and Open Challenges. M. Alkinani, W. Khan, Q. Arshad. IEEE Access, 2020

  6. Diagnosing Parkinson by Using Deep Autoencoder Neural Network. U. Kose, O. Deperlioglu, J. Alzubi, B. Patrut. Deep Learning for Medical Decision Support Systems, 2020

  7. Deep Learning Architectures for Medical Diagnosis. U. Kose, O. Deperlioglu, J. Alzubi, B. Patrut. Deep Learning for Medical Decision Support Systems, 2020

  8. ControlVAE: Tuning, Analytical Properties, and Performance Analysis. H. Shao, Z. Xiao, S. Yao, D. Sun, A. Zhang, S. Liu, T. Abdelzaher.

  9. Potential, challenges and future directions for deep learning in prognostics and health management applications. O. Fink, Q. Wang, M. Svensén, P. Dersin, W-J. Lee, M. Ducoffe. Engineering Applications of Artificial Intelligence, 2020

  10. Learning User Representations with Hypercuboids for Recommender Systems. S. Zhang, H. Liu, A. Zhang, Y. Hu, C. Zhang, Y. Li, T. Zhu, S. He, W. Ou. ACM International Conference on Web Search and Data Mining, 2021

If you find this book useful, please star (★) this repository or cite this book using the following BibTeX entry:

@article{zhang2021dive,
    title={Dive into Deep Learning},
    author={Zhang, Aston and Lipton, Zachary C. and Li, Mu and Smola, Alexander J.},
    journal={arXiv preprint arXiv:2106.11342},
    year={2021}
}

Endorsements

"In less than a decade, the AI revolution has swept from research labs to broad industries to every corner of our daily life. Dive into Deep Learning is an excellent text on deep learning and deserves attention from anyone who wants to learn why deep learning has ignited the AI revolution: the most powerful technology force of our time."

— Jensen Huang, Founder and CEO, NVIDIA

"This is a timely, fascinating book, providing with not only a comprehensive overview of deep learning principles but also detailed algorithms with hands-on programming code, and moreover, a state-of-the-art introduction to deep learning in computer vision and natural language processing. Dive into this book if you want to dive into deep learning!"

— Jiawei Han, Michael Aiken Chair Professor, University of Illinois at Urbana-Champaign

"This is a highly welcome addition to the machine learning literature, with a focus on hands-on experience implemented via the integration of Jupyter notebooks. Students of deep learning should find this invaluable to become proficient in this field."

— Bernhard Schölkopf, Director, Max Planck Institute for Intelligent Systems

Contributing (Learn How)

This open source book has benefited from pedagogical suggestions, typo corrections, and other improvements from community contributors. Your help is valuable for making the book better for everyone.

Dear D2L contributors, please email your GitHub ID and name to d2lbook.en AT gmail DOT com so your name will appear in the acknowledgments. Thanks.

License Summary

This open source book is made available under the Creative Commons Attribution-ShareAlike 4.0 International License. See LICENSE file.

The sample and reference code within this open source book is made available under a modified MIT license. See the LICENSE-SAMPLECODE file.

Chinese version | Discuss and report issues | Code of conduct | Other Information

Comments
  • Large-Scale Pretraining with Transformers

    Description of changes:

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by astonzhang 52
  • Vol2 gp

    Description of changes:

    Added Gaussian process chapter, including notebooks for the index, priors, inference, and advanced topics. The index, priors, and inference notebooks are still being refined, but the basic structure and content are essentially complete.

    opened by andrewgordonwilson 51
  • Vol2 hpo

    Description of changes:

    Add an initial set of notebooks for the HPO chapter: index, intro, API, Hyperband, asynchronous random search, and asynchronous successive halving.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by aaronkl 48
  • Fix a typo and delete a repeated word

    Description of changes:

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by netyster 29
  • Add Cambridge latex style resources

    Description of changes:

    (1) Rewrite some Sphinx default LaTeX functions and styles. (2) The Cambridge LaTeX style is downloaded automatically. Here, I photoshopped two images (included in the latex_style directory) based on Alex's suggestions. (3) The d2l-book PR https://github.com/d2l-ai/d2l-book/pull/56 depends on this PR.

    To ensure that QR code generation goes smoothly, please do not use any special characters in the URL; otherwise, it won't go through. You can easily replace special characters with URL encoding; please refer to https://www.urlencoder.io/learn/

    To use this style, please set style = cambridge in the config.ini of d2l-en.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by cheungdaven 28
  • Adding RegNet (2020) to Modern CNN

    Radosavovic et al. Designing Network Design Spaces (2020)

    Training a 32-layer RegNet: [training screenshot]

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by astonzhang 22
  • handling path building in xplatform manner + some minor issues and typos

    Description of changes:

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by IgorDzreyev 20
  • RL- add MDP, val-iter, and first draft index

    Added MDP, value-iteration, and index notebooks. Also updated d2l/torch.py and d2l.bib.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by rasoolfa 19
  • MSLE rather than MSE in the loss function

    Fixes the loss function in both the MXNet and PyTorch versions: changing MSE to MSLE (mean squared logarithmic error). @AnirudhDagar can you help review this PR?
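
    For context, here is a minimal sketch of an MSLE-style loss in PyTorch (illustrative only, not the PR's exact code; clamping to 1 is a common guard that keeps the log well-defined, mirroring the log-RMSE trick used for house-price regression):

    import torch

    def msle(y_hat, y):
        # Mean squared logarithmic error: MSE applied to log-transformed values.
        # Assumes targets y are positive; predictions are clamped to at least 1.
        return torch.mean((torch.log(y_hat.clamp(min=1.0)) - torch.log(y)) ** 2)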

    opened by goldmermaid 19
  • JAX port for Chapter-2 Preliminaries

    #1972 #1825

    Ported code in Chapter-2 Preliminaries to JAX

    I have added some comments within the code blocks for better understanding and have avoided changing the text.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    @AnirudhDagar

    opened by DevPranjal 18
  • Update naive-bayes.md

    Description of changes:

    There's a mistake in the formula for calculating naive Bayes estimates: P[x_i, y] is not defined. Only P[i, y] is defined, which is the conditional probability p(x_i = 1 | y).
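
    To make the notation concrete, here is a minimal sketch of the corrected estimate (a hypothetical helper; P_xy[i, y] stores p(x_i = 1 | y) and P_y[y] stores the class prior p(y), as in the section):

    import torch

    def log_posterior(x, P_xy, P_y):
        # x: binary feature vector of shape (d,); P_xy: (d, num_classes); P_y: (num_classes,)
        # log p(x | y) = sum_i [x_i * log P_xy[i, y] + (1 - x_i) * log(1 - P_xy[i, y])]
        log_lik = (x[:, None] * torch.log(P_xy)
                   + (1 - x[:, None]) * torch.log(1 - P_xy)).sum(dim=0)
        return log_lik + torch.log(P_y)  # unnormalized log p(y | x)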

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by particle1331 18
  • Fix equation numbering in an equation

    Description of changes: Changed :numref: to :eqref: in an exercise in the file queries-keys-values.md
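
    For reference, in the d2l markup equations are labeled with :eqlabel: and cited with :eqref:, while :numref: cites numbered objects such as sections and figures. A small illustrative example (the label name is hypothetical):

    $$f(x) = x^2$$
    :eqlabel:`eq_square`

    See :eqref:`eq_square` for the definition.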

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by JojiJoseph 2
  • Discussion Forum Not Showing up on Classic Branch

    None of the lessons on the classic website have functioning discussion forums (e.g., http://classic.d2l.ai/chapter_recurrent-modern/beam-search.html).

    I've checked it on Firefox and Edge already; I don't think this is browser-related.

    opened by Vortexx2 0
  • Minor change to exercise 4

    Exercise 4 says, "Assume that we draw n samples..." and then uses m in the following formula, so I propose changing n to m.

    Description of changes:

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by Denis-Kazakov 1
  • Remove [None, :]

    Description of changes: Minor code cleaning. The [None, :] is unnecessary: without it, PyTorch broadcasting will prepend 1 to the dimensions of torch.arange()'s output anyway.

    I tested the change by removing the [None, :] and running the masked_softmax() code below from Section 11.3.2.1. The outputs are the same as with [None, :], in terms of the numbers of masked and unmasked elements. I pasted the output below; you can compare it with the output on the website.

    masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3]))
    tensor([[[0.5670, 0.4330, 0.0000, 0.0000],
             [0.5983, 0.4017, 0.0000, 0.0000]],
    
            [[0.4297, 0.3518, 0.2185, 0.0000],
             [0.3578, 0.3347, 0.3075, 0.0000]]])
    
    masked_softmax(torch.rand(2, 2, 4), torch.tensor([[1, 3], [2, 4]]))
    tensor([[[1.0000, 0.0000, 0.0000, 0.0000],
             [0.4129, 0.3338, 0.2533, 0.0000]],
    
            [[0.4291, 0.5709, 0.0000, 0.0000],
             [0.2964, 0.2290, 0.1903, 0.2844]]])
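
    The broadcasting rule at work: when tensor ranks differ, PyTorch aligns trailing dimensions and implicitly prepends size-1 dimensions to the lower-rank operand, which is exactly what [None, :] did explicitly. A minimal illustration:

    import torch

    a = torch.arange(4)            # shape (4,)
    b = torch.tensor([[2], [3]])   # shape (2, 1)
    # (4,) is treated as (1, 4) and broadcast against (2, 1) -> (2, 4),
    # so a[None, :] is redundant here.
    assert torch.equal(a < b, a[None, :] < b)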
    

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by gab-chen 2
  • Fixed typo

    Description of changes: Fixed typo 'fo function values' to 'of function values'

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by pggPL 4
  • [Do not merge] Preview Vol.1

    Description of changes:

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by astonzhang 1
Releases(v1.0.0-beta0)
  • v1.0.0-beta0(Dec 15, 2022)

    D2L has gone 1.0.0-beta0! We thank all the 296 contributors for making this happen!

    Forthcoming from Cambridge University Press

    Chapters 1-11 are forthcoming from Cambridge University Press (early 2023).

    New JAX Implementation

    We added a new JAX implementation. Get started with import jax at https://d2l.ai/chapter_preliminaries/ndarray.html
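
    For a quick taste, a small illustrative snippet (not the section's exact code):

    import jax
    from jax import numpy as jnp

    x = jnp.arange(12, dtype=jnp.float32).reshape(3, 4)  # a JAX array
    total = x.sum()
    # Functional automatic differentiation: the gradient of sum(a**2) is 2a.
    grads = jax.grad(lambda a: (a ** 2).sum())(jnp.arange(3.0))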


    Thanks to @AnirudhDagar!

    New Vol.2 Chapter on Reinforcement Learning

    With the advent of ChatGPT (a sibling model of InstructGPT, fine-tuned using reinforcement learning), you may be curious about how to enable ML to make decisions sequentially:

    17. Reinforcement Learning
      17.1. Markov Decision Process (MDP)
      17.2. Value Iteration
      17.3. Q-Learning
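
    As a taste of the chapter's topic, here is a minimal NumPy sketch of value iteration on a finite MDP (illustrative, not the chapter's code; P and R are hypothetical transition and reward arrays):

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        # P: (S, A, S) transition probabilities; R: (S, A) expected rewards.
        V = np.zeros(P.shape[0])
        while True:
            # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * E[V(s')]
            Q = R + gamma * (P @ V)   # contracts the last axis of P with V
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)  # optimal values and greedy policy
            V = V_new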


    Thanks to Pratik Chaudhari (University of Pennsylvania and Amazon), Rasool Fakoor @rasoolfa (Amazon), and Kavosh Asadi (Amazon)!

    New Vol.2 Chapter on Gaussian Processes

    “Everything is a special case of a Gaussian process.” Gaussian processes and deep neural networks are highly complementary and can be combined to great effect:

    18. Gaussian Processes
      18.1. Introduction to Gaussian Processes
      18.2. Gaussian Process Priors
      18.3. Gaussian Process Inference
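
    For intuition, a minimal sketch (not the chapter's code) of drawing sample functions from a GP prior with an RBF kernel:

    import numpy as np

    def rbf_kernel(x1, x2, length_scale=1.0):
        # Squared-exponential covariance between two sets of 1-D inputs.
        d = x1[:, None] - x2[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    x = np.linspace(-3, 3, 100)
    K = rbf_kernel(x, x) + 1e-9 * np.eye(len(x))  # jitter for numerical stability
    samples = np.random.multivariate_normal(np.zeros(len(x)), K, size=3)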


    Thanks to Andrew Gordon Wilson @andrewgordonwilson (New York University and Amazon)!

    New Vol.2 Chapter on Hyperparameter Optimization

    Tired of setting hyperparameters in a trial-and-error manner? You may wish to check out the systematic hyperparameter optimization approach:

    19. Hyperparameter Optimization
      19.1. What Is Hyperparameter Optimization?
      19.2. Hyperparameter Optimization API
      19.3. Asynchronous Random Search
      19.4. Multi-Fidelity Hyperparameter Optimization
      19.5. Asynchronous Successive Halving
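
    As a flavor of the simplest method covered there, a minimal random-search sketch (illustrative; the objective and search space are hypothetical):

    import random

    def random_search(objective, space, n_trials=20):
        # Sample configurations uniformly from (low, high) ranges and keep the best.
        best_cfg, best_val = None, float("inf")
        for _ in range(n_trials):
            cfg = {k: random.uniform(lo, hi) for k, (lo, hi) in space.items()}
            val = objective(cfg)  # e.g., validation loss after training with cfg
            if val < best_val:
                best_cfg, best_val = cfg, val
        return best_cfg, best_val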


    Thanks to Aaron Klein @aaronkl (Amazon), Matthias Seeger @mseeger (Amazon), and Cedric Archambeau (Amazon)!

    Fixes and Improvements

    Thanks to @finale80 @JojiJoseph @gab-chen @Excelsior7 @shanmo @kxxt @vBarbaros @gui-miotto @bolded @atishaygarg @tuelwer @gopalakrishna-r @qingfengtommy @Mohamad-Jaallouk @biswajitsahoo1111 @315930399 for improving this book!

    Source code(tar.gz)
    Source code(zip)
    d2l-en-1.0.0-beta0-full-mxnet.pdf(35.62 MB)
    d2l-en-1.0.0-beta0-full-pytorch.pdf(36.99 MB)
    d2l-en-1.0.0-beta0.zip(169.63 MB)
  • v0.17.6(Nov 13, 2022)

  • v1.0.0-alpha1.post0(Sep 1, 2022)

    We are happy to release D2L 1.0.0-alpha1.post0! We thank all the contributors who have made this open-source textbook better for everyone.

    This minor release includes the following updates:

    • Build PDFs using Cambridge Latex Style (#2187)
    • Feat: Add dynamic preview/stable version tab (#2264)
    • The Modern RNN chapter is heavily improved, with refactored text and reorganized content (#2241)
    • The Modern CNN chapter is refactored and more polished (#2249)
    • Bump the supported Python version to 3.9 (#2231)

    It also comes with the following bug-fixes:

    • Fix #2250: Depend on matplotlib-inline instead of ipython to avoid colab warning (#2279)
    • Fix broken preview version pdf links (#2264)
    • Fix #2247: Tensorflow explicitly squeeze image to support matplotlib<3.3.0 (#2248)
    • Fix PyTorch moving average computation for batch norm (#2213; see the sketch after this list)
    • Fix PyTorch module types (LazyLinear -> Linear) (#2225)
    • Fix QR code being overridden by the section rule issue (#2251)
    • Fix torch.meshgrid user warning (7d921558a8cb063af05c6246b0aa8cbb2fe9d222)
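
    For context on the batch-norm fix, the running-statistics update has the following shape (a sketch; here momentum weights the previous estimate, as in the book's batch_norm implementation):

    def update_running_stats(moving_mean, moving_var, mean, var, momentum=0.9):
        # Exponential moving averages of batch statistics, used at prediction time.
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
        return moving_mean, moving_var
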
    Source code(tar.gz)
    Source code(zip)
    d2l-en-1.0.0-alpha1.post0-full.pdf(35.17 MB)
  • v1.0.0-alpha0(Jul 15, 2022)

    We are excited to announce the release of D2L 1.0.0-alpha0! We thank all the 265 contributors who have made this open-source textbook better for everyone.

    New Topics and Revision

    We have added the following new topics, with discussions of more recent methods such as ResNeXt, RegNet, ConvNeXt, Vision Transformer, Swin Transformer, T5, GPT-1/2/3, zero-shot, one-shot, few-shot, Gato, Imagen, Minerva, and Parti.

    Besides new topics, we have significantly revised all the topics up to transformers. For example, the previous Linear Neural Networks and Multilayer Perceptrons chapters have been revamped as new chapters of Linear Neural Networks for Regression, Linear Neural Networks for Classification, and Multilayer Perceptrons.

    New API

    Throughout the book we repeatedly walk through various components including the data, the model, the loss function, and the optimization algorithm. Treating components in deep learning as objects, we can define classes for these objects and their interactions. This object-oriented design for implementation will greatly streamline the presentation. Therefore, inspired by open-source libraries such as PyTorch Lightning, we have re-designed the API with three core classes:

    • Module contains models, losses, and optimization methods;
    • DataModule provides data loaders for training and validation;
    • Both classes are combined using the Trainer class, which allows us to train models on a variety of hardware platforms.

    For example, with the classic API in previous releases:

    model = # Multilayer perceptron definition
    train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=256)
    loss = nn.CrossEntropyLoss(reduction='none')
    trainer = torch.optim.SGD(model.parameters(), lr=0.1)
    d2l.train_ch3(model, train_iter, test_iter, loss, num_epochs=10, updater=trainer)
    

    With the new API:

    model = # Multilayer perceptron definition
    data = d2l.FashionMNIST(batch_size=256)
    trainer = d2l.Trainer(max_epochs=10)
    trainer.fit(model, data)
    

    Lazy Layers in PyTorch

    Since v1.8.0, PyTorch offers "lazy" layers where input shape specification is no longer required. For simplicity we will use "lazy" layers whenever we can, such as:

    class LinearRegression(d2l.Module):
        def __init__(self, lr):
            super().__init__()
            self.save_hyperparameters()
            self.net = nn.LazyLinear(1)  # Lazy layer with output dimension only
            self.net.weight.data.normal_(0, 0.01)
            self.net.bias.data.fill_(0)
    

    Ongoing Translations

    Join us to improve the ongoing translations of the book.

    Source code(tar.gz)
    Source code(zip)
    d2l-en-1.0.0-alpha0-full.pdf(33.79 MB)
  • v0.17.5(Mar 31, 2022)

    This release fixes issues when installing the d2l package and running d2l notebooks on Google Colab with Python 3.7, and updates PyTorch and TensorFlow to their respective latest versions.

    More concretely, this release includes the following upgrades/fixes:

    • Update TensorFlow==2.8.0 (#2055)
    • Update PyTorch: torch==1.11.0 & torchvision==0.12.0 (#2063)
    • Rollback NumPy==1.21.5 & Support Python>=3.7 (#2066)
    • Fix MXNet plots; NumPy auto coercion & Unpin matplotlib==3.4 dependency (#2078)
    • Fix the broken download link for MovieLens dataset (#2074)
    • Fix iPython deprecation warning of set_matplotlib_formats (#2065)
    • Fix Densenet PyTorch implementation using nn.AdaptiveAvgPool2d (f6b1dd0053a5caeb8a53c81f97eb929c27fb868e)
    • Fix the hotdog class index in the Fine-Tuning section for ImageNet, which is 934 instead of 713 (#2009)
    • Use reduction=none in PyTorch loss for train_epoch_ch3 (#2007)
    • Fix argument test_feature->test_features of train_and_pred in kaggle house price section (#1982)
    • Fix TypeError: can’t convert CUDA tensor to numpy, by explicitly moving torch tensors to the CPU before plotting (#1966)
    Source code(tar.gz)
    Source code(zip)
    d2l-en-pytorch.pdf(27.18 MB)
    d2l-en.pdf(27.66 MB)
  • v0.17.1(Dec 8, 2021)

    This release supports running the book with SageMaker Studio Lab for free and introduces several fixes:

    • Fix data synchronization for multi-GPU training in PyTorch (https://github.com/d2l-ai/d2l-en/pull/1978)
    • Fix token sampling in BERT datasets (https://github.com/d2l-ai/d2l-en/pull/1979/)
    • Fix semantic segmentation normalization in PyTorch (https://github.com/d2l-ai/d2l-en/pull/1980/)
    • Fix mean square loss calculation in PyTorch and TensorFlow (https://github.com/d2l-ai/d2l-en/pull/1984)
    • Fix broken paragraphs (https://github.com/d2l-ai/d2l-en/commit/8e0fe4ba54b6e2a0aa0f15f58a1e81f7fef1cdd7)
    Source code(tar.gz)
    Source code(zip)
    d2l-en-0.17.1-full-pytorch.pdf(27.00 MB)
    d2l-en-0.17.1-full.pdf(27.35 MB)
  • v0.17.0(Jul 26, 2021)

    Dive into Deep Learning is now available on arXiv!

    Framework Adaptation

    We have added TensorFlow implementations up to Chapter 11 (Optimization Algorithms).

    Towards v1.0

    The following chapters have been significantly improved for v1.0:

    • Optimization (the first 4 sections)
    • Computational Performance
    • Computer Vision
    • Natural Language Processing: Pretraining
    • Natural Language Processing: Applications

    Finalized chapters are being translated into Chinese (d2l-zh v2).

    Other Improvements

    • Add BLEU uniform weights from the original paper
    • Revise the normalization trick in LogSumExp (see the identity after this list)
    • Revise data standardization
    • Prove convexity using second derivatives for one-dimensional and multi-dimensional cases
    • Improve d2l.train_2d function
    • Improve convergence analysis of Newton's method
    • Improve SGD convergence analysis for convex objectives
    • Improve convergence analysis for convex objectives
    • Reorganize comparisons of network partitioning, layer-wise partitioning, and data parallelism
    • Improve d2l.box_iou function
    • Improve the "Labeling Classes and Offsets" subsection
    • Add discussions of issues of non-maximum suppression
    • Reorganize multiscale anchor boxes and multiscale detection
    • Highlight layerwise representations via deep nets in multiscale object detection
    • Connect SSD downsampling blocks to VGG blocks
    • Refer to YOLO and a recent survey on object detection
    • Fix legend issues in Kaggle CIFAR-10 and ImageNet Dogs
    • Improve performance on the Kaggle small-scale CIFAR-10 dataset
    • Improve performance on the Kaggle small-scale ImageNet Dog dataset
    • Improve the function to build the mapping from RGB to class indices for VOC labels
    • Revise motivations for transposed convolution
    • Rewrite basic transposed convolution operation
    • Add relations between transposed convolution and regular convolution implementations
    • Improve explanations of the pretrained backbone for the fully convolutional network
    • Improve the output synthesized image of style transfer
    • Add d2l.show_list_len_pair_hist
    • Fix d2l.get_negatives
    • Improve efficiency of d2l.Vocab
    • Exclude unknown tokens when training word embeddings
    • Add self-supervised learning
    • Add discussions of self-supervised learning in NLP
    • Revise the notation table
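
    For reference, the LogSumExp normalization trick mentioned above is the standard identity, with a chosen as the maximum so that no exponential can overflow:

    $$\log \sum_i \exp(x_i) = a + \log \sum_i \exp(x_i - a), \qquad a = \max_i x_i.$$
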
    Source code(tar.gz)
    Source code(zip)
    d2l-en-0.17.0-full-pytorch.pdf(26.97 MB)
    d2l-en-0.17.0-full.pdf(27.35 MB)
  • v0.16.0(Jan 6, 2021)

    Brand-New Attention Chapter

    We have added the brand-new chapter Attention Mechanisms:

    • Attention Cues

      • Attention Cues in Biology
      • Queries, Keys, and Values
      • Visualization of Attention
    • Attention Pooling: Nadaraya-Watson Kernel Regression

      • Generating the Dataset
      • Average Pooling
      • Nonparametric Attention Pooling
      • Parametric Attention Pooling
    • Attention Scoring Functions

      • Masked Softmax Operation
      • Additive Attention
      • Scaled Dot-Product Attention
    • Bahdanau Attention

      • Model
      • Defining the Decoder with Attention
      • Training
    • Multi-Head Attention

      • Model
      • Implementation
    • Self-Attention and Positional Encoding

      • Self-Attention
      • Comparing CNNs, RNNs, and Self-Attention
      • Positional Encoding
    • Transformer

      • Model
      • Positionwise Feed-Forward Networks
      • Residual Connection and Layer Normalization
      • Encoder
      • Decoder
      • Training

    PyTorch Adaptation Completed

    We have completed PyTorch implementations for Vol. 1 (Chapters 1-15).

    Towards v1.0

    The following chapters have been significantly improved for v1.0:

    • Introduction
    • Modern Recurrent Neural Networks

    Chinese Translation

    The following chapters have been translated into Chinese (d2l-zh v2 Git repo, Web preview):

    • Introduction
    • Preliminaries
    • Linear Neural Networks
    • Multilayer Perceptrons
    • Deep Learning Computation
    • Convolutional Neural Networks
    • Modern Convolutional Neural Networks

    Turkish Translation

    The community is translating the book into Turkish (d2l-tr Git repo, Web preview). The first draft of Chapters 1-7 is complete.

    Source code(tar.gz)
    Source code(zip)
    d2l-en-0.16.0-full.pdf(27.60 MB)
  • v0.15.0(Oct 23, 2020)

    Framework Adaptation

    We have added PyTorch implementations up to Chapter 11 (Optimization Algorithms). Chapters 1-7 and Chapter 11 have also been adapted to TensorFlow.

    Towards v1.0

    The following chapters have been significantly improved for v1.0:

    • Linear Neural Networks
    • Multilayer Perceptrons
    • Deep Learning Computation
    • Convolutional Neural Networks
    • Modern Convolutional Neural Networks
    • Recurrent Neural Networks

    Finalized chapters are being translated into Chinese (d2l-zh v2).

    Other Improvements

    • Fixed issues of not showing all the equation numbers in the HTML and PDF
    • Consistently used f-string
    • Revised overfitting experiments
    • Fixed implementation errors for weight decay experiments
    • Improved layer index style
    • Revised "breaking the symmetry"
    • Revised descriptions of covariate and label shift
    • Fixed mathematical errors in covariate shift correction
    • Added true risk, empirical risk, and (weighted) empirical risk minimization
    • Improved variable naming style for matrices and tensors
    • Improved consistency of mathematical notation for tensors of order two or higher
    • Improved mathematical descriptions of convolution
    • Revised descriptions of cross-correlation
    • Added feature maps and receptive fields
    • Revised mathematical descriptions of batch normalization
    • Added more details to Markov models
    • Fixed implementations of k-step-ahead predictions in sequence modeling
    • Fixed mathematical descriptions in language modeling
    • Improved the d2l.Vocab API
    • Fixed mathematical descriptions and figure illustrations for deep RNNs
    • Added BLEU
    • Improved machine translation application results
    • Improved the animation plot function in all the training loops
    Source code(tar.gz)
    Source code(zip)
    d2l-en-0.15.0-full.pdf(26.11 MB)
  • v0.14.0(Jul 8, 2020)

    Highlights

    We have added both PyTorch and TensorFlow implementations up to Chapter 7 (Modern CNNs).

    Improvements

    • We updated the text to be framework neutral; for example, we now say "tensor" instead of "ndarray".
    • Readers can click the tabs in the HTML version to switch between frameworks; both the Colab button and the discussion thread update accordingly.
    • We changed the release process: d2l.ai now hosts the latest release (i.e., the release branch) instead of the contents of the master branch. We also unified the version numbers of the text and the d2l package, which is why we jumped from v0.8 to v0.14.0.
    • The notebook zip contains three folders, mxnet, pytorch, and tensorflow (though we only build the PDF for mxnet yet).
    Source code(tar.gz)
    Source code(zip)
    d2l-en-0.14.0-full.pdf(30.77 MB)
    notebooks-0.14.0.zip(105.41 MB)
  • v0.8.0(May 30, 2020)

    Highlights

    D2L is now runnable on Amazon SageMaker and Google Colab.

    New Contents

    The following chapters are re-organized:

    • Natural Language Processing: Pretraining
    • Natural Language Processing: Applications

    The following sections are added:

    • Subword Embedding (Byte-pair encoding)
    • Bidirectional Encoder Representations from Transformers (BERT)
    • The Dataset for Pretraining BERT
    • Pretraining BERT
    • Natural Language Inference and the Dataset
    • Natural Language Inference: Using Attention
    • Fine-Tuning BERT for Sequence-Level and Token-Level Applications
    • Natural Language Inference: Fine-Tuning BERT

    Improvements

    There have been many light revisions and improvements throughout the book.

    Source code(tar.gz)
    Source code(zip)
    d2l-en-0.8.0-full.pdf(30.92 MB)
  • v0.7.0(Dec 18, 2019)

    Highlights

    • D2L is now based on the NumPy interface. All the code samples have been rewritten.

    New Contents

    • Recommender Systems

      • Overview of Recommender Systems
      • The MovieLens Dataset
      • Matrix Factorization
      • AutoRec: Rating Prediction with Autoencoders
      • Personalized Ranking for Recommender Systems
      • Neural Collaborative Filtering for Personalized Ranking
      • Sequence-Aware Recommender Systems
      • Feature-Rich Recommender Systems
      • Factorization Machines
      • Deep Factorization Machines
    • Appendix: Mathematics for Deep Learning

      • Geometry and Linear Algebraic Operations
      • Eigendecompositions
      • Single Variable Calculus
      • Multivariable Calculus
      • Integral Calculus
      • Random Variables
      • Maximum Likelihood
      • Distributions
      • Naive Bayes
      • Statistics
      • Information Theory
    • Attention Mechanisms

      • Attention Mechanism
      • Sequence to Sequence with Attention Mechanism
      • Transformer
    • Generative Adversarial Networks

      • Generative Adversarial Networks
      • Deep Convolutional Generative Adversarial Networks
    • Preliminaries

      • Data Preprocessing
      • Calculus

    Improvements

    • The Preliminaries chapter is improved.
    • More theoretical analysis is added to the Optimization chapter.

    Preview Version

    Hard copies of a D2L preview version based on this release (excluding the Recommender Systems and Generative Adversarial Networks chapters) were distributed at AWS re:Invent 2019 and NeurIPS 2019.

    Source code(tar.gz)
    Source code(zip)
    d2l-en-0.7.0-full.pdf(28.98 MB)
  • v0.6.0(Apr 11, 2019)

    Change of Contents

    We heavily revised the following chapters, especially while teaching STAT 157 at UC Berkeley.

    • Preface
    • Installation
    • Introduction
    • The Preliminaries: A Crashcourse
    • Linear Neural Networks
    • Multilayer Perceptrons
    • Recurrent Neural Networks

    The Community Is Translating D2L into Korean and Japanese

    d2l-ko in Korean (website: ko.d2l.ai) joins d2l.ai! Thanks to Muhyun Kim, Kyoungsu Lee, Ji hye Seo, Jiyang Kang, and many other contributors!

    d2l-ja in Japanese (website: ja.d2l.ai) joins d2l.ai! Thanks to Masaki Samejima!

    Thanks to Our Contributors

    @alxnorden, @avinashingit, @bowen0701, @brettkoonce, Chaitanya Prakash Bapat, @cryptonaut, Davide Fiocco, @edgarroman, @gkutiel, John Mitro, Liang Pu, Rahul Agarwal, @mohamed-ali, @mstewart141, Mike Müller, @NRauschmayr, @Prakhar Srivastav, @sad-, @sfermigier, Sheng Zha, @sundeepteki, @topecongiro, @tpdi, @vermicelli, Vishaal Kapoor, @vishwesh5, @YaYaB, Yuhong Chen, Evgeniy Smirnov, @lgov, Simon Corston-Oliver, @IgorDzreyev, @trungha-ngx, @pmuens, @alukovenko, @senorcinco, @vfdev-5, @dsweet, Mohammad Mahdi Rahimi, Abhishek Gupta, @uwsd, @DomKM, Lisa Oakley, @vfdev-5, @bowen0701, @arush15june, @prasanth5reddy.

    Source code(tar.gz)
    Source code(zip)
    d2l-en-v0.6.0.pdf(23.13 MB)
  • v0.5.0(Jan 25, 2019)

    Contents

    • Translated contents from https://github.com/d2l-ai/d2l-zh, including the following chapters

      • Introduction
      • A Taste of Deep Learning
      • Deep Learning Basics
      • Deep Learning Computation
      • Convolutional Neural Networks
      • Recurrent Neural Networks
      • Optimization Algorithms
      • Computational Performance
      • Computer Vision
      • Natural Language Processing
      • Appendix
    • Added new contents in the following chapters

      • Introduction
      • A Taste of Deep Learning
      • Deep Learning Basics
      • Deep Learning Computation
      • Convolutional Neural Networks

    Style

    • Improved HTML styles
    • Improved PDF styles

    Chinese Version

    v1.0.0-rc0 is released: https://github.com/d2l-ai/d2l-zh/releases/tag/v1.0.0-rc0. The physical book will be published soon.

    Thanks to Our Contributors

    alxnorden, avinashingit, bowen0701, brettkoonce, Chaitanya Prakash Bapat, cryptonaut, Davide Fiocco, edgarroman, gkutiel, John Mitro, Liang Pu, Rahul Agarwal, mohamed-ali, mstewart141, Mike Müller, NRauschmayr, Prakhar Srivastav, sad-, sfermigier, Sheng Zha, sundeepteki, topecongiro, tpdi, vermicelli, Vishaal Kapoor, vishwesh5, YaYaB

    Source code(tar.gz)
    Source code(zip)
    d2l-en-v0.5.0.pdf(22.89 MB)