C++ Implementation of PyTorch Tutorials for Everyone

Overview

Build status by platform (LibTorch 1.9.0):
  macOS   (clang 10.0, 11.0, 12.0)
  Linux   (gcc 8, 9, 10, 11)
  Windows (msvc 2017, 2019)

Table of Contents

This repository provides tutorial code in C++ for deep learning researchers to learn PyTorch (covered in Sections 1 to 3 below).
Python Tutorial: https://github.com/yunjey/pytorch-tutorial

1. Basics

2. Intermediate

3. Advanced

4. Interactive Tutorials

5. Other Popular Tutorials

Getting Started

Requirements

  1. C++
  2. CMake (minimum version 3.14)
  3. LibTorch v1.9.0
  4. Conda

For Interactive Tutorials

Note: The interactive tutorials currently run on the LibTorch nightly version,
so some of them may break when used with the nightly build.

conda create --name pytorch-cpp
conda activate pytorch-cpp
conda install xeus-cling notebook -c conda-forge

Clone, build and run tutorials

In Google Colab

Open In Colab

On Local Machine

git clone https://github.com/prabhuomkar/pytorch-cpp.git
cd pytorch-cpp

Generate build system

cmake -B build #<options>

Note for Windows users:
LibTorch only supports 64-bit Windows, so an x64 generator needs to be specified. For Visual Studio this can be done by appending -A x64 to the above command.

Some useful options:

-D CUDA_V=(|10.2|11.1|none)   (default: none)
  Download LibTorch for the given CUDA version (none = download the CPU version).
-D DOWNLOAD_DATASETS=(OFF|ON)   (default: ON)
  Download the required datasets during the build (only if they do not already exist in pytorch-cpp/data).
-D CREATE_SCRIPTMODULES=(OFF|ON)   (default: OFF)
  Create all required scriptmodule files for pretrained models / weights during the build. Requires an installed Python 3 with pytorch and torchvision.
-D CMAKE_PREFIX_PATH=path/to/libtorch/share/cmake/Torch   (default: <empty>)
  Skip the download of LibTorch and use your own local version (see Requirements) instead.
-D CMAKE_BUILD_TYPE=(Release|Debug)   (default: <empty>, or Release when downloading LibTorch on Windows)
  Set the build type (Release = compile with optimizations).
Example (Linux)

Aim:
  • Use an existing Python, PyTorch (see Requirements) and torchvision installation.
  • Download all datasets and create all necessary scriptmodule files.

Command:
cmake -B build \
  -D CMAKE_BUILD_TYPE=Release \
  -D CMAKE_PREFIX_PATH=/path/to/libtorch/share/cmake/Torch \
  -D CREATE_SCRIPTMODULES=ON
Example (Windows)

Aim:
  • Automatically download LibTorch for CUDA 11.1 and all necessary datasets.
  • Do not create scriptmodule files.

Command:
cmake -B build \
  -A x64 \
  -D CUDA_V=11.1

Build

Note for Windows (Visual Studio) users:
The CMake script downloads the Release version of LibTorch, so --config Release has to be appended to the build command.

How dataset download and scriptmodule creation work:

  • If DOWNLOAD_DATASETS is ON, the datasets required by the tutorials you choose to build will be downloaded to pytorch-cpp/data (if they do not already exist there).
  • If CREATE_SCRIPTMODULES is ON, the scriptmodule files for the pretrained models / weights required by the tutorials you choose to build will be created in the model folder of the respective tutorial's source folder (if they do not already exist). A short sketch of how such a file is consumed follows below.
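
For reference, scriptmodule files of this kind are loaded in C++ through the TorchScript API (torch::jit::load). The following is only a minimal sketch; the file name and input shape are hypothetical and not taken from a specific tutorial:

#include <torch/script.h>
#include <torch/torch.h>

#include <iostream>
#include <vector>

int main() {
    // Load a scriptmodule created during the build (file name is hypothetical).
    torch::jit::script::Module module = torch::jit::load("model/example_scriptmodule.pt");
    module.eval();

    // Run it on a dummy input; the shape is illustrative only.
    std::vector<torch::jit::IValue> inputs{torch::rand({1, 3, 224, 224})};
    torch::Tensor output = module.forward(inputs).toTensor();
    std::cout << output.sizes() << std::endl;
    return 0;
}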

All tutorials

To build all tutorials use

cmake --build build

All tutorials in a category

You can choose to only build tutorials in one of the categories basics, intermediate, advanced or popular. For example, if you are only interested in the basics tutorials:

cmake --build build --target basics
# In general: cmake --build build --target {category}

Single tutorial

You can also choose to only build a single tutorial. For example to build the language model tutorial only:

cmake --build build --target language-model
# In general: cmake --build build --target {tutorial-name}

Note:
The target argument is the tutorial's folder name with all underscores replaced by hyphens.

Tip for users of CMake version >= 3.15:
You can specify several targets separated by spaces, for example:

cmake --build build --target language-model image-captioning

Run Tutorials

  1. (IMPORTANT!) First change into the tutorial's directory within build/tutorials. For example, assuming you are in the pytorch-cpp directory and want to change to the pytorch basics tutorial folder:
    cd build/tutorials/basics/pytorch_basics
    # In general: cd build/tutorials/{basics|intermediate|advanced|popular/blitz}/{tutorial_name}
  2. Run the executable. Note that the executable's name is the tutorial's folder name with all underscores replaced by hyphens (e.g. tutorial folder pytorch_basics -> executable pytorch-basics, or pytorch-basics.exe on Windows). For example, to run the pytorch basics tutorial:

    Linux/Mac
    ./pytorch-basics
    # In general: ./{tutorial-name}
    Windows
    .\pytorch-basics.exe
    # In general: .\{tutorial-name}.exe

Using Docker

Find the latest and previous version images on Docker Hub.

You can build and run the tutorials (on CPU) in a Docker container using the provided Dockerfile and docker-compose.yml files:

  1. From the root directory of the cloned repo build the image:
    docker-compose build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g)

    Note:
    When you run the Docker container, the host repo directory is mounted as a volume inside the container to cache build artifacts and downloaded dependencies, so nothing has to be rebuilt or redownloaded when a container is restarted. To get correct file permissions, provide your user and group IDs as build arguments when building the image on Linux.

  2. Now start the container and build the tutorials using:
    docker-compose run --rm pytorch-cpp
    This fetches all necessary dependencies and builds all tutorials. After the build is done, by default the container starts bash in interactive mode in the build/tutorials folder.
    As with the local build, you can choose to only build tutorials of a category (basics, intermediate, advanced, popular):
    docker-compose run --rm pytorch-cpp {category}
    In this case the container is started in the chosen category's base build directory.
    Alternatively, you can run a tutorial directly by passing its name as an additional argument to the run command, for example:
    docker-compose run --rm pytorch-cpp pytorch-basics
    # In general: docker-compose run --rm pytorch-cpp {tutorial-name} 
    This will - if necessary - build the pytorch-basics tutorial and then start the executable in a container.

License

This repository is licensed under MIT as given in LICENSE.

Comments
  • [bug] Build Problem on Raspberry Pi

    Problem

    • I downloaded the files on a raspberrypi
    • unpacked the files
    • run: cmake -B build
    • run: cmake --build build
    • Console Text:

    Scanning dependencies of target pytorch-cpp
    [  2%] Building CXX object CMakeFiles/pytorch-cpp.dir/main.cpp.o
    [  4%] Linking CXX executable pytorch-cpp
    /home/pi/Desktop/LibTorch/libtorch/lib/libtorch.so: file not recognized: file format not recognized
    collect2: error: ld returned 1 exit status
    make[2]: *** [CMakeFiles/pytorch-cpp.dir/build.make:87: pytorch-cpp] Error 1
    make[1]: *** [CMakeFiles/Makefile2:327: CMakeFiles/pytorch-cpp.dir/all] Error 2
    make: *** [Makefile:84: all] Error 2

    • The problem is that libtorch.so has an unrecognized file format (this is with a manually downloaded LibTorch, but it behaves the same when I let CMake download LibTorch).

    Please help. I think this is a problem with my specific system, but I have no idea what the issue is.

    Desktop:

    • OS: Raspbian
    • Libtorch Version [1.4]
    bug help wanted 
    opened by Nivolsgel 16
  • A Question

    When I train the model on the CPU, Loss.backward() gets slower and slower as the number of iterations increases. However, the memory usage in VS2017 stays stable.

    for (int i = 0; i < num_epochs; i++) {
        int batch_index = 0;

        for (auto& batch : *dataloader) {
            auto data = batch.data.to(device);
            auto target = batch.target.to(device);

            auto output = ae->forward(target);

            Tensor LossToTal;
            // Replacing this with torch::nn::functional::mse_loss() behaves the same
            s->caclSSIM(data, output, LossToTal);

            auto Loss = 1.0f - LossToTal.mean();

            sts = clock();
            optimizer.zero_grad();
            Loss.backward();
            optimizer.step();
            endt = clock();
            v1 = endt - sts;

            if ((batch_index + 1) % 2 == 0) {
                std::cout << "Epoch [" << i << "/" << num_epochs << "], Step [" << batch_index + 1 << "/"
                          << num_samples / batch_size << "], MeanLoss " << Loss.item<double>()
                          << " CostTime: " << v1 << std::endl;
            }

            batch_index++;
        }
    }
    
    opened by MavenFeng 7
  • [feature] Add support for juce::image to torch::tensor and back

    Is your feature request related to a problem? Please describe. Hello, I use LibTorch to process images. OpenCV is very large, so I try to avoid it and use the JUCE C++ framework instead.

    Describe the solution you'd like I am not able to convert a torch::Tensor to a juce::Image. I want to read a video file, split it into images on the fly, apply a deep learning algorithm to each image in LibTorch, and then turn the result back into an image that JUCE can understand. Does anyone have an example of how to do this?

    Describe alternatives you've considered Someone has converted OpenCV's cv::Mat to juce::Image: https://forum.juce.com/t/opencvs-cv-mat-to-juces-image/33518

    Additional context I am trying to integrate the code with a JUCE C++ GUI.

    See also: https://discuss.pytorch.org/t/avoid-use-opencv-for-image-convert-libtorch-torch-tensor-to-juce-image-and-like-back/94792
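
    A rough, unverified sketch of the tensor-to-image direction (the JUCE pixel-access calls and the assumed {3, height, width} uint8 layout are assumptions, not code from this repository):

    #include <torch/torch.h>
    #include <juce_graphics/juce_graphics.h>

    // Copy an 8-bit RGB CHW tensor into a juce::Image via juce::Image::BitmapData.
    juce::Image tensorToJuceImage(torch::Tensor tensor) {
        tensor = tensor.to(torch::kCPU).to(torch::kUInt8).contiguous();  // expected shape {3, H, W}
        const auto height = static_cast<int>(tensor.size(1));
        const auto width  = static_cast<int>(tensor.size(2));

        juce::Image image(juce::Image::RGB, width, height, /*clearImage=*/true);
        juce::Image::BitmapData pixels(image, juce::Image::BitmapData::writeOnly);

        auto t = tensor.accessor<uint8_t, 3>();
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // Channel 0/1/2 = R/G/B; adjust if the tensor uses a different channel order.
                pixels.setPixelColour(x, y, juce::Colour(t[0][y][x], t[1][y][x], t[2][y][x]));
            }
        }
        return image;
    }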

    enhancement 
    opened by ghost 6
  • The time consuming bug of '.to(at::kCPU)'  [bug]

    This is a great project! Recently, I have found that '.to(at::kCPU)' from CUDA takes a very long time when I use my model for inference on the GPU, like this:

      vector<torch::jit::IValue> inputs = {input_data.input_ids_pt_,
                                           input_data.attention_mask_pt_,
                                           input_data.token_type_ids_pt_};                              
      auto pred_res = (static_cast<torch::jit::script::Module*>(bert_model_))->forward(inputs);      // About 12 milliseconds
      auto logits = pred_res.toTensor().to(at::kCPU);        // About 80 milliseconds
    

    So what should I do to reduce this time? Looking forward to your reply, thanks!
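
    A plausible explanation (an assumption, not verified against this model): CUDA kernels run asynchronously, so forward() returns before the GPU has finished, and the later .to(at::kCPU) copy blocks until it has, so most of the 80 ms is likely the remaining forward computation rather than the copy itself. A sketch of how to attribute the time correctly, assuming torch::cuda::synchronize() is available in the LibTorch version in use:

    #include <chrono>
    #include <iostream>
    #include <torch/torch.h>

    // Time a GPU forward pass and the device-to-host copy separately.
    template <typename Forward>
    void time_inference(Forward&& forward_fn) {
        using clock = std::chrono::steady_clock;

        torch::cuda::synchronize();
        const auto t0 = clock::now();
        torch::Tensor pred = forward_fn();                // kernels are queued asynchronously
        torch::cuda::synchronize();                       // wait for the forward kernels themselves
        const auto t1 = clock::now();
        torch::Tensor cpu_result = pred.to(torch::kCPU);  // now mostly the device-to-host copy
        const auto t2 = clock::now();

        auto ms = [](auto d) { return std::chrono::duration_cast<std::chrono::milliseconds>(d).count(); };
        std::cout << "forward: " << ms(t1 - t0) << " ms, copy to CPU: " << ms(t2 - t1) << " ms\n";
    }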

    bug help wanted 
    opened by wulaoshi 4
  • Error in loading MNIST dataset, convolutional_neural_network

    Hi, I am unable to read the images in the MNIST dataset. Below I have listed the error that I am receiving. Can you please help me with it? Thanks.

    ~/Projects/cnn/build$ ./cnn

    Convolutional Neural Network

    CUDA available. Training on GPU.
    terminate called after throwing an instance of 'c10::Error'
      what():  Error opening images file at ../../../../data/mnist/train-images-idx3-ubyte
    Exception raised from read_images at ../torch/csrc/api/src/data/datasets/mnist.cpp:67 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f9f8f93e0db in /home/paras/Libraries/libtorch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7f9f8f939d2e in /home/paras/Libraries/libtorch/lib/libc10.so)
    frame #2: <unknown function> + 0x43847a2 (0x7f9f0b77e7a2 in /home/paras/Libraries/libtorch/lib/libtorch_cpu.so)
    frame #3: torch::data::datasets::MNIST::MNIST(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, torch::data::datasets::MNIST::Mode) + 0x46 (0x7f9f0b77f846 in /home/paras/Libraries/libtorch/lib/libtorch_cpu.so)
    frame #4: main + 0x121 (0x55670d49790c in ./cnn)
    frame #5: __libc_start_main + 0xf3 (0x7f9ec30140b3 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #6: _start + 0x2e (0x55670d49752e in ./cnn)

    Aborted (core dumped)

    opened by Paras-97 3
  • Integration of torch::tensor with Siv3D Image / texture

    Is your feature request related to a problem? Please describe. I am the author of Siv3DTorch (https://github.com/QuantScientist/Siv3DTorch) which integrates OpenSiv3d with Libtorch C++. At the moment Siv3D does not support CMake and therefore all integration efforts are on VC 19. One main burning issue that I have is reading and writing Images/video frames from and to Siv3D without using OpenCV.

    Describe the solution you'd like I would love to have an image conversion method between the two frameworks, either using stb_image or libpng (used by Siv3D).

    Additional context The whole scenario is described here: https://github.com/Siv3D/OpenSiv3D/issues/534

    The source code is here: https://github.com/QuantScientist/Siv3DTorch/blob/master/src/loadmodel003.cpp

    Many thanks for your help,

    enhancement discussion 
    opened by ghost 3
  • [bug] Build Problem on Ubuntu

    * https://discuss.pytorch.org/t/libtorch-for-raspberry-pi/63107
      This actually solved the build problem.
    

    But now new problems are showing up:

    cmake --build build
    Scanning dependencies of target pytorch-cpp
    [  2%] Building CXX object CMakeFiles/pytorch-cpp.dir/main.cpp.o
    [  4%] Linking CXX executable pytorch-cpp
    [  4%] Built target pytorch-cpp
    Scanning dependencies of target feedforward-neural-network
    [  7%] Building CXX object tutorials/basics/feedforward_neural_network/CMakeFiles/feedforward-neural-network.dir/src/main.cpp.o
    /home/pi/Desktop/PytorchC++TestInternet/pytorch-cpp-master/tutorials/basics/feedforward_neural_network/src/main.cpp: In function 'int main()':
    /home/pi/Desktop/PytorchC++TestInternet/pytorch-cpp-master/tutorials/basics/feedforward_neural_network/src/main.cpp:71:48: error: 'cross_entropy' is not a member of 'torch::nn::functional'
        auto loss = torch::nn::functional::cross_entropy(output, target);
    /home/pi/Desktop/PytorchC++TestInternet/pytorch-cpp-master/tutorials/basics/feedforward_neural_network/src/main.cpp:74:39: error: expected primary-expression before 'double'
        running_loss += loss.item<double>() * data.size(0);
    /home/pi/Desktop/PytorchC++TestInternet/pytorch-cpp-master/tutorials/basics/feedforward_neural_network/src/main.cpp:111:44: error: 'cross_entropy' is not a member of 'torch::nn::functional'
        auto loss = torch::nn::functional::cross_entropy(output, target);
    /home/pi/Desktop/PytorchC++TestInternet/pytorch-cpp-master/tutorials/basics/feedforward_neural_network/src/main.cpp:113:35: error: expected primary-expression before 'double'
        running_loss += loss.item<double>() * data.size(0);
    make[2]: *** [tutorials/basics/feedforward_neural_network/CMakeFiles/feedforward-neural-network.dir/build.make:63: tutorials/basics/feedforward_neural_network/CMakeFiles/feedforward-neural-network.dir/src/main.cpp.o] Error 1
    make[1]: *** [CMakeFiles/Makefile2:354: tutorials/basics/feedforward_neural_network/CMakeFiles/feedforward-neural-network.dir/all] Error 2
    make: *** [Makefile:84: all] Error 2

    Is this a problem with LibTorch 1.3 rather than the latest version? Do you know?

    I have a similar issue while trying to build convolutional_neural_network from the repo.

    cmake --build . --config Release
    Scanning dependencies of target freespace_torch
    [ 33%] Building CXX object CMakeFiles/freespace_torch.dir/src/convnet.cpp.o
    In file included from /home/fugurcal/freespace_torch/src/convnet.cpp:2:0:
    /home/fugurcal/freespace_torch/include/convnet.h:13:20: error: 'BatchNorm2d' is not a member of 'torch::nn'
        torch::nn::BatchNorm2d(16),
    /home/fugurcal/freespace_torch/include/convnet.h:13:20: note: suggested alternative: 'BatchNorm'
    /home/fugurcal/freespace_torch/include/convnet.h:14:20: error: 'ReLU' is not a member of 'torch::nn'
        torch::nn::ReLU(),
    /home/fugurcal/freespace_torch/include/convnet.h:15:20: error: 'MaxPool2d' is not a member of 'torch::nn'
        torch::nn::MaxPool2d(torch::nn::MaxPool2dOptions(2).stride(2))
    /home/fugurcal/freespace_torch/include/convnet.h:15:41: error: 'MaxPool2dOptions' is not a member of 'torch::nn'
    /home/fugurcal/freespace_torch/include/convnet.h:15:41: note: suggested alternative: 'Conv2dOptions'
    /home/fugurcal/freespace_torch/include/convnet.h:16:5: error: could not convert '{torch::nn::Conv2d((* &(& torch::nn::ConvOptions<2>(1, 16, torch::ExpandingArray<2, long int>(5)).torch::nn::ConvOptions<2>::stride(torch::ExpandingArray<2, long int>(1)))->torch::nn::ConvOptions<2>::padding(torch::ExpandingArray<2, long int>(2)))), <expression error>, <expression error>, <expression error>}' from '<brace-enclosed initializer list>' to 'torch::nn::Sequential'
    /home/fugurcal/freespace_torch/include/convnet.h:20:20: error: 'BatchNorm2d' is not a member of 'torch::nn'
        torch::nn::BatchNorm2d(32),
    /home/fugurcal/freespace_torch/include/convnet.h:20:20: note: suggested alternative: 'BatchNorm'
    /home/fugurcal/freespace_torch/include/convnet.h:21:20: error: 'ReLU' is not a member of 'torch::nn'
        torch::nn::ReLU(),
    /home/fugurcal/freespace_torch/include/convnet.h:22:20: error: 'MaxPool2d' is not a member of 'torch::nn'
        torch::nn::MaxPool2d(torch::nn::MaxPool2dOptions(2).stride(2))
    /home/fugurcal/freespace_torch/include/convnet.h:22:41: error: 'MaxPool2dOptions' is not a member of 'torch::nn'
    /home/fugurcal/freespace_torch/include/convnet.h:22:41: note: suggested alternative: 'Conv2dOptions'
    /home/fugurcal/freespace_torch/include/convnet.h:23:5: error: could not convert '{torch::nn::Conv2d((* &(& torch::nn::ConvOptions<2>(16, 32, torch::ExpandingArray<2, long int>(5)).torch::nn::ConvOptions<2>::stride(torch::ExpandingArray<2, long int>(1)))->torch::nn::ConvOptions<2>::padding(torch::ExpandingArray<2, long int>(2)))), <expression error>, <expression error>, <expression error>}' from '<brace-enclosed initializer list>' to 'torch::nn::Sequential'
    CMakeFiles/freespace_torch.dir/build.make:62: recipe for target 'CMakeFiles/freespace_torch.dir/src/convnet.cpp.o' failed
    make[2]: *** [CMakeFiles/freespace_torch.dir/src/convnet.cpp.o] Error 1
    CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/freespace_torch.dir/all' failed
    make[1]: *** [CMakeFiles/freespace_torch.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    I am using torch 1.5. Any idea how to solve this problem?

    Originally posted by @FarukUgurcali in https://github.com/prabhuomkar/pytorch-cpp/issues/37#issuecomment-636452169
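
    Both logs point in the same direction (an assumption, not confirmed in this thread): identifiers such as torch::nn::functional::cross_entropy, torch::nn::BatchNorm2d, torch::nn::ReLU and torch::nn::MaxPool2d were only added to the C++ frontend in later LibTorch releases than the one being linked here; this repository targets LibTorch 1.9.0 (see Requirements). Where upgrading is not an option, the same loss can be written with primitives that exist in older releases, roughly like this sketch:

    #include <torch/torch.h>

    // Older-API equivalent of torch::nn::functional::cross_entropy,
    // built from log_softmax + nll_loss, which are available in earlier LibTorch releases.
    torch::Tensor cross_entropy_compat(const torch::Tensor& logits, const torch::Tensor& target) {
        return torch::nll_loss(torch::log_softmax(logits, /*dim=*/1), target);
    }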

    opened by mfl28 3
  • [feature] Requesting pretrained weights

    Is your feature request related to a problem? Please describe.

    I would like to train a CNN classifier on my custom data using widely used models such as the ResNet series. I have found it useful to initialize the model weights with ImageNet pretrained weights, and this is easy to do with the torch::load API when my dataset has 3 image channels (the same as ImageNet), in which case no change needs to be made to the conv1 layer. It is a different situation when I try to train on grayscale images, since the conv1 weights expect in_channels=3. In the Python frontend, I believe this can be solved by simply replacing model.conv1 like this:

    model.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=7, stride=2, padding=3, bias=False)
    

    but in the C++ frontend, replacing it does not seem to work:

    model->conv1 = torch::nn::Conv2d(torch::nn::Conv2dOptions(in_channels, 64, 7).stride(2).padding(3).bias(false).dilation(1));
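
    A possible reason (an assumption, not confirmed in this thread): plain assignment only changes the C++ member handle, while the module's registered submodule list (which torch::load, parameters() and the optimizer see) still holds the old conv1. A rough sketch using torch::nn::Module::replace_module, with a hypothetical model type:

    #include <torch/torch.h>

    // Sketch only (not code from this repository): swap the first convolution of a
    // ResNet-style model so it accepts 1-channel (grayscale) input. "ModelHolder" is a
    // hypothetical torch::nn::ModuleHolder type with a public conv1 member, as above.
    template <typename ModelHolder>
    void adapt_to_grayscale(ModelHolder& model) {
        auto new_conv1 = torch::nn::Conv2d(
            torch::nn::Conv2dOptions(/*in_channels=*/1, /*out_channels=*/64, /*kernel_size=*/7)
                .stride(2)
                .padding(3)
                .bias(false));
        model->replace_module("conv1", new_conv1.ptr());  // re-register under the same name
        model->conv1 = new_conv1;                         // keep the handle used in forward() in sync
    }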
    

    Describe the solution you'd like

    Correct the API usage so that the pretrained weights are loaded properly.

    Describe alternatives you've considered

    Maybe a model pretrained on a grayscale image dataset would bypass the problem.

    Additional context

    An exception occurs during the model's forward pass.

    enhancement 
    opened by jianyin2016 3
  • Expression: vector subscript out of range

    In the deep residual network tutorial, in cifar.cpp, the function std::pair<torch::Tensor, torch::Tensor> read_data(const std::string& root, bool train) works well on Linux, but on Windows 10 I get the following error.

    (error screenshot)

    bug help wanted 
    opened by alance123 3
  • Image captioning runs only on CPU [bug]

    Describe the bug Following the steps mentioned in the README.md, I was able to train the image_captioning module successfully, but only on the CPU.

    To Reproduce Steps to reproduce the behavior:

    1. Run the build command:
       cmake -B build \
         -D CMAKE_PREFIX_PATH=/path/to/libtorch/share/cmake/Torch \
         -D CREATE_SCRIPTMODULES=ON

    2. Execute cmake --build build --target image-captioning

    3. Run the image_captioning executable; the model begins to train, but on the CPU.

    Expected behavior Since I have pointed the build at a LibTorch compatible with CUDA 11.7, training should run on the GPU.

    I have cuda 11.7 available

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2022 NVIDIA Corporation
    Built on Wed_Jun__8_16:49:14_PDT_2022
    Cuda compilation tools, release 11.7, V11.7.99
    Build cuda_11.7.r11.7/compiler.31442593_0
    
    

    Desktop (please complete the following information):

    • OS: Ubuntu 22.04
    • Libtorch Version 1.13.0 with cuda 11.7

    To confirm whether there is some issue with my LibTorch, I tried a simple C++ program to print a tensor.

    Output: 0.7159 0.7315 0.1141 0.9851 0.2703 [ CUDAFloatType{1,5} ]

    CMakeLists.txt

    cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
    project(dcgan)
    set(CMAKE_PREFIX_PATH /opt/libtorch/cu11_7/libtorch)
    find_package(Torch REQUIRED)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
    
    add_executable(dcgan main.cpp)
    target_link_libraries(dcgan "${TORCH_LIBRARIES}")
    set_property(TARGET dcgan PROPERTY CXX_STANDARD 14)
    

    When running image_captioning, if I try to manually set the device to torch::kCUDA, I get the exception below.

    Image Captioning
    
    Training on CPU.
    Vocabulary size: 4076
    Training samples: 30000
    Validation samples: 5000
    terminate called after throwing an instance of 'c10::Error'
      what():  PyTorch is not linked with support for cuda devices
    Exception raised from getDeviceGuardImpl at ../c10/core/impl/DeviceGuardImplInterface.h:319 (most recent call first):
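
    For reference, a minimal check (a sketch, not part of the tutorials) that can be compiled and linked against the same LibTorch the tutorial build uses, to confirm whether that particular LibTorch was built with CUDA support:

    #include <iostream>
    #include <torch/torch.h>

    int main() {
        // All three calls come from the LibTorch C++ API (torch/cuda.h).
        std::cout << std::boolalpha
                  << "CUDA available:    " << torch::cuda::is_available() << '\n'
                  << "cuDNN available:   " << torch::cuda::cudnn_is_available() << '\n'
                  << "CUDA device count: " << torch::cuda::device_count() << '\n';
        return 0;
    }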
    
    
    bug help wanted 
    opened by prabeshKhadka94 2
  • Compile failure

    Hello there,

    I'm having this problem:

    [ 21%] Building CXX object tutorials/intermediate/convolutional_neural_network/CMakeFiles/convolutional-neural-network.dir/src/main.cpp.o
    /root/ml/pytorch-cpp/tutorials/intermediate/convolutional_neural_network/src/main.cpp: In function 'int main()':
    /root/ml/pytorch-cpp/tutorials/intermediate/convolutional_neural_network/src/main.cpp:106:12: error: 'InferenceMode' is not a member of 'torch'
        torch::InferenceMode no_grad;

    What could be the cause?

    Thank you!
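
    A likely cause (an assumption based on the error message, not confirmed in this thread): torch::InferenceMode was only introduced in LibTorch 1.9, so an older LibTorch on the include path would not have it. On such versions the older no-grad guard can be used instead, roughly as in this sketch:

    #include <iostream>
    #include <torch/torch.h>

    // Disable gradient tracking during evaluation on LibTorch releases that
    // predate torch::InferenceMode (introduced in 1.9).
    void evaluate(torch::nn::Sequential& model, const torch::Tensor& input) {
        torch::NoGradGuard no_grad;   // older equivalent of torch::InferenceMode
        model->eval();
        auto output = model->forward(input);
        std::cout << output.sizes() << std::endl;
    }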

    opened by baik33 2
  • Add DockerfileCUDA102

    Description of the change

    Description here

    Type Of Change

    • [ ] Bug Fix (non-breaking change that fixes an issue)
    • [x] New Feature
    • [ ] New PyTorch tutorial
    • [ ] Breaking Change (cmake changes, fix or feature that would cause existing functionality to not work as expected)

    Related Issues

    Fix #100

    Development & Code Review

    • [x] cpplint rules pass locally (run cmake -P cpplint.cmake)
    • [ ] CI is passing
    • [ ] Changes have been reviewed by at least one of the maintainers
    opened by pyun-ram 0
  • [feature] Dockerfile to support CUDA-version pytorch-cpp

    Thanks for the nice code! Here is a Dockerfile to support CUDA-version pytorch-cpp. Hope it helps when you want to run the code with GPUs.

    FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
    LABEL maintainer="[email protected]"
    
    # Fix the apt-get error from nvidia-docker
    RUN rm /etc/apt/sources.list.d/cuda.list \
        && rm /etc/apt/sources.list.d/nvidia-ml.list \
        && apt-key del 7fa2af80 \
        && apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub \
        && apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
    
    # Install basics
    RUN apt-get update -y \
        && apt-get install -y apt-utils git curl ca-certificates tree htop wget libssl-dev unzip \
        && rm -rf /var/lib/apt/lists/*
    # Install g++-8 gcc-8
    RUN apt-get update && apt-get install -y gcc-8 g++-8 \
      && update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 60 --slave /usr/bin/g++ g++ /usr/bin/g++-8 \
      && update-alternatives --config gcc \
      && rm -rf /var/lib/apt/lists/*
    # Install cmake
    RUN apt-get purge -y cmake \
        && mkdir /root/temp \
        && cd /root/temp \
        && wget https://github.com/Kitware/CMake/releases/download/v3.23.4/cmake-3.23.4.tar.gz \
        && tar -xzvf cmake-3.23.4.tar.gz \
        && cd cmake-3.23.4 \
        && bash ./bootstrap \
        && make \
        && make install \
        && cmake --version \
        && rm -rf /root/temp \
        && rm -rf /var/lib/apt/lists/*
    # Install libtorch
    RUN cd /root/ \
        && wget https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.12.1%2Bcu102.zip -O libtorch.zip \
        && unzip libtorch.zip
    # Install pytorch-cpp
    RUN cd /root \
        && wget https://github.com/prabhuomkar/pytorch-cpp/archive/refs/tags/v1.12.tar.gz \
        && tar -xzvf v1.12.tar.gz
    RUN cd /root/pytorch-cpp-1.12 \
        && cmake -B build \
        -D CMAKE_BUILD_TYPE=Release \
        -D CMAKE_PREFIX_PATH=/root/libtorch/share/cmake/Torch \
        -D CREATE_SCRIPTMODULES=ON \
        && cmake --build build
    WORKDIR /root
    
    enhancement 
    opened by pyun-ram 2
  • Human detection tutorial please.

    I'm always frustrated when Google spits out only Python and TensorFlow human detection implementations :man_facepalming:

    It would be great if this LibTorch tutorial could show a human detection implementation with high accuracy.

    Alternatives you've considered: OpenPose (super slow, low FPS even on an RTX 3090). I would like to use OpenCL for AMD and ARM Mali GPUs.

    Thanks.

    enhancement 
    opened by rajhlinux 1
  • Tutorial Requests / Roadmap

    Issue to track tutorial requests:

    • Deep Learning with PyTorch: A 60 Minute Blitz - #69
    • Sentence Classification - #79
    • Run in containers with CUDA - #100
    documentation enhancement good first issue question 
    opened by prabhuomkar 0