Simple and ready-to-use tutorials for TensorFlow

Overview

TensorFlow World

To support maintaining and upgrading this project, please kindly consider Sponsoring the project developer.

Any level of support is a great contribution here ❤️

This repository aims to provide simple and ready-to-use tutorials for TensorFlow. The explanations are present in the wiki associated with this repository.

Each tutorial includes source code and associated documentation.

Slack Group

Table of Contents

Motivation

There are different motivations behind this open source project. TensorFlow (as of this writing) is one of the best deep learning frameworks available. The question worth asking is: why has this repository been created when there are already so many other TensorFlow tutorials available on the web?

Why use TensorFlow?

Deep learning is attracting a great deal of interest these days, and there is a crucial need for fast, optimized implementations of its algorithms and architectures. TensorFlow is designed to facilitate exactly that.

A key advantage of TensorFlow is its flexibility in designing highly modular models, which can also be a disadvantage for beginners, since many pieces must be considered together when creating a model.

This difficulty has been eased by the development of high-level APIs such as Keras and Slim, which abstract away much of the machinery used in designing machine learning algorithms.
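
As an illustration of that abstraction, here is a minimal sketch using the Keras API bundled with TensorFlow; this is an assumed example for this README, not code taken from the tutorials:

    # Minimal sketch: a small classifier defined with the high-level Keras API.
    # Illustrative only; this repository's tutorials use the lower-level API.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()  # Graph wiring and weight creation are handled for us.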

Another appealing aspect of TensorFlow is that it can be found almost everywhere these days. Many researchers and developers use it, and its community is growing rapidly. Given the size of that community, most issues can be resolved easily because someone else has usually already run into them.

What's the point of this repository?

Developing an open source project merely for the sake of developing something is not the motivation behind this effort. Considering the large number of tutorials being added to this community, this repository was created to break the jump-in-and-jump-out cycle that afflicts most open source projects. But why, and how?

First of all, what is the point of putting effort into something that most people will never stop by and look at? What is the point of creating something that does not help anyone in the developer and researcher community? Why spend time on something that can easily be forgotten? So how do we try to do it differently? Even at this very moment there are countless tutorials on TensorFlow, covering either model design or the TensorFlow workflow.

Most of them are too complicated or suffer from a lack of documentation. Only a few are concise and well-structured and provide enough insight into the specific models they implement.

The goal of this project is to help the community with structured tutorials and simple, optimized code implementations that provide better insight into how to use TensorFlow quickly and effectively.

It is worth noting that the main goal of this project is to provide well-documented tutorials and less-complicated code!

TensorFlow Installation and Environment Setup

In order to install TensorFlow, please refer to the official TensorFlow installation documentation.

[installation walkthrough animation: _img/mainpage/installation.gif]

Installing inside a virtual environment is recommended, both to prevent package conflicts and to allow the working environment to be customized.
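
After activating the environment and installing TensorFlow, a quick sanity check (a minimal sketch, not part of the tutorials) confirms the installation:

    # Sanity check inside the activated virtual environment.
    import tensorflow as tf
    print(tf.__version__)  # Prints the installed TensorFlow version.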

TensorFlow Tutorials

The tutorials in this repository are partitioned into relevant categories.


Warm-up

# topic Source Code  
1 Start-up Welcome / IPython Documentation
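
The warm-up tutorial above essentially amounts to evaluating a constant in a session. A minimal sketch in that spirit, assuming the TensorFlow 1.x API used throughout this repository, looks like this:

    # "Hello world" in TensorFlow 1.x: define a constant op and evaluate it in a session.
    import tensorflow as tf

    welcome = tf.constant('Welcome to TensorFlow World!')
    with tf.Session() as sess:
        print(sess.run(welcome))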

Basics

# topic Source Code  
2 TensorFlow Basics Basic Math Operations / IPython Documentation
3 TensorFlow Basics TensorFlow Variables / IPython Documentation
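
The following is a condensed sketch of what the two basics tutorials above cover, namely constant math operations and variable initialization in TensorFlow 1.x style; the exact code lives in the linked sources:

    # Basic math operations on constants, plus a variable with explicit initialization.
    import tensorflow as tf

    a = tf.constant(5.0)
    b = tf.constant(3.0)
    add_op = tf.add(a, b)        # 8.0
    mul_op = tf.multiply(a, b)   # 15.0

    weights = tf.Variable(tf.random_normal([2, 3]), name='weights')
    init_op = tf.variables_initializer([weights])  # Initialize only the listed variables.

    with tf.Session() as sess:
        sess.run(init_op)
        print(sess.run([add_op, mul_op]))
        print(sess.run(weights))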

Basic Machine Learning

# topic Source Code  
4 Linear Models Linear Regression / IPython Documentation
5 Predictive Models Logistic Regression / IPython Documentation
6 Support Vector Machines Linear SVM / IPython  
7 Support Vector Machines MultiClass Kernel SVM / IPython  
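
To give a flavor of the linear-models entries above, here is a rough sketch of a TensorFlow 1.x linear regression; it is an illustrative example, not the tutorials' exact code:

    # Minimal linear regression: fit y = W*x + b by gradient descent on a tiny dataset.
    import numpy as np
    import tensorflow as tf

    x_data = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
    y_data = np.array([2.0, 4.0, 6.0, 8.0], dtype=np.float32)

    X = tf.placeholder(tf.float32)
    Y = tf.placeholder(tf.float32)
    W = tf.Variable(0.0, name='weight')
    b = tf.Variable(0.0, name='bias')

    prediction = W * X + b
    loss = tf.reduce_mean(tf.square(Y - prediction))
    train_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(100):
            _, loss_value = sess.run([train_op, loss], feed_dict={X: x_data, Y: y_data})
        print(sess.run([W, b]))  # W should approach 2.0 and b should approach 0.0.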

Neural Networks

# topic Source Code  
8 Multi Layer Perceptron Simple Multi Layer Perceptron / IPython  
9 Convolutional Neural Network Simple Convolutional Neural Networks Documentation
10 Autoencoder Undercomplete Autoencoder Documentation
11 Recurrent Neural Network RNN / IPython  
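
For the neural-network entries, a bare-bones multi-layer perceptron graph in the same TensorFlow 1.x style might look like the sketch below; it is illustrative only, and the linked tutorials are more complete:

    # Bare-bones multi-layer perceptron for 10-class classification (graph definition only).
    import tensorflow as tf

    features = tf.placeholder(tf.float32, shape=[None, 784])
    labels = tf.placeholder(tf.int64, shape=[None])

    hidden = tf.layers.dense(features, 256, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, 10)

    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)
    accuracy = tf.reduce_mean(
        tf.cast(tf.equal(tf.argmax(logits, axis=1), labels), tf.float32))
    # train_op and accuracy would then be evaluated in a session with mini-batches,
    # as in the regression sketch above.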

Some Useful Tutorials

Contributing

When contributing to this repository, please first discuss the change you wish to make with the owners of this repository, via an issue, email, or any other method, before making the change. For typos, please do not create a pull request; instead, report them in the issues section or email the repository owner.

Please note that we have a code of conduct; please follow it in all your interactions with the project.

Pull Request Process

Please consider the following criteria in order to help us review your contribution:

  • A pull request is mainly expected to be a code script suggestion or improvement.
  • A pull request related to non-code-script sections is expected to make a significant difference to the documentation. Otherwise, it should be announced in the issues section.
  • Ensure any install or build dependencies are removed before the end of the layer when doing a build and creating a pull request.
  • Add comments with details of changes to the interface; this includes new environment variables, exposed ports, useful file locations, and container parameters.
  • You may merge the pull request once you have the sign-off of at least one other developer; if you do not have permission to do that, you may ask the owner to merge it for you once you believe all checks have passed.

Final Note

We look forward to your kind feedback. Please help us improve this open source project and make our work better. To contribute, please create a pull request and we will review it promptly. Once again, we appreciate your kind feedback and thorough code reviews.

Acknowledgement

I have put a great deal of effort into this project in the hope of being a small part of the TensorFlow world. However, it would not have been possible without the kind support and help of my friend and colleague Domenick Poster and his valuable advice. He helped me gain a better understanding of TensorFlow, and my special appreciation goes to him.

Comments
  • TensorFlow

    $ git clone https://github.com/TensorFlow-World.git
    Cloning into 'TensorFlow-World'...
    remote: Not Found
    fatal: repository 'https://github.com/TensorFlow-World.git/' not found

    opened by ashu-22 13
  • RE: Policy regarding typos in codebase.

    This issue is regarding your policy on typos in your codebase. Here is the relevant section in your CONTRIBUTING.rst: "For typos, please do not create a pull request. Instead, declare them in issues or email the repository owner."

    I suggest this policy be revised as it creates an extra step for you, the maintainer of this repo. For example, here is your current process:

    1. Contributor finds a typo.
    2. Contributor opens an issue.
    3. Repo owner reads the issue.
    4. Repo owner decides to create a code change to fix the typo and pushes the change.

    Here is the suggested process:

    1. Contributor finds a typo.
    2. Contributor creates a code change to fix the typo and creates a pull request
    3. Repo owner decides to accept the pull request and merges the changes.

    If typos can be discussed within a pull request, I don't see the point of a contributor creating an issue and then the repo owner creating a code change to fix the typo. I suggest using GitHub Issues to discuss lengthy proposals, but typos should be handled directly within a pull request. For example, see the Contributing guide in GitHub's open source guide.

    opened by adyavanapalli 4
  • Look for Python syntax errors or undefined names

    • http://flake8.pycqa.org will find syntax errors and undefined names that can halt your program.
      • --select=E901,E999,F821,F822,F823 focuses the tool on the most critical issues
    • Fxxx codes are here: http://flake8.pycqa.org/en/latest/user/error-codes.html
    • Other codes are here: https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
    • The output is here: https://travis-ci.org/astorfi/TensorFlow-World/builds/272817787

    F821 is really helpful for finding Python 2 / 3 differences but also for typos, copy/paste errors, etc.

    opened by cclauss 4
  • Update README.rst

    So, I cleaned up the grammar / spelling and got to the section about contributing to this repository.

    Based on this - it's definitely going to the top.

    Also. No. Here's your pull request.

    opened by razodactyl 3
  • logits is an undefined name in this context, should it be logits_last?

    Undefined names can raise NameError at runtime.

    https://travis-ci.org/astorfi/TensorFlow-World/jobs/272817788#L623-L626

    https://github.com/astorfi/TensorFlow-World/blob/master/codes/3-neural_networks/multi-layer-perceptron/code/test_classifier.py#L113

    opened by cclauss 2
  • train_op in linear regression

    Is defining train_op anew for each data point and epoch really needed? I'm new to TensorFlow, so I can't tell why this would or wouldn't make sense. For me, the regression seems to work fine (and much faster) if that line is removed.

    opened by mzur 2
  • sudo apt-get install nvidia-current-updates nvidia-settings-updates error

    Hello, just wanted to say this is a great guide, but when I execute sudo apt-get install nvidia-current-updates nvidia-settings-updates it says: E: Unable to locate package nvidia-settings-updates

    Can someone help me with this?

    opened by ghost 1
  • linear regression tutorial cost only reported for last data point

    I noticed in the notebook for the linear regression that the cost was only being calculated for the last piece of data in each epoch.

    with tf.Session() as sess:
    
        # Initialize the variables[w and b].
        sess.run(tf.global_variables_initializer())
    
        # Get the input tensors
        X, Y = inputs()
    
        # Return the train loss and create the train_op.
        train_loss = loss(X, Y)
        train_op = train(train_loss)
    
        # Step 8: train the model
        for epoch_num in range(num_epochs): # run 100 epochs
            for x, y in data:
              train_op = train(train_loss)
    
              # Session runs train_op to minimize loss
              loss_value,_ = sess.run([train_loss,train_op], feed_dict={X: x, Y: y})
    
            # Displaying the loss per epoch.
            print('epoch %d, loss=%f' %(epoch_num+1, loss_value))
    
            # save the values of weight and bias
            wcoeff, bias = sess.run([W, b])
    

    data is being iterated over, and the loss_value that is calculated is overwritten each time through the loop. Thus, the loss reported is only for the last piece of data. Since the loss needs to be computed over all of the data being used to train, the cost function should probably be something more like the following:

    def loss(X, Y):
        '''
        compute the loss by comparing the predicted value to the actual label.
        :param X: The inputs.
        :param Y: The labels.
        :return: The loss over the samples.
        '''
    
        # Making the prediction.
        Y_predicted = inference(X)
        return tf.reduce_sum(tf.squared_difference(Y, Y_predicted))/(2*data.shape[0])
    

    With this change above, the training section could be changed to the following (with the looping over data removed completely):

    with tf.Session() as sess:
    
        # Initialize the variables[w and b].
        sess.run(tf.global_variables_initializer())
    
        # Get the input tensors
        X, Y = inputs()
    
        # Return the train loss and create the train_op.
        train_loss = loss(X, Y)
        train_op = train(loss(X, Y))
    
        # Step 8: train the model
        for epoch_num in range(num_epochs): # run 100 epochs
            loss_value, _ = sess.run([train_loss,train_op], feed_dict={X: data[:,0], Y: data[:,1]})
    
            # Displaying the loss per epoch.
            print('epoch %d, loss=%f' %(epoch_num+1, loss_value))
    
            # save the values of weight and bias
            wcoeff, bias = sess.run([W, b])
    

    This would result in output like the following:

    epoch 1, loss=1573.599976
    epoch 2, loss=1332.513916
    epoch 3, loss=1128.868408
    epoch 4, loss=956.848999
    epoch 5, loss=811.544067
    

    I would be glad to submit a pull request with these and other minor changes. Please let me know if I have some misunderstanding.

    opened by mulhod 1
  • No Transformer Notebook

    Hey,

    I see that there are no tutorial notebooks for Transformer implementations in this repository yet. Transformers are used primarily in the field of natural language processing. Like recurrent neural networks, Transformers are designed to handle sequential data, such as natural language, for tasks such as translation and text summarization.

    I would like to add such tutorial notebooks.

    opened by SauravMaheshkar 0
  • docs: fix simple typo, visualiaing -> visualising

    There is a small typo in docs/tutorials/1-basics/basic_math_operations/README.rst.

    Should read visualising rather than visualiaing.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • a small mistake in doc

    In the tutorial doc of chapter 1, "Basics/variables", there might be a mistake here:

    # "variable_list_custom" is the list of variables that we want to initialize.
    variable_list_custom = [weights, custom_variable]
    
    # The initializer
    init_custom_op = tf.variables_initializer(var_list=all_variables_list)
    

    The last line of the code above should probably use var_list=variable_list_custom, not all_variables_list.
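
    Applying that suggestion, the initializer line would read:

    # Corrected initializer, using the custom variable list defined above:
    init_custom_op = tf.variables_initializer(var_list=variable_list_custom)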

    Here's the URL of the doc: https://github.com/astorfi/TensorFlow-World/tree/master/docs/tutorials/1-basics/variables#initializing-specific-variables Thank you for your repo; it helps me a lot.

    opened by Xiaokeai18 0
Releases (v1.0)
Owner
Amirsina Torfi
PhD & Developer working on Deep Learning, Computer Vision & NLP