intro-to-dl - Resources for the "Introduction to Deep Learning" course.

Overview

Introduction to Deep Learning course resources

https://www.coursera.org/learn/intro-to-deep-learning

Running on Google Colab (tested for all weeks)

Google has released its own flavour of Jupyter called Colab, which has free GPUs!

Here's how you can use it:

  1. Open https://colab.research.google.com, click Sign in in the upper right corner, use your Google credentials to sign in.
  2. Click GITHUB tab, paste https://github.com/hse-aml/intro-to-dl and press Enter
  3. Choose the notebook you want to open, e.g. week2/v2/mnist_with_keras.ipynb
  4. Click File -> Save a copy in Drive... to save your progress in Google Drive
  5. Click Runtime -> Change runtime type and select GPU in Hardware accelerator box
  6. Execute the following code in the first cell to download dependencies (uncomment the line for the week you're working on):
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
# please, uncomment the week you're working on
# setup_google_colab.setup_week1()
# setup_google_colab.setup_week2()
# setup_google_colab.setup_week2_honor()
# setup_google_colab.setup_week3()
# setup_google_colab.setup_week4()
# setup_google_colab.setup_week5()
# setup_google_colab.setup_week6()
  7. If you run many notebooks on Colab, they can continue to eat up memory. You can kill them with ! pkill -9 python3 and check with ! nvidia-smi that GPU memory is freed. A quick check that the GPU runtime is actually active is shown below.
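To confirm the GPU runtime is active, you can run a quick check in a new cell (this snippet is an addition to the original steps, not part of the course setup):

import tensorflow as tf
# tf.test.gpu_device_name() returns an empty string when no GPU is attached
print(tf.test.gpu_device_name() or "No GPU found; re-check Runtime -> Change runtime type")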

Known issues:

  • Blinking animation with IPython.display.clear_output(). It's usable, but we're still looking for a robust workaround; one possibility is sketched below.
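One possibility, untested across all the course notebooks, is to update a display handle in place instead of clearing the whole cell output:

from IPython.display import display

# create the display once, then update it in place; this avoids the
# clear-and-redraw cycle that causes the blinking
handle = display("starting...", display_id=True)
for step in range(10):
    handle.update("step %d" % step)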

Offline instructions

The Coursera Jupyter Environment can be slow if many learners use it heavily. Our tasks are compute-heavy, so we recommend running them on your own hardware for optimal performance.

You will need a computer with at least 4GB of RAM.

There are two options to set up the Jupyter Notebooks locally: a Docker container or Anaconda.

Docker container option (best for Mac/Linux)

Follow the instructions on https://hub.docker.com/r/zimovnov/coursera-aml-docker/ to install a Docker container with all the necessary software installed.

After that you should see a Jupyter page in your browser.

Anaconda option (best for Windows)

We highly recommend installing the Docker environment, but if that's not an option, you can try to install the necessary Python modules with Anaconda.

First, install Anaconda with Python 3.5+ from here.

Download conda_requirements.txt from here.

Open terminal on Mac/Linux or "Anaconda Prompt" in Start Menu on Windows and run:

conda config --append channels conda-forge
conda config --append channels menpo
conda install --yes --file conda_requirements.txt
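To verify that the installation succeeded, you can check that the core packages import (this check is an addition to the original instructions; the exact versions depend on conda_requirements.txt):

python -c "import tensorflow, keras; print(tensorflow.__version__, keras.__version__)"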

To start Jupyter Notebooks, run jupyter notebook on Mac/Linux or launch "Jupyter Notebook" from the Start Menu on Windows.

After that you should see a Jupyter page in your browser.

Prepare resources inside Jupyter Notebooks (for local setups only)

Click New -> Terminal and execute: git clone https://github.com/hse-aml/intro-to-dl.git. On Windows you might want to install Git. You can also download all the resources as a zip archive from the GitHub page.

Close the terminal and refresh the Jupyter page. You will see an intro-to-dl folder; go there, and all the necessary notebooks are waiting for you.

First you need to download the necessary resources. To do that, open download_resources.ipynb and run the cells for Keras and your week.

Now you can open a notebook for the corresponding week and work there just like in Coursera Jupyter Environment.

Using GPU for offline setup (for advanced users)
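The detailed instructions for this section are not reproduced here. As a rough sketch, assuming an NVIDIA GPU with working CUDA drivers and the course's pinned TensorFlow 1.x stack (tensorflow==1.2.1 and Keras==2.0.6, as mentioned in the comments below), you would replace the CPU build with the GPU build and confirm that TensorFlow can see the device:

pip uninstall -y tensorflow
pip install tensorflow-gpu==1.2.1
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"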

Comments
  • cannot submit

    In the first submission for week 3, I couldn't submit. Here is the error: AttributeError: module 'grading_utils' has no attribute 'model_total_params'

    opened by AhmedFrikha 4
  • week4/lfw_dataset.py

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-4-856143fffc33> in <module>()
          8 #Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
          9 from lfw_dataset import load_lfw_dataset
    ---> 10 data,attrs = load_lfw_dataset(dimx=36,dimy=36)
         11 
         12 #preprocess faces
    
    ~/GitHub/intro-to-dl/week4/lfw_dataset.py in load_lfw_dataset(use_raw, dx, dy, dimx, dimy)
         52 
         53     # preserve photo_ids order!
    ---> 54     all_attrs = photo_ids.merge(df_attrs, on=('person', 'imagenum')).drop(["person", "imagenum"], axis=1)
         55 
         56     return all_photos, all_attrs
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in merge(self, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
       6377                      right_on=right_on, left_index=left_index,
       6378                      right_index=right_index, sort=sort, suffixes=suffixes,
    -> 6379                      copy=copy, indicator=indicator, validate=validate)
       6380 
       6381     def round(self, decimals=0, *args, **kwargs):
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
         58                          right_index=right_index, sort=sort, suffixes=suffixes,
         59                          copy=copy, indicator=indicator,
    ---> 60                          validate=validate)
         61     return op.get_result()
         62 
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in __init__(self, left, right, how, on, left_on, right_on, axis, left_index, right_index, sort, suffixes, copy, indicator, validate)
        552         # validate the merge keys dtypes. We may need to coerce
        553         # to avoid incompat dtypes
    --> 554         self._maybe_coerce_merge_keys()
        555 
        556         # If argument passed to validate,
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in _maybe_coerce_merge_keys(self)
        976             # incompatible dtypes GH 9780, GH 15800
        977             elif is_numeric_dtype(lk) and not is_numeric_dtype(rk):
    --> 978                 raise ValueError(msg)
        979             elif not is_numeric_dtype(lk) and is_numeric_dtype(rk):
        980                 raise ValueError(msg)
    
    ValueError: You are trying to merge on int64 and object columns. If you wish to proceed you should use pd.concat
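    A hypothetical remedy, based only on the column names visible in the traceback, is to cast the merge keys to a common dtype before merging:

    # cast 'imagenum' to str on both frames so the merge keys have matching dtypes
    photo_ids['imagenum'] = photo_ids['imagenum'].astype(str)
    df_attrs['imagenum'] = df_attrs['imagenum'].astype(str)
    all_attrs = photo_ids.merge(df_attrs, on=('person', 'imagenum')).drop(["person", "imagenum"], axis=1)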
    
    opened by zuenko 4
  • explanation of "download_utils.py"

    def link_all_keras_resources():
        link_all_files_from_dir("../readonly/keras/datasets/", os.path.expanduser("~/.keras/datasets"))
        link_all_files_from_dir("../readonly/keras/models/", os.path.expanduser("~/.keras/models"))
    

    Which data files belong to the datasets and models directories (with names)?

    def link_week_6_resources():
        link_all_files_from_dir("../readonly/week6/", ".")
    

    Which data files belong to the week6 directory (with names)?

    Please explain these two functions. I want to run the week 6 image captioning project in my local Jupyter notebook.

    Please help me. Thanks!

    opened by rezwanh001 3
  • NumpyNN (honor).ipynb not able to import util.py

    Hi,

    It seems like

    from util import eval_numerical_gradient

    not working. (week 2 honor assignment)

    It can work by manually adding the eval_numerical_gradient function, but it would be better if it were linked.

    Cheers, Nan

    opened by xia0nan 1
  • The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. Please help!!

    The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. The result is always 6 out of 9 because the progress halts after that. Please help me complete the work and submit the results.

    It's an earnest request to the mentors, tutors, and instructors to please consider students facing such issues and provide assistance.

    In my case, it's the only project left to complete in the entire specialization.

    I would be extremely grateful if peer review could be made accessible to all learners, whether they have been facing the same issue for a long time or otherwise.

    Will be eagerly awaiting a response.

    Regards,

    Saheli Basu

    opened by MehaRima 0
  • Fixed a typo on line 285.

    Original: So far our model is staggeringly inefficient. There is something wring with it. Guess, what?

    Changed to: So far, our model is staggeringly inefficient. There is something wrong with it. Guess, what?

    opened by IAmSuyogJadhav 0
  • KeyError in keras_utils.py

    I tried running on my local computer

    model.fit(
        x_train2, y_train2,  # prepared data
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        callbacks=[keras.callbacks.LearningRateScheduler(lr_scheduler),
                   LrHistory(),
                   keras_utils.TqdmProgressCallback(),
                   keras_utils.ModelSaveCallback(model_filename)],
        validation_data=(x_test2, y_test2),
        shuffle=True,
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )

    But it returned me this error

    ~\Documents\kkbq\Coursera\Intro to Deep Learning\intro-to-dl\keras_utils.py in _set_prog_bar_desc(self, logs)
         27 
         28     def _set_prog_bar_desc(self, logs):
    ---> 29         for k in self.params['metrics']:
         30             if k in logs:
         31                 self.log_values_by_metric[k].append(logs[k])

    KeyError: 'metrics'

    Does anyone know why this happened? Thanks.
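    A plausible cause is that newer Keras versions no longer put a 'metrics' key into the callback's params dictionary. A minimal guard (an assumption, not an official fix) would be:

    def _set_prog_bar_desc(self, logs):
        # fall back to whatever keys the logs carry if params lacks 'metrics'
        for k in self.params.get('metrics', list(logs.keys())):
            if k in logs:
                self.log_values_by_metric[k].append(logs[k])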

    opened by samtjong23 0
  • Week 3 - Task 2 issue

    In one of the last cells,

    model.compile(
        loss='categorical_crossentropy',  # we train 102-way classification
        optimizer=keras.optimizers.adamax(lr=1e-2),  # we can take big lr here because we fixed first layers
        metrics=['accuracy']  # report accuracy during training
    )
    

    AttributeError: module 'keras.optimizers' has no attribute 'adamax'

    This can be fixed by changing "adamax" to "Adamax". However, after that, the cell two cells further down:

    # fine tune for 2 epochs (full passes through all training data)
    # we make 2*8 epochs, where epoch is 1/8 of our training data to see progress more often
    model.fit_generator(
        train_generator(tr_files, tr_labels), 
        steps_per_epoch=len(tr_files) // BATCH_SIZE // 8,
        epochs=2 * 8,
        validation_data=train_generator(te_files, te_labels), 
        validation_steps=len(te_files) // BATCH_SIZE // 4,
        callbacks=[keras_utils.TqdmProgressCallback(), 
                   keras_utils.ModelSaveCallback(model_filename)],
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )
    

    throws the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-183-faf1b24645ff> in <module>()
         10                keras_utils.ModelSaveCallback(model_filename)],
         11     verbose=0,
    ---> 12     initial_epoch=last_finished_epoch or 0
         13 )
    
    2 frames
    /usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
         85                 warnings.warn('Update your `' + object_name +
         86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
    ---> 87             return func(*args, **kwargs)
         88         wrapper._original_function = func
         89         return wrapper
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, initial_epoch)
       1723 
       1724         do_validation = bool(validation_data)
    -> 1725         self._make_train_function()
       1726         if do_validation:
       1727             self._make_test_function()
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _make_train_function(self)
        935                 self._collected_trainable_weights,
        936                 self.constraints,
    --> 937                 self.total_loss)
        938             updates = self.updates + training_updates
        939             # Gets loss and metrics. Updates weights at each call.
    
    TypeError: get_updates() takes 3 positional arguments but 4 were given
    

    keras.optimizers.Adamax() inherits the get_updates() method from keras.optimizers.Optimizer(), and that method takes only three arguments (self, loss, params), but _make_train_function is trying to pass four arguments to it.

    As I understand it, the issue here is compatibility between tf 1.x and tf 2. I'm using colab and running the %tensorflow_version 1.x line, as well as the setup cell with week 3 setup uncommented at the start of the notebook.

    All checkpoints up to this point have been passed successfully.
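    One workaround that may avoid the get_updates() mismatch, assuming the pinned course stack mentioned elsewhere in these comments, is to install the matching Keras version at the top of the notebook before importing it:

    ! pip install -q keras==2.0.6
    import keras
    model.compile(
        loss='categorical_crossentropy',
        optimizer=keras.optimizers.Adamax(lr=1e-2),
        metrics=['accuracy']
    )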

    opened by nietoo 1
  • conda issue

    Hi there, I'm facing a lot of problems creating the environment. I want to use my GPU as I usually do, but to run your environment I hit a lot of package conflicts. I spent 4 hours trying to get tensorflow==1.2.1 & Keras==2.0.6 working (with Theano).

    (nvidia-docker does not work on my Debian, so I would rather use a stable conda environment.) Please update the Colab setup with TensorFlow 2+.

    opened by kakooloukia 0
  • Google colab code addition

    The original code does not work in Google Colab. Please add !pip install -q keras==2.0.6 to these lines of code:

    ! pip install -q keras==2.0.6
    ! shred -u setup_google_colab.py
    ! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
    import setup_google_colab
    # please, uncomment the week you're working on
    # setup_google_colab.setup_week1()
    # setup_google_colab.setup_week2()
    # setup_google_colab.setup_week2_honor()
    # setup_google_colab.setup_week3()
    # setup_google_colab.setup_week4()
    # setup_google_colab.setup_week5()
    # setup_google_colab.setup_week6()

    opened by ansh997 0
Owner

Advanced Machine Learning specialisation by HSE