Lab Materials for MIT 6.S191: Introduction to Deep Learning

Overview


This repository contains all of the code and software labs for MIT 6.S191: Introduction to Deep Learning! All lecture slides and videos are available on the course website.

Opening the labs in Google Colaboratory:

The 2021 6.S191 labs will be run in Google's Colaboratory, a Jupyter notebook environment that runs entirely in the cloud, so there is nothing to download. To run these labs, you must have a Google account.

On this GitHub repo, navigate to the lab folder you want to run (lab1, lab2, lab3) and open the appropriate Python notebook (*.ipynb). Click the "Run in Colab" link at the top of the lab. That's it!

Running the labs

Now, to run the labs, open the Jupyter notebook in Colab. Navigate to the "Runtime" tab --> "Change runtime type". In the pop-up window, under "Runtime type" select "Python 3", and under "Hardware accelerator" select "GPU". Go through the notebooks and fill in the #TODO cells to get the code running for yourself!
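
To confirm the GPU runtime is active before you start, you can run a quick check in the first cell. This is only a small sanity-check sketch (not part of the official lab code), assuming a TensorFlow 2.x Colab runtime:

import tensorflow as tf

# List the GPU devices visible to TensorFlow; an empty list means the GPU runtime is not enabled
print("TensorFlow version:", tf.__version__)
print("GPU devices:", tf.config.list_physical_devices('GPU'))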

MIT Deep Learning package

You might notice that inside the labs we install the mitdeeplearning Python package from the Python Package Index (PyPI):

pip install mitdeeplearning

This package contains convenience functions that we use throughout the course and can be imported like any other Python package.

>>> import mitdeeplearning as mdl

We do this for you in each of the labs, but the package is also open source under the same license, so you can use it outside of the class as well.
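
As an illustrative sketch of how the package is used inside the labs, the snippet below loads the Lab 1 training songs with one of its helpers; the exact helpers available depend on the lab and on the installed package version:

import mitdeeplearning as mdl

# Load the music-generation training data used in Lab 1 and inspect one example
songs = mdl.lab1.load_training_data()
example_song = songs[0]
print(len(songs), "training songs loaded")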

Lecture Videos

All lecture videos are available publicly online and linked above! Use and/or modification of lecture slides outside of 6.S191 must reference:

© MIT 6.S191: Introduction to Deep Learning

http://introtodeeplearning.com

License

All code in this repository is copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.

Licensed under the MIT License. You may not use this file except in compliance with the License. Use and/or modification of this code outside of 6.S191 must reference:

© MIT 6.S191: Introduction to Deep Learning

http://introtodeeplearning.com

Comments
  • Error

    Hey, why am I getting an error with this line of code, "import util.download_lung_data"? Is it correct? The error says there is no module named util.

    Thanks

    opened by arpita8 12
  • Docker images pull error

    Hi amini, I'm trying to pull the Docker images from the DockerHub link, but it seems there are some problems. When I try docker pull mit6s191/iap2018 in the terminal, it gives me this message:

    Using default tag: latest
    Error response from daemon: manifest for mit6s191/iap2018:latest not found
    

    Then, when I use docker pull mit6s191/iap2018:labs, it shows:

    error pulling image configuration: Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/48/48db1fb4c0b5d8aeb65b499c0623b057f6b50f93eed0e9cfb3f963b0c12a74db/data?Expires=1524752941&Signature=AKuwnCd69y-fs0NlLjQnAlBoUhbht-gWbIYIoIESf7dERzjlkeejUndYC1QCnEhjjlhZAvv2NWQFWEf-Efc6noGUV9hK4QRVaQqO23zRKRrqarTWVMLj5LQX4X1Qikze5YEXy4VqdNm5t88WRQsfDvsPHHDmKx6vqA2V4VgVDP8_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q: net/http: TLS handshake timeout
    

    Could you please help to address this problem? Thank you.

    opened by ytzhao 9
  • Error while importing util

    I'm getting this error in most of the notebooks. Stacktrace in Colab:

    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input> in <module>()
         14
         15 # Import the necessary class-specific utility files for this lab
    ---> 16 import introtodeeplearning_labs as util

    /content/introtodeeplearning_labs/__init__.py in <module>()
    ----> 1 from lab1 import *
          2 from lab2 import *
          3 # from lab3 import *

    ModuleNotFoundError: No module named 'lab1'

    opened by SarthakSG 5
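
    Judging from the traceback above, the package's __init__.py uses Python 2-style implicit relative imports (from lab1 import *), which fail under Python 3. A possible fix, shown only as a sketch and not necessarily how the maintainers resolved it, is to make the imports explicitly relative:

    # introtodeeplearning_labs/__init__.py -- Python 3 requires explicit relative imports
    from .lab1 import *
    from .lab2 import *
    # from .lab3 import *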
  • can't play music though no error occurs

    I ran the code below in Colab, but I didn't hear a sound.

    !pip install mitdeeplearning
    import mitdeeplearning as mdl

    songs = mdl.lab1.load_training_data()
    example_song = songs[0]
    mdl.lab1.play_song(example_song)

    opened by creater-yzl 4
  • invalid bind mount spec, invalid mode: /notebooks/introtodeeplearning_labs

    I am trying out the notebooks posted here, as per the instructions given in the readme:

    sudo docker run -p 8888:8888 -p 6006:6006 -v https://github.com/aamini/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs mit6s191/iap2018:labs

    and I get the error:

    docker: Error response from daemon: invalid bind mount spec "https://github.com/aamini/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs": invalid mode: /notebooks/introtodeeplearning_labs.

    Am I missing the path for the repo?

    opened by sameermahajan 4
  • [Lab 2 / Part 2] Training dataset is missing

    Regarding Section 2.2 of Lab2, Part2 - Debiasing, the training data hosted at https://www.dropbox.com/s/dl/bp54q547mfg15ze/train_face.h5 no longer exists. Thanks for the great class!

    opened by vivianliang 3
  • ModuleNotFoundError: No module named 'lab1'

    Hi, is there a quick solution for importing this module?

    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input> in <module>()
          8 from IPython import display as ipythondisplay
          9
    ---> 10 import introtodeeplearning_labs as util
         11
         12 is_correct_tf_version = '1.14.0' in tf.__version__

    /content/introtodeeplearning_labs/__init__.py in <module>()
    ----> 1 from lab1 import *
          2 from lab2 import *
          3 # from lab3 import *

    opened by IngridJSJ 3
  • lab1/Part2 fails assertion for tf version 1.13.0 (Colab is on 1.13.1)

    Running lab1/Part2_music_generation.ipynb fails at the beginning because it requests 1.13.0. Looks like Colab updated to 1.13.1 (I didn't do anything special).

    is_correct_tf_version = '1.13.0' in tf.__version__
    ...
    AssertionError: Wrong tensorflow version (1.13.1) installed
    

    Replacing that with this will work, unless that specific version is important:

    is_correct_tf_version = tf.__version__ >= '1.13.0'
    
    opened by MaxGhenis 3
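
    A slightly more robust variant, sketched below, compares numeric version components instead of raw strings, so that 1.13.1, 1.14.0, and later releases all pass a minimum-version check; whether the notebook should pin an exact version is a separate question:

    import tensorflow as tf

    # Compare (major, minor) as integers rather than as strings
    major, minor = (int(part) for part in tf.__version__.split('.')[:2])
    assert (major, minor) >= (1, 13), "Wrong tensorflow version ({}) installed".format(tf.__version__)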
  • Avoid colocate_with warning in lab1/part1 solution by using tf.add

    lab1/Part1_tensorflow_solution.ipynb includes this code:

    def our_dense_layer(x, n_in, n_out):
      # ...
      z = tf.matmul(x,W) + b
    

    When this is called in the next cell, it produces this warning:

    WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:642: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Colocations handled automatically by placer.
    tf.Tensor([[0.95257413 0.95257413 0.95257413]], shape=(1, 3), dtype=float32)
    

    This warning can be avoided by replacing the +b code segment with tf.add, i.e.

    z = tf.add(tf.matmul(x, W), b, name="z")
    

    (also more TensorFlow-y)

    opened by MaxGhenis 3
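
    For reference, a sketch of how the suggested tf.add fix slots into the full layer is shown below. The ones/zeros initializers are assumptions chosen to reproduce the printed output above; the actual solution notebook may initialize W and b differently:

    import tensorflow as tf

    def our_dense_layer(x, n_in, n_out):
        # Weights and bias for the layer (initializers assumed, see note above)
        W = tf.Variable(tf.ones((n_in, n_out)))
        b = tf.Variable(tf.zeros((1, n_out)))
        # tf.add instead of the + operator, as suggested above
        z = tf.add(tf.matmul(x, W), b, name="z")
        return tf.sigmoid(z)

    print(our_dense_layer(tf.constant([[1.0, 2.0]]), n_in=2, n_out=3))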
  • Lab1 - Part1 - Section 1.2: Error when using tf.constant to provide input to Keras model.predict

    I am encountering an InvalidArgumentError when I use Keras model.predict with a tf.constant as input. I am not sure whether it's because model.predict doesn't work with tf.constant or I am doing something wrong. It works fine when I use a NumPy array with the same values.

    # Define the number of inputs and outputs
    n_input_nodes = 2
    n_output_nodes = 3
    
    # First define the model 
    model = Sequential()
    
    '''TODO: Define a dense (fully connected) layer to compute z'''
    # Remember: dense layers are defined by the parameters W and b!
    # You can read more about the initialization of W and b in the TF documentation :) 
    dense_layer = Dense(n_output_nodes, input_shape=(n_input_nodes,),activation='sigmoid') # TODO 
    
    # Add the dense layer to the model
    model.add(dense_layer)
    

    Now when I do prediction using:

    # Test model with example input
    x_input = tf.constant([[1.0,2.]], shape=(1,2))
    '''TODO: feed input into the model and predict the output!'''
    print(model.predict(x_input)) # TODO
    

    I get the following error:

    InvalidArgumentError: In[0] is not a matrix. Instead it has shape [2] [[{{node MatMul_3}}]] [Op:StatefulPartitionedCall]

    When I use a NumPy array, it works:

    # Test model with example input
    x_input =np.array([[1.0,2.]])
    '''TODO: feed input into the model and predict the output!'''
    print(model.predict(x_input)) # TODO
    

    [[0.19114174 0.88079417 0.8062956 ]]

    Could you let me know if it is a TF issue? If so, I can raise an issue on the TF repository.

    opened by sibyjackgrove 3
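
    Two workarounds that sidestep the error in recent TF 2.x releases are sketched below (this is not a definitive explanation of the underlying cause): call the model directly on the tensor, or convert the tensor to a NumPy array before calling predict:

    import tensorflow as tf
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Dense

    model = Sequential([Dense(3, input_shape=(2,), activation='sigmoid')])
    x_input = tf.constant([[1.0, 2.0]], shape=(1, 2))

    # Option 1: call the model directly on the tensor (eager execution)
    print(model(x_input))

    # Option 2: convert the tensor to a NumPy array before using predict()
    print(model.predict(x_input.numpy()))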
  • module 'mitdeeplearning.lab3' has no attribute 'pong_change'

    I'm doing the Pong part of the RL lab and I ran into this issue. I've tried looking into all versions of mitdeeplearning and I didn't find the pong_change function. Can you fix this?

    opened by babahadjsaid 2
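
    A quick way to see which helpers the installed copy of the package actually exposes is sketched below; if pong_change is missing from the list, upgrading mitdeeplearning to the latest release may help:

    import mitdeeplearning as mdl

    # List the public helpers exposed by the installed lab3 module
    # (if 'pong_change' is absent, try: pip install --upgrade mitdeeplearning)
    print([name for name in dir(mdl.lab3) if not name.startswith('_')])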
  • Part2_Music_Generation: model prediction inputs

    Maybe this comes a little late, but I think this is also a good chance to say thank you to Alex and Ava for this great course.

    Here is my question:

    When I went through the Music_Generation code, the answer here was quite confusing to me. Only one character is passed to the model at each step; although the input is updated each time, the previous information is missing. (I think this is also part of the reason why the generated songs are always invalid.)

    # Pass the prediction along with the previous hidden state as the next inputs to the model
    input_eval = tf.expand_dims([predicted_id], 0)

    So I save the initial input and concatenate each output onto it as the next input. This makes more sense to me, and the results start to be much better, but I'm not sure if I made something wrong or whether there are better ways, like taking the previous state as the next initial state.

    output_eval = tf.expand_dims([predicted_id], 0)
    input_eval = tf.concat([input_eval, output_eval], 1)

    opened by maple24 0
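
    For concreteness, a sketch of the concatenation variant described above is given below. The function name, variable names, and model interface (character IDs in, per-character logits out) are assumptions based on a standard character-level generation loop, not the lab's actual API:

    import tensorflow as tf

    def generate_with_growing_context(model, start_ids, generation_length):
        # Seed with the starting character IDs; cast to int64 to match tf.random.categorical output
        input_eval = tf.expand_dims(tf.cast(start_ids, tf.int64), 0)      # shape (1, seed_length)
        for _ in range(generation_length):
            predictions = model(input_eval)                               # assumed shape (1, time, vocab_size)
            last_logits = predictions[0, -1:, :]                          # logits for the newest position
            predicted_id = tf.random.categorical(last_logits, num_samples=1)[0, 0]
            # Concatenate the new prediction onto the running input instead of replacing it
            output_eval = tf.expand_dims([predicted_id], 0)
            input_eval = tf.concat([input_eval, output_eval], axis=1)
        return input_eval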
  • [Lab 1 Part 1.1] A typo in the matrix indexes?

    The last matrix defined by the user in section 1.1 has to be a 2-d Tensor. Thus, it'll have a shape of (n, 2) or (2, n). The code in the Lab 1 notebook has:

    row_vector = matrix[1]
    column_vector = matrix[:,2]
    scalar = matrix[1, 2]
    

    This assumes the matrix has at least 3 columns. IMHO, it should instead be:

    row_vector = matrix[1]
    column_vector = matrix[:,1]
    scalar = matrix[0, 1]
    

    And assume the user will choose a (2,2) matrix as their solution.

    opened by datasith 1
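
    To make the proposed fix concrete, here is a small sketch with a (2, 2) matrix showing that the corrected indices are all valid (the values themselves are arbitrary):

    import tensorflow as tf

    matrix = tf.constant([[1.0, 2.0],
                          [3.0, 4.0]])     # a (2, 2) 2-d Tensor

    row_vector = matrix[1]                 # second row    -> [3., 4.]
    column_vector = matrix[:, 1]           # second column -> [2., 4.]
    scalar = matrix[0, 1]                  # single entry  -> 2.0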
  • Lab2: Part2_Debiasing Test Set Bias

    Hello, I have a question on test set bias of the standard CNN model. After running cell #13 in Part2_Debiasing.ipynb, I see this histogram:

    [Histogram: Lab2_Cell13_Std_CNN_Test_Set_Bias]

    Am I correct in interpreting that avg test set accuracy is ~65% for Light Female, ~70% for Light Male, ~82% for Dark Female, and ~90% for Dark Male?

    If the training set has a majority of its data on light-skinned females, I expected the test set accuracy to be higher for that category. So, the results of the above histogram are counterintuitive.

    What am I missing?

    --Rahul

    opened by rvh72 2
  • Unable to install on M1 Mac

    Getting the following output when attempting to install mitdeeplearning via pip on an M1 Mac:

    % pip install mitdeeplearning
    DEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621
    Collecting mitdeeplearning
      Using cached mitdeeplearning-0.2.0.tar.gz (2.1 MB)
      Preparing metadata (setup.py) ... done
    Requirement already satisfied: numpy in /opt/homebrew/lib/python3.9/site-packages (from mitdeeplearning) (1.22.3)
    Collecting regex
      Downloading regex-2022.3.15-cp39-cp39-macosx_11_0_arm64.whl (281 kB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 281.8/281.8 KB 5.1 MB/s eta 0:00:00
    Collecting tqdm
      Using cached tqdm-4.64.0-py2.py3-none-any.whl (78 kB)
    Collecting gym
      Downloading gym-0.23.1.tar.gz (626 kB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 626.2/626.2 KB 17.9 MB/s eta 0:00:00
      Installing build dependencies ... done
      Getting requirements to build wheel ... done
      Preparing metadata (pyproject.toml) ... done
    Collecting mitdeeplearning
      Downloading mitdeeplearning-0.1.2.tar.gz (2.1 MB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 45.2 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... done
      Downloading mitdeeplearning-0.1.1.tar.gz (2.1 MB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 37.5 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... done
      Downloading mitdeeplearning-0.1.0.tar.gz (2.1 MB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 36.1 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... done
    ERROR: Cannot install mitdeeplearning==0.1.0, mitdeeplearning==0.1.1, mitdeeplearning==0.1.2 and mitdeeplearning==0.2.0 because these package versions have conflicting dependencies.
    
    The conflict is caused by:
        mitdeeplearning 0.2.0 depends on tensorflow>=2.0.0a
        mitdeeplearning 0.1.2 depends on tensorflow>=2.0.0a
        mitdeeplearning 0.1.1 depends on tensorflow>=2.0.0a
        mitdeeplearning 0.1.0 depends on tensorflow>=2.0.0a
    
    To fix this you could try to:
    1. loosen the range of package versions you've specified
    2. remove package versions to allow pip attempt to solve the dependency conflict
    
    ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
    
    ERROR: Could not find a version that satisfies the requirement tensorflow>=2.0.0a 
    

    Looking for a solution that allows the installation of this package. I'm using Python 3.9 and I have tensorflow-macos installed.

    opened by jackson-sandland 4
  • Lab 3 - Fail to import module for event camera

    While running the following section in Lab 3 on autonomous driving:

    import vista
    from vista.utils import logging
    logging.setLevel(logging.ERROR)

    I get an error:

    ::WARNING::[vista.entities.sensors.EventCamera.] Fail to import module for event camera. Remember to do source /openeb/build/utils/scripts/setup_env.sh. Can ignore this if not using it

    How do I solve this problem?

    I tried to continue, and then in the following code section:

    camera = car.spawn_camera(config={'size': (200, 320)})

    I get a GL error:

    GLError: GLError(
        err = 12290,
        baseOperation = eglMakeCurrent,
        cArguments = (
            <OpenGL._opaque.EGLDisplay_pointer object at 0x7ff7681b5b90>,
            <OpenGL._opaque.EGLSurface_pointer object at 0x7ff768c97dd0>,
            <OpenGL._opaque.EGLSurface_pointer object at 0x7ff768c97dd0>,
            <OpenGL._opaque.EGLContext_pointer object at 0x7ff767f89290>,
        ),
        result = 0
    )

    I am using Google Colab (with GPU).

    opened by kerengold 2
Releases: v0.2.0

Owner: Alexander Amini