Code for running various machine learning benchmarks on Apple M1 chips with TensorFlow.

Overview

M1, M1 Pro, M1 Max Machine Learning Speed Test Comparison

This repo contains some sample code to benchmark the new M1 MacBooks (M1 Pro and M1 Max) against various other pieces of hardware.

It also includes the steps below to set up your M1, M1 Pro or M1 Max Mac (the steps should also work on Intel Macs) to run the code.

Who is this repo for?

You: have a new M1, M1 Pro, M1 Max machine and would like to get started doing machine learning and data science on it.

This repo: teaches you how to install the most common machine learning and data science packages (software) on your machine and make sure they run using sample code.

Machine Learning Experiments Conducted

All experiments were run with the same code. For Apple devices, TensorFlow environments were created with the steps below.

Notebook | Experiment
00       | TinyVGG model trained on CIFAR10 dataset with TensorFlow code.
01       | EfficientNetB0 feature extractor on Food101 dataset with TensorFlow code.
02       | RandomForestClassifier from Scikit-Learn trained with random search cross-validation on California Housing dataset.
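
For a sense of what notebook 00 runs, here's a minimal sketch of a TinyVGG-style model trained on CIFAR10. The layer sizes, epochs and batch size here are illustrative assumptions, not the exact settings used in the notebook.

import tensorflow as tf

# Load CIFAR10 and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# TinyVGG-style architecture: two conv blocks followed by a classifier head
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(10, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax")
])

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# Train for a few epochs and compare how long this takes on different machines
model.fit(x_train, y_train, epochs=5, batch_size=32,
          validation_data=(x_test, y_test))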

Results

See the results directory.

Steps (how to test your M1 machine)

  1. Create an environment and install dependencies (see below)
  2. Clone this repo
  3. Run various notebooks (results come at the end of the notebooks)

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (shorter version)

If you're experienced with making environments and using the command line, follow this version. If not, see the longer version below.

  1. Download and install Homebrew from https://brew.sh. Follow the steps it prompts you to go through after installation.
  2. Download Miniforge3 (Conda installer) for macOS arm64 chips (M1, M1 Pro, M1 Max).
  3. Install Miniforge3 into home directory.
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
  4. Restart terminal.
  5. Create a directory to set up the TensorFlow environment.
mkdir tensorflow-test
cd tensorflow-test
  6. Make and activate a Conda environment. Note: Python 3.8 is the most stable version for the following setup.
conda create --prefix ./env python=3.8
conda activate ./env
  7. Install the TensorFlow dependencies from the Apple Conda channel.
conda install -c apple tensorflow-deps
  8. Install base TensorFlow (Apple's fork of TensorFlow is called tensorflow-macos).
python -m pip install tensorflow-macos
  9. Install Apple's tensorflow-metal to leverage Apple Metal (Apple's GPU framework) for M1, M1 Pro, M1 Max GPU acceleration.
python -m pip install tensorflow-metal
  10. (Optional) Install TensorFlow Datasets to run the benchmarks included in this repo.
python -m pip install tensorflow-datasets
  11. Install common data science packages.
conda install jupyter pandas numpy matplotlib scikit-learn
  12. Start Jupyter Notebook.
jupyter notebook
  13. Import dependencies and check TensorFlow version/GPU access.
import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like:

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.8.0

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (longer version)

If you're new to creating environments and have a new M1, M1 Pro or M1 Max machine you'd like to get started running TensorFlow and other data science libraries on, follow the steps below.

Note: You're going to see the term "package manager" a lot below. Think of it like this: a package manager is a piece of software that helps you install other pieces (packages) of software.

Installing package managers (Homebrew and Miniforge)

  1. Download and install Homebrew from https://brew.sh. Homebrew is a package manager that sets up a lot of useful things on your machine, including the Command Line Tools for Xcode, which you'll need to run things like git. The command to install Homebrew will look something like:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

It will explain what it's doing and what you need to do as you go.

  2. Download the most compatible version of Miniforge (a minimal installer for Conda specific to conda-forge; Conda is another package manager and conda-forge is a Conda channel) from GitHub.

If you're using an M1 variant Mac, the file you want is Miniforge3-MacOSX-arm64.sh (a direct download from the Miniforge GitHub releases page).

Downloading it will save a shell file called Miniforge3-MacOSX-arm64.sh to your Downloads folder (unless you specify otherwise).

  3. Open Terminal.

  4. We've now got a shell file capable of installing Miniforge, but to do so we'll have to modify its permissions to make it executable.

To do so, we'll run the command chmod +x FILE_NAME, which stands for "change the mode of FILE_NAME to executable".

We'll then execute (run) the program using sh.

chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
  5. This should install Miniforge3 into your home directory (~/ stands for "Home" on Mac).

To check this, we can try to activate the base environment using the source command.

source ~/miniforge3/bin/activate

If it worked, you should see something like the following in your terminal window.

(base) daniel@your-machine ~ %
  6. We've just installed some new software and for it to fully work, we'll need to restart terminal.

Creating a TensorFlow environment

Now we've got the package managers we need, it's time to install TensorFlow.

Let's setup a folder called tensorflow-test (you can call this anything you want) and install everything in there to make sure it's working.

Note: An environment is like a virtual room on your computer. For example, you use the kitchen in your house for cooking because it's got all the tools you need; it would be strange to have an oven in your bedroom. The same goes for your computer: if you're going to be working with specific software, you'll want it all in one place and not scattered everywhere else.

  1. Make a directory called tensorflow-test. This is the directory we're going to store our environment in, and inside the environment will be the software tools we need to run TensorFlow.

We can do so with the mkdir command which stands for "make directory".

mkdir tensorflow-test
  2. Change into tensorflow-test. We'll be running the rest of the commands inside the tensorflow-test directory, so we need to change into it.

We can do this with the cd command which stands for "change directory".

cd tensorflow-test
  3. Now we're inside the tensorflow-test directory, let's create a new Conda environment using the conda command (this command was installed when we installed Miniforge above).

We do so using conda create --prefix ./env, which stands for "conda, create an environment at the path ./env". The . stands for the current directory, so ./env means "a folder called env inside the directory we're currently in".

For example, if I run this command from /Users/daniel/tensorflow-test, the environment's full path will be: /Users/daniel/tensorflow-test/env

conda create --prefix ./env python=3.8
  4. Activate the environment. If conda created the environment correctly, you should be able to activate it using conda activate path/to/environment.

Short version:

conda activate ./env

Long version:

conda activate /Users/daniel/tensorflow-test/env

Note: It's important to activate your environment every time you'd like to work on projects that use the software you install into that environment. For example, you might have one environment for every different project you work on. And all of the different tools for that specific project are stored in its specific environment.

If activating your environment went correctly, your terminal window prompt should look something like:

(/Users/daniel/tensorflow-test/env) daniel@your-machine tensorflow-test %
  5. Now we've got a Conda environment set up, it's time to install the software we need.

Let's start by installing various TensorFlow dependencies (TensorFlow is a large piece of software and depends on many other pieces of software).

Rather than list them all out, Apple have set up a quick command so you can install almost everything TensorFlow needs in one line.

conda install -c apple tensorflow-deps

The above stands for "hey conda install all of the TensorFlow dependencies from the Apple Conda channel" (-c stands for channel).

If it worked, you should see a bunch of stuff being downloaded and installed for you.

  6. Now all of the TensorFlow dependencies have been installed, it's time to install base TensorFlow.

Apple have created a fork (copy) of TensorFlow specifically for Apple Macs. It has all the features of TensorFlow with some extra functionality to make it work on Apple hardware.

This Apple fork of TensorFlow is called tensorflow-macos and is the version we'll be installing:

python -m pip install tensorflow-macos

Depending on your internet connection the above may take a few minutes since TensorFlow is quite a large piece of software.

  7. Now we've got base TensorFlow installed, it's time to install tensorflow-metal.

Why?

Machine learning models often benefit from GPU acceleration. And the M1, M1 Pro and M1 Max chips have quite powerful GPUs.

TensorFlow allows for automatic GPU acceleration if the right software is installed.

And Metal is Apple's framework for GPU computing.

So Apple have created a plugin for TensorFlow (also referred to as a TensorFlow PluggableDevice) called tensorflow-metal to run TensorFlow on Mac GPUs.

We can install it using:

python -m pip install tensorflow-metal

If the above works, we should now be able to leverage our Mac's GPU cores to speed up model training with TensorFlow.
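
As a quick sanity check (this snippet is a sketch and isn't part of the benchmark notebooks), you can confirm TensorFlow sees the Metal GPU and run a small computation on it:

import tensorflow as tf

# Should list one GPU device if tensorflow-metal registered correctly
print(tf.config.list_physical_devices("GPU"))

# Run a small matrix multiplication explicitly on the GPU
with tf.device("/GPU:0"):
    a = tf.random.normal((1000, 1000))
    b = tf.random.normal((1000, 1000))
    c = tf.matmul(a, b)

print(c.device)  # should end with 'GPU:0'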

  8. (Optional) Install TensorFlow Datasets. Doing the above is enough to run TensorFlow on your machine. But if you'd like to run the benchmarks included in this repo, you'll need TensorFlow Datasets.

TensorFlow Datasets provides a collection of common machine learning datasets to test out various machine learning code.

python -m pip install tensorflow-datasets
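
To check the install worked, you could load a small dataset in a Python session. This is a minimal sketch; the benchmark notebooks use datasets such as food101.

import tensorflow_datasets as tfds

# Downloads CIFAR10 on first use and caches it locally
(train_ds, test_ds), ds_info = tfds.load("cifar10",
                                          split=["train", "test"],
                                          as_supervised=True,
                                          with_info=True)

print(ds_info.name, ds_info.splits["train"].num_examples)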
  9. Install common data science packages. If you'd like to run the benchmarks above or work on other data science and machine learning projects, you're likely going to need Jupyter Notebooks, pandas for data manipulation, NumPy for numeric computing, matplotlib for plotting and Scikit-Learn for traditional machine learning algorithms and processing functions.

To install those in the current environment run:

conda install jupyter pandas numpy matplotlib scikit-learn
  10. Test it out. To see if everything worked, try starting a Jupyter Notebook and importing the installed packages.
# Start a Jupyter notebook
jupyter notebook

Once the notebook is started, in the first cell:

import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like:

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.5.0
  11. To see if it really worked, try running one of the notebooks above end to end!

And then compare your results to the benchmarks above.
