Exploring the Top ML and DL GitHub Repositories

Overview

This repository contains my work for a project in which I collected data on the most popular machine learning and deep learning GitHub repositories, then visualized and analyzed it.

I've written a corresponding article about this project, which you can find on Towards Data Science. The article was selected as an "Editor's Pick" and was also featured in the publication's "Hands-on Tutorials" section.

At a high level, my analysis is as follows:

  1. I collected data on the top machine learning and deep learning repositories and their respective owners from GitHub (a minimal sketch of this step follows this list).
  2. I cleaned and prepared the data.
  3. I visualized what I thought were interesting patterns, trends, and findings within the data, and discussed each visualization in detail in the TDS article mentioned above.
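
As an illustration of step 1, here is a minimal sketch of collecting top-starred repositories with PyGitHub. It is illustrative only: the query string and result count are assumptions, and the actual scraper (src/github_scraper.py) collects many more fields.

# a minimal PyGitHub sketch, not the project's actual scraper
from github import Github

# authenticate with a GitHub personal access token (see Step 3 below)
gh = Github("<your token here>")

# search for the most-starred repositories matching a query
results = gh.search_repositories(query="Machine Learning", sort="stars", order="desc")

# print the five most-starred matches and their owners
for repo in results[:5]:
    print(repo.full_name, repo.stargazers_count, repo.owner.login)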

Tools used

Python, NumPy, pandas, tqdm, PyGitHub, GeoPy, Altair, wordcloud, docopt, black

Replicating the Analysis

I've designed the analysis in this repository so that anyone can recreate the data collection, cleaning, and visualization steps in a fully automated manner. To do so, open a terminal and follow the steps below:

Step 1: Clone this repository to your computer

# clone the repo
git clone https://github.com/nicovandenhooff/top-repo-analysis.git

# change working directory to the repo's root directory
cd top-repo-analysis

Step 2: Create and activate the required virtual environment

# create the environment
conda env create -f environment.yaml

# activate the environment
conda activate top-repo-analysis

Step 3: Obtain a GitHub personal access token ("PAT") and add it to the credentials file

Please see GitHub's documentation on creating a personal access token for instructions on how to obtain one.

Once you have it, run the following:

# open the credentials file
open src/credentials.json

This will open the credentials JSON file, which contains the following:

{
    "github_token": "<your token here>"
}

Change <your token here> to your PAT.
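
Once saved, you can optionally verify that the token works. Below is a minimal sketch of loading the token from the credentials file and authenticating with PyGitHub; this is an assumption about how the scripts read the file, so refer to src/github_scraper.py for the actual logic.

import json
from github import Github

# load the PAT from the credentials file (assumed format shown above)
with open("src/credentials.json") as f:
    token = json.load(f)["github_token"]

# authenticate and print the remaining API request quota
gh = Github(token)
print(gh.get_rate_limit().core.remaining)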

Step 4: Run the following command to delete the current data and visualizations in the repository

make clean

Step 5: Run the following command to recreate the analysis

make all

Please note that if you are recreating the analysis:

  • The last step will take several hours to run (approximately 6-8 hours), as the data collection process has to sleep periodically to respect the GitHub API rate limit. The data collection makes approximately 20,000 to 30,000 API requests in total.
  • When the data cleaning script data_cleaning.py runs, some errors may be printed to the screen by GeoPy if the Nominatim geolocation service is unable to find a valid location for a GitHub user. These errors do not cause the script to terminate and are merely noisy in the terminal. Unfortunately they cannot be suppressed, so simply ignore them if they occur.
  • Getting the location data with GeoPy in the data cleaning script also takes about 30 minutes, as the Nominatim geolocation service limits usage to one API request per second (see the sketch after this list for how such throttling is typically done).
  • I ran this analysis on December 30, 2021, and collected the data from GitHub on that date. If you run the analysis in the future, the data you collect will inherently differ slightly if the set of machine learning and deep learning repositories with the highest number of stars has changed since then, which will in turn slightly change how the resulting visualizations look.
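
For reference, here is a minimal sketch of how one-request-per-second throttling is typically implemented with GeoPy's built-in RateLimiter; the user_agent string and sample query are assumptions, and data_cleaning.py may implement this differently.

from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

# Nominatim requires a descriptive user agent
geolocator = Nominatim(user_agent="top-repo-analysis")

# wrap geocode() so consecutive calls are at least one second apart
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

# geocode a location string as it might appear in a GitHub profile
location = geocode("Vancouver, Canada")
if location is not None:
    print(location.latitude, location.longitude)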

Using the Scraper to Collect New Data

You can also use the scraping script on its own to collect new data from GitHub.

To do this, open a terminal, follow steps 1 to 3 above, and then perform the following:

Step a) Run the scraping script with your desired options as follows

python src/github_scraper.py --queries=<queries> --path=<path>
  • Replace <queries> with your desired search queries. If you want multiple search queries, enclose them in quotation marks and separate them with a single comma with NO SPACE after the comma, for example "Machine Learning,Deep Learning".
  • Replace <path> with the output path where you want the scraped data to be saved. An example invocation follows these bullets.
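
For example, to scrape both of the queries above and save the results to a hypothetical data/raw directory (the path is an assumption; use whatever location suits you):

python src/github_scraper.py --queries="Machine Learning,Deep Learning" --path=data/raw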

Please see the documentation in the header of the scraping script for additional options that are available.

Step b) Run the data cleaning script to clean your newly scraped data

python src/data_cleaning.py --input_path=<path> --output_path=<output_path>
  • Replace <path> with the path where you saved the scraped data.
  • Replace <output_path> with the output path where you want the cleaned data to be saved (see the example after these bullets).
  • As mentioned in the previous section, some errors may be printed to the terminal by GeoPy during the data cleaning process; feel free to ignore these, as they do not affect the execution of the script.
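
For example, continuing with the hypothetical paths from the scraping step (both directories are assumptions):

python src/data_cleaning.py --input_path=data/raw --output_path=data/processed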

Dependencies

Please see the environment file (environment.yaml) for a full list of dependencies.

License

The source code for this project is licensed under the MIT license.
