Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks

Overview

Code, based on the PyTorch framework, for reproducing experiments from the paper Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks.

Install Requirements

Tested with Python 3.8.

pip install -r requirements.txt

1. Incremental Hierarchical Tensor Rank Learning

1.1 Generating Data

Matrix Completion/Sensing

python matrix_factorization_data_generator.py --task_type completion
  • Setting task_type to "sensing" will generate matrix sensing data.
  • Use the -h flag for information on the customizable run arguments.
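
For intuition, a matrix completion instance pairs a low-rank ground truth matrix with a random subset of observed entries. Below is a minimal sketch; the dimensions, rank, and number of observations are illustrative assumptions, and the actual generator script may store the data differently.

import numpy as np

# Illustrative sizes (the generator script exposes these as run arguments).
d, rank, num_observed = 64, 5, 2048
rng = np.random.default_rng(0)

# Low-rank ground truth matrix.
target = rng.standard_normal((d, rank)) @ rng.standard_normal((rank, d))

# Observe a random subset of entries; the task is to recover the rest.
observed_indices = rng.choice(d * d, size=num_observed, replace=False)
observed_values = target.flat[observed_indices]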

Tensor Completion/Sensing

python tensor_sensing_data_generator.py --task_type completion
  • Setting task_type to "sensing" will generate tensor sensing data.
  • Use the -h flag for information on the customizable run arguments.
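
In the sensing variant, each label is an inner product between a random measurement tensor and the ground truth tensor. A minimal sketch for an order-3 tensor follows; the dimensions, rank-1 ground truth, and number of measurements are illustrative assumptions.

import numpy as np

dims, num_measurements = (16, 16, 16), 2048
rng = np.random.default_rng(0)

# Rank-1 ground truth tensor (outer product of three vectors).
u, v, w = (rng.standard_normal(d) for d in dims)
target = np.einsum("i,j,k->ijk", u, v, w)

# Each measurement is a random tensor; its label is the inner product with the target.
measurements = rng.standard_normal((num_measurements, *dims))
labels = np.einsum("nijk,ijk->n", measurements, target)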

1.2 Running Experiments

Matrix Factorization

python matrix_factorization_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 500000 \
--num_train_samples 2048 \
--outputs_dir "outputs/mf_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 25 \
--save_every_num_val 50 \
--epoch_log_interval 25 \
--train_batch_log_interval -1
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
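
The model here is a deep matrix factorization: the end-to-end matrix is a product of factor matrices, fitted to the observed entries by gradient descent. A minimal PyTorch sketch of this setup, where the dimensions, depth, initialization scale, learning rate, and toy observations are illustrative assumptions rather than the runner's defaults:

import torch

d, depth, lr, steps = 64, 3, 1e-2, 1000
# Small (near-zero) initialization; each factor is a trainable leaf tensor.
factors = [(torch.randn(d, d) * 1e-2).requires_grad_() for _ in range(depth)]
optimizer = torch.optim.SGD(factors, lr=lr)

# Toy observations: a random subset of entries of a rank-1 target.
target = torch.outer(torch.randn(d), torch.randn(d))
mask = torch.rand(d, d) < 0.3

for _ in range(steps):
    end_to_end = torch.linalg.multi_dot(factors)  # product of all factors
    loss = ((end_to_end - target)[mask] ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()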

Tensor Factorization

python tensor_factorization_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 500000 \
--num_train_samples 2048 \
--outputs_dir "outputs/tf_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 25 \
--save_every_num_val 50 \
--epoch_log_interval 25 \
--train_batch_log_interval -1
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
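
In tensor factorization the tensor is parameterized as a sum of rank-1 components (outer products), whose factors are trained by gradient descent. A minimal sketch for an order-3 tensor; the dimensions and number of components are illustrative assumptions:

import torch

dims, R = (16, 16, 16), 8
# One factor matrix per mode, holding R vectors each (small initialization).
factors = [(torch.randn(R, d) * 1e-2).requires_grad_() for d in dims]

def reconstruct(a, b, c):
    # Sum over r of the outer products of the r-th factor vectors.
    return torch.einsum("ri,rj,rk->ijk", a, b, c)

tensor = reconstruct(*factors)  # fit to observations via gradient descent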

Hierarchical Tensor Factorization

python hierarchical_tensor_factorization_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 500000 \
--num_train_samples 2048 \
--outputs_dir "outputs/htf_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 25 \
--save_every_num_val 50 \
--epoch_log_interval 25 \
--train_batch_log_interval -1
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
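
The quantity tracked in these experiments is the hierarchical tensor rank, determined by the ranks of the tensor's matricizations according to the mode tree. A minimal sketch of computing the singular values of one matricization; the mode grouping shown is an illustrative choice:

import numpy as np

def matricization_singular_values(tensor, row_modes):
    # Arrange the given modes as rows and the remaining modes as columns,
    # then take the singular values of the resulting matrix.
    col_modes = [m for m in range(tensor.ndim) if m not in row_modes]
    rows = int(np.prod([tensor.shape[m] for m in row_modes]))
    mat = np.transpose(tensor, row_modes + col_modes).reshape(rows, -1)
    return np.linalg.svd(mat, compute_uv=False)

tensor = np.random.default_rng(0).standard_normal((8, 8, 8, 8))
print(matricization_singular_values(tensor, [0, 1]))  # {0,1} vs. {2,3} split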

1.3 Plotting Results

Plotting metrics against the number of iterations for an experiment (or multiple experiments) can be done by:

python dynamical_analysis_results_multi_plotter.py \
--plot_config_path <path_to_plot_config_file>
  • plot_config_path should point to a file with the plot configuration. For example, plot_configs/mf_tf_htf_dyn_plot_config.json is the configuration used to create the plot below. To run it, it suffices to fill in the checkpoint_path fields (checkpoints are created during training inside the respective experiment's folder).
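
If you prefer to fill in the checkpoint_path fields programmatically, a minimal sketch follows. Only the field name checkpoint_path is taken from the config; the rest of its schema is not assumed, and the paths below are placeholders.

import json

def fill_checkpoint_paths(node, paths):
    # Recursively replace every "checkpoint_path" value, in document order.
    if isinstance(node, dict):
        for key in node:
            if key == "checkpoint_path":
                node[key] = paths.pop(0)  # supply one path per field
            else:
                fill_checkpoint_paths(node[key], paths)
    elif isinstance(node, list):
        for item in node:
            fill_checkpoint_paths(item, paths)

with open("plot_configs/mf_tf_htf_dyn_plot_config.json") as f:
    config = json.load(f)

fill_checkpoint_paths(config, [
    "outputs/mf_exps/<experiment_dir>/<checkpoint_file>",
    "outputs/tf_exps/<experiment_dir>/<checkpoint_file>",
    "outputs/htf_exps/<experiment_dir>/<checkpoint_file>",
])

with open("plot_configs/mf_tf_htf_dyn_plot_config.json", "w") as f:
    json.dump(config, f, indent=2)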

Example plot:

2. Countering Locality Bias of Convolutional Networks via Regularization

2.1. Is Same Class

2.1.1 Generating Data

Generating train data is done by running:

python is_same_class_data_generator.py --train --num_samples 5000

For test data use:

python is_same_class_data_generator.py --num_samples 10000
  • Use the output_dir argument to set the output directory in which the datasets will be saved (default is ./data/is_same).
  • The train flag determines whether samples are generated from the train or test split of the original dataset.
  • Specify num_samples to set how many samples to generate.
  • Use the -h flag for information on the customizable run arguments.
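
For intuition, each sample in this task places two images a given distance apart, labeled by whether they belong to the same class. A minimal sketch of forming one such sample; the base images, canvas layout, and placement are illustrative assumptions:

import numpy as np

def make_pair_sample(img1, cls1, img2, cls2, distance):
    # Place the two images side by side, `distance` pixels apart.
    h, w = img1.shape
    canvas = np.zeros((h, 2 * w + distance), dtype=img1.dtype)
    canvas[:, :w] = img1
    canvas[:, w + distance:] = img2
    label = int(cls1 == cls2)  # 1 if same class, 0 otherwise
    return canvas, label

rng = np.random.default_rng(0)
img_a, img_b = rng.random((28, 28)), rng.random((28, 28))
sample, label = make_pair_sample(img_a, 3, img_b, 5, distance=8)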

2.1.2 Running Experiments

python is_same_class_experiments_runner.py \
--train_dataset_path <path_to_train_dataset_file> \
--test_dataset_path <path_to_test_dataset_file> \
--epochs 150 \
--outputs_dir "outputs/is_same_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 1 \
--save_every_num_val 1 \
--epoch_log_interval 1 \
--train_batch_log_interval 50 \
--stop_on_perfect_train_acc \
--stop_on_perfect_train_acc_patience 20 \
--model resnet18 \
--distance 0 \
--grad_change_reg_coeff 0
  • train_dataset_path and test_dataset_path are the paths of the train and test dataset files, respectively.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.
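
The stop_on_perfect_train_acc flags stop training only after train accuracy has stayed perfect for a given number of consecutive checks. A minimal sketch of such a stopping rule; this is an assumption, not necessarily the runner's exact logic:

def should_stop(train_acc_history, patience=20):
    # Stop once the last `patience` recorded accuracies are all perfect.
    recent = train_acc_history[-patience:]
    return len(recent) == patience and all(acc == 1.0 for acc in recent)

history = [0.97, 0.99] + [1.0] * 20
print(should_stop(history))  # True: 20 consecutive perfect checks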

2.1.3 Plotting Results

Plotting different regularization options against the task difficulty can be done by:

python locality_bias_plotter.py \
--experiments_dir <path_to_experiments_dir> \
--experiment_groups_dir_names <group_dir_name_1> <group_dir_name_2> ... \
--per_experiment_group_y_axis_value_name <y_axis_value_name_1> <y_axis_value_name_2> ... \
--per_experiment_group_label <label_1> <label_2> ... \
--x_axis_value_name "distance" \
--plot_title "Is Same Class" \
--x_label "distance between images" \
--y_label "test accuracy (%)" \
--save_plot_to <plot_save_path> \
--error_bars_opacity 0.5
  • Set experiments_dir to the directory containing the experiments you would like to plot.
  • experiment_groups_dir_names takes the names of the experiment groups; each group name should correspond to a sub-directory of that name under the experiments_dir path.
  • Use per_experiment_group_y_axis_value_name to select the reported value for each experiment group. The name should match a key in the experiments' summary.json files; use dot notation for nested keys (a sketch of this lookup follows the list).
  • per_experiment_group_label sets a label for each group, in the order the groups were given.
  • save_plot_to is the path at which the plot will be saved.
  • Use x_axis_value_name to set the value used for the x-axis. The name should match a key in either the summary.json or config.json files; use dot notation for nested keys.
  • Use the -h flag for information on the customizable run arguments.
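
Dot notation resolves into the nested dictionaries of summary.json and config.json. A minimal sketch of such a lookup; the key names shown are hypothetical:

def get_by_dot_path(d, dot_path):
    # Walk nested dictionaries, one key per dot-separated component.
    for key in dot_path.split("."):
        d = d[key]
    return d

summary = {"metrics": {"test_acc": 91.3}}
print(get_by_dot_path(summary, "metrics.test_acc"))  # 91.3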

Example plots:

2.2. Pathfinder

2.2.1 Generating Data

To generate Pathfinder datasets, first run the following command to create raw image samples for all specified path lengths:

python pathfinder_raw_images_generator.py \
--num_samples 20000 \
--path_lengths 3 5 7 9
  • Use the output_dir argument to set the output directory in which the raw samples will be saved (default is ./data/pathfinder/raw).
  • The samples for each path length are saved in separate directories.
  • Use the -h flag for information on the customizable run arguments.

Then, use the following command to create the dataset files for all path lengths (one dataset per length):

python pathfinder_data_generator.py \
--dataset_path data/pathfinder/raw \
--num_train_samples 10000 \
--num_test_samples 10000
  • dataset_path is the path to the directory of the raw images.
  • Use the output_dir argument to set the output directory in which the datasets will be saved (default is ./data/pathfinder).
  • Use the -h flag for information on the customizable run arguments.

2.2.2 Running Experiments

python pathfinder_experiments_runner.py \
--dataset_path <path_to_dataset_file> \
--epochs 150 \
--outputs_dir "outputs/pathfinder_exps" \
--save_logs \
--save_metric_plots \
--save_checkpoints \
--validate_every 1 \
--save_every_num_val 1 \
--epoch_log_interval 1 \
--train_batch_log_interval 50 \
--stop_on_perfect_train_acc \
--stop_on_perfect_train_acc_patience 20 \
--model resnet18 \
--grad_change_reg_coeff 0
  • dataset_path should point to the dataset file generated in the previous step.
  • A folder with checkpoints, metric plots, and a log file will be automatically created under the directory specified by outputs_dir.
  • Use the -h flag for information on the customizable run arguments.

2.2.3 Plotting Results

Plotting different regularization options against the task difficulty can be done by:

python locality_bias_plotter.py \
--experiments_dir <path_to_experiments_dir> \
--experiment_groups_dir_names <group_dir_name_1> <group_dir_name_2> ... \
--per_experiment_group_y_axis_value_name <y_axis_value_name_1> <y_axis_value_name_2> ... \
--per_experiment_group_label <label_1> <label_2> ... \
--x_axis_value_name "dataset_path" \
--plot_title "Pathfinder" \
--x_label "path length" \
--y_label "test accuracy (%)" \
--x_axis_ticks 3 5 7 9 \
--save_plot_to <plot_save_path> \
--error_bars_opacity 0.5
  • Set experiments_dir to the directory containing the experiments you would like to plot.
  • experiment_groups_dir_names takes the names of the experiment groups; each group name should correspond to a sub-directory of that name under the experiments_dir path.
  • Use per_experiment_group_y_axis_value_name to select the reported value for each experiment group. The name should match a key in the experiments' summary.json files; use dot notation for nested keys.
  • per_experiment_group_label sets a label for each group, in the order the groups were given.
  • save_plot_to is the path at which the plot will be saved.
  • Use x_axis_value_name to set the value used for the x-axis. The name should match a key in either the summary.json or config.json files; use dot notation for nested keys.
  • Use the -h flag for information on the customizable run arguments.

Example plots:

Citation

For citing the paper, you can use:

@article{razin2022implicit,
  title={Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks},
  author={Razin, Noam and Maman, Asaf and Cohen, Nadav},
  journal={arXiv preprint arXiv:2201.11729},
  year={2022}
}