Visual Memorability for Robotic Interestingness via Unsupervised Online Learning (ECCV 2020 Oral and TRO)

Overview

Visual Interestingness


Install Dependencies

This version is tested with PyTorch 1.7.

  pip3 install -r requirements.txt
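
To confirm the environment matches the tested setup, here is a minimal optional check (it only assumes that torch and torchvision were installed by requirements.txt):

  import torch, torchvision

  print(torch.__version__)          # tested with 1.7.x
  print(torchvision.__version__)
  print(torch.cuda.is_available())  # a GPU is recommended for training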

Long-term Learning

  • You may skip this step if you download the pre-trained vgg16.pt into the folder "saves".

  • Download the COCO dataset into the folder [data-root]:

    bash download_coco.sh [data-root] # replace [data-root] by your desired location
    

    The dataset will look like:

    data-root
    ├──coco
       ├── annotations
       │   ├── annotations_trainval2017
       │   └── image_info_test2017
       └── images
           ├── test2017
           ├── train2017
           └── val2017
    
  • Run

    python3 longterm.py --data-root [data-root] --model-save saves/vgg16.pt
    
    # This requires a long time to train on a single GPU.
    # Create the folder "saves" manually first; a model named "vgg16.pt" will be saved there.
    # A quick check of the saved checkpoint is sketched after this list.
    

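  • Optionally, sanity-check the saved checkpoint before moving on. The snippet below is a hypothetical check, not part of the repository; it only assumes the file was written with torch.save, in which case torch.load returns either the full module or a state_dict:

    import torch

    # Load the checkpoint written by longterm.py (assumption: saved with torch.save).
    checkpoint = torch.load('saves/vgg16.pt', map_location='cpu')
    print(type(checkpoint))  # an nn.Module if the whole model was saved, otherwise an OrderedDict state_dict
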
Short-term Learning

  • Download the SubT front camera data (SubTF) and put it into the folder [data-root], so that it looks like:

    data-root
    ├──SubTF
       ├── 0817-ugv0-tunnel0
       ├── 0817-ugv1-tunnel0
       ├── 0818-ugv0-tunnel1
       ├── 0818-ugv1-tunnel1
       ├── 0820-ugv0-tunnel1
       ├── 0821-ugv0-tunnel0
       ├── 0821-ugv1-tunnel0
       ├── ground-truth
       └── train
    
  • Run

    python3 shortterm.py --data-root [data-root] --model-save saves/vgg16.pt --dataset SubTF --memory-size 100 --save-flag n100usage
    
    # This will read the previous model "vgg16.pt" from long-term learning.
    # A new model "vgg16.pt.SubTF.n100usage.mse" will be generated
    # (see the note on the naming convention at the end of this section).
    
  • You may skip this step if you download the pre-trained vgg16.pt.SubTF.n100usage.mse into the folder "saves".
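
  • The checkpoint name appears to follow the pattern [model-save].[dataset].[save-flag].mse (an assumption based on the file names shown in this README). A small hypothetical helper to confirm the expected file exists before online learning:

    import os

    # Compose the expected short-term checkpoint name and check that it is present.
    # Assumed convention: [model-save].[dataset].[save-flag].mse
    base, dataset, flag = 'saves/vgg16.pt', 'SubTF', 'n100usage'
    checkpoint = f'{base}.{dataset}.{flag}.mse'
    print(checkpoint, 'exists:', os.path.isfile(checkpoint))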

Online Learning

  • Run

      python3 online.py --data-root [data-root] --model-save saves/vgg16.pt.SubTF.n100usage.mse --dataset SubTF --test-data 0 --save-flag n100usage
    
      # --test-data is the sequence ID in the SubTF dataset; sequences 0-6 are available.
      # This will read the trained model "vgg16.pt.SubTF.n100usage.mse" from short-term learning.
    
  • Alternatively, you may test all sequences at once by running (a Python equivalent is sketched below):

      bash test.sh
    
  • This will generate result files in the folder "results".

  • You may skip this step if you download our generated results.
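
  • If you prefer Python over the shell script, the sketch below mirrors what test.sh is assumed to do, running online.py on each of the seven SubTF sequences (replace [data-root] by your desired location):

      import subprocess

      # Run online learning on every SubTF sequence (test-data IDs 0-6), mirroring test.sh.
      for seq in range(7):
          subprocess.run([
              'python3', 'online.py',
              '--data-root', '[data-root]',
              '--model-save', 'saves/vgg16.pt.SubTF.n100usage.mse',
              '--dataset', 'SubTF',
              '--test-data', str(seq),
              '--save-flag', 'n100usage',
          ], check=True)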


Evaluation

  • We follow the SubT tutorial for evaluation; simply run:

    python performance.py --data-root [data-root] --save-flag n100usage --category normal --delta 1 2 3
    # mean accuracy: [0.64455275 0.8368784  0.92165116 0.95906876]
    
    python performance.py --data-root [data-root] --save-flag n100usage --category difficult --delta 1 2 4
    # mean accuracy: [0.42088688 0.57836163 0.67878168 0.75491805]
    
  • This will generate performance figures and data curves for the two categories in the folder "performance". To compare your own numbers against the reference output above, see the sketch below.
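
  • A small hypothetical check for comparing your own run against the reference numbers (the arrays are copied from the expected output shown in this README):

    import numpy as np

    # Reference mean accuracies printed by performance.py for the two categories.
    ref_normal    = np.array([0.64455275, 0.8368784,  0.92165116, 0.95906876])
    ref_difficult = np.array([0.42088688, 0.57836163, 0.67878168, 0.75491805])

    def matches_reference(mine, ref, tol=1e-2):
        """Return True if the measured accuracies are within tol of the reference."""
        return np.allclose(np.asarray(mine), ref, atol=tol)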


Citation

      @inproceedings{wang2020visual,
        title={Visual memorability for robotic interestingness via unsupervised online learning},
        author={Wang, Chen and Wang, Wenshan and Qiu, Yuheng and Hu, Yafei and Scherer, Sebastian},
        booktitle={European Conference on Computer Vision (ECCV)},
        year={2020},
        organization={Springer}
      }
      
      @article{wang2021unsupervised,
        title={Unsupervised Online Learning for Robotic Interestingness with Visual Memory},
        author={Wang, Chen and Qiu, Yuheng and Wang, Wenshan and Hu, Yafei and Kim, Seungchan and Scherer, Sebastian},
        journal={IEEE Transactions on Robotics (T-RO)},
        year={2021},
        publisher={IEEE}
      }

You may watch the accompanying video to get an idea of this work.
