This is the repository for our article "Feature Selection for Recommender Systems with Quantum Computing", published in the MDPI journal Entropy.

Collaborative-driven Quantum Feature Selection

This repository was developed by Riccardo Nembrini, PhD student at Politecnico di Milano. See the websites of our quantum computing group and of our recommender systems group for more information on our teams and works. This repository contains the source code for the article "Feature Selection for Recommender Systems with Quantum Computing".

Here we explain how to install the dependencies, set up the connection to the D-Wave Leap quantum cloud services, and run the experiments included in this repository.

Installation

NOTE: This repository requires Python 3.7

We suggest installing all the required packages in a new Python environment. After checking out the repository, enter its folder and run one of the following commands to create the environment:

If you're using virtualenv:

virtualenv -p python3 cqfs
source cqfs/bin/activate

If you're using conda:

conda create -n cqfs python=3.7 anaconda
conda activate cqfs

Remember to add this project to the PYTHONPATH environment variable if you plan to run the experiments from the terminal:

export PYTHONPATH=$PYTHONPATH:/path/to/project/folder

Then, make sure the environment is activated and install all the required packages through pip:

pip install -r requirements.txt

After installing the dependencies, we suggest compiling the Cython code in the repository.

In order to compile, you must first have gcc and the Python 3 development headers installed. On Linux, they can be installed with the following commands:

sudo apt install gcc
sudo apt install python3-dev

If you are using Windows, the installation procedure is a bit more complex. You may refer to THIS guide.

Now you can compile all the Cython algorithms by running the following command. The script compiles within the currently active environment. The code has been developed for Linux and Windows platforms. During compilation you may see some warnings.

python run_compile_all_cython.py

D-Wave Setup

In order to use the D-Wave cloud services, you must first sign up for D-Wave Leap and get your API token.

Then, you need to run the following command in the newly created Python environment:

dwave setup

This is a guided setup for the D-Wave Ocean SDK. When asked to select the non-open-source packages to install, answer y and install at least the D-Wave Drivers (the D-Wave Problem Inspector package is not required, but it can be useful to analyse problem solutions when solving problems directly on the QPU).

Then, continue the configuration by setting custom properties (or keeping the defaults, as we suggest), apart from the Authentication token field, where you should paste the API token obtained from the D-Wave Leap dashboard.

You should now be able to connect to D-Wave cloud services. In order to verify the connection, you can use the following command, which will send a test problem to D-Wave's QPU:

dwave ping
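
As an optional check from Python (a minimal sketch, not part of the repository), you can also list the solvers your API token gives access to through the Ocean cloud client, using the configuration created by dwave setup:

# Minimal connectivity check (illustrative sketch, not repository code).
# Uses the configuration written by `dwave setup`.
from dwave.cloud import Client

with Client.from_config() as client:
    for solver in client.get_solvers():
        print(solver.id)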

Running CQFS Experiments

First of all, you need to prepare the original files for the datasets.

For The Movies Dataset you need to download The Movies Dataset from Kaggle and place the compressed file in the directory recsys/Data_manager_offline_datasets/TheMoviesDataset/, making sure it is called the-movies-dataset.zip.

For CiteULike_a you need to download the following .zip file and place it in the directory recsys/Data_manager_offline_datasets/CiteULike/, making sure the file is called CiteULike_a_t.zip.

We cannot provide data for Xing Challenge 2017, but if you have the dataset available, place the compressed file containing the dataset's original files in the directory recsys/Data_manager_offline_datasets/XingChallenge2017/, making sure the file is called xing_challenge_data_2017.zip.

After preparing the datasets, you should run the following command from the data directory:

python split_NameOfTheDataset.py

This Python script generates the data splits used in the experiments. Moreover, it preprocesses the dataset and checks for errors in the preprocessing phase. The resulting splits are saved in the recsys/Data_manager_split_datasets directory.

After splitting the datasets, you can run the experiments. All the experiment scripts are in the experiments directory, so enter this folder first. Each dataset has separate experiment scripts in the corresponding subdirectory. From now on, we assume that you are running the following commands from the dataset-specific folders, using the scripts contained there.

Collaborative models

First of all, we need to optimize the chosen collaborative models to use with CQFS. To do so, run the following command:

python CollaborativeFiltering.py

The resulting models will be saved into the results directory.

CQFS

Then, you can run the CQFS procedure. We divided the procedure into a selection phase and a recommendation phase. To perform the selection through CQFS, run the following command:

python CQFS.py

This script will solve the CQFS problem on the corresponding dataset and save all the selected features in appropriate subdirectories under the results directory.
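
For reference, here is a minimal, illustrative sketch of how a precomputed feature-selection QUBO can be submitted to the Leap hybrid sampler with the Ocean SDK; it is not the repository's actual code, and the toy matrix Q below is a hypothetical placeholder:

# Illustrative sketch: solving a toy feature-selection QUBO on the Leap hybrid sampler.
import dimod
from dwave.system import LeapHybridSampler

# Hypothetical toy QUBO over two binary feature variables: {(i, j): weight}
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}

bqm = dimod.BinaryQuadraticModel.from_qubo(Q)
sampleset = LeapHybridSampler().sample(bqm)

# Features set to 1 in the best sample are the selected ones
best = sampleset.first.sample
selected_features = [var for var, value in best.items() if value == 1]
print(selected_features)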

After solving the feature selection problem, you should run the following command:

python CQFSTrainer.py

This script optimizes an ItemKNN content-based recommender system for each feature selection previously obtained through CQFS with the given hyperparameters, using only the selected features. Again, all the results are saved in the corresponding subdirectories under the results directory.
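
To give an idea of what "using only the selected features" means for an ItemKNN content-based model, here is a simplified sketch (not the repository's implementation); ICM and selected_features are hypothetical placeholders for the item content matrix and the CQFS output:

# Simplified ItemKNN content-based similarity on a feature-filtered ICM (illustrative only).
import numpy as np
import scipy.sparse as sps

def itemknn_cbf_similarity(ICM, selected_features, topk=100, shrink=10):
    # Keep only the feature columns chosen by CQFS
    ICM_sel = ICM[:, selected_features].tocsr()

    # Shrunk cosine similarity between items (dense here for brevity)
    norms = np.sqrt(ICM_sel.power(2).sum(axis=1)).A.ravel() + 1e-6
    sim = (ICM_sel @ ICM_sel.T).toarray() / (np.outer(norms, norms) + shrink)
    np.fill_diagonal(sim, 0.0)

    # Keep only the top-k neighbours of each item
    for i in range(sim.shape[0]):
        sim[i, np.argsort(sim[i])[:-topk]] = 0.0
    return sps.csr_matrix(sim)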

NOTE: Each selection with the D-Wave Leap hybrid service on these problems takes around 8 seconds for The Movies Dataset and around 30 seconds for CiteULike_a. Therefore, running the script as is would consume all the free time granted by the D-Wave Leap developer plan and may result in errors or invalid selections once no free time remains.

We suggest reducing the number of hyperparameters passed when running the experiments or, even better, choosing a single collaborative model and performing all the experiments on it.

This is not the case when running experiments with Simulated Annealing, since it is executed locally.
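
A local selection of this kind can be sketched with the dwave-neal simulated annealing sampler, which does not consume any Leap time (again, an illustrative example rather than the repository's code, reusing the toy QUBO from the hybrid example above):

# Illustrative sketch: solving the same toy QUBO locally with simulated annealing.
import neal

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}  # hypothetical toy QUBO

sampler = neal.SimulatedAnnealingSampler()
sampleset = sampler.sample_qubo(Q, num_reads=100)

best = sampleset.first.sample
selected_features = [var for var, value in best.items() if value == 1]
print(selected_features)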

For Xing Challenge 2017, experiments run directly on the D-Wave QPU. Leaving all the hyperparameters unchanged, the experiments should not exceed the free time of the developer plan. Be careful when increasing the number of reads from the sampler or the annealing time.

Baselines

In order to obtain the baseline evaluations you can run the corresponding scripts with the following commands:

# ItemKNN content-based with all the features
python baseline_CBF.py

# ItemKNN content-based with features selected through TF-IDF
python baseline_TFIDF.py

# CFeCBF feature weighting baseline
python baseline_CFW.py

Acknowledgements

Software produced by Riccardo Nembrini. Recommender systems library by Maurizio Ferrari Dacrema.

Article authors: Riccardo Nembrini, Maurizio Ferrari Dacrema, Paolo Cremonesi
