Multiple-criteria decision-making (MCDM) with Electre, Promethee, Weighted Sum and Pareto

EasyMCDM - Quick Installation methods

Install with PyPI

Once you have created your Python environment (Python 3.6+), you can simply type:

pip3 install EasyMCDM

Install with GitHub

Once you have created your Python environment (Python 3.6+), you can simply type:

git clone https://github.com/qanastek/EasyMCDM.git
cd EasyMCDM
pip3 install -r requirements.txt
pip3 install --editable .

Any modification made to the EasyMCDM package is picked up automatically, since it was installed with the --editable flag.

Setup with Anaconda

conda create --name EasyMCDM python=3.6 -y
conda activate EasyMCDM

More information on managing environments with Anaconda can be found in the conda cheat sheet.

Try It

Data in tests/data/donnees.csv (excerpt):

alfa_156,23817,201,8,39.6,6,378,31.2
audi_a4,25771,195,5.7,35.8,7,440,33
cit_xantia,25496,195,7.9,37,2,480,34

Promethee

import pandas as pd

from EasyMCDM.models.Promethee import Promethee

data = pd.read_csv('tests/data/donnees.csv', header=None).to_numpy()
# or
data = {
  "alfa_156": [23817.0, 201.0, 8.0, 39.6, 6.0, 378.0, 31.2],
  "audi_a4": [25771.0, 195.0, 5.7, 35.8, 7.0, 440.0, 33.0],
  "cit_xantia": [25496.0, 195.0, 7.9, 37.0, 2.0, 480.0, 34.0]
}
weights = [0.14, 0.14, 0.14, 0.14, 0.14, 0.14, 0.14] # one weight per criterion
prefs = ["min", "max", "min", "min", "min", "max", "min"] # whether each criterion should be minimized or maximized

p = Promethee(data=data, verbose=False)
res = p.solve(weights=weights, prefs=prefs)
print(res)

Output:

{
  'phi_negative': [('rnlt_safrane', 2.381), ('vw_passat', 2.9404), ('bmw_320d', 3.3603), ('saab_tid', 3.921), ('audi_a4', 4.34), ('cit_xantia', 4.48), ('rnlt_laguna', 5.04), ('alfa_156', 5.32), ('peugeot_406', 5.461), ('cit_xsara', 5.741)],
  'phi_positive': [('rnlt_safrane', 6.301), ('vw_passat', 5.462), ('bmw_320d', 5.18), ('saab_tid', 4.76), ('audi_a4', 4.0605), ('cit_xantia', 3.921), ('rnlt_laguna', 3.6406), ('alfa_156', 3.501), ('peugeot_406', 3.08), ('cit_xsara', 3.08)],
  'phi': [('rnlt_safrane', 3.92), ('vw_passat', 2.5214), ('bmw_320d', 1.8194), ('saab_tid', 0.839), ('audi_a4', -0.27936), ('cit_xantia', -0.5596), ('rnlt_laguna', -1.3995), ('alfa_156', -1.8194), ('peugeot_406', -2.381), ('cit_xsara', -2.661)],
  'matrix': '...'
}
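
In the output above, 'phi' is already ordered from highest to lowest net flow, so a ranking can be read from it directly. A minimal sketch, assuming the res dictionary printed above:

# Alternatives ordered by decreasing net flow (best first),
# taken from the 'phi' entry of the result above.
ranking = [name for name, _ in res['phi']]
print(ranking)  # best alternative first, here 'rnlt_safrane'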

Electre Iv / Is

from EasyMCDM.models.Electre import Electre

data = {
    "A1" : [80, 90,  600, 5.4,  8,  5],
    "A2" : [65, 58,  200, 9.7,  1,  1],
    "A3" : [83, 60,  400, 7.2,  4,  7],
    "A4" : [40, 80, 1000, 7.5,  7, 10],
    "A5" : [52, 72,  600, 2.0,  3,  8],
    "A6" : [94, 96,  700, 3.6,  5,  6],
}
weights = [0.1, 0.2, 0.2, 0.1, 0.2, 0.2] # one weight per criterion
prefs = ["min", "max", "min", "min", "min", "max"] # whether each criterion should be minimized or maximized
vetoes = [45, 29, 550, 6, 4.5, 4.5] # veto threshold for each criterion
indifference_threshold = 0.6
preference_thresholds = [20, 10, 200, 4, 2, 2] # or None for Electre Iv

e = Electre(data=data, verbose=False)

results = e.solve(weights, prefs, vetoes, indifference_threshold, preference_thresholds)

Output:

{'kernels': ['A4', 'A5']}
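
As the comment above notes, passing None as the preference thresholds switches to the Electre Iv variant. A minimal sketch reusing the same solver instance (an assumption, not a separately documented example):

# Same weights, preferences and vetoes, but no preference thresholds (Electre Iv).
results_iv = e.solve(weights, prefs, vetoes, indifference_threshold, None)
print(results_iv)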

Pareto

from EasyMCDM.models.Pareto import Pareto

data = 'tests/data/donnees.csv'
# or
data = {
  "alfa_156": [23817.0, 201.0, 8.0, 39.6, 6.0, 378.0, 31.2],
  "audi_a4": [25771.0, 195.0, 5.7, 35.8, 7.0, 440.0, 33.0],
  "cit_xantia": [25496.0, 195.0, 7.9, 37.0, 2.0, 480.0, 34.0]
}

p = Pareto(data=data, verbose=False)
res = p.solve(indexes=[0,1,6], prefs=["min","max","min"])
print(res)

Output:

{
  'alfa_156': {'Weakly-dominated-by': [], 'Dominated-by': []},
  'audi_a4': {'Weakly-dominated-by': ['alfa_156'], 'Dominated-by': ['alfa_156']}, 
  'cit_xantia': {'Weakly-dominated-by': ['alfa_156', 'vw_passat'], 'Dominated-by': ['alfa_156']},
  'peugeot_406': {'Weakly-dominated-by': ['alfa_156', 'cit_xantia', 'rnlt_laguna', 'vw_passat'], 'Dominated-by': ['alfa_156', 'cit_xantia', 'rnlt_laguna', 'vw_passat']},
  'saab_tid': {'Weakly-dominated-by': ['alfa_156'], 'Dominated-by': ['alfa_156']}, 
  'rnlt_laguna': {'Weakly-dominated-by': ['vw_passat'], 'Dominated-by': ['vw_passat']}, 
  'vw_passat': {'Weakly-dominated-by': [], 'Dominated-by': []},
  'bmw_320d': {'Weakly-dominated-by': [], 'Dominated-by': []},
  'cit_xsara': {'Weakly-dominated-by': [], 'Dominated-by': []},
  'rnlt_safrane': {'Weakly-dominated-by': ['bmw_320d'], 'Dominated-by': ['bmw_320d']}
}
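
The Pareto-optimal alternatives are the ones nothing else dominates, i.e. those with an empty 'Dominated-by' list. A minimal sketch to extract them from the res dictionary above:

# Keep only the alternatives that are not dominated by any other alternative.
pareto_front = [name for name, status in res.items() if not status['Dominated-by']]
print(pareto_front)  # here: alfa_156, vw_passat, bmw_320d, cit_xsara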

Weighted Sum

from EasyMCDM.models.WeightedSum import WeightedSum

data = 'tests/data/donnees.csv'
# or
data = {
  "alfa_156": [23817.0, 201.0, 8.0, 39.6, 6.0, 378.0, 31.2],
  "audi_a4": [25771.0, 195.0, 5.7, 35.8, 7.0, 440.0, 33.0],
  "cit_xantia": [25496.0, 195.0, 7.9, 37.0, 2.0, 480.0, 34.0]
}

p = WeightedSum(data=data, verbose=False)
res = p.solve(pref_indexes=[0,1,6], prefs=["min","max","min"], weights=[0.001,2,3], target='min')
print(res)

Output:

[(1, 'bmw_320d', -299.04), (2, 'alfa_156', -284.58299999999997), (3, 'rnlt_safrane', -280.84), (4, 'saab_tid', -275.817), (5, 'vw_passat', -265.856), (6, 'audi_a4', -265.229), (7, 'rnlt_laguna', -262.93600000000004), (8, 'cit_xantia', -262.504), (9, 'peugeot_406', -252.551), (10, 'cit_xsara', -244.416)]
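
The result is a list of (rank, name, score) tuples already sorted by rank, so the best alternative for the chosen target sits at the front. A minimal sketch, assuming the res list shown above:

# The first entry is rank 1, i.e. the best alternative for target='min'.
rank, best_name, best_score = res[0]
print(best_name, best_score)  # bmw_320d -299.04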

Instant-Runoff Multicriteria Optimization (IRMO)

Short description: eliminate the worst alternative on each criterion in turn; when the last criterion is reached, select the best remaining alternative.

from EasyMCDM.models.Irmo import Irmo

p = Irmo(data="tests/data/donnees.csv", verbose=False)
res = p.solve(
    indexes=[0,1,4,5], # price -> max_speed -> comfort -> trunk_space
    prefs=["min","max","min","max"]
)
print(res)

Output:

{'best': 'saab_tid'}
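
As a rough illustration only (not EasyMCDM's implementation), the elimination idea described above can be sketched as: drop the worst remaining alternative on each criterion in turn, then pick the best one on the last criterion. The helper below is hypothetical and assumes data in the dictionary form used earlier:

# Illustrative sketch of the instant-runoff elimination idea -- not the library's code.
def instant_runoff(data, indexes, prefs):
    remaining = dict(data)  # name -> list of criterion values
    # Eliminate the worst alternative on every criterion except the last one.
    for idx, pref in zip(indexes[:-1], prefs[:-1]):
        if len(remaining) == 1:
            break
        worst = (max if pref == "min" else min)(remaining, key=lambda k: remaining[k][idx])
        del remaining[worst]
    # On the last criterion, keep the best of what is left.
    idx, pref = indexes[-1], prefs[-1]
    return (min if pref == "min" else max)(remaining, key=lambda k: remaining[k][idx])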

List of methods available

Currently implemented: Promethee, Electre Iv / Is, Pareto, Weighted Sum and Instant-Runoff Multicriteria Optimization (IRMO), as illustrated in the sections above.

Build the PyPI package

Build: python setup.py sdist bdist_wheel

Upload: twine upload dist/*

Citation

If you want to cite the tool, you can use the following BibTeX entry:

@misc{EasyMCDM,
  title={EasyMCDM},
  author={Yanis Labrak and Quentin Raymondaud and Philippe Turcotte},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/qanastek/EasyMCDM}},
  year={2022}
}