Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)

Overview

TOPSIS implementation in Python

Ching-Lai Hwang and Yoon introduced TOPSIS in 1981 as part of their work on Multiple Criteria Decision Making (MCDM) and Multiple Criteria Decision Analysis (MCDA) [1]. TOPSIS strives to minimize the distance between the chosen alternative and the positive ideal solution while maximizing its distance from the negative ideal solution [2]. In a nutshell, TOPSIS helps researchers rank alternatives against a set of criteria. The alternatives and their scores on each criterion are arranged in the following decision matrix:

$$X = [x_{ij}]_{m \times n} = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}$$

where $x_{ij}$ is the score of alternative $i$ ($i = 1, \dots, n$... with $m$ alternatives and $n$ criteria). Some criteria may be more influential than others, so each criterion is assigned a weight reflecting its importance, and the $n$ weights are required to sum to one:

$$\sum_{j=1}^{n} w_j = 1$$
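As a small illustration of this setup (hypothetical data and variable names, not taken from the repository), the decision matrix, weights, and criterion types can be represented with NumPy:

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives (rows) x 3 criteria (columns).
X = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
    [275.0, 32.0,  8.0],
])

# Criterion weights; they must sum to one.
weights = np.array([0.5, 0.3, 0.2])
assert np.isclose(weights.sum(), 1.0)

# Criterion types: False for cost criteria (smaller is better), True for benefit criteria.
is_benefit = np.array([False, True, True])
```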

Jahanshahloo et al. (2006) describe TOPSIS in six main phases, as follows:

1) Normalized Decision Matrix

Normalizing the decision matrix is the first phase of TOPSIS. Researchers have proposed several normalization schemes; this section covers the most commonly used ones. Criteria (attributes) fall into two categories, cost and benefit, and each normalization method therefore has two formulas: one for benefit criteria and one for cost criteria. According to Vafaei et al. (2018), commonly used normalization methods include:

For a benefit criterion $j$:

- Linear (max): $r_{ij} = x_{ij} / \max_i x_{ij}$
- Linear (max–min): $r_{ij} = (x_{ij} - \min_i x_{ij}) / (\max_i x_{ij} - \min_i x_{ij})$
- Linear (sum): $r_{ij} = x_{ij} / \sum_i x_{ij}$
- Vector: $r_{ij} = x_{ij} / \sqrt{\sum_i x_{ij}^2}$

For a cost criterion, the corresponding cost-oriented form of each formula is used (for example, $r_{ij} = \min_i x_{ij} / x_{ij}$ for the linear (max) method).

All of the above normalization methods are coded in Normalization.py, and the related file Normalized_Decision_Matrix.py applies the selected normalization method to the decision matrix. The result is a normalized decision matrix:

$$R = [r_{ij}]_{m \times n}, \qquad i = 1, \dots, m, \; j = 1, \dots, n$$
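A minimal sketch of the vector normalization variant, continuing the hypothetical setup above (the repository's Normalization.py covers the other methods as well, and its function names may differ):

```python
import numpy as np

def vector_normalize(X: np.ndarray) -> np.ndarray:
    """Divide each column by its Euclidean norm (the classic TOPSIS normalization)."""
    return X / np.sqrt((X ** 2).sum(axis=0))

R = vector_normalize(X)  # normalized decision matrix
```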

2) Weighted Normalized Decision Matrix

The weighted normalized decision matrix is obtained by multiplying each column of the normalized decision matrix by the corresponding criterion weight:

$$v_{ij} = w_j \, r_{ij}, \qquad i = 1, \dots, m, \; j = 1, \dots, n$$

This multiplication is performed in the Weighted_Normalized_Decision_Matrix.py file. Now, we have a weighted normalized decision matrix as follows:

$$V = [v_{ij}]_{m \times n} = [w_j \, r_{ij}]_{m \times n}$$
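Continuing the sketch, the weighting step is a single element-wise multiplication (names here are illustrative, not necessarily those used in Weighted_Normalized_Decision_Matrix.py):

```python
import numpy as np

def weight_matrix(R: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Multiply each column of the normalized matrix by its criterion weight."""
    return R * weights  # NumPy broadcasting applies the weights column-wise

V = weight_matrix(R, weights)  # weighted normalized decision matrix
```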

3) Ideal Solutions

As mentioned above, TOPSIS strives to minimize the distance between the chosen alternative and the positive ideal solution while maximizing its distance from the negative ideal solution. But what are the positive and negative ideal solutions?

If a criterion is benefit-based (profit), the positive ideal solution (PIS) and the negative ideal solution (NIS) for that criterion are:

$$v_j^+ = \max_i v_{ij}, \qquad v_j^- = \min_i v_{ij}$$

If a criterion is cost-based, the positive ideal solution (PIS) and the negative ideal solution (NIS) for that criterion are:

$$v_j^+ = \min_i v_{ij}, \qquad v_j^- = \max_i v_{ij}$$

In our code, ideal solutions are calculated in Ideal_Solution.py.
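A sketch of this step, continuing from the weighted matrix above (the repository's Ideal_Solution.py may organize it differently):

```python
import numpy as np

def ideal_solutions(V: np.ndarray, is_benefit: np.ndarray):
    """Return the positive (PIS) and negative (NIS) ideal solutions per criterion."""
    pis = np.where(is_benefit, V.max(axis=0), V.min(axis=0))
    nis = np.where(is_benefit, V.min(axis=0), V.max(axis=0))
    return pis, nis

v_plus, v_minus = ideal_solutions(V, is_benefit)
```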

4) Separation Measures

We need a measure of how far each alternative is from the ideal solutions. This measure has two parts. The separation of each alternative from the PIS is calculated as follows:

$$S_i^+ = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^+\right)^2}, \qquad i = 1, \dots, m$$

Also, the separation of each alternative from the NIS is calculated as follows:

$$S_i^- = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^-\right)^2}, \qquad i = 1, \dots, m$$
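Both separation measures are Euclidean distances and can be sketched as follows (continuing the example; illustrative names only):

```python
import numpy as np

def separation_measures(V: np.ndarray, pis: np.ndarray, nis: np.ndarray):
    """Euclidean distance of each alternative from the PIS and from the NIS."""
    s_plus = np.sqrt(((V - pis) ** 2).sum(axis=1))
    s_minus = np.sqrt(((V - nis) ** 2).sum(axis=1))
    return s_plus, s_minus

s_plus, s_minus = separation_measures(V, v_plus, v_minus)
```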

5) Closeness to the Ideal Solution

Now that the distances between the alternatives and the ideal solutions have been calculated, we rank the alternatives according to how close they are to the ideal solution. The relative closeness is calculated by the following formula:

$$C_i = \frac{S_i^-}{S_i^+ + S_i^-}, \qquad i = 1, \dots, m$$

It is clear that:

$$0 \leq C_i \leq 1,$$

with $C_i = 1$ when an alternative coincides with the PIS and $C_i = 0$ when it coincides with the NIS, so a larger $C_i$ means a better alternative.
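A sketch of the closeness computation (continuing the example):

```python
import numpy as np

def closeness(s_plus: np.ndarray, s_minus: np.ndarray) -> np.ndarray:
    """Relative closeness to the ideal solution; values always lie in [0, 1]."""
    return s_minus / (s_plus + s_minus)

C = closeness(s_plus, s_minus)
```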

6) Ranking

The alternatives are now ranked in decreasing order of their closeness to the ideal solution. The quantities in steps 4 and 5 (the separation measures and the closeness coefficient) are calculated in Distance_Between_Ideal_and_Alternatives.py.
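The ranking itself is a simple sort on the closeness values (continuing the example):

```python
import numpy as np

# Rank alternatives in decreasing order of closeness: the first index is the best one.
ranking = np.argsort(-C)
print("Alternatives from best to worst:", ranking)
```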

7) TOPSIS

In this final step, all of the previous .py modules are combined and used in an integrated way.
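For reference, a compact end-to-end sketch of the whole procedure in one function (a stand-alone illustration using vector normalization; the repository's integrated module may differ in structure and naming):

```python
import numpy as np

def topsis(X: np.ndarray, weights: np.ndarray, is_benefit: np.ndarray) -> np.ndarray:
    """Return the relative closeness of each alternative (higher is better)."""
    R = X / np.sqrt((X ** 2).sum(axis=0))                        # 1) normalization (vector)
    V = R * weights                                              # 2) weighting
    pis = np.where(is_benefit, V.max(axis=0), V.min(axis=0))     # 3) ideal solutions
    nis = np.where(is_benefit, V.min(axis=0), V.max(axis=0))
    s_plus = np.sqrt(((V - pis) ** 2).sum(axis=1))               # 4) separation measures
    s_minus = np.sqrt(((V - nis) ** 2).sum(axis=1))
    return s_minus / (s_plus + s_minus)                          # 5) closeness

scores = topsis(X, weights, is_benefit)
print(np.argsort(-scores))  # 6) ranking, best alternative first
```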

References

  1. Hwang, C.L. and Yoon, K., 1981. Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag. https://www.springer.com/gp/book/9783540105589
  2. Assari, A., Mahesh, T. and Assari, E., 2012. Role of public participation in sustainability of historical city: usage of TOPSIS method. Indian Journal of Science and Technology, 5(3), pp.2289-2294.
  3. Jahanshahloo, G.R., Lotfi, F.H. and Izadikhah, M., 2006. An algorithmic method to extend TOPSIS for decision-making problems with interval data. Applied Mathematics and Computation, 175(2), pp.1375-1384.
  4. Vafaei, N., Ribeiro, R.A. and Camarinha-Matos, L.M., 2018. Data normalization techniques in decision making: case study with TOPSIS method. International Journal of Information and Decision Sciences, 10(1), pp.19-38.