This repo contains implementations of different architectures for emotion recognition in conversations.

Overview

Emotion Recognition in Conversations

Updates 🔥 🔥 🔥

| Date | Announcement |
|------|--------------|
| 03/08/2021 | 🎆 🎆 We have released a new dataset, M2H2: A Multimodal Multiparty Hindi Dataset for Humor Recognition in Conversations. Check it out: M2H2. The baselines for M2H2 are based on DialogueRNN and bcLSTM. |
| 18/05/2021 | 🎆 🎆 We have released a new repo containing models for emotion-cause recognition in conversations. Check it out: emotion-cause-extraction. Thanks to Pengfei Hong for compiling this. |
| 24/12/2020 | 🎆 🎆 Interested in recognizing emotion causes in conversations? We have just released a dataset for this. Head over to https://github.com/declare-lab/RECCON. |
| 06/10/2020 | 🎆 🎆 New paper and SOTA in emotion recognition in conversations. Refer to the COSMIC directory for the code. Read the paper: COSMIC: COmmonSense knowledge for eMotion Identification in Conversations. |
| 30/09/2020 | New paper and baselines in utterance-level dialogue understanding have been released. Read our paper Utterance-level Dialogue Understanding: An Empirical Study. Fork the code. |
| 26/07/2020 | New DialogueGCN code has been released. Please visit https://github.com/declare-lab/conv-emotion/tree/master/DialogueGCN-mianzhang. All credit goes to Mian Zhang (https://github.com/mianzhang/). |
| 11/07/2020 | Interested in reading papers on ERC or related tasks such as sarcasm detection in conversations? We have compiled a comprehensive reading list. Please visit https://github.com/declare-lab/awesome-emotion-recognition-in-conversations. |
| 07/06/2020 | New state-of-the-art results for the ERC task will be released soon. |
| 07/06/2020 | The conv-emotion repo will be maintained at https://github.com/declare-lab/. |
| 22/12/2019 | Code for DialogueGCN has been released. |
| 11/10/2019 | New paper: Conversational Transfer Learning for Emotion Recognition. |
| 09/08/2019 | New paper on Emotion Recognition in Conversation (ERC). |
| 06/03/2019 | Features and code to train DialogueRNN on the MELD dataset have been released. |
| 20/11/2018 | End-to-end versions of ICON and DialogueRNN have been released. |

COSMIC is the best-performing model in this repo. Please visit the links below to compare the models on different ERC datasets.

[Papers with Code leaderboard badges]

This repository contains implementations of several methods for emotion recognition in conversations, as well as algorithms for recognizing emotion causes in conversations.

Unlike other emotion detection models, these techniques consider the party states and inter-party dependencies when modeling the conversational context relevant to emotion recognition. The primary purpose of all these techniques is to pretrain an emotion detection model for empathetic dialogue generation.

Controlling variables in conversation

Interaction among different controlling variables during a dyadic conversation between persons X and Y. Grey and white circles represent hidden and observed variables, respectively. P represents personality, U represents utterance, S represents interlocutor state, I represents interlocutor intent, B represents background knowledge, Q represents external and sensory inputs, E represents emotion and Topic represents topic of the conversation. This can easily be extended to multi-party conversations.

Emotion recognition can be very useful for empathetic and affective dialogue generation:

Affective dialogue generation

Data Format

These networks expect an emotion/sentiment label and speaker information for each utterance in a dialogue, for example:

Party 1: I hate my girlfriend (angry)
Party 2: you got a girlfriend?! (surprise)
Party 1: yes (angry)

However, the code can be adapted to tasks where only the preceding utterances are available as context, without their corresponding labels, and the goal is to label only the present/target utterance. For example, the context is

Party 1: I hate my girlfriend
Party 2: you got a girlfriend?!

the target is

Party 1: yes (angry)

where the target emotion is angry. Moreover, this code can also be adapted to train the network in an end-to-end manner. We will push these useful changes soon.
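
For concreteness, here is one possible in-memory representation of such a dialogue; the field names are illustrative, not the exact format the data loaders expect:

```python
# Illustrative only: each utterance carries its speaker and an emotion label.
dialogue = [
    {"speaker": "party1", "utterance": "I hate my girlfriend", "emotion": "angry"},
    {"speaker": "party2", "utterance": "you got a girlfriend?!", "emotion": "surprise"},
    {"speaker": "party1", "utterance": "yes", "emotion": "angry"},
]

# Target-only variant: context utterances are unlabeled and only the last
# (target) utterance has a gold emotion label.
context = [u["utterance"] for u in dialogue[:-1]]
target = dialogue[-1]["utterance"]
target_emotion = dialogue[-1]["emotion"]
```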

Present SOTA Results

| Methods | IEMOCAP W-Avg F1 | DailyDialog Macro F1 | DailyDialog Micro F1 | MELD W-Avg F1 (3-cls) | MELD W-Avg F1 (7-cls) | EmoryNLP W-Avg F1 (3-cls) | EmoryNLP W-Avg F1 (7-cls) |
|---|---|---|---|---|---|---|---|
| RoBERTa | 54.55 | 48.20 | 55.16 | 72.12 | 62.02 | 55.28 | 37.29 |
| RoBERTa DialogueRNN | 64.76 | 49.65 | 57.32 | 72.14 | 63.61 | 55.36 | 37.44 |
| RoBERTa COSMIC | 65.28 | 51.05 | 58.48 | 73.20 | 65.21 | 56.51 | 38.11 |

COSMIC: COmmonSense knowledge for eMotion Identification in Conversations

COSMIC addresses the task of utterance-level emotion recognition in conversations using commonsense knowledge. It is a new framework that incorporates different elements of commonsense, such as mental states, events, and causal relations, and builds upon them to learn interactions between the interlocutors participating in a conversation. Current state-of-the-art methods often encounter difficulties in context propagation, emotion-shift detection, and differentiating between related emotion classes. By learning distinct commonsense representations, COSMIC addresses these challenges and achieves new state-of-the-art results for emotion recognition on four different benchmark conversational datasets.
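
As a rough illustration of the central idea only (not COSMIC's exact architecture, which is described in the paper and the COSMIC directory), the sketch below concatenates COMET commonsense vectors with RoBERTa utterance vectors before a recurrent context encoder; all module names and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class CommonsenseFusion(nn.Module):
    """Toy sketch: fuse a COMET commonsense vector with an utterance
    vector, then encode the dialogue with a GRU. Dimensions are made up."""
    def __init__(self, utt_dim=1024, cs_dim=768, hidden=256):
        super().__init__()
        self.gru = nn.GRU(utt_dim + cs_dim, hidden, batch_first=True)

    def forward(self, utt_feats, cs_feats):
        # utt_feats: (batch, seq_len, utt_dim)  RoBERTa utterance features
        # cs_feats:  (batch, seq_len, cs_dim)   COMET commonsense features
        fused = torch.cat([utt_feats, cs_feats], dim=-1)
        out, _ = self.gru(fused)
        return out  # contextual states used for emotion classification

out = CommonsenseFusion()(torch.randn(2, 5, 1024), torch.randn(2, 5, 768))
```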


Execution

First, download the RoBERTa and COMET features here and keep them in the appropriate directories in COSMIC/erc-training. Training and evaluation on the four datasets can then be run as follows:

  1. IEMOCAP: python train_iemocap.py --active-listener
  2. DailyDialog: python train_dailydialog.py --active-listener --class-weight --residual
  3. MELD Emotion: python train_meld.py --active-listener --attention simple --dropout 0.5 --rec_dropout 0.3 --lr 0.0001 --mode1 2 --classify emotion --mu 0 --l2 0.00003 --epochs 60
  4. MELD Sentiment: python train_meld.py --active-listener --class-weight --residual --classify sentiment
  5. EmoryNLP Emotion: python train_emorynlp.py --active-listener --class-weight --residual
  6. EmoryNLP Sentiment: python train_emorynlp.py --active-listener --class-weight --residual --classify sentiment

Citation

Please cite the following paper if you find this code useful in your work.

COSMIC: COmmonSense knowledge for eMotion Identification in Conversations. D. Ghosal, N. Majumder, A. Gelbukh, R. Mihalcea, & S. Poria.  Findings of EMNLP 2020.

TL-ERC: Emotion Recognition in Conversations with Transfer Learning from Generative Conversation Modeling

TL-ERC is a transfer learning-based framework for ERC. It pre-trains a generative dialogue model and transfers context-level weights that include affective knowledge into the target discriminative model for ERC.

TL-ERC framework
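
Conceptually, the transfer step copies the context-level (dialogue encoder) parameters of the pre-trained generative model into the matching sub-module of the ERC classifier. Below is a minimal sketch of that idea with hypothetical module and key names; the actual checkpoint layout depends on the HRED implementation used for pre-training:

```python
import torch
import torch.nn as nn

class ErcClassifier(nn.Module):
    """Hypothetical ERC classifier with a context-level dialogue encoder."""
    def __init__(self, utt_dim=300, ctx_dim=256, n_classes=6):
        super().__init__()
        self.context_encoder = nn.GRU(utt_dim, ctx_dim, batch_first=True)
        self.head = nn.Linear(ctx_dim, n_classes)

    def forward(self, utt_feats):
        ctx, _ = self.context_encoder(utt_feats)
        return self.head(ctx)

classifier = ErcClassifier()
# Assumed to contain a state-dict-like mapping of parameter names to tensors.
generative_state = torch.load("generative_weights/cornell_weights.pkl")

# Copy only the context-level (dialogue encoder) weights; the utterance
# encoder and classification head are trained from scratch on the ERC data.
context_weights = {k: v for k, v in generative_state.items()
                   if k.startswith("context_encoder.")}
classifier.load_state_dict(context_weights, strict=False)
```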

Setting up

  1. Setup an environment with Conda:

    conda env create -f environment.yml
    conda activate TL_ERC
    cd TL_ERC
    python setup.py
  2. Download dataset files IEMOCAP, DailyDialog and store them in ./datasets/.

  3. Download the pre-trained weights of HRED on Cornell and Ubuntu datasets and store them in ./generative_weights/

  4. [Optional]: To train new generative weights from dialogue models, refer to https://github.com/ctr4si/A-Hierarchical-Latent-Structure-for-Variational-Conversation-Modeling .

Run the ERC classifier with pre-trained weights

  1. cd bert_model
  2. python train.py --load_checkpoint=../generative_weights/cornell_weights.pkl --data=iemocap
    • Change cornell to ubuntu and iemocap to dailydialog for other dataset combinations.
    • Drop --load_checkpoint to avoid initializing contextual weights.
    • To modify hyperparameters, check configs.py.

[Optional] Create ERC Dataset splits

  1. Set glove path in the preprocessing files.
  2. python iemocap_preprocess.py. Similarly for dailydialog.

Citation

Please cite the following paper if you find this code useful in your work.

Conversational transfer learning for emotion recognition. Hazarika, D., Poria, S., Zimmermann, R., & Mihalcea, R. (2020). Information Fusion.

DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation

DialogueGCN (Dialogue Graph Convolutional Network) is a graph neural network-based approach to ERC. We leverage the self- and inter-speaker dependencies of the interlocutors to model conversational context for emotion recognition. Through the graph network, DialogueGCN addresses the context propagation issues present in current RNN-based methods. DialogueGCN is naturally suited for multi-party dialogues.

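As a rough sketch of the graph construction idea (the relation typing is simplified relative to the paper, which distinguishes ordered speaker pairs): each utterance is a node, edges connect utterances within a fixed past/future window, and each edge carries a relation type derived from the speaker pair and the temporal direction:

```python
import torch

def build_edges(speakers, window_past=10, window_future=10):
    """Toy version of DialogueGCN-style graph construction.

    speakers: list of speaker ids, one per utterance in the dialogue.
    Returns edge_index of shape (2, E) and a relation type per edge."""
    n = len(speakers)
    src, dst, rel = [], [], []
    for i in range(n):
        lo = max(0, i - window_past)
        hi = min(n, i + window_future + 1)
        for j in range(lo, hi):
            src.append(j)
            dst.append(i)
            # Relation type from (same vs. different speaker, past vs. future).
            same = int(speakers[i] == speakers[j])
            future = int(j > i)
            rel.append(2 * same + future)
    return torch.tensor([src, dst]), torch.tensor(rel)

edge_index, edge_type = build_edges(["A", "B", "A", "A", "B"])
```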

Requirements

  • Python 3
  • PyTorch 1.0
  • PyTorch Geometric 1.3
  • Pandas 0.23
  • Scikit-Learn 0.20
  • TensorFlow (optional; required for tensorboard)
  • tensorboardX (optional; required for tensorboard)

Execution

Note: PyTorch Geometric makes heavy use of CUDA atomic operations and is a source of non-determinism. To reproduce the results reported in the paper, we recommend using the following execution command. Note that this script will run on the CPU. With this command we obtained weighted-average F1 scores of 64.67 on our machine and 64.44 on Google Colaboratory for the IEMOCAP dataset.

  1. IEMOCAP dataset: python train_IEMOCAP.py --base-model 'LSTM' --graph-model --nodal-attention --dropout 0.4 --lr 0.0003 --batch-size 32 --class-weight --l2 0.0 --no-cuda

Citation

Please cite the following paper if you find this code useful in your work.

DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation. D. Ghosal, N. Majumder, S. Poria, N. Chhaya, & A. Gelbukh. EMNLP-IJCNLP (2019), Hong Kong, China.

DialogueGCN-mianzhang: DialogueGCN Implementation by Mian Zhang

PyTorch implementation of the paper "DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation".

Running

You can run the whole pipeline very easily. Take the IEMOCAP corpus as an example:

Step 1: Preprocess.

./scripts/iemocap.sh preprocess

Step 2: Train.

./scripts/iemocap.sh train

Requirements

  • Python 3
  • PyTorch 1.0
  • PyTorch Geometric 1.4.3
  • Pandas 0.23
  • Scikit-Learn 0.20

Performance Comparison

|  | Dataset | Weighted F1 |
|---|---|---|
| Original | IEMOCAP | 64.18% |
| This implementation | IEMOCAP | 64.10% |

Credits

Mian Zhang (Github: mianzhang)

Citation

Please cite the following paper if you find this code useful in your work.

DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation. D. Ghosal, N. Majumder, S. Poria, N. Chhaya, & A. Gelbukh. EMNLP-IJCNLP (2019), Hong Kong, China.

DialogueRNN: An Attentive RNN for Emotion Detection in Conversations

DialogueRNN is a customized recurrent neural network (RNN) that profiles each speaker in a conversation/dialogue on the fly while simultaneously modeling the context of the conversation. The model can easily be extended to multi-party scenarios and can also be used as a pretraining model for empathetic dialogue generation.

Note: the default settings (hyperparameters and command-line arguments) in the code are meant for BiDialogueRNN+Att. The user needs to tune the settings for the other variants.
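
For intuition, here is a minimal sketch of the DialogueRNN recurrences: a global GRU tracks conversation context, a party GRU updates the current speaker's state, and an emotion GRU produces the emotion representation. The attention over past global states is omitted and the input wiring is simplified, so this is illustrative rather than the exact model:

```python
import torch
import torch.nn as nn

class DialogueRNNSketch(nn.Module):
    """Simplified DialogueRNN cell: one step per utterance."""
    def __init__(self, d_utt=100, d_state=100):
        super().__init__()
        self.global_cell = nn.GRUCell(d_utt + d_state, d_state)   # context
        self.party_cell = nn.GRUCell(d_utt + d_state, d_state)    # speaker state
        self.emotion_cell = nn.GRUCell(d_state, d_state)          # emotion state

    def step(self, utt, g_prev, q_prev_speaker, e_prev):
        # Global state sees the utterance and the speaker's previous state.
        g = self.global_cell(torch.cat([utt, q_prev_speaker], dim=-1), g_prev)
        # Party state sees the utterance and (here) the latest global context;
        # the paper attends over all past global states instead.
        q = self.party_cell(torch.cat([utt, g], dim=-1), q_prev_speaker)
        # Emotion representation evolves from the updated party state.
        e = self.emotion_cell(q, e_prev)
        return g, q, e

cell = DialogueRNNSketch()
u = torch.randn(1, 100)
g = q = e = torch.zeros(1, 100)
g, q, e = cell.step(u, g, q, e)
```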

Requirements

  • Python 3
  • PyTorch 1.0
  • Pandas 0.23
  • Scikit-Learn 0.20
  • TensorFlow (optional; required for tensorboard)
  • tensorboardX (optional; required for tensorboard)

Dataset Features

Please extract the contents of DialogueRNN_features.zip.

Execution

  1. IEMOCAP dataset: python train_IEMOCAP.py
  2. AVEC dataset: python train_AVEC.py

Command-Line Arguments

  • --no-cuda: Does not use GPU
  • --lr: Learning rate
  • --l2: L2 regularization weight
  • --rec-dropout: Recurrent dropout
  • --dropout: Dropout
  • --batch-size: Batch size
  • --epochs: Number of epochs
  • --class-weight: Class weight (not applicable to AVEC)
  • --active-listener: Explicit listener mode
  • --attention: Attention type
  • --tensorboard: Enables tensorboard log
  • --attribute: Attribute 1 to 4 (only for AVEC; 1 = valence, 2 = activation/arousal, 3 = anticipation/expectation, 4 = power)
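
For example, a plausible invocation combining several of these flags (the values shown are illustrative, not tuned):

python train_IEMOCAP.py --active-listener --attention general --class-weight --lr 0.0001 --dropout 0.1 --epochs 60 --tensorboard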

Citation

Please cite the following paper if you find this code useful in your work.

DialogueRNN: An Attentive RNN for Emotion Detection in Conversations. N. Majumder, S. Poria, D. Hazarika, R. Mihalcea, E. Cambria, and G. Alexander. AAAI (2019), Honolulu, Hawaii, USA

ICON

Interactive COnversational memory Network (ICON) is a multimodal emotion detection framework that extracts multimodal features from conversational videos and hierarchically models the self- and inter-speaker emotional influences into global memories. Such memories generate contextual summaries which aid in predicting the emotional orientation of utterance-videos.

ICON framework

Requirements

  • python 3.6.5
  • pandas==0.23.3
  • tensorflow==1.9.0
  • numpy==1.15.0
  • scikit_learn==0.20.0

Execution

  1. cd ICON

  2. Unzip the data as follows:

    • Download the features for IEMOCAP using this link.
    • Unzip the folder and place it in the location: /ICON/IEMOCAP/data/. Sample command to achieve this: unzip {path_to_zip_file} -d ./IEMOCAP/
  3. Train the ICON model:

    • python train_iemocap.py for IEMOCAP

Citation

ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection. D. Hazarika, S. Poria, R. Mihalcea, E. Cambria, and R. Zimmermann. EMNLP (2018), Brussels, Belgium

CMN

CMN is a neural framework for emotion detection in dyadic conversations. It leverages multimodal signals from the text, audio, and visual modalities, and specifically incorporates speaker-specific dependencies into its architecture for context modeling. Summaries are then generated from this context using multi-hop memory networks.
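
Below is a minimal sketch of the multi-hop memory attention at the core of CMN; the real model maintains separate GRU-encoded, speaker-specific memories, which are omitted here:

```python
import torch
import torch.nn.functional as F

def memory_hops(query, memories, hops=3):
    """query: (d,) utterance representation; memories: (n, d) context bank.
    Each hop attends over the memories and refines the query."""
    for _ in range(hops):
        scores = memories @ query                # (n,) match scores
        attn = F.softmax(scores, dim=0)          # attention over memories
        summary = attn @ memories                # (d,) contextual summary
        query = query + summary                  # refined query for next hop
    return query

q = torch.randn(64)
mem = torch.randn(10, 64)
out = memory_hops(q, mem)
```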

Requirements

  • python 3.6.5
  • pandas==0.23.3
  • tensorflow==1.9.0
  • numpy==1.15.0
  • scikit_learn==0.20.0

Execution

  1. cd CMN

  2. Unzip the data as follows:

    • Download the features for IEMOCAP using this link.
    • Unzip the folder and place it in the location: /CMN/IEMOCAP/data/. Sample command to achieve this: unzip {path_to_zip_file} -d ./IEMOCAP/
  3. Train the CMN model:

    • python train_iemocap.py for IEMOCAP

Citation

Please cite the following paper if you find this code useful in your work.

Hazarika, D., Poria, S., Zadeh, A., Cambria, E., Morency, L.P. and Zimmermann, R., 2018. Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) (Vol. 1, pp. 2122-2132).

bc-LSTM-pytorch

bc-LSTM-pytorch is a network that uses conversational context to detect the emotion of an utterance in a dialogue. The model is simple but effective: it uses only an LSTM to model the temporal relations among the utterances. In this repo we provide the data released by the organizers of SemEval 2019 Task 3, "Emotion Recognition in Context". In this task only three utterances per dialogue are provided: utterance1 (user1), utterance2 (user2), and utterance3 (user1), consecutively. The task is to predict the emotion label of utterance3; emotion labels for the individual utterances are not provided. However, if your data contains an emotion label for each utterance, you can still use this code and adapt it accordingly. Hence, this code is also applicable to datasets like MOSI, MOSEI, IEMOCAP, AVEC, DailyDialog, etc. Unlike CMN, ICON, and DialogueRNN, bc-LSTM does not make use of speaker information.

bc-LSTM framework
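
A minimal sketch of the idea (dimensions and class count are placeholders): utterance features pass through a bidirectional LSTM over the dialogue, and each contextual state is classified independently:

```python
import torch
import torch.nn as nn

class BcLSTMSketch(nn.Module):
    """Context-dependent utterance classifier: a BiLSTM over the sequence
    of utterance features, then a per-utterance classifier."""
    def __init__(self, d_utt=100, d_hidden=100, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(d_utt, d_hidden, bidirectional=True,
                            batch_first=True)
        self.classifier = nn.Linear(2 * d_hidden, n_classes)

    def forward(self, utt_feats):
        # utt_feats: (batch, n_utterances, d_utt)
        ctx, _ = self.lstm(utt_feats)
        return self.classifier(ctx)  # (batch, n_utterances, n_classes)

logits = BcLSTMSketch()(torch.randn(2, 3, 100))
```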

Requirements

  • python 3.6.5
  • pandas==0.23.3
  • PyTorch 1.0
  • numpy==1.15.0
  • scikit_learn==0.20.0

Execution

  1. cd bc-LSTM-pytorch

  2. Train the bc-LSTM model:

    • python train_IEMOCAP.py for IEMOCAP

Citation

Please cite the following paper if you find this code useful in your work.

Poria, S., Cambria, E., Hazarika, D., Majumder, N., Zadeh, A. and Morency, L.P., 2017. Context-dependent sentiment analysis in user-generated videos. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Vol. 1, pp. 873-883).

bc-LSTM

Keras implementation of bc-LSTM.

Requirements

  • python 3.6.5
  • pandas==0.23.3
  • tensorflow==1.9.0
  • numpy==1.15.0
  • scikit_learn==0.20.0
  • keras==2.1

Execution

  1. cd bc-LSTM

  2. Train the bc-LSTM model:

    • python baseline.py -config testBaseline.config for IEMOCAP

Citation

Please cite the following paper if you find this code useful in your work.

Poria, S., Cambria, E., Hazarika, D., Majumder, N., Zadeh, A. and Morency, L.P., 2017. Context-dependent sentiment analysis in user-generated videos. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Vol. 1, pp. 873-883).

Recognizing Emotion Cause in Conversations

This repository also contains implementations of different architectures to detect emotion cause in conversations.

Emotion cause types in conversation

(a) No context. (b) Unmentioned Latent Cause. (c) Distinguishing emotion cause from emotional expressions.

Emotion cause types in conversation

(a) Self-contagion. (b) The cause of the emotion is primarily a stable mood of the speaker induced in previous dialogue turns. (c) A hybrid type with both inter-personal emotional influence and self-contagion.

Baseline Results on RECCON dataset (DailyDialog Fold)

| Model | emo_f1 | pos_f1 | neg_f1 | macro_avg |
|---|---|---|---|---|
| ECPE-2D cross_road (0 transform layers) | 52.76 | 52.39 | 95.86 | 73.62 |
| ECPE-2D window_constrained (1 transform layer) | 70.48 | 48.80 | 93.85 | 71.32 |
| ECPE-2D cross_road (2 transform layers) | 52.76 | 55.50 | 94.96 | 75.23 |
| ECPE-MLL | - | 48.48 | 94.68 | 71.58 |
| Rank Emotion Cause | - | 33.00 | 97.30 | 65.15 |
| RoBERTa-base | - | 64.28 | 88.74 | 76.51 |
| RoBERTa-large | - | 66.23 | 87.89 | 77.06 |

ECPE-2D on RECCON dataset

ECPE-2D

Citation: Please cite the following papers if you use this code.

  • Recognizing Emotion Cause in Conversations. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Romila Ghosh, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea. arXiv (2020). [pdf]
  • Zixiang Ding, Rui Xia, Jianfei Yu. ECPE-2D: Emotion-Cause Pair Extraction based on Joint Two-Dimensional Representation, Interaction and Prediction. ACL 2020. [pdf]

Rank-Emotion-Cause on RECCON dataset


Citation: Please cite the following papers if you use this code.

  • Recognizing Emotion Cause in Conversations. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Romila Ghosh, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea. arXiv (2020). [pdf]
  • Penghui Wei, Jia Zhao, Wenji Mao. Effective Inter-Clause Modeling for End-to-End Emotion-Cause Pair Extraction. In Proc. of ACL 2020: The 58th Annual Meeting of the Association for Computational Linguistics, pages 3171-3181. [pdf]

ECPE-MLL on RECCON dataset


Citation: Please cite the following papers if you use this code.

  • Recognizing Emotion Cause in Conversations. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Romila Ghosh, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea. arXiv (2020). [pdf]
  • Zixiang Ding, Rui Xia, Jianfei Yu. End-to-End Emotion-Cause Pair Extraction based on Sliding Window Multi-Label Learning. EMNLP 2020. [pdf]

RoBERTa and SpanBERT Baselines on RECCON dataset

The RoBERTa and SpanBERT baselines as explained in the original RECCON paper. Refer to this.

Citation: Please cite the following papers if you use this code.

  • Recognizing Emotion Cause in Conversations. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Romila Ghosh, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea. arXiv (2020). [pdf]