Fake News Detection

Overview

The proliferation of disinformation across social media has led to the application of deep learning techniques for detecting fake news. However, it is difficult to understand how deep learning models decide what is fake or real news, and these models are also vulnerable to adversarial attacks. In this project, we test the resilience of a fake news detector against a set of adversarial attacks. Our results indicate that a deep learning model remains vulnerable to adversarial attacks, and is alarmingly vulnerable to generic attacks: certain short sequences of text that, when inserted into nearly any sample, cause it to be misclassified. We explore how these generic attacks against text classifiers can be detected, and how future models can be made more resilient against them.

Dataset Description

Our fake news model and dataset are taken from this GitHub repo.

  • train.csv: A full training dataset with the following attributes:

    • id: unique id for a news article
    • title: the title of a news article
    • author: author of the news article
    • text: the text of the article; could be incomplete
    • label: a label that marks the article as potentially unreliable
      • 1: unreliable
      • 0: reliable
  • test.csv: A test dataset with all the same attributes as train.csv, but without the label.
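
The following is a minimal loading sketch, assuming the two CSV files sit in the working directory and follow the column layout listed above:

```python
import pandas as pd

train = pd.read_csv("train.csv")   # columns: id, title, author, text, label
test = pd.read_csv("test.csv")     # same columns, without the label

print(train["label"].value_counts())          # 1 = unreliable, 0 = reliable
print(train[["title", "text"]].head())
```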

Adversarial Text Generation

It's difficult to generate adversarial samples when working with text, which is discrete. One workaround, proposed by J. Gao et al., is to introduce small text perturbations, such as misspelled words, to mount a black-box attack on text classification models. Another approach, taken by N. Papernot et al., is to compute the gradient with respect to the word embeddings of the sample text. Our approach uses Papernot’s algorithm to generate our adversarial samples. While Gao’s method is extremely effective and barely alters the meaning of the text samples, we wanted to see whether we could create valid adversarial samples by changing the words themselves rather than their characters.
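
For contrast, a character-level perturbation in the spirit of Gao’s black-box approach might look like the sketch below; here the word to perturb is picked at random, whereas the actual attack scores words by their effect on the classifier:

```python
import random

def swap_perturbation(sentence, seed=0):
    """Swap two adjacent interior characters of one randomly chosen word."""
    random.seed(seed)
    words = sentence.split()
    i = random.randrange(len(words))
    w = words[i]
    if len(w) > 3:
        j = random.randrange(1, len(w) - 2)
        words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

print(swap_perturbation("the council was established to discuss the coalition"))
```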

Methodology

Our original goal was to create a method that could mutate text samples so that they would be misclassified by the model. We accomplished this by implementing the algorithm set out by Papernot in Crafting Adversarial Input Sequences. The algorithm generates a white-box adversarial example using the model’s Jacobian matrix. Random words from the original text sample are mutated: each mutation is found by searching the embedding space for a word such that the sign of the difference between the new word’s embedding and the original word’s embedding is closest to the sign of the Jacobian with respect to the original word. The resulting words have an embedding direction that most closely resembles the direction the model’s Jacobian indicates as most impactful.
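
The word-selection step can be sketched as follows. This is only an illustration with a toy PyTorch LSTM and made-up vocabulary and layer sizes; the project’s actual model and framework may differ:

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, SEQ = 5000, 32, 64, 20

class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.fc = nn.Linear(HID, 1)          # output: P(unreliable)

    def forward_from_embeddings(self, e):
        out, _ = self.lstm(e)
        return torch.sigmoid(self.fc(out[:, -1]))

model = ToyDetector()
tokens = torch.randint(1, VOCAB, (1, SEQ))   # stand-in tokenized article
pos = 7                                      # word position chosen for mutation

# Gradient (one row of the Jacobian) of the "unreliable" score with respect to
# the embedding of the chosen word.
e = model.emb(tokens).detach().requires_grad_(True)
model.forward_from_embeddings(e).sum().backward()
jac = e.grad[0, pos]

# Pick the vocabulary word whose offset from the original embedding has a sign
# pattern closest to the direction that lowers the "unreliable" score.
orig = e.detach()[0, pos]
directions = torch.sign(model.emb.weight.detach() - orig)   # (VOCAB, EMB)
target = -torch.sign(jac)
replacement = torch.argmin((directions - target).abs().sum(dim=1))
print("replacement token id:", int(replacement))
```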

A fake news text sample modified to be classified as reliable is shown below:

Council of Elders Intended to Set Up Anti-ISIS Coalition by Jason Ditz, October said 31, 2016 Share This ISIS has killed a number of Afghan tribal elders and wounded several more in Nangarhar Province’s main city of Jalalabad today, with a suicide bomber from the group targeting a meeting of the council of elders in the city. The details are still scant, but ISIS claims that the council was established in part to discuss the formation of a tribal anti-ISIS coalition in the area. They claimed 15 killed and 25 wounded, labeling the victims “apostates.” Afghan 000 government officials put the toll a lot lower, saying only four were killed and seven mr wounded in the attack. Nangarhar is the main base of operations for ISIS forces in Afghanistan, though they’ve recently begun to pop up around several other provinces. Whether the council was at the point of establishing an anti-ISIS coalition or not, this is in keeping with the group mr's reaction to any sign of growing local resistance, with ISIS having similarly made an example of tribal groups in Iraq and Syria during their establishment there. Last 5 posts by Jason Ditz

We also discovered a phenomenon where adding certain sequences of text to samples would cause them to be misclassified, without any additional modification of the original text. To discover more such sequences, we took three different approaches: generating sequences based on the sentiments of the word bank, using Papernot’s algorithm to append new sequences, and creating sequences by hand.

Modified Papernot

Papernot’s original algorithm mutates existing words in an input text to generate the adversarial text. However, our LSTM model pads the input, leaving slots for blank words when the input is short enough. We modify Papernot’s algorithm to mutate two “blank” words at the end of the input sequence. This generates new sequences of text that can then be appended to other samples to see whether they serve as generic attacks.
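
A minimal sketch of the padding side of this modification is shown below; the sequence length, pad id, and token ids are illustrative:

```python
import numpy as np

SEQ_LEN, PAD_ID = 20, 0

def pad(token_ids):
    """Right-pad a tokenized article to SEQ_LEN with PAD_ID."""
    return np.array(token_ids + [PAD_ID] * (SEQ_LEN - len(token_ids)))

article = [87, 412, 9, 1503, 66, 23, 901]    # stand-in tokenized article
padded = pad(article)

# The first two pad slots after the article act as the "blank" words the attack
# is allowed to rewrite; the Jacobian-guided word selection from the previous
# sketch is then run on these positions instead of on the article's own words.
blank_slots = [len(article), len(article) + 1]
print(padded, blank_slots)
```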

The modified Papernot algorithm generated two-word sequences made up of the words ‘000’, ‘said’, and ‘mr’ in various orders, closely resembling the word substitutions produced by the baseline Papernot algorithm. The modified algorithm can be expected to reuse words identified by the baseline method, since both variants rely on the model’s Jacobian matrix when selecting replacement words. When tested against all unreliable samples, the generated sequences shift the model’s confidence enough to misclassify a majority of them as reliable.
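
A hedged sketch of how an appended sequence’s effect could be measured is shown below, using a random stand-in classifier and corpus in place of the project’s model and data; only the counting logic reflects the actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB = 5000, 32
emb = rng.normal(size=(VOCAB, EMB))          # stand-in embedding table
w = rng.normal(size=EMB)                     # stand-in classifier weights

def p_unreliable(token_ids):
    """Stand-in classifier: logistic score over mean-pooled word embeddings."""
    x = emb[token_ids].mean(axis=0)
    return 1.0 / (1.0 + np.exp(-x @ w))

# Stand-in corpus of articles labelled unreliable, plus a candidate two-word
# trigger (e.g. the token ids for "said" and "mr").
articles = [rng.integers(1, VOCAB, size=int(rng.integers(20, 200))).tolist()
            for _ in range(500)]
trigger = [123, 456]

caught = [a for a in articles if p_unreliable(a) >= 0.5]
flipped = sum(p_unreliable(a + trigger) < 0.5 for a in caught)
print(f"samples flipped to 'reliable': {flipped}/{len(caught)}")
```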

Handcraft

Our simplest approach was to look for effective sequences of text by hand. This involved examining how the model performed on the training set, how confident it was on particular samples, and looking for patterns among misclassified samples. We mainly relied on patterns that appear innocuous to a human observer, but also explored patterns that change the meaning of the text in significant ways.

Methodology   Sample Sequence    False Discovery Rate
Papernot      mr 000             0.37%
Papernot      said mr            29.74%
Handcraft     follow twitter     26.87%
Handcraft     nytimes com        1.70%

Conclusion

One major issue with the deployment of deep learning models is that "the ease with which we can switch between any two decisions in targeted attacks is still far from being understood." It is primarily on this basis that we are skeptical of machine learning methods. We believe that greater emphasis should be placed on identifying the set of misclassified text samples when evaluating the performance of fake news detectors. If seemingly minute perturbations in the text can change the entire classification of a sample, these weaknesses will likely be found by fake news distributors, for whom the cost of producing fake news is far lower than the cost of detecting it.

Our project also led to the discovery of a set of sequences that can be added to nearly any text sample to make it misclassified by the model, resembling generic attacks in cryptography. We proposed a modification of Papernot’s Jacobian-based adversarial attack to identify these sequences automatically. However, some of the generated sequences do not read naturally to the human eye, and future work could focus on improving their generation. For now, while the eyes of a machine may be fooled by our samples, the eyes of a human can still spot the differences.

References

N. Papernot, P. McDaniel, A. Swami, and R. Harang, "Crafting Adversarial Input Sequences for Recurrent Neural Networks," MILCOM 2016.
J. Gao, J. Lanchantin, M. L. Soffa, and Y. Qi, "Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers," IEEE Security and Privacy Workshops 2018.
