Overview

onlyfans-scraper

Requires Python 3.8–3.9.

A command-line program to download media, like and unlike posts, and more from creators on OnlyFans.

Installation

You can install this program by entering the following in your terminal:

pip install onlyfans-scraper

If you're on macOS/Linux, then do this instead:

pip3 install onlyfans-scraper

Upgrading

In order to upgrade onlyfans-scraper, run the following in your terminal:

pip install --upgrade onlyfans-scraper

Or, a shorter version:

pip install -U onlyfans-scraper

Setup

Before you can fully use it, you need to fill out some fields in an auth.json file. This file will be created for you when you run the program for the first time.

These are the fields:

{
    "auth": {
        "app-token": "33d57ade8c02dbc5a333db99ff9ae26a",
        "sess": "",
        "auth_id": "",
        "auth_uniq_": "",
        "user_agent": "",
        "x-bc": ""
    }
}

It's really not that bad. I'll show you in the next sections how to get these bits of info.

Step One: Creating the 'auth.json' File

You first need to run the program in order for the auth.json file to be created. To run it, simply type onlyfans-scraper in your terminal and hit enter. Because you don't have an auth.json file, the program will create one for you and then ask you to enter some information. Now we need to get that information.

Step Two: Getting Your Auth Info

If you've already used DIGITALCRIMINAL's OnlyFans script, you can simply copy and paste the auth information from there to here.

Go to your notification area on OnlyFans. Once you're there, open your browser's developer tools. If you don't know how to do that, consult the following chart:

Operating System    Keys
macOS               Option (Alt) + Cmd + I
Windows             Ctrl + Shift + I
Linux               Ctrl + Shift + I

Once you have your browser's developer tools open, click on the Network tab at the top of the developer tools, then click on the XHR sub-tab inside the Network tab.

While you're inside the XHR sub-tab, refresh the page with the developer tools still open. After the page reloads, you should see a request named init appear.

Click on init and a large sidebar will appear. Make sure you're in the Headers section.

After that, scroll down until you see a subsection called Request Headers. You should see three important fields inside the Request Headers subsection: Cookie, User-Agent, and x-bc.

Inside of the Cookie field, you will see a couple of important bits:

  • sess=
  • auth_id=
  • auth_uid_=

Your auth_uid_ cookie will only appear if you have 2FA (two-factor authentication) enabled. Also, keep in mind that the cookie's name will have numbers after the final underscore and before the equals sign (those numbers are your auth_id).

For each of these, you need everything after the equals sign and everything before the semicolon.
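
For example, the Cookie request header might look something like this (the values below are made-up placeholders, not real tokens):

Cookie: sess=abc123def456; auth_id=123456; auth_uid_123456=987654321; ...

In this example, abc123def456 is your sess value, 123456 is your auth_id, and 987654321 is your auth_uid_ value (only present if you use 2FA).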

Copy the value of your sess cookie, go back to the program, paste it in, and hit enter. Do the same for your auth_id value, and then for your auth_uid_ value (leave auth_uid_ blank if you don't use 2FA!).

Once you do that, the program will ask for your user agent. You should be able to find your user agent in a field called User-Agent below the Cookie field. Copy it and paste it into the program and hit enter.

After it asks for your user agent, it will ask for your x-bc token. You should also be able to find this in the Request Headers section.

You're all set and you can now use onlyfans-scraper.

Usage

Whenever you want to run the program, all you need to do is type onlyfans-scraper in your terminal:

onlyfans-scraper

That's it. It's that simple.

Once the program launches, all you need to do is follow the on-screen directions. The first time you run it, it will ask you to fill out your auth.json file (directions for that in the section above).

You will need to use your arrow keys to select an option.

If you choose to download content, you will have three options: having a list of all of your subscriptions printed, manually entering a username, or scraping all accounts that you're subscribed to.

Liking/Unliking Posts

You can also use this program to like all of a user's posts or remove your likes from their posts. Just select either option during the main menu screen and enter their username.

This program will like posts at a rate of around one post per second. This may be reduced in the future, but OnlyFans is strict about how quickly you can like posts.
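
As a rough illustration of that pacing (a minimal sketch in Python, not the program's actual code; like_post is a hypothetical stand-in for whatever sends the like request), throttling to about one like per second looks like this:

import time

def like_all(posts, like_post):
    # Send likes one at a time, waiting about a second between
    # requests to respect OnlyFans' limits on how fast you can like posts.
    for post in posts:
        like_post(post)   # hypothetical helper that performs the like request
        time.sleep(1)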

Migrating Databases

If you've used DIGITALCRIMINAL's script, you might've liked how his script prevented duplicates from being downloaded each time you ran it on a user. This is done through database files.

This program also uses a database file to prevent duplicates. In order to make it easier for users to transition from his program to this one, this program will migrate the data from those databases for you (only IDs and filenames).

To use it, select the last option (Migrate an old database) and enter the path to the directory that contains the database files (Posts.db, Archived.db, etc.).

For example, if you have a directory that looks like the following:

Users
|__ home
    |__ .sites
        |__ OnlyFans
            |__ melodyjai
                |__ Metadata
                    |__ Archived.db
                    |__ Messages.db
                    |__ Posts.db

Then the path you enter should be /Users/home/.sites/OnlyFans/melodyjai/Metadata. The program will detect the .db files in the directory and then ask you for the username to whom those .db files belong. The program will then move the relevant data over.
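
For a sense of what the migration reads, here is a minimal sketch (my own illustration, not the program's actual code) that pulls IDs and filenames out of an old Posts.db. The table name medias and the column names media_id and filename are assumptions about the old schema:

import sqlite3

def read_old_media(db_path):
    # Open an old database file and return its (media_id, filename) pairs.
    # "medias", "media_id", and "filename" are assumed names, not confirmed.
    con = sqlite3.connect(db_path)
    try:
        return con.execute("SELECT media_id, filename FROM medias").fetchall()
    finally:
        con.close()

# e.g. read_old_media("/Users/home/.sites/OnlyFans/melodyjai/Metadata/Posts.db")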

Bugs/Issues/Suggestions

If you run into any trouble while using this script, or if you're confused on how to get something running, feel free to open an issue or open a discussion. I don't bite :D

If you would like a feature added to the program or have some ideas, start a discussion!
