
Overview

onlyfans-scraper

Requires Python 3.8-3.9.

A command-line program to download media, like and unlike posts, and more from creators on OnlyFans.

Installation

You can install this program by entering the following in your terminal:

pip install onlyfans-scraper

If you're on macOS/Linux, then do this instead:

pip3 install onlyfans-scraper

Upgrading

In order to upgrade onlyfans-scraper, run the following in your terminal:

pip install --upgrade onlyfans-scraper

Or, a shorter version:

pip install -U onlyfans-scraper

Setup

Before you can fully use it, you need to fill out some fields in an auth.json file. This file will be created for you when you run the program for the first time.

These are the fields:

{
    "auth": {
        "app-token": "33d57ade8c02dbc5a333db99ff9ae26a",
        "sess": "",
        "auth_id": "",
        "auth_uniq_": "",
        "user_agent": "",
        "x-bc": ""
    }
}
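
For reference, a filled-out auth.json looks something like this (every value below is made up, except app-token, which comes pre-filled):

{
    "auth": {
        "app-token": "33d57ade8c02dbc5a333db99ff9ae26a",
        "sess": "5fd1ab42c09e8e1d2b7a",
        "auth_id": "123456",
        "auth_uniq_": "98765a4b3c2d1e",
        "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36",
        "x-bc": "1a2b3c4d5e6f7a8b9c0d"
    }
}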

It's really not that bad. I'll show you in the next sections how to get these bits of info.

Step One: Creating the 'auth.json' File

You first need to run the program in order for the auth.json file to be created. To run it, simply type onlyfans-scraper in your terminal and hit enter. Because you don't have an auth.json file, the program will create one for you and then ask you to enter some information. Now we need to get that information.

Step Two: Getting Your Auth Info

If you've already used DIGITALCRIMINAL's OnlyFans script, you can simply copy and paste the auth information from there to here.

Go to your notification area on OnlyFans. Once you're there, open your browser's developer tools. If you don't know how to do that, consult the following chart:

Operating System    Keys
macOS               alt + cmd + i
Windows             ctrl + shift + i
Linux               ctrl + shift + i

Once you have your browser's developer tools open, click on the Network tab at the top of the developer tools panel.

Then click on the XHR sub-tab inside of the Network tab.

Once you're inside the XHR sub-tab, refresh the page while you still have your browser's developer tools open. After the page reloads, you should see a request named init appear.

When you click on init, a large sidebar will appear. Make sure you're in the Headers section.

After that, scroll down until you see a subsection called Request Headers. You should then see three important fields inside of the Request Headers subsection: Cookie, User-Agent, and x-bc.

Inside of the Cookie field, you will see a few important bits:

  • sess=
  • auth_id=
  • auth_uid_=

Your auth_uid_ cookie will only appear if you have 2FA (two-factor authentication) enabled. Also, keep in mind that the auth_uid_ name will have numbers after the final underscore and before the equals sign (those numbers are your auth_id).

For each of those bits, you need everything after the equals sign and everything before the semicolon.
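
For example (with made-up values), the relevant part of your Cookie field might look like this:

sess=5fd1ab42c09e8e1d2b7a; auth_id=123456; auth_uid_123456=98765a4b3c2d1e;

Here you would copy 5fd1ab42c09e8e1d2b7a for sess, 123456 for auth_id, and 98765a4b3c2d1e for auth_uid_.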

Once you've copied the value of your sess cookie, go back to the program, paste it in, and hit enter. Do the same for the auth_id value, and then for the auth_uid_ value (leave this blank if you don't use 2FA!!!).

Once you do that, the program will ask for your user agent. You should be able to find your user agent in a field called User-Agent below the Cookie field. Copy it and paste it into the program and hit enter.

After it asks for your user agent, it will ask for your x-bc token. You should also be able to find this in the Request Headers section.

You're all set and you can now use onlyfans-scraper.

Usage

Whenever you want to run the program, all you need to do is type onlyfans-scraper in your terminal:

onlyfans-scraper

That's it. It's that simple.

Once the program launches, all you need to do is follow the on-screen directions. The first time you run it, it will ask you to fill out your auth.json file (directions for that in the section above).

You will need to use your arrow keys to select an option.

If you choose to download content, you will have three options: having a list of all of your subscriptions printed, manually entering a username, or scraping all accounts that you're subscribed to.

Liking/Unliking Posts

You can also use this program to like all of a user's posts or remove your likes from their posts. Just select either option on the main menu screen and enter the creator's username.

This program will like posts at a rate of around one post per second. This delay may be reduced in the future, but OnlyFans is strict about how quickly you can like posts.

Migrating Databases

If you've used DIGITALCRIMINAL's script, you might've liked how his script prevented duplicates from being downloaded each time you ran it on a user. This is done through database files.

This program also uses a database file to prevent duplicates. In order to make it easier for users to transition from his program to this one, this program will migrate the data from those databases for you (only IDs and filenames).

To use it, select the last option (Migrate an old database) and enter the path to the directory that contains the database files (Posts.db, Archived.db, etc.).

For example, if you have a directory that looks like the following:

Users
|__ home
    |__ .sites
        |__ OnlyFans
            |__ melodyjai
                |__ Metadata
                    |__ Archived.db
                    |__ Messages.db
                    |__ Posts.db

Then the path you enter should be /Users/home/.sites/OnlyFans/melodyjai/Metadata. The program will detect the .db files in the directory and then ask you for the username to whom those .db files belong. The program will then move the relevant data over.

Bugs/Issues/Suggestions

If you run into any trouble while using this script, or if you're confused about how to get something running, feel free to open an issue or start a discussion. I don't bite :D

If you would like a feature added to the program or have some ideas, start a discussion!
