binance_harvester - A Python 3 script that harvests data from the Binance socket stream, calculates popular TA indicators, and produces lists of top trending coins

Overview

binance_harvester

A Python 3 script that harvests data from the Binance socket stream, calculates popular TA indicators, and produces lists of top trending coins, storing the data in an SQLite3 database for use by algorithmic and bot traders.

The script will populate an SQLite3 database called prices.sqlite3 in the same directory as the script.

By default the script populates the DB with data from 5-minute candles. It takes about 14 seconds from the reception of candle data on the socket to publish the data for every crypto pair and indicator in the tables, ready for your algos or bots.

The top trending coins list is stored in a table called "top_cryptos".

Other data is stored in tables named after the coin pair, e.g. "BTCUSDT".

The coin pairs used are listed in the text file USDT_pairs.txt.

The fields of recorded data and indicators are as follows: Timestamp, Open, High, Low, Close, Volume, macd, macd1226, macd_diff, macd_diff1020, macd_diff2550, macd_diff50100, stoch, stochrsi, rsi, sma, ema, chaikin_money_flow, mfi, trix, tema
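
If you want to consume this data from your own algo or bot, a minimal read-back sketch (not part of the harvester itself) is shown below. It assumes the table and column names listed above, e.g. a "BTCUSDT" table with Timestamp, Close, rsi and macd columns, and uses pandas together with the standard sqlite3 module:

# read_indicators.py - illustrative sketch, not part of binance_harvester itself
import sqlite3
import pandas as pd

# prices.sqlite3 lives in the same directory as binance_harvester.py
conn = sqlite3.connect("prices.sqlite3")

# Pull the five most recent candles plus two indicators for one pair;
# the table name matches the coin pair, e.g. "BTCUSDT"
df = pd.read_sql_query(
    'SELECT Timestamp, Close, rsi, macd FROM "BTCUSDT" ORDER BY Timestamp DESC LIMIT 5',
    conn,
)
print(df)

conn.close()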

Dependencies

import argparse
import copy
import csv
import json
import math
import multiprocessing
import os
import sqlite3
import threading
import time
from collections import deque
from datetime import datetime, timedelta, timezone
from functools import partial
from operator import itemgetter
from pprint import pprint
from sqlite3 import connect

import pandas as pd
import talib
import websocket
from binance.client import Client
from binance.enums import *
from ta import add_all_ta_features
from ta.utils import dropna
from ta.trend import macd, macd_diff, sma_indicator, ema_indicator, trix
from ta.momentum import stoch, stochrsi, stoch_signal, rsi
from ta.volume import chaikin_money_flow, money_flow_index
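
If you are installing the third-party dependencies yourself, the following pip command should cover them, assuming the usual PyPI distributions (websocket-client provides the websocket import, python-binance provides binance.client, ta provides the indicator functions, and TA-Lib provides talib, which also requires the TA-Lib C library on your system):

pip install websocket-client pandas python-binance ta TA-Lib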

You must also create a file called config.py in the same directory as the script and populate it with your Binance API keys, following the format of SAMPLE_config.py.
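
As a rough illustration only (the actual variable names are whatever SAMPLE_config.py uses), config.py typically boils down to a pair of module-level constants:

# config.py - hypothetical layout; copy the exact names from SAMPLE_config.py
API_KEY = "your-binance-api-key"        # assumed name for the Binance API key
API_SECRET = "your-binance-api-secret"  # assumed name for the Binance API secret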

Usage

python binance_harvester.py -T 5

Ideally run in the background:

nohup python binance_harvester.py -T 5 &

The -T parameter specifies the number of hours used to calculate the list of top trending coins, e.g. "1" will give you the top trending coins over the last hour.
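
To check the resulting list, a minimal sketch is shown below. The column layout of "top_cryptos" is not documented here, so it simply dumps whatever the table contains:

# show_top_cryptos.py - illustrative sketch for inspecting the trending list
import sqlite3
import pandas as pd

conn = sqlite3.connect("prices.sqlite3")
top = pd.read_sql_query('SELECT * FROM top_cryptos', conn)
print(top)
conn.close()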

