
SCRYPTO

What?

Cryptocurrency scraping.

(At the moment, only the Bitcoin and Ethereum cryptocurrencies are supported.)

How?

A Python script runs in a container and scrapes information about cryptocurrencies using the CoinGecko API. This information is then sent to a SQL database.
(You can also run the script on its own, with a .env file and a systemd service.)
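As a rough illustration of the loop described above (a hedged sketch, not the repository's actual code; the function name is hypothetical), using CoinGecko's public simple-price endpoint:

import os
import time
import requests

# Hedged sketch of the scrape loop (not the repo's actual source).
# CoinGecko's public "simple price" endpoint returns current prices
# for the requested coins in the requested fiat currencies.
API_URL = "https://api.coingecko.com/api/v3/simple/price"

def scrape_once(coins, currencies):
    params = {"ids": ",".join(coins), "vs_currencies": ",".join(currencies)}
    resp = requests.get(API_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. {"bitcoin": {"eur": 42000.0, "usd": 45000.0}, ...}

if __name__ == "__main__":
    coins = os.environ.get("CRYPTO_LIST", "bitcoin,ethereum").split(",")
    currencies = [os.environ.get("DEVISE_1", "eur"), os.environ.get("DEVISE_2", "usd")]
    interval = int(os.environ.get("SCRAPE_TIME", "300"))
    while True:
        print(scrape_once(coins, currencies))  # the real script writes to the SQL database instead
        time.sleep(interval)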

SETUP

  • You can launch this scraper simply by using the "docker-compose.yml" file provided in this repo.

Docker-compose file

Here is an example docker-compose file.

version: "3"
services:
  scrypto:
    image: baldurr/scrypto:latest
    container_name: scrypto
    environment:
      - CRYPTO_LIST=bitcoin,ethereum
      - DEVISE_1=eur
      - DEVISE_2=usd
      - SCRAPE_TIME=300
      - SQL_USER=root2
      - SQL_PASSWORD=mypwd
      - SQL_HOST=192.168.1.20
      - SQL_DB=db_scrypto
      - SQL_PORT=3308
    restart: unless-stopped

Environment variables

List of all currencies available:
"btc", "eth", "ltc", "bch", "bnb", "eos", "xrp", "xlm", "link", "dot", "yfi", "usd", "aed", "ars", "aud", "bdt", "bhd", "bmd", "brl", "cad", "chf", "clp", "cny", "czk", "dkk", "eur", "gbp", "hkd", "huf", "idr", "ils", "inr", "jpy", "krw", "kwd", "lkr", "mmk", "mxn", "myr", "ngn", "nok", "nzd", "php", "pkr", "pln", "rub", "sar", "sek", "sgd", "thb", "try", "twd", "uah", "vef", "vnd", "zar", "xdr", "xag", "xau", "bits", "sats"

Var           Usage                                 Info
CRYPTO_LIST   List of currencies separated by ','   Max: 2 currencies
DEVISE_1      Name of the 1st currency defined      e.g. eur
DEVISE_2      Name of the 2nd currency defined      e.g. usd
SCRAPE_TIME   Scrape interval in seconds            e.g. 300 = 5 min
SQL_USER      SQL user to connect with
SQL_PASSWORD  SQL user password
SQL_HOST      Host of the SQL database              e.g. 192.168.1.20. Do not use localhost with the Docker method: it would refer to the scrypto container itself
SQL_DB        SQL database name                     Must be set to 'db_scrypto'
SQL_PORT      SQL database port
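For the standalone mode mentioned above (running the script with a .env file and a systemd service), the same variables apply. A hypothetical .env mirroring the docker-compose example:

CRYPTO_LIST=bitcoin,ethereum
DEVISE_1=eur
DEVISE_2=usd
SCRAPE_TIME=300
SQL_USER=root2
SQL_PASSWORD=mypwd
SQL_HOST=192.168.1.20
SQL_DB=db_scrypto
SQL_PORT=3308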

Installation

SQL configuration

To use this image correctly, you must create a database named 'db_scrypto'.

If you use the docker method, connect to the scrypto container:

docker exec -it scrypto /bin/bash

Then connect to the SQL database:

mysql -u myuser -p

Enter your password, then list the existing databases:

SHOW databases;

If 'db_scrypto' doesn't exist, create it:

CREATE DATABASE db_scrypto;

Then you have to create the tables that will store the data.
NOTICE:

  • Name each table after the cryptocurrency it stores: bitcoin, ethereum
  • Name the value columns value_mycurrency (e.g. value_usd)
CREATE TABLE ethereum (data_id INT NOT NULL AUTO_INCREMENT, time DATETIME, metric VARCHAR(20), value_eur numeric(10,2), value_usd numeric(10,2), PRIMARY KEY(data_id));
CREATE TABLE bitcoin (data_id INT NOT NULL AUTO_INCREMENT, time DATETIME, metric VARCHAR(20), value_eur numeric(10,2), value_usd numeric(10,2), PRIMARY KEY(data_id));
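To make the schema concrete, here is a hedged sketch of the kind of insert the script would issue, using the mysql-connector-python driver; the 'price' metric label and the connection values (taken from the docker-compose example above) are assumptions:

import datetime
import mysql.connector  # pip install mysql-connector-python

# Hedged sketch: insert one scraped price row into the 'bitcoin' table
# created above. Column names follow the value_<currency> convention
# from the NOTICE; the 'price' metric label is an assumption.
conn = mysql.connector.connect(
    host="192.168.1.20", port=3308,
    user="root2", password="mypwd", database="db_scrypto",
)
cur = conn.cursor()
cur.execute(
    "INSERT INTO bitcoin (time, metric, value_eur, value_usd) "
    "VALUES (%s, %s, %s, %s)",
    (datetime.datetime.now(), "price", 42000.00, 45000.00),
)
conn.commit()
cur.close()
conn.close()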

Now you are ready to collect data.

Docker configuration

DockerHub image: https://hub.docker.com/repository/docker/baldurr/scrypto

mkdir scrypto
cd scrypto
wget https://raw.githubusercontent.com/Baldurrr/scrypto/main/docker-compose.yml
docker-compose up -d

Wait a bit, then check the logs:

docker logs scrypto

This will display the API response if the configuration worked.

THE RESULT

In this repo, you will also find a JSON file that contains the Grafana dashboard configuration.

Grafana dashboard example: (see the image in the repo)
