A high-level distributed crawling framework.


Cola: high-level distributed crawling framework

Overview

Cola is a high-level distributed crawling framework, used to crawl pages and extract structured data from websites. It provides a simple, fast, yet flexible way to achieve your data acquisition objective. Users only need to write one piece of code, which can run in both local and distributed mode.

Requirements

  • Python 2.7 (Python 3+ will be supported later)
  • Works on Linux, Windows, and Mac OS X

Install

The quick way:

pip install cola

Or, download source code, then run:

python setup.py install

Write applications

Documentation will be updated soon; for now, just refer to the wiki or weibo application.

Run applications

For the wiki or weibo app, please make sure the dependencies are installed first; taking weibo as an example:

pip install -r /path/to/cola/app/weibo/requirements.txt

Local mode

To let your application support local mode, just add the following code to the entry point:

import os
from cola.context import Context

# local_mode=True runs the whole job inside the current process.
ctx = Context(local_mode=True)
ctx.run_job(os.path.dirname(os.path.abspath(__file__)))

Then run the application:

python __init__.py

Stop the local job with CTRL+C.
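
If you want the same entry point to also work in distributed mode, one option is to toggle local_mode from the command line. A minimal sketch only, assuming Context(local_mode=False) selects distributed mode; the --local flag here is hypothetical, not part of cola:

import os
import argparse
from cola.context import Context

if __name__ == '__main__':
    # Hypothetical flag for this sketch; pass --local to run locally.
    parser = argparse.ArgumentParser()
    parser.add_argument('--local', action='store_true')
    args = parser.parse_args()

    ctx = Context(local_mode=args.local)
    ctx.run_job(os.path.dirname(os.path.abspath(__file__)))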

Distributed mode

Start master:

coca master -s [ip:port]

Start one or more workers:

coca worker -s -m [ip:port]

Then run the application (weibo as an example):

coca job -u /path/to/cola/app/weibo -r
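
Putting it together, a full distributed run might look like the following, reusing the master address 10.211.55.2:11103 from the job-listing example below; run each command on the appropriate machine:

coca master -s 10.211.55.2:11103
coca worker -s -m 10.211.55.2:11103
coca job -u /path/to/cola/app/weibo -r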

Coca command

Coca is a convenient command-line tool for the whole cola environment.

master

Kill the master to stop the whole cluster:

coca master -k

job

List all jobs:

coca job -m [ip:port] -l

Example output:

list jobs at master: 10.211.55.2:11103
====> job id: 8ZcGfAqHmzc, job description: sina weibo crawler, status: stopped

You can run a job shown in the list above:

coca job -r 8ZcGfAqHmzc

Actually, you don't have to type the complete job id:

coca job -r 8Z

A prefix of the job id is fine as long as there's no conflict.

You can check the status of a running job with:

coca job -t 8Z

Status information, such as runtime counters, will be printed to the terminal.
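
If you prefer to watch a job from a script instead of re-running the command by hand, you can poll the same status command. A minimal sketch; the job id 8Z and the 10-second interval are only examples:

import time
import subprocess

while True:
    # 'coca job -t <id>' prints the job's current status and counters.
    subprocess.call(['coca', 'job', '-t', '8Z'])
    time.sleep(10)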

You can kill a job by the kill command:

coca job -k 8Z

startproject

You can create a new application with this command:

coca startproject colatest

Remember, the help command is always helpful:

coca -h

or

coca master -h

Notes

Chinese docs (wiki).

Donation

Cola is a non-profit project, currently maintained by myself, so any donation will be an encouragement for further improvement of the cola project.

Alipay & Paypal: [email protected]

You might also like...
Web Scraping Framework

Grab Framework Documentation Installation $ pip install -U grab See details about installing Grab on different platforms here http://docs.grablib.

Async Python 3.6+ web scraping micro-framework based on asyncio

Ruia 🕸️ Async Python 3.6+ web scraping micro-framework based on asyncio. ⚡ Write less, run faster. Overview Ruia is an async web scraping micro-frame

Transistor, a Python web scraping framework for intelligent use cases.

Web data collection and storage for intelligent use cases. transistor About The web is full of data. Transistor is a web scraping framework for collec

PyQuery-based scraping micro-framework.

demiurge PyQuery-based scraping micro-framework. Supports Python 2.x and 3.x. Documentation: http://demiurge.readthedocs.org Installing demiurge $ pip

Crawler for the Fundamentus.com site using the Scrapy framework, for both the detailed and the summary tabs.

Crawler for the Fundamentus.com site using the Scrapy framework, for both the detailed and the summary tabs. (All information)

A simple django-rest-framework api using web scraping

Apicell You can use this api to search in google, bing, pypi and subscene and get results Method : POST Parameter : query Example import request url =

Python framework to scrape Pastebin pastes and analyze them

pastepwn - Paste-Scraping Python Framework Pastebin is a very helpful tool to store or rather share ascii encoded data online. In the world of OSINT,

This Spider/Bot is developed using Python and based on the Scrapy framework to fetch some item information from Amazon

- Hello, This Project Contains Amazon Web-bot. - I've developed this bot for fetching some item information on Amazon. - Scrapy Framework in Python is

This is a web scraper, using the Python framework Scrapy, built to extract data from the Deals of the Day section on the Mercado Livre website.

Deals of the Day This is a web scraper, using the Python framework Scrapy, built to extract data such as price and product name from the Deals of the

Comments
  • docs: Fix a few typos

    There are small typos in:

    • cola/cluster/master.py
    • cola/core/bloomfilter/__init__.py
    • cola/core/opener.py

    Fixes:

    • Should read experimentally rather than experimently.
    • Should read entries rather than enteries.
    • Should read continuously rather than continously.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Why does the job never exit after the task finishes?

    The run method of the Task class contains two loops, and the outermost loop exits only after the stop event is set. Why is that?

    def run(self):
        try:
            curr_priority = 0
            # The outer loop cycles through the priorities and only
            # exits once the stop event has been set.
            while not self.stopped.is_set():
                priority_name = 'inc' if curr_priority == self.n_priorities \
                    else curr_priority
                is_inc = priority_name == 'inc'

                # Block while the task is suspended.
                while not self.nonsuspend.wait(5):
                    continue
                if self.stopped.is_set():
                    break

                self.logger.debug('start to process priority: %s' % priority_name)

                last = self.priorities_secs[curr_priority]
                clock = Clock()
                runnings = []
                try:
                    no_budgets_times = 0
                    # The inner loop processes the current priority for
                    # at most `last` seconds, then control moves on.
                    while not self.stopped.is_set():
                        if clock.clock() >= last:
                            break

                        if not is_inc:
                            status = self._apply(no_budgets_times)
                            if status == CANNOT_APPLY:
                                break
                            elif status == APPLY_FAIL:
                                no_budgets_times += 1
                                if not self._has_not_finished(curr_priority) and \
                                        len(runnings) == 0:
                                    continue

                                if self._has_not_finished(curr_priority) and \
                                        len(runnings) == 0:
                                    self._get_unit(curr_priority, runnings)
                            else:
                                no_budgets_times = 0
                                self._get_unit(curr_priority, runnings)
                        else:
                            self._get_unit(curr_priority, runnings)

                        if len(runnings) == 0:
                            break
                        if self.is_bundle:
                            self.logger.debug(
                                'process bundle from priority %s' % priority_name)
                            rest = min(last - clock.clock(), MAX_BUNDLE_RUNNING_SECONDS)
                            if rest <= 0:
                                break
                            obj = self.executor.execute(runnings.pop(), rest, is_inc=is_inc)
                        else:
                            obj = self.executor.execute(runnings.pop(), is_inc=is_inc)

                        if obj is not None:
                            runnings.insert(0, obj)
                finally:
                    self.priorities_objs[curr_priority].extend(runnings)

                # Move on to the next priority; nothing here checks
                # whether all priorities are exhausted.
                curr_priority = (curr_priority + 1) % self.full_priorities
        finally:
            self.counter_client.sync()
            self.save()
    opened by brightgems 5
  • I took a look; the log is the same as the one in the previous issue, so it's probably a case of the mq not being properly protected.

    Exception in thread Thread-2:
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/threading.py", line 551, in __bootstrap_inner
        self.run()
      File "/usr/local/lib/python2.7/threading.py", line 504, in run
        self.__target(*self.__args, **self.__kwargs)
      File "/usr/crawl/code/cola-code/cola/core/mq/__init__.py", line 103, in _init_process
        self.put(objs, flush=flush)
      File "/usr/crawl/code/cola-code/cola/core/mq/node.py", line 407, in put
        self._remote_or_local_batch_put(addr, self.caches[addr])
      File "/usr/crawl/code/cola-code/cola/core/mq/node.py", line 348, in _remote_or_local_batch_put
        self.mq_node.batch_put(objs)
      File "/usr/crawl/code/cola-code/cola/core/mq/node.py", line 151, in batch_put
        self.put(obs, force=force, priority=priority)
      File "/usr/crawl/code/cola-code/cola/core/mq/node.py", line 125, in put
        priority_store.put(objs, force=force)
      File "/usr/crawl/code/cola-code/cola/core/mq/store.py", line 291, in put
        result = self.put_one(obj, force, commit=False)
      File "/usr/crawl/code/cola-code/cola/core/mq/store.py", line 266, in put_one
        pos = self._seek_writable_pos(m)
      File "/usr/crawl/code/cola-code/cola/core/mq/store.py", line 228, in _seek_writable_pos
        size, = struct.unpack('I', map_handle[pos:pos+4])
    TypeError: 'NoneType' object has no attribute '__getitem__'
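
    The traceback ends with struct.unpack being handed a slice of a map handle that is None. A minimal sketch of the kind of guard that would surface this earlier with a clearer error; this is not cola's actual code, read_entry_size is a hypothetical helper, and map_handle and pos are just the names from the traceback:

    import struct

    def read_entry_size(map_handle, pos):
        # Fail fast instead of letting a None handle reach struct.unpack.
        if map_handle is None:
            raise IOError('mq store map handle is not initialized')
        size, = struct.unpack('I', map_handle[pos:pos + 4])
        return size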

    opened by tottilin 0
Releases: 0.1.0beta
Owner
Xuye (Chris) Qin
Core developer and architect of Mars, a tensor-based unified framework for large-scale data computation; also worked on PyODPS and cola.
Kusonime scraper using python3

Features Scrap from url Scrap from recommendation Search by query Todo [+] Search by genre Example # Get download url from kusonime import Scrap

MhankBarBar 2 Jan 28, 2022
The open-source web scrapers that feed the Los Angeles Times California coronavirus tracker.

The open-source web scrapers that feed the Los Angeles Times' California coronavirus tracker. Processed data ready for analysis is available at datade

Los Angeles Times Data and Graphics Department 51 Dec 14, 2022
This app will let you continuously scrape certain parts of LeasePlan and extract data of cars becoming available for lease.

LeasePlan - Scraper This app will let you continuously scrape certain parts of LeasePlan and extract data of cars becoming available for lease. It has

Rodney 4 Nov 18, 2022
A helper library to scrape data from TikTok in one line, using the Influencer Hunters APIs.

TikTok Scraper A utility library to scrape data from TikTok hassle-free Go to the website » View Demo · Report Bug · Request Feature About The Projec

6 Jan 08, 2023
Scrapy-soccer-games - Scraping information about soccer games from a few websites

scrapy-soccer-games The purpose of this project is to fetch table information from

Caio Alves 2 Jul 20, 2022
This is my CS 20 final assessment.

eeeeeSpider This is my CS 20 final assessment. How to use: Open program Run to your heart's content! There are no external dependencies that you will ha

1 Jan 17, 2022
A tool for scraping and organizing data from NewsBank API searches

nbscraper Overview This simple tool automates the process of copying, pasting, and organizing data from NewsBank API searches. Currently, nbscrape onl

0 Jun 17, 2021
UsernameScraperTool - Username Scraper Tool With Python

UsernameScraperTool Username Scraper for 40+ Social sites. How To use git clone

E4crypt3d 1 Dec 20, 2022
Bigdata - This Scrapy project uses Redis and Kafka to create a distributed on demand scraping cluster

Scrapy Cluster This Scrapy project uses Redis and Kafka to create a distributed

Hanh Pham Van 0 Jan 06, 2022
API to parse tibia.com content into python objects.

Tibia.py An API to parse Tibia.com content into object oriented data. No fetching is done by this module, you must provide the html content. Features:

Allan Galarza 25 Oct 31, 2022
Highly available distributed IP proxy pool, powered by Scrapy and Redis

High-availability IP proxy pool README | Chinese docs All of the IP resources collected by this project come from the internet; the vision is to provide a highly available, low-latency, high-anonymity IP proxy pool for large crawler projects. Project highlights: rich proxy sources; precise proxy crawling and extraction; strict and reasonable proxy validation; complete monitoring and strong robustness; flexible, easily extended architecture; distributed deployment of every component. Quick start: note, please use the code from the release

SpiderClub 5.2k Jan 03, 2023
robobrowser - A simple, Pythonic library for browsing the web without a standalone web browser.

RoboBrowser: Your friendly neighborhood web scraper Homepage: http://robobrowser.readthedocs.org/ RoboBrowser is a simple, Pythonic library for browsi

Joshua Carp 3.7k Dec 27, 2022
Script used to download data for stocks.

This script is useful for downloading stock market data for a wide range of companies specified by their respective tickers. The script reads in the d

Carmelo Gonzales 71 Oct 04, 2022
CPF and CNPJ lookups at the Receita Federal via web scraping

Repository containing Python scripts that look up CPF and CNPJ directly on the Receita Federal website.

Josué Campos 5 Nov 29, 2021
A way to scrape sports streams for use with Jellyfin.

Sportyfin Description Stream sports events straight from your Jellyfin server. Sportyfin allows users to scrape for live streamed events and watch str

axelmierczuk 38 Nov 05, 2022
Html Content / Article Extractor, web scraping lib in Python

Python-Goose - Article Extractor Intro Goose was originally an article extractor written in Java that has most recently (Aug 2011) been converted to a

Xavier Grangier 3.8k Jan 02, 2023
Transistor, a Python web scraping framework for intelligent use cases.

Web data collection and storage for intelligent use cases. transistor About The web is full of data. Transistor is a web scraping framework for collec

BOM Quote Manufacturing 212 Nov 05, 2022
A 12306 train-ticket grabbing script

罐子里的茶 457 Jan 05, 2023
Batch-download all of a Douyin user's watermark-free videos

Douyincrawler Batch-download all of a Douyin user's watermark-free videos. Run: install Python 3 and the dependencies

28 Dec 08, 2022
An optimized version of the JD.com Moutai flash-purchase tool

1.8k Mar 18, 2022