Web Scraping Framework

Overview

Grab Framework Documentation


Installation

    $ pip install -U grab

See details about installing Grab on different platforms here: http://docs.grablib.org/en/latest/usage/installation.html

Support

Documentation: https://grablab.org/docs/

Russian telegram chat: https://t.me/grablab_ru

English telegram chat: https://t.me/grablab

To report a bug, please use the GitHub issue tracker: https://github.com/lorien/grab/issues

What is Grab?

Grab is a Python web scraping framework. Grab provides a number of helpful methods to perform network requests, scrape websites and process the scraped content:

  • Automatic cookies (session) support
  • HTTP and SOCKS proxy with/without authorization
  • Keep-Alive support
  • IDN support
  • Tools to work with web forms
  • Easy multipart file uploading
  • Flexible customization of HTTP requests (see the setup() sketch after this list)
  • Automatic charset detection
  • Powerful API to extract data from the DOM tree of HTML documents with XPath queries
  • Asynchronous API to make thousands of simultaneous queries. This part of the library is called Spider. See the list of Spider features below.
  • Python 3 ready
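
Request behavior can be tuned per Grab instance through setup(), as noted in the customization item above. Below is a minimal sketch using options that also appear in the issue reports further down this page; the proxy address, user agent and cookie file name are illustrative:

    from grab import Grab

    g = Grab()
    # Options also seen in the issue reports below: proxy settings,
    # timeouts, a custom user agent and a cookie file for the session.
    g.setup(
        proxy='127.0.0.1:8090',      # illustrative SOCKS5 proxy address
        proxy_type='socks5',
        timeout=60,                  # total request timeout, in seconds
        connect_timeout=15,
        user_agent='Mozilla/5.0 (X11; Linux x86_64)',
        cookiefile='session.txt',    # illustrative cookie file name
    )
    g.go('https://github.com/')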

Spider is a framework for writing website scrapers. Features:

  • Rules and conventions to organize the request/parse logic in separate blocks of code
  • Multiple parallel network requests
  • Automatic processing of network errors (failed tasks go back to task queue)
  • You can create network requests and parse responses with the Grab API (see above)
  • HTTP proxy support (see the proxy-list sketch after the Spider example below)
  • Caching network results in permanent storage
  • Different backends for task queue (in-memory, redis, mongodb)
  • Tools to debug and collect statistics

Grab Example

    import logging

    from grab import Grab

    logging.basicConfig(level=logging.DEBUG)

    g = Grab()

    g.go('https://github.com/login')
    g.doc.set_input('login', '****')
    g.doc.set_input('password', '****')
    g.doc.submit()

    g.doc.save('/tmp/x.html')

    g.doc('//ul[@id="user-links"]//button[contains(@class, "signout")]').assert_exists()

    home_url = g.doc('//a[contains(@class, "header-nav-link name")]/@href').text()
    repo_url = home_url + '?tab=repositories'

    g.go(repo_url)

    for elem in g.doc.select('//h3[@class="repo-list-name"]/a'):
        print('%s: %s' % (elem.text(),
                          g.make_url_absolute(elem.attr('href'))))

Grab::Spider Example

    import logging

    from grab.spider import Spider, Task

    logging.basicConfig(level=logging.DEBUG)


    class ExampleSpider(Spider):
        def task_generator(self):
            for lang in 'python', 'ruby', 'perl':
                url = 'https://www.google.com/search?q=%s' % lang
                yield Task('search', url=url, lang=lang)

        def task_search(self, grab, task):
            print('%s: %s' % (task.lang,
                              grab.doc('//div[@class="s"]//cite').text()))


    bot = ExampleSpider(thread_number=2)
    bot.run()
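
A proxy list can be attached to the spider before running it (the "HTTP proxy support" item in the feature list above). Here is a minimal sketch reusing ExampleSpider and the load_proxylist() calls quoted in the issues below; the proxy file name is illustrative:

    # Rotate over SOCKS5 proxies listed one per line in a text file.
    bot = ExampleSpider(thread_number=2)
    bot.load_proxylist('proxies.txt', source_type='text_file',
                       proxy_type='socks5', auto_change=True)
    bot.run()
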
Comments
  • Spider hangs during work randomly

    Spider hangs during work randomly

    Now I have an issue I can't crack myself; the last log entries look like this:

    [12.05.2018] [15:23:07] [DEBUG] [1535-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:08] [DEBUG] RPS: 2.02 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:08] [DEBUG] RPS: 0.51 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:08] [DEBUG] [1536-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:08] [DEBUG] [1537-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:10] [DEBUG] RPS: 0.76 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:10] [DEBUG] RPS: 0.76 [error:grab-connection-error=283, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:10] [DEBUG] [1538-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:10] [DEBUG] [1539-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:10] [DEBUG] [1540-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:11] [DEBUG] RPS: 1.17 [error:grab-connection-error=284, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:11] [DEBUG] RPS: 0.59 [error:grab-connection-error=284, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:11] [DEBUG] [1541-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:11] [DEBUG] [1542-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:12] [DEBUG] [1543-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:12] [DEBUG] RPS: 1.73 [error:grab-connection-error=285, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    [12.05.2018] [15:23:12] [DEBUG] [1544-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:12] [DEBUG] [1545-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [12.05.2018] [15:23:14] [DEBUG] RPS: 3.69 [error:grab-connection-error=285, error:grab-timeout-error=1, fatal=43, network-count-rejected=47]
    

    or this:

    [13.05.2018] [07:13:53] [DEBUG] RPS: 1.49 [fatal=24]
    [13.05.2018] [07:13:53] [DEBUG] RPS: 0.75 [fatal=24]
    [13.05.2018] [07:13:53] [DEBUG] [363-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:53] [DEBUG] [364-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:53] [DEBUG] [365-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:54] [DEBUG] RPS: 1.29 [fatal=24]
    [13.05.2018] [07:13:54] [DEBUG] RPS: 0.65 [fatal=24]
    [13.05.2018] [07:13:54] [DEBUG] [366-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:54] [DEBUG] [367-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:55] [DEBUG] [368-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:56] [DEBUG] RPS: 1.21 [fatal=24]
    [13.05.2018] [07:13:56] [DEBUG] RPS: 0.61 [fatal=24]
    [13.05.2018] [07:13:56] [DEBUG] [369-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:56] [DEBUG] [370-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:56] [DEBUG] [371-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:58] [DEBUG] RPS: 1.38 [fatal=24]
    [13.05.2018] [07:13:58] [DEBUG] RPS: 0.69 [fatal=24]
    [13.05.2018] [07:13:58] [DEBUG] [372-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:58] [DEBUG] [373-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:58] [DEBUG] [374-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:59] [DEBUG] RPS: 1.77 [fatal=24]
    [13.05.2018] [07:13:59] [DEBUG] RPS: 0.00 [fatal=24]
    [13.05.2018] [07:13:59] [DEBUG] [375-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:59] [DEBUG] [376-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:13:59] [DEBUG] [377-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:14:01] [DEBUG] RPS: 1.21 [fatal=24]
    [13.05.2018] [07:14:01] [DEBUG] RPS: 0.60 [fatal=24]
    [13.05.2018] [07:14:01] [DEBUG] [378-worker:networkservicethreaded:worker_callback] GET https://here_is_hidden_website.com/... via 127.0.0.1:9150 proxy of type socks5
    [13.05.2018] [07:14:02] [DEBUG] RPS: 2.20 [fatal=24]
    

    These are the final entries I can see in the log. No matter how long I wait (I waited 20 hours for a result), the spider just eats one CPU core at 100% and does absolutely nothing.

    My bot has 5 stages of work, and it can hang at any stage.

    It is very difficult to debug this issue because there are no errors in the log and it happens randomly.

    I am running the bot this way:

    bot = ExampleSpider(thread_number=3, network_service='threaded', grab_transport='pycurl')
    bot.load_proxylist("./proxy_tor.txt", "text_file", "socks5")
    bot.run()
    

    Any ideas, guys? :confused:

    bug 
    opened by EnzoRondo 25
  • Error: pycurl.error: (0, '') when trying to submit a form

    Error: pycurl.error: (0, '') when trying to submit a form

    There is an HTML file with a form:

    <form enctype="multipart/form-data" action="http://example.com/" method="post" accept-charset="UTF-8">
        <textarea name="body">Beställa</textarea>
        <input type="submit" name="op" value="Save"/>
    </form>
    

    And here is the code that submits the form:

    from grab import Grab
    
    g = Grab()
    g.setup(debug=True)
    
    g.go('file:///C:/page.html')  # insert here the path to the HTML file shown above
    
    g.doc.set_input('op', 'Save')
    g.doc.submit(submit_name='op')
    

    When the form is submitted, we get this error:

    pycurl.error: (0, '')
    

    But if the content inside the textarea is replaced with something else, for example like this:

    <textarea name="body">123</textarea>
    

    Then everything is submitted normally.

    How can this be fixed?

    opened by InputError 23
  • Python 3.5 - Unable to build DOM tree.

    Python 3.5 - Unable to build DOM tree.

    File "src/lxml/lxml.etree.pyx", line 3427, in lxml.etree.parse (src/lxml/lxml.etree.c:79801)
      File "src/lxml/parser.pxi", line 1799, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:116219)
      File "src/lxml/parser.pxi", line 1819, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:116494)
      File "src/lxml/parser.pxi", line 1700, in lxml.etree._parseDoc (src/lxml/lxml.etree.c:115040)
      File "src/lxml/parser.pxi", line 1040, in lxml.etree._BaseParser._parseUnicodeDoc (src/lxml/lxml.etree.c:109165)
      File "src/lxml/parser.pxi", line 573, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:103404)
      File "src/lxml/parser.pxi", line 683, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:105058)
      File "src/lxml/parser.pxi", line 613, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:103967)
      File "<string>", line None
    lxml.etree.XMLSyntaxError: switching encoding: encoder error, line 1, column 1
    

    Preceded by:

    encoding error : input conversion failed due to input error, bytes 0x21 0x00 0x00 0x00
    encoding error : input conversion failed due to input error, bytes 0x44 0x00 0x00 0x00
    I/O error : encoder error
    

    Example:

    class Scraper(Spider):
        def task_generator(self):
            urls = [
                'https://au.linkedin.com/directory/people-a/',
                'https://www.linkedin.com/directory/people-a/'
            ]
            for url in urls:
                yield Task('url', url=url)
    
        def task_url(self, grab, task):
            links = grab.doc('//div[@class="columns"]//ul/li[@class="content"]/a')
    
    
    bot = Scraper()
    bot.run()
    

    This happens on some pages; perhaps lxml fails to detect the correct encoding.

    bug 
    opened by oiwn 23
  • Empty cookies

    Empty cookies

    from grab import Grab
    url = 'https://www.fiverr.com/'
    user_agent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36'
    session_f_name1 = 'fiverr_session1.txt'
    session_f_name2 = 'fiverr_session2.txt'
    
    g1 = Grab()
    g1.setup(cookiefile=session_f_name1)
    g1.go(url)
    print 'g1', g1.cookies.cookiejar
    
    g2 = Grab()
    g2.setup(cookiefile=session_f_name2, user_agent=user_agent)
    g2.go(url)
    print 'g2', g2.cookies.cookiejar
    

    In the first case the cookie is there; in the second it is not.

    opened by nodermann 22
  • Cookies issue on windows with pycurl version pycurl 7.43.0.1

    Cookies issue on windows with pycurl version pycurl 7.43.0.1

    Tested on:

    Microsoft Windows Server 2012 Standard
    Microsoft Windows 7 Ultimate
    

    Python version on both machines: Python 3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)] on win32

    grab=0.6.3.8

    Code:

    from grab import Grab, error
    import sys
    import logging
    import base64
    
    g = Grab()
    g.setup(timeout=60)
    g.setup(debug=True, debug_post=True)
    
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
    
    # hidden for privacy reasons
    url, login, pwd = base64.b64decode("v1WZktzbtVGZ7AHaw5CelRmbp9icvRXYyR3cp5WatRWYvwGcu0Wdw12LvoDc0RHa"[::-1])\
        .decode("utf-8").split(";")
    
    g.go(url)
    g.doc.set_input('username', login)
    g.doc.set_input('passwd', pwd)
    try:
        g.doc.set_input('lang', 'en-GB')
    except:
        pass
    g.doc.submit()
    
    is_logged = g.doc.text_search("task=logout")
    if not is_logged:
        raise error.GrabError("not logged in")
    
    print("all right!")
    

    Installing pycurl-7.19.5.3, we get the result: all right!

    Installing the latest pycurl-7.43.0.1, we get: grab.error.GrabError: not logged in

    This is not specific to that site; I checked other places and the same thing happens there. I manually compared the body of the POST request with what the browser sends, and the content is identical.

    opened by InputError 20
  • GrabTimeoutError

    GrabTimeoutError

    Hello, I have an exception like https://github.com/lorien/grab/issues/140, but I don't have any DNS errors. OS and network: Windows 8 (x64), fixed DNS: 8.8.8.8

    Script:

    import pycurl; 
    from grab import Grab
    import logging
    
    print(pycurl.version); 
    print(pycurl.__file__);
    
    logging.basicConfig(level=logging.DEBUG)
    g = Grab(verbose_logging=True, debug=True)
    g.go('http://github.com')
    print g.xpath_text('//title')
    

    script output:

    PycURL/7.43.0 libcurl/7.47.0 OpenSSL/1.0.2f zlib/1.2.8 c-ares/1.10.0 libssh2/1.6.0
    c:\Python27\lib\site-packages\pycurl.pyd
    DEBUG:grab.network:[01] GET http://github.com
    DEBUG:grab.transport.curl:i: Rebuilt URL to: http://github.com/
    DEBUG:grab.transport.curl:i: Resolving timed out after 3000 milliseconds
    DEBUG:grab.transport.curl:i: Closing connection 0
    Traceback (most recent call last):
      File "D:\pr_files\source\python\planned\htmlParser\bgsParser\NewPythonProject\src\bgsParser.py", line 10, in <module>
        g.go('http://github.com')
      File "c:\Python27\lib\site-packages\grab-0.6.30-py2.7.egg\grab\base.py", line 377, in go
        return self.request(url=url, **kwargs)
      File "c:\Python27\lib\site-packages\grab-0.6.30-py2.7.egg\grab\base.py", line 450, in request
        self.transport.request()
      File "c:\Python27\lib\site-packages\grab-0.6.30-py2.7.egg\grab\transport\curl.py", line 489, in request
        raise error.GrabTimeoutError(ex.args[0], ex.args[1])
    grab.error.GrabTimeoutError: [Errno 28] Resolving timed out after 3000 milliseconds
    

    I tried to reinstall pycurl:

    C:\Python27\Scripts>pip install pycurl-7.43.0-cp27-none-win_amd64.whl --upgrade
    Processing c:\python27\scripts\pycurl-7.43.0-cp27-none-win_amd64.whl
    Installing collected packages: pycurl
      Found existing installation: pycurl 7.43.0
        Uninstalling pycurl-7.43.0:
          Successfully uninstalled pycurl-7.43.0
    Successfully installed pycurl-7.43.0
    

    but it doesn't work. What could be wrong?

    Thank you for your help.

    opened by tofflife 20
  • socks5 ip mismatch

    socks5 ip mismatch

    Hi, I am trying to use a SOCKS5 proxy list with grab.spider.

    Here is my small test script:

    from grab.spider import Spider, Task
    import logging
    
    
    class TestSpider(Spider):
        def prepare(self):
            self.load_proxylist(
                'proxy.list',
                source_type='text_file', proxy_type='socks5',
                auto_change=True,
                read_timeout=180
            )
            self.set_proxy = set()
            self.real_proxy = set()
    
        def task_generator(self):
            for i in range(200):
                yield Task('2ip', 'http://2ip.ru/')
    
        def task_2ip(self, grab, task):
            ip = grab.doc.select('//big[@id="d_clip_button"]').text()
            self.real_proxy.add(ip)
    
            proxy = grab.config['proxy'].split(':')[0]
            self.set_proxy.add(proxy)
    
            # if proxy != ip:
            #     print proxy, ip
    
        def shutdown(self):
            print len(self.set_proxy), len(self.real_proxy)
    
    
    logging.basicConfig(level=logging.DEBUG)
    TestSpider(thread_number=16).run()
    

    The result is: 197 16

    As we can see, the real number of used proxies is only 16, the same as thread_number.

    I am using the Grab version from pip. The curl version is:

    curl 7.38.0 (x86_64-pc-linux-gnu) libcurl/7.38.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
    Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smtp smtps telnet tftp 
    Features: AsynchDNS IDN IPv6 Largefile GSS-API SPNEGO NTLM NTLM_WB SSL libz TLS-SRP 
    

    I think the problem is with libcurl. What should I do to fix it?

    opened by asluchevskiy 16
  • accidental work of grab.spider

    accidental work of grab.spider

    Hey there (@lorien), thanks a lot for the great library :smiley:

    I am learning your library and I am seeing unexpected behavior; here is my code sample, which is based on an example from the documentation:

    import csv
    import logging
    import re
    
    from grab.spider import Spider, Task
    
    
    class ExampleSpider(Spider):
        def create_grab_instance(self, **kwargs):
            g = super(ExampleSpider, self).create_grab_instance(**kwargs)
            g.setup(proxy='127.0.0.1:8090', proxy_type='socks5', timeout=60, connect_timeout=15)
            return g
    
        def task_generator(self):
            for i in range(1, 1 + 1):
                page_url = "{}{}/".format("https://www.mourjan.com/properties/", i)
                # print("page url: {}".format(page_url))
                yield Task('stage_two', url=page_url)
    
        def prepare(self):
            # Prepare the file handler to save results.
            # The method `prepare` is called one time before the
            # spider has started working
            self.result_file = csv.writer(open('result.txt', 'w'))
    
            # This counter will be used to enumerate found images
            # to simplify image file naming
            self.result_counter = 0
    
        def task_stage_two(self, grab, task):
            for elem in grab.doc.select("//li[@itemprop='itemListElement']//p")[0:4]:
                part = elem.attr("onclick")
                url_part = re.search(r"(?<=wo\(\').*(?=\'\))", part).group()
                end_url = grab.make_url_absolute(url_part)
                yield Task('stage_three', url=end_url)
    
        def task_stage_three(self, grab, task):
            # First, save URL and title into dictionary
            post = {
                'url': task.url,
                'title': grab.doc.xpath_text("//title/text()"),
            }
            self.result_file.writerow([
                post['url'],
                post['title'],
            ])
            # Increment image counter
            self.result_counter += 1
    
    
    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        # Let's start spider with two network concurrent streams
        bot = ExampleSpider(thread_number=2)
        bot.run()
    
    

    first run:

    DEBUG:grab.spider.base:Using memory backend for task queue
    DEBUG:grab.network:[01] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[02] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[03] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[04] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[05] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.stat:RPS: 7.35 [error:multi-added-already=5, network-count-rejected=1]
    DEBUG:grab.spider.parser_pipeline:Started shutdown of parser process: Thread-1
    DEBUG:grab.spider.parser_pipeline:Finished joining parser process: Thread-1
    DEBUG:grab.spider.base:Main process [pid=4064]: work done
    

    :confused:

    Then I run the code again: about 20 attempts give the same result, but the 21st attempt succeeds and I see what I want to see:

    DEBUG:grab.spider.base:Using memory backend for task queue
    DEBUG:grab.network:[01] GET https://www.mourjan.com/properties/1/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.stat:RPS: 0.52 []
    DEBUG:grab.network:[02] GET https://www.mourjan.com/kw/kuwait/warehouses/rental/10854564/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[03] GET https://www.mourjan.com/ae/abu-dhabi/apartments/rental/11047384/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[04] GET https://www.mourjan.com/kw/kuwait/villas-and-houses/rental/11041455/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.network:[05] GET https://www.mourjan.com/ae/abu-dhabi/apartments/rental/11009663/ via 127.0.0.1:8090 proxy of type socks5
    DEBUG:grab.stat:RPS: 2.36 []
    DEBUG:grab.stat:RPS: 1.28 []
    DEBUG:grab.spider.parser_pipeline:Started shutdown of parser process: Thread-1
    DEBUG:grab.spider.parser_pipeline:Finished joining parser process: Thread-1
    DEBUG:grab.spider.base:Main process [pid=4860]: work done
    

    Why does this happen?

    bug 
    opened by EnzoRondo 15
  • POST request bug on Windows x64

    POST request bug on Windows x64

    With 32-bit systems everything is simpler: one kind person built curl for them. Would you be able to do the same for x64? Our whole team is struggling because of this bug. The library is powerful and a lot of work has gone into it, but it becomes useless with a bug like this. Or would it be possible to implement it not on curl but on sockets, for example?

    opened by ArturFis 15
  • Add the ability to accumulate proxies between proxy-list updates

    Add the ability to accumulate proxies between proxy-list updates

    Previously, on every proxy-list update the old list was overwritten by the new one. I added the ability to keep old proxies in the list and, on update, simply extend the list with proxies that are not yet in it (i.e. add only fresh SOCKS proxies without duplicating old ones). Also, previously the iterator was recreated on every proxy update, which leads to the following situation: if the update interval is relatively short, the iterator never reaches the end of the list before it is refreshed, so the last proxies in the list are never used. Now it is created only once, but instead of itertools.cycle I use my own cycle, because itertools caches the list, so it cannot be updated dynamically.
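
    A cycle that picks up list updates (unlike itertools.cycle, which caches its input) could look roughly like the sketch below; this is only an illustration of the idea described above, not the code from the pull request, and the proxy addresses are made up:

    def live_cycle(items):
        # Iterate by index instead of snapshotting the list, so proxies
        # appended later are yielded once the index reaches them.
        idx = 0
        while items:
            yield items[idx % len(items)]
            idx += 1

    proxies = ['127.0.0.1:9050', '127.0.0.1:9150']
    rotation = live_cycle(proxies)
    next(rotation)
    next(rotation)
    proxies.append('127.0.0.1:9250')  # yielded on a later next() call, no new iterator needed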

    opened by temptask 13
  • How to get rid of "operation-timeouted"?

    How to get rid of "operation-timeouted"?

    Hello. I have only just started using Grab, but I like it a lot.

    The problem is the following: I am parsing more than 3 million inventories from Steam. The inventories are just JSON files, some large and some small. Small inventories are parsed by Spider without problems, but larger inventories somehow badly hang the thread and then produce an error like:

    DEBUG:grab.stat:RPS: 0.26 [error:operation-timeouted=7]

    I searched everywhere: here, on GitHub, in the Google group, in the documentation, but found nothing about what this means or how to deal with it. I tried creating Grab instances manually, passing them connection_timeout and timeout and putting them into Tasks, but saw no visible effect.
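
    What the author describes trying would presumably look something like the sketch below, assuming Task accepts a preconfigured Grab instance via its grab argument; the URL and timeout values are illustrative and this is not a confirmed fix:

    from grab import Grab
    from grab.spider import Spider, Task


    class InventorySpider(Spider):
        def task_generator(self):
            g = Grab()
            # Raise both timeouts for large JSON responses
            g.setup(url='https://example.com/inventory.json',
                    timeout=120, connect_timeout=30)
            yield Task('inventory', grab=g)

        def task_inventory(self, grab, task):
            print(task.url)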

    opened by seniorjoinu 10
Releases(v0.6.40)
  • v0.6.40(May 14, 2018)

  • v0.6.39(May 10, 2018)

    Fixed

    • Fix bug: task generator works incorrectly
    • Fix bug: pypi package misses http api html file
    • Fix bug: dictionary changed size during iteration in stat logging
    • Fix bug: multiple errors in urllib3 transport and threaded network service
    • Fix short names of errors in stat logging
    • Improve error handling in urllib3 transport
    • Fix #299: multi-added errors
    • Fix #285: pyquery extension parses html incorrectly
    • Fix #267: normalize handling of too many redirect error
    • Fix #268: fix processing of utf cookies
    • Fix #241: form_fields() fails on some HTML forms
    • Fix normalize_unicode issue in debug post method
    • Fix #323: urllib3 transport fails with UnicodeError on some invalid URLs
    • Fix #31: support for multivalue form inputs
    • Fix #328, fix #67: remove hard link between document and grab
    • Fix #284: option headers affects content of common_headers
    • Fix #293: processing non-latin chars in Location header
    • Fix #324: refactor response header processing

    Changed

    • Refactor Spider into set of async. services
    • Add certifi dependency into grab[full] setup target
    • Fix #315: use psycopg2-binary package for postgres cache
    • Related to #206: do not use connection_reuse=False for proxy connections in spider

    Removed

    • Remove cache timeout option
    • Remove structured extension
  • v0.6.38(May 10, 2018)

    Fixed

    • Fix "error:None" in spider rps logging
    • Fix race condition bug in task generator

    Added

    • Add original_exc attribute to GrabNetworkError (and subclasses) that points to original exception

    Changed

    • Remove IOError from the ancestors of GrabNetworkError
    • Add default values to --spider-transport and --grab-transport options of crawl script
  • v0.6.37(May 10, 2018)

    Added

    • Add --spider-transport and --grab-transport options to crawl script
    • Add SOCKS5 proxy support in urllib3 transport

    Fixed

    • Fix #237: urllib3 transport fails without pycurl installed
    • Fix bug: incorrect spider request logging when cache is enabled
    • Fix bug: crawl script fails while trying to process a lock key
    • Fix bug: urllib3 transport fails while trying to throw GrabConnectionError exception
    • Fix bug: Spider add_task method fails while trying to log invalid URL error

    Removed

    • Remove obsoleted hammer_mode and hammer_timeout config options
  • v0.6.36(May 10, 2018)

    Added

    • Add pylint to default test set

    Fixed

    • Fix #229: using deprecated response object inside Grab

    Removed

    • Remove spider project template and start_project script