coURLan: Clean, filter, normalize, and sample URLs

Why coURLan?

“Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained.” (Edwards et al. 2001)

Avoid losing bandwidth capacity and processing time on webpages which are probably not worth the effort. This library provides an additional brain for web crawling, scraping and management of Internet archives. Specific functionality for crawlers: stay away from pages with little text content, or explicitly target synoptic pages to gather links.

This navigation help targets text-based documents (i.e. currently web pages expected to be in HTML format) and tries to guess the language of pages to allow for language-focused collection. Additional functions include straightforward domain name extraction and URL sampling.

Features

Separate the wheat from the chaff and optimize crawls by focusing on non-spam HTML pages containing primarily text. Most helpers revolve around the strict and language arguments:

  • Heuristics for triage of links
    • Targeting spam and unsuitable content-types
    • Language-aware filtering
    • Crawl management
  • URL handling
    • Validation
    • Canonicalization/Normalization
    • Sampling
  • Command-line interface (CLI) and Python tool

Let the coURLan fish out juicy bits for you!

Courlan

Here is a courlan, the wading bird the package is named after (image source: Limpkin at Harn's Marsh by Russ, CC BY 2.0).

Installation

This Python package is tested on Linux, macOS and Windows systems; it is compatible with Python 3.5 upwards. It is available on the package repository PyPI and can notably be installed with the Python package managers pip and pipenv:

$ pip install courlan # pip3 install on systems where both Python 2 and 3 are installed
$ pip install --upgrade courlan # to make sure you have the latest version
$ pip install git+https://github.com/adbar/courlan.git # latest available code (see build status above)

Python

check_url()

All useful operations chained in check_url(url):

>>> from courlan import check_url
# returns url and domain name
>>> check_url('https://github.com/adbar/courlan')
('https://github.com/adbar/courlan', 'github.com')
# noisy query parameters can be removed
>>> check_url('https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.org', strict=True)
('https://httpbin.org/redirect-to', 'httpbin.org')
# Check for redirects (HEAD request)
>>> url, domain_name = check_url(my_url, with_redirects=True)

Language-aware heuristics, notably based on internationalization markers in URLs, are available in lang_filter(url, language):

# optional argument targeting webpages in English or German
>>> url = 'https://www.un.org/en/about-us'
# success: returns clean URL and domain name
>>> check_url(url, language='en')
('https://www.un.org/en/about-us', 'un.org')
# failure: doesn't return anything
>>> check_url(url, language='de')
>>>
# optional argument: strict
>>> url = 'https://en.wikipedia.org/'
>>> check_url(url, language='de', strict=False)
('https://en.wikipedia.org', 'wikipedia.org')
>>> check_url(url, language='de', strict=True)
>>>
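
The underlying language filter can also be called on its own; a minimal sketch, assuming lang_filter() is exported at the package level and returns a boolean:

>>> from courlan import lang_filter
# True if the URL looks compatible with the target language, False otherwise (assumed return values)
>>> lang_filter('https://www.un.org/en/about-us', 'en')
True
>>> lang_filter('https://www.un.org/en/about-us', 'de')
False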

Define stricter restrictions on the expected content type with strict=True. This setting also blocks certain platforms and page types that crawlers should stay away from unless they target them explicitly, along with other black holes where machines get lost.

# strict filtering
>>> check_url('https://www.twitch.com/', strict=True)
# blocked as it is a major platform

Sampling by domain name

>>> from courlan import sample_urls
>>> my_sample = sample_urls(my_urls, 100)
# optional: exclude_min=None, exclude_max=None, strict=False, verbose=False
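
A short sketch of the sampler on a toy URL list, using the optional thresholds listed above (the URLs and figures are made up for illustration):

>>> from courlan import sample_urls
# toy list spanning two hosts
>>> my_urls = ['https://example.org/page/' + str(i) for i in range(1000)]
>>> my_urls += ['https://example.net/post/' + str(i) for i in range(5)]
# keep at most 100 URLs per domain and skip domains with fewer than 10 URLs
>>> my_sample = sample_urls(my_urls, 100, exclude_min=10, verbose=False)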

Web crawling and URL handling

Determine if a link leads to another host:

>>> from courlan import is_external
>>> is_external('https://github.com/', 'https://www.microsoft.com/')
True
# default
>>> is_external('https://google.com/', 'https://www.google.co.uk/', ignore_suffix=True)
False
# taking suffixes into account
>>> is_external('https://google.com/', 'https://www.google.co.uk/', ignore_suffix=False)
True

Other useful functions dedicated to URL handling:

  • get_base_url(url): strip the URL down to its base (protocol + host/domain)
  • get_host_and_path(url): decompose URLs in two parts: protocol + host/domain and path
  • get_hostinfo(url): extract domain and host info (protocol + host/domain)
  • fix_relative_urls(baseurl, url): prepend necessary information to relative links
>>> from courlan import *
>>> url = 'https://www.un.org/en/about-us'
>>> get_base_url(url)
'https://www.un.org'
>>> get_host_and_path(url)
('https://www.un.org', '/en/about-us')
>>> get_hostinfo(url)
('un.org', 'https://www.un.org')
>>> fix_relative_urls('https://www.un.org', 'en/about-us')
'https://www.un.org/en/about-us'
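
These helpers can be chained, for instance to resolve relative links found on a page against its base URL; a minimal sketch relying only on the functions shown above (the relative links are hypothetical):

>>> from courlan import get_base_url, fix_relative_urls
>>> page_url = 'https://www.un.org/en/about-us'
>>> base = get_base_url(page_url)
# resolve two hypothetical relative links against the base URL
>>> [fix_relative_urls(base, link) for link in ('en/contact', 'en/history')]
['https://www.un.org/en/contact', 'https://www.un.org/en/history']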

Other filters dedicated to crawl frontier management:

  • is_not_crawlable(url): check for deep web or pages generally not usable in a crawling context
  • is_navigation_page(url): check for navigation and overview pages
>>> from courlan import is_navigation_page, is_not_crawlable
>>> is_navigation_page('https://www.randomblog.net/category/myposts')
True
>>> is_not_crawlable('https://www.randomblog.net/login')
True
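
Taken together, these checks can prune a list of candidate links before fetching them; a minimal sketch, assuming check_url() returns None for rejected URLs as in the examples above:

>>> from courlan import check_url, is_not_crawlable
>>> candidates = ['https://www.randomblog.net/login', 'https://www.un.org/en/about-us']
>>> frontier = []
>>> for link in candidates:
...     result = check_url(link)  # None if the URL is filtered out
...     if result is not None and not is_not_crawlable(result[0]):
...         frontier.append(result[0])
...
>>> frontier
['https://www.un.org/en/about-us']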

Python helpers

Helper function, scrub and normalize:

>>> from courlan import clean_url
>>> clean_url('HTTPS://WWW.DWDS.DE:80/')
'https://www.dwds.de'

Basic scrubbing only:

>>> from courlan import scrub_url
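
A minimal illustration of the call (the exact scrubbing steps are version-dependent; only the removal of surrounding whitespace is assumed here):

>>> scrub_url('  https://www.dwds.de ')
'https://www.dwds.de'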

Basic canonicalization/normalization only, i.e. modifying and standardizing URLs in a consistent manner:

>>> from urllib.parse import urlparse
>>> from courlan import normalize_url
>>> my_url = normalize_url(urlparse(my_url))
# passing URL strings directly also works
>>> my_url = normalize_url(my_url)
# remove unnecessary components and re-order query elements
>>> normalize_url('http://test.net/foo.html?utm_source=twitter&post=abc&page=2#fragment', strict=True)
'http://test.net/foo.html?page=2&post=abc'

Basic URL validation only:

>>> from courlan import validate_url
>>> validate_url('http://1234')
(False, None)
>>> validate_url('http://www.example.org/')
(True, ParseResult(scheme='http', netloc='www.example.org', path='/', params='', query='', fragment=''))

Command-line

The main functions are also available through a command-line utility.

$ courlan --inputfile url-list.txt --outputfile cleaned-urls.txt
$ courlan --help
usage: courlan [-h] -i INPUTFILE -o OUTPUTFILE [-d DISCARDEDFILE] [-v]
               [--strict] [-l LANGUAGE] [-r] [--sample]
               [--samplesize SAMPLESIZE] [--exclude-max EXCLUDE_MAX]
               [--exclude-min EXCLUDE_MIN]
optional arguments:
  -h, --help            show this help message and exit

I/O:
  Manage input and output

  -i INPUTFILE, --inputfile INPUTFILE
                        name of input file (required)
  -o OUTPUTFILE, --outputfile OUTPUTFILE
                        name of output file (required)
  -d DISCARDEDFILE, --discardedfile DISCARDEDFILE
                        name of file to store discarded URLs (optional)
  -v, --verbose         increase output verbosity

Filtering:
  Configure URL filters

  --strict              perform more restrictive tests
  -l LANGUAGE, --language LANGUAGE
                        use language filter (ISO 639-1 code)
  -r, --redirects       check redirects

Sampling:
  Use sampling by host, configure sample size

  --sample              use sampling
  --samplesize SAMPLESIZE
                        size of sample per domain
  --exclude-max EXCLUDE_MAX
                        exclude domains with more than n URLs
  --exclude-min EXCLUDE_MIN
                        exclude domains with less than n URLs

License

coURLan is distributed under the GNU General Public License v3.0. If you wish to redistribute this library but feel bound by the license conditions, please consider interacting at arm's length, multi-licensing with compatible licenses, or contacting me.

See also GPL and free software licensing: What's in it for business?

Settings

courlan is optimized for English and German but its generic approach is also usable in other contexts.

To review the details of strict URL filtering, see settings.py. These settings can be overridden by cloning the repository, editing settings.py and re-installing the package locally.

Contributing

Contributions are welcome!

Feel free to file issues on the dedicated page.

Author

This effort is part of methods to derive information from web documents in order to build text databases for research, chiefly linguistic analysis and natural language processing. Extracting and pre-processing web texts to the exacting standards of scientific research presents a substantial challenge. Web corpus construction involves numerous design decisions, and this software package can help facilitate text data collection and enhance corpus quality.

Contact: see homepage or GitHub.

Software ecosystem: see this graphic.

Similar work

Other Python libraries perform similar normalization tasks but don't entail language or content filters. They also don't necessarily focus on crawl optimization.

References

  • Cho, J., Garcia-Molina, H., & Page, L. (1998). Efficient crawling through URL ordering. Computer Networks and ISDN Systems, 30(1–7), 161–172.
  • Edwards, J., McCurley, K. S., & Tomlin, J. A. (2001). An adaptive model for optimizing performance of an incremental web crawler. In Proceedings of the 10th International Conference on World Wide Web (WWW '01), pp. 106–113.