Photon

Incredibly fast crawler designed for OSINT.

Overview

Photon Wiki: How To Use · Compatibility · Photon Library · Contribution · Roadmap

Key Features

Data Extraction

Photon can extract the following data while crawling:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, amazon buckets etc.)
  • Files (pdf, png, xml etc.)
  • Secret keys (auth/API keys & hashes)
  • JavaScript files & Endpoints present in them
  • Strings matching custom regex pattern
  • Subdomains & DNS related data

The extracted information is saved in an organized manner or can be exported as json.
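
To illustrate the kind of extraction listed above, here is a minimal sketch; the regexes below are simplified stand-ins for illustration, not the patterns Photon actually ships:

```python
import re

# Simplified stand-ins for the kind of patterns a crawler like Photon
# applies to page text; the real patterns in Photon's source are more elaborate.
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.]+')
PARAM_URL_RE = re.compile(r'https?://[^\s"\'<>]+\?[^\s"\'<>]+')

def extract_intel(page_text):
    """Pull emails and parameterised (fuzzable) URLs out of raw page text."""
    return {
        'emails': sorted(set(EMAIL_RE.findall(page_text))),
        'fuzzable': sorted(set(PARAM_URL_RE.findall(page_text))),
    }

page = 'Contact admin@example.com or see http://example.com/gallery.php?id=2'
print(extract_intel(page))
```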


Flexible

Control timeout, delay, add seeds, exclude URLs matching a regex pattern and other cool stuff. The extensive range of options provided by Photon lets you crawl the web exactly the way you want.

Genius

Photon's smart thread management & refined logic give you top-notch performance.

Still, crawling can be resource intensive, but Photon has a few tricks up its sleeve. For example, you can fetch URLs archived by archive.org and use them as seeds with the --wayback option.
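
The --wayback idea rests on archive.org's public CDX API. The query below uses that documented interface; whether Photon issues exactly this request is an assumption:

```python
from urllib.parse import urlencode

def wayback_seed_url(domain):
    """Build a CDX API query that lists archived URLs for a domain,
    suitable for use as crawl seeds."""
    params = urlencode({
        'url': domain + '/*',       # every archived URL under the domain
        'output': 'json',
        'fl': 'original',           # only return the original URL field
        'collapse': 'urlkey',       # deduplicate by canonicalised URL
    })
    return 'https://web.archive.org/cdx/search/cdx?' + params

print(wayback_seed_url('example.com'))
```

Fetching that URL (with any HTTP client) returns a JSON list of archived URLs that can be fed straight into the crawl queue.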

Plugins

Docker

Photon can be launched using a lightweight Python-Alpine (103 MB) Docker image.

$ git clone https://github.com/s0md3v/Photon.git
$ cd Photon
$ docker build -t photon .
$ docker run -it --name photon photon:latest -u google.com

To view results, you can either head over to the local docker volume, which you can find by running docker inspect photon, or mount the target loot folder:

$ docker run -it --name photon -v "$PWD:/Photon/google.com" photon:latest -u google.com

Frequent & Seamless Updates

Photon is under heavy development; updates for fixing bugs, optimizing performance & adding new features are rolled out regularly.

If you would like to see the features and issues that are being worked on, check the Development project board.

Updates can be checked for & installed with the --update option. Photon has seamless update capabilities, which means you can update it without losing any of your saved data.

Contribution & License

You can contribute in the following ways:

  • Report bugs
  • Develop plugins
  • Add more "APIs" for ninja mode
  • Give suggestions to make it better
  • Fix issues & submit a pull request

Please read the guidelines before submitting a pull request or issue.

Do you want to have a conversation in private? Hit me up on my twitter, inbox is open :)

Photon is licensed under the GPL v3.0 license.

Comments
  • error when scanning IP


    Line 169 errors out if you run Photon against an IP. The easiest fix might be to just add a try/except, but there is probably a more elegant solution.

    I'm pretty sure this was working before.

    [email protected]:/opt/Photon# python /opt/Photon/photon.py -u http://192.168.0.213:80
          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/ v1.1.1
    
    Traceback (most recent call last):
      File "/opt/Photon/photon.py", line 169, in <module>
        domain = get_fld(host, fix_protocol=True) # Extracts top level domain out of the host
      File "/usr/local/lib/python2.7/dist-packages/tld/utils.py", line 387, in get_fld
        search_private=search_private
      File "/usr/local/lib/python2.7/dist-packages/tld/utils.py", line 339, in process_url
        raise TldDomainNotFound(domain_name=domain_name)
    tld.exceptions.TldDomainNotFound: Domain 192.168.0.213 didn't match any existing TLD name!
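
One way to implement that fix, sketched with the stdlib ipaddress module; the guard shown in the comment is an assumption about how it would slot into photon.py:

```python
import ipaddress

def is_ip(host):
    """True for bare IPv4/IPv6 addresses, for which tld.get_fld would raise."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

# The guard around the failing line could then look like:
#   domain = host if is_ip(host) else get_fld(host, fix_protocol=True)
print(is_ip('192.168.0.213'), is_ip('example.com'))
```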
    
    opened by sethsec 18
  • Option to skip crawling of URLs that match a regex pattern


    Added option to skip crawling of URLs that match a regex pattern, changed handling of seeds, and removed double spaces.

    I am assuming this is what you meant in the ideas column; tell me if I'm way off though :)

    opened by connorskees 17
  • Added two new options: -o/--output and --stdout


    Awesome tool. I've been looking for something like this for a while to integrate with something I am building! I added two new options for you to consider:

    1. -o/--output option that allows the user to specify an output directory (overriding the default).

    Command: python photon.py -u http://10.10.10.102:80 -l 2 -t100 -o /pentest/photontest

    In this case, all of the output files will be written to /pentest/photontest:

    [email protected]:/pentest/photontest# ls -ltr
    total 24
    -rw-r--r-- 1 root root    0 Jul 25 11:11 scripts.txt
    -rw-r--r-- 1 root root 3260 Jul 25 11:11 robots.txt
    -rw-r--r-- 1 root root 3260 Jul 25 11:11 links.txt
    -rw-r--r-- 1 root root   17 Jul 25 11:11 intel.txt
    -rw-r--r-- 1 root root  437 Jul 25 11:11 fuzzable.txt
    -rw-r--r-- 1 root root  146 Jul 25 11:11 files.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 failed.txt
    -rw-r--r-- 1 root root   96 Jul 25 11:11 external.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 endpoints.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 custom.txt
    
    2. A --stdout option that allows the user to print everything to stdout, so they can pipe it into another tool or redirect all output to a file with a shell redirector.

    Command: [email protected]:/opt/dev/Photon# python photon.py -u http://10.10.10.9:80 -l 2 -t100 --stdout

    Output:

          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/
    
    [+] URLs retrieved from robots.txt: 68
    [~] Level 1: 69 URLs
    [!] Progress: 69/69
    [~] Level 2: 9 URLs
    [!] Progress: 9/9
    [~] Crawling 0 JavaScript files
    
    --------------------------------------------------
    [+] URLs: 78
    [+] Intel: 1
    [+] Files: 1
    [+] Endpoints: 0
    [+] Fuzzable URLs: 9
    [+] Custom strings: 0
    [+] JavaScript Files: 0
    [+] External References: 3
    --------------------------------------------------
    [!] Total time taken: 0:32
    [!] Average request time: 0.40
    [+] Results saved in 10.10.10.9:80 directory
    
    All Results:
    
    http://10.10.10.9:80/themes/*.gif
    http://10.10.10.9:80/modules/*.png
    http://10.10.10.9:80/INSTALL.mysql.txt
    http://10.10.10.9:80/install.php
    http://10.10.10.9:80/scripts/
    http://10.10.10.9:80/node/add/
    http://10.10.10.9:80/?q=admin/
    http://10.10.10.9:80/themes/*.png
    http://10.10.10.9:80/modules/*.gif
    http://10.10.10.9:80
    http://10.10.10.9:80/includes/
    http://10.10.10.9:80/?q=user/password/
    http://10.10.10.9:80/INSTALL.txt
    http://10.10.10.9:80/profiles/
    http://10.10.10.9:80/themes/bartik/css/ie6.css?on28x3
    http://10.10.10.9:80/MAINTAINERS.txt
    http://10.10.10.9:80/themes/bartik/css/ie.css?on28x3
    http://10.10.10.9:80/modules/*.jpeg
    http://10.10.10.9:80/misc/*.gif
    

    The last thing I changed is the way you were wiping the directory each time you ran the tool to get clean output. If you accept the -o option, which lets the user specify the directory, you can't just blindly delete that directory anymore (can't trust user input ;)). So I added a cleaner way: overwrite each file (replacing the w+ with w), which accomplishes the same thing without needing to delete directories.
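
The "overwrite instead of delete" approach can be sketched like this (file and directory names are illustrative, not Photon's actual layout):

```python
import os
import tempfile

def write_results(out_dir, name, lines):
    """Create the output directory if needed, then write the result file
    in 'w' mode, which truncates any previous run's contents without
    ever deleting the directory itself."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, name)
    with open(path, 'w') as f:  # 'w' truncates; no rmtree needed
        f.write('\n'.join(lines) + '\n')
    return path

out = tempfile.mkdtemp()
write_results(out, 'links.txt', ['http://example.com'])
p = write_results(out, 'links.txt', ['http://example.com/about'])  # overwrites
print(open(p).read())
```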

    enhancement 
    opened by sethsec 13
  • Added -v/--verbose option + fixed a few logic errors in detecting bad js files

    Changes:

    • Added the -v/--verbose option for verbose output.
    • Fixed 2 logical errors in detecting bad JS scripts; with this fix, the number of bad JS files has almost gone to zero.
    opened by 0xInfection 11
  • RuntimeError: Set changed size during iteration in Photon 1.0.7


    [!] Progress: 1/1
    [~] Level 2: 387 URLs
    [!] Progress: 387/387
    [~] Level 3: 18078 URLs
    [!] Progress: 18078/18078
    [~] Level 4: 90143 URLs
    [!] Progress: 39750/90143^C
    [~] Crawling 0 JavaScript files
    
    Traceback (most recent call last):
      File "photon.py", line 454, in <module>
        for url in external:
    RuntimeError: Set changed size during iteration
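
A minimal reproduction of this crash and the usual fix, iterating over a snapshot so the set can be mutated safely during the loop:

```python
external = {'http://a.example', 'http://b.example'}

# for url in external:          # adding to `external` inside this loop raises
#     external.add(url + '/x')  # RuntimeError: Set changed size during iteration

for url in list(external):      # list(...) takes a snapshot; mutation is safe
    external.add(url + '/x')

print(sorted(external))
```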
    
    invalid 
    opened by thistehneisen 10
  • A couple minor issues


    SSL issues

    You're ignoring SSL verification:

    /usr/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:843: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
      InsecureRequestWarning)
    

    You can fix this by adding the following to the top of the file (or wherever you feel like putting it):

    import warnings
    warnings.filterwarnings("ignore")
    

    Now, this is bad practice for obvious reasons; however, it makes sense why you're doing it. The above solution is the quickest and simplest way to solve the problem and keep the warnings from being annoying.
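
A slightly less blunt variant silences only the urllib3 message instead of every warning. Shown here as a self-contained stdlib demo rather than a real HTTPS request:

```python
import warnings

# Filter only the "Unverified HTTPS request" message; `message` is a regex
# matched against the start of the warning text.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    warnings.filterwarnings('ignore', message='Unverified HTTPS request')
    warnings.warn('Unverified HTTPS request is being made.')  # suppressed
    warnings.warn('some unrelated warning')                   # still recorded

print(len(caught))  # 1: only the unrelated warning got through
```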



    Dammit Unicode

    If I supply a URL that is in, let's say, Russian, and try to extract all the data:

          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/ 
    
     URLs retrieved from robots.txt: 5
     Level 1: 6 URLs
     Progress: 6/6
     Level 2: 35 URLs
     Progress: 35/35
     Level 3: 7 URLs
     Progress: 7/7
     Crawling 7 JavaScript files
     Progress: 7/7
    Traceback (most recent call last):
      File "photon.py", line 429, in <module>
        f.write(x + '\n')
    UnicodeEncodeError: 'ascii' codec can't encode character u'\u200b' in position 7: ordinal not in range(128)
    

    The easiest solution would be to just ignore characters like this and continue, with a warning to the user that they were ignored.
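
That suggestion, sketched in Python 3 (the filename is illustrative): opening the output file with an explicit UTF-8 encoding avoids the ascii-codec crash entirely, and errors='replace' is a fallback for anything unencodable.

```python
import os
import tempfile

url = 'http://example.com/\u200bpage'  # zero-width space, as in the traceback

path = os.path.join(tempfile.mkdtemp(), 'links.txt')
with open(path, 'w', encoding='utf-8', errors='replace') as f:
    f.write(url + '\n')  # no UnicodeEncodeError

with open(path, encoding='utf-8') as f:
    print(f.read().strip() == url)  # the URL round-trips cleanly
```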

    opened by Ekultek 10
  • Adding code to detect and report broken links.


    Current issue: broken links for which the server returns a 404 status (or similar error) are not reported as failed links; instead, these erroneous pages are parsed for text content.

    opened by snehm 9
  • Multiple improvements

    Intel is searched only inside the page's plain text, to avoid picking up tokens from garbage JavaScript code. Better regular expressions could allow searching for intel inside JavaScript code as well.

    v1.3.0

    • Added more intels (GENERIC_URL, BRACKET_URL, BACKSLASH_URL, HEXENCODED_URL, URLENCODED_URL, B64ENCODED_URL, IPV4, IPV6, EMAIL, MD5, SHA1, SHA256, SHA512, YARA_PARSE, CREDIT_CARD)
    • Intel search only applied to text (not inside javascript or html tags)
    • proxy support with -p, --proxy option (http proxy only)
    • minor fixes and pep8 format
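
Two of the new intel patterns can be illustrated like this; these regexes are simplified stand-ins for illustration, not the ones shipped in v1.3.0:

```python
import re

# Simplified illustrations of the IPV4 and MD5 intel patterns.
IPV4 = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')
MD5 = re.compile(r'\b[a-f0-9]{32}\b')

text = 'server 10.0.0.1 hash d41d8cd98f00b204e9800998ecf8427e inside plain text'
print(IPV4.findall(text))  # ['10.0.0.1']
print(MD5.findall(text))   # ['d41d8cd98f00b204e9800998ecf8427e']
```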

    Tested on

    os: Linux Mint 19.1
    python: 3.6.7
    
    opened by oXis 8
  • get_lfd

    Hi, after using the --update option I get this error:

    Traceback (most recent call last):
      File "photon.py", line 187, in <module>
        domain = topLevel(main_url)
      File "photon.py", line 182, in topLevel
        toplevel = tld.get_fld(host, fix_protocol=True)
    AttributeError: 'module' object has no attribute 'get_fld'

    Any clue? I also installed it on another system: still the same. Changed the target: same. Deleted everything and cloned again: same.

    OS: Kali3. Thanks

    invalid 
    opened by psychomad 7
  • Exceptions in threads during scanning in Level 1 & 2

    Exception in thread Thread-4694:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 754, in run
        self.__target(*self.__args, **self.__kwargs)
      File "photon.py", line 211, in extractor
        if is_link(link, processed, files):
      File "/usr/share/Photon/core/utils.py", line 41, in is_link
        is_file = url.endswith(BAD_TYPES)
    TypeError: endswith first arg must be str, unicode, or tuple, not list
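
A minimal reproduction of this TypeError and its fix: str.endswith accepts a single string or a tuple of suffixes, but not a list.

```python
BAD_TYPES = ['.png', '.jpg', '.css']  # a list, as in the failing code
url = 'http://example.com/logo.png'

try:
    url.endswith(BAD_TYPES)
except TypeError:
    print('list argument raises TypeError')

print(url.endswith(tuple(BAD_TYPES)))  # converting to a tuple works: True
```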

    opened by 4k4xs4pH1r3 6
  • Improve dev-experience

    Add a .gitignore and a requirements file, just to improve the development experience.

    Doing this:

    • You won't need to remove compiled Python files or other unneeded files.
    • You can just "pip install -r requirements.txt" and have all the third-party libraries needed to run the script.
    invalid 
    opened by dgarana 5
  • TLSCertVerificationDisabled

    In core/requester.py, at line 48:

    response = SESSION.get

    Certificate verification is disabled by setting verify to False in the get call. This may allow man-in-the-middle attacks.

    opened by yonasuriv 0
  • .well-known files

    I see that this project examines robots.txt and sitemap.xml. I was wondering if you could add some of the other .well-known files like ads.txt, security.txt and others found at https://well-known.dev/resources/

    opened by WebBreacher 0
  • No generating a DNS map

    I can't get a DNS map; it is only created when the target is google.com or example.com.

    I'm using Python 3 and the command: python3 photon.py -u "http://example.com" --dns

    opened by DEPSTRCZ 0
Releases(v1.3.0)
  • v1.3.0(Apr 5, 2019)

    • Dropped Python < 3.2 support
    • Removed Ninja mode
    • Fixed a bug in link parsing
    • Fixed Unicode output
    • Fixed a bug which caused URLs to be treated as files
    • Intel is now associated with the URL where it was found
    Source code(tar.gz)
    Source code(zip)
  • v1.2.1(Jan 26, 2019)

  • v1.1.6(Jan 25, 2019)

  • v1.1.5(Oct 24, 2018)

  • v1.1.4(Sep 18, 2018)

  • v1.1.3(Sep 6, 2018)

  • v1.1.2(Aug 7, 2018)

    • Code refactor
    • Better identification of external URLs
    • Fixed a major bug that made several intel URLs pass under the radar
    • Fixed a major bug that caused non-html type content to be marked as a crawlable URL
    Source code(tar.gz)
    Source code(zip)
  • v1.1.1(Sep 4, 2018)

    • Added --wayback
    • Fixed progress bar for python > 3.2
    • Added /core/config.py for easy customization
    • --dns now saves subdomains in subdomains.txt
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Aug 29, 2018)

    • Use of ThreadPoolExecutor for 2x speed (for python > 3.2)
    • Fixed mishandling of urls starting with //
    • Removed a redundant try-except statement
    • Evaluate entropy of found keys to avoid false positives
    Source code(tar.gz)
    Source code(zip)
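
The entropy evaluation introduced in v1.1.0 can be sketched as a Shannon-entropy check: random-looking API keys score high, ordinary identifiers score low. The threshold and the sample key below are illustrative assumptions, not Photon's actual values.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string s."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

print(shannon_entropy('aaaaaaaa'))                    # repetitive: ~0 bits
print(shannon_entropy('AKIA3X9q7Zr1mP0dW5kB') > 3.5)  # random-looking: True
```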
  • v1.0.9(Aug 19, 2018)

  • v1.0.8(Aug 4, 2018)

    • added --exclude option
    • Better regex and code logic to favor performance
    • Fixed a bug that caused dnsdumpster to fail if target was a subdomain
    • Fixed a bug that caused a crash if run outside "Photon" directory
    • Fixed a bug in file saving (specific to python3)
    Source code(tar.gz)
    Source code(zip)
  • v1.0.7(Jul 28, 2018)

    • Added --timeout option
    • Added --output option
    • Added --user-agent option
    • Replaced lxml with regex
    • Better logic for favoring performance
    • Added bigger and separate file for user-agents
    Source code(tar.gz)
    Source code(zip)
  • v1.0.6(Jul 26, 2018)

  • v1.0.5(Jul 26, 2018)

  • v1.0.4(Jul 25, 2018)

    • Fixed an issue which caused regular links to be saved in robots.txt
    • Simplified flash function
    • Removed -n as an alias of --ninja
    • Added --only-urls option
    • Refactored code for readability
    • Skip saving files if the content is empty
    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Jul 24, 2018)

    • Introduced plugins
    • Added dnsdumpster plugin
    • Fixed non-ascii character handling, again
    • 404 pages are now added to failed list
    • Handling exceptions in jscanner
    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Jul 23, 2018)

    • Proper handling of null response from robots.txt & sitemap.xml
    • Python2 compatibility
    • Proper handling of non-ascii chars
    • Added ability to specify custom regex pattern
    • Display total time taken and average time per request
    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Jul 23, 2018)

Owner
Somdev Sangwan
I make things, I break things and I make things that break things.