CloudProxy hides your scraper's IP behind the cloud.

CloudProxy

About The Project

The purpose of CloudProxy is to hide your scraper's IP behind the cloud. It allows you to spin up a pool of proxies using popular cloud providers with just an API token. No configuration needed.

CloudProxy exposes an API with the IPs and credentials of the provisioned proxies.

Providers supported:

  • DigitalOcean
  • AWS

Planned:

  • Google Cloud
  • Azure
  • Scaleway
  • Vultr

Inspired by

This project was inspired by Scrapoxy, though that project no longer seems actively maintained.

The primary advantage of CloudProxy over Scrapoxy is that CloudProxy only requires an API token from a cloud provider. CloudProxy automatically deploys and configures the proxy on the cloud instances without the user needing to preconfigure or copy an image.

Please always scrape nicely, respectfully and do not slam servers.

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

All you need is:

  • Docker

Installation

Environment variables:

Required

USERNAME - set the username for the forward proxy.

PASSWORD - set the password for the forward proxy.

Optional

AGE_LIMIT - set the age limit for your forward proxies in seconds. Once the age limit is reached, the proxy is replaced. A value of 0 disables the feature. Default value: 0.

See the individual provider pages for the environment variables each supported provider requires.

Docker (recommended)

For example:

docker run -e USERNAME='CHANGE_THIS_USERNAME' \
    -e PASSWORD='CHANGE_THIS_PASSWORD' \
    -e DIGITALOCEAN_ENABLED=True \
    -e DIGITALOCEAN_ACCESS_TOKEN='YOUR SECRET ACCESS KEY' \
    -it -p 8000:8000 laffin/cloudproxy:latest

It is recommended to pin the Docker image to a version tag, e.g. laffin/cloudproxy:0.3.0-beta; see the releases page for the latest version.

Usage

CloudProxy exposes an API on localhost:8000. Your application can use the API below to retrieve the IPs (with credentials) of the deployed proxy servers, then route its requests through those proxies.

The logic to cycle through IPs for proxying will need to be in your application, for example:

import random
import requests


# Returns a random proxy from CloudProxy
def random_proxy():
    ips = requests.get("http://localhost:8000").json()
    return random.choice(ips['ips'])


proxies = {"http": random_proxy(), "https": random_proxy()}
my_request = requests.get("https://api.ipify.org", proxies=proxies)
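
If a request through one proxy fails, your application can simply retry through another proxy from the pool. A minimal sketch of that pattern (fetch_with_retry is a hypothetical helper; the retry count and timeout are arbitrary choices, not CloudProxy defaults):

import random
import requests


def fetch_with_retry(url, retries=3, timeout=10):
    # Try the request through up to `retries` different proxies from the pool.
    ips = requests.get("http://localhost:8000").json()["ips"]
    for proxy in random.sample(ips, min(retries, len(ips))):
        try:
            return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=timeout)
        except requests.RequestException:
            continue  # proxy unreachable or slow, try the next one
    raise RuntimeError("all selected proxies failed")


print(fetch_with_retry("https://api.ipify.org").text)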

CloudProxy UI

You can manage CloudProxy via the API and a UI. You can access the UI at http://localhost:8000/ui.

You can scale up and down your proxies and remove them for each provider via the UI.

CloudProxy API

List available proxy servers

Request

GET /

curl -X 'GET' 'http://localhost:8000/' -H 'accept: application/json'

Response

{"ips":["http://username:password:192.168.0.1:8899", "http://username:password:192.168.0.2:8899"]}

List random proxy server

Request

GET /random

curl -X 'GET' 'http://localhost:8000/random' -H 'accept: application/json'

Response

["http://username:password:192.168.0.1:8899"]

Remove proxy server

Request

DELETE /destroy

curl -X 'DELETE' 'http://localhost:8000/destroy?ip_address=192.1.1.1' -H 'accept: application/json'

Response

["Proxy to be destroyed"]

Get provider

Request

GET /providers/digitalocean

curl -X 'GET' 'http://localhost:8000/providers/digitalocean' -H 'accept: application/json'

Response

  {
    "ips": [
      "192.1.1.2",
      "192.1.1.3"
    ],
    "scaling": {
      "min_scaling": 2,
      "max_scaling": 2
    }
  }

Update provider

Request

PATCH /providers/digitalocean

curl -X 'PATCH' 'http://localhost:8000/providers/digitalocean?min_scaling=5&max_scaling=5' -H 'accept: application/json'

Response

  {
    "ips": [
      "192.1.1.2",
      "192.1.1.3"
    ],
    "scaling": {
      "min_scaling": 5,
      "max_scaling": 5
    }
  }

CloudProxy runs on a 30-second schedule: it checks whether the minimum scaling has been met and, if not, deploys the required number of proxies. New proxies appear in the IP list once they are deployed and ready to be used.
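
An application can use this behaviour to scale up before a large job and wait for the pool to catch up. A rough sketch, assuming the DigitalOcean provider and an arbitrary polling interval (scale_and_wait is a hypothetical helper, not part of CloudProxy):

import time
import requests

API = "http://localhost:8000"


def scale_and_wait(provider="digitalocean", target=5, poll_seconds=30):
    # Raise min/max scaling for a provider, then poll until the pool catches up.
    requests.patch(
        f"{API}/providers/{provider}",
        params={"min_scaling": target, "max_scaling": target},
    )
    while True:
        live = len(requests.get(f"{API}/providers/{provider}").json()["ips"])
        if live >= target:
            return live
        time.sleep(poll_seconds)  # CloudProxy checks scaling roughly every 30 seconds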

Roadmap

The project is in early alpha with limited features. In the future, more providers will be supported, autoscaling will be implemented, and a richer API will allow blacklisting and recycling of proxies.

See the open issues for a list of proposed features (and known issues).

Limitations

This method of scraping via cloud providers has limitations: many websites have anti-bot protections and blacklists in place which can limit the effectiveness of CloudProxy. Many websites block datacenter IPs, and an IP may already be tarnished due to IP recycling. Rotating the CloudProxy proxies regularly may improve results. The best solution for scraping is a proxy service providing residential IPs, which are less likely to be blocked but are much more expensive. CloudProxy is a much cheaper alternative for scraping sites that neither block datacenter IPs nor have advanced anti-bot protection. This point is frequently made when people share this project, which is why I am including it in the README.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Christian Laffin - @christianlaffin - [email protected]

Project Link: https://github.com/claffin/cloudproxy

Acknowledgements

Comments
  • Maybe add feature for multiple digital ocean accounts?

    I'm loving this, it's perfect for what I need. Maybe a feature to consider in the future is to be able to use multiple digital ocean accounts as there's a limit of 10 for new users?

    Thanks for releasing this!

    enhancement 
    opened by sblfc 11
  • Digitalocean Droplet created but not showing in the API

    Hello

    I launched the container as per the documentation, but I'm still not seeing anything in the UI or the API. I can see that it created the droplets, and it keeps removing/adding new droplets every few minutes.

    What am I missing?

    export DIGITALOCEAN_ENABLED=True
    export DIGITALOCEAN_ACCESS_TOKEN="XXXXX"
    export DIGITALOCEAN_MIN_SCALING=2
    export DIGITALOCEAN_MAX_SCALING=2
    export DIGITALOCEAN_SIZE="s-1vcpu-512mb-10gb"
    export DIGITALOCEAN_REGION="fra1"
    export AGE_LIMIT="1200"
    export USERNAME="XXX"
    export PASSWORD='XXXX'
    
    docker run -e USERNAME=$USERNAME \
        -e PASSWORD=$PASSWORD \
        -e DIGITALOCEAN_ENABLED=$DIGITALOCEAN_ENABLED \
        -e DIGITALOCEAN_ACCESS_TOKEN=$DIGITALOCEAN_ACCESS_TOKEN \
        -e DIGITALOCEAN_MIN_SCALING=$DIGITALOCEAN_MIN_SCALING \
        -e DIGITALOCEAN_MAX_SCALING=$DIGITALOCEAN_MAX_SCALING \
        -e DIGITALOCEAN_SIZE=$DIGITALOCEAN_SIZE \
        -e DIGITALOCEAN_REGION=$DIGITALOCEAN_REGION \
        -e AGE_LIMIT=$AGE_LIMIT \
        -it -p 8000:8000 laffin/cloudproxy:latest
    
    bug 
    opened by mrahmadt 5
  • Allowed IP as alternative auth

    I'm actually working with a project with the same name, CloudProxy (https://github.com/NoahCardoza/CloudProxy), which is used to pass the Cloudflare challenge. To do that it uses Chrome with Puppeteer. It starts a Chrome browser and passes a proxy via parameter, but that doesn't accept a username and password (https://superuser.com/questions/902620/google-chrome-proxy-settings-with-username-and-password). This was my motivation to fork this project and implement an alternative way to authenticate with the proxy other than using a username and password. I used the allowed_ip parameter from tinyproxy.

    Implemented:

    • Alternative authentication using ALLOWED_IP
    • Environment variables that are boolean now use real booleans instead of strings. Instead of ENABLE_AWS=True you should now write ENABLE_AWS=true, and in code we now use (if ENABLE_AWS:) instead of (if ENABLE_AWS == 'True').
    • To check whether a proxy is working, we now try to load google.com through the proxy instead of checking the proxy URL directly. For some strange reason the proxy server responds with 403 Forbidden when you visit it directly (e.g. http://10.12.23.3:8899) while it's using allowed_ip, but it really is working as a proxy. Increased the timeout to 6 to let the proxy do its job.
    • Added optional parameter AWS_KEY_NAME to pass a key pair for logging in to the EC2 instance.
    • Added optional parameter PROXY_STEALTH=true (default false) to put the proxy in stealth mode so it does not send proxy headers in its requests to the sites.
    • Update README.md and docs/aws.md

    This is the first time I've done something with Python and Docker, so it's probably not really optimized and I guess there is room to improve. I tested it on DigitalOcean, Hetzner and AWS and it works flawlessly without problems atm.

    I published a docker image: bubexel/cloudprroxy:latest

    Thank you for your work on it!

    Greetings

    opened by serk7 5
  • Unable to authenticate through DigitalOcean

    Expected Behavior

    Running the command:

    "docker run -e USERNAME='xxx' -e PASSWORD='xx' -e DIGITALOCEAN_ENABLED=True -e DIGITALOCEAN_ACCESS_TOKEN='xxx' -it -p 8000:8000 laffin/cloudproxy:latest

    Username & Password being alphanumeric. Token validated by using:

    "doctl auth init -t "xxx"

    I get the following error:

    File "/usr/local/lib/python3.8/site-packages/digitalocean/baseapi.py", line 233, in get_data raise DataReadError(msg) │ └ 'Unable to authenticate you' └ <class 'digitalocean.DataReadError'>

    I think my bug is identical to George Roscoe's. I've never had an issue running this before; I ran this a few weeks ago and it worked completely fine.

    bug 
    opened by HazzaWaltham123 4
  • Requests to AWS starts throwing [Errno 113] No route to host

    I've run into an issue that I can't seem to pinpoint so I'm not sure if it's due to CloudProxy (TinyProxy).

    I've set up CloudProxy to run in Docker with 15 AWS Spot instances. Then I've written a Python Flask script that fetches the IPs from CloudProxy once every minute, accepts a URL (GET request), and returns the HTML page fetched through one of these AWS proxies. The reason I'm doing it this way is that my original application that uses the HTML data doesn't allow me to set the user agent, so I need to go through a proxy that allows this.

    This is the fetch line in the Flask application (proxy):

    proxies = {"http": proxy, "https": proxy}
    resp = requests.get(url, headers=headers, proxies=proxies, timeout=5, allow_redirects=True, stream=True)

    It can run fine for hours until suddenly all my AWS instances start dying. I went through the CloudProxy code and identified that the restarts were due to the ALIVE checks failing. So I disabled that code and also added some exception handling in my own application. That solved the instances dying, but not the original issue.

    It turned out that the code line above (requests.get) suddenly starts throwing the following error:

    HTTPConnectionPool(host='X.X.X.X', port=8899): Max retries exceeded with url: http://www.url.com?page=1 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f39cd7d7550>: Failed to establish a new connection: [Errno 113] No route to host')))

    I've masked the IP and url for privacy reasons.

    So basically, my scripts run for hours with 1-2 requests every second until all the requests suddenly start spitting out the exception above, to the point where it bogs down my entire WiFi. The internet on all of my computers almost stops responding. The only solution is to stop the requests, give it a few minutes, and then resume like nothing happened.

    After fixing the Spot instances dying, my second idea was that there's some kind of TCP limit in AWS. So I upgraded my instances from Nano to Micro with no apparent improvement. I considered it being a Docker issue but I only fetch ALIVE IPs once every minute so I can't see how that would be limited in any way. I don't see it being a TinyProxy limit since my 1-2 requests are spread out over 15 different AWS instances.

    Do you know if there is any AWS limit I'm hitting or have you experienced anything similar with CloudProxy?

    opened by secretobserve 3
  • Add Google Cloud provider

    This Pull Request adds Google Cloud provider support for cloudproxy and solves #35.

    Known issues:

    • Proxy removal from cloudproxy-ui does not work when "Remove" button is clicked. (At least didn't work for me)

    Future work:

    • Given that this code is based on the AWS code, it will be probably a good idea at some point in the future to refactor all the providers' code to reduce the amount of duplicated code and logic.
    opened by dusancz 3
  • trouble authenticating proxy / documentation of authentication for AWS

    Expected Behavior

    I followed the docs here https://github.com/claffin/cloudproxy and here https://github.com/claffin/cloudproxy/blob/main/docs/aws.md. I created environment variables, all alphanumeric, for USERNAME and PASSWORD. I created an IAM role as instructed, and I can see the EC2 instances. When using the toy example these are correctly filled in (i.e., instead of being changeme:[email protected] it is user:[email protected]).

    Actual Behavior

    When connecting to the proxy I get the error message: The administrator of this proxy has not configured it to service requests from you.


    This is almost certainly due to my misunderstanding the docs (as I haven't worked with AWS before). Are we meant to set the username and password somewhere in AWS too? I also tried creating a password for the IAM user and using that, but that isn't allowed to be alphanumeric. I'd also be happy to write some documentation for beginners like myself once I get it up and running.

    bug 
    opened by EthanTheMathmo 2
  • Can't change the default zone of GCP Proxies

    docker run -e USERNAME='xxx' -e PASSWORD='xxx' -e GCP_ENABLED=True -e GCP_PROJECT='xxx' -e GCP_SIZE='e2-micro' -e GCP_ZONE='asia-northeast3-a' -e GCP_SERVICE_ACCOUNT_KEY='xxx' -it -p 8000:8000 laffin/cloudproxy:latest

    I tried this command, but it always creates instances in the default US zone; I can't switch it to a different zone. Can you fix this?

    Thank you

    bug 
    opened by LuongPhuHoa 2
  • Cancel spot requests when associated instances are terminated

    When deleting proxies filled by one-time spot requests, the instances are terminated, but the spot request itself is not cancelled, leaving the door open to being filled again in the future when not associated with cloudproxy. This PR deletes the spot request if it exists in the delete_proxy() function. Related to #42, but unclear if applicable to persistent spot requests.

    opened by henryzxu 2
  • Ghost proxies when destroying using spot in AWS

    Expected Behavior

    The proxies are destroyed and remain gone.

    Actual Behavior

    The proxies are destroyed, then some unknown time later are restarted but without the cloudproxy tag since they are started by something other than cloudproxy. They are fully functional proxies though, just missing the tag.

    Steps to Reproduce the Problem

    1. Start cloudproxy
    2. Increase servers to 30, wait.
    3. Decrease servers to 5, wait.

    Specifications

    • Version: 0.5.2

    Solution

    My guess is that when you destroy the instances you also have to remove the spot request somehow, but I don't quite understand why.

    bug 
    opened by xanrag 2
  • Add SSL config for HTTPS

    I think this will partly solve https://github.com/claffin/cloudproxy/issues/3.

    The cert.pem and key.pem can be generated with mkcert

    Tbh I have no idea what I'm doing, any suggestions would be much appreciated. I tested this locally and I have the docker image running on https now.

    Edit: I suppose mkcert is only good for HTTPS in local environments. My goal is to deploy Cloudproxy to an AWS EC2 instance and make it accessible via HTTPS, and to ensure communication between the cloudproxy server and the proxy servers is done via HTTPS as well.

    opened by jcohenho 1
  • Multiple regions & Historical reporting

    Hello

    Thank you very much for this great script; it's simple and can be a replacement for Scrapoxy.

    Is it possible to define multiple regions? For example, I want to have 3-5 regions with DigitalOcean, and CloudProxy would randomly create VMs in them.

    My second question: is there any log file or report that I can use to check how many VMs have been created, their duration, and over what period? That way I can compare my hosting costs, let's say on a weekly basis, and decide which cloud provider is better for me.

    enhancement 
    opened by mrahmadt 1
  • Support multiple client applications sharing single proxy cloud

    One thing I've always missed in Scrapoxy is the ability to support multiple clients. It would be great to see it implemented here.

    In Scrapoxy you could set (min, required, max) scaling, and it works well as long as there is just one client application trying to use the proxy cloud. But as soon as you want to share the same cloud between multiple applications, you run into the problem that they conflict with each other. E.g. when one application has finished crawling, it can't just downscale the cloud, as it's still being used by another application, etc.

    Ideally that requires a centralized logic that manages requests from multiple client applications. It would need to track the most recently requested scaling for each client, and combine them. A very simple logic could be to just take max of all min/required/max parameters across clients and use that as the scaling. That way, the cloud would only downscale when the last client sends the downscale request. You can imagine logic becoming more complex though, e.g. when one client asks to destroy an instance that the other client still uses etc.

    As an extra feature, it should ideally handle stale clients - if a client has not communicated with it for a while, it should disregard its requirements, to avoid leaving dangling instances when client unexpectedly disappears.

    enhancement 
    opened by nirvana-msu 0
  • Not all environment vars passed to container being used

    Report that -e DIGITALOCEAN_MIN_SCALING=0 -e DIGITALOCEAN_MAX_SCALING=0 does not work; it always starts at 2.

    Originally posted by @sblfc in https://github.com/claffin/cloudproxy/issues/21#issuecomment-836554779

    bug 
    opened by claffin 0
  • What providers should be added next?

    At the moment CloudProxy supports AWS and DigitalOcean, which is enough for my own personal use case. I'm keen to hear if there is interest in other providers being supported, please share here and I will prioritise. Otherwise, new features will be prioritised for now.

    enhancement 
    opened by claffin 6
Releases(v0.6.5-beta)
  • v0.6.5-beta(Sep 27, 2022)

  • v0.6.4-beta(Jul 5, 2022)

  • v0.6.3-beta(Feb 13, 2022)

  • v0.6.1-beta(Jul 19, 2021)

  • v0.6.0-beta(Jul 4, 2021)

  • v0.5.2-beta(Jul 1, 2021)

  • v0.5.1-beta(Jul 1, 2021)

  • v0.5.0-beta(Jun 28, 2021)

  • v0.4.0-beta(Jun 15, 2021)

    • #24 Bugfix AWS delete only checking the first instance
    • Change retries to 1 and set a timeout of 10s on fetch_ip
    • Rewrote the check_alive function to be much simpler, the fetch_ip check was not viable at 20+ proxies. It took too long.
    • Updated ip_list not to return IPs slated for destruction
    • Added option to restart AWS proxies, much faster than destroy/create and fetches a new IP. Not supported for DO.
    • Opened up port 22 on proxies for debugging; a future enhancement is to only allow web control. (Use the EC2_INSTANCE_CONNECT filter for the service parameter to get the IP address ranges in the EC2 Instance Connect subset.)
    • Enhanced status messages a bit in the check_alive for AWS
    • Moved check_delete/stop to before provision so it removes and then immediately provisions a new one instead of waiting 20s for the next tick
    • Changed the proxy software to tinyproxy directly on the image instead of using docker. Much faster deployment and less CPU intensive so should work better with t2.nano
    • Updated the settings checks to compare true/false as a string since it seems to be what it is getting, earlier a value of False in the config would read as true.
    • Updated environ get to match the doc (ie SCALING instead of SCALE)
    • Added botocore to requirements.txt, which seemed to be missing.

    @xanrag thank you for all these fixes.

  • v0.3.3-beta(May 10, 2021)

  • v0.3.2-beta(May 7, 2021)

  • v0.3.1-beta(May 6, 2021)

  • v0.3.0-beta(Apr 28, 2021)

  • v0.2.2-beta(Apr 27, 2021)

    • Updated error handling
    • Added retry to check alive
    • Added CORS and delete_queue now set
    • Schedule with providers every 20 seconds now and removed auth from IP
    • Fixed failing tests
  • v0.2.1-beta(Apr 26, 2021)

  • v0.2.0-beta(Apr 22, 2021)

  • v0.1.1-alpha(Apr 19, 2021)

  • v0.1.0-alpha(Apr 19, 2021)
