CreamySoup - a helper script for automated SourceMod plugin update management.

Overview

CreamySoup, or "Creamy SourceMod Updater" (just soup for short), is a helper script for automated SourceMod plugin update management.

This project started as a custom utility for the Creamy Neotokyo servers (hence the name), but open sourcing and generalising it for any kind of SRCDS/SourceMod servers seemed like a good idea, in case it's helpful for someone else too.

FAQ

What it be?

soup is a Python 3 script, a SRCDS SourceMod plugin update helper, intended to be invoked periodically by an external cronjob-like automation system.

It parses soup recipes (remote lists of resources to be kept up to date), compares those resources' contents against the target machine's local files, and re-downloads & re-compiles them if they differ. This automatically keeps such resources in sync with their remote repositories. For a SourceMod plugin, it means any new updates get applied on the next mapchange after a soup update cycle completes.
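That compare-and-replace step can be sketched roughly as follows. This is a simplified illustration, not soup's actual implementation; the function name and the choice of SHA-256 hashing are ours:

```python
import hashlib
import urllib.request
from pathlib import Path


def update_resource(source_url: str, local_path: str) -> bool:
    """Download the remote resource and overwrite the local copy
    if their contents differ. Returns True if an update occurred."""
    with urllib.request.urlopen(source_url) as resp:
        remote_data = resp.read()

    local_file = Path(local_path)
    if local_file.is_file():
        local_hash = hashlib.sha256(local_file.read_bytes()).hexdigest()
        remote_hash = hashlib.sha256(remote_data).hexdigest()
        if local_hash == remote_hash:
            return False  # already in sync, nothing to do

    local_file.write_bytes(remote_data)
    return True  # for a plugin, the caller would now re-compile it
```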

The purpose of soup is to reduce SRCDS sysop workload by making SourceMod plugin updates more automated, while also providing some granularity in terms of which plugins get updated when, with the introduction of maintained/curated recipes. For example, you can have some trusted recipes auto-update their target plugins without any admin intervention, but choose to manually update more fragile or experimental plugins as required (or not at all).

Which recipes to use?

You should always use the default self-updater recipe to keep the soup script itself updated.

If you are operating a Neotokyo SRCDS, this project offers some recommended recipe(s) here. This resource is still a work in progress; more curated lists will be added later!

You can also host your own custom recipes as you like for any SRCDS+SourceMod server setup.

A word of warning

While automation is nice, a malicious actor could use this updater to execute arbitrary code on the target machine. Be sure to only use updater source lists ("recipes") that you trust 100%, or maintain your own fork of such resources where you can review and control the updates.

Installation

It is recommended to install the Python dependencies with pip, using the requirements.txt file.

You should also consider using a virtual environment to isolate any Python dependencies from the rest of the system (although if you go this route, any cron job or similar automation should also run in that venv to have access to those deps).
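A minimal sketch of such a setup, assuming a POSIX shell and run from the directory containing soup.py and requirements.txt (the venv directory name is arbitrary):

```shell
# Create an isolated virtual environment for soup's dependencies
python3 -m venv venv

# Install the dependencies into the venv using its own pip
venv/bin/pip install -r requirements.txt
```

Any cron job should then invoke `venv/bin/python soup.py` so the script runs with those dependencies available.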

Other requirements

  • Python 3

Config

Configuration can be edited in the config.yml file, which lives in the same directory as the Python script itself. See the comments within the config file for more information on each option.

Recipes

The most powerful config option is recipes, which is a list of 0 or more URLs pointing to soup.py "recipes".
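For illustration, the recipes option in config.yml might look something like this (a hypothetical sketch with placeholder URLs; see the comments in the shipped config file for the authoritative format):

```yaml
# 0 or more URLs, each pointing to a recipe JSON document
recipes:
  - https://example.com/recipes/self-updater.json
  - https://example.com/recipes/my-plugins.json
```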

A recipe is defined as a valid JSON document using the following structure:

{
  "section": [
    {
      "key": "value",
      <...>
    },
    <...>
  ],
  <...>
}

where

<...>

indicates 0 or more additional repeated elements of the same type as above.

Note that trailing commas are not allowed in the JSON syntax – it's a good idea to validate the file before pushing any recipe updates online.
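One cheap way to catch such syntax errors before publishing a recipe is to run it through Python's standard json module. A minimal sketch (the helper name is ours, not part of soup):

```python
import json


def validate_recipe(path: str) -> bool:
    """Return True if the file at 'path' parses as valid JSON,
    printing the parse error (with line/column info) otherwise."""
    try:
        with open(path, encoding="utf-8") as f:
            json.load(f)
        return True
    except json.JSONDecodeError as err:
        print(f"Invalid recipe: {err}")
        return False
```

The same check is available from the command line as `python -m json.tool recipe.json`.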

Recipe sections

There are three valid recipe sections: updater, includes, and plugins. Examples follow:

  • updater – A self-updater section for the soup.py script contents. At most one section of this kind should exist across all of the recipes in use.
	"updater": [
		{
			"version": "1.0.0",
			"url": "https://raw.githubusercontent.com/CreamySoup/soup/main/soup.py"
		}
	]
  • includes – SourceMod include files that are required by some of the plugins in the recipes' plugins section. Required file extension: .inc
	"includes": [
		{
			"name": "neotokyo",
			"about": "sourcemod-nt-include - The de facto NT standard include.",
			"source_url": "https://raw.githubusercontent.com/CreamySoup/sourcemod-nt-include/master/scripting/include/neotokyo.inc"
		}
	]
  • plugins – SourceMod plugins that are to be kept up to date with their remote source code repositories. Required file extension: .sp
	"plugins": [
		{
			"name": "nt_srs_limiter",
			"about": "SRS rof limiter timed from time of shot, inspired by Rain's nt_quickswitchlimiter.",
			"source_url": "https://raw.githubusercontent.com/CreamySoup/nt-srs-limiter/master/scripting/nt_srs_limiter.sp"
		}
	]

For full examples of valid recipes, see the self-updater in this repo, and the Neotokyo recipe repository. By default, this repo is configured for the game "NeotokyoSource", and to use these Neotokyo default recipes.

Usage

The script can be run manually with python soup.py, but it is recommended to automate it as a cron job or similar.
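For example, a crontab entry along these lines would run an update cycle hourly (paths are illustrative; adjust to your installation, and point at your venv's interpreter if you use one):

```shell
0 * * * * cd /home/srcds/soup && ./venv/bin/python soup.py >> soup.log 2>&1
```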

For developers

The soup.py Python script should be PEP 8 compliant (tested using pycodestyle).
