京东云无线宝 points push, with support for viewing points usage across multiple devices

Overview

JDRouterPush

Project Introduction

This project calls the 京东云无线宝 API to push your points earnings on a daily schedule, helping you keep track of the key information.

Changelog

2021-03-02:

  1. Query the bound JD account
  2. Improved notification layout
  3. Script update check
  4. Support for Server酱 Turbo

2021-02-25:

  1. Multi-device query
  2. Query today's earnings, total earnings, and available earnings
  3. Device online days
  4. View the last seven points activity entries

Usage

Actions method

  1. Fork this repository
  2. Obtain the 京东云无线宝 wskey
  • Only Android packet capture is demonstrated here (many capture tools will work; HttpCanary is used in this example)

  • Open HttpCanary and tap the button in the lower-right corner to start capturing

  • Then open the 京东云无线宝 app and tap 积分管理 (points management)

  • Go back to HttpCanary, use the search in the upper-right corner, and search for wskey

  • Open any of the matching requests, find wskey in the request, and copy its value

  3. In the repository, go to Settings -> Secrets -> New repository secret and add the following 2 secrets. For the sckey used by the Server酱 WeChat push, see the subscription notification section below; a sketch of how the script consumes these values follows after the next step.

Name            Value
WSKEY           Obtained from 京东云无线宝 (step 2 above)
SERVERPUSHKEY   The sckey generated by Server酱

  4. Enable Actions and trigger daily automatic execution

GitHub Actions is disabled by default on a forked repository; enable Actions manually and run the workflow once to verify that everything works.
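Inside the workflow, the two secrets are exposed to the push script as environment variables. A minimal sketch of how the script side might pick them up (the names match the secrets above; everything else is an assumption rather than the repository's actual code):

  import os
  import sys

  # Both values are injected by the Actions workflow from the repository secrets.
  wskey = os.environ.get("WSKEY")                    # 京东云无线宝 credential from step 2
  server_push_key = os.environ.get("SERVERPUSHKEY")  # Server酱 Turbo SendKey

  # Fail early with a clear message if either secret was not configured.
  if not wskey or not server_push_key:
      sys.exit("WSKEY and SERVERPUSHKEY must both be set as repository secrets")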

Illustration

To change the time the daily task runs, edit .github/workflows/JDPush.yml and find the following configuration at around line 7.

  schedule:
    - cron: '30 22 * * *'
    # Cron expression. Actions runs on UTC, which is 8 hours behind Beijing time,
    # so '30 22 * * *' pushes at 06:30 Beijing time.
    # Example: to run at 22:30 every evening (Beijing time), use '30 14 * * *'

If you receive an error email from GitHub Actions, check whether the WSKEY has expired; logging out of or logging back into 京东云无线宝 will invalidate the WSKEY.

Subscription Notifications

Subscribe to execution results

The Turbo version currently supports the following message channels:

  • WeCom (企业微信) application messages
  • Android
  • Bark iOS
  • WeCom group bot
  • DingTalk group bot
  • Feishu group bot
  • Custom WeChat test account
  • 方糖服务号 (FangTang service account)

  1. Go to sct.ftqq.com, click log in, and create an account.
  2. Click SendKey to generate a key, then add it to your GitHub Secrets as SERVERPUSHKEY (a push sketch using this key appears at the end of this section).
  3. Configure the message channel: select 方糖服务号 and save.
  4. Push result example

The old push channel sc.ftqq.com will be taken offline at the end of April; go to sct.ftqq.com to generate a Turbo key. Note: after applying for a Turbo key you must configure a message channel; if you want to keep the previous WeChat push behavior, simply choose 方糖服务号.
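For reference, here is a minimal sketch of sending a message through the Server酱 Turbo API using the SendKey stored in SERVERPUSHKEY; the title and body are placeholders, not the project's actual message format:

  import os

  import requests


  def server_push(title: str, desp: str) -> None:
      """Push a message via Server酱 Turbo; desp is the Markdown body shown in WeChat."""
      send_key = os.environ["SERVERPUSHKEY"]  # the SendKey generated at sct.ftqq.com
      url = f"https://sctapi.ftqq.com/{send_key}.send"
      resp = requests.post(url, data={"title": title, "desp": desp}, timeout=10)
      resp.raise_for_status()


  if __name__ == "__main__":
      server_push("京东云无线宝积分", "Today's points: ...")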
