京东云无线宝 (JD Cloud Wireless Treasure) points push: view points usage across multiple devices

Overview

JDRouterPush

Project Introduction

This project calls the 京东云无线宝 API to push your points earnings on a daily schedule, making it easier to keep track of the key information.

Changelog

2021-03-02:

  1. Query the bound JD account
  2. Improved notification layout
  3. Script update check
  4. Support for Server酱 Turbo

2021-02-25:

  1. Multi-device query
  2. Query today's earnings, total earnings, and available earnings
  3. Device online days
  4. View the latest seven points activity records

Usage

GitHub Actions Method

  1. Fork this project
  2. Obtain the 京东云无线宝 wskey
  • Only Android packet capture is demonstrated here (there are many capture tools; HttpCanary is used for this walkthrough)

  • Open HttpCanary and tap the button in the lower-right corner to start capturing

  • Then open the 京东云无线宝 app and tap 积分管理 (points management)

  • Return to HttpCanary, open the search in the upper-right corner, and search for wskey

  • Open any of the matching requests, find wskey in the request, and copy its value

  3. In the project, go to Settings -> Secrets -> New secret and add the following 2 secrets (for the Server酱 WeChat push sckey, see the Subscription Notifications section below; a sketch of how the script consumes these secrets follows these steps):
Name          Value
WSKEY         Obtained from 京东云无线宝
SERVERPUSHKEY The Server酱 push sckey (SendKey)

  4. Enable Actions and trigger the daily automatic run

GitHub Actions is disabled by default; enable Actions manually and run the workflow once to verify that it works.
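As a rough illustration of how the two secrets end up being used, here is a minimal Python sketch. It assumes the workflow maps the repository Secrets to environment variables named WSKEY and SERVERPUSHKEY; the points URL is a placeholder, since the real 京东云无线宝 endpoint comes from your own packet capture, and the function name is illustrative only.

```python
import os
import requests

# Assumption: the workflow's `env:` block maps the two GitHub Secrets to
# environment variables with the same names.
WSKEY = os.environ["WSKEY"]                  # captured from 京东云无线宝 with HttpCanary
SERVERPUSHKEY = os.environ["SERVERPUSHKEY"]  # Server酱 Turbo SendKey

def fetch_points_summary() -> dict:
    """Query today's points with the wskey credential.

    The URL below is a placeholder (example.invalid); substitute the real
    endpoint seen in your captured request.
    """
    resp = requests.get(
        "https://example.invalid/router/points",  # placeholder, not the real API
        headers={"wskey": WSKEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

If either variable is missing, os.environ[...] raises a KeyError immediately, which is a quick way to confirm the Secrets were wired into the workflow correctly.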

Illustration (screenshot omitted)

To change the time of the daily run, edit .github/workflows/JDPush.yml and find the following configuration around line 7.

  schedule:
    - cron: '30 22 * * *'
    # Cron expression. Actions runs on UTC, so subtract 8 hours from Beijing time; this fires at 06:30 Beijing time.
    # Example: to run at 22:30 every night, use '30 14 * * *'

If you receive an error email from GitHub Actions, check whether the WSKEY has expired; logging out of or re-logging into the 京东云无线宝 app invalidates the WSKEY.

Subscription Notifications

Subscribe to the execution result

The Turbo version currently supports the following message channels:

  • 企业微信 (WeCom) application messages
  • Android
  • Bark iOS
  • 企业微信 (WeCom) group bot
  • 钉钉 (DingTalk) group bot
  • 飞书 (Feishu) group bot
  • Custom WeChat test account
  • 方糖服务号 (FangTang service account)
  1. Go to sct.ftqq.com, click log in, and create an account.
  2. Click SendKey to generate a key, then add it to the GitHub Secrets as SERVERPUSHKEY (a minimal push sketch appears at the end of this section).
  3. Configure the message channel: choose 方糖服务号 and save.
  4. Push result example

The old push channel sc.ftqq.com will be taken offline at the end of April; go to sct.ftqq.com to generate a Turbo key. Note: after applying for a Turbo key, remember to configure the message channel; to keep the previous WeChat push behavior, simply choose 方糖服务号.
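For reference, here is a minimal sketch of pushing the execution result through Server酱 Turbo, assuming the standard Turbo endpoint https://sctapi.ftqq.com/<SendKey>.send and that the SendKey from step 2 is exposed as the SERVERPUSHKEY environment variable. The sample summary text is illustrative.

```python
import os
import requests

def serverchan_push(title: str, desp: str) -> None:
    """Push a message via Server酱 Turbo.

    SERVERPUSHKEY is the SendKey generated at sct.ftqq.com (step 2 above);
    `title` is the short headline and `desp` is the Markdown body.
    """
    send_key = os.environ["SERVERPUSHKEY"]
    resp = requests.post(
        f"https://sctapi.ftqq.com/{send_key}.send",
        data={"title": title, "desp": desp},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Illustrative daily summary; the real script fills these in from the API.
    serverchan_push(
        "京东云无线宝积分",
        "- Today's earnings: 100\n- Total earnings: 12345\n- Available: 6789",
    )
```

The same call works regardless of which message channel is configured on sct.ftqq.com, since the channel selection is applied on the Server酱 side.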
