Henan University of Technology (HAUT) Wanmei Campus (完美校园) automatic off-campus check-in

Overview

HAUT-checkin

Automatic off-campus check-in for Henan University of Technology.
Because GitHub Actions schedules run with noticeable delay, using Tencent Cloud Functions (SCF) directly is recommended.

Features

  • Check-in for multiple users
  • Simple to use: only each member's account, password, and the UID used for WeChat push notifications are needed
  • Automatically fetches the previous check-in record and reuses it for the new check-in
  • Pushes each member's check-in status to them individually via WeChat
  • If a check-in fails because the Wanmei Campus servers are busy, it automatically retries until every member has checked in successfully (see the sketch after this list)
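
The retry behavior can be pictured as an error queue that is re-run until it drains. Below is a minimal sketch assuming members are (phone, password, device_seed, uid) tuples, matching the error queue used in index.py; try_check_in is a hypothetical placeholder, not the project's actual request code:

    import time

    def try_check_in(phone, password, device_seed, uid):
        """Hypothetical placeholder for one Wanmei Campus check-in request."""
        return True  # the real implementation performs the HTTP check-in

    def check_in_all(users):
        """Retry failed members until everyone has checked in successfully."""
        error = list(users)  # queue of (phone, password, device_seed, uid) tuples
        while error:
            retry = []
            for phone, password, device_seed, uid in error:
                if not try_check_in(phone, password, device_seed, uid):
                    # server busy or request failed: keep this member for the next round
                    retry.append((phone, password, device_seed, uid))
            error = retry
            if error:
                time.sleep(60)  # brief pause before retrying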

Changelog

2021-01-31: Handled the verification code that Wanmei Campus now requires for new devices, refactored the project, and dropped GitHub Actions.

Usage

Click here to download the source code to your machine.

Log in to the Tencent Cloud Functions (SCF) console.

Click Function Service -> Create to create a new cloud function.

Select Custom Function.

Any region will do.

Choose Python 3.6 as the runtime.

For the submission method, choose uploading a local zip package.

Click Upload and select the zip file you just downloaded.

Expand the Advanced Configuration submenu.

Set the execution timeout to 900 seconds.

Then click this link to get the QR code:

[QR code]

Every user needs to scan this QR code and follow the 新消息服务 (New Message Service) WeChat official account, which is used to push check-in status.

After following it, tap 我的 (Me) -> 我的UID (My UID) inside the official account to get each user's UID.

Under environment variables, fill in the member information in the following format.

device_seed can be any number.

It is best not to give multiple users the same device_seed.

Key    Value
user1  account password device_seed uid
user2  account password device_seed uid
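
A minimal sketch of how the function can collect these members at runtime, assuming the space-separated format above (load_users is an illustrative helper, not the project's exact code):

    import os

    def load_users():
        """Read user1, user2, ... environment variables into member tuples."""
        users = []
        i = 1
        while True:
            raw = os.environ.get(f"user{i}")
            if raw is None:
                break
            # each value holds: account password device_seed uid (space-separated)
            phone, password, device_seed, uid = raw.split()
            users.append((phone, password, device_seed, uid))
            i += 1
        return users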

Expand the Trigger Configuration submenu.

For the trigger period, choose Custom Trigger Period.

Fill in the cron expression 0 10 0 * * * *.

This checks in at 00:10 every night; the second field is the minute and the third field is the hour.

You can adjust the check-in time yourself.
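
For example, assuming the usual seven-field SCF cron layout (second, minute, hour, day, month, weekday, year), a daily 06:30 check-in would be:

    0 30 6 * * * *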

Click Finish.

After completing the steps above:

Go to the Function Code page.

Click SMS.py on the left.

Then click the green triangle in the upper-right corner to run this script.

This verifies the virtual new device.

At the command line, enter the username and then the device_seed you just put in the environment variables.

Then enter the verification code you receive.
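
Why the seed matters: the script fabricates a virtual device, and a fixed device_seed keeps that device identical across runs, so this verification only has to be done once per user. A sketch of the general idea (hypothetical helper; the project's actual derivation may differ):

    import random

    def device_id_from_seed(device_seed):
        """Derive a stable fake device ID from device_seed (illustrative only)."""
        rng = random.Random(device_seed)  # same seed -> same "device" every run
        return ''.join(rng.choice('0123456789abcdef') for _ in range(32))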

That completes the setup.

You can click Test at the bottom of the page to check for errors.

To add members after deployment, simply edit the function configuration and add more environment variables.

The first time after a successful deployment, please confirm at check-in time that the script runs correctly; by default, check-in starts at 00:10 every day.

Note: this project targets Henan University of Technology by default; for other schools, modify the code yourself.

Comments
  • Bug in index.py

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    Also, device_seed is never assigned there, so later retry rounds keep using the device code of the last member from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]       # bug: index 2 is device_seed, not uid
    
    opened by tyu-t
  • Fix the uid handling bug in the error queue

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    Also, device_seed is never assigned there, so later retry rounds keep using the device code of the last member from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]       # bug: index 2 is device_seed, not uid
    

    It should be:

                phone = error[i][0]
                password = error[i][1]
                device_seed = error[i][2]
                uid = error[i][3]
    
    opened by tyu-t
  • New field information

    The fields have changed. I can't be bothered to fork this or open a PR for now, so here is the field information for anyone who reads this:

    {
    ...,
    "updatainfo": [
        {
            "propertyname": "temperature",
            "value": get_updatainfo(last_check_json['updatainfos'], "temperature")
        },
        {
            "propertyname": "symptom",
            "value": get_updatainfo(last_check_json['updatainfos'], "symptom")
        },
        {
            "propertyname": "isFFHasSymptom",
            # "value": get_updatainfo(last_check_json['updatainfos'], "isFFHasSymptom")  # this field can no longer be fetched
            "value": isFFHasSymptomDict[phone]
        },
        {
            "propertyname": "isContactFriendIn14",
            "value": "否"
        },
        # { # field stopped working 2022-03-24
        #     "propertyname": "xinqing",
        #     # "value": "是,已接种二针剂型(灭活疫苗,科兴、国药等)满6个月"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xinqing")
        # },
        # { # field stopped working 2022-03-24
        #     "propertyname": "xndkrqzj",          # added in the 2021-12-08 update: vaccination date
        #     # "value": "2023-06-30"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xndkrqzj")
        # },
        # { # field stopped working 2022-03-24
        #     "propertyname": "zdyqdq0511",        # added in the 2021-12-08 update: vaccine manufacturer
        #     # "value": "科兴"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "zdyqdq0511")
        # },
        { # field added 2022-03-24 (actually a rename): "您昨日是否进行核酸检测" ("Did you take a nucleic acid test yesterday?")
            "propertyname": "xinqing",
            "value": "否"
        }
    ],
    ...
    }
    

    This was copied straight from Python code, so it is not valid JSON. isFFHasSymptomDict[phone] refers to a dict defined elsewhere, something like this:

    isFFHasSymptomDict = {
        '18666666666': '接种部分剂次',            # partially vaccinated
        '15555555555': '完成接种,待接种加强针',   # fully vaccinated, awaiting booster
        '17666666666': '未接种或不能接种',        # not vaccinated / unable to be vaccinated
        '15777777777': '已接种加强针'             # booster received
    }
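
    For context, get_updatainfo in the snippet above presumably just looks a property up by name in the previous check-in record; a minimal sketch under that assumption:

        def get_updatainfo(updatainfos, propertyname):
            """Return the value previously submitted for propertyname, if present."""
            for item in updatainfos:
                if item.get('propertyname') == propertyname:
                    return item.get('value')
            return None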
    

    As always, the field names leave you completely baffled :(

    opened by CHxCOOH