河南工业大学 (Henan University of Technology) 完美校园 automatic off-campus check-in

Overview

HAUT-checkin

Automatic off-campus check-in for Henan University of Technology.

Because GitHub Actions cron triggers fire with a noticeable delay, using Tencent Cloud Functions directly is recommended.

Features

  • Check-in for multiple users
  • Simple to use: only each account, its password, and a UID for WeChat push notifications are needed
  • Automatically reuses the previous check-in's data for the next one
  • Pushes each member's check-in status to them individually over WeChat
  • When a check-in fails because the 完美校园 server is busy, it is retried automatically until every member has checked in successfully (see the sketch below)
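
The retry behavior can be pictured roughly as follows; try_check_in and the 60-second back-off are illustrative placeholders for this sketch, not the project's actual code:

    import time

    def try_check_in(user):
        # Placeholder for the real check-in request; assumed to return True on success.
        ...

    def check_in_until_done(users):
        # Retry everyone whose check-in failed (e.g. the 完美校园 server was busy)
        # until all members have checked in successfully.
        pending = list(users)
        while pending:
            pending = [u for u in pending if not try_check_in(u)]
            if pending:
                time.sleep(60)  # illustrative back-off before the next round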

Changelog

2021.01.31: Worked around 完美校园's SMS verification requirement for new devices, restructured the project, and dropped GitHub Actions.

Usage

Click here to download the source code to your machine.

Log in to the Tencent Cloud Functions console.

Click Function Service -> Create to create a new cloud function.

Select "Custom function".

Any region will do.

Choose Python 3.6 as the runtime.

For the submission method, choose "Upload a zip package locally".

Click Upload and select the zip file you just downloaded.

Expand the Advanced Configuration submenu.

Set the execution timeout to 900 seconds.

Then click this link to get the QR code.

(QR code image)

Every user must scan this QR code and follow the 新消息服务 (New Message Service) official account, which is used to push check-in status.

After following it, tap 我的 -> 我的UID (My -> My UID) in the official account to get each user's UID.

Under environment variables, fill in the members' information in the following format.

device_seed can be any number.

Avoid giving multiple users the same device_seed.

key     Value
user1   account password device_seed uid
user2   account password device_seed uid
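
For illustration only, the function could read these variables along the following lines; load_users, virtual_device_id, and the seeded derivation below are assumptions for this sketch, not necessarily what index.py actually does:

    import os
    import random

    def load_users():
        # Collect members from the environment variables user1, user2, ...
        users, i = [], 1
        while f'user{i}' in os.environ:
            account, password, device_seed, uid = os.environ[f'user{i}'].split()
            users.append((account, password, device_seed, uid))
            i += 1
        return users

    def virtual_device_id(device_seed):
        # Assumed: the seed deterministically yields a stable fake device ID,
        # which is why two users should not share the same device_seed.
        random.seed(device_seed)
        return ''.join(random.choice('0123456789abcdef') for _ in range(16))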

Expand the Trigger Configuration submenu.

Set the trigger period to "Custom cron".

Enter the cron expression 0 10 0 * * * *

This checks in at 00:10 every night; the second field is the minute and the third is the hour.

You can change the check-in time as you like.
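
Tencent Cloud cron expressions have seven fields (second, minute, hour, day of month, month, day of week, year), so for example:

    0 10 0 * * * *    # every day at 00:10:00 (the value used above)
    0 30 6 * * * *    # every day at 06:30:00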

Click Finish.

Once the steps above are complete,

open the Function Code page,

click SMS.py in the panel on the left,

then click the green triangle in the top-right corner to run the script.

This verifies the emulated new device.

On the command line, enter the username and the device_seed you just put into the environment variables,

then enter the verification code you receive.
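
A session might look roughly like this; the prompt wording is illustrative, not SMS.py's exact output:

    username: 18666666666
    device_seed: 12345
    verification code: 654321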

That completes the setup.

You can click Test at the bottom of the page to check that nothing is wrong.

To add members after deployment, just edit the function configuration and add more environment variables.

The first time you use it after deployment, confirm at check-in time that the script runs correctly; by default, check-in starts at 00:10 each day.

Note: this project targets Henan University of Technology by default; for other schools, modify the code yourself.

Comments
  • Bug in index.py

    Bug in index.py

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    Also, device_seed is never reassigned, so later retries keep using the device seed of the last person from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]      # bug: error[i][2] is device_seed, not uid
    
    opened by tyu-t 1
  • Fix uid handling in the error queue

    Fix uid handling in the error queue

    Line 31 of index.py is wrong; it should be uid = error[i][3].

    Also, device_seed is never reassigned, so later retries keep using the device seed of the last person from the previous loop.

                phone = error[i][0]
                password = error[i][1]
                uid = error[i][2]      # bug: error[i][2] is device_seed, not uid
    

    It should be:

                phone = error[i][0]
                password = error[i][1]
                device_seed = error[i][2]
                uid = error[i][3]
    
    opened by tyu-t 0
  • New field information

    New field information

    The fields have changed. I can't be bothered to fork or open a PR for now, so here are a few field details for anyone who reads this (screenshot omitted):

    {
    ...,
    "updatainfo": [
        {
            "propertyname": "temperature",
            "value": get_updatainfo(last_check_json['updatainfos'], "temperature")
        },
        {
            "propertyname": "symptom",
            "value": get_updatainfo(last_check_json['updatainfos'], "symptom")
        },
        {
            "propertyname": "isFFHasSymptom",
            # "value": get_updatainfo(last_check_json['updatainfos'], "isFFHasSymptom")  # this field can no longer be fetched
            "value": isFFHasSymptomDict[phone]
        },
        {
            "propertyname": "isContactFriendIn14",
            "value": "否"
        },
        # { # 2022-03-24: field no longer valid
        #     "propertyname": "xinqing",
        #     # "value": "是,已接种二针剂型(灭活疫苗,科兴、国药等)满6个月"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xinqing")
        # },
        # { # 2022-03-24: field no longer valid
        #     "propertyname": "xndkrqzj",          # added in the 2021-12-08 update: vaccination date
        #     # "value": "2023-06-30"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "xndkrqzj")
        # },
        # { # 2022-03-24: field no longer valid
        #     "propertyname": "zdyqdq0511",        # added in the 2021-12-08 update: vaccine manufacturer
        #     # "value": "科兴"
        #     "value": get_updatainfo(last_check_json['updatainfos'], "zdyqdq0511")
        # },
        { # 2022-03-24: new field (actually a rename): "Did you take a nucleic acid test yesterday?"
            "propertyname": "xinqing",
            "value": "否"
        }
    ],
    ...
    }

    Copied straight from Python code, so this is not valid JSON. isFFHasSymptomDict[phone] refers to a dict defined elsewhere, something like this:

    isFFHasSymptomDict = {
        '18666666666': '接种部分剂次',            # partial doses received
        '15555555555': '完成接种,待接种加强针',   # fully vaccinated, booster pending
        '17666666666': '未接种或不能接种',        # not vaccinated / cannot be vaccinated
        '15777777777': '已接种加强针'             # booster received
    }
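
    For reference, the get_updatainfo helper used in the snippet above presumably looks up a field's value from the previous check-in; a minimal guess at its shape, assuming updatainfos is a list of {propertyname, value} dicts:

        def get_updatainfo(updatainfos, name):
            # Assumed helper: return the value recorded for `name` in the
            # previous check-in, or an empty string if the field is absent.
            for item in updatainfos:
                if item.get('propertyname') == name:
                    return item.get('value', '')
            return ''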
    

    The field names are as baffling as ever.

    opened by CHxCOOH 1
Releases (v0.1.0)