A way to scrape sports streams for use with Jellyfin.

Overview

Sportyfin

Description

Stream sports events straight from your Jellyfin server. Sportyfin lets users scrape for live-streamed events and watch them directly in Jellyfin. Sportyfin also generates the metadata Jellyfin uses to provide a great viewing experience.

Currently, Sportyfin supports NBA, NHL, NFL, and Premier League livestreams, but we plan to support other leagues in the future.

Installation

To install Sportyfin alongside your running instance of Jellyfin, follow the steps below:

pip install sportyfin --no-binary=sportyfin

To uninstall the program:

pip uninstall sportyfin

Usage

We highly recommend running Sportyfin in combination with tmux or something similar, so the scraper keeps running after you disconnect.
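
For instance, a detached tmux session keeps Sportyfin alive after you log out (a sketch; the session name and the flags passed here are arbitrary):

# start a detached session named "sportyfin"
tmux new-session -d -s sportyfin 'python3 -m sportyfin -a -vv'

# reattach later to check on it
tmux attach -t sportyfin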

Example usage:

python3 -m sportyfin <arguments>

Start the sportyfin server as follows:

# -nba finds streams for the NBA
# -s allows sportyfin to use Selenium when scraping
# -v enables verbose mode
# -o sets the output location

python3 -m sportyfin -nba -s -v -o ~/Desktop
# -vv specifies silent mode (no output will be produced)
# -a specifies all leagues supported by sportyfin

python3 -m sportyfin -a -vv
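
If you want the stream lists refreshed on a schedule, a crontab entry along these lines could work (a sketch; the interval and output path are examples, not Sportyfin defaults):

# refresh all supported leagues every 6 hours, writing to /media/sportyfin
0 */6 * * * /usr/bin/python3 -m sportyfin -a -vv -o /media/sportyfin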

Once you have run the program, make sure to link to the generated .m3u files in the Jellyfin dashboard:

Dashboard > Live TV > Tuner Devices (+) > Tuner Type (M3U Tuner) > File or URL (enter path)

Additionally, make sure to change the "Refresh Guide" setting under:

Dashboard > Scheduled Tasks > Live TV > Refresh Guide > Task Triggers

Once the path has been defined and the settings have been updated, you can check out your streams under:

Home > Live TV > Channels (at the top)

Documentation

Find all the documentation here.

Future Improvement

Add server functionality, i.e., the ability to access streams (.m3u files) from an HTTP server.
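
Until then, one stopgap is to serve the output directory with Python's built-in HTTP server and point Jellyfin's M3U tuner at the resulting URL (a sketch; ~/Desktop stands in for whatever -o path you chose):

# serve the directory sportyfin writes to on port 8000
python3 -m http.server 8000 --directory ~/Desktop

# Jellyfin's "File or URL" field can then point at http://<server-ip>:8000/<playlist>.m3u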


Comments
  • Very interesting! How to expand for different European tournaments?

    How can we expand to other football tournaments like the Champions League, Europa League, Serie A, La Liga, Bundesliga, and Ligue 1?

    It seems to me that the page you get the streams from also carries the other tournaments, so it should be pretty easy, right?

    opened by rdibella 1
  • Error occurred bypassing bitly

    I have this error when trying to generate the stream, and then another (HTTPSConnectionPool) that crashes the Docker container. I understand that this is no longer maintained, but I thought I would give it a shot.

    opened by neek0la 0
  • "list index out of range" when searching for stream links

    Sometimes when searching for stream links, the program completely fails and says "list index out of range". I'm using the Python version and I've noticed this happens quite often; I assume it happens when it cannot find any streams.

    opened by bbdcmf 1
  • Specify file "location" in addition to "destination"

    Hello!

    Might be a bit of an edge case/weird setup, and if it's not workable -- no worries!

    All of my media is stored on a Linux server, but my Jellyfin instance runs on a Windows machine (as it has more computing oomph). Ideally, I would like to run Sportyfin on my Linux box as a cron job along with my other self-hosted projects. I have my Linux box mounted as a network drive in Windows, so Jellyfin can access the files on the Linux server.

    The issue I run into is that the xmltv file indicates the images are stored where the script put the files (on Linux), which isn't the same file path in Windows.

    What would be ideal is if I could specify to the script where the files can be "located" by Jellyfin, which is different from where the script's output/destination is specified.

    E.g., output script files to /media/sportyfin while the created xmltv files say they are actually located at Z:\sportyfin:

    python3 -m sportyfin -a -s -v -o /media/sportyfin -l z:\sportyfin

    Cheers, and thanks for your work on this project!

    opened by Albuca 2
Releases: 1.0.7-stable
Owner: axelmierczuk