♻️ API to run evaluations of the FAIR principles (Findable, Accessible, Interoperable, Reusable) on online resources

Overview

♻️ FAIR enough 🎯


An OpenAPI service where anyone can run evaluations to assess how compliant a resource is with the FAIR principles, given the resource identifier (URI/URL).

Built with FastAPI, Pydantic, and MongoDB.
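For context, here is a minimal, hypothetical sketch of the FastAPI + Pydantic pattern such an API follows; the route path, model fields, and defaults below are illustrative assumptions, not the project's actual endpoints or schema.

# Illustrative sketch only: the route and models are hypothetical, not the
# actual fair-enough API. It shows the FastAPI + Pydantic pattern the backend uses.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title='FAIR evaluation API (sketch)')

class EvaluationRequest(BaseModel):
    resource_uri: str            # identifier (URI/URL) of the resource to evaluate
    collection: str = 'fair'     # hypothetical: which collection of metrics tests to run

class EvaluationResult(BaseModel):
    resource_uri: str
    score: int = 0
    bonus_score: int = 0
    logs: list = []

@app.post('/evaluations', response_model=EvaluationResult)
def run_evaluation(req: EvaluationRequest):
    # The real service runs the assessment scripts and stores results in MongoDB;
    # this stub only echoes the request back.
    return EvaluationResult(resource_uri=req.resource_uri)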

📥️ Requirements

  • Docker

  • Docker Compose

  • Poetry if you need to install new Python packages.

  • Node.js (with npm) if you need to do frontend development.

📝 Add an assessment

All assessments used to run evaluations are Python scripts defined in the same folder: https://github.com/MaastrichtU-IDS/fair-enough/tree/main/backend/app/assessments

Feel free to add new assessments and send a pull request! To create a new assessment:

  • Optionally, create a folder if you want to group multiple assessments under the same category

  • Copy an existing assessment to get started

  • Change the attributes of this assessment to describe it, so that users can easily understand what your assessment does. Provide your email in the author attribute.

  • Add your code in the evaluate() function. Two variables are passed to the assessment (eval and g), plus you can access the assessment object itself (self) to log what the test is trying to do and why it succeeds or fails; a minimal skeleton sketch is provided after this list:

    • eval: the evaluation object, which you can use to pass data between assessments (e.g. the license URL or JSON-LD metadata your assessment retrieves)

    • g: RDFLib graph of the RDF retrieved when searching for the resource metadata

    • self: the assessment object itself, which can be used to perform various logging actions related to the test (don't use print, otherwise the output will not show up in the evaluation results returned by the API):

      self.log('This prints a regular event', '✔️')  # the 2nd arg (a prefix added to the log) is optional
      self.success('This will also increase the score of the assessment by 1')
      self.bonus('This will also increase the bonus score of the assessment by 1')
      self.error('This will log a failure while running the assessment')
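For orientation, here is a minimal sketch of what a new assessment module could look like, based on the description above. The class name, the name/description/author attributes, and the eval.data dictionary are illustrative assumptions; copy an existing assessment from the folder linked above to get the project's actual base class and structure.

# Hypothetical skeleton of an assessment; attribute names and the data-passing
# mechanism are assumptions for illustration, not the project's exact API.
# The log(), success(), bonus() and error() helpers come from the base class
# used by the existing assessments (not included in this sketch).
from rdflib import Graph, URIRef

DCTERMS_LICENSE = URIRef('http://purl.org/dc/terms/license')

class CheckLicenseAssessment:
    name = 'check-license'                  # short identifier of the assessment
    description = 'Check if a license is defined in the resource metadata'
    author = 'your.email@example.org'       # provide your email here

    def evaluate(self, eval, g: Graph):
        # g: RDFLib graph of the RDF retrieved for the resource metadata
        # eval: evaluation object used to pass data between assessments
        licenses = list(g.objects(None, DCTERMS_LICENSE))
        if licenses:
            self.success('Found a license in the metadata: ' + str(licenses[0]))
            # Hypothetical: share the license URL with later assessments
            eval.data['license'] = str(licenses[0])
        else:
            self.error('No dcterms:license found in the resource metadata')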

Most of the Python code for the API is in https://github.com/MaastrichtU-IDS/fair-enough/tree/main/backend/app

🐳 Backend local development

Start the stack for development locally with Docker Compose from the root folder of this repository:

docker-compose up -d

Now you can open your browser and interact with the running services.

To check the logs of a specific service, run:

docker-compose logs backend

To delete the volume and reset the database, run:

docker-compose down
docker volume rm fair-enough_mongodb-data

You can also run this script to reset the database and restart the Docker Compose stack:

./reset_local_db.sh

If you need to completely reset the Python cache:

docker-compose down
sudo rm -rf **/__pycache__
docker-compose build --no-cache

General workflow

By default, the dependencies are managed with Poetry; install it first if you have not already.

From ./backend/ you can install all the dependencies with:

poetry install

To add new dependencies, run:

poetry add my-package

If you install a new package, you will need to stop the running Docker Compose stack, then restart it to rebuild the Docker image:

docker-compose up --build --force-recreate

You can start a shell session with the new environment with:

poetry shell

Next, open your editor at ./backend/ (instead of the project root: ./), so that you see an ./app/ directory with your code inside. That way, your editor will be able to find all the imports, etc. Make sure your editor uses the environment you just created with Poetry.

Docker Compose Override

During development, you can change Docker Compose settings that will only affect the local development environment in the file docker-compose.override.yml. After changing it, restart the stack to apply the changes:

docker-compose up -d

To get a bash session inside the running backend container, you can exec into it:

docker-compose exec backend bash

Backend tests

Test running stack

If your stack is already up and you just want to run the tests, you can use:

docker-compose exec backend /app/tests-start.sh

That /app/tests-start.sh script just calls pytest after making sure that the rest of the stack is running. If you need to pass extra arguments to pytest, you can pass them to that command and they will be forwarded.

For example, to stop on first error:

docker-compose exec backend bash /app/tests-start.sh -x

Test new stack

To test the backend, run:

DOMAIN=backend sh ./scripts/test.sh

The file ./scripts/test.sh has the commands to generate a testing docker-stack.yml file, start the stack and test it.

Local tests

Start the stack with this command:

DOMAIN=backend sh ./scripts/test-local.sh

The ./backend directory is mounted as a "host volume" inside the Docker container (set in the file docker-compose.dev.volumes.yml), so you can rerun the tests on the live code:

docker-compose exec backend /app/tests-start.sh

Test Coverage

Because the test scripts forward arguments to pytest, you can enable test coverage HTML report generation by passing --cov-report=html.

To run the local tests with coverage HTML reports:

DOMAIN=backend sh ./scripts/test-local.sh --cov-report=html

To run the tests in a running stack with coverage HTML reports:

docker-compose exec backend bash /app/tests-start.sh --cov-report=html

🖥️ Frontend development

  • Enter the frontend directory, install the NPM packages and start the live server using the npm scripts:
cd frontend
npm install
npm run serve

Then open your browser at http://localhost:8080

If you have Vue CLI installed, you can also run vue ui to control, configure, serve, and analyze your application using a nice local web user interface.

🚀 Deployment

Traefik network

This stack expects the public Traefik network to be named traefik-public, just as in the tutorials in DockerSwarm.rocks.

If you need to use a different Traefik public network name, update it in the docker-compose.yml files, in the section:

networks:
  traefik-public:
    external: true

Change traefik-public to the name of the Traefik network you use, and then update it in the .env file:

TRAEFIK_PUBLIC_NETWORK=traefik-public
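If the Traefik public network does not exist yet on your Docker host, you can create it first. Assuming a Docker Swarm setup as in the DockerSwarm.rocks tutorials referenced above, that would be an overlay network (drop the driver flag for a plain local network):

docker network create --driver=overlay traefik-public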

Docker Compose files and env vars

There is a main docker-compose.yml file with all the configurations that apply to the whole stack; it is used automatically by docker-compose.

There is also a docker-compose.override.yml with overrides for development, for example to mount the source code as a volume. It is applied automatically by docker-compose on top of docker-compose.yml.

These Docker Compose files use the .env file containing configurations to be injected as environment variables in the containers.

They also use some additional configurations taken from environment variables set in the scripts before calling the docker-compose command.

It is all designed to support several "stages", like development, building, testing, and deployment, and to allow deployment to different environments like staging and production (you can add more environments very easily).

They are designed to minimize repetition of code and configuration, so that if you need to change something, you only have to change it in one place. That's why the files use environment variables that get auto-expanded: if, for example, you want to use a different domain, you can call the docker-compose command with a different DOMAIN environment variable instead of having to change the domain in several places inside the Docker Compose files.
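For example, assuming the Compose files expand a ${DOMAIN} variable in their Traefik host rules (as in the template this project is bootstrapped from), the same stack could be started for a different, hypothetical domain with:

DOMAIN=fair-enough-staging.example.org docker-compose -f docker-compose.yml up -d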

Also, if you want to have another deployment environment, say preprod, you just have to change environment variables, but you can keep using the same Docker Compose files.

🔗 Links

Livestream logs:

Project bootstrapped with https://github.com/tiangolo/full-stack-fastapi-postgresql

Comments
  • not detecting multiple license terms

    Evaluation on Uniprot entry: https://fair-enough.semanticscience.org/evaluation/619df87cc671558d15c297d4

    fails to recognize dcterms:license in metadata

    bug 
    opened by micheldumontier 2
  • Better querying of the evaluations results

    Users should be able to query the evaluation results to answer questions like "get me all evaluations that have a score of 1 for the metric test i1-structured metadata and a score of 0 for i1-license"

    Either:

    • add functions to graphql, cf. https://dgraph.io/docs/graphql/schema/search/
    • Or load the whole DB in a triplestore for SPARQL querying
    opened by vemonet 1
  • Sort Metrics tests and Collections in categories

    We might want to have a multi-dimensional categorization for collections and metrics tests, e.g.

    domains = ['biomedical', 'geography']
    resource_type = ['publication', 'ontology', 'tabular data']
    

    That will help users to know which collection they should use when evaluating specific types of data.

    And help them to choose the right metrics tests when composing a collection.

    @rcelebi

    opened by vemonet 1
  • issue in starting containers

    Checked out with GitHub Desktop, using an Ubuntu shell. Needed to chmod 755 the bash scripts:

    chmod 755 init_metric_tests.sh
    chmod 755 backend/prestart.sh
    

    and also clean up their Windows line endings:

    sed -i -e 's/\r$//' init_metric_tests.sh
    sed -i -e 's/\r$//' backend/prestart.sh
    
    
    opened by micheldumontier 0
  • Local development support for HTTPS + ORCID

    ORCID authentication requires HTTPS redirects. Out of the box, the application does not respond to HTTPS addresses on localhost. How to configure this?

    help wanted 
    opened by micheldumontier 1
  • Create a Web Extension to evaluate researchers publications from aggregators

    This extension could be called by a researcher when on their Pure or Google Scholar page. It would automatically extract the URI of each of the researcher's publications and run a FAIR evaluation for each publication.

    Generate a report on the FAIRness of all the researcher's publications.

    We should use the new WebExtension standard to build an extension that will work on all browsers (at least Firefox and Chrome; Safari is dead anyway).

    @pedrohserrano

    good first issue help wanted 
    opened by vemonet 0
  • create "resource-centric" view

    For a given resource, list all the tests performed and the scores, from newest to oldest. Make an infographic to visualise the scores across time, and add a button to "test now".

    good first issue help wanted 
    opened by micheldumontier 3
  • Map base and bonus scores to F,A,I,R

    It's difficult to understand the breakdown of the base and bonus scores for each of F, A, I, R. I might suggest some kind of interactive chart that i) shows where the resource is passing/failing, and ii) is clickable to jump to the specific assessment section.

    enhancement 
    opened by micheldumontier 2
  • enable text search on all metadata

    I'd like to be able to search all the evaluations using a global search input box. For instance, to search for "sea ice" and get matching records.

    enhancement 
    opened by micheldumontier 4
Releases
  • 0.0.1 (Jan 27, 2022)

    FAIR enough release before using FAIRMetrics specifications and tests through APIs

    Using Celery workers and RabbitMQ to execute the tests

Owner
Maastricht University IDS (Institute of Data Science at Maastricht University)