Generate LookML views from dbt models

Overview

dbt2looker

Use dbt2looker to generate Looker view files automatically from dbt models.

Features

  • Column descriptions synced to Looker
  • A dimension for each column in the dbt model
  • Dimension groups for datetime/timestamp/date columns
  • Measures defined through dbt column metadata (see below)
  • Automatic mapping of warehouse column types to Looker types
  • Warehouses: BigQuery, Snowflake, Redshift (Postgres to come)

[demo GIF]

Quickstart

Run dbt2looker in the root of your dbt project after generating dbt docs.

Generate Looker view files for all models:

dbt docs generate
dbt2looker

Generate Looker view files for all models tagged prod:

dbt2looker --tag prod
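
By default, view files are generated into ./lookml/views. If you want them elsewhere, the --output-dir flag (added in v0.7.1; see the release notes below) changes the output directory:

dbt2looker --tag prod --output-dir /path/to/lookml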

Install

Install from PyPI repository

Install from PyPI into a fresh virtual environment.

# Create virtual env
python3.7 -m venv dbt2looker-venv
source dbt2looker-venv/bin/activate

# Install
pip install dbt2looker

# Run
dbt2looker

Build from source

Requires Poetry and Python >= 3.7.

# Install
poetry install

# Run
poetry run dbt2looker

Defining measures

You can define Looker measures in your dbt schema.yml files. For example:

models:
  - name: pages
    columns:
      - name: url
        description: "Page url"
      - name: event_id
        description: "unique event id for page view"
        meta:
          measures:
            page_views:
              type: count
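
For the schema above, the generated pages view should include a page_views measure along these lines (a sketch, not verbatim output; per the v0.8.2 release notes below, a measure without its own description falls back to the column description):

measure: page_views {
  type: count
  description: "unique event id for page view"
}
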
Comments
  • Column Type None Error - Fields Not Converting To Dimensions

    When running dbt2looker --tag marts on my mart models, I receive dozens of warnings about None type conversions.

    20:54:28 WARNING Column type None not supported for conversion from snowflake to looker. No dimension will be created.

    Here is an example of the schema.yml file.

    [screenshot: schema.yml]

    The interesting thing is that it correctly recognizes the doc that corresponds to the model. The explore within the model file is correct and has the correct documentation.

    Not sure if I can be of any more help but let me know if there is anything!

    bug 
    opened by sisu-callum 19
  • ValueError: Failed to parse dbt manifest.json

    Hey! I'm trying to run this package and hitting errors right after installation. I pip installed dbt2looker and ran the following in the root of my dbt project.

    dbt docs generate
    dbt2looker
    

    This gives me the following error:

    Traceback (most recent call last):
      File "/Users/josh/.pyenv/versions/3.10.0/bin/dbt2looker", line 8, in <module>
        sys.exit(run())
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 108, in run
        raw_manifest = get_manifest(prefix=args.target_dir)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 33, in get_manifest
        parser.validate_manifest(raw_manifest)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/parser.py", line 20, in validate_manifest
        raise ValueError("Failed to parse dbt manifest.json")
    ValueError: Failed to parse dbt manifest.json

    This is preceded by a whole mess of error messages like these:

    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['analysis']
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['test']

    Any idea what might be going wrong here? Happy to provide more detail. Thank you!

    opened by jdavid459 6
  • DBT version 1.0

    Hi,

    Does this library support dbt version 1.0 and onward? I can't get it to run at all. There are a lot of errors when checking the schema of the manifest.json file.

    / Andrea

    opened by AndreasTA-AW 3
  • Multiple manifest.json/catalog.json/dbt_project.yml files found in path ./

    When running

    dbt2looker --tag test
    

    I get

    $ dbt2looker --tag test
    19:31:20 WARNING Multiple manifest.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple catalog.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple dbt_project.yml files found in path ./ this can lead to unexpected behaviour
    19:31:20 INFO   Generated 0 lookml views in ./lookml/views
    19:31:20 INFO   Generated 1 lookml model in ./lookml
    19:31:20 INFO   Success
    

    and no lookml files are generated.

    I assume this is because I have multiple dbt packages installed? Is there a way to get around this? Otherwise, a feature request would be the ability to specify which files should be used - perhaps in a separate dbt2looker.yml settings file.
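
    As a possible workaround (untested here, and an assumption based on the --target-dir and --project-dir flags documented in the v0.8.0 release notes below), you could point dbt2looker directly at your own project's artifacts rather than letting it search from ./:

    # Hypothetical usage: read only your project's target/ directory and
    # dbt_project.yml, skipping copies vendored under installed dbt packages.
    dbt2looker --tag test --target-dir ./target --project-dir .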

    enhancement 
    opened by arniwesth 3
  • Support Bigquery BIGNUMERIC datatype

    Previously, dbt2looker would not create a dimension for a field with data type BIGNUMERIC, since Looker didn't support converting BIGNUMERIC. So when we ran dbt2looker in the CLI there was a warning: WARNING Column type BIGNUMERIC not supported for conversion from bigquery to looker. No dimension will be created. However, as of November 2021, Looker officially supports BigQuery BIGNUMERIC (link). Please help to add this. Thank you!

    opened by IL-Jerry 2
  • Adding Filters to Meta Looker Config in schema.yml

    Use Case: Given that programmatic creation of all LookML files is the goal, there are a couple of features that could be added to give people more flexibility in measure creation. The first one I could think of was filters. Individuals would use filters to calculate measures like Active Users (e.g. count_distinct of user ids where some sort of flag is true).

    The following code is my admitted techno-babble, as I don't fully understand pydantic and my Python is almost exclusively pandas-based.

    def lookml_dimensions_from_model(model: models.DbtModel, adapter_type: models.SupportedDbtAdapters):
        return [
            {
                'name': column.name,
                'type': map_adapter_type_to_looker(adapter_type, column.data_type),
                'sql': f'${{TABLE}}.{column.name}',
                'description': column.description,
                # proposed: pass through any filters defined in the column meta
                'filters': [{f.name: f.value} for f in column.meta.looker.filters],
            }
            for column in model.columns.values()
            if map_adapter_type_to_looker(adapter_type, column.data_type) in looker_scalar_types
        ]


    def lookml_measures_from_model(model: models.DbtModel):
        return [
            {
                'name': measure.name,
                'type': measure.type.value,
                'sql': f'${{TABLE}}.{column.name}',
                'description': f'{measure.type.value.capitalize()} of {column.description}',
                # proposed: attach filters so measures like "active users" can be expressed
                'filters': [{f.name: f.value} for f in measure.filters],
            }
            for column in model.columns.values()
            for measure in column.meta.looker.measures
        ]
    

    It's pretty obvious that my Python skills are lacking (and I have no idea if this would actually work), but this idea would add more functionality for those who want to create more dynamic measures. Here is a bare-bones idea of how it could be configured in dbt:

    [screenshot: proposed schema.yml configuration]
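
    The screenshot is not preserved here, but a hypothetical schema.yml along the lines described (all names and the filters shape are illustrative, not a tested dbt2looker feature) might look like:

    models:
      - name: users
        columns:
          - name: user_id
            meta:
              measures:
                active_users:
                  type: count_distinct
                  filters:
                    - name: is_active   # hypothetical column to filter on
                      value: "yes"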

    Then the output would look something like this:

      measure: page_views {
        type: count
        sql: ${TABLE}.relevant_field ;;
        description: "Count of something."
        filters: [the_name_of_defined_column: "value_of_defined_column"]
      }
    
    enhancement 
    opened by sisu-callum 2
  • Incompatible packages when using snowflake

    This error comes up when using dbt2looker with Snowflake: https://github.com/snowflakedb/snowflake-connector-python/issues/1206

    It is remedied by the simple line pip install 'typing-extensions>=4.3.0', but dbt2looker depends on typing-extensions < 4.0.0.

    dbt2looker 0.9.2 requires typing-extensions<4.0.0,>=3.10.0, but you have typing-extensions 4.3.0 which is incompatible.
    
    opened by owlas 1
  • Allow skipping dbt manifest validation

    Some users lean heavily on the manifest to enhance their work with dbt. IMHO, in such cases this library should not enforce any schema validation; it is then the user's responsibility to keep the Looker generation from breaking.

    opened by cgrosman 1
  • Redshift type conversions missing

    Redshift has missing type conversions:

    10:07:17 WARNING Column type timestamp without time zone not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type boolean not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type double precision not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type character varying(108) not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 DEBUG  Created view from model dim_appointment with 0 measures, 0 dimensions
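
    For reference, a plausible sketch of the missing entries in a redshift-to-looker type map (an assumption for illustration, not the actual fix, which landed via #76 in v0.11.0):

    # Hypothetical additions; real code would need to pattern-match
    # parameterised types like character varying(108).
    redshift_to_looker_types = {
        'timestamp without time zone': 'timestamp',
        'boolean': 'yesno',
        'double precision': 'number',
        'character varying': 'string',
    }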
    
    bug 
    opened by owlas 1
  • Join models in explores

    Expose config for defining explores with joined models.

    Ideally this would live in a dbt exposure, but exposures are currently missing meta information.

    Add to models for now?

    enhancement 
    opened by owlas 1
  • feat: remove strict manifest validation

    Closes #72 Closes #37

    We have some validation already with typing, and the dbt manifest keeps changing. I think json-schema is causing more problems than it is solving. If we get weird errors, we can introduce some more relaxed validation.

    opened by owlas 0
  • Support group_labels in yml for dimensions

    https://github.com/lightdash/dbt2looker/blob/bb8f5b485ec541e2b1be15363ac3c7f8f19d030d/dbt2looker/models.py#L99

    Measures seem to have this, but dimensions don't. Probably all or most of the properties available in https://docs.lightdash.com/references/dimensions/ should be represented here -- is this something lightdash is willing to maintain, or would you want a contribution? @TuringLovesDeathMetal / @owlas - I figure full support for the lightdash properties that can map to Looker should be included, to maximize the value of this utility for enabling Looker customers to uncouple themselves from Looker.
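
    For reference, a minimal sketch of what dimension-level meta support might look like in dbt2looker's pydantic models (class and field names are assumptions mirroring the measure support at the linked line, not the actual source):

    from typing import Optional
    from pydantic import BaseModel

    class DbtMetaDimension(BaseModel):
        # hypothetical: mirror the group_label/label/hidden support measures have
        group_label: Optional[str] = None
        label: Optional[str] = None
        hidden: Optional[bool] = None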

    opened by mike-weinberg 1
  • Issue when parsing dbt models

    Hey folks!

    I've just run 'dbt2looker' in my local dbt repo folder, and I receive the following error:

    ❯ dbt2looker
    12:11:54 ERROR  Cannot parse model with id: "model.smallpdf.brz_exchange_rates" - is the model file empty?
    Failed
    

    The model file itself (pictured below) is not empty, so I am not sure what issue dbt2looker has parsing this model. It is not materialised as a table or view; dbt treats it as ephemeral - is that important when parsing files in the project? I've also tried running dbt2looker on a limited subset of dbt models via a tag; the same error appears. Any help is greatly appreciated!

    [screenshot: the model file]

    Other details:

    • on dbt version 1.0.0
    • using dbt-redshift adapter [email protected]
    • let me know if anything else is of importance!
    opened by lewisosborne 8
  • Support model level measures

    Motivation

    Technically, we can implement a measure that spans multiple columns under a single column's meta. But it would be more natural to define such measures at the model level.

    models:
      - name: ubie_jp_lake__dm_medico__hourly_score_for_nps
        description: |
          {{ doc("ubie_jp_lake__dm_medico__hourly_score_for_nps") }}
        meta:
          measures:
            total_x_y_z:
              type: number
              description: 'Summation of total x, total y and total z'
              sql: '${total_x} + ${total_y} + ${total_z}'
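
    For context, the intended output would presumably be a view-level LookML measure roughly like this (a sketch, not actual dbt2looker output):

    measure: total_x_y_z {
      type: number
      sql: ${total_x} + ${total_y} + ${total_z} ;;
      description: "Summation of total x, total y and total z"
    }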
    
    
    opened by yu-iskw 0
  • Lookml files should merge with existing views

    If I already have a view file, I'd like to merge in any new columns I've added in dbt.

    For example, if I have a description in dbt but not in Looker, I'd like to add it.

    If Looker already has a description, it should be left alone.

    Thread in dbt slack: https://getdbt.slack.com/archives/C01DPMVM2LU/p1650353949839609?thread_ts=1649968691.671229&cid=C01DPMVM2LU

    opened by owlas 0
  • Non-empty models cannot be parsed and are reported as empty

    As of version 0.9.2, dbt2looker no longer runs for us; v0.7.0 does run successfully. The error returned by 0.9.2 is 'Cannot parse model with id: "%s" - is the model file empty?'. However, the model this is returned for is not empty. Based on the code, it seems the attribute 'name' is missing, but inspecting manifest.json shows that there is actually a name for this model. I have no idea why these models are reported as empty. The manifest.json object for one of the offending models is pasted below.

    Reverting to v0.9.0 (which does not yet have this error message) just leads to dbt2looker crashing without any information. Reverting to v0.7.0 fixes the problem. This issue effectively locks us (and likely others) into using an old version of dbt2looker.

    "model.zivver_dwh.crm_account_became_customer_dates":
            {
                "raw_sql": "WITH sfdc_accounts AS (\r\n\r\n    SELECT * FROM {{ ref('stg_sfdc_accounts') }}\r\n\r\n), crm_opportunities AS (\r\n\r\n    SELECT * FROM {{ ref('crm_opportunities') }}\r\n\r\n), crm_account_lifecycle_stage_changes_into_customer_observed AS (\r\n\r\n    SELECT\r\n        *\r\n    FROM {{ ref('crm_account_lifecycle_stage_changes_observed') }}\r\n    WHERE\r\n        new_stage = 'CUSTOMER'\r\n\r\n), became_customer_dates_from_opportunities AS (\r\n\r\n    SELECT\r\n        crm_account_id AS sfdc_account_id,\r\n\r\n        -- An account might have multiple opportunities. The account became customer when the first one was closed won.\r\n        MIN(closed_at) AS became_customer_at\r\n    FROM crm_opportunities\r\n    WHERE\r\n        opportunity_stage = 'CLOSED_WON'\r\n    GROUP BY\r\n        1\r\n\r\n), became_customer_dates_observed AS (\r\n\r\n    -- Some accounts might not have closed won opportunities, but still be a customer. Examples would be Connect4Care\r\n    -- customers, which have a single opportunity which applies to multiple accounts. If an account is manually set\r\n    -- to customer, this should also count as a customer.\r\n    --\r\n    -- We try to get the date at which they became a customer from the property history. Since that wasn't on from\r\n    -- the beginning, we conservatively default to either the creation date of the account or the history tracking\r\n    -- start date, whichever was earlier. Please note that this case should be exceedingly rare.\r\n    SELECT\r\n        sfdc_accounts.sfdc_account_id,\r\n        CASE\r\n            WHEN {{ var('date:sfdc:account_history_tracking:start_date') }} <= sfdc_accounts.created_at\r\n                THEN sfdc_accounts.created_at\r\n            ELSE {{ var('date:sfdc:account_history_tracking:start_date') }}\r\n        END AS default_became_customer_date,\r\n\r\n        COALESCE(\r\n            MIN(crm_account_lifecycle_stage_changes_into_customer_observed.new_stage_entered_at),\r\n            default_became_customer_date\r\n        ) AS became_customer_at\r\n\r\n    FROM sfdc_accounts\r\n    LEFT JOIN crm_account_lifecycle_stage_changes_into_customer_observed\r\n        ON sfdc_accounts.sfdc_account_id = crm_account_lifecycle_stage_changes_into_customer_observed.sfdc_account_id\r\n    WHERE\r\n        sfdc_accounts.lifecycle_stage = 'CUSTOMER'\r\n    GROUP BY\r\n        1,\r\n        2\r\n\r\n)\r\nSELECT\r\n    COALESCE(became_customer_dates_from_opportunities.sfdc_account_id,\r\n        became_customer_dates_observed.sfdc_account_id) AS sfdc_account_id,\r\n    COALESCE(became_customer_dates_from_opportunities.became_customer_at,\r\n        became_customer_dates_observed.became_customer_at) AS became_customer_at\r\nFROM became_customer_dates_from_opportunities\r\nFULL OUTER JOIN became_customer_dates_observed\r\n    ON became_customer_dates_from_opportunities.sfdc_account_id = became_customer_dates_observed.sfdc_account_id",
                "resource_type": "model",
                "depends_on":
                {
                    "macros":
                    [
                        "macro.zivver_dwh.ref",
                        "macro.zivver_dwh.audit_model_deployment_started",
                        "macro.zivver_dwh.audit_model_deployment_completed",
                        "macro.zivver_dwh.grant_read_rights_to_role"
                    ],
                    "nodes":
                    [
                        "model.zivver_dwh.stg_sfdc_accounts",
                        "model.zivver_dwh.crm_opportunities",
                        "model.zivver_dwh.crm_account_lifecycle_stage_changes_observed"
                    ]
                },
                "config":
                {
                    "enabled": true,
                    "materialized": "ephemeral",
                    "persist_docs":
                    {},
                    "vars":
                    {},
                    "quoting":
                    {},
                    "column_types":
                    {},
                    "alias": null,
                    "schema": "bl",
                    "database": null,
                    "tags":
                    [
                        "business_layer",
                        "commercial"
                    ],
                    "full_refresh": null,
                    "crm_record_types": null,
                    "post-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_completed() }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('data_engineer', ['all']) }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('analyst', ['all']) }}",
                            "transaction": true,
                            "index": null
                        }
                    ],
                    "pre-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_started() }}",
                            "transaction": true,
                            "index": null
                        }
                    ]
                },
                "database": "analytics",
                "schema": "bl",
                "fqn":
                [
                    "zivver_dwh",
                    "business_layer",
                    "commercial",
                    "crm_account_lifecycle_stage_changes",
                    "intermediates",
                    "crm_account_became_customer_dates",
                    "crm_account_became_customer_dates"
                ],
                "unique_id": "model.zivver_dwh.crm_account_became_customer_dates",
                "package_name": "zivver_dwh",
                "root_path": "C:\\Users\\tjebbe.bodewes\\Documents\\zivver-dwh\\dwh\\transformations",
                "path": "business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "original_file_path": "models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "name": "crm_account_became_customer_dates",
                "alias": "crm_account_became_customer_dates",
                "checksum":
                {
                    "name": "sha256",
                    "checksum": "a037b5681219d90f8bf8d81641d3587f899501358664b8ec77168901b3e1808b"
                },
                "tags":
                [
                    "business_layer",
                    "commercial"
                ],
                "refs":
                [
                    [
                        "stg_sfdc_accounts"
                    ],
                    [
                        "crm_opportunities"
                    ],
                    [
                        "crm_account_lifecycle_stage_changes_observed"
                    ]
                ],
                "sources":
                [],
                "description": "",
                "columns":
                {
                    "sfdc_account_id":
                    {
                        "name": "sfdc_account_id",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    },
                    "became_customer_at":
                    {
                        "name": "became_customer_at",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    }
                },
                "meta":
                {},
                "docs":
                {
                    "show": true
                },
                "patch_path": "zivver_dwh://models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.yml",
                "compiled_path": null,
                "build_path": null,
                "deferred": false,
                "unrendered_config":
                {
                    "pre-hook":
                    [
                        "{{ audit_model_deployment_started() }}"
                    ],
                    "post-hook":
                    [
                        "{{ grant_read_rights_to_role('analyst', ['all']) }}"
                    ],
                    "tags":
                    [
                        "commercial"
                    ],
                    "materialized": "ephemeral",
                    "schema": "bl",
                    "crm_record_types": null
                },
                "created_at": 1637233875
            }
    
    opened by Tbodewes 2
Releases(v0.11.0)
  • v0.11.0(Dec 1, 2022)

    Added

    • Support label and hidden fields (#49)
    • Support non-aggregate measures (#41)
    • Support bytes and bignumeric for BigQuery (#75)
    • Support for a custom connection name on the CLI (#78)

    Changed

    • Updated dependencies (#74)

    Fixed

    • Type maps for Redshift (#76)

    Removed

    • Strict manifest validation (#77)
  • v0.9.2(Oct 11, 2021)

  • v0.9.1(Oct 7, 2021)

    Fixed

    • Fixed bug where dbt2looker would crash if a dbt project contained an empty model

    Changed

    • When filtering models by tag, models that have no tag property will be ignored
  • v0.9.0(Oct 7, 2021)

    Added

    • Support for spark adapter (@chaimt)

    Changed

    • Updated with support for dbt2looker (@chaimt)
    • Lookml views now populate their "sql_table_name" using the dbt relation name
  • v0.8.2(Sep 22, 2021)

    Changed

    • Measures with missing descriptions fall back to column descriptions. If there is no column description either, the description falls back to "{measure_type} of {column_name}".
  • v0.8.1(Sep 22, 2021)

    Added

    • Dimensions have an enabled flag: generated dimensions can be switched off for certain columns with enabled: false
    • Measures can be defined under any of the following aliases: measures, measure, metrics, metric

    Changed

    • Updated dependencies
  • v0.8.0(Sep 9, 2021)

    Changed

    • Command line interface changed argument from --target to --target-dir

    Added

    • Added the --project-dir flag to the command line interface to change the search directory for dbt_project.yml
  • v0.7.3(Sep 9, 2021)

  • v0.7.2(Sep 9, 2021)

  • v0.7.1(Aug 27, 2021)

    Added

    • Use dbt2looker --output-dir /path/to/dir to customise the output directory of the generated lookml files

    Fixed

    • Fixed error with reporting json validation errors
    • Fixed error in join syntax in example .yml file
    • Fixed development environment for python3.7 users
  • v0.7.0(Apr 18, 2021)

  • v0.6.2(Apr 18, 2021)

  • v0.6.1(Apr 17, 2021)

  • v0.6.0(Apr 17, 2021)

Owner
lightdash
