A Python library that helps data scientists infer causation rather than merely observe correlation.

Overview

CausalNex


Badges: PyPI release, supported Python versions, CircleCI builds (master and develop branches), documentation build, license, Black code style.

What is CausalNex?

"A toolkit for causal reasoning with Bayesian Networks."

CausalNex aims to become one of the leading libraries for causal reasoning and "what-if" analysis using Bayesian Networks. It helps simplify these steps:

  • To learn causal structures,
  • To allow domain experts to augment the relationships,
  • To estimate the effects of potential interventions using data.
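The three steps above can be sketched conceptually in plain Python. This is a toy illustration with hypothetical variable names, not the CausalNex API:

```python
# Conceptual sketch of the workflow above (hypothetical names, NOT the
# CausalNex API): learn a structure, let an expert augment it, keep it a DAG.

# Step 1: suppose structure learning proposed these directed edges.
edges = {("smoking", "tar"), ("tar", "cancer"), ("age", "smoking")}

# Step 2: a domain expert augments the relationships.
edges.add(("age", "cancer"))         # add a known causal link
edges.discard(("age", "smoking"))    # drop a spurious one

# A Bayesian Network must stay a DAG, so re-check acyclicity after edits.
def is_acyclic(edge_set):
    """Kahn's algorithm: repeatedly remove nodes with no incoming edges."""
    nodes = {n for e in edge_set for n in e}
    remaining = set(edge_set)
    while nodes:
        sources = {n for n in nodes if all(dst != n for _, dst in remaining)}
        if not sources:
            return False  # every remaining node has an incoming edge -> cycle
        nodes -= sources
        remaining = {e for e in remaining if e[0] not in sources}
    return True

print(is_acyclic(edges))  # True
```

Step 3 (estimating intervention effects) then runs inference over the resulting graph, as shown in the Do-calculus feature below.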

Why CausalNex?

CausalNex is built on our collective experience of leveraging Bayesian Networks to identify causal relationships in data, so that we can develop the right interventions from analytics. We developed CausalNex because:

  • We believe leveraging Bayesian Networks is more intuitive for describing causality than traditional machine learning methodologies that are built on pattern recognition and correlation analysis.
  • Causal relationships are more accurate if we can easily encode or augment domain expertise in the graph model.
  • We can then use the graph model to assess the impact from changes to underlying features, i.e. counterfactual analysis, and identify the right intervention.

In our experience, a data scientist generally has to use at least 3-4 different open-source libraries before arriving at the final step of finding the right intervention. CausalNex aims to simplify this end-to-end process for causality and counterfactual analysis.

What are the main features of CausalNex?

The main features of this library are:

  • Use state-of-the-art structure learning methods to understand conditional dependencies between variables
  • Allow domain knowledge to augment model relationships
  • Build predictive models based on structural relationships
  • Fit probability distributions of the Bayesian Network
  • Evaluate model quality with standard statistical checks
  • Simplify how causality is understood in Bayesian Networks through visualisation
  • Analyse the impact of interventions using Do-calculus
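For intuition on the Do-calculus point above, here is a minimal backdoor-adjustment calculation in plain Python (toy numbers, not the CausalNex API):

```python
# Toy backdoor adjustment: with a confounder Z observed,
# P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z).
# All probabilities below are made-up illustrative numbers.
p_z = {0: 0.6, 1: 0.4}              # marginal distribution of Z
p_y1_given_x1_z = {0: 0.2, 1: 0.7}  # P(Y=1 | X=1, Z=z)

p_y1_do_x1 = sum(p_y1_given_x1_z[z] * p_z[z] for z in p_z)
print(round(p_y1_do_x1, 2))  # 0.2*0.6 + 0.7*0.4 = 0.4
```

This is the quantity a do-intervention targets: the effect of setting X, rather than merely observing it.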

How do I install CausalNex?

CausalNex is a Python package. To install it, simply run:

pip install causalnex

Since pygraphviz can be difficult to install, especially on Windows machines, the requirement is optional. If you want to use the CausalNex native plotting tools, you can use

pip install "causalnex[plot]"

Alternatively, you can use the networkx drawing functionality for visualisations with fewer dependencies.

Use all for a full installation of dependencies (currently only plotting):

pip install "causalnex[all]"

See more detailed installation instructions, including how to set up Python virtual environments, in our installation guide, and get started with our tutorial.

How do I use CausalNex?

You can find the documentation for the latest stable release here.

Note: You can find the notebook and markdown files used to build the docs in docs/source.

Can I contribute?

Yes! We'd love you to join us and help us build CausalNex. Check out our contributing documentation.

How do I upgrade CausalNex?

We use SemVer for versioning. The best way to upgrade safely is to check our release notes for any notable breaking changes.
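As a quick illustration of the SemVer convention, the check below sketches when an upgrade warrants extra caution (a hedged sketch, not a CausalNex utility):

```python
# Sketch: under SemVer (MAJOR.MINOR.PATCH), a major-version bump signals
# breaking changes. Note that for 0.x releases, minor bumps may also break.
def semver_parse(version):
    """Parse 'v1.2.3' or '1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in version.lstrip("v").split("."))

def crosses_major(installed, target):
    """True if upgrading installed -> target crosses a major version."""
    return semver_parse(target)[0] > semver_parse(installed)[0]

print(crosses_major("0.11.1", "0.12.0"))  # False
print(crosses_major("0.11.1", "1.0.0"))   # True: check the release notes
```

Either way, skimming the release notes before upgrading is the safest habit.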

What licence do you use?

See our LICENSE for more detail.

We're hiring!

Do you want to be part of the team that builds CausalNex and other great products at QuantumBlack? If so, you're in luck! QuantumBlack is currently hiring Machine Learning Engineers who love using data to drive their decisions. Take a look at our open positions and see if you're a fit.

Comments
  • About from_pandas


    Description

    When I use 'from_pandas' to learn a causal graph with NOTEARS, I run 'watch -n 1 free -m' and it shows 3/16 GB used. I am running 370 thousand rows of data but only using about 3 GB of memory. How can I improve efficiency?

    Context

    Every 1.0s: free -m                       Tue Jun 30 16:18:36 2020

                  total    used    free   shared  buff/cache  available
    Mem:          16384    2799   12213        0        1371      13584
    Swap:             0       0       0

    enhancement 
    opened by ziyuwzf 17
  • Installation Docs have an example of installing Python 3.8


    Description

    Change your installation docs to have an example of installing a version of Python that is compatible with this module.

    Context

    If the software is made for Python 3.5, 3.6, or 3.7, then the installation examples shouldn't show someone how to install Python 3.8.

    Possible Implementation

    Change the 3.8 to 3.5, 3.6, or 3.7.


    documentation 
    opened by AdrianAntico 14
  • Why does my model have some negative-weight edges?


    Hi, after learning the BN model, I use 'sm.edges(data)' and find some negative-weight edges in my model. What is the meaning of those negative-weight edges?

    question 
    opened by 1021808202 13
  • I have a question about Dynotears


    Description

    I would like to know the expected input data and result for dynotears.

    Context

    I tried to use dynotears.from_pandas using DREAM4 challenge data, but got an empty graph. I constructed a list of 10 dataframes, where in each dataframe the columns are nodes and the rows are timepoints, for example:

        g1  g2
    1    1   2
    2    4   2
    3    3   1

    bug 
    opened by minsik-bioinfo 11
  • Racism and the `load_boston` dataset


    Description

    Your documentation lists a demo that is using the load_boston dataset to explain how to use the tool here. It also lists the variables in the dataset and you can confirm the contents.

    - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
    

    One of the variables used to predict a house price is skin color. That's incredibly problematic.

    Given that this project is backed by a global consultancy firm I'm especially worried. People will look at this example and copy it. Since documentation pages are typically seen as an authoritative source of truth it's really dubious to use this dataset without even a mention of the controversy in the variables. The use of skin color to predict a house price is legitimately a bad practice and the documentation currently makes no effort to acknowledge it. For a package that's all about causal inference, you'd expect it to acknowledge that you still need to understand the dataset that you feed in.

    Note that this dataset is up for removal from scikit-learn because of the obvious controversy and it's also something that's been pointed out at many conferences. Here is one talk from me if you're interested.

    opened by koaning 10
  • Enable Python 3.10


    Motivation and Context

    Why was this PR created?

    How has this been tested?

    What testing strategies have you used?

    Checklist

    • [ ] Read the contributing guidelines
    • [ ] Opened this PR as a 'Draft Pull Request' if it is work-in-progress
    • [ ] Updated the documentation to reflect the code changes
    • [ ] Added a description of this change and added my name to the list of supporting contributions in the RELEASE.md file
    • [ ] Added tests to cover my changes
    • [ ] Assigned myself to the PR

    Notice

    • [ ] I acknowledge and agree that, by checking this box and clicking "Submit Pull Request":

    • I submit this contribution under the Apache 2.0 license and represent that I am entitled to do so on behalf of myself, my employer, or relevant third parties, as applicable.

    • I certify that (a) this contribution is my original creation and / or (b) to the extent it is not my original creation, I am authorised to submit this contribution on behalf of the original creator(s) or their licensees.

    • I certify that the use of this contribution as authorised by the Apache 2.0 license does not violate the intellectual property rights of anyone else.

    configuration 
    opened by tonyabracadabra 9
  • Question about w_threshold


    Hi, I don't know the meaning of the parameter 'w_threshold' in "from_pandas", because I can get a BN model when I use "hill_climb" from pgmpy. The number of edges in the model varies with the value of w_threshold, so I don't know which one is correct. This problem does not exist with "hill_climb".

    question 
    opened by 1021808202 8
  • do_intervention never ends running despite simple query


    Hi QB,

    Description

    I am running a do-calculus on a small dataset (116x32) discretised into 2 to 4 buckets. The BN fits the CPDs in about 2 seconds, so performance is relatively good.

    However, a simple do-intervention runs forever and never finishes; I waited several hours and then interrupted the kernel.

    Steps to Reproduce

    from causalnex.inference import InferenceEngine
    ie = InferenceEngine(bn)
    ie.do_intervention("cD_TropCycl", {1: 0.2, 2: 0.8})
    print("distribution after do", ie.query()["cD_TropCycl"])

    Expected Result

    Shouldn't it run in just a few seconds given the low number of buckets? How long does it normally take?

    Actual Result

    No results are returned after hours of running a simple query.

    Your Environment

    Include as many relevant details about the environment in which you experienced the bug:

    • CausalNex version used (pip show causalnex): 0.5.0
    • Python version used (python -V): python 3.7.6
    • Operating system and version: osx 10.15.14 on 2.3 GHz Quad-Core Intel Core i5

    Thank you very much!!

    bug 
    opened by ironcrypto 8
  • Fix DAGLayer moving out of gpu during optimization step


    Motivation and Context

    Quick bug fix: During the optimizer step the DAG parameters are being replaced with tensors that live on the CPU rather than on the GPU. This change makes sure these tensors are sent to the same device the model was originally on.

    Without this fix, you will get error: "Expected all tensors to be on the same device..."

    Longer-term solution: I would refactor the device assignment: rather than relying on the default dtype or calling to(device), create a single call at model creation that sets the dtype and device, and use that for all layers and parameters.
    I used the Black linter. I will add tests and ensure I use the same linter as the project after my paper deadline; this quick fix is to help anyone else who was stuck on this :)

    How has this been tested?

    Ran a few examples on my data.

    Checklist

    • [x] Read the contributing guidelines
    • [x] Opened this PR as a 'Draft Pull Request' if it is work-in-progress
    • [ ] Updated the documentation to reflect the code changes
    • [ ] Added a description of this change and added my name to the list of supporting contributions in the RELEASE.md file
    • [ ] Added tests to cover my changes
    • [x] Assigned myself to the PR

    Notice

    • [x] I acknowledge and agree that, by checking this box and clicking "Submit Pull Request":

    • I submit this contribution under the Apache 2.0 license and represent that I am entitled to do so on behalf of myself, my employer, or relevant third parties, as applicable.

    • I certify that (a) this contribution is my original creation and / or (b) to the extent it is not my original creation, I am authorised to submit this contribution on behalf of the original creator(s) or their licensees.

    • I certify that the use of this contribution as authorised by the Apache 2.0 license does not violate the intellectual property rights of anyone else.

    bug 
    opened by samialabed 7
  • Fix optional dependencies


    Notice

    • [x] I acknowledge and agree that, by checking this box and clicking "Submit Pull Request":

    • I submit this contribution under the Apache 2.0 license and represent that I am entitled to do so on behalf of myself, my employer, or relevant third parties, as applicable.

    • I certify that (a) this contribution is my original creation and / or (b) to the extent it is not my original creation, I am authorised to submit this contribution on behalf of the original creator(s) or their licensees.

    • I certify that the use of this contribution as authorised by the Apache 2.0 license does not violate the intellectual property rights of anyone else.

    Motivation and Context

    Why was this PR created?

    • [x] Fix optional dependencies: .structure imported pytorch
    • [x] Make optional pygraphviz dependency clearer: raise warning and add to README
    • [ ] Test on windows machine if we catch the right errors

    How has this been tested?

    What testing strategies have you used?

    Checklist

    • [x] Read the contributing guidelines
    • [x] Opened this PR as a 'Draft Pull Request' if it is work-in-progress
    • [x] Updated the documentation to reflect the code changes
    • [ ] Added a description of this change and added my name to the list of supporting contributions in the RELEASE.md file
    • [ ] Added tests to cover my changes
    • [x] Assigned myself to the PR
    opened by qbphilip 7
  • pandas error


    Trying to run the model on Google Colab, pandas raises issues about importing some packages. The first fatal one is related to 'OrderedDict':

    /usr/local/lib/python3.6/dist-packages/pandas/io/formats/html.py in <module>()
          8 from textwrap import dedent
          9
    ---> 10 from pandas.compat import OrderedDict, lzip, map, range, u, unichr, zip
         11
         12 from pandas.core.dtypes.generic import ABCMultiIndex

    ImportError: cannot import name 'OrderedDict'

    The second fatal one is related to importing 'lmap':

    /usr/local/lib/python3.6/dist-packages/pandas/core/config.py in <module>()
         55
         56 import pandas.compat as compat
    ---> 57 from pandas.compat import lmap, map, u
         58
         59 DeprecatedOption = namedtuple('DeprecatedOption', 'key msg rkey removal_ver')

    ImportError: cannot import name 'lmap'

    With standalone pandas I did not get these issues; it seems to me Google Colab is missing something. Any suggestions, please?

    opened by aminemosbah 7
  • [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1


    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • test_requirements.txt
    ⚠️ Warning
    mdlp-discretization 0.3.3 requires scikit-learn, which is not installed.
    mdlp-discretization 0.3.3 requires numpy, which is not installed.
    mdlp-discretization 0.3.3 requires scipy, which is not installed.
    matplotlib 3.5.3 requires numpy, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning setuptools 39.0.1 -> 65.5.1:

    • Severity: medium (priority score 551/1000; recently disclosed, has a fix available, CVSS 5.3)
    • Issue: Regular Expression Denial of Service (ReDoS), SNYK-PYTHON-SETUPTOOLS-3180412
    • Breaking change: No
    • Exploit maturity: No known exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by leonnallamuthu 0
  • [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1


    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • doc_requirements.txt
    ⚠️ Warning
    Sphinx 3.5.4 has requirement docutils<0.17,>=0.12, but you have docutils 0.19.
    sphinx-rtd-theme 0.5.2 has requirement docutils<0.17, but you have docutils 0.19.
    
    

    Vulnerabilities that will be fixed

    By pinning setuptools 39.0.1 -> 65.5.1:

    • Severity: medium (priority score 551/1000; recently disclosed, has a fix available, CVSS 5.3)
    • Issue: Regular Expression Denial of Service (ReDoS), SNYK-PYTHON-SETUPTOOLS-3180412
    • Breaking change: No
    • Exploit maturity: No known exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by leonnallamuthu 0
  • [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1


    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • requirements.txt
    ⚠️ Warning
    statsmodels 0.0.0 requires pandas, which is not installed.
    statsmodels 0.0.0 requires numpy, which is not installed.
    scipy 1.6.3 requires numpy, which is not installed.
    scikit-learn 0.24.2 requires numpy, which is not installed.
    pgmpy 0.1.19 requires pandas, which is not installed.
    pgmpy 0.1.19 requires numpy, which is not installed.
    patsy 0.5.3 requires numpy, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning setuptools 39.0.1 -> 65.5.1:

    • Severity: medium (priority score 551/1000; recently disclosed, has a fix available, CVSS 5.3)
    • Issue: Regular Expression Denial of Service (ReDoS), SNYK-PYTHON-SETUPTOOLS-3180412
    • Breaking change: No
    • Exploit maturity: No known exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by leonnallamuthu 0
  • Add GitHub Actions installation jobs across environments


    Description

    As discovered by many during a recent hackathon workshop in SG, we found that many users (mostly on Windows and Mac) faced problems installing through pip / conda.

    Furthermore, a recent update in pygraphviz broke the tutorials at that time.

    Context

    I find this library's direction good; however, it could use more stability, especially when workshops are conducted with it.

    Possible Implementation

    GitHub Actions (Free) can run pip install / conda install pipelines on many environments. See: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources

    The workflow may be something like this:

    | Installation method | Windows | Mac   | Ubuntu |
    |---------------------|---------|-------|--------|
    | pip                 | Job 1   | Job 2 | ...    |
    | pip w/ pygraphviz   |         |       |        |
    | conda               |         |       |        |
    | conda w/ pygraphviz |         |       | Job n  |

    Note that dimensionality can be reduced through matrices, which take a Cartesian product of the lists provided. See: https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs
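    A workflow along those lines could be sketched roughly as follows (the file layout, job names, and conda channel are assumptions, not an existing workflow):

    ```yaml
    # Hypothetical .github/workflows/install-check.yml sketch of the matrix
    # described above; the conda-forge channel for causalnex is an assumption.
    name: install-check
    on: [push]
    jobs:
      install:
        runs-on: ${{ matrix.os }}
        strategy:
          matrix:
            os: [windows-latest, macos-latest, ubuntu-latest]
            method: [pip, conda]
        steps:
          - uses: actions/setup-python@v4
            with:
              python-version: "3.8"
          - if: matrix.method == 'pip'
            run: pip install causalnex
          - if: matrix.method == 'conda'
            run: conda install -c conda-forge causalnex
    ```

    The matrix expands to one job per (os, method) pair, covering the table above without listing each job by hand.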

    opened by Eve-ning 0
  • EMSingleLatentVariable is producing random error at random times


    Description

    I was trying to determine a single latent variable in my model, and when I tried to run the EM algorithm using fit_latent_cpds, it sometimes throws random errors while at other times it produces a result.

    Steps to Reproduce

    I have created the following test data to try the model:

    import numpy as np
    import pandas as pd

    data = pd.DataFrame({'node1': np.repeat(1, 50), 'node2': np.repeat(1, 50)})
    for i in [0, 3, 5, 13, 17, 29, 30, 31, 32]:
        data.loc[i, 'node1'] = 0

    for i in [4, 5, 11, 15, 17, 25, 27, 34, 41, 47]:
        data.loc[i, 'node2'] = 0


    The data structure is very simple, a latent variable latent1 that affects node1 and node2.

    sm = StructureModel()
    sm.add_edges_from([('latent1', 'node1'), ('latent1', 'node2')])
    bn = BayesianNetwork(sm)
    bn.node_states = {'latent1':{0,1}, 'node1': {0,1}, 'node2': {0,1}}
    bn.fit_latent_cpds(lv_name="latent1", lv_states=[0, 1], data=data[["node1", "node2"]], n_runs=30)
    

    Sometimes I receive a good result such as the following:

    {'latent1':                  
    latent1          
    0        0.283705
    1        0.716295,
    
    'node1': latent1        0         1
    node1                     
    0        0.21017  0.168051
    1        0.78983  0.831949,
    
    'node2': latent1         0         1
    node2                      
    0        0.253754  0.178709
    1        0.746246  0.821291}
    

    However, sometimes I receive different error messages:

    Traceback (most recent call last):
      File "test_2.py", line 28, in <module>
        bn.fit_latent_cpds(lv_name="latent1", lv_states=[0, 1], data=data[["node1", "node2"]], n_runs=30)
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/network/network.py", line 553, in fit_latent_cpds
        estimator = EMSingleLatentVariable(
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/estimator/em.py", line 144, in __init__
        self._mb_data, self._mb_partitions = self._get_markov_blanket_data(data)
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/estimator/em.py", line 585, in _get_markov_blanket_data
        mb_product = cpd_multiplication([self.cpds[node] for node in self.valid_nodes])
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/utils/pgmpy_utils.py", line 122, in cpd_multiplication
        product_pgmpy = factor_product(*cpds_pgmpy)  # type: TabularCPD
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pgmpy/factors/base.py", line 76, in factor_product
        return reduce(lambda phi1, phi2: phi1 * phi2, args)
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pgmpy/factors/base.py", line 76, in <lambda>
        return reduce(lambda phi1, phi2: phi1 * phi2, args)
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pgmpy/factors/discrete/DiscreteFactor.py", line 930, in __mul__
        return self.product(other, inplace=False)
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pgmpy/factors/discrete/DiscreteFactor.py", line 697, in product
        phi = self if inplace else self.copy()
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pgmpy/factors/discrete/CPD.py", line 299, in copy
        return TabularCPD(
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pgmpy/factors/discrete/CPD.py", line 142, in __init__
        super(TabularCPD, self).__init__(
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pgmpy/factors/discrete/DiscreteFactor.py", line 99, in __init__
        raise ValueError("Variable names cannot be same")
    ValueError: Variable names cannot be same
    

    And sometimes I receive this error:

    Traceback (most recent call last):
      File "test_2.py", line 28, in <module>
        bn.fit_latent_cpds(lv_name="latent1", lv_states=[0, 1], data=data[["node1", "node2"]], n_runs=30)
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/network/network.py", line 563, in fit_latent_cpds
        estimator.run(n_runs=n_runs, stopping_delta=stopping_delta)
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/estimator/em.py", line 181, in run
        self.e_step()  # Expectation step
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/estimator/em.py", line 233, in e_step
        results = self._update_sufficient_stats(node_mb_data["_lookup_"])
      File "/Users/user/opt/anaconda3/envs/py38/lib/python3.8/site-packages/causalnex/estimator/em.py", line 448, in _update_sufficient_stats
        prob_lv_given_mb = self._mb_product[mb_cols]
    KeyError: (nan, 0.0)
    

    My code originally also included boundaries and priors; however, I realised these two errors just randomly pop up at different times.

    Please let me know if I have done something wrong in setting up the network.

    Your Environment

    Include as many relevant details about the environment in which you experienced the bug:

    • CausalNex version used (pip show causalnex): 0.11.0
    • Python version used (python -V): 3.8.15 (via conda)
    • Operating system and version: Mac OS M1
    opened by ianchlee 0
  • Can't install causalnex using poetry on new Apple M1 chip


    Description

    Trying to install causalnex on new M1 chip using poetry generates the following issue:

    Because causalnex (0.11.0) depends on scikit-learn (>=0.22.0,<0.22.2.post1 || >0.22.2.post1,<0.24.1 || >0.24.1,<0.25.0)
     and no versions of causalnex match >0.11.0,<0.12.0, causalnex (>=0.11.0,<0.12.0) requires scikit-learn (>=0.22.0,<0.22.2.post1 || >0.22.2.post1,<0.24.1 || >0.24.1,<0.25.0).
    So, because your package depends on both scikit-learn (^1.1.3) and causalnex (^0.11.0), version solving failed.
    

    Unfortunately downgrading the version of scikit-learn generates a numpy compatibility error.

      ImportError: numpy is not installed.
            scikit-learn requires numpy >= 1.11.0.
            Installation instructions are available on the scikit-learn website: http://scikit-learn.org/stable/install.html
    

    Context

    I migrated to a new computer that has the Apple M1 chip

    Steps to Reproduce

    1. Set up a Python environment (in my case: 3.8.15)
    2. Install Poetry globally (installation guide)
    3. Run poetry add causalnex to resolve package dependencies and install causalnex

    Expected Result

    I was expecting causalnex to be compatible with at least scikit-learn 1.1.3

    Actual Result

    I can't install causalnex.


    Your Environment

    Include as many relevant details about the environment in which you experienced the bug:

    • CausalNex version used (pip show causalnex):
    • Python version used (python -V): 3.8.15
    • Operating system and version: Apple M1
    opened by achuinar 0
Releases(v0.11.1)
  • v0.11.1(Nov 16, 2022)

    Change log:

    • Add Python 3.9 and 3.10 support
    • Unlock SciPy version restrictions
    • Fix bug: infinite loop in the latent-variable inference engine
    • Fix DAGLayer moving out of gpu during optimization step of Pytorch learning
    • Fix CPD comparison of floating point - rounding issue
    • Fix set_cpd for parentless nodes that are not MultiIndex
    • Add Docker files for development on a dockerized environment
    Source code(tar.gz)
    Source code(zip)
  • v0.11.0(Nov 11, 2021)

    Changelog:

    • Add expectation-maximisation (EM) algorithm to learn with latent variables
    • Add a new tutorial on adding latent variable as well as identifying its candidate location
    • Allow users to provide self-defined CPD, as per #18 and #99
    • Generalise the utility function to get Markov blanket and incorporate it within StructureModel (cf. #136)
    • Add a link to PyGraphviz installation guide under the installation prerequisites
    • Add GPU support to Pytorch implementation, as requested in #56 and #114 (some issues remain)
    • Add an example for structure model exporting into first causalnex tutorial, as per #124 and #129
    • Fix infinite loop when querying InferenceEngine after a do-intervention that splits the graph into two or more subgraphs, as per #45 and #100
    • Fix decision tree and mdlp discretisations bug when input data is shuffled
    • Fix broken URLs in FAQ documentation, as per #113 and #125
    • Fix integer index type checking for timeseries data, as per #74 and #86
    • Fix bug where inputs to the DAGRegressor/Classifier yielded different predictions between float and int dtypes, as per #140
    Source code(tar.gz)
    Source code(zip)
  • 0.11.0(Nov 11, 2021)

  • v0.10.0(May 11, 2021)

    Functionality:

    • Add BayesianNetworkClassifier, an sklearn-compatible class for fitting and predicting probabilities in a BN.
    • Add supervised discretisation strategies using Decision Tree and MDLP algorithms.
    • Support receiving a list of inputs for InferenceEngine with a multiprocessing option
    • Add utility function to extract Markov blanket from a Bayesian Network
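The Markov blanket utility mentioned above returns, conceptually, a node's parents, children, and co-parents (the children's other parents). A minimal sketch on a hand-rolled edge list (not CausalNex's StructureModel API; the `markov_blanket` helper is invented for illustration):

```python
def markov_blanket(node, edges):
    """Markov blanket of `node` in a DAG given as (parent, child)
    edge pairs: parents, children, and the children's other parents."""
    parents = {p for p, c in edges if c == node}
    children = {c for p, c in edges if p == node}
    co_parents = {p for p, c in edges if c in children and p != node}
    return parents | children | co_parents

# A -> C, B -> C, C -> D, E -> D
edges = [("A", "C"), ("B", "C"), ("C", "D"), ("E", "D")]
assert markov_blanket("C", edges) == {"A", "B", "D", "E"}
```

Conditioned on its Markov blanket, a node is independent of every other variable in the network, which is why the blanket is the natural feature set for local inference.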

    Minor fixes and housekeeping:

    • Fix estimator issues with sklearn ("unofficial Python 3.9 support"; does not work with the discretiser option)
    • Fix cyclical import of causalnex.plots, as per #106
    • Add manifest files to ensure requirements and licenses are packaged
    • Bump dependency versions; remove prettytable as a dependency
  • 0.9.2(Mar 11, 2021)

    No functional changes.

    Docs:

    • Remove Boston housing dataset from the "sklearn tutorial", see #91 for more information.

    Development experience:

    • Update pylint version to 2.7
    • Make tests faster and deterministic
  • 0.9.1(Jan 6, 2021)

  • 0.9.0(Dec 7, 2020)

    Core changes

    • Add Python 3.8 support and drop 3.5.
    • Add pandas >=1.0 support, among other dependencies
    • PyTorch is now a full requirement
    • Extend the distribution types supported for structure learning
    • Various bug fixes
  • 0.8.1(Sep 18, 2020)

  • v0.8.0(Sep 10, 2020)

  • 0.7.0(May 28, 2020)

  • 0.6.0(Apr 27, 2020)

  • 0.5.0(Mar 25, 2020)

  • 0.4.3(Feb 5, 2020)

  • 0.4.2(Jan 28, 2020)

  • 0.4.1(Jan 28, 2020)

  • 0.4.0(Jan 28, 2020)

Owner
QuantumBlack Labs
Intelligence. Beautifully Engineered.