Predictive AI layer for existing databases.

Overview

MindsDB


MindsDB is an open-source AI layer for existing databases that allows you to effortlessly develop, train and deploy state-of-the-art machine learning models using SQL queries.

Try it out

Contributing

To contribute to MindsDB, please check out our Contribution guide.

Current contributors

Made with contributors-img.

Report Issues

Please help us by reporting any issues you may have while using MindsDB.

License

Issues
  • install issues on windows 10

    Describe the bug When installing MindsDB with the command below, the following error message is output.

    command: pip install --requirement reqs.txt

    error message: ERROR: Could not find a version that satisfies the requirement torch>=1.0.1.post2 (from lightwood==0.6.4->-r reqs.txt (line 25)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch>=1.0.1.post2 (from lightwood==0.6.4->-r reqs.txt (line 25))

    Desktop (please complete the following information):

    • OS: windows 10

    Additional context I think PyTorch for Windows is currently not available through PyPI, so using the install commands from https://pytorch.org is a better approach.

    Bug 
    opened by YottaGin 22
  • Could not load module ModelInterface

    Hello, can someone help solve this:

    • ERROR:mindsdb-logger-f7442ec0-574d-11ea-a55e-106530eaf271:c:\users\drpbengrir\appdata\local\programs\python\python37\lib\site-packages\mindsdb\libs\controllers\transaction.py:188 - Could not load module ModelInterface
    Bug 
    opened by simofilahi 18
  • FileNotFoundError: [Errno 2] No such file or directory: '/home/milia/.venvs/mindsdb/lib/python3.6/site-packages/mindsdb_storage/1_0_5/suicide_rates_light_model_metadata.pickle'

    Describe the bug A FileNotFoundError occurs when running the predict.py script described below.

    The full traceback is the following:

    Traceback (most recent call last):
      File "predict.py", line 12, in <module>
        result = Predictor(name='suicide_rates').predict(when={'country':'Greece','year':1981,'sex':'male','age':'35-54','population':300000})
      File "/home/milia/.venvs/mindsdb/lib/python3.6/site-packages/mindsdb/libs/controllers/predictor.py", line 472, in predict
        transaction = Transaction(session=self, light_transaction_metadata=light_transaction_metadata, heavy_transaction_metadata=heavy_transaction_metadata, breakpoint=breakpoint)
      File "/home/milia/.venvs/mindsdb/lib/python3.6/site-packages/mindsdb/libs/controllers/transaction.py", line 53, in __init__
        self.run()
      File "/home/milia/.venvs/mindsdb/lib/python3.6/site-packages/mindsdb/libs/controllers/transaction.py", line 259, in run
        self._execute_predict()
      File "/home/milia/.venvs/mindsdb/lib/python3.6/site-packages/mindsdb/libs/controllers/transaction.py", line 157, in _execute_predict
        with open(CONFIG.MINDSDB_STORAGE_PATH + '/' + self.lmd['name'] + '_light_model_metadata.pickle', 'rb') as fp:
    FileNotFoundError: [Errno 2] No such file or directory: '/home/milia/.venvs/mindsdb/lib/python3.6/site-packages/mindsdb_storage/1_0_5/suicide_rates_light_model_metadata.pickle'
    

    To Reproduce Steps to reproduce the behavior:

    1. Create a train.py script using the dataset: https://www.kaggle.com/russellyates88/suicide-rates-overview-1985-to-2016#master.csv. The train.py script is the one below:
    from mindsdb import Predictor
    
    Predictor(name='suicide_rates').learn(
        to_predict='suicides_no', # the column we want to learn to predict given all the data in the file
        from_data="master.csv" # the path to the file where we can learn from, (note: can be url)
    )
    
    2. Run the train.py script.
    3. Create and run the predict.py script:
    from mindsdb import Predictor
    
    # use the model to make predictions
    result = Predictor(name='suicide_rates').predict(when={'country':'Greece','year':1981,'sex':'male','age':'35-54','population':300000})
    
    # you can now print the results
    print(result)
    
    4. See error

    Expected behavior What was expected was to see the results.

    Desktop (please complete the following information):

    • OS: Ubuntu 18.04.2 LTS
    • mindsdb 1.0.5
    • python 3.6.7
    Bug 
    opened by mlliarm 18
  • MySQL / Singlestore DB SSL support

    Problem I cannot connect my SingleStore DB (MySQL driver) because MindsDB doesn't support SSL options.

    Describe the solution you'd like Full support for MySQL SSL (key, cert, ca).

    Describe alternatives you've considered No alternative is possible at the moment, security first.

    enhancement question 
    opened by pierre-b 17
  • Caching historical data for streams

    I'll be using an example here and generalizing where needed. Let's say we have the following dataset that we train a timeseries predictor on:

    time,gb,target,aux
    1,A,7,foo
    2,A,10,foo
    3,A,12,bar
    4,A,14,bar
    2,B,5,foo
    4,B,9,foo
    

    In this case target is what we are predicting, gb is the column we are grouping on and we are ordering by time. aux is an unrelated column that's not timeseries in nature and just used "normally".

    We train a predictor with a window of n

    Then let's say we have an input stream that looks something like this:

    time,gb,target,aux
    6,A,33,foo
    7,A,54,foo
    

    Caching

    First, we will need to store, for each value of the column gb, the last n records.

    So, for example, if n==1 we would save the last row in the data above; if n==2 we would save both. When new rows come in, we un-cache the older rows.

    Inferring

    Second, when a new datapoint comes into the input stream we'll need to "infer" that the prediction we have to make is actually for the "next" datapoint. Which is to say that when: 7, A, 54, foo comes in we need to infer that we need to actually make predictions for:

    8, A, <this is what we are predicting>, foo

    The challenge here is how to infer that the next timestamp is 8. One simple way is to subtract from the previous record, but that's an issue for the first observation (since we don't have a previous record to subtract from, unless we cache part of the training data). Alternatively, we could add a feature to native to either:

    a) Provide a delta argument for each group by (representing by how much we increment the order column[s])
    b) Have an argument when doing timeseries prediction that tells it to predict for the "next" row and then do the inferences under the cover.

    @paxcema let me know which of these features would be easy to implement in native, since you're now the resident timeseries expert.
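    The caching and inference steps above can be sketched in plain Python (a hypothetical helper for illustration, not MindsDB code; it assumes rows arrive as dicts with a numeric time field):

    ```python
    from collections import defaultdict, deque

    class StreamCache:
        """Keep the last `window` records for each value of the group-by column."""

        def __init__(self, window):
            self.window = window
            # deque(maxlen=window) automatically un-caches the oldest row
            # when a new one comes in
            self.cache = defaultdict(lambda: deque(maxlen=window))

        def add(self, group, row):
            self.cache[group].append(row)

        def infer_next_time(self, group, delta=None):
            """Infer the order value the next prediction is actually for.

            `delta` corresponds to option (a); when it is absent we fall back
            to subtracting the previous cached record, which fails for the
            first observation unless part of the training data is cached too.
            """
            rows = self.cache[group]
            if delta is None:
                if len(rows) < 2:
                    raise ValueError('need a delta or at least two cached rows')
                delta = rows[-1]['time'] - rows[-2]['time']
            return rows[-1]['time'] + delta

    cache = StreamCache(window=2)
    cache.add('A', {'time': 6, 'target': 33, 'aux': 'foo'})
    cache.add('A', {'time': 7, 'target': 54, 'aux': 'foo'})
    print(cache.infer_next_time('A'))  # -> 8
    ```

    With option (a) the delta would be supplied per group by the user; without it, inferring requires at least two cached rows, which is exactly why the cache (or part of the training data) matters.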

    enhancement 
    opened by George3d6 16
  • Data extraction with mindsdb v.2

    This is a followup to #334, which was about the same use case and dataset, but different version of MindsDB with different errors.

    Your Environment

    Google Colab.

    • Python version: 3.6
    • Pip version: 19.3.1
    • Mindsdb version you tried to install: 2.13.8

    Describe the bug Running .learn() fails.

    [nltk_data]   Package stopwords is already up-to-date!
    
    /usr/local/lib/python3.6/dist-packages/lightwood/mixers/helpers/ranger.py:86: UserWarning: This overload of addcmul_ is deprecated:
    	addcmul_(Number value, Tensor tensor1, Tensor tensor2)
    Consider using one of the following signatures instead:
    	addcmul_(Tensor tensor1, Tensor tensor2, *, Number value) (Triggered internally at  /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
      exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
    
    Downloading: 100% 232k/232k [00:00<00:00, 1.31MB/s]
    Downloading: 100% 442/442 [00:06<00:00, 68.0B/s]
    Downloading: 100% 268M/268M [00:06<00:00, 43.1MB/s]
    
    
    Token indices sequence length is longer than the specified maximum sequence length for this model (606 > 512). Running this sequence through the model will result in indexing errors
    ERROR:mindsdb-logger-ac470732-3303-11eb-bbe9-0242ac1c0002---eb4b7352-566f-4a1b-aef2-c286163e1a10:/usr/local/lib/python3.6/dist-packages/mindsdb_native/libs/controllers/transaction.py:173 - Could not load module ModelInterface
    
    ERROR:mindsdb-logger-ac470732-3303-11eb-bbe9-0242ac1c0002---eb4b7352-566f-4a1b-aef2-c286163e1a10:/usr/local/lib/python3.6/dist-packages/mindsdb_native/libs/controllers/transaction.py:239 - index out of range in self
    
    ---------------------------------------------------------------------------
    
    IndexError                                Traceback (most recent call last)
    
    <ipython-input-13-a5e3bd095e46> in <module>()
          7 mdb.learn(
          8     from_data=train,
    ----> 9     to_predict='birth_year' # the column we want to learn to predict given all the data in the file
         10 )
    
    20 frames
    
    /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
       1812         # remove once script supports set_grad_enabled
       1813         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
    -> 1814     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
       1815 
       1816 
    
    IndexError: index out of range in self
    

    https://colab.research.google.com/drive/1a6WRSoGK927m3eMkVdlwBhW6-3BtwaAO?usp=sharing

    To Reproduce

    1. Rerun this notebook on Google Colab https://github.com/opendataby/vybary2019/blob/e08c32ac51e181ddce166f8a4fbf968f81bd2339/canal03-parsing-with-mindsdb.ipynb
    Bug 
    opened by abitrolly 15
  • Upload new Predictor

    Your Environment Scout. The Scout upload-predictor feature does not work when trying to upload a zip file.

    Error is "Just .zip files are allowed" even though a zip file was selected.

    Question on prediction: is there any way to upload a TensorFlow model, or do we need to convert the TensorFlow model into a MindsDB prediction model?

    Bug question 
    opened by Winthan 15
  • Second run fails with `FileNotFound` error inside the `mindsdb_storage`

    Your Environment

    • Python version: Python3
    • Pip version: pip3
    • Operating system: Ubuntu 18
    • Python environment used (e.g. venv, conda): -
    • Mindsdb version you tried to install: 1.13.12
    • Additional info if applicable:

    Describe the bug I trained a model in Colab and exported it as a ZIP to my local machine. Now I'm using it in a script to predict. It ran the first time without an issue, but consecutive runs fail as shown below.

    Screenshot from 2020-03-04 14-48-33

    My script is very simple,

    #!/usr/bin/env python3
    import csv
    import mindsdb
    
    predictor = mindsdb.Predictor(name='Incidents')
    predictor.load_model('./models/Incidents.zip')
    
    with open('./test_incidents.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            if line_count == 0:
                print(f'Column names are {", ".join(row)}')
                line_count += 1
            else:
                r = predictor.predict(when={'Description': row[0]})
                print(r['Category'])
    

    To Reproduce Steps to reproduce the behaviour, for example:

    1. Run the above script with a downloaded model ZIP.
    2. Rerun the script

    Expected behaviour It should run without an error

    Additional context I'm new to MindsDB; I'm just trying it out.

    Bug 
    opened by agentmilindu 15
  • fix CI can't pass, update test case

    Hi @torrmal

    Found that Travis failed to run python3 run_tests.py, so I changed it to python setup.py install. But the tests still do not pass.

    https://travis-ci.org/wangshub/mindsdb/jobs/499776388#L1132


    opened by wangshub 14
  • [Bug]: Docker container - Killed pip install mindsdb

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Current Behavior

    When I try to run the Docker container, I get the error message below.

    ➜  Desktop docker run -p 47334:47334 mindsdb/mindsdb
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100     6  100     6    0     0      7      0 --:--:-- --:--:-- --:--:--     7
    Collecting mindsdb==2.54.0
      Downloading MindsDB-2.54.0.tar.gz (120 kB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
        Preparing wheel metadata: started
        Preparing wheel metadata: finished with status 'done'
    Collecting dfsql==0.6.6
      Downloading dfsql-0.6.6-py3-none-manylinux1_x86_64.whl (27 kB)
    Requirement already satisfied: waitress>=1.4.4 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (2.0.0)
    Requirement already satisfied: psutil in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (5.8.0)
    Requirement already satisfied: pymongo[srv,tls]>=3.10.1 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (3.12.0)
    Collecting checksumdir>=1.2.0
      Downloading checksumdir-1.2.0-py3-none-any.whl (5.3 kB)
    Requirement already satisfied: flask<2.0,>=1.0 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (1.1.4)
    Requirement already satisfied: pyparsing==2.3.1 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (2.3.1)
    Collecting mindsdb-sql==0.0.27
      Downloading mindsdb_sql-0.0.27-py3-none-any.whl (48 kB)
    Requirement already satisfied: setuptools in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (50.3.1.post20201107)
    Collecting lightwood<1.3.0,>=1.2.0
      Downloading lightwood-1.2.0.tar.gz (120 kB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
        Preparing wheel metadata: started
        Preparing wheel metadata: finished with status 'done'
    Requirement already satisfied: moz-sql-parser==3.32.20026 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (3.32.20026)
    Requirement already satisfied: sentry-sdk in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (1.3.1)
    Requirement already satisfied: appdirs>=1.0.0 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (1.4.4)
    Requirement already satisfied: sqlalchemy>=1.3.0 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (1.4.23)
    Requirement already satisfied: cryptography<3.4,>=2.9.2 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (3.2.1)
    Requirement already satisfied: python-tds>=1.10.0 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (1.11.0)
    Requirement already satisfied: kafka-python>=2.0.0 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (2.0.2)
    Requirement already satisfied: flask-compress>=1.0.0 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (1.10.1)
    Collecting mindsdb-datasources==1.5.0
      Downloading mindsdb_datasources-1.5.0.tar.gz (15 kB)
    Requirement already satisfied: flask-restx>=0.2.0 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (0.5.0)
    Requirement already satisfied: python-multipart>=0.0.5 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (0.0.5)
    Requirement already satisfied: walrus==0.8.2 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (0.8.2)
    Requirement already satisfied: pg8000>=1.15.3 in /opt/conda/lib/python3.8/site-packages (from mindsdb==2.54.0) (1.21.0)
    Collecting confi>=0.0.4.1
      Downloading confi-0.0.4.1.tar.gz (3.2 kB)
    Requirement already satisfied: numpy>=1.18.5 in /opt/conda/lib/python3.8/site-packages (from dfsql==0.6.6->mindsdb==2.54.0) (1.19.2)
    Requirement already satisfied: pandas>=1.1.2 in /opt/conda/lib/python3.8/site-packages (from dfsql==0.6.6->mindsdb==2.54.0) (1.3.2)
    Requirement already satisfied: dnspython<2.0.0,>=1.16.0; extra == "srv" in /opt/conda/lib/python3.8/site-packages (from pymongo[srv,tls]>=3.10.1->mindsdb==2.54.0) (1.16.0)
    Requirement already satisfied: click<8.0,>=5.1 in /opt/conda/lib/python3.8/site-packages (from flask<2.0,>=1.0->mindsdb==2.54.0) (7.1.2)
    Requirement already satisfied: itsdangerous<2.0,>=0.24 in /opt/conda/lib/python3.8/site-packages (from flask<2.0,>=1.0->mindsdb==2.54.0) (1.1.0)
    Collecting Werkzeug<2.0,>=0.15
      Downloading Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
    Requirement already satisfied: Jinja2<3.0,>=2.10.1 in /opt/conda/lib/python3.8/site-packages (from flask<2.0,>=1.0->mindsdb==2.54.0) (2.11.3)
    Requirement already satisfied: sly>=0.4 in /opt/conda/lib/python3.8/site-packages (from mindsdb-sql==0.0.27->mindsdb==2.54.0) (0.4)
    Requirement already satisfied: pytest>=5.4.3 in /opt/conda/lib/python3.8/site-packages (from mindsdb-sql==0.0.27->mindsdb==2.54.0) (6.2.4)
    Requirement already satisfied: schema>=0.6.8 in /opt/conda/lib/python3.8/site-packages (from lightwood<1.3.0,>=1.2.0->mindsdb==2.54.0) (0.7.4)
    Requirement already satisfied: pillow<7 in /opt/conda/lib/python3.8/site-packages (from lightwood<1.3.0,>=1.2.0->mindsdb==2.54.0) (6.2.2)
    Requirement already satisfied: wheel>=0.32.2 in /opt/conda/lib/python3.8/site-packages (from lightwood<1.3.0,>=1.2.0->mindsdb==2.54.0) (0.35.1)
    Collecting black>=21.9b0
      Downloading black-21.9b0-py3-none-any.whl (148 kB)
    Collecting pmdarima>=1.8.0
      Downloading pmdarima-1.8.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (1.5 MB)
    Requirement already satisfied: dill==0.3.4 in /opt/conda/lib/python3.8/site-packages (from lightwood<1.3.0,>=1.2.0->mindsdb==2.54.0) (0.3.4)
    Requirement already satisfied: NLTK!=3.6,>=3 in /opt/conda/lib/python3.8/site-packages (from lightwood<1.3.0,>=1.2.0->mindsdb==2.54.0) (3.6.2)
    Collecting autopep8>=1.5.7
      Downloading autopep8-1.5.7-py2.py3-none-any.whl (45 kB)
    Collecting dataclasses-json>=0.5.4
      Downloading dataclasses_json-0.5.6-py3-none-any.whl (25 kB)
    Collecting torch>=1.9.0
      Downloading torch-1.10.0-cp38-cp38-manylinux1_x86_64.whl (881.9 MB)
    bash: line 1:    13 Killed                  pip install mindsdb==$(curl https://public.api.mindsdb.com/installer/release/docker___started___None)
    

    Expected Behavior

    The container starts and I'm happy after that.

    Steps To Reproduce

    1. docker pull mindsdb/mindsdb
    2. docker run -p 47334:47334 mindsdb/mindsdb
    

    Anything else?

    No response

    Big Bad Bug 
    opened by zakariamehbi 13
  • convention over configuration

    We should have a configuration paradigm that follows convention over configuration: every setting has a default value, and the config file only needs to state what should differ from those defaults.

    https://en.wikipedia.org/wiki/Convention_over_configuration
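    A minimal sketch of the pattern (hypothetical default values and keys, not MindsDB's actual config schema):

    ```python
    # Every setting has a convention-supplied default:
    DEFAULTS = {
        'api': {'host': '127.0.0.1', 'port': 47334},
        'log_level': 'INFO',
    }

    def merge_config(defaults, overrides):
        """Recursively overlay user-supplied values on top of the defaults."""
        merged = dict(defaults)
        for key, value in overrides.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = merge_config(merged[key], value)
            else:
                merged[key] = value
        return merged

    # The config file only states what differs from convention:
    config = merge_config(DEFAULTS, {'api': {'port': 47335}})
    print(config)  # host and log_level stay at their defaults; only the port changes
    ```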

    help wanted good first issue refactor 
    opened by torrmal 13
  • Documentation for using a custom predictor/analysis block from mindsdb (on-prem & cloud)

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Is your feature request related to a problem? Please describe.

    A likely solution for one potential customer is to use a custom predictor and analysis block. We should document how this can be done, internally (and ideally externally), both for on-prem and cloud.

    Describe the solution you'd like.

    Included steps

    • How to create the custom mixer/analysis block
    • What to do with it
    • How to call it from mindsdb

    Describe an alternate solution.

    No response

    Anything else? (Additional Context)

    Build upon existing work: https://lightwood.io/tutorials.html

    documentation 
    opened by tomhuds 0
  • Documentation for querying mindsdb via backend frameworks

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Is your feature request related to a problem? Please describe.

    Customers are unsure about how to run queries from within their backend frameworks.

    Some documentation on this will be really useful for:

    • Increasing number of customers trying out mindsdb
    • Reducing time to production
    • Designing how a user-friendly package should work e.g. a new version of mindsdb_python_sdk

    Describe the solution you'd like.

    • A written demo/tutorial showing how to use mindsdb from a backend framework (e.g. python)
    • Video accompanying it

    Possible frameworks

    • Python (current customer)
    • Ruby on Rails (current customer)
    • PHP (current customer)
    • Javascript
    • Any others? @ZoranPandovski ?

    Example functionality to demonstrate for each

    • Connect datasource
    • Train predictor
    • Make predictions
    • Persist predictions
    • Retrain predictor
    • (Export/import predictor)

    Describe an alternate solution.

    No response

    Anything else? (Additional Context)

    We will need to:

    • [ ] Decide what frameworks to do, in what order
    • [ ] Decide dataset (home_rentals?) and functionality to demonstrate
    • [ ] Make tutorials and videos
    documentation 
    opened by tomhuds 0
  • Add predictions as a column to the original table

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Is your feature request related to a problem? Please describe.

    We currently have a good workaround, which adds predictions to a new 'predictions' table. (https://docs.google.com/document/d/1dhqW3fdiPJhtRptMuWP-kVF9SWrWwcs-Q2EhLtabbH0/edit)

    One customer is OK with this, but having the predictions in the same table would be 'less of a hassle'. If they think this, it is very likely other customers will too.

    Describe the solution you'd like.

    Possible syntax, to get the conversation started:

    INSERT COLUMNS INTO table_name ( query, including an id? )

    or, something much simpler like:

    ADD PREDICTIONS

    Describe an alternate solution.

    No response

    Anything else? (Additional Context)

    We should also consider what would happen for retraining etc.

    • [ ] Meeting: decide if this is something we want to build
    • [ ] Decide on syntax and implications
    • [ ] Build/Test
    opened by tomhuds 0
  • H2O PredictiveHandler

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Is your feature request related to a problem? Please describe.

    Very low priority for now.

    Could be useful to have an integration for H2O, another AutoML framework.

    This may also open up additional use-cases that we can solve, e.g. clustering https://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/intro.html

    Describe the solution you'd like.

    TBC

    Describe an alternate solution.

    No response

    Anything else? (Additional Context)

    No response

    help wanted integration 
    opened by tomhuds 0
  • Apache Superset support

    Is there an existing issue for this?

    • [X] I have searched the existing issues

    Is your feature request related to a problem? Please describe.

    Support for Apache Superset. Known problems using MindsDB as a MySQL database:

    1. In the "add dataset" dialog, SQL like the following is sent, which is not supported at the moment:
    SHOW CREATE TABLE `mindsdb`.`fish_model`
    
    2. In the case of a dataset created from SQL with *, e.g.:
    SELECT *
       FROM photorep.fish
       JOIN mindsdb.fish_model
       WHERE photorep.fish.species = 'Pike'
    

    the dashboard tries to run:

    SELECT width AS width,
           count(*) AS count
    FROM
      (SELECT *
       FROM photorep.fish
       JOIN mindsdb.fish_model
       WHERE photorep.fish.species = 'Pike') AS virtual_table
    GROUP BY width
    ORDER BY count DESC
    

    And as a result, this error:

    RuntimeError: Binder Error: table "pandas_scan" has duplicate column name "Weight"
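    The underlying problem is the classic duplicate-column issue with SELECT * over a join; a minimal pandas illustration (invented table contents, not MindsDB's actual execution path):

    ```python
    import pandas as pd

    fish = pd.DataFrame({'species': ['Pike'], 'Weight': [3.2]})
    fish_model = pd.DataFrame({'Weight': [3.1], 'prediction': [0.9]})

    # SELECT * keeps every column of both joined tables, so the combined
    # frame carries two 'Weight' columns -- which an engine that scans it
    # (the "pandas_scan" in the error above) then rejects.
    virtual_table = pd.concat([fish, fish_model], axis=1)
    print(virtual_table.columns.duplicated().any())  # -> True
    ```

    Aliasing the columns explicitly instead of using * would sidestep the duplicate name.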

    Describe the solution you'd like.

    No response

    Describe an alternate solution.

    No response

    Anything else? (Additional Context)

    No response

    opened by ea-rus 0
Releases (v22.6.2.2)