Sqlalchemy-databricks - SQLAlchemy dialect for Databricks

Overview

sqlalchemy-databricks

A SQLAlchemy Dialect for Databricks that uses the officially supported databricks-sql-connector DBAPI. Compatible with both workspace and SQL Analytics clusters. (Thanks to @Tom-Newton for pointing this out.)

Installation

Install using pip.

pip install sqlalchemy-databricks

Usage

Installing registers the databricks+connector dialect/driver with SQLAlchemy. Fill in the required information when passing the engine URL. The http path can be for either a workspace or SQL Analytics cluster.

from sqlalchemy import *
from sqlalchemy.engine import create_engine


engine = create_engine(
    "databricks+connector://token:<databricks_token>@<databricks_host>:443/<database_or_schema_name>",
    connect_args={"http_path": "<http_path>"},
)

logs = Table("my_table", MetaData(bind=engine), autoload=True)
print(select([func.count("*")], from_obj=logs).scalar())

Comments
  • schema doesn't change from default

    Sorry if this is a really dumb/misguided question. If it is, I'm hoping you could point me to a place where I can learn more.

    I set up an engine like

    engine = create_engine(
        f"databricks+connector://token:{token}@{host}:443/{my_schema}",
        connect_args={"http_path": http_path},
    )

    but engine.table_names() and anything else I do with that engine only return tables from the default schema.

    I have to work around it by doing a schema translation, but that can't be the right way to do this, or is it?

    engine1 = engine.execution_options(schema_translate_map={ "default": my_schema })

    edit: whoops put some private data in there
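
    A possible alternative, sketched here as an assumption rather than a confirmed fix: if the dialect exposes schemas through SQLAlchemy's inspection API, the target schema can be passed explicitly instead of relying on the one baked into the URL.

    # Hypothetical sketch: list and reflect tables from an explicit schema,
    # assuming the dialect supports the schema argument of the inspection API.
    from sqlalchemy import MetaData, Table, create_engine, inspect

    engine = create_engine(
        f"databricks+connector://token:{token}@{host}:443/default",  # token/host as above
        connect_args={"http_path": http_path},
    )
    print(inspect(engine).get_table_names(schema=my_schema))
    my_table = Table("my_table", MetaData(bind=engine, schema=my_schema), autoload=True)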

    opened by jonathan-dufault-kr 14
  • Add alembic support

    Following the instructions from [here](https://groups.google.com/g/sqlalchemy-alembic/c/t3KmE9KDzH4/m/AK1UylnCCQAJ), this makes the dialect work with Alembic. See the sketch below for context.
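
    For context, a minimal sketch of the standard online-migration pattern from an Alembic env.py, pointed at a Databricks engine; the connection details are placeholders and none of this is taken from the PR itself.

    # Hypothetical env.py sketch: build the engine with connect_args (the URL alone
    # cannot carry http_path) and hand the connection to Alembic's migration context.
    from alembic import context
    from sqlalchemy import create_engine

    engine = create_engine(
        "databricks+connector://token:<databricks_token>@<databricks_host>:443/<database_or_schema_name>",
        connect_args={"http_path": "<http_path>"},
    )

    with engine.connect() as connection:
        context.configure(connection=connection, target_metadata=None)
        with context.begin_transaction():
            context.run_migrations()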

    opened by harry19023 2
  • InvalidRequestError: Could not reflect: requested table(s)

    Summary

    We are trying to get data from Databricks into a pandas dataframe. We were able to get the list of all the tables, but when we try to actually query the data in one table, it fails. We are actually able to make it work using only databricks-sql-connector, but we'd prefer to use SQLAlchemy if possible.

    Data

    The tables are in Databricks under hive_metastore.my_schema which we access through a SQL warehouse.

    Code and Error

    The code below correctly returns all the tables (schema_name = "my_schema")

    import pandas as pd
    from databricks import sql
    from sqlalchemy import *
    from sqlalchemy.engine import create_engine

    engine = create_engine(
        f"""databricks+connector://token:{databricks_token}@{databricks_server_hostname}:443/{schema_name}""",
        connect_args={"http_path": f"{databricks_http_path}"},
    )
    sql_query = "show tables"
    df = pd.read_sql(sql_query, engine)

    However when we change the line before last to:

    sql_query = "select * from my_table limit 100"
    

    We get the following error: InvalidRequestError: Could not reflect: requested table(s) not available in Engine(databricks+connector://token:***@****.azuredatabricks.net:443/my_schema): (select * from my_table limit 100)

    We tried to replace my_table by my_schema.my_table, hive_metastore.my_schema.my_table and we get the same error.

    Libraries

    Python 3.10.8 pandas 1.5.1 sqlalchemy-databricks 0.2.0 sqlalchemy 1.4.39 databricks-sql-connector 2.0.2

    Please let me know if I need to provide more details. Thanks for the help.
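
    Not from the thread, but a workaround worth trying under one assumption: if the error comes from pandas attempting to reflect the query text as a table name, pandas.read_sql_query skips that reflection path and simply executes the SQL.

    # Possible workaround (assumption, not from the thread): read_sql_query never
    # tries to reflect the string as a table, it just runs the query.
    import pandas as pd

    df = pd.read_sql_query("select * from my_table limit 100", engine)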

    opened by colineRamee 1
  • Connection value examples

    Why?

    It is never easy to figure out exactly what value the conn string wants and in what format. Examples help. Now we have examples.

    What Changes?

    .env.template now has examples and links to detailed databricks docs so new users can quickly get conn strings right :)

    How Does This Impact Us?

    One less silly thing to slow us down.

    opened by norton120 1
  • Example .env values?

    This is always a weirdly painful thing with Sqlalchemy dialects and seems like such a simple thing to make easier. I have yet to find 2 dialects that expect params like host/cluster/role/compute_warehouse in exactly the same format. If you were to provide dummy values of each in the .env file it would probably save millions of quietly lost person-hours trying to decipher things like "does this dialect want the host escaped? where does it want the account id, as a get value in the string or a conn param? what have I done with my life that I am re-running tests at 3am because I don't know if this conn string wants protocol prefixes?!?!?" and such.

    Also, once/if I figure it out I'm happy to make a PR with the examples.

    opened by norton120 1
  • Update sample environment with example formats

    Description

    Updates the sample environment to show what format the host, token, and http path conventionally take. These are taken directly from the official Python connector docs. The change to the readme also reflects the new name SQL Warehouses (which replaced SQL Endpoints in June 2022).
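
    For illustration only, here are made-up placeholder values in the conventional formats (not taken from the PR or from any real workspace):

    # Illustrative placeholders in the documented formats; every value is fake.
    DATABRICKS_HOST = "dbc-a1b2c3d4-e5f6.cloud.databricks.com"    # hostname only, no https://
    DATABRICKS_TOKEN = "dapi0123456789abcdef0123456789abcdef"     # personal access token
    DATABRICKS_HTTP_PATH = "/sql/1.0/warehouses/0123456789abcdef" # SQL warehouse http_path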

    Related Tickets & Documents

    Closes #9

    opened by susodapop 0
  • Either set supports_statement_cache to True or False

    Hi,

    Many thanks for providing this dialect!

    I'm getting this warning when running queries:

    SAWarning: Dialect databricks:connector will not make use of SQL compilation caching as it does not set the 'supports_statement_cache' attribute to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Dialect maintainers should seek to set this attribute to True after appropriate development and testing for SQLAlchemy 1.4 caching support. Alternatively, this attribute may be set to False which will disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
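
    For reference, the attribute the warning refers to is a class-level flag on the dialect. A rough sketch of the kind of change, with the import path assumed and the flag only safe to enable after testing caching behaviour:

    # Hypothetical sketch: opt the dialect into SQLAlchemy 1.4 compiled-statement
    # caching. The import path is an assumption about this package's layout.
    from sqlalchemy_databricks import DatabricksDialect  # assumed import

    class PatchedDatabricksDialect(DatabricksDialect):
        supports_statement_cache = True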

    opened by ebarault 0
  • [Question] passing http_path as query parameter

    Hi @crflynn ,

    Thanks for this wonderful tool.

    I was wondering if it is possible to pass the http_path as a query parameter, e.g.:

    SQLALCHEMY_DATABASE_URL="databricks+connector://token:<databricks_token>@<databricks_host>:443/<database_or_schema_name>?http_path=<MY_HTTP_PATH>"
    

    This is because I would like to standardize my application, so if in the future I want to use another database I just have to change the URL. The problem I have at the moment is that I have to define connect_args in the create_engine function for Databricks, but for SQLite or MySQL I don't need that parameter, and it raises this error:

    TypeError: 'http_path' is an invalid keyword argument for this function

    This is when I run:

    engine = create_engine(
            settings.SQLALCHEMY_DATABASE_URL,
            echo=False,
            connect_args={"check_same_thread": False, "http_path": settings.DATABRICKS_HTTP_PATH},
        )
    

    and SQLALCHEMY_DATABASE_URL=sqlite:///./sql_app.db
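
    This dialect does not advertise http_path as a URL query parameter, but one way to keep a single URL setting is to parse the query string yourself and only pass connect_args when they apply (a sketch assuming SQLAlchemy 1.4's make_url; settings is the object from the snippet above):

    # Hypothetical sketch: move http_path from the URL query string into
    # connect_args for Databricks, and keep sqlite-specific args separate.
    from sqlalchemy import create_engine
    from sqlalchemy.engine import make_url

    url = make_url(settings.SQLALCHEMY_DATABASE_URL)
    connect_args = {}
    if url.drivername.startswith("databricks"):
        query = dict(url.query)
        connect_args["http_path"] = query.pop("http_path")
        url = url.set(query=query)
    elif url.drivername.startswith("sqlite"):
        connect_args["check_same_thread"] = False

    engine = create_engine(url, echo=False, connect_args=connect_args)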

    opened by dtcMLOps 0
  • add custom type compiler, fix data type DATE

    PyHive maps the data type DATE to the old data type TIMESTAMP, which is a quite dated way of mapping date types. Since Databricks launched after Hive 0.12, I would strongly recommend fixing the data type mapping to DATE in the SQLAlchemy implementation for Databricks.

    See also: https://github.com/dropbox/PyHive/issues/139
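
    Roughly the kind of change the title describes, sketched as an assumption (the base compiler class and the dialect import path may differ from the actual PR):

    # Hypothetical sketch: emit DATE instead of PyHive's TIMESTAMP mapping by
    # overriding the type compiler and attaching it to the Databricks dialect.
    from pyhive.sqlalchemy_hive import HiveTypeCompiler  # assumed base class
    from sqlalchemy_databricks import DatabricksDialect  # assumed import

    class DatabricksTypeCompiler(HiveTypeCompiler):
        def visit_DATE(self, type_, **kw):
            return "DATE"

    DatabricksDialect.type_compiler = DatabricksTypeCompiler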

    opened by leo-schick 0
  • How to run queries in a non-blocking way?

    Hi there! Thanks for the contribution, the project is awesome and saved me countless hours of coding!

    I have a stupid question: is there an example of how to run queries in a non-blocking way, serving data out of a FastAPI async handler? Is that supported?

    If so, how would one do that, using something like this: https://docs.sqlalchemy.org/en/14/orm/extensions/asyncio.html ?
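
    The dialect is synchronous (it sits on the databricks-sql-connector DBAPI), so SQLAlchemy's asyncio extension will not drive it directly. A common pattern, sketched here under that assumption, is to push the blocking call onto a worker thread from the async handler:

    # Hypothetical sketch: run the blocking SQLAlchemy query in a thread so the
    # FastAPI event loop stays free (asyncio.to_thread needs Python 3.9+).
    import asyncio
    from fastapi import FastAPI
    from sqlalchemy import text

    app = FastAPI()

    def fetch_rows():
        with engine.connect() as conn:  # engine built as in the README example
            result = conn.execute(text("select * from my_table limit 10"))
            return [dict(row._mapping) for row in result]

    @app.get("/rows")
    async def rows():
        return await asyncio.to_thread(fetch_rows)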

    opened by thiagodelgado111 0
  • pandas to sql fails to create table if it does not exist

    When using the pandas to_sql function, it appears that if the table does not already exist it won't be created. The expected behaviour is that the table should be created and the results inserted.

    Is there a known work around or is this going to be fixed soon?
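
    For reference, a sketch of the call being described (names made up), which is expected to create the table when it is missing and then insert the rows:

    # Sketch of the reported usage (names assumed): to_sql should create
    # my_new_table if it does not exist and append the dataframe's rows.
    import pandas as pd

    df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
    df.to_sql("my_new_table", engine, if_exists="append", index=False)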

    opened by timpiperseek 1
  • Inconsistent schema name handling

    My table in Databricks is in a database, then a schema, so I reference it like database.schema.tablename. When using the to_sql method to insert from a pandas dataframe, SQLAlchemy issues a DESCRIBE database.schema.tablename command before trying to insert the data into the table. To get the DESCRIBE command to work properly I have to set the 'schema' parameter on the to_sql method to 'database.schema'. However, when I do this, the INSERT statement looks like INSERT INTO TABLE `database.schema`.tablename. I think the backticks around database.schema make Databricks SQL look for a single schema or database named database.schema, instead of treating those as two separate things. Is this something that needs to be addressed here, or is this a deeper SQLAlchemy issue? Thanks for your help!

    opened by clayton-bridge 0
  • Failure to connect

    I have Azure Databricks running in two regions (US and UK). I have been using the databricks-sql-connector successfully to connect to Hive Metastore tables in each for some months. Currently the clusters are not running all the time, but if I use the Databricks connector it first spins up the appropriate cluster. I would love to be able to use SQLAlchemy instead, however. I tried your test setup in a Jupyter notebook, but get an OperationalError.

    import numpy as np
    import pandas as pd
    import os
    from sqlalchemy import *
    from sqlalchemy.engine import create_engine

    def test_dialect(host, http_path, token):
        engine = create_engine(
            f"databricks+connector://token:{token}@{host}:443/default",
            connect_args={"http_path": f"{http_path}"},
        )
        tables = inspect(engine).get_table_names()
        print(tables)

    region = "UK"
    server_hostname = os.getenv(f'DATABRICKS_{region}_HOSTNAME'),
    http_path = os.getenv(f'DATABRICKS_{region}_HTTP'),
    access_token = os.getenv(f'DATABRICKS_{region}_TOKEN')
    test_dialect(server_hostname, http_path, access_token)

    gives: OperationalError: (databricks.sql.exc.RequestError) Error during request to server (Background on this error at: https://sqlalche.me/e/14/e3q8)

    I do notice that this relies on PyHive, which is marked as unsupported.

    opened by alunap 0
Owner
Flynn
A Python Object-Document-Mapper for working with MongoDB

MongoEngine Info: MongoEngine is an ORM-like layer on top of PyMongo. Repository: https://github.com/MongoEngine/mongoengine Author: Harry Marr (http:

MongoEngine 3.9k Dec 30, 2022
The Orator ORM provides a simple yet beautiful ActiveRecord implementation.

Orator The Orator ORM provides a simple yet beautiful ActiveRecord implementation. It is inspired by the database part of the Laravel framework, but l

Sébastien Eustace 1.4k Jan 01, 2023
A pythonic interface to Amazon's DynamoDB

PynamoDB A Pythonic interface for Amazon's DynamoDB. DynamoDB is a great NoSQL service provided by Amazon, but the API is verbose. PynamoDB presents y

2.1k Dec 30, 2022
An async ORM. 🗃

ORM The orm package is an async ORM for Python, with support for Postgres, MySQL, and SQLite. ORM is built with: SQLAlchemy core for query building. d

Encode 1.7k Dec 28, 2022
A very simple CRUD class for SQLModel! ✨

Base SQLModel A very simple CRUD class for SQLModel! ✨ Inspired on: Full Stack FastAPI and PostgreSQL - Base Project Generator FastAPI Microservices I

Marcelo Trylesinski 40 Dec 14, 2022
SQLModel is a library for interacting with SQL databases from Python code, with Python objects.

SQLModel is a library for interacting with SQL databases from Python code, with Python objects. It is designed to be intuitive, easy to use, highly compatible, and robust.

Sebastián Ramírez 9.1k Dec 31, 2022
A database migrations tool for TortoiseORM, ready to production.

Aerich Introduction Aerich is a database migrations tool for Tortoise-ORM, which is like alembic for SQLAlchemy, or like Django ORM with it's own migr

Tortoise 596 Jan 06, 2023
SQLAlchemy support for aiohttp.

aiohttp-sqlalchemy SQLAlchemy 1.4 / 2.0 support for AIOHTTP. The library provides the next features: initializing asynchronous sessions through a midd

Ruslan Ilyasovich Gilfanov 5 Dec 11, 2022
Sqlalchemy-databricks - SQLAlchemy dialect for Databricks

sqlalchemy-databricks A SQLAlchemy Dialect for Databricks using the officially s

Flynn 19 Nov 03, 2022
A single model for shaping, creating, accessing, storing data within a Database

'db' within pydantic - A single model for shaping, creating, accessing, storing data within a Database Key Features Integrated Redis Caching Support A

Joshua Jamison 178 Dec 16, 2022
A pure Python Database Abstraction Layer

pyDAL pyDAL is a pure Python Database Abstraction Layer. It dynamically generates the SQL/noSQL in realtime using the specified dialect for the databa

440 Nov 13, 2022
Adds SQLAlchemy support to Flask

Flask-SQLAlchemy Flask-SQLAlchemy is an extension for Flask that adds support for SQLAlchemy to your application. It aims to simplify using SQLAlchemy

The Pallets Projects 3.9k Jan 09, 2023
Pony Object Relational Mapper

Downloads Pony Object-Relational Mapper Pony is an advanced object-relational mapper. The most interesting feature of Pony is its ability to write que

3.1k Jan 01, 2023
Prisma Client Python is an auto-generated and fully type-safe database client

Prisma Client Python is an unofficial implementation of Prisma which is a next-generation ORM that comes bundled with tools, such as Prisma Migrate, which make working with databases as easy as possi

Robert Craigie 930 Jan 08, 2023
Rich Python data types for Redis

Created by Stephen McDonald Introduction HOT Redis is a wrapper library for the redis-py client. Rather than calling the Redis commands directly from

Stephen McDonald 281 Nov 10, 2022
Easy-to-use data handling for SQL data stores with support for implicit table creation, bulk loading, and transactions.

dataset: databases for lazy people In short, dataset makes reading and writing data in databases as simple as reading and writing JSON files. Read the

Friedrich Lindenberg 4.2k Dec 26, 2022
The Python SQL Toolkit and Object Relational Mapper

SQLAlchemy The Python SQL Toolkit and Object Relational Mapper Introduction SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that giv

mike bayer 3.5k Dec 29, 2022
Piccolo - A fast, user friendly ORM and query builder which supports asyncio.

A fast, user friendly ORM and query builder which supports asyncio.

919 Jan 04, 2023
A curated list of awesome tools for SQLAlchemy

Awesome SQLAlchemy A curated list of awesome extra libraries and resources for SQLAlchemy. Inspired by awesome-python. (See also other awesome lists!)

Hong Minhee (洪 民憙) 2.5k Dec 31, 2022
Python helpers for using SQLAlchemy with Tornado.

tornado-sqlalchemy Python helpers for using SQLAlchemy with Tornado. Installation $ pip install tornado-sqlalchemy In case you prefer installing from

Siddhant Goel 122 Aug 23, 2022