Rust PostgreSQL foreign data wrapper for interfacing with Google Cloud Bigtable, as well as other API-compatible databases (HBase should work with some effort). While the logic is written in Rust, it plugs into PostgreSQL's C FDW callbacks.
Supported operations:

- SELECT
- SELECT LIMIT
- SELECT OFFSET
- SELECT WHERE
- INSERT
- UPDATE
- DELETE

Planned:

- IMPORT FOREIGN SCHEMA
- Abandon the funny INSERT format
- Support for PG 9.3+
- Useful EXPLAIN
- Reduce C boilerplate
Requirements:

- PostgreSQL 9.6+
- Stable Rust 1.15+, get it using rustup
Installation:

```shell
git clone https://github.com/durch/google-bigtable-postgres-fdw.git
cd google-bigtable-postgres-fdw
make install
psql -U postgres
```

Initial setup:

```sql
CREATE EXTENSION bigtable;
CREATE SERVER test FOREIGN DATA WRAPPER bigtable OPTIONS (instance 'instance_id', project 'project_id');
CREATE FOREIGN TABLE test(bt json) SERVER test OPTIONS (name 'table_name');
CREATE USER MAPPING FOR postgres SERVER test OPTIONS (credentials_path 'path_to_service_account_json_credentials');
```

You can use `gen.py` to generate some test data. Modify `gen.py` to adjust the number of generated records, and also change the `column` key in the generated output, as it needs to be a column family that exists in your Bigtable. Running `python gen.py` outputs `test.sql`, which can be fed into PG. WHERE is evaluated on the PG side, so be sure to grab what you need from BT.
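A data generator along these lines produces a `test.sql` file of INSERT statements; this is a minimal sketch of the idea, not the actual contents of `gen.py`, and the record count, column family, and row key prefix used here are illustrative stand-ins:

```python
import json

NUM_RECORDS = 10       # adjust for the number of generated records
COLUMN_FAMILY = "cf1"  # must be a column family that exists in your Bigtable

def make_insert(i):
    # Build the JSON payload the FDW expects for INSERT,
    # wrapped in a plain SQL INSERT statement.
    payload = {
        "row_key": "test_row_{}".format(i),
        "column": COLUMN_FAMILY,
        "column_qualifier": "test",
        "data": [{"value": i}],
    }
    return "INSERT INTO test VALUES ('{}');".format(json.dumps(payload))

def write_sql(path="test.sql"):
    with open(path, "w") as f:
        for i in range(NUM_RECORDS):
            f.write(make_insert(i) + "\n")

if __name__ == "__main__":
    write_sql()
```

The resulting file can then be piped into `psql` as shown below.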
```shell
psql -U postgres < test.sql
```

One Bigtable row is returned per PG row, and LIMIT is applied on the BT side. Rows are returned as `json` and can be further manipulated using Postgres JSON functions and operators.
```sql
SELECT * FROM test;
SELECT * FROM test LIMIT 100;
SELECT bt->'familyName', bt->'qualifier' FROM test WHERE bt->>'rowKey' ~* '.*regex.*';
SELECT bt->'familyName', bt->'qualifier' FROM test WHERE bt->>'rowKey' = 'exact';
```

The INSERT format is a bit weird at the moment:
```json
{
  "row_key": string,
  "column": string,
  "column_qualifier": string,
  "data": [
    json
  ]
}
```

Currently `row_key` is treated as a prefix and concatenated with a loop counter; while this covers a few use cases, it is not really ideal for Bigtable, and will likely be extended to allow passing a `row_key` array. As you are passing in one JSON object which gets expanded, the INSERT counter always shows one row inserted; the truth can be found in the PG logs.
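The prefix-plus-counter expansion can be sketched as follows. This is an illustration of the behavior described above, not the FDW's actual Rust code, and the function and field names are hypothetical:

```python
import json

def expand_insert(payload):
    """Mimic the described expansion: one JSON payload becomes one
    Bigtable row per element of `data`, with row_key used as a
    prefix concatenated with a loop counter."""
    doc = json.loads(payload)
    rows = []
    for i, value in enumerate(doc["data"]):
        rows.append({
            "row_key": "{}{}".format(doc["row_key"], i),  # prefix + counter
            "family": doc["column"],
            "qualifier": doc["column_qualifier"],
            "value": value,
        })
    return rows

payload = json.dumps({
    "row_key": "user_",
    "column": "cf1",
    "column_qualifier": "profile",
    "data": [{"name": "a"}, {"name": "b"}],
})
# A single PG INSERT of this payload would yield two Bigtable rows,
# with row keys "user_0" and "user_1".
rows = expand_insert(payload)
```

This also shows why the PG INSERT counter reports one row: PostgreSQL sees a single inserted tuple, while the expansion into multiple Bigtable mutations happens inside the wrapper.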