odbc#

predict_backend.persistence.odbc.iterable_to_odbc(_iterable, table, conn_id, store_interface, column_names=None, max_chunk_size=1000, **kwargs)#

Persist any iterable object to a database table. This is an all-or-nothing operation: if any part of the insert fails, the entire operation is rolled back.

Parameters:
  • _iterable (Union[Iterable[tuple], Iterable[list]]) – An iterable of tuples/lists, all of the same length, for DBAPI compliance. Rows are supplied to the DBAPI insert statement in batches of max_chunk_size.

  • table (str) – Destination table.

  • conn_id (str) – Connection id that references a specific ODBC-compliant data store.

  • store_interface – A store interface object from the flow step in which this utility is called.

  • column_names (list) – Optional list of column names (must match the length and order of the values supplied).

  • max_chunk_size (int) – Batch size for insert statements, defaults to 1000.

  • kwargs – Additional keyword arguments.

Return type:

bool

Returns:

True if the insert succeeded.
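The batching behavior described for max_chunk_size can be sketched with a minimal chunking helper. This is an illustration of how rows might be grouped before each insert, not the library's actual implementation; the helper name and row shapes are hypothetical:

```python
from itertools import islice


def _chunks(rows, max_chunk_size=1000):
    """Yield successive batches of at most max_chunk_size rows (illustrative)."""
    it = iter(rows)
    while True:
        batch = list(islice(it, max_chunk_size))
        if not batch:
            return
        yield batch


# 2500 uniform-length rows, as iterable_to_odbc expects,
# split into batches of 1000, 1000, and 500.
rows = [(i, f"name_{i}") for i in range(2500)]
batches = list(_chunks(rows, max_chunk_size=1000))
```

Each batch would then be handed to the driver's insert statement; with the all-or-nothing guarantee, a failure in any batch rolls back the whole operation.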

predict_backend.persistence.odbc.odbc_to_iterable(query, conn_id, store_interface)#

Create a data store connection, execute the read-only query, and return the result set as a generator.

Parameters:
  • query (str) – The read-only query to execute.

  • conn_id (str) – The connection id / connection name.

  • store_interface – A store interface object from the flow step in which this utility is called.

Returns:

A generator that yields the rows of the result set.
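Because the result set comes back as a generator, rows can be consumed lazily without materializing the whole result in memory. A hedged illustration using a stand-in generator (in real use, the rows would come from `odbc_to_iterable(query, conn_id, store_interface)`; the function and row values below are invented for the example):

```python
def fake_result_set():
    """Stand-in for the generator returned by odbc_to_iterable."""
    for i in range(3):
        yield (i, f"row_{i}")


# Rows are produced one at a time as the loop advances,
# so large result sets never need to fit in memory at once.
labels = [label for _row_id, label in fake_result_set()]
```

This streaming shape pairs naturally with `iterable_to_odbc`, which accepts any iterable of uniform-length rows.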

predict_backend.persistence.odbc.odbc_to_pandas(query, conn_id, store_interface, http_path=None)#

Query the config store for connection parameters, create a connection pool, execute the SQL query against the specified data store, read the result set into a pandas DataFrame, and close the connections/pool.

Parameters:
  • query (str) – A valid SQL-like query for the given data store being queried.

  • conn_id (str) – Connection id that references a specific ODBC-compliant data store.

  • store_interface – A store interface object from the flow step in which this utility is called.

  • http_path (str) – Databricks cluster http path, overrides default.

Return type:

DataFrame

Returns:

A pandas DataFrame equivalent to the result set of the supplied query, with dtypes inferred from the underlying schema (where applicable) or from the underlying data.
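The dtype inference described above mirrors how pandas builds a frame from raw rows. A small illustration (the rows and column names are invented; in real use the frame would come from `odbc_to_pandas(query, conn_id, store_interface)`):

```python
import pandas as pd

# Rows as they might arrive from an ODBC cursor.
rows = [(1, "alice", 9.5), (2, "bob", 7.25)]
df = pd.DataFrame(rows, columns=["id", "name", "score"])

# pandas infers int64 for "id", object for "name",
# and float64 for "score" from the underlying data.
```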

predict_backend.persistence.odbc.pandas_to_odbc(_df, table, conn_id, store_interface, if_exists='fail', http_path=None)#

Safely persist the DataFrame to the data store referenced by conn_id.

Parameters:
  • _df (DataFrame) – Pandas dataframe to be written.

  • table (str) – Name of the destination table / collection / index (Elasticsearch).

  • conn_id (str) – Connection id that references a specific ODBC-compliant data store.

  • store_interface – A store interface object from the flow step in which this utility is called.

  • if_exists (str) – One of 'fail', 'replace', or 'append'; defaults to 'fail'.

  • http_path (str) – Databricks cluster http path, overrides default.

Return type:

bool

Returns:

True if the write succeeded.
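The if_exists modes follow the usual pandas-style semantics: 'fail' raises if the table already exists, 'replace' drops and recreates it, and 'append' inserts into the existing table. A minimal in-memory sketch of that dispatch, using a dict-backed stand-in store (illustrative only, not the library's implementation):

```python
def resolve_if_exists(store, table, new_rows, if_exists="fail"):
    """Apply pandas-style if_exists semantics to a dict-backed stand-in store."""
    if if_exists not in ("fail", "replace", "append"):
        raise ValueError(f"invalid if_exists: {if_exists!r}")
    if table in store:
        if if_exists == "fail":
            raise ValueError(f"table {table!r} already exists")
        if if_exists == "replace":
            # Drop the existing table and write fresh rows.
            store[table] = list(new_rows)
        else:
            # Append to the existing rows.
            store[table] = store[table] + list(new_rows)
    else:
        # Table does not exist yet: create it under every mode.
        store[table] = list(new_rows)
    return store
```

The default of 'fail' is the safest mode, since it can never destroy or silently grow an existing table.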