public class RollbackStagedTable
extends java.lang.Object
implements org.apache.spark.sql.connector.catalog.StagedTable, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, org.apache.spark.sql.connector.catalog.SupportsDelete
A Spark catalog can implement StagingTableCatalog to support atomic operations by producing
StagedTable. But if a catalog implements StagingTableCatalog, Spark expects the catalog to be
able to produce a StagedTable for any table loaded by the catalog. This assumption doesn't always
hold, as in the case of SparkSessionCatalog, which supports atomic operations and can produce
a StagedTable for Iceberg tables, but wraps the session catalog and cannot necessarily produce a
working StagedTable implementation for the tables that it loads.
The work-around is this class, which implements the StagedTable interface but does not have atomic behavior. Instead, the StagedTable interface is used to implement the behavior of non-atomic SQL plans that create a table, write to it, and drop the table to roll back.
This StagedTable implements SupportsRead, SupportsWrite, and SupportsDelete by passing the
calls to the real table. Implementing those interfaces is safe because Spark will not use them
unless the table supports them and returns the corresponding capabilities from capabilities().
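The pattern described above can be pictured with a short sketch. This is an illustrative re-implementation under stated assumptions, not the actual Iceberg source: the method bodies, in particular the dropTable call in abortStagedChanges(), are inferred from the description of committing as a no-op and rolling back by dropping the table, and all other calls simply delegate to the wrapped table.

```java
// Hypothetical sketch of the delegation pattern described above; the real
// RollbackStagedTable may differ in details.
import java.util.Map;
import java.util.Set;

import org.apache.spark.sql.connector.catalog.Identifier;
import org.apache.spark.sql.connector.catalog.StagedTable;
import org.apache.spark.sql.connector.catalog.SupportsDelete;
import org.apache.spark.sql.connector.catalog.SupportsRead;
import org.apache.spark.sql.connector.catalog.SupportsWrite;
import org.apache.spark.sql.connector.catalog.Table;
import org.apache.spark.sql.connector.catalog.TableCapability;
import org.apache.spark.sql.connector.catalog.TableCatalog;
import org.apache.spark.sql.connector.expressions.Transform;
import org.apache.spark.sql.connector.read.ScanBuilder;
import org.apache.spark.sql.connector.write.LogicalWriteInfo;
import org.apache.spark.sql.connector.write.WriteBuilder;
import org.apache.spark.sql.sources.Filter;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

class RollbackStagedTableSketch
    implements StagedTable, SupportsRead, SupportsWrite, SupportsDelete {
  private final TableCatalog catalog;
  private final Identifier ident;
  private final Table table;

  RollbackStagedTableSketch(TableCatalog catalog, Identifier ident, Table table) {
    this.catalog = catalog;
    this.ident = ident;
    this.table = table;
  }

  @Override
  public void commitStagedChanges() {
    // no-op: the table was already created and written non-atomically
  }

  @Override
  public void abortStagedChanges() {
    // roll back by dropping the table that the non-atomic plan created
    catalog.dropTable(ident);
  }

  // metadata and capability calls are forwarded to the real table
  @Override
  public String name() { return table.name(); }

  @Override
  public StructType schema() { return table.schema(); }

  @Override
  public Transform[] partitioning() { return table.partitioning(); }

  @Override
  public Map<String, String> properties() { return table.properties(); }

  @Override
  public Set<TableCapability> capabilities() { return table.capabilities(); }

  // read, write, and delete calls are forwarded as well; Spark only invokes
  // these when the corresponding capabilities are reported by capabilities()
  @Override
  public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
    return ((SupportsRead) table).newScanBuilder(options);
  }

  @Override
  public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
    return ((SupportsWrite) table).newWriteBuilder(info);
  }

  @Override
  public void deleteWhere(Filter[] filters) {
    ((SupportsDelete) table).deleteWhere(filters);
  }
}
```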
| Constructor and Description |
|---|
| RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog, org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table) |
| Modifier and Type | Method and Description |
|---|---|
| void | abortStagedChanges() |
| java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> | capabilities() |
| void | commitStagedChanges() |
| void | deleteWhere(org.apache.spark.sql.sources.Filter[] filters) |
| java.lang.String | name() |
| org.apache.spark.sql.connector.read.ScanBuilder | newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options) |
| org.apache.spark.sql.connector.write.WriteBuilder | newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info) |
| org.apache.spark.sql.connector.expressions.Transform[] | partitioning() |
| java.util.Map<java.lang.String,java.lang.String> | properties() |
| org.apache.spark.sql.types.StructType | schema() |
public RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog,
org.apache.spark.sql.connector.catalog.Identifier ident,
org.apache.spark.sql.connector.catalog.Table table)
public void commitStagedChanges()
Specified by: commitStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable

public void abortStagedChanges()
Specified by: abortStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable

public java.lang.String name()
Specified by: name in interface org.apache.spark.sql.connector.catalog.Table

public org.apache.spark.sql.types.StructType schema()
Specified by: schema in interface org.apache.spark.sql.connector.catalog.Table

public org.apache.spark.sql.connector.expressions.Transform[] partitioning()
Specified by: partitioning in interface org.apache.spark.sql.connector.catalog.Table

public java.util.Map<java.lang.String,java.lang.String> properties()
Specified by: properties in interface org.apache.spark.sql.connector.catalog.Table

public java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
Specified by: capabilities in interface org.apache.spark.sql.connector.catalog.Table

public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
Specified by: deleteWhere in interface org.apache.spark.sql.connector.catalog.SupportsDelete

public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
Specified by: newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead

public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
Specified by: newWriteBuilder in interface org.apache.spark.sql.connector.catalog.SupportsWrite
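For context, a staging catalog that cannot stage a given table atomically could fall back to this wrapper. The helper below is hypothetical (the class and method name are illustrative and not part of Iceberg or Spark); it only shows that the table is created eagerly through the delegate catalog and then wrapped so that abortStagedChanges() can drop it on failure.

```java
// Hypothetical fallback for a non-atomic stageCreate path; only the Spark
// connector API types are real, the surrounding class is illustrative.
import java.util.Map;

import org.apache.spark.sql.connector.catalog.Identifier;
import org.apache.spark.sql.connector.catalog.StagedTable;
import org.apache.spark.sql.connector.catalog.Table;
import org.apache.spark.sql.connector.catalog.TableCatalog;
import org.apache.spark.sql.connector.expressions.Transform;
import org.apache.spark.sql.types.StructType;

class NonAtomicStageCreateExample {
  StagedTable stageCreateFallback(
      TableCatalog sessionCatalog,
      Identifier ident,
      StructType schema,
      Transform[] partitions,
      Map<String, String> properties) throws Exception {
    // create the table immediately (non-atomic), then wrap it so that a
    // failed plan can roll back by dropping it in abortStagedChanges()
    Table created = sessionCatalog.createTable(ident, schema, partitions, properties);
    return new RollbackStagedTable(sessionCatalog, ident, created);
  }
}
```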