Class RollbackStagedTable
- All Implemented Interfaces:
org.apache.spark.sql.connector.catalog.StagedTable, org.apache.spark.sql.connector.catalog.SupportsDelete, org.apache.spark.sql.connector.catalog.SupportsDeleteV2, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, org.apache.spark.sql.connector.catalog.Table, org.apache.spark.sql.connector.catalog.TruncatableTable
A Spark catalog can implement StagingTableCatalog to support atomic operations by producing
StagedTable. But if a catalog implements StagingTableCatalog, Spark expects the catalog to be
able to produce a StagedTable for any table loaded by the catalog. This assumption doesn't always
hold, as in the case of SparkSessionCatalog, which supports atomic operations and can produce
a StagedTable for Iceberg tables, but wraps the session catalog and cannot necessarily produce a
working StagedTable implementation for other tables that it loads.
The work-around is this class, which implements the StagedTable interface but does not have atomic behavior. Instead, the StagedTable interface is used to implement the behavior of non-atomic SQL plans that create a table, write to it, and drop the table to roll back on failure.
This StagedTable implements SupportsRead, SupportsWrite, and SupportsDelete by passing the
calls to the real table. Implementing those interfaces is safe because Spark will not use them
unless the table supports them and returns the corresponding capabilities from capabilities().
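To make the pattern concrete, here is a minimal sketch of the idea described above. This is an illustration, not Iceberg's actual source: the class and field names are chosen for clarity. Commit is a no-op because the wrapped table was created eagerly, abort rolls back by dropping the table through the catalog, and read, write, and delete calls are forwarded to the real table.

import java.util.Map;
import java.util.Set;
import org.apache.spark.sql.connector.catalog.Identifier;
import org.apache.spark.sql.connector.catalog.StagedTable;
import org.apache.spark.sql.connector.catalog.SupportsDelete;
import org.apache.spark.sql.connector.catalog.SupportsRead;
import org.apache.spark.sql.connector.catalog.SupportsWrite;
import org.apache.spark.sql.connector.catalog.Table;
import org.apache.spark.sql.connector.catalog.TableCapability;
import org.apache.spark.sql.connector.catalog.TableCatalog;
import org.apache.spark.sql.connector.read.ScanBuilder;
import org.apache.spark.sql.connector.write.LogicalWriteInfo;
import org.apache.spark.sql.connector.write.WriteBuilder;
import org.apache.spark.sql.sources.Filter;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

// Illustrative only: a StagedTable with no atomic behavior.
class DropOnAbortTable implements StagedTable, SupportsRead, SupportsWrite, SupportsDelete {
  private final TableCatalog catalog;
  private final Identifier ident;
  private final Table table;

  DropOnAbortTable(TableCatalog catalog, Identifier ident, Table table) {
    this.catalog = catalog;
    this.ident = ident;
    this.table = table;
  }

  @Override
  public void commitStagedChanges() {
    // no-op: the table was created eagerly, so there is nothing to commit
  }

  @Override
  public void abortStagedChanges() {
    // roll back the non-atomic plan by dropping the table that was created
    catalog.dropTable(ident);
  }

  // Table methods delegate to the real table
  @Override
  public String name() {
    return table.name();
  }

  @Override
  public StructType schema() {
    return table.schema();
  }

  @Override
  public Map<String, String> properties() {
    return table.properties();
  }

  @Override
  public Set<TableCapability> capabilities() {
    return table.capabilities();
  }

  // Safe to forward: Spark only calls these when the wrapped table reports
  // the matching capabilities from capabilities()
  @Override
  public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
    return ((SupportsRead) table).newScanBuilder(options);
  }

  @Override
  public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
    return ((SupportsWrite) table).newWriteBuilder(info);
  }

  @Override
  public void deleteWhere(Filter[] filters) {
    ((SupportsDelete) table).deleteWhere(filters);
  }
}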
-
Constructor Summary
Constructors
Constructor Description
RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog, org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
Method Summary
Modifier and Type  Method
void  abortStagedChanges()
java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability>  capabilities()
void  commitStagedChanges()
void  deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
String  name()
org.apache.spark.sql.connector.read.ScanBuilder  newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.connector.write.WriteBuilder  newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
org.apache.spark.sql.connector.expressions.Transform[]  partitioning()
java.util.Map<String,String>  properties()
org.apache.spark.sql.types.StructType  schema()

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.sql.connector.catalog.StagedTable
reportDriverMetrics

Methods inherited from interface org.apache.spark.sql.connector.catalog.SupportsDelete
canDeleteWhere, canDeleteWhere, deleteWhere, truncateTable

Methods inherited from interface org.apache.spark.sql.connector.catalog.Table
columns
-
Constructor Details
-
RollbackStagedTable
public RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog, org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
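As a usage illustration under the same assumptions (sessionCatalog and the create arguments are hypothetical names; this is not SparkSessionCatalog's actual source), a staging catalog could create the table eagerly and then wrap it so that an aborted plan drops it:

// hypothetical stageCreate-style flow; names are illustrative
org.apache.spark.sql.connector.catalog.Table created =
    sessionCatalog.createTable(ident, schema, partitions, properties);
org.apache.spark.sql.connector.catalog.StagedTable staged =
    new RollbackStagedTable(sessionCatalog, ident, created);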
-
-
Method Details
-
commitStagedChanges
public void commitStagedChanges()
- Specified by:
commitStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable
-
abortStagedChanges
public void abortStagedChanges()
- Specified by:
abortStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable
-
name
public String name()
- Specified by:
name in interface org.apache.spark.sql.connector.catalog.Table
-
schema
public org.apache.spark.sql.types.StructType schema()
- Specified by:
schema in interface org.apache.spark.sql.connector.catalog.Table
-
partitioning
public org.apache.spark.sql.connector.expressions.Transform[] partitioning()
- Specified by:
partitioning in interface org.apache.spark.sql.connector.catalog.Table
-
properties
public java.util.Map<String,String> properties()
- Specified by:
properties in interface org.apache.spark.sql.connector.catalog.Table
-
capabilities
public java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
- Specified by:
capabilities in interface org.apache.spark.sql.connector.catalog.Table
-
deleteWhere
public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
- Specified by:
deleteWhere in interface org.apache.spark.sql.connector.catalog.SupportsDelete
-
newScanBuilder
public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by:
newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead
-
newWriteBuilder
public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
- Specified by:
newWriteBuilder in interface org.apache.spark.sql.connector.catalog.SupportsWrite
-