Class RollbackStagedTable

All Implemented Interfaces:
org.apache.spark.sql.connector.catalog.StagedTable,
org.apache.spark.sql.connector.catalog.SupportsDelete,
org.apache.spark.sql.connector.catalog.SupportsDeleteV2,
org.apache.spark.sql.connector.catalog.SupportsRead,
org.apache.spark.sql.connector.catalog.SupportsWrite,
org.apache.spark.sql.connector.catalog.Table,
org.apache.spark.sql.connector.catalog.TruncatableTable
A Spark catalog can implement StagingTableCatalog to support atomic operations by producing a
StagedTable. But if a catalog implements StagingTableCatalog, Spark expects the catalog to be
able to produce a StagedTable for any table loaded by the catalog. This assumption does not always
hold, as in the case of SparkSessionCatalog, which, to support atomic operations, can produce a
StagedTable for Iceberg tables, but wraps the session catalog and cannot necessarily produce a
working StagedTable implementation for the other tables that it loads.
The work-around is this class, which implements the StagedTable interface but does not have atomic behavior. Instead, the StagedTable interface is used to implement the behavior of the non-atomic SQL plans: create a table, write to it, and drop the table to roll back.
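The create-write-drop flow above can be sketched as follows. This is a hypothetical, self-contained illustration: the map-backed catalog and the method names stand in for Spark's TableCatalog API, and this is not the actual Iceberg implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the non-atomic rollback flow: create the table up
// front, write into the live table, and drop the table again if the write
// fails. The static map stands in for a real catalog.
public class NonAtomicCreateWrite {
    // Stand-in catalog: table name -> table contents.
    static final Map<String, String> catalog = new HashMap<>();

    static void createWriteOrRollback(String name, String data, boolean writeFails) {
        catalog.put(name, "");              // create the table first (stageCreate)
        try {
            if (writeFails) {
                throw new RuntimeException("write failed");
            }
            catalog.put(name, data);        // the write lands in the live table
            // commitStagedChanges(): nothing to do, the table already exists
        } catch (RuntimeException e) {
            catalog.remove(name);           // abortStagedChanges(): drop to roll back
        }
    }

    public static void main(String[] args) {
        createWriteOrRollback("ok_table", "rows", false);
        createWriteOrRollback("bad_table", "rows", true);
        System.out.println(catalog.keySet()); // [ok_table]
    }
}
```

The table is visible to other readers between the create and the drop, which is exactly the non-atomicity the class description warns about.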
This StagedTable implements SupportsRead, SupportsWrite, and SupportsDelete by passing the
calls to the real table. Implementing those interfaces is safe because Spark will not use them
unless the table supports them and returns the corresponding capabilities from capabilities().
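The pass-through behavior described above can be sketched with a minimal delegating wrapper. The interface and method names here are illustrative stand-ins, not Spark's actual connector API.

```java
// Hypothetical sketch of delegation: the wrapper implements the read/delete
// surface by forwarding every call to the wrapped (real) table, analogous to
// how the class forwards SupportsRead/SupportsWrite/SupportsDelete calls.
public class DelegatingTable {
    interface SimpleTable {
        String scan();                 // stand-in for SupportsRead
        void deleteWhere(String filter); // stand-in for SupportsDelete
    }

    static class Wrapper implements SimpleTable {
        private final SimpleTable delegate;

        Wrapper(SimpleTable delegate) {
            this.delegate = delegate;
        }

        @Override public String scan() {
            return delegate.scan();              // forward to the real table
        }

        @Override public void deleteWhere(String filter) {
            delegate.deleteWhere(filter);        // forward to the real table
        }
    }

    public static void main(String[] args) {
        SimpleTable real = new SimpleTable() {
            public String scan() { return "rows"; }
            public void deleteWhere(String filter) { /* delete matching rows */ }
        };
        System.out.println(new Wrapper(real).scan()); // rows
    }
}
```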
Constructor Summary

Constructors:
RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog, org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
Method Summary

void abortStagedChanges()
Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
void commitStagedChanges()
void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
String name()
org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
org.apache.spark.sql.connector.expressions.Transform[] partitioning()
java.util.Map<String,String> properties()
org.apache.spark.sql.types.StructType schema()
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.sql.connector.catalog.SupportsDelete
canDeleteWhere, canDeleteWhere, deleteWhere, truncateTable
Methods inherited from interface org.apache.spark.sql.connector.catalog.Table
columns
Constructor Details

RollbackStagedTable
public RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog, org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
Method Details

commitStagedChanges
public void commitStagedChanges()
Specified by: commitStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable
abortStagedChanges
public void abortStagedChanges()
Specified by: abortStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable
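From the caller's side, a staged table follows a commit-on-success, abort-on-failure protocol: after staging, the driver performs the write, then calls commitStagedChanges() if it succeeded or abortStagedChanges() if it failed. A hypothetical sketch of that caller logic, with a stand-in interface rather than Spark's StagedTable:

```java
// Hypothetical sketch of the staged-table lifecycle from the caller's
// perspective. The Staged interface stands in for Spark's StagedTable.
public class StagedLifecycle {
    interface Staged {
        void commitStagedChanges();
        void abortStagedChanges();
    }

    static String writeTo(Staged staged, boolean writeSucceeds) {
        try {
            if (!writeSucceeds) {
                throw new RuntimeException("write failed");
            }
            staged.commitStagedChanges();   // finalize on success
            return "committed";
        } catch (RuntimeException e) {
            staged.abortStagedChanges();    // roll back on failure
            return "aborted";
        }
    }

    public static void main(String[] args) {
        Staged noop = new Staged() {
            public void commitStagedChanges() { /* nothing to finalize */ }
            public void abortStagedChanges() { /* would drop the table */ }
        };
        System.out.println(writeTo(noop, true));  // committed
        System.out.println(writeTo(noop, false)); // aborted
    }
}
```

For this class, commit has nothing left to do because the table was created directly, while abort is where the rollback (dropping the table) happens.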
name
public String name()
Specified by: name in interface org.apache.spark.sql.connector.catalog.Table

schema
public org.apache.spark.sql.types.StructType schema()
Specified by: schema in interface org.apache.spark.sql.connector.catalog.Table

partitioning
public org.apache.spark.sql.connector.expressions.Transform[] partitioning()
Specified by: partitioning in interface org.apache.spark.sql.connector.catalog.Table

properties
public java.util.Map<String,String> properties()
Specified by: properties in interface org.apache.spark.sql.connector.catalog.Table

capabilities
public Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
Specified by: capabilities in interface org.apache.spark.sql.connector.catalog.Table

deleteWhere
public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
Specified by: deleteWhere in interface org.apache.spark.sql.connector.catalog.SupportsDelete

newScanBuilder
public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
Specified by: newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead

newWriteBuilder
public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
Specified by: newWriteBuilder in interface org.apache.spark.sql.connector.catalog.SupportsWrite