Class RollbackStagedTable
java.lang.Object
  org.apache.iceberg.spark.RollbackStagedTable
All Implemented Interfaces:
org.apache.spark.sql.connector.catalog.StagedTable, org.apache.spark.sql.connector.catalog.SupportsDelete, org.apache.spark.sql.connector.catalog.SupportsDeleteV2, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, org.apache.spark.sql.connector.catalog.Table, org.apache.spark.sql.connector.catalog.TruncatableTable
public class RollbackStagedTable extends java.lang.Object implements org.apache.spark.sql.connector.catalog.StagedTable, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, org.apache.spark.sql.connector.catalog.SupportsDelete
An implementation of StagedTable that mimics the behavior of Spark's non-atomic CTAS and RTAS.

A Spark catalog can implement StagingTableCatalog to support atomic operations by producing a StagedTable. But if a catalog implements StagingTableCatalog, Spark expects the catalog to be able to produce a StagedTable for any table loaded by the catalog. This assumption doesn't always hold, as in the case of SparkSessionCatalog, which supports atomic operations and can produce a StagedTable for Iceberg tables, but wraps the session catalog and cannot necessarily produce a working StagedTable implementation for the tables that the session catalog loads.

The work-around is this class, which implements the StagedTable interface but does not have atomic behavior. Instead, the StagedTable interface is used to implement the behavior of the non-atomic SQL plans: create the table, write to it, and drop the table to roll back on failure.

This StagedTable implements SupportsRead, SupportsWrite, and SupportsDelete by passing those calls through to the real table. Implementing the interfaces is safe because Spark will not use them unless the table supports them and returns the corresponding capabilities from capabilities().
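To make the mechanism concrete, the following is a minimal sketch of the pattern described above, assuming the Spark 3.x connector API. The class name NonAtomicStagedTable is hypothetical, and the method bodies are an illustration of the documented behavior, not the actual Iceberg implementation.

  import java.util.Set;

  import org.apache.spark.sql.connector.catalog.Identifier;
  import org.apache.spark.sql.connector.catalog.StagedTable;
  import org.apache.spark.sql.connector.catalog.Table;
  import org.apache.spark.sql.connector.catalog.TableCapability;
  import org.apache.spark.sql.connector.catalog.TableCatalog;
  import org.apache.spark.sql.types.StructType;

  // Hypothetical sketch of a non-atomic StagedTable (not the Iceberg source).
  class NonAtomicStagedTable implements StagedTable {
    private final TableCatalog catalog;
    private final Identifier ident;
    private final Table table;  // the real table that calls are forwarded to

    NonAtomicStagedTable(TableCatalog catalog, Identifier ident, Table table) {
      this.catalog = catalog;
      this.ident = ident;
      this.table = table;
    }

    @Override
    public void commitStagedChanges() {
      // Nothing to commit: the create and the writes already happened
      // non-atomically against the real table.
    }

    @Override
    public void abortStagedChanges() {
      // Roll back the non-atomic CTAS/RTAS by dropping the table it created.
      catalog.dropTable(ident);
    }

    // Metadata calls delegate to the real table.
    @Override
    public String name() {
      return table.name();
    }

    @Override
    public StructType schema() {
      return table.schema();
    }

    @Override
    public Set<TableCapability> capabilities() {
      return table.capabilities();
    }
  }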
Constructor Summary

RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog, org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
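For illustration, a staging catalog that cannot stage a given table atomically might fall back to wrapping the table it loads. This sketch is hypothetical and assumes only the constructor documented above.

  // Hypothetical fall-back inside a StagingTableCatalog implementation:
  // wrap the already-loaded table so Spark's CTAS/RTAS plans can still
  // create, write, and drop the table on rollback.
  StagedTable stageNonAtomically(TableCatalog catalog, Identifier ident)
      throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException {
    Table loaded = catalog.loadTable(ident);
    return new RollbackStagedTable(catalog, ident, loaded);
  }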
Method Summary

void abortStagedChanges()
java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
void commitStagedChanges()
void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
java.lang.String name()
org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
org.apache.spark.sql.connector.expressions.Transform[] partitioning()
java.util.Map<java.lang.String,java.lang.String> properties()
org.apache.spark.sql.types.StructType schema()
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Method Detail
commitStagedChanges
public void commitStagedChanges()
Specified by: commitStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable

abortStagedChanges
public void abortStagedChanges()
Specified by: abortStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable

name
public java.lang.String name()
Specified by: name in interface org.apache.spark.sql.connector.catalog.Table

schema
public org.apache.spark.sql.types.StructType schema()
Specified by: schema in interface org.apache.spark.sql.connector.catalog.Table

partitioning
public org.apache.spark.sql.connector.expressions.Transform[] partitioning()
Specified by: partitioning in interface org.apache.spark.sql.connector.catalog.Table

properties
public java.util.Map<java.lang.String,java.lang.String> properties()
Specified by: properties in interface org.apache.spark.sql.connector.catalog.Table

capabilities
public java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
Specified by: capabilities in interface org.apache.spark.sql.connector.catalog.Table

deleteWhere
public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
Specified by: deleteWhere in interface org.apache.spark.sql.connector.catalog.SupportsDelete

newScanBuilder
public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
Specified by: newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead

newWriteBuilder
public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
Specified by: newWriteBuilder in interface org.apache.spark.sql.connector.catalog.SupportsWrite
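The read, write, and delete pass-through described in the class overview could be sketched as follows. This is an assumption about the shape of the delegation, not the verbatim Iceberg code; the casts rely on the guarantee noted above, namely that Spark invokes these methods only when capabilities() advertises support.

  // Hypothetical pass-through bodies. Spark only calls these when
  // capabilities() reports the matching TableCapability, so the wrapped
  // table can be assumed to implement the corresponding interface.
  @Override
  public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters) {
    ((org.apache.spark.sql.connector.catalog.SupportsDelete) table).deleteWhere(filters);
  }

  @Override
  public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(
      org.apache.spark.sql.util.CaseInsensitiveStringMap options) {
    return ((org.apache.spark.sql.connector.catalog.SupportsRead) table).newScanBuilder(options);
  }

  @Override
  public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(
      org.apache.spark.sql.connector.write.LogicalWriteInfo info) {
    return ((org.apache.spark.sql.connector.catalog.SupportsWrite) table).newWriteBuilder(info);
  }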