Class RollbackStagedTable
- java.lang.Object
  - org.apache.iceberg.spark.RollbackStagedTable

All Implemented Interfaces:
- org.apache.spark.sql.connector.catalog.StagedTable
- org.apache.spark.sql.connector.catalog.SupportsDelete
- org.apache.spark.sql.connector.catalog.SupportsRead
- org.apache.spark.sql.connector.catalog.SupportsWrite
- org.apache.spark.sql.connector.catalog.Table
- org.apache.spark.sql.connector.catalog.TruncatableTable
public class RollbackStagedTable
extends java.lang.Object
implements org.apache.spark.sql.connector.catalog.StagedTable, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, org.apache.spark.sql.connector.catalog.SupportsDelete

An implementation of StagedTable that mimics the behavior of Spark's non-atomic CTAS and RTAS.

A Spark catalog can implement StagingTableCatalog to support atomic operations by producing a StagedTable. But if a catalog implements StagingTableCatalog, Spark expects the catalog to be able to produce a StagedTable for any table loaded by the catalog. This assumption doesn't always hold, as in the case of SparkSessionCatalog, which supports atomic operations and can produce a StagedTable for Iceberg tables, but wraps the session catalog and cannot necessarily produce a working StagedTable implementation for tables that it loads.

The work-around is this class, which implements the StagedTable interface but does not have atomic behavior. Instead, the StagedTable interface is used to implement the behavior of the non-atomic SQL plans that will create a table, write to it, and drop the table to roll back. This StagedTable implements SupportsRead, SupportsWrite, and SupportsDelete by passing the calls through to the real table. Implementing those interfaces is safe because Spark will not use them unless the table supports them and returns the corresponding capabilities from capabilities().
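For a catalog in this situation, the staged-create path reduces to "create the table eagerly, then hand Spark something it can write to and drop on failure". The sketch below is illustrative only: the helper class, its name, and the `delegate` parameter are assumptions and not part of Iceberg or Spark; only the Spark interfaces and the RollbackStagedTable constructor listed on this page are real.

```java
import java.util.Map;

import org.apache.iceberg.spark.RollbackStagedTable;
import org.apache.spark.sql.connector.catalog.Identifier;
import org.apache.spark.sql.connector.catalog.StagedTable;
import org.apache.spark.sql.connector.catalog.Table;
import org.apache.spark.sql.connector.catalog.TableCatalog;
import org.apache.spark.sql.connector.expressions.Transform;
import org.apache.spark.sql.types.StructType;

/**
 * Hypothetical helper, for illustration only: stages a CTAS against a catalog
 * that has no atomic staging support of its own.
 */
final class NonAtomicStaging {
  private NonAtomicStaging() {
  }

  static StagedTable stageCreate(
      TableCatalog delegate,
      Identifier ident,
      StructType schema,
      Transform[] partitions,
      Map<String, String> properties) throws Exception {
    // Create the table eagerly, exactly as a non-atomic CTAS would.
    Table table = delegate.createTable(ident, schema, partitions, properties);

    // Wrap it so Spark's staged-table plan can write to it and, if the write
    // fails, roll back by dropping the table again.
    return new RollbackStagedTable(delegate, ident, table);
  }
}
```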
Constructor Summary

Constructors:
- RollbackStagedTable(org.apache.spark.sql.connector.catalog.TableCatalog catalog, org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
Method Summary

All Methods, Instance Methods, Concrete Methods:
- void abortStagedChanges()
- java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
- void commitStagedChanges()
- void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
- java.lang.String name()
- org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
- org.apache.spark.sql.connector.expressions.Transform[] partitioning()
- java.util.Map<java.lang.String,java.lang.String> properties()
- org.apache.spark.sql.types.StructType schema()
 
Method Detail

commitStagedChanges
public void commitStagedChanges()
- Specified by: commitStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable

abortStagedChanges
public void abortStagedChanges()
- Specified by: abortStagedChanges in interface org.apache.spark.sql.connector.catalog.StagedTable

name
public java.lang.String name()
- Specified by: name in interface org.apache.spark.sql.connector.catalog.Table

schema
public org.apache.spark.sql.types.StructType schema()
- Specified by: schema in interface org.apache.spark.sql.connector.catalog.Table

partitioning
public org.apache.spark.sql.connector.expressions.Transform[] partitioning()
- Specified by: partitioning in interface org.apache.spark.sql.connector.catalog.Table

properties
public java.util.Map<java.lang.String,java.lang.String> properties()
- Specified by: properties in interface org.apache.spark.sql.connector.catalog.Table

capabilities
public java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
- Specified by: capabilities in interface org.apache.spark.sql.connector.catalog.Table

deleteWhere
public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
- Specified by: deleteWhere in interface org.apache.spark.sql.connector.catalog.SupportsDelete

newScanBuilder
public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by: newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead

newWriteBuilder
public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
- Specified by: newWriteBuilder in interface org.apache.spark.sql.connector.catalog.SupportsWrite
 