Package org.apache.iceberg.spark.source
Class SparkTable
- java.lang.Object
-
- org.apache.iceberg.spark.source.SparkTable
-
- All Implemented Interfaces:
org.apache.spark.sql.connector.catalog.SupportsDelete, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, org.apache.spark.sql.connector.catalog.Table, ExtendedSupportsDelete, SupportsMerge
- Direct Known Subclasses:
StagedSparkTable
public class SparkTable extends java.lang.Object implements org.apache.spark.sql.connector.catalog.Table, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, ExtendedSupportsDelete, SupportsMerge
-
Constructor Summary
Constructors
- SparkTable(Table icebergTable, boolean refreshEagerly)
- SparkTable(Table icebergTable, org.apache.spark.sql.types.StructType requestedSchema, boolean refreshEagerly)
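A minimal construction sketch (not taken from this page): it assumes the underlying Iceberg table is loaded with HadoopTables from a hypothetical warehouse path, and the meaning of refreshEagerly is inferred from its name.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.spark.source.SparkTable;

// Load an Iceberg table from a hypothetical warehouse location.
Table icebergTable = new HadoopTables(new Configuration())
    .load("hdfs://warehouse/db/events");

// Wrap it for Spark's DataSource V2 catalog API. The boolean is
// refreshEagerly; by its name it requests eager metadata refresh
// when scans are planned.
SparkTable sparkTable = new SparkTable(icebergTable, true);
```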
-
Method Summary
All Methods · Instance Methods · Concrete Methods
- boolean canDeleteWhere(org.apache.spark.sql.sources.Filter[] filters)
Checks if it is possible to delete data from a data source table that matches filter expressions.
- java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
- void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
- boolean equals(java.lang.Object other)
- int hashCode()
- java.lang.String name()
- MergeBuilder newMergeBuilder(java.lang.String operation, org.apache.spark.sql.connector.write.LogicalWriteInfo info)
Returns a MergeBuilder which can be used to create both a scan and a write for a row-level operation.
- org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
- org.apache.spark.sql.connector.expressions.Transform[] partitioning()
- java.util.Map<java.lang.String,java.lang.String> properties()
- org.apache.spark.sql.types.StructType schema()
- Table table()
- java.lang.String toString()
-
Method Detail
-
table
public Table table()
-
name
public java.lang.String name()
- Specified by:
name in interface org.apache.spark.sql.connector.catalog.Table
-
schema
public org.apache.spark.sql.types.StructType schema()
- Specified by:
schema in interface org.apache.spark.sql.connector.catalog.Table
-
partitioning
public org.apache.spark.sql.connector.expressions.Transform[] partitioning()
- Specified by:
partitioning in interface org.apache.spark.sql.connector.catalog.Table
-
properties
public java.util.Map<java.lang.String,java.lang.String> properties()
- Specified by:
properties in interface org.apache.spark.sql.connector.catalog.Table
-
capabilities
public java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
- Specified by:
capabilities in interface org.apache.spark.sql.connector.catalog.Table
-
newScanBuilder
public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by:
newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead
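A hedged sketch of driving this entry point directly; Spark normally does this itself while planning a read. The options map and the snapshot-id value are illustrative, and sparkTable is the instance from the constructor example above.

```java
import java.util.Map;
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.ScanBuilder;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

// Read options are matched case-insensitively; "snapshot-id" pins the scan
// to a specific Iceberg snapshot (the id below is made up).
CaseInsensitiveStringMap options =
    new CaseInsensitiveStringMap(Map.of("snapshot-id", "5807264574242572923"));

ScanBuilder builder = sparkTable.newScanBuilder(options);
Scan scan = builder.build();  // Spark would normally call build() during planning
```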
-
newWriteBuilder
public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
- Specified by:
newWriteBuilder in interface org.apache.spark.sql.connector.write.SupportsWrite
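In normal use Spark invokes this method itself while planning a DataSource V2 write; below is a sketch of triggering such a write from the DataFrame API (table names are hypothetical). Note that append() declares a checked NoSuchTableException.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().getOrCreate();
Dataset<Row> df = spark.table("staging_db.events");

// Appending through DataFrameWriterV2 resolves the target to a SparkTable
// and calls its newWriteBuilder(...) under the hood.
df.writeTo("iceberg_catalog.db.events").append();
```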
-
newMergeBuilder
public MergeBuilder newMergeBuilder(java.lang.String operation, org.apache.spark.sql.connector.write.LogicalWriteInfo info)
Description copied from interface: SupportsMerge
Returns a MergeBuilder which can be used to create both a scan and a write for a row-level operation. Spark will call this method to configure each data source row-level operation.
- Specified by:
newMergeBuilder in interface SupportsMerge
- Parameters:
info - write info
- Returns:
a merge builder
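A rough sketch of the two sides a row-level operation configures through the returned builder. The asScanBuilder()/asWriteBuilder() accessor names are assumptions (they are not documented on this page), and info stands in for the LogicalWriteInfo that Spark supplies when it plans the operation.

```java
import org.apache.spark.sql.connector.read.ScanBuilder;
import org.apache.spark.sql.connector.write.WriteBuilder;

// One MergeBuilder per row-level operation, e.g. a MERGE INTO statement.
MergeBuilder merge = sparkTable.newMergeBuilder("merge", info);

// Assumed accessors: a scan to locate the affected rows/files, and a write
// to produce their replacements.
ScanBuilder scanBuilder = merge.asScanBuilder();
WriteBuilder writeBuilder = merge.asWriteBuilder();
```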
-
canDeleteWhere
public boolean canDeleteWhere(org.apache.spark.sql.sources.Filter[] filters)
Description copied from interface: ExtendedSupportsDelete
Checks if it is possible to delete data from a data source table that matches filter expressions.
Rows should be deleted from the data source iff all of the filter expressions match. That is, the expressions must be interpreted as a set of filters that are ANDed together.
Spark will call this method to check if the delete is possible without significant effort. Otherwise, Spark will try to rewrite the delete operation if the data source table supports row-level operations.
- Specified by:
canDeleteWhere in interface ExtendedSupportsDelete
- Parameters:
filters - filter expressions, used to select rows to delete when all expressions match
- Returns:
true if the delete operation can be performed
-
deleteWhere
public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
- Specified by:
deleteWhere in interface org.apache.spark.sql.connector.catalog.SupportsDelete
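A short sketch tying canDeleteWhere and deleteWhere together; the date column and its value are hypothetical.

```java
import org.apache.spark.sql.sources.EqualTo;
import org.apache.spark.sql.sources.Filter;

// All filters are ANDed together; rows matching every filter are deleted.
Filter[] filters = new Filter[] { new EqualTo("date", "2021-01-01") };

// Only issue the delete when the table reports it can satisfy the filters
// directly; otherwise Spark falls back to rewriting the operation as a
// row-level delete.
if (sparkTable.canDeleteWhere(filters)) {
  sparkTable.deleteWhere(filters);
}
```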
-
toString
public java.lang.String toString()
- Overrides:
toString in class java.lang.Object
-
equals
public boolean equals(java.lang.Object other)
- Overrides:
equals in class java.lang.Object
-
hashCode
public int hashCode()
- Overrides:
hashCode in class java.lang.Object