Package org.apache.iceberg.spark.source
Class SparkTable
- java.lang.Object
  - org.apache.iceberg.spark.source.SparkTable
- All Implemented Interfaces:
  org.apache.spark.sql.connector.catalog.SupportsDelete, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, org.apache.spark.sql.connector.catalog.Table, ExtendedSupportsDelete, SupportsMerge
- Direct Known Subclasses:
StagedSparkTable
public class SparkTable extends java.lang.Object implements org.apache.spark.sql.connector.catalog.Table, org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.SupportsWrite, ExtendedSupportsDelete, SupportsMerge
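SparkTable is Iceberg's implementation of Spark's DataSource V2 Table interface. Spark normally obtains instances by loading a table through an Iceberg catalog rather than by direct construction. A minimal sketch, assuming a Hadoop-type catalog named my_catalog and a table db.events (catalog name, warehouse path, and table identifier are illustrative assumptions):

    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.connector.catalog.Identifier;
    import org.apache.spark.sql.connector.catalog.Table;
    import org.apache.spark.sql.connector.catalog.TableCatalog;

    public class LoadIcebergTable {
      public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .master("local[*]")
            // Catalog name and warehouse path are illustrative assumptions.
            .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
            .config("spark.sql.catalog.my_catalog.type", "hadoop")
            .config("spark.sql.catalog.my_catalog.warehouse", "/tmp/warehouse")
            .getOrCreate();

        // For Iceberg tables, loadTable returns a SparkTable instance.
        TableCatalog catalog =
            (TableCatalog) spark.sessionState().catalogManager().catalog("my_catalog");
        Table table = catalog.loadTable(Identifier.of(new String[] {"db"}, "events"));

        System.out.println(table.name());
        System.out.println(table.capabilities());
      }
    }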
-
Constructor Summary
SparkTable(Table icebergTable, boolean refreshEagerly)
SparkTable(Table icebergTable, org.apache.spark.sql.types.StructType requestedSchema, boolean refreshEagerly)
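Both constructors wrap an already-loaded Iceberg Table; the three-argument form additionally accepts a requested read schema. A minimal sketch, assuming a Hadoop-tables warehouse (the path is an assumption):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.hadoop.HadoopTables;
    import org.apache.iceberg.spark.source.SparkTable;

    // Load an Iceberg table from a filesystem location (path is illustrative).
    Table icebergTable = new HadoopTables(new Configuration()).load("/tmp/warehouse/db/events");

    // refreshEagerly = true refreshes Iceberg metadata each time a scan is planned;
    // false keeps the metadata state from when the table was loaded.
    SparkTable sparkTable = new SparkTable(icebergTable, true);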
-
Method Summary
boolean canDeleteWhere(org.apache.spark.sql.sources.Filter[] filters)
    Checks if it is possible to delete data from a data source table that matches filter expressions.
java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
boolean equals(java.lang.Object other)
int hashCode()
java.lang.String name()
MergeBuilder newMergeBuilder(java.lang.String operation, org.apache.spark.sql.connector.write.LogicalWriteInfo info)
    Returns a MergeBuilder which can be used to create both a scan and a write for a row-level operation.
org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
org.apache.spark.sql.connector.expressions.Transform[] partitioning()
java.util.Map<java.lang.String,java.lang.String> properties()
org.apache.spark.sql.types.StructType schema()
Table table()
java.lang.String toString()
-
Method Detail
-
table
public Table table()
-
name
public java.lang.String name()
- Specified by:
  name in interface org.apache.spark.sql.connector.catalog.Table
-
schema
public org.apache.spark.sql.types.StructType schema()
- Specified by:
  schema in interface org.apache.spark.sql.connector.catalog.Table
-
partitioning
public org.apache.spark.sql.connector.expressions.Transform[] partitioning()
- Specified by:
  partitioning in interface org.apache.spark.sql.connector.catalog.Table
-
properties
public java.util.Map<java.lang.String,java.lang.String> properties()
- Specified by:
  properties in interface org.apache.spark.sql.connector.catalog.Table
-
capabilities
public java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
- Specified by:
  capabilities in interface org.apache.spark.sql.connector.catalog.Table
-
newScanBuilder
public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by:
  newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead
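The scan options correspond to Iceberg's Spark read options, such as snapshot-id or as-of-timestamp for time travel. A sketch, reusing the sparkTable instance from the constructor example above (the snapshot id is a placeholder):

    import java.util.Map;
    import org.apache.spark.sql.connector.read.Scan;
    import org.apache.spark.sql.connector.read.ScanBuilder;
    import org.apache.spark.sql.util.CaseInsensitiveStringMap;

    // Scan a specific snapshot; the id value is a placeholder.
    CaseInsensitiveStringMap options =
        new CaseInsensitiveStringMap(Map.of("snapshot-id", "3821550127947089009"));

    ScanBuilder builder = sparkTable.newScanBuilder(options);
    Scan scan = builder.build();
    System.out.println(scan.description());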
-
newWriteBuilder
public org.apache.spark.sql.connector.write.WriteBuilder newWriteBuilder(org.apache.spark.sql.connector.write.LogicalWriteInfo info)
- Specified by:
  newWriteBuilder in interface org.apache.spark.sql.connector.catalog.SupportsWrite
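Spark builds the LogicalWriteInfo itself during write planning, so this method is rarely called directly; writes usually arrive through the DataFrameWriterV2 API. A sketch, assuming the session and table names from the earlier examples:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    // An append through the v2 writer is planned via newWriteBuilder on the
    // resolved SparkTable. Throws if the target table does not exist.
    Dataset<Row> df = spark.table("my_catalog.db.updates");
    df.writeTo("my_catalog.db.events").append();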
-
newMergeBuilder
public MergeBuilder newMergeBuilder(java.lang.String operation, org.apache.spark.sql.connector.write.LogicalWriteInfo info)
Description copied from interface: SupportsMerge
Returns a MergeBuilder which can be used to create both a scan and a write for a row-level operation. Spark will call this method to configure each data source row-level operation.
- Specified by:
  newMergeBuilder in interface SupportsMerge
- Parameters:
  info - write info
- Returns:
  a merge builder
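Spark reaches this method when planning row-level commands such as MERGE INTO on an Iceberg table. For illustration, assuming the session and tables from the earlier sketches (column names are also assumptions):

    // MERGE INTO statements against Iceberg tables are planned through newMergeBuilder.
    spark.sql(
        "MERGE INTO my_catalog.db.events t "
            + "USING my_catalog.db.updates s "
            + "ON t.id = s.id "
            + "WHEN MATCHED THEN UPDATE SET t.value = s.value "
            + "WHEN NOT MATCHED THEN INSERT *");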
-
canDeleteWhere
public boolean canDeleteWhere(org.apache.spark.sql.sources.Filter[] filters)
Description copied from interface: ExtendedSupportsDelete
Checks if it is possible to delete data from a data source table that matches filter expressions.
Rows should be deleted from the data source iff all of the filter expressions match. That is, the expressions must be interpreted as a set of filters that are ANDed together.
Spark will call this method to check if the delete is possible without significant effort. Otherwise, Spark will try to rewrite the delete operation if the data source table supports row-level operations.
- Specified by:
  canDeleteWhere in interface ExtendedSupportsDelete
- Parameters:
  filters - filter expressions, used to select rows to delete when all expressions match
- Returns:
  true if the delete operation can be performed
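A sketch of the AND semantics, reusing the sparkTable instance from the constructor example (column names are assumptions). When this returns true, the matching rows can be removed via deleteWhere (below) without rewriting data files:

    import org.apache.spark.sql.sources.EqualTo;
    import org.apache.spark.sql.sources.Filter;

    // Both filters must match for a row to be deleted:
    // day = '2021-06-01' AND region = 'us-east'.
    Filter[] filters = new Filter[] {
        new EqualTo("day", "2021-06-01"),
        new EqualTo("region", "us-east")
    };

    if (sparkTable.canDeleteWhere(filters)) {
      // Metadata-only delete: matching files are dropped, no row-level rewrite.
      sparkTable.deleteWhere(filters);
    }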
-
deleteWhere
public void deleteWhere(org.apache.spark.sql.sources.Filter[] filters)
- Specified by:
  deleteWhere in interface org.apache.spark.sql.connector.catalog.SupportsDelete
-
toString
public java.lang.String toString()
- Overrides:
  toString in class java.lang.Object
-
equals
public boolean equals(java.lang.Object other)
- Overrides:
  equals in class java.lang.Object
-
hashCode
public int hashCode()
- Overrides:
  hashCode in class java.lang.Object