Package org.apache.iceberg.spark.source
Class IcebergSource
- java.lang.Object
  - org.apache.iceberg.spark.source.IcebergSource

- All Implemented Interfaces:
  org.apache.spark.sql.connector.catalog.SupportsCatalogOptions, org.apache.spark.sql.connector.catalog.TableProvider, org.apache.spark.sql.sources.DataSourceRegister
public class IcebergSource extends java.lang.Object implements org.apache.spark.sql.sources.DataSourceRegister, org.apache.spark.sql.connector.catalog.SupportsCatalogOptions

The IcebergSource loads/writes tables with format "iceberg". It can load paths and tables.

How paths/tables are resolved when using spark.read().format("iceberg").load(table):
- table = "file:/path/to/table" -> loads a HadoopTable at the given path
- table = "tablename" -> loads currentCatalog.currentNamespace.tablename
- table = "catalog.tablename" -> loads "tablename" from the specified catalog
- table = "namespace.tablename" -> loads "namespace.tablename" from the current catalog
- table = "catalog.namespace.tablename" -> loads "namespace.tablename" from the specified catalog
- table = "namespace1.namespace2.tablename" -> loads "namespace1.namespace2.tablename" from the current catalog

The above list is in order of priority; for example, a matching catalog takes priority over any namespace resolution.
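A minimal usage sketch of the two resolution styles above. The catalog name my_catalog, the table db.events, and the file path are placeholders; a SparkSession with the Iceberg runtime on the classpath is assumed.

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  SparkSession spark = SparkSession.builder().appName("iceberg-source-example").getOrCreate();

  // Path-based load: resolves to a HadoopTable at the given location.
  Dataset<Row> byPath = spark.read().format("iceberg").load("file:/path/to/table");

  // Name-based load: "my_catalog.db.events" is resolved through the configured catalog.
  Dataset<Row> byName = spark.read().format("iceberg").load("my_catalog.db.events");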
Constructor Summary
IcebergSource()

Method Summary
All methods are instance, concrete methods.

Modifier and Type                                        Method
java.lang.String                                         extractCatalog(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.connector.catalog.Identifier        extractIdentifier(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.connector.catalog.Table             getTable(org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] partitioning, java.util.Map<java.lang.String,java.lang.String> options)
org.apache.spark.sql.connector.expressions.Transform[]   inferPartitioning(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.types.StructType                    inferSchema(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
java.lang.String                                         shortName()
boolean                                                  supportsExternalMetadata()

Method Detail
- 
shortName
public java.lang.String shortName()
- Specified by:
  shortName in interface org.apache.spark.sql.sources.DataSourceRegister
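The short name registered here is "iceberg" (the format string quoted in the class description), so the provider can be addressed either by that name or by its fully qualified class name. A sketch, reusing the spark session and placeholder table from the earlier example:

  // Both reads resolve to the same provider; the short name comes from shortName().
  Dataset<Row> viaShortName = spark.read().format("iceberg").load("db.events");
  Dataset<Row> viaClassName =
      spark.read().format("org.apache.iceberg.spark.source.IcebergSource").load("db.events");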
 
- 
inferSchema
public org.apache.spark.sql.types.StructType inferSchema(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by:
  inferSchema in interface org.apache.spark.sql.connector.catalog.TableProvider
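Since the schema is inferred from the Iceberg table's metadata rather than supplied by the caller, a read needs no explicit .schema(...) call. Sketch, with the same placeholder names as above:

  Dataset<Row> df = spark.read().format("iceberg").load("my_catalog.db.events");
  df.printSchema();  // schema provided by inferSchema(...), derived from table metadata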
 
- 
inferPartitioning
public org.apache.spark.sql.connector.expressions.Transform[] inferPartitioning(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by:
  inferPartitioning in interface org.apache.spark.sql.connector.catalog.TableProvider
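A hypothetical direct invocation of the two inference methods, purely to illustrate the TableProvider contract; in normal use Spark calls them itself during planning. An active SparkSession, an existing Iceberg table at the placeholder path, and the "path" option key are all assumptions here.

  import org.apache.spark.sql.connector.expressions.Transform;
  import org.apache.spark.sql.types.StructType;
  import org.apache.spark.sql.util.CaseInsensitiveStringMap;

  IcebergSource source = new IcebergSource();
  CaseInsensitiveStringMap options =
      new CaseInsensitiveStringMap(java.util.Map.of("path", "file:/path/to/table"));
  StructType schema = source.inferSchema(options);              // schema from table metadata
  Transform[] partitioning = source.inferPartitioning(options); // the table's partition transforms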
 
- 
supportsExternalMetadata
public boolean supportsExternalMetadata()
- Specified by:
  supportsExternalMetadata in interface org.apache.spark.sql.connector.catalog.TableProvider
 
- 
getTable
public org.apache.spark.sql.connector.catalog.Table getTable(org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] partitioning, java.util.Map<java.lang.String,java.lang.String> options)
- Specified by:
  getTable in interface org.apache.spark.sql.connector.catalog.TableProvider
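A hypothetical continuation of the sketch above; Spark normally calls getTable(...) itself. Passing null for schema and partitioning is an assumption made here, on the basis that both are derived from the table's own metadata.

  import org.apache.spark.sql.connector.catalog.Table;

  Table table = source.getTable(null, null, java.util.Map.of("path", "file:/path/to/table"));
  System.out.println(table.name());          // resolved table name
  System.out.println(table.capabilities());  // read/write capabilities reported to Spark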
 
- 
extractIdentifier
public org.apache.spark.sql.connector.catalog.Identifier extractIdentifier(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by:
  extractIdentifier in interface org.apache.spark.sql.connector.catalog.SupportsCatalogOptions
 
- 
extractCatalog
public java.lang.String extractCatalog(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- Specified by:
  extractCatalog in interface org.apache.spark.sql.connector.catalog.SupportsCatalogOptions
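Because IcebergSource implements SupportsCatalogOptions, a multi-part table string handed to load() is split by extractCatalog(...) and extractIdentifier(...) and resolved through the named Spark catalog rather than being treated as a path. A sketch; my_catalog is a placeholder catalog, backed here, as an assumption, by a Hive metastore:

  // Catalog configuration (could equally be passed as --conf at submit time).
  spark.conf().set("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog");
  spark.conf().set("spark.sql.catalog.my_catalog.type", "hive");

  // "my_catalog.db.events" -> extractCatalog => "my_catalog"
  //                           extractIdentifier => namespace "db", name "events"
  Dataset<Row> df = spark.read().format("iceberg").load("my_catalog.db.events");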
 