Package org.apache.iceberg.spark

Class SparkCatalog

- java.lang.Object
- org.apache.iceberg.spark.SparkCatalog

All Implemented Interfaces:
HasIcebergCatalog, org.apache.spark.sql.connector.catalog.CatalogPlugin, org.apache.spark.sql.connector.catalog.FunctionCatalog, org.apache.spark.sql.connector.catalog.StagingTableCatalog, org.apache.spark.sql.connector.catalog.SupportsNamespaces, org.apache.spark.sql.connector.catalog.TableCatalog, ProcedureCatalog
 
public class SparkCatalog extends java.lang.Object

A Spark TableCatalog implementation that wraps an Iceberg Catalog.

This supports the following catalog configuration options:

- type - catalog type, "hive" or "hadoop". To specify a catalog that is neither Hive nor Hadoop, use the catalog-impl option.
- uri - the Hive Metastore URI (Hive catalog only)
- warehouse - the warehouse path (Hadoop catalog only)
- catalog-impl - a custom Catalog implementation to use
- default-namespace - a namespace to use as the default
- cache-enabled - whether to enable the catalog cache
- cache.expiration-interval-ms - interval in millis before expiring tables from the catalog cache. Refer to CatalogProperties.CACHE_EXPIRATION_INTERVAL_MS for further details and significant values.
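As an illustration, these options are typically supplied as Spark configuration properties under the `spark.sql.catalog.<name>` prefix; the catalog name `my_catalog`, the metastore host, and the namespace below are placeholders:

```properties
# Hypothetical catalog named "my_catalog", backed by this SparkCatalog adapter
spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog
# Hive-backed catalog: "type" selects the catalog, "uri" points at the metastore
spark.sql.catalog.my_catalog.type=hive
spark.sql.catalog.my_catalog.uri=thrift://metastore-host:9083
# Optional settings from the list above
spark.sql.catalog.my_catalog.default-namespace=db
spark.sql.catalog.my_catalog.cache-enabled=true
spark.sql.catalog.my_catalog.cache.expiration-interval-ms=30000
```

With this in place, `initialize(name, options)` receives the properties after the `spark.sql.catalog.my_catalog.` prefix is stripped.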
 
Constructor Summary

Constructors:
- SparkCatalog()
Method Summary

- void alterNamespace(java.lang.String[] namespace, org.apache.spark.sql.connector.catalog.NamespaceChange... changes)
- org.apache.spark.sql.connector.catalog.Table alterTable(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.TableChange... changes)
- protected Catalog buildIcebergCatalog(java.lang.String name, org.apache.spark.sql.util.CaseInsensitiveStringMap options): Build an Iceberg Catalog to be used by this Spark catalog adapter.
- protected TableIdentifier buildIdentifier(org.apache.spark.sql.connector.catalog.Identifier identifier): Build an Iceberg TableIdentifier for the given Spark identifier.
- void createNamespace(java.lang.String[] namespace, java.util.Map<java.lang.String,java.lang.String> metadata)
- org.apache.spark.sql.connector.catalog.Table createTable(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties)
- java.lang.String[] defaultNamespace()
- boolean dropNamespace(java.lang.String[] namespace, boolean cascade)
- boolean dropTable(org.apache.spark.sql.connector.catalog.Identifier ident)
- Catalog icebergCatalog(): Returns the underlying Catalog backing this Spark catalog.
- void initialize(java.lang.String name, org.apache.spark.sql.util.CaseInsensitiveStringMap options)
- void invalidateTable(org.apache.spark.sql.connector.catalog.Identifier ident)
- org.apache.spark.sql.connector.catalog.Identifier[] listFunctions(java.lang.String[] namespace)
- java.lang.String[][] listNamespaces()
- java.lang.String[][] listNamespaces(java.lang.String[] namespace)
- org.apache.spark.sql.connector.catalog.Identifier[] listTables(java.lang.String[] namespace)
- org.apache.spark.sql.connector.catalog.functions.UnboundFunction loadFunction(org.apache.spark.sql.connector.catalog.Identifier ident)
- java.util.Map<java.lang.String,java.lang.String> loadNamespaceMetadata(java.lang.String[] namespace)
- Procedure loadProcedure(org.apache.spark.sql.connector.catalog.Identifier ident): Load a stored procedure by identifier.
- org.apache.spark.sql.connector.catalog.Table loadTable(org.apache.spark.sql.connector.catalog.Identifier ident)
- org.apache.spark.sql.connector.catalog.Table loadTable(org.apache.spark.sql.connector.catalog.Identifier ident, long timestamp)
- org.apache.spark.sql.connector.catalog.Table loadTable(org.apache.spark.sql.connector.catalog.Identifier ident, java.lang.String version)
- java.lang.String name()
- boolean purgeTable(org.apache.spark.sql.connector.catalog.Identifier ident)
- void renameTable(org.apache.spark.sql.connector.catalog.Identifier from, org.apache.spark.sql.connector.catalog.Identifier to)
- org.apache.spark.sql.connector.catalog.StagedTable stageCreate(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties)
- org.apache.spark.sql.connector.catalog.StagedTable stageCreateOrReplace(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties)
- org.apache.spark.sql.connector.catalog.StagedTable stageReplace(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties)
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.sql.connector.catalog.FunctionCatalog:
functionExists
 
Method Detail

buildIcebergCatalog

protected Catalog buildIcebergCatalog(java.lang.String name, org.apache.spark.sql.util.CaseInsensitiveStringMap options)

Build an Iceberg Catalog to be used by this Spark catalog adapter.

Parameters:
- name - Spark's catalog name
- options - Spark's catalog options

Returns:
- an Iceberg catalog
 
buildIdentifier

protected TableIdentifier buildIdentifier(org.apache.spark.sql.connector.catalog.Identifier identifier)

Build an Iceberg TableIdentifier for the given Spark identifier.

Parameters:
- identifier - Spark's identifier

Returns:
- an Iceberg identifier
 
loadTable

public org.apache.spark.sql.connector.catalog.Table loadTable(org.apache.spark.sql.connector.catalog.Identifier ident) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException
 
loadTable

public org.apache.spark.sql.connector.catalog.Table loadTable(org.apache.spark.sql.connector.catalog.Identifier ident, java.lang.String version) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException
 
loadTable

public org.apache.spark.sql.connector.catalog.Table loadTable(org.apache.spark.sql.connector.catalog.Identifier ident, long timestamp) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException
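The version and timestamp overloads of loadTable back Spark SQL time travel. A sketch, assuming a catalog named my_catalog and a table db.events (both placeholders), with an example snapshot ID and timestamp:

```sql
-- VERSION AS OF resolves through loadTable(ident, version)
SELECT * FROM my_catalog.db.events VERSION AS OF 10963874102873;

-- TIMESTAMP AS OF resolves through loadTable(ident, timestamp)
SELECT * FROM my_catalog.db.events TIMESTAMP AS OF '2021-07-01 10:00:00';
```

The exact time-travel syntax accepted depends on the Spark version in use.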
 
createTable

public org.apache.spark.sql.connector.catalog.Table createTable(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties) throws org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException

Throws:
- org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 
stageCreate

public org.apache.spark.sql.connector.catalog.StagedTable stageCreate(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties) throws org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException

Throws:
- org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 
stageReplace

public org.apache.spark.sql.connector.catalog.StagedTable stageReplace(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException
 
stageCreateOrReplace

public org.apache.spark.sql.connector.catalog.StagedTable stageCreateOrReplace(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.types.StructType schema, org.apache.spark.sql.connector.expressions.Transform[] transforms, java.util.Map<java.lang.String,java.lang.String> properties)
alterTable

public org.apache.spark.sql.connector.catalog.Table alterTable(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.TableChange... changes) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException
 
dropTable

public boolean dropTable(org.apache.spark.sql.connector.catalog.Identifier ident)
purgeTable

public boolean purgeTable(org.apache.spark.sql.connector.catalog.Identifier ident)
renameTable

public void renameTable(org.apache.spark.sql.connector.catalog.Identifier from, org.apache.spark.sql.connector.catalog.Identifier to) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException, org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchTableException
- org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 
invalidateTable

public void invalidateTable(org.apache.spark.sql.connector.catalog.Identifier ident)
listTables

public org.apache.spark.sql.connector.catalog.Identifier[] listTables(java.lang.String[] namespace)
defaultNamespace

public java.lang.String[] defaultNamespace()
listNamespaces

public java.lang.String[][] listNamespaces()
listNamespaces

public java.lang.String[][] listNamespaces(java.lang.String[] namespace) throws org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException
 
loadNamespaceMetadata

public java.util.Map<java.lang.String,java.lang.String> loadNamespaceMetadata(java.lang.String[] namespace) throws org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException
 
createNamespace

public void createNamespace(java.lang.String[] namespace, java.util.Map<java.lang.String,java.lang.String> metadata) throws org.apache.spark.sql.catalyst.analysis.NamespaceAlreadyExistsException

Throws:
- org.apache.spark.sql.catalyst.analysis.NamespaceAlreadyExistsException
 
alterNamespace

public void alterNamespace(java.lang.String[] namespace, org.apache.spark.sql.connector.catalog.NamespaceChange... changes) throws org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException
 
dropNamespace

public boolean dropNamespace(java.lang.String[] namespace, boolean cascade) throws org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException
 
initialize

public final void initialize(java.lang.String name, org.apache.spark.sql.util.CaseInsensitiveStringMap options)
name

public java.lang.String name()
icebergCatalog

public Catalog icebergCatalog()

Description copied from interface: HasIcebergCatalog
Returns the underlying Catalog backing this Spark catalog.
loadProcedure

public Procedure loadProcedure(org.apache.spark.sql.connector.catalog.Identifier ident) throws NoSuchProcedureException

Description copied from interface: ProcedureCatalog
Load a stored procedure by identifier.

Specified by:
- loadProcedure in interface ProcedureCatalog

Parameters:
- ident - a stored procedure identifier

Returns:
- the stored procedure's metadata

Throws:
- NoSuchProcedureException - if there is no matching stored procedure
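Procedures resolved through this method are invoked with Spark SQL's CALL syntax. A sketch, assuming a catalog named my_catalog (a placeholder) and Iceberg's built-in rollback_to_snapshot procedure, with an example snapshot ID:

```sql
-- Spark resolves system.rollback_to_snapshot via loadProcedure on my_catalog
CALL my_catalog.system.rollback_to_snapshot('db.events', 10963874102873);
```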
 
listFunctions

public org.apache.spark.sql.connector.catalog.Identifier[] listFunctions(java.lang.String[] namespace) throws org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException

Specified by:
- listFunctions in interface org.apache.spark.sql.connector.catalog.FunctionCatalog

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException
 
loadFunction

public org.apache.spark.sql.connector.catalog.functions.UnboundFunction loadFunction(org.apache.spark.sql.connector.catalog.Identifier ident) throws org.apache.spark.sql.catalyst.analysis.NoSuchFunctionException

Specified by:
- loadFunction in interface org.apache.spark.sql.connector.catalog.FunctionCatalog

Throws:
- org.apache.spark.sql.catalyst.analysis.NoSuchFunctionException
 
 