Package org.apache.iceberg.spark
Class Summary

- ChangelogIterator - An iterator that transforms rows from changelog tables within a single Spark task.
- CommitMetadata - Utility class to accept thread-local commit properties.
- ComputeUpdateIterator - An iterator that finds delete/insert rows which represent an update, and converts them into update records from changelog tables within a single Spark task.
- JobGroupInfo - Captures information about the current job, used for display on the UI.
- RemoveNetCarryoverIterator - This class computes the net changes across multiple snapshots.
- RollbackStagedTable - An implementation of StagedTable that mimics the behavior of Spark's non-atomic CTAS and RTAS.
- Spark3Util.CatalogAndIdentifier - This mimics a class inside of Spark which is private inside of LookupCatalog.
- SparkCachedTableCatalog - An internal table catalog that is capable of loading tables from a cache.
- SparkCatalog - A Spark TableCatalog implementation that wraps an Iceberg Catalog.
- SparkExecutorCache - An executor cache for reducing the computation and IO overhead in tasks.
- SparkFunctionCatalog - A function catalog that can be used to resolve Iceberg functions without a metastore connection.
- SparkReadConf - A class for common Iceberg configs for Spark reads.
- SparkReadOptions - Spark DataFrame read options.
- SparkSchemaUtil - Helper methods for working with Spark/Hive metadata.
- SparkSessionCatalog<T extends org.apache.spark.sql.connector.catalog.TableCatalog & org.apache.spark.sql.connector.catalog.FunctionCatalog & org.apache.spark.sql.connector.catalog.SupportsNamespaces> - A Spark catalog that can also load non-Iceberg tables.
- SparkTableUtil - Java version of the original SparkTableUtil.scala: https://github.com/apache/iceberg/blob/apache-iceberg-0.8.0-incubating/spark/src/main/scala/org/apache/iceberg/spark/SparkTableUtil.scala
- SparkTableUtil.SparkPartition - Class representing a table partition.
- SparkValueConverter - A utility class that converts Spark values to Iceberg's internal representation.
- SparkWriteConf - A class for common Iceberg configs for Spark writes.
- SparkWriteOptions - Spark DataFrame write options.
- SparkWriteRequirements - A set of requirements, such as distribution and ordering, reported to Spark during writes.
- SparkWriteUtil - A utility that contains helper methods for working with Spark writes.
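The two catalog implementations above are typically wired in through Spark SQL configuration rather than instantiated directly. A minimal sketch of a `spark-defaults.conf` fragment, assuming a catalog name of `my_catalog` and a placeholder warehouse path (both are illustrative, not fixed by Iceberg):

```properties
# Register an Iceberg catalog named "my_catalog" backed by a Hadoop warehouse
spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.my_catalog.type=hadoop
spark.sql.catalog.my_catalog.warehouse=hdfs://namenode:8020/warehouse/path

# Replace Spark's built-in session catalog with SparkSessionCatalog so that
# non-Iceberg tables in the Hive metastore still resolve
spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type=hive
```

Using `SparkSessionCatalog` for `spark_catalog` is what allows Iceberg and legacy Hive tables to coexist under one catalog name.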
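SparkReadOptions and SparkWriteOptions name the string keys passed to Spark's `DataFrameReader.option(...)` and `DataFrameWriter.option(...)`. A minimal sketch, using a plain map to illustrate the key/value shape; the keys `snapshot-id` and `split-size` are documented Iceberg read options, while the table name in the comment is a hypothetical example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of common Iceberg read-option keys. In a real job these strings are
// passed to Spark, e.g.:
//   spark.read().format("iceberg")
//        .option("snapshot-id", "10963874102873")
//        .load("db.table");
public class ReadOptionsSketch {
  public static Map<String, String> timeTravelOptions() {
    Map<String, String> opts = new LinkedHashMap<>();
    opts.put("snapshot-id", "10963874102873"); // read a specific table snapshot
    opts.put("split-size", "134217728");       // target split size in bytes (128 MB)
    return opts;
  }

  public static void main(String[] args) {
    System.out.println(timeTravelOptions());
  }
}
```

The same pattern applies on the write side, where keys such as `write-format` select the file format per write.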