All Classes
Class |
Description |
AbstractMapredIcebergRecordReader<T> |
|
Accessor<T> |
|
Accessors |
Position2Accessor and Position3Accessor here are an optimization.
|
Action<ThisT,R> |
An action performed on a table.
|
Actions |
|
ActionsProvider |
An API that should be implemented by query engine integrations for providing actions.
|
AliyunClientFactories |
|
AliyunClientFactory |
|
AliyunProperties |
|
AllDataFilesTable |
A Table implementation that exposes a table's valid data files as rows.
|
AllDataFilesTable.AllDataFilesTableScan |
|
AllEntriesTable |
A Table implementation that exposes a table's manifest entries as rows, for both delete and data files.
|
AllManifestsTable |
A Table implementation that exposes a table's valid manifest files as rows.
|
AllManifestsTable.AllManifestsTableScan |
|
AlreadyExistsException |
Exception raised when attempting to create a table that already exists.
|
AncestorsOfProcedure |
|
And |
|
AppendFiles |
API for appending new files in a table.
|
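For example, a minimal sketch of an append commit, assuming an existing Table named table and an already-written DataFile named dataFile:

    // Atomically append one data file to the table.
    table.newAppend()
        .appendFile(dataFile)
        .commit();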
ArrayUtil |
|
ArrowAllocation |
|
ArrowReader |
|
ArrowSchemaUtil |
|
ArrowVectorAccessor<DecimalT,Utf8StringT,ArrayT,ChildVectorT extends java.lang.AutoCloseable> |
|
ArrowVectorAccessors |
|
AssumeRoleAwsClientFactory |
|
Avro |
|
Avro.DataWriteBuilder |
|
Avro.DeleteWriteBuilder |
|
Avro.ReadBuilder |
|
Avro.WriteBuilder |
|
AvroEncoderUtil |
|
AvroIterable<D> |
|
AvroMetrics |
|
AvroSchemaUtil |
|
AvroSchemaVisitor<T> |
|
AvroSchemaWithTypeVisitor<T> |
|
AvroWithFlinkSchemaVisitor<T> |
|
AvroWithPartnerByStructureVisitor<P,T> |
An abstract Avro schema visitor with partner type.
|
AvroWithSparkSchemaVisitor<T> |
|
AwsClientFactories |
|
AwsClientFactory |
Interface to customize AWS clients used by Iceberg.
|
AwsProperties |
|
BaseBatchReader<T> |
A base BatchReader class that contains common functionality.
|
BaseColumnIterator |
|
BaseCombinedScanTask |
|
BaseDeleteOrphanFilesActionResult |
|
BaseDeleteOrphanFilesSparkAction |
An action that removes orphan metadata and data files by listing a given location and comparing
the actual files in that location with data and metadata files referenced by all valid snapshots.
|
BaseDeleteReachableFilesActionResult |
|
BaseDeleteReachableFilesSparkAction |
An implementation of DeleteReachableFiles that uses metadata tables in Spark
to determine which files should be deleted.
|
BaseExpireSnapshotsActionResult |
|
BaseExpireSnapshotsSparkAction |
An action that performs the same operation as ExpireSnapshots but uses Spark
to determine the delta in files between the pre and post-expiration table metadata.
|
BaseFileGroupRewriteResult |
|
BaseFileWriterFactory<T> |
A base writer factory to be extended by query engine integrations.
|
BaseMetastoreCatalog |
|
BaseMetastoreTableOperations |
|
BaseMetastoreTableOperations.CommitStatus |
|
BaseMigrateTableActionResult |
|
BaseMigrateTableSparkAction |
Takes a Spark table in the source catalog and attempts to transform it into an Iceberg
table in the same location with the same identifier.
|
BaseOverwriteFiles |
|
BasePageIterator |
|
BasePageIterator.IntIterator |
|
BaseParquetReaders<T> |
|
BaseParquetWriter<T> |
|
BasePositionDeltaWriter<T> |
|
BaseReplacePartitions |
|
BaseReplaceSortOrder |
|
BaseRewriteDataFilesAction<ThisT> |
|
BaseRewriteDataFilesFileGroupInfo |
|
BaseRewriteDataFilesResult |
|
BaseRewriteDataFilesSpark3Action |
|
BaseRewriteManifests |
|
BaseRewriteManifestsActionResult |
|
BaseRewriteManifestsSparkAction |
An action that rewrites manifests in a distributed manner and co-locates metadata for partitions.
|
BaseSnapshotTableActionResult |
|
BaseSnapshotTableSparkAction |
Creates a new Iceberg table based on a source Spark table.
|
BaseTable |
Base Table implementation.
|
BaseTaskWriter<T> |
|
BaseVectorizedParquetValuesReader |
A values reader for Parquet's run-length encoded data that reads column data in batches instead of one value at a
time.
|
BinaryUtil |
|
Binder |
Rewrites expressions by replacing unbound named references with references to
fields in a struct schema.
|
BinPacking |
|
BinPacking.ListPacker<T> |
|
BinPacking.PackingIterable<T> |
|
BinPackStrategy |
A rewrite strategy for data files which determines which files to rewrite
based on their size.
|
Bound<T> |
Represents a bound value expression.
|
BoundLiteralPredicate<T> |
|
BoundPredicate<T> |
|
BoundReference<T> |
|
BoundSetPredicate<T> |
|
BoundTerm<T> |
Represents a bound term.
|
BoundTransform<S,T> |
A transform expression.
|
BoundUnaryPredicate<T> |
|
ByteBuffers |
|
CachedClientPool |
|
CachingCatalog |
Class that wraps an Iceberg Catalog to cache tables.
|
Catalog |
A Catalog API for table create, drop, and load operations.
|
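A minimal sketch of the create/load/drop lifecycle, assuming an initialized Catalog named catalog and a previously defined schema and spec; the identifier is illustrative:

    TableIdentifier id = TableIdentifier.of("db", "events");
    Table created = catalog.createTable(id, schema, spec);  // create
    Table loaded = catalog.loadTable(id);                   // load
    boolean dropped = catalog.dropTable(id);                // drop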
Catalog.TableBuilder |
|
CatalogLoader |
Serializable loader to load an Iceberg Catalog.
|
CatalogLoader.CustomCatalogLoader |
|
CatalogLoader.HadoopCatalogLoader |
|
CatalogLoader.HiveCatalogLoader |
|
CatalogProperties |
|
Catalogs |
Class for catalog resolution and for accessing common functions of the Catalog API.
|
CatalogUtil |
|
CharSequenceSet |
|
CharSequenceWrapper |
Wrapper class to adapt CharSequence for use in maps and sets.
|
CheckCompatibility |
|
CherrypickAncestorCommitException |
This exception occurs when one cherrypicks an ancestor or when the picked snapshot is already linked to
a published ancestor.
|
ClientPool<C,E extends java.lang.Exception> |
|
ClientPool.Action<R,C,E extends java.lang.Exception> |
|
ClientPoolImpl<C,E extends java.lang.Exception> |
|
CloseableGroup |
This class acts as a helper for handling the closure of multiple resources.
|
CloseableIterable<T> |
|
CloseableIterable.ConcatCloseableIterable<E> |
|
CloseableIterator<T> |
|
ClosingIterator<T> |
A convenience wrapper around CloseableIterator , providing auto-close
functionality when all of the elements in the iterator are consumed.
|
ClusteredDataWriter<T> |
A data writer capable of writing to multiple specs and partitions that requires the incoming records
to be properly clustered by partition spec and by partition within each spec.
|
ClusteredEqualityDeleteWriter<T> |
An equality delete writer capable of writing to multiple specs and partitions that requires
the incoming delete records to be properly clustered by partition spec and by partition within each spec.
|
ClusteredPositionDeleteWriter<T> |
A position delete writer capable of writing to multiple specs and partitions that requires
the incoming delete records to be properly clustered by partition spec and by partition within each spec.
|
ColumnarBatch |
This class is inspired by Spark's ColumnarBatch.
|
ColumnarBatchReader |
VectorizedReader that returns Spark's ColumnarBatch to support Spark's vectorized read path.
|
ColumnIterator<T> |
|
ColumnVector |
This class is inspired by Spark's ColumnVector.
|
ColumnVectorWithFilter |
|
ColumnWriter<T> |
|
CombinedScanTask |
A scan task made of several ranges from files.
|
CommitFailedException |
Exception raised when a commit fails because of out-of-date metadata.
|
CommitStateUnknownException |
Exception for a failure to confirm either affirmatively or negatively that a commit was applied.
|
Comparators |
|
ConfigProperties |
|
Configurable<C> |
Interface used to avoid runtime dependencies on Hadoop Configurable.
|
Container<T> |
A simple container of objects that you can get and set.
|
ContentFile<F> |
|
Conversions |
|
ConvertEqualityDeleteFiles |
An action for converting equality delete files to position delete files.
|
ConvertEqualityDeleteFiles.Result |
The action result that contains a summary of the execution.
|
ConvertEqualityDeleteStrategy |
A strategy for the action that converts equality deletes to position deletes.
|
CreateSnapshotEvent |
|
DataFile |
Interface for data files listed in a table manifest.
|
DataFiles |
|
DataFiles.Builder |
|
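A sketch of describing an already-written file with DataFiles.Builder so it can be committed to a table; the path and stats below are illustrative:

    DataFile dataFile = DataFiles.builder(table.spec())
        .withPath("/warehouse/db/events/data/00000-0.parquet")
        .withFormat(FileFormat.PARQUET)
        .withFileSizeInBytes(1024L)
        .withRecordCount(100L)
        .build();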
DataFilesTable |
A Table implementation that exposes a table's data files as rows.
|
DataFilesTable.FilesTableScan |
|
DataIterator<T> |
|
DataOperations |
Data operations that produce snapshots.
|
DataReader<T> |
|
DataTableScan |
|
DataTask |
A task that returns data as rows instead of where to read data.
|
DataWriter<T> |
|
DataWriter<T> |
|
DataWriteResult |
A result of writing data files.
|
DateTimeUtil |
|
DecimalUtil |
|
DecoderResolver |
Resolver to resolve Decoder to a ResolvingDecoder.
|
DelegatingInputStream |
|
DelegatingOutputStream |
|
DeleteFile |
Interface for delete files listed in a table delete manifest.
|
DeleteFiles |
API for deleting files from a table.
|
DeleteFilter<T> |
|
DeleteOrphanFiles |
An action that deletes orphan files in a table.
|
DeleteOrphanFiles.Result |
The action result that contains a summary of the execution.
|
DeleteReachableFiles |
An action that deletes all files referenced by a table metadata file.
|
DeleteReachableFiles.Result |
The action result that contains a summary of the execution.
|
Deletes |
|
DeleteSchemaUtil |
|
DeleteWriteResult |
A result of writing delete files.
|
DeltaBatchWrite |
An interface that defines how to write a delta of rows during batch processing.
|
DeltaWrite |
A logical representation of a data source write that handles a delta of rows.
|
DeltaWriteBuilder |
An interface for building delta writes.
|
DeltaWriter<T> |
A data writer responsible for writing a delta of rows.
|
DeltaWriterFactory |
A factory for creating and initializing delta writers on the executor side.
|
DistributionMode |
Enum of supported write distribution modes; defines the write behavior of batch or streaming jobs.
|
DoubleFieldMetrics |
Iceberg internally tracked field level metrics, used by Parquet and ORC writers only.
|
DoubleFieldMetrics.Builder |
|
DuplicateWAPCommitException |
This exception occurs when the WAP workflow detects a duplicate WAP commit.
|
DynamoDbCatalog |
DynamoDB implementation of the Iceberg catalog.
|
DynClasses |
|
DynClasses.Builder |
|
DynConstructors |
Copied from parquet-common.
|
DynConstructors.Builder |
|
DynConstructors.Ctor<C> |
|
DynFields |
|
DynFields.BoundField<T> |
|
DynFields.Builder |
|
DynFields.StaticField<T> |
|
DynFields.UnboundField<T> |
Convenience wrapper class around Field.
|
DynMethods |
Copied from parquet-common.
|
DynMethods.BoundMethod |
|
DynMethods.Builder |
|
DynMethods.StaticMethod |
|
DynMethods.UnboundMethod |
Convenience wrapper class around Method.
|
EncryptedFiles |
|
EncryptedInputFile |
Thin wrapper around an InputFile instance that is encrypted.
|
EncryptedOutputFile |
Thin wrapper around an OutputFile that encrypts bytes written to the underlying
file system, via an encryption key that is symbolized by the enclosed
EncryptionKeyMetadata.
|
EncryptionKeyMetadata |
Light typedef over a ByteBuffer that indicates that the given bytes represent metadata about
an encrypted data file's encryption key.
|
EncryptionKeyMetadatas |
|
EncryptionManager |
Module for encrypting and decrypting table data files.
|
EqualityDeleteRowReader |
|
EqualityDeleteWriter<T> |
|
EqualityDeltaWriter<T> |
A writer capable of writing data and equality deletes that may belong to different specs and partitions.
|
Evaluator |
|
Exceptions |
|
ExceptionUtil |
|
ExceptionUtil.Block<R,E1 extends java.lang.Exception,E2 extends java.lang.Exception,E3 extends java.lang.Exception> |
|
ExceptionUtil.CatchBlock |
|
ExceptionUtil.FinallyBlock |
|
ExpireSnapshots |
An action that expires snapshots in a table.
|
ExpireSnapshots |
|
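For the table-API ExpireSnapshots, a sketch of expiring old snapshots; the retention numbers are illustrative:

    // Remove snapshots older than 7 days, but always keep the last 10.
    long cutoffMillis = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(7);
    table.expireSnapshots()
        .expireOlderThan(cutoffMillis)
        .retainLast(10)
        .commit();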
ExpireSnapshots.Result |
The action result that contains a summary of the execution.
|
ExpireSnapshotsProcedure |
A procedure that expires snapshots in a table.
|
Expression |
Represents a boolean expression tree.
|
Expression.Operation |
|
Expressions |
|
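A sketch of building an Expression tree with the Expressions factory methods; the column names are illustrative:

    Expression rowFilter = Expressions.and(
        Expressions.equal("event_type", "click"),
        Expressions.greaterThanOrEqual("id", 100L));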
ExpressionVisitors |
|
ExpressionVisitors.BoundExpressionVisitor<R> |
|
ExpressionVisitors.BoundVisitor<R> |
|
ExpressionVisitors.ExpressionVisitor<R> |
|
ExtendedLogicalWriteInfo |
A class that holds logical write information not covered by LogicalWriteInfo in Spark.
|
False |
|
FanoutDataWriter<T> |
A data writer capable of writing to multiple specs and partitions that keeps data writers for each
seen spec/partition pair open until this writer is closed.
|
FieldMetrics<T> |
Iceberg internally tracked field level metrics.
|
FileAppender<D> |
|
FileAppenderFactory<T> |
|
FileContent |
Content type stored in a file, one of DATA, POSITION_DELETES, or EQUALITY_DELETES.
|
FileFormat |
Enum of supported file formats.
|
FileIO |
Pluggable module for reading, writing, and deleting files.
|
FileMetadata |
|
FileMetadata.Builder |
|
FileRewriteCoordinator |
|
Files |
|
FileScanTask |
A scan task over a range of a single file.
|
FileScanTaskReader<T> |
|
FileScanTaskSetManager |
|
FileWriter<T,R> |
A writer capable of writing files of a single type (i.e. data or delete files).
|
FileWriterFactory<T> |
A factory for creating data and delete writers.
|
Filter<T> |
A class for generic filters.
|
FilterIterator<T> |
An Iterator that filters another Iterator.
|
FindFiles |
|
FindFiles.Builder |
|
FixupTypes |
This is used to fix primitive types to match a table schema.
|
FlinkAppenderFactory |
|
FlinkAvroReader |
|
FlinkAvroWriter |
|
FlinkCatalog |
A Flink Catalog implementation that wraps an Iceberg Catalog.
|
FlinkCatalogFactory |
A Flink Catalog factory implementation that creates FlinkCatalog.
|
FlinkCompatibilityUtil |
A small utility class that tries to hide calls to Flink Internal or PublicEvolving
interfaces, as Flink can change those APIs during minor version releases.
|
FlinkConfigOptions |
|
FlinkDynamicTableFactory |
|
FlinkFilters |
|
FlinkInputFormat |
Flink InputFormat for Iceberg.
|
FlinkInputSplit |
TODO Implement LocatableInputSplit.
|
FlinkOrcReader |
|
FlinkOrcWriter |
|
FlinkParquetReaders |
|
FlinkParquetWriters |
|
FlinkSchemaUtil |
Converter between Flink types and Iceberg types.
|
FlinkSink |
|
FlinkSink.Builder |
|
FlinkSource |
|
FlinkSource.Builder |
Source builder to build DataStream.
|
FlinkSplitPlanner |
|
FlinkTypeVisitor<T> |
|
FlinkValueReaders |
|
FlinkValueWriters |
|
FloatFieldMetrics |
Iceberg internally tracked field level metrics, used by Parquet and ORC writers only.
|
FloatFieldMetrics.Builder |
|
GCPProperties |
|
GCSFileIO |
FileIO implementation backed by Google Cloud Storage (GCS).
|
GCSInputFile |
|
GCSOutputFile |
|
GenericAppenderFactory |
|
GenericArrowVectorAccessorFactory<DecimalT,Utf8StringT,ArrayT,ChildVectorT extends java.lang.AutoCloseable> |
|
GenericArrowVectorAccessorFactory.ArrayFactory<ChildVectorT,ArrayT> |
Create an array value of type ArrayT from an Arrow vector value.
|
GenericArrowVectorAccessorFactory.DecimalFactory<DecimalT> |
Create a decimal value of type DecimalT from an Arrow vector value.
|
GenericArrowVectorAccessorFactory.StringFactory<Utf8StringT> |
Create a UTF8 string value of type Utf8StringT from an Arrow vector value.
|
GenericArrowVectorAccessorFactory.StructChildFactory<ChildVectorT> |
Create a struct child vector of type ChildVectorT from an Arrow vector value.
|
GenericDeleteFilter |
|
GenericManifestFile |
|
GenericManifestFile.CopyBuilder |
|
GenericOrcReader |
|
GenericOrcReaders |
|
GenericOrcWriter |
|
GenericOrcWriters |
|
GenericOrcWriters.StructWriter<S> |
|
GenericParquetReaders |
|
GenericParquetWriter |
|
GenericPartitionFieldSummary |
|
GenericRecord |
|
GlueCatalog |
|
GuavaClasses |
|
HadoopCatalog |
HadoopCatalog provides a way to use table names like db.table to work with path-based tables under a common
location.
|
HadoopConfigurable |
An interface that extends the Hadoop Configurable interface to offer better serialization support for
customizable Iceberg objects such as FileIO.
|
HadoopFileIO |
|
HadoopInputFile |
InputFile implementation using the Hadoop FileSystem API.
|
HadoopOutputFile |
OutputFile implementation using the Hadoop FileSystem API.
|
HadoopTableOperations |
TableOperations implementation for file systems that support atomic rename.
|
HadoopTables |
Implementation of Iceberg tables that uses the Hadoop FileSystem
to store metadata and manifests.
|
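A sketch of creating and loading a path-based table with HadoopTables; the warehouse path is illustrative and schema/spec are assumed to be defined:

    HadoopTables tables = new HadoopTables(new Configuration());
    Table created = tables.create(schema, spec, "hdfs://nn:8020/warehouse/db/events");
    Table loaded = tables.load("hdfs://nn:8020/warehouse/db/events");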
HasTableOperations |
Used to expose a table's TableOperations.
|
HiddenPathFilter |
A PathFilter that filters out hidden paths.
|
HistoryEntry |
Table history entry.
|
HistoryTable |
A Table implementation that exposes a table's history as rows.
|
HiveCatalog |
|
HiveClientPool |
|
HiveIcebergFilterFactory |
|
HiveIcebergInputFormat |
|
HiveIcebergMetaHook |
|
HiveIcebergOutputCommitter |
An Iceberg table committer for adding data files to Iceberg tables.
|
HiveIcebergOutputFormat<T> |
|
HiveIcebergSerDe |
|
HiveIcebergSplit |
|
HiveIcebergStorageHandler |
|
HiveSchemaUtil |
|
HiveTableOperations |
TODO we should be able to extract some more commonalities to BaseMetastoreTableOperations to
avoid code duplication between this class and Metacat Tables.
|
IcebergArrowColumnVector |
Implementation of Spark's ColumnVector interface.
|
IcebergBinaryObjectInspector |
|
IcebergDateObjectInspector |
|
IcebergDecimalObjectInspector |
|
IcebergDecoder<D> |
|
IcebergEncoder<D> |
|
IcebergFixedObjectInspector |
|
IcebergGenerics |
|
IcebergGenerics.ScanBuilder |
|
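A sketch of reading rows back as generic Records through IcebergGenerics.ScanBuilder; the filter is illustrative, and the enclosing method is assumed to handle the IOException from close():

    try (CloseableIterable<Record> rows = IcebergGenerics.read(table)
            .where(Expressions.equal("id", 100L))
            .build()) {
      rows.forEach(System.out::println);
    }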
IcebergInputFormat<T> |
Generic MRv2 InputFormat API for Iceberg.
|
IcebergObjectInspector |
|
IcebergPigInputFormat<T> |
|
IcebergRecordObjectInspector |
|
IcebergSource |
The IcebergSource loads/writes tables with format "iceberg".
|
IcebergSourceSplit |
|
IcebergSourceSplitSerializer |
TODO: use Java serialization for now.
|
IcebergSpark |
|
IcebergSplit |
|
IcebergSplitContainer |
|
IcebergSqlExtensionsBaseListener |
This class provides an empty implementation of IcebergSqlExtensionsListener ,
which can be extended to create a listener which only needs to handle a subset
of the available methods.
|
IcebergSqlExtensionsBaseVisitor<T> |
This class provides an empty implementation of IcebergSqlExtensionsVisitor ,
which can be extended to create a visitor which only needs to handle a subset
of the available methods.
|
IcebergSqlExtensionsLexer |
|
IcebergSqlExtensionsListener |
|
IcebergSqlExtensionsParser |
|
IcebergSqlExtensionsParser.AddPartitionFieldContext |
|
IcebergSqlExtensionsParser.ApplyTransformContext |
|
IcebergSqlExtensionsParser.BigDecimalLiteralContext |
|
IcebergSqlExtensionsParser.BigIntLiteralContext |
|
IcebergSqlExtensionsParser.BooleanLiteralContext |
|
IcebergSqlExtensionsParser.BooleanValueContext |
|
IcebergSqlExtensionsParser.CallArgumentContext |
|
IcebergSqlExtensionsParser.CallContext |
|
IcebergSqlExtensionsParser.ConstantContext |
|
IcebergSqlExtensionsParser.DecimalLiteralContext |
|
IcebergSqlExtensionsParser.DoubleLiteralContext |
|
IcebergSqlExtensionsParser.DropIdentifierFieldsContext |
|
IcebergSqlExtensionsParser.DropPartitionFieldContext |
|
IcebergSqlExtensionsParser.ExponentLiteralContext |
|
IcebergSqlExtensionsParser.ExpressionContext |
|
IcebergSqlExtensionsParser.FieldListContext |
|
IcebergSqlExtensionsParser.FloatLiteralContext |
|
IcebergSqlExtensionsParser.IdentifierContext |
|
IcebergSqlExtensionsParser.IdentityTransformContext |
|
IcebergSqlExtensionsParser.IntegerLiteralContext |
|
IcebergSqlExtensionsParser.MultipartIdentifierContext |
|
IcebergSqlExtensionsParser.NamedArgumentContext |
|
IcebergSqlExtensionsParser.NonReservedContext |
|
IcebergSqlExtensionsParser.NumberContext |
|
IcebergSqlExtensionsParser.NumericLiteralContext |
|
IcebergSqlExtensionsParser.OrderContext |
|
IcebergSqlExtensionsParser.OrderFieldContext |
|
IcebergSqlExtensionsParser.PositionalArgumentContext |
|
IcebergSqlExtensionsParser.QuotedIdentifierAlternativeContext |
|
IcebergSqlExtensionsParser.QuotedIdentifierContext |
|
IcebergSqlExtensionsParser.ReplacePartitionFieldContext |
|
IcebergSqlExtensionsParser.SetIdentifierFieldsContext |
|
IcebergSqlExtensionsParser.SetWriteDistributionAndOrderingContext |
|
IcebergSqlExtensionsParser.SingleStatementContext |
|
IcebergSqlExtensionsParser.SmallIntLiteralContext |
|
IcebergSqlExtensionsParser.StatementContext |
|
IcebergSqlExtensionsParser.StringLiteralContext |
|
IcebergSqlExtensionsParser.StringMapContext |
|
IcebergSqlExtensionsParser.TinyIntLiteralContext |
|
IcebergSqlExtensionsParser.TransformArgumentContext |
|
IcebergSqlExtensionsParser.TransformContext |
|
IcebergSqlExtensionsParser.TypeConstructorContext |
|
IcebergSqlExtensionsParser.UnquotedIdentifierContext |
|
IcebergSqlExtensionsParser.WriteDistributionSpecContext |
|
IcebergSqlExtensionsParser.WriteOrderingSpecContext |
|
IcebergSqlExtensionsParser.WriteSpecContext |
|
IcebergSqlExtensionsVisitor<T> |
|
IcebergStorage |
|
IcebergTableSink |
|
IcebergTableSource |
Flink Iceberg table source.
|
IcebergTimeObjectInspector |
|
IcebergTimestampObjectInspector |
|
IcebergTimestampWithZoneObjectInspector |
|
IcebergUUIDObjectInspector |
|
IdentityPartitionConverters |
|
InclusiveMetricsEvaluator |
|
IncrementalScanEvent |
Event sent to listeners when an incremental table scan is planned.
|
IndexByName |
|
IndexParents |
|
InputFile |
|
InputFilesDecryptor |
|
InputFormatConfig |
|
InputFormatConfig.ConfigBuilder |
|
InputFormatConfig.InMemoryDataModel |
|
InternalRecordWrapper |
|
IsolationLevel |
An isolation level in a table.
|
JavaHash<T> |
|
JavaHashes |
|
JdbcCatalog |
|
JobGroupInfo |
Captures information about the current job, which is used for display on the UI.
|
JobGroupUtils |
|
JsonUtil |
|
Listener<E> |
A listener interface that can receive notifications.
|
Listeners |
Static registration and notification for listeners.
|
Literal<T> |
Represents a literal fixed value in an expression predicate.
|
LocationProvider |
Interface for providing data file locations to write tasks.
|
LocationProviders |
|
LockManager |
An interface for locking, used to ensure commit isolation.
|
LockManagers |
|
LockManagers.BaseLockManager |
|
LogicalMap |
|
ManageSnapshots |
API for managing snapshots.
|
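A sketch of rolling back via ManageSnapshots, assuming a known-good snapshot ID:

    table.manageSnapshots()
        .rollbackTo(snapshotId)  // snapshotId is assumed to exist
        .commit();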
ManifestContent |
Content type stored in a manifest file, either DATA or DELETES.
|
ManifestEntriesTable |
A Table implementation that exposes a table's manifest entries as rows, for both delete and data files.
|
ManifestEvaluator |
|
ManifestFile |
Represents a manifest file that can be scanned to find data files in a table.
|
ManifestFile.PartitionFieldSummary |
Summarizes the values of one partition field stored in a manifest file.
|
ManifestFileBean |
|
ManifestFiles |
|
ManifestFileUtil |
|
ManifestReader<F extends ContentFile<F>> |
Base reader for data and delete manifest files.
|
ManifestReader.FileType |
|
ManifestsTable |
A Table implementation that exposes a table's manifest files as rows.
|
ManifestWriter<F extends ContentFile<F>> |
Writer for manifest files.
|
MappedField |
An immutable mapping between a field ID and a set of names.
|
MappedFields |
|
MappingUtil |
|
MapredIcebergInputFormat<T> |
Generic MR v1 InputFormat API for Iceberg.
|
MapredIcebergInputFormat.CompatibilityTaskAttemptContextImpl |
|
MetadataColumns |
|
MetadataTableType |
|
MetadataTableUtils |
|
MetadataUpdate |
Represents a change to table metadata.
|
MetadataUpdate.AddPartitionSpec |
|
MetadataUpdate.AddSchema |
|
MetadataUpdate.AddSnapshot |
|
MetadataUpdate.AddSortOrder |
|
MetadataUpdate.AssignUUID |
|
MetadataUpdate.RemoveProperties |
|
MetadataUpdate.RemoveSnapshot |
|
MetadataUpdate.SetCurrentSchema |
|
MetadataUpdate.SetCurrentSnapshot |
|
MetadataUpdate.SetDefaultPartitionSpec |
|
MetadataUpdate.SetDefaultSortOrder |
|
MetadataUpdate.SetLocation |
|
MetadataUpdate.SetProperties |
|
MetadataUpdate.UpgradeFormatVersion |
|
MetastoreUtil |
|
Metrics |
Iceberg file format metrics.
|
MetricsAwareDatumWriter<D> |
Wrapper writer around DatumWriter with metrics support.
|
MetricsConfig |
|
MetricsModes |
This class defines different metrics modes, which allow users to control the collection of
value_counts, null_value_counts, nan_value_counts, lower_bounds, upper_bounds for different columns in metadata.
|
MetricsModes.Counts |
Under this mode, only value_counts, null_value_counts, nan_value_counts are persisted.
|
MetricsModes.Full |
Under this mode, value_counts, null_value_counts, nan_value_counts
and full lower_bounds, upper_bounds are persisted.
|
MetricsModes.MetricsMode |
A metrics calculation mode.
|
MetricsModes.None |
Under this mode, value_counts, null_value_counts, nan_value_counts, lower_bounds, upper_bounds are not persisted.
|
MetricsModes.Truncate |
Under this mode, value_counts, null_value_counts, nan_value_counts
and truncated lower_bounds, upper_bounds are persisted.
|
MetricsUtil |
|
MicroBatches |
|
MicroBatches.MicroBatch |
|
MicroBatches.MicroBatchBuilder |
|
MigrateTable |
An action that migrates an existing table to Iceberg.
|
MigrateTable.Result |
The action result that contains a summary of the execution.
|
NamedReference<T> |
|
NameMapping |
Represents a mapping from external schema names to Iceberg type IDs.
|
NameMappingParser |
Parses external name mappings from a JSON representation.
|
Namespace |
|
NamespaceNotEmptyException |
Exception raised when attempting to drop a namespace that is not empty.
|
NaNUtil |
|
NessieCatalog |
Nessie implementation of Iceberg Catalog.
|
NessieTableOperations |
Nessie implementation of Iceberg TableOperations.
|
NessieUtil |
|
NoSuchIcebergTableException |
NoSuchTableException thrown when a table is found but it is not an Iceberg table.
|
NoSuchNamespaceException |
Exception raised when attempting to load a namespace that does not exist.
|
NoSuchProcedureException |
|
NoSuchTableException |
Exception raised when attempting to load a table that does not exist.
|
Not |
|
NotFoundException |
Exception raised when attempting to read a file that does not exist.
|
NullabilityHolder |
Instances of this class simply track whether a value at an index is null.
|
NullOrder |
|
Or |
|
ORC |
|
ORC.DataWriteBuilder |
|
ORC.DeleteWriteBuilder |
|
ORC.ReadBuilder |
|
ORC.WriteBuilder |
|
OrcBatchReader<T> |
Used for implementing ORC batch readers.
|
OrcMetrics |
|
OrcRowReader<T> |
Used for implementing ORC row readers.
|
OrcRowWriter<T> |
Writes data values of a schema.
|
ORCSchemaUtil |
Utilities for mapping Iceberg to ORC schemas.
|
ORCSchemaUtil.BinaryType |
|
ORCSchemaUtil.LongType |
|
OrcSchemaVisitor<T> |
Generic visitor of an ORC Schema.
|
OrcSchemaWithTypeVisitor<T> |
|
OrcValueReader<T> |
|
OrcValueReaders |
|
OrcValueReaders.StructReader<T> |
|
OrcValueWriter<T> |
|
OSSFileIO |
FileIO implementation backed by OSS.
|
OSSInputFile |
|
OSSInputStream |
|
OSSOutputStream |
|
OSSURI |
This class represents a fully qualified location in OSS for input/output
operations, expressed as a URI.
|
OutputFile |
|
OutputFileFactory |
Factory responsible for generating unique but recognizable data file names.
|
OutputFileFactory.Builder |
|
OverwriteFiles |
API for overwriting files in a table.
|
Pair<X,Y> |
|
ParallelIterable<T> |
|
Parquet |
|
Parquet.DataWriteBuilder |
|
Parquet.DeleteWriteBuilder |
|
Parquet.ReadBuilder |
|
Parquet.WriteBuilder |
|
ParquetAvroReader |
|
ParquetAvroValueReaders |
|
ParquetAvroValueReaders.TimeMillisReader |
|
ParquetAvroValueReaders.TimestampMillisReader |
|
ParquetAvroWriter |
|
ParquetDictionaryRowGroupFilter |
|
ParquetIterable<T> |
|
ParquetMetricsRowGroupFilter |
|
ParquetReader<T> |
|
ParquetSchemaUtil |
|
ParquetSchemaUtil.HasIds |
|
ParquetTypeVisitor<T> |
|
ParquetUtil |
|
ParquetValueReader<T> |
|
ParquetValueReaders |
|
ParquetValueReaders.BinaryAsDecimalReader |
|
ParquetValueReaders.ByteArrayReader |
|
ParquetValueReaders.BytesReader |
|
ParquetValueReaders.FloatAsDoubleReader |
|
ParquetValueReaders.IntAsLongReader |
|
ParquetValueReaders.IntegerAsDecimalReader |
|
ParquetValueReaders.ListReader<E> |
|
ParquetValueReaders.LongAsDecimalReader |
|
ParquetValueReaders.MapReader<K,V> |
|
ParquetValueReaders.PrimitiveReader<T> |
|
ParquetValueReaders.RepeatedKeyValueReader<M,I,K,V> |
|
ParquetValueReaders.RepeatedReader<T,I,E> |
|
ParquetValueReaders.ReusableEntry<K,V> |
|
ParquetValueReaders.StringReader |
|
ParquetValueReaders.StructReader<T,I> |
|
ParquetValueReaders.UnboxedReader<T> |
|
ParquetValueWriter<T> |
|
ParquetValueWriters |
|
ParquetValueWriters.PositionDeleteStructWriter<R> |
|
ParquetValueWriters.PrimitiveWriter<T> |
|
ParquetValueWriters.RepeatedKeyValueWriter<M,K,V> |
|
ParquetValueWriters.RepeatedWriter<L,E> |
|
ParquetValueWriters.StructWriter<S> |
|
ParquetWithFlinkSchemaVisitor<T> |
|
ParquetWithSparkSchemaVisitor<T> |
Visitor for traversing a Parquet type with a companion Spark type.
|
ParquetWriteAdapter<D> |
Deprecated.
|
PartitionedFanoutWriter<T> |
|
PartitionedWriter<T> |
|
PartitionField |
|
Partitioning |
|
PartitioningWriter<T,R> |
A writer capable of writing files of a single type (i.e. data or delete files) to multiple specs and partitions.
|
PartitionKey |
A struct of partition values.
|
PartitionSet |
|
PartitionSpec |
Represents how to produce partition data for a table.
|
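A sketch of building a spec, assuming a schema with a timestamp column ts and a long column id:

    PartitionSpec spec = PartitionSpec.builderFor(schema)
        .day("ts")         // daily partitions
        .bucket("id", 16)  // 16 hash buckets
        .build();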
PartitionSpec.Builder |
|
PartitionSpecParser |
|
PartitionSpecVisitor<T> |
|
PartitionsTable |
A Table implementation that exposes a table's partitions as rows.
|
PartitionUtil |
|
PathIdentifier |
|
PendingUpdate<T> |
API for table metadata changes.
|
PigParquetReader |
|
PlaintextEncryptionManager |
|
PositionDelete<R> |
|
PositionDeleteIndex |
|
PositionDeleteWriter<T> |
|
PositionDeltaWriter<T> |
A writer capable of writing data and position deletes that may belong to different specs and partitions.
|
PositionOutputStream |
|
Predicate<T,C extends Term> |
|
Procedure |
An interface representing a stored procedure available for execution.
|
ProcedureCatalog |
A catalog API for working with stored procedures.
|
ProcedureParameter |
|
ProjectionDatumReader<D> |
|
Projections |
Utils to project expressions on rows to expressions on partitions.
|
Projections.ProjectionEvaluator |
A class that projects expressions for a table's data rows into expressions on the table's
partition values, for a table's partition spec.
|
PropertyUtil |
|
PruneColumnsWithoutReordering |
|
PruneColumnsWithReordering |
|
ReachableFileUtil |
|
Record |
|
Reference<T> |
|
RemoveIds |
|
RemoveIds |
|
RemoveOrphanFilesProcedure |
A procedure that removes orphan files in a table.
|
ReplacePartitions |
Not recommended: API for overwriting files in a table by partition.
|
ReplaceSortOrder |
API for replacing table sort order with a newly created order.
|
ResidualEvaluator |
|
ResolvingFileIO |
FileIO implementation that uses location scheme to choose the correct FileIO implementation.
|
RewriteDataFiles |
An action for rewriting data files according to a rewrite strategy.
|
RewriteDataFiles.FileGroupInfo |
A description of a file group, when it was processed, and within which partition.
|
RewriteDataFiles.FileGroupRewriteResult |
For a particular file group, the number of files which are newly created and the number of files
which were formerly part of the table but have been rewritten.
|
RewriteDataFiles.Result |
A map of file group information to the results of rewriting that file group.
|
RewriteDataFilesAction |
|
RewriteDataFilesActionResult |
|
RewriteDataFilesCommitManager |
Functionality used by RewriteDataFile Actions from different platforms to handle commits.
|
RewriteFileGroup |
Container class representing a set of files to be rewritten by a RewriteAction and the new files which have been
written by the action.
|
RewriteFiles |
API for replacing files in a table.
|
RewriteManifests |
An action that rewrites manifests.
|
RewriteManifests |
API for rewriting manifests for a table.
|
RewriteManifests.Result |
The action result that contains a summary of the execution.
|
RewritePositionDeleteFiles |
An action for rewriting position delete files.
|
RewritePositionDeleteFiles.Result |
The action result that contains a summary of the execution.
|
RewritePositionDeleteStrategy |
A strategy for an action to rewrite position delete files.
|
RewriteStrategy |
|
Rollback |
API for rolling table data back to the state at an older table snapshot.
|
RollbackStagedTable |
An implementation of StagedTable that mimics the behavior of Spark's non-atomic CTAS and RTAS.
|
RollingDataWriter<T> |
A rolling data writer that splits incoming data into multiple files within one spec/partition
based on the target file size.
|
RollingEqualityDeleteWriter<T> |
A rolling equality delete writer that splits incoming deletes into multiple files within one spec/partition
based on the target file size.
|
RollingPositionDeleteWriter<T> |
A rolling position delete writer that splits incoming deletes into multiple files within one spec/partition
based on the target file size.
|
RowDataFileScanTaskReader |
|
RowDataProjection |
|
RowDataRewriter |
|
RowDataRewriter |
|
RowDataRewriter.RewriteMap |
|
RowDataTaskWriterFactory |
|
RowDataUtil |
|
RowDataWrapper |
|
RowDelta |
API for encoding row-level changes to a table.
|
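A sketch of a row-delta commit, assuming a data file and a delete file that were already written:

    table.newRowDelta()
        .addRows(dataFile)
        .addDeletes(deleteFile)
        .commit();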
RowLevelOperation |
A logical representation of a data source DELETE, UPDATE, or MERGE operation that requires
rewriting data.
|
RowLevelOperation.Command |
The SQL operation being performed.
|
RowLevelOperationBuilder |
An interface for building a row-level operation.
|
RowLevelOperationInfo |
An interface with logical information for a row-level operation such as DELETE or MERGE.
|
RowLevelOperationMode |
Iceberg supports two ways to modify records in a table: copy-on-write and merge-on-read.
|
RowPositionColumnVector |
|
RuntimeIOException |
Deprecated.
|
RuntimeMetaException |
Exception used to wrap MetaException as a RuntimeException and add context.
|
S3FileIO |
FileIO implementation backed by S3.
|
S3InputFile |
|
S3OutputFile |
|
S3RequestUtil |
|
ScanEvent |
Event sent to listeners when a table scan is planned.
|
ScanSummary |
|
ScanSummary.Builder |
|
ScanSummary.PartitionMetrics |
|
ScanTask |
A scan task.
|
Schema |
The schema of a data table.
|
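A sketch of declaring a schema; the field IDs and names are illustrative:

    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));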
SchemaParser |
|
SchemaUtil |
|
SchemaWithPartnerVisitor<P,R> |
|
SchemaWithPartnerVisitor.PartnerAccessors<P> |
|
SeekableInputStream |
SeekableInputStream is an interface with the methods needed to read data from a file or
Hadoop data stream.
|
SerializableConfiguration |
Wraps a Configuration object in a Serializable layer.
|
SerializableMap<K,V> |
|
SerializableSupplier<T> |
|
SerializableTable |
A read-only serializable table that can be sent to other nodes in a cluster.
|
SerializationUtil |
|
SetLocation |
|
Snapshot |
A snapshot of the data in a table at a point in time.
|
SnapshotManager |
|
SnapshotParser |
|
SnapshotsTable |
A Table implementation that exposes a table's known snapshots as rows.
|
SnapshotSummary |
|
SnapshotSummary.Builder |
|
SnapshotTable |
An action that creates an independent snapshot of an existing table.
|
SnapshotTable.Result |
The action result that contains a summary of the execution.
|
SnapshotUpdate<ThisT,R> |
An action that produces snapshots.
|
SnapshotUpdate<ThisT> |
API for table changes that produce snapshots.
|
SnapshotUpdateAction<ThisT,R> |
|
SnapshotUtil |
|
SortDirection |
|
SortedMerge<T> |
An Iterable that merges the items from other Iterables in order.
|
SortField |
|
SortOrder |
A sort order that defines how data and delete files should be ordered in a table.
|
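A sketch of building a sort order over hypothetical columns id and ts:

    SortOrder order = SortOrder.builderFor(schema)
        .asc("id", NullOrder.NULLS_FIRST)
        .desc("ts")
        .build();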
SortOrder.Builder |
|
SortOrderBuilder<R> |
Methods for building a sort order.
|
SortOrderParser |
|
SortOrderUtil |
|
SortOrderVisitor<T> |
|
SortStrategy |
A rewrite strategy for data files which aims to reorder data within data files to optimally lay it out
in relation to a column.
|
Spark3BinPackStrategy |
|
Spark3SortStrategy |
|
Spark3Util |
|
Spark3Util.CatalogAndIdentifier |
This mimics a class inside of Spark that is private to LookupCatalog.
|
Spark3Util.DescribeSchemaVisitor |
|
SparkActions |
|
SparkAvroReader |
|
SparkAvroWriter |
|
SparkCatalog |
A Spark TableCatalog implementation that wraps an Iceberg Catalog.
|
SparkDataFile |
|
SparkDistributionAndOrderingUtil |
|
SparkExceptionUtil |
|
SparkFilters |
|
SparkMetadataColumn |
|
SparkMicroBatchStream |
|
SparkOrcReader |
Converts the OrcIterator, which returns ORC's VectorizedRowBatch, to a
set of Spark's UnsafeRows.
|
SparkOrcValueReaders |
|
SparkOrcWriter |
This class acts as an adaptor from an OrcFileAppender to a
FileAppender<InternalRow>.
|
SparkParquetReaders |
|
SparkParquetWriters |
|
SparkPartitionedFanoutWriter |
|
SparkPartitionedWriter |
|
SparkProcedures |
|
SparkProcedures.ProcedureBuilder |
|
SparkReadConf |
A class for common Iceberg configs for Spark reads.
|
SparkReadOptions |
Spark DataFrame read options.
|
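For example, a time-travel read expressed with these options; the snapshot ID and table name are illustrative:

    Dataset<Row> df = spark.read()
        .format("iceberg")
        .option("snapshot-id", 10963874102873L)
        .load("db.events");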
SparkScanBuilder |
|
SparkSchemaUtil |
Helper methods for working with Spark/Hive metadata.
|
SparkSessionCatalog<T extends org.apache.spark.sql.connector.catalog.TableCatalog & org.apache.spark.sql.connector.catalog.SupportsNamespaces> |
A Spark catalog that can also load non-Iceberg tables.
|
SparkSQLProperties |
|
SparkStructLike |
|
SparkTable |
|
SparkTableUtil |
Java version of the original SparkTableUtil.scala
https://github.com/apache/iceberg/blob/apache-iceberg-0.8.0-incubating/spark/src/main/scala/org/apache/iceberg/spark/SparkTableUtil.scala
|
SparkTableUtil.SparkPartition |
Class representing a table partition.
|
SparkUtil |
|
SparkValueConverter |
A utility class that converts Spark values to Iceberg's internal representation.
|
SparkValueReaders |
|
SparkValueWriters |
|
SparkWriteConf |
A class for common Iceberg configs for Spark writes.
|
SparkWriteOptions |
Spark DataFrame write options.
|
StagedSparkTable |
|
StaticTableOperations |
TableOperations implementation that provides access to metadata for a Table at some point in time, using a
table metadata location.
|
StreamingMonitorFunction |
This is the single (non-parallel) monitoring task which takes a FlinkInputFormat; it is
responsible for monitoring snapshots of the Iceberg table, creating the splits
corresponding to the incremental files, and assigning them to downstream tasks for
further processing.
|
StreamingReaderOperator |
|
StrictMetricsEvaluator |
|
StructLike |
Interface for accessing data by position in a schema.
|
StructLikeMap<T> |
|
StructLikeSet |
|
StructLikeWrapper |
Wrapper to adapt StructLike for use in maps and sets by implementing equals and hashCode.
|
StructProjection |
|
SupportsDelta |
A mix-in interface for RowLevelOperation.
|
SupportsNamespaces |
Catalog methods for working with namespaces.
|
SupportsRowLevelOperations |
A mix-in interface for row-level operations support.
|
SupportsRowPosition |
Interface for readers that accept a callback to determine the starting row position of an Avro split.
|
SystemProperties |
Configuration properties that are controlled by Java system properties.
|
Table |
Represents a table.
|
TableIdentifier |
Identifies a table in an Iceberg catalog.
|
TableLoader |
Serializable loader to load an Iceberg Table.
|
TableLoader.CatalogTableLoader |
|
TableLoader.HadoopTableLoader |
|
TableMetadata |
Metadata for a table.
|
TableMetadata.Builder |
|
TableMetadata.MetadataLogEntry |
|
TableMetadata.SnapshotLogEntry |
|
TableMetadataParser |
|
TableMetadataParser.Codec |
|
TableMigrationUtil |
|
TableOperations |
SPI interface to abstract table metadata access and updates.
|
TableProperties |
|
Tables |
Generic interface for creating and loading a table implementation.
|
TableScan |
API for configuring a table scan.
|
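A sketch of planning a scan; the filter and projection are illustrative, and the enclosing method is assumed to handle the IOException from close():

    try (CloseableIterable<FileScanTask> tasks = table.newScan()
            .filter(Expressions.equal("id", 100L))
            .select("id", "data")
            .planFiles()) {
      tasks.forEach(task -> System.out.println(task.file().path()));
    }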
TableScanUtil |
|
Tasks |
|
Tasks.Builder<I> |
|
Tasks.FailureTask<I,E extends java.lang.Exception> |
|
Tasks.Task<I,E extends java.lang.Exception> |
|
Tasks.UnrecoverableException |
|
TaskWriter<T> |
A writer interface that accepts records and provides the generated data files.
|
TaskWriterFactory<T> |
|
Term |
An expression that evaluates to a value.
|
TezUtil |
|
ThreadPools |
|
Transaction |
A transaction for performing multiple updates to a table.
|
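A sketch of grouping several updates into one atomic commit; the property and data file are illustrative:

    Transaction txn = table.newTransaction();
    txn.updateProperties().set("comment", "nightly load").commit();
    txn.newAppend().appendFile(dataFile).commit();
    txn.commitTransaction();  // changes become visible only here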
Transactions |
|
Transform<S,T> |
A transform function used for partitioning.
|
Transforms |
Factory methods for transforms.
|
TripleWriter<T> |
|
True |
|
Type |
|
Type.NestedType |
|
Type.PrimitiveType |
|
Type.TypeID |
|
Types |
|
Types.BinaryType |
|
Types.BooleanType |
|
Types.DateType |
|
Types.DecimalType |
|
Types.DoubleType |
|
Types.FixedType |
|
Types.FloatType |
|
Types.IntegerType |
|
Types.ListType |
|
Types.LongType |
|
Types.MapType |
|
Types.NestedField |
|
Types.StringType |
|
Types.StructType |
|
Types.TimestampType |
|
Types.TimeType |
|
Types.UUIDType |
|
TypeToMessageType |
|
TypeUtil |
|
TypeUtil.CustomOrderSchemaVisitor<T> |
|
TypeUtil.NextID |
Interface for passing a function that assigns column IDs.
|
TypeUtil.SchemaVisitor<T> |
|
TypeWithSchemaVisitor<T> |
Visitor for traversing a Parquet type with a companion Iceberg type.
|
Unbound<T,B> |
Represents an unbound expression node.
|
UnboundPredicate<T> |
|
UnboundTerm<T> |
Represents an unbound term.
|
UnboundTransform<S,T> |
|
UncheckedInterruptedException |
|
UncheckedSQLException |
|
UnicodeUtil |
|
UnionByNameVisitor |
Visitor class that accumulates the set of changes needed to evolve an existing schema into the union of the
existing and a new schema.
|
UnknownTransform<S,T> |
|
UnpartitionedWriter<T> |
|
UpdateLocation |
API for setting a table's base location.
|
UpdatePartitionSpec |
API for partition spec evolution.
|
UpdateProperties |
API for updating table properties.
|
UpdateSchema |
API for schema evolution.
|
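A sketch of schema evolution; the column names are illustrative:

    table.updateSchema()
        .addColumn("category", Types.StringType.get())
        .renameColumn("data", "payload")
        .commit();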
Util |
|
UUIDConversion |
|
UUIDUtil |
|
ValidationException |
Exception raised when validation checks fail.
|
ValueReader<T> |
|
ValueReaders |
|
ValueReaders.StructReader<S> |
|
ValuesAsBytesReader |
Implements a ValuesReader specifically to read a given number of bytes from the underlying ByteBufferInputStream.
|
ValueWriter<D> |
|
ValueWriters |
|
ValueWriters.StructWriter<S> |
|
VectorHolder |
Container class for holding the Arrow vector storing a batch of values along with other state needed for reading
values out of it.
|
VectorHolder.ConstantVectorHolder<T> |
A vector holder which does not actually produce values; consumers of this class should
use the constantValue to populate their ColumnVector implementation.
|
VectorHolder.PositionVectorHolder |
|
VectorizedArrowReader |
|
VectorizedArrowReader.ConstantVectorReader<T> |
A dummy vector reader which doesn't actually read files; instead it returns a dummy
VectorHolder which indicates the constant value that should be used for this column.
|
VectorizedColumnIterator |
Vectorized version of the ColumnIterator that reads column values in data pages of a column in a row group in a
batched fashion.
|
VectorizedDictionaryEncodedParquetValuesReader |
This decoder reads Parquet dictionary encoded data in a vectorized fashion.
|
VectorizedPageIterator |
|
VectorizedParquetDefinitionLevelReader |
|
VectorizedParquetReader<T> |
|
VectorizedReader<T> |
Interface for vectorized Iceberg readers.
|
VectorizedReaderBuilder |
|
VectorizedRowBatchIterator |
An adaptor so that the ORC RecordReader can be used as an Iterator.
|
VectorizedSparkOrcReaders |
|
VectorizedSparkParquetReaders |
|
VectorizedSupport |
Copied here from Hive for compatibility.
|
VectorizedSupport.Support |
|
VectorizedTableScanIterable |
A vectorized implementation of the Iceberg reader that iterates over the table scan.
|
WapUtil |
|
WriteObjectInspector |
Interface for converting Hive primitive objects into objects which can be added to an Iceberg Record.
|
WriteResult |
|
WriteResult.Builder |
|