All Classes
Class |
Description |
AbstractMapredIcebergRecordReader<T> |
|
Accessor<T> |
|
Accessors |
Position2Accessor and Position3Accessor are optimizations.
|
Action<ThisT,R> |
An action performed on a table.
|
Actions |
|
ActionsProvider |
An API that should be implemented by query engine integrations for providing actions.
|
AddedRowsScanTask |
A scan task for inserts generated by adding a data file to the table.
|
Aggregate<C extends Term> |
The aggregate functions that can be pushed and evaluated in Iceberg.
|
AggregateEvaluator |
A class for evaluating aggregates.
|
AliyunClientFactories |
|
AliyunClientFactory |
|
AliyunProperties |
|
AllDataFilesTable |
A Table implementation that exposes a table's valid data files as rows.
|
AllDataFilesTable.AllDataFilesTableScan |
|
AllDeleteFilesTable |
A Table implementation that exposes its valid delete files as rows.
|
AllDeleteFilesTable.AllDeleteFilesTableScan |
|
AllEntriesTable |
A Table implementation that exposes a table's manifest entries as rows, for both delete
and data files.
|
AllFilesTable |
A Table implementation that exposes its valid files as rows.
|
AllFilesTable.AllFilesTableScan |
|
AllManifestsTable |
A Table implementation that exposes a table's valid manifest files as rows.
|
AllManifestsTable.AllManifestsTableScan |
|
AlreadyExistsException |
Exception raised when attempting to create a table that already exists.
|
AncestorsOfProcedure |
|
And |
|
AppendFiles |
API for appending new files in a table.
|
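For illustration, a minimal sketch of the AppendFiles API, assuming an existing Table named table and a prepared DataFile named dataFile (both hypothetical):

    import org.apache.iceberg.AppendFiles;

    // Table.newAppend() returns an AppendFiles pending update;
    // commit() atomically produces a new table snapshot.
    AppendFiles append = table.newAppend();
    append.appendFile(dataFile); // stage the new data file
    append.commit();             // apply the append as one snapshot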
ArrayUtil |
|
ArrowAllocation |
|
ArrowReader |
|
ArrowSchemaUtil |
|
ArrowVectorAccessor<DecimalT,Utf8StringT,ArrayT,ChildVectorT extends java.lang.AutoCloseable> |
|
ArrowVectorAccessors |
|
AssumeRoleAwsClientFactory |
|
Avro |
|
Avro.DataWriteBuilder |
|
Avro.DeleteWriteBuilder |
|
Avro.ReadBuilder |
|
Avro.WriteBuilder |
|
AvroEncoderUtil |
|
AvroGenericRecordFileScanTaskReader |
|
AvroGenericRecordReaderFunction |
Read Iceberg rows as GenericRecord.
|
AvroGenericRecordToRowDataMapper |
This util class converts Avro GenericRecord to Flink RowData.
|
AvroIterable<D> |
|
AvroMetrics |
|
AvroSchemaUtil |
|
AvroSchemaVisitor<T> |
|
AvroSchemaWithTypeVisitor<T> |
|
AvroWithFlinkSchemaVisitor<T> |
|
AvroWithPartnerByStructureVisitor<P,T> |
An abstract Avro schema visitor with partner type.
|
AvroWithSparkSchemaVisitor<T> |
|
AwsClientFactories |
|
AwsClientFactory |
Interface to customize AWS clients used by Iceberg.
|
AwsProperties |
|
BadRequestException |
Exception thrown on HTTP 400 - Bad Request.
|
BaseBatchReader<T> |
A base BatchReader class that contains common functionality.
|
BaseColumnIterator |
|
BaseCombinedScanTask |
|
BaseDeleteOrphanFilesActionResult |
|
BaseDeleteReachableFilesActionResult |
|
BaseExpireSnapshotsActionResult |
Deprecated.
|
BaseFileGroupRewriteResult |
|
BaseFileScanTask |
|
BaseFileWriterFactory<T> |
A base writer factory to be extended by query engine integrations.
|
BaseMetadataTable |
Base class for metadata tables.
|
BaseMetastoreCatalog |
|
BaseMetastoreTableOperations |
|
BaseMetastoreTableOperations.CommitStatus |
|
BaseMigrateTableActionResult |
|
BaseOverwriteFiles |
|
BasePageIterator |
|
BasePageIterator.IntIterator |
|
BaseParquetReaders<T> |
|
BaseParquetWriter<T> |
|
BasePositionDeltaWriter<T> |
|
BaseReplacePartitions |
|
BaseReplaceSortOrder |
|
BaseRewriteDataFilesAction<ThisT> |
|
BaseRewriteDataFilesFileGroupInfo |
|
BaseRewriteDataFilesResult |
|
BaseRewriteManifests |
|
BaseRewriteManifestsActionResult |
|
BaseScanTaskGroup<T extends ScanTask> |
|
BaseSessionCatalog |
|
BaseSnapshotTableActionResult |
|
BaseTable |
Base Table implementation.
|
BaseTaskWriter<T> |
|
BaseTransaction |
|
BaseVectorizedParquetValuesReader |
A values reader for Parquet's run-length encoded data that reads column data in batches instead
of one value at a time.
|
BatchScan |
API for configuring a batch scan.
|
BinaryUtil |
|
Binder |
Rewrites expressions by replacing unbound named references with references to
fields in a struct schema.
|
BinPacking |
|
BinPacking.ListPacker<T> |
|
BinPacking.PackingIterable<T> |
|
BinPackStrategy |
A rewrite strategy for data files which determines which files to rewrite based on their size.
|
Blob |
|
BlobMetadata |
Metadata about a statistics or indices blob.
|
BlobMetadata |
|
Bound<T> |
Represents a bound value expression.
|
BoundAggregate<T,C> |
|
BoundLiteralPredicate<T> |
|
BoundPredicate<T> |
|
BoundReference<T> |
|
BoundSetPredicate<T> |
|
BoundTerm<T> |
Represents a bound term.
|
BoundTransform<S,T> |
A transform expression.
|
BoundUnaryPredicate<T> |
|
BucketFunction |
A Spark function implementation for the Iceberg bucket transform.
|
BucketFunction.BucketBase |
|
BucketFunction.BucketBinary |
|
BucketFunction.BucketDecimal |
|
BucketFunction.BucketInt |
|
BucketFunction.BucketLong |
|
BucketFunction.BucketString |
|
BucketUtil |
Contains the logic for hashing various types for use with the bucket partition
transformations.
|
BulkDeletionFailureException |
|
ByteBufferInputStream |
|
ByteBuffers |
|
CachedClientPool |
|
CachingCatalog |
Class that wraps an Iceberg Catalog to cache tables.
|
Catalog |
A Catalog API for table create, drop, and load operations.
|
Catalog.TableBuilder |
|
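A minimal sketch of the Catalog API named above (create, drop, and load), assuming a configured Catalog instance named catalog and a Schema named schema (both hypothetical):

    import org.apache.iceberg.PartitionSpec;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.catalog.TableIdentifier;

    TableIdentifier id = TableIdentifier.of("db", "events"); // hypothetical identifier
    Table created = catalog.createTable(id, schema, PartitionSpec.unpartitioned());
    Table loaded = catalog.loadTable(id); // load an existing table
    catalog.dropTable(id);                // drop it again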
CatalogHandlers |
|
CatalogLoader |
Serializable loader to load an Iceberg Catalog.
|
CatalogLoader.CustomCatalogLoader |
|
CatalogLoader.HadoopCatalogLoader |
|
CatalogLoader.HiveCatalogLoader |
|
CatalogLoader.RESTCatalogLoader |
|
CatalogProperties |
|
Catalogs |
Class for catalog resolution and accessing the common functions for the Catalog API.
|
CatalogUtil |
|
ChangelogIterator |
An iterator that transforms rows from changelog tables within a single Spark task.
|
ChangelogOperation |
An enum representing possible operations in a changelog.
|
ChangelogScanTask |
A changelog scan task.
|
ChangelogUtil |
|
CharSequenceSet |
|
CharSequenceWrapper |
Wrapper class to adapt CharSequence for use in maps and sets.
|
CheckCompatibility |
|
CherrypickAncestorCommitException |
This exception occurs when one cherrypicks an ancestor or when the picked snapshot is already
linked to a published ancestor.
|
Ciphers |
|
Ciphers.AesGcmDecryptor |
|
Ciphers.AesGcmEncryptor |
|
ClientPool<C,E extends java.lang.Exception> |
|
ClientPool.Action<R,C,E extends java.lang.Exception> |
|
ClientPoolImpl<C,E extends java.lang.Exception> |
|
CloseableGroup |
This class acts as a helper for handling the closure of multiple resources.
|
CloseableIterable<T> |
|
CloseableIterable.ConcatCloseableIterable<E> |
|
CloseableIterator<T> |
|
ClosingIterator<T> |
A convenience wrapper around CloseableIterator, providing auto-close functionality when
all of the elements in the iterator are consumed.
|
ClusteredDataWriter<T> |
A data writer capable of writing to multiple specs and partitions that requires the incoming
records to be properly clustered by partition spec and by partition within each spec.
|
ClusteredEqualityDeleteWriter<T> |
An equality delete writer capable of writing to multiple specs and partitions that requires the
incoming delete records to be properly clustered by partition spec and by partition within each
spec.
|
ClusteredPositionDeleteWriter<T> |
A position delete writer capable of writing to multiple specs and partitions that requires the
incoming delete records to be properly clustered by partition spec and by partition within each
spec.
|
ColumnarBatch |
This class is inspired by Spark's ColumnarBatch.
|
ColumnarBatchReader |
VectorizedReader that returns Spark's ColumnarBatch to support Spark's vectorized
read path.
|
ColumnIterator<T> |
|
ColumnVector |
This class is inspired by Spark's ColumnVector.
|
ColumnVectorWithFilter |
|
ColumnWriter<T> |
|
CombinedScanTask |
A scan task made of several ranges from files.
|
CommitFailedException |
Exception raised when a commit fails because of out-of-date metadata.
|
CommitMetadata |
Utility class to accept thread-local commit properties.
|
CommitMetrics |
Carries all metrics for a particular commit.
|
CommitMetricsResult |
A serializable version of CommitMetrics that carries its results.
|
CommitReport |
A commit report that contains all relevant information from a Snapshot.
|
CommitReportParser |
|
CommitStateUnknownException |
Exception for a failure to confirm either affirmatively or negatively that a commit was applied.
|
Comparators |
|
ConfigProperties |
|
ConfigResponse |
Represents a response to requesting server-side provided configuration for the REST catalog.
|
ConfigResponse.Builder |
|
Configurable<C> |
Interface used to avoid runtime dependencies on Hadoop Configurable.
|
Container<T> |
A simple container of objects that you can get and set.
|
ContentCache |
Class that provides file-content caching during reading.
|
ContentFile<F> |
|
ContentScanTask<F extends ContentFile<F>> |
A scan task over a range of bytes in a content file.
|
ContinuousIcebergEnumerator |
|
ContinuousSplitPlanner |
This interface is introduced so that we can plug in different split planners for unit tests.
|
ContinuousSplitPlannerImpl |
|
Conversions |
|
ConvertEqualityDeleteFiles |
An action for converting the equality delete files to position delete files.
|
ConvertEqualityDeleteFiles.Result |
The action result that contains a summary of the execution.
|
ConvertEqualityDeleteStrategy |
A strategy for the action to convert equality deletes to position deletes.
|
CountAggregate<T> |
|
Counter |
Generalized Counter interface for creating telemetry-related instances when counting events.
|
CounterResult |
A serializable version of a Counter that carries its result.
|
CountNonNull<T> |
|
CountStar<T> |
|
CreateChangelogViewProcedure |
A procedure that creates a view for changed rows.
|
CreateNamespaceRequest |
A REST request to create a namespace, with an optional set of properties.
|
CreateNamespaceRequest.Builder |
|
CreateNamespaceResponse |
Represents a REST response for a request to create a namespace / database.
|
CreateNamespaceResponse.Builder |
|
CreateSnapshotEvent |
|
CreateTableRequest |
A REST request to create a table, either via direct commit or staging the creation of the table
as part of a transaction.
|
CreateTableRequest.Builder |
|
CredentialSupplier |
Interface used to expose credentials held by a FileIO instance.
|
DataFile |
Interface for data files listed in a table manifest.
|
DataFiles |
|
DataFiles.Builder |
|
DataFilesTable |
A Table implementation that exposes a table's data files as rows.
|
DataFilesTable.DataFilesTableScan |
|
DataIterator<T> |
|
DataIteratorBatcher<T> |
Batcher converts an iterator of T into an iterator of batched
RecordsWithSplitIds<RecordAndPosition<T>>, as FLIP-27's SplitReader.fetch() returns
batched records.
|
DataIteratorReaderFunction<T> |
|
DataOperations |
Data operations that produce snapshots.
|
DataReader<T> |
|
DataTableScan |
|
DataTask |
A task that returns data as rows instead of where to read data.
|
DataTaskReader |
|
DataWriter<T> |
|
DataWriter<T> |
|
DataWriteResult |
A result of writing data files.
|
DateTimeUtil |
|
Days<T> |
|
DaysFunction |
A Spark function implementation for the Iceberg day transform.
|
DaysFunction.DateToDaysFunction |
|
DaysFunction.TimestampToDaysFunction |
|
DecimalUtil |
|
DecimalVectorUtil |
|
DecoderResolver |
Resolver to resolve Decoder to a ResolvingDecoder.
|
DefaultCounter |
A default Counter implementation that uses an AtomicLong to count events.
|
DefaultMetricsContext |
A default MetricsContext implementation that uses native Java counters/timers.
|
DefaultTimer |
A default Timer implementation that uses a Stopwatch instance internally to
measure time.
|
DelegatingInputStream |
|
DelegatingOutputStream |
|
DeleteCounter |
A counter to be used to count deletes as they are applied.
|
DeletedColumnVector |
|
DeletedDataFileScanTask |
A scan task for deletes generated by removing a data file from the table.
|
DeletedRowsScanTask |
A scan task for deletes generated by adding delete files to the table.
|
DeleteFile |
Interface for delete files listed in a table delete manifest.
|
DeleteFiles |
API for deleting files from a table.
|
DeleteFilesTable |
A Table implementation that exposes a table's delete files as rows.
|
DeleteFilesTable.DeleteFilesTableScan |
|
DeleteFilter<T> |
|
DeleteOrphanFiles |
An action that deletes orphan metadata, data and delete files in a table.
|
DeleteOrphanFiles.PrefixMismatchMode |
Defines the action behavior when location prefixes (scheme/authority) mismatch.
|
DeleteOrphanFiles.Result |
The action result that contains a summary of the execution.
|
DeleteOrphanFilesSparkAction |
An action that removes orphan metadata, data and delete files by listing a given location and
comparing the actual files in that location with content and metadata files referenced by all
valid snapshots.
|
DeleteOrphanFilesSparkAction.FileURI |
|
DeleteReachableFiles |
An action that deletes all files referenced by a table metadata file.
|
DeleteReachableFiles.Result |
The action result that contains a summary of the execution.
|
DeleteReachableFilesSparkAction |
An implementation of DeleteReachableFiles that uses metadata tables in Spark to determine
which files should be deleted.
|
Deletes |
|
DeleteSchemaUtil |
|
DeleteWriteResult |
A result of writing delete files.
|
DellClientFactories |
|
DellClientFactory |
|
DellProperties |
|
DeltaBatchWrite |
An interface that defines how to write a delta of rows during batch processing.
|
DeltaLakeToIcebergMigrationActionsProvider |
An API that provides actions for migration from a Delta Lake table to an Iceberg table.
|
DeltaLakeToIcebergMigrationActionsProvider.DefaultDeltaLakeToIcebergMigrationActions |
|
DeltaWrite |
A logical representation of a data source write that handles a delta of rows.
|
DeltaWriteBuilder |
An interface for building delta writes.
|
DeltaWriter<T> |
A data writer responsible for writing a delta of rows.
|
DeltaWriterFactory |
A factory for creating and initializing delta writers at the executor side.
|
DistributionMode |
Enum of supported write distribution modes; it defines the write behavior of batch or
streaming jobs.
|
DoubleFieldMetrics |
Iceberg internally tracked field level metrics, used by Parquet and ORC writers only.
|
DoubleFieldMetrics.Builder |
|
DuplicateWAPCommitException |
This exception occurs when the WAP workflow detects a duplicate WAP commit.
|
DynamoDbCatalog |
DynamoDB implementation of the Iceberg catalog.
|
DynamoDbLockManager |
DynamoDB implementation for the lock manager.
|
DynClasses |
|
DynClasses.Builder |
|
DynConstructors |
Copied from parquet-common.
|
DynConstructors.Builder |
|
DynConstructors.Ctor<C> |
|
DynFields |
|
DynFields.BoundField<T> |
|
DynFields.Builder |
|
DynFields.StaticField<T> |
|
DynFields.UnboundField<T> |
Convenience wrapper class around Field.
|
DynMethods |
Copied from parquet-common.
|
DynMethods.BoundMethod |
|
DynMethods.Builder |
|
DynMethods.StaticMethod |
|
DynMethods.UnboundMethod |
Convenience wrapper class around Method.
|
EcsCatalog |
|
EcsFileIO |
FileIO implementation backed by Dell EMC ECS.
|
EcsTableOperations |
|
EncryptedFiles |
|
EncryptedInputFile |
Thin wrapper around an InputFile instance that is encrypted.
|
EncryptedOutputFile |
Thin wrapper around an OutputFile that is encrypting bytes written to the underlying file
system, via an encryption key that is symbolized by the enclosed EncryptionKeyMetadata.
|
EncryptionAlgorithm |
Algorithm supported for file encryption.
|
EncryptionKeyMetadata |
Light typedef over a ByteBuffer that indicates that the given bytes represent metadata about an
encrypted data file's encryption key.
|
EncryptionKeyMetadatas |
|
EncryptionManager |
Module for encrypting and decrypting table data files.
|
EnvironmentContext |
|
EnvironmentUtil |
|
EqualityDeleteRowReader |
|
EqualityDeleteWriter<T> |
|
EqualityDeltaWriter<T> |
A writer capable of writing data and equality deletes that may belong to different specs and
partitions.
|
ErrorHandler |
|
ErrorHandlers |
A set of consumers to handle errors for requests for table entities or for namespace entities, to
throw the correct exception.
|
ErrorResponse |
Standard response body for all API errors.
|
ErrorResponse.Builder |
|
ErrorResponseParser |
|
EstimateOrcAvgWidthVisitor |
|
Evaluator |
|
Exceptions |
|
ExceptionUtil |
|
ExceptionUtil.Block<R,E1 extends java.lang.Exception,E2 extends java.lang.Exception,E3 extends java.lang.Exception> |
|
ExceptionUtil.CatchBlock |
|
ExceptionUtil.FinallyBlock |
|
ExpireSnapshots |
An action that expires snapshots in a table.
|
ExpireSnapshots |
|
ExpireSnapshots.Result |
The action result that contains a summary of the execution.
|
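A minimal sketch of the table-level ExpireSnapshots API, assuming an existing Table named table (hypothetical); the cutoff and retention values are illustrative only:

    import java.util.concurrent.TimeUnit;

    long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(7);
    table.expireSnapshots()
        .expireOlderThan(cutoff) // drop snapshots older than the cutoff
        .retainLast(10)          // but always keep the last 10
        .commit();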
ExpireSnapshotsProcedure |
A procedure that expires snapshots in a table.
|
ExpireSnapshotsSparkAction |
An action that performs the same operation as ExpireSnapshots but uses
Spark to determine the delta in files between the pre and post-expiration table metadata.
|
Expression |
Represents a boolean expression tree.
|
Expression.Operation |
|
ExpressionParser |
|
Expressions |
|
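For illustration, a sketch of building a predicate with the Expressions factory methods; the column names and values are hypothetical:

    import org.apache.iceberg.expressions.Expression;
    import org.apache.iceberg.expressions.Expressions;

    // Unbound predicates reference columns by name; they are bound
    // to a schema later, e.g. when attached to a scan.
    Expression expr = Expressions.and(
        Expressions.equal("region", "us-east-1"),
        Expressions.greaterThanOrEqual("ts", "2023-01-01T00:00:00"));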
ExpressionUtil |
Expression utility methods.
|
ExpressionVisitors |
|
ExpressionVisitors.BoundExpressionVisitor<R> |
|
ExpressionVisitors.BoundVisitor<R> |
|
ExpressionVisitors.CustomOrderExpressionVisitor<R> |
|
ExpressionVisitors.ExpressionVisitor<R> |
|
ExtendedLogicalWriteInfo |
A class that holds logical write information not covered by LogicalWriteInfo in Spark.
|
ExtendedParser |
|
ExtendedParser.RawOrderField |
|
False |
|
FanoutDataWriter<T> |
A data writer capable of writing to multiple specs and partitions that keeps data writers for
each seen spec/partition pair open until this writer is closed.
|
FieldMetrics<T> |
Iceberg internally tracked field level metrics.
|
FileAppender<D> |
|
FileAppenderFactory<T> |
|
FileContent |
Content type stored in a file, one of DATA, POSITION_DELETES, or EQUALITY_DELETES.
|
FileFormat |
Enum of supported file formats.
|
FileInfo |
|
FileInfo |
|
FileIO |
Pluggable module for reading, writing, and deleting files.
|
FileIOMetricsContext |
Extension of MetricsContext for use with FileIO to define standard metrics that should be
reported.
|
FileIOParser |
|
FileMetadata |
|
FileMetadata |
|
FileMetadata.Builder |
|
FileMetadataParser |
|
FileRewriteCoordinator |
|
Files |
|
FileScanTask |
A scan task over a range of bytes in a single data file.
|
FileScanTaskReader<T> |
|
FileScanTaskSetManager |
Deprecated.
|
FilesTable |
A Table implementation that exposes a table's files as rows.
|
FilesTable.FilesTableScan |
|
FileWriter<T,R> |
A writer capable of writing files of a single type (i.e. data/delete) to one spec/partition.
|
FileWriterFactory<T> |
A factory for creating data and delete writers.
|
Filter<T> |
A class for generic filters.
|
FilterIterator<T> |
An Iterator that filters another Iterator.
|
FindFiles |
|
FindFiles.Builder |
|
FixedReservoirHistogram |
A Histogram implementation with reservoir sampling.
|
FixupTypes |
This is used to fix primitive types to match a table schema.
|
FlinkAppenderFactory |
|
FlinkAvroReader |
|
FlinkAvroWriter |
|
FlinkCatalog |
A Flink Catalog implementation that wraps an Iceberg Catalog.
|
FlinkCatalogFactory |
A Flink Catalog factory implementation that creates FlinkCatalog.
|
FlinkCompatibilityUtil |
This is a small util class that tries to hide calls to Flink Internal or PublicEvolving
interfaces, as Flink can change those APIs during minor version releases.
|
FlinkConfigOptions |
When constructing a Flink Iceberg source via the Java API, configs can be set in the
Configuration passed to the source builder.
|
FlinkDynamicTableFactory |
|
FlinkFilters |
|
FlinkInputFormat |
Flink InputFormat for Iceberg.
|
FlinkInputSplit |
|
FlinkOrcReader |
|
FlinkOrcWriter |
|
FlinkPackage |
|
FlinkParquetReaders |
|
FlinkParquetWriters |
|
FlinkReadConf |
|
FlinkReadOptions |
Flink source read options.
|
FlinkSchemaUtil |
Converter between Flink types and Iceberg types.
|
FlinkSink |
|
FlinkSink.Builder |
|
FlinkSource |
|
FlinkSource.Builder |
Source builder to build DataStream.
|
FlinkSplitPlanner |
|
FlinkTypeVisitor<T> |
|
FlinkValueReaders |
|
FlinkValueWriters |
|
FlinkWriteConf |
A class for common Iceberg configs for Flink writes.
|
FlinkWriteOptions |
Flink sink write options.
|
FloatFieldMetrics |
Iceberg internally tracked field level metrics, used by Parquet and ORC writers only.
|
FloatFieldMetrics.Builder |
|
ForbiddenException |
Exception thrown on HTTP 403 Forbidden - Failed authorization checks.
|
GCPProperties |
|
GCSFileIO |
FileIO implementation backed by Google Cloud Storage (GCS).
|
GenericAppenderFactory |
|
GenericArrowVectorAccessorFactory<DecimalT,Utf8StringT,ArrayT,ChildVectorT extends java.lang.AutoCloseable> |
|
GenericArrowVectorAccessorFactory.ArrayFactory<ChildVectorT,ArrayT> |
Create an array value of type ArrayT from arrow vector value.
|
GenericArrowVectorAccessorFactory.DecimalFactory<DecimalT> |
Create a decimal value of type DecimalT from arrow vector value.
|
GenericArrowVectorAccessorFactory.StringFactory<Utf8StringT> |
Create a UTF8 String value of type Utf8StringT from arrow vector value.
|
GenericArrowVectorAccessorFactory.StructChildFactory<ChildVectorT> |
Create a struct child vector of type ChildVectorT from arrow vector value.
|
GenericBlobMetadata |
|
GenericDeleteFilter |
|
GenericManifestFile |
|
GenericManifestFile.CopyBuilder |
|
GenericOrcReader |
|
GenericOrcReaders |
|
GenericOrcWriter |
|
GenericOrcWriters |
|
GenericOrcWriters.StructWriter<S> |
|
GenericParquetReaders |
|
GenericParquetWriter |
|
GenericPartitionFieldSummary |
|
GenericRecord |
|
GenericStatisticsFile |
|
GetNamespaceResponse |
Represents a REST response to fetch a namespace and its metadata properties.
|
GetNamespaceResponse.Builder |
|
GetSplitResult |
|
GetSplitResult.Status |
|
GlueCatalog |
|
GuavaClasses |
|
HadoopCatalog |
HadoopCatalog provides a way to use table names like db.table to work with path-based tables
under a common location.
|
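A minimal sketch of creating a HadoopCatalog and loading a path-based table; the warehouse location and table name are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.catalog.TableIdentifier;
    import org.apache.iceberg.hadoop.HadoopCatalog;

    HadoopCatalog catalog =
        new HadoopCatalog(new Configuration(), "hdfs://nn:8020/warehouse");
    Table table = catalog.loadTable(TableIdentifier.of("db", "table"));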
HadoopConfigurable |
An interface that extends the Hadoop Configurable interface to offer better serialization
support for customizable Iceberg objects such as FileIO.
|
HadoopFileIO |
|
HadoopInputFile |
InputFile implementation using the Hadoop FileSystem API.
|
HadoopMetricsContext |
FileIO Metrics implementation that delegates to Hadoop FileSystem statistics implementation using
the provided scheme.
|
HadoopOutputFile |
OutputFile implementation using the Hadoop FileSystem API.
|
HadoopStreams |
Convenience methods to get Parquet abstractions for Hadoop data streams.
|
HadoopTableOperations |
TableOperations implementation for file systems that support atomic rename.
|
HadoopTables |
Implementation of Iceberg tables that uses the Hadoop FileSystem to store metadata and manifests.
|
HasIcebergCatalog |
|
HasTableOperations |
Used to expose a table's TableOperations.
|
HiddenPathFilter |
A PathFilter that filters out hidden paths.
|
Histogram |
|
Histogram.Statistics |
|
HistoryEntry |
Table history entry.
|
HistoryTable |
A Table implementation that exposes a table's history as rows.
|
HiveCatalog |
|
HiveClientPool |
|
HiveHadoopUtil |
|
HiveIcebergFilterFactory |
|
HiveIcebergInputFormat |
|
HiveIcebergMetaHook |
|
HiveIcebergOutputCommitter |
An Iceberg table committer for adding data files to the Iceberg tables.
|
HiveIcebergOutputFormat<T> |
|
HiveIcebergSerDe |
|
HiveIcebergSplit |
|
HiveIcebergStorageHandler |
|
HiveSchemaUtil |
|
HiveTableOperations |
TODO we should be able to extract some more commonalities to BaseMetastoreTableOperations to
avoid code duplication between this class and Metacat Tables.
|
HiveVersion |
|
Hours<T> |
|
HoursFunction |
A Spark function implementation for the Iceberg hour transform.
|
HoursFunction.TimestampToHoursFunction |
|
HTTPClient |
An HttpClient for use with the REST catalog.
|
HTTPClient.Builder |
|
IcebergArrowColumnVector |
Implementation of Spark's ColumnVector interface.
|
IcebergBinaryObjectInspector |
|
IcebergBuild |
Loads iceberg-version.properties with build information.
|
IcebergDateObjectInspector |
|
IcebergDecimalObjectInspector |
|
IcebergDecoder<D> |
|
IcebergEncoder<D> |
|
IcebergEnumeratorState |
Enumerator state for checkpointing.
|
IcebergEnumeratorStateSerializer |
|
IcebergFixedObjectInspector |
|
IcebergGenerics |
|
IcebergGenerics.ScanBuilder |
|
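A sketch of reading generic Records through IcebergGenerics and its ScanBuilder, assuming an existing Table named table; the filter column is hypothetical:

    import org.apache.iceberg.data.IcebergGenerics;
    import org.apache.iceberg.data.Record;
    import org.apache.iceberg.expressions.Expressions;
    import org.apache.iceberg.io.CloseableIterable;

    // build() returns a CloseableIterable, so close it when done
    try (CloseableIterable<Record> rows =
        IcebergGenerics.read(table).where(Expressions.equal("id", 5L)).build()) {
      rows.forEach(System.out::println);
    }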
IcebergInputFormat<T> |
Generic MRv2 InputFormat API for Iceberg.
|
IcebergObjectInspector |
|
IcebergPigInputFormat<T> |
|
IcebergRecordObjectInspector |
|
IcebergSource<T> |
|
IcebergSource |
The IcebergSource loads/writes tables with format "iceberg".
|
IcebergSource.Builder<T> |
|
IcebergSourceReader<T> |
|
IcebergSourceReaderMetrics |
|
IcebergSourceSplit |
|
IcebergSourceSplitSerializer |
TODO: use Java serialization for now.
|
IcebergSourceSplitState |
|
IcebergSourceSplitStatus |
|
IcebergSpark |
|
IcebergSplit |
|
IcebergSplitContainer |
|
IcebergSqlExtensionsBaseListener |
This class provides an empty implementation of IcebergSqlExtensionsListener ,
which can be extended to create a listener which only needs to handle a subset
of the available methods.
|
IcebergSqlExtensionsBaseVisitor<T> |
This class provides an empty implementation of IcebergSqlExtensionsVisitor ,
which can be extended to create a visitor which only needs to handle a subset
of the available methods.
|
IcebergSqlExtensionsLexer |
|
IcebergSqlExtensionsListener |
|
IcebergSqlExtensionsParser |
|
IcebergSqlExtensionsParser.AddPartitionFieldContext |
|
IcebergSqlExtensionsParser.ApplyTransformContext |
|
IcebergSqlExtensionsParser.BigDecimalLiteralContext |
|
IcebergSqlExtensionsParser.BigIntLiteralContext |
|
IcebergSqlExtensionsParser.BooleanLiteralContext |
|
IcebergSqlExtensionsParser.BooleanValueContext |
|
IcebergSqlExtensionsParser.BranchOptionsContext |
|
IcebergSqlExtensionsParser.CallArgumentContext |
|
IcebergSqlExtensionsParser.CallContext |
|
IcebergSqlExtensionsParser.ConstantContext |
|
IcebergSqlExtensionsParser.CreateOrReplaceBranchContext |
|
IcebergSqlExtensionsParser.CreateOrReplaceTagContext |
|
IcebergSqlExtensionsParser.CreateReplaceBranchClauseContext |
|
IcebergSqlExtensionsParser.CreateReplaceTagClauseContext |
|
IcebergSqlExtensionsParser.DecimalLiteralContext |
|
IcebergSqlExtensionsParser.DoubleLiteralContext |
|
IcebergSqlExtensionsParser.DropBranchContext |
|
IcebergSqlExtensionsParser.DropIdentifierFieldsContext |
|
IcebergSqlExtensionsParser.DropPartitionFieldContext |
|
IcebergSqlExtensionsParser.DropTagContext |
|
IcebergSqlExtensionsParser.ExponentLiteralContext |
|
IcebergSqlExtensionsParser.ExpressionContext |
|
IcebergSqlExtensionsParser.FieldListContext |
|
IcebergSqlExtensionsParser.FloatLiteralContext |
|
IcebergSqlExtensionsParser.IdentifierContext |
|
IcebergSqlExtensionsParser.IdentityTransformContext |
|
IcebergSqlExtensionsParser.IntegerLiteralContext |
|
IcebergSqlExtensionsParser.MaxSnapshotAgeContext |
|
IcebergSqlExtensionsParser.MinSnapshotsToKeepContext |
|
IcebergSqlExtensionsParser.MultipartIdentifierContext |
|
IcebergSqlExtensionsParser.NamedArgumentContext |
|
IcebergSqlExtensionsParser.NonReservedContext |
|
IcebergSqlExtensionsParser.NumberContext |
|
IcebergSqlExtensionsParser.NumericLiteralContext |
|
IcebergSqlExtensionsParser.NumSnapshotsContext |
|
IcebergSqlExtensionsParser.OrderContext |
|
IcebergSqlExtensionsParser.OrderFieldContext |
|
IcebergSqlExtensionsParser.PositionalArgumentContext |
|
IcebergSqlExtensionsParser.QuotedIdentifierAlternativeContext |
|
IcebergSqlExtensionsParser.QuotedIdentifierContext |
|
IcebergSqlExtensionsParser.RefRetainContext |
|
IcebergSqlExtensionsParser.ReplacePartitionFieldContext |
|
IcebergSqlExtensionsParser.SetIdentifierFieldsContext |
|
IcebergSqlExtensionsParser.SetWriteDistributionAndOrderingContext |
|
IcebergSqlExtensionsParser.SingleOrderContext |
|
IcebergSqlExtensionsParser.SingleStatementContext |
|
IcebergSqlExtensionsParser.SmallIntLiteralContext |
|
IcebergSqlExtensionsParser.SnapshotIdContext |
|
IcebergSqlExtensionsParser.SnapshotRetentionContext |
|
IcebergSqlExtensionsParser.StatementContext |
|
IcebergSqlExtensionsParser.StringArrayContext |
|
IcebergSqlExtensionsParser.StringLiteralContext |
|
IcebergSqlExtensionsParser.StringMapContext |
|
IcebergSqlExtensionsParser.TagOptionsContext |
|
IcebergSqlExtensionsParser.TimeUnitContext |
|
IcebergSqlExtensionsParser.TinyIntLiteralContext |
|
IcebergSqlExtensionsParser.TransformArgumentContext |
|
IcebergSqlExtensionsParser.TransformContext |
|
IcebergSqlExtensionsParser.TypeConstructorContext |
|
IcebergSqlExtensionsParser.UnquotedIdentifierContext |
|
IcebergSqlExtensionsParser.WriteDistributionSpecContext |
|
IcebergSqlExtensionsParser.WriteOrderingSpecContext |
|
IcebergSqlExtensionsParser.WriteSpecContext |
|
IcebergSqlExtensionsVisitor<T> |
|
IcebergStorage |
|
IcebergTableSink |
|
IcebergTableSource |
Flink Iceberg table source.
|
IcebergTimeObjectInspector |
|
IcebergTimestampObjectInspector |
|
IcebergTimestampWithZoneObjectInspector |
|
IcebergUUIDObjectInspector |
|
IcebergVersionFunction |
A function for use in SQL that returns the current Iceberg version.
|
IdentityPartitionConverters |
|
InclusiveMetricsEvaluator |
|
IncrementalAppendScan |
API for configuring an incremental table scan for append-only snapshots.
|
IncrementalChangelogScan |
API for configuring a scan for table changes.
|
IncrementalScan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>> |
API for configuring an incremental scan.
|
IncrementalScanEvent |
Event sent to listeners when an incremental table scan is planned.
|
IndexByName |
|
IndexParents |
|
InputFile |
|
InputFilesDecryptor |
|
InputFormatConfig |
|
InputFormatConfig.ConfigBuilder |
|
InputFormatConfig.InMemoryDataModel |
|
InternalRecordWrapper |
|
IOUtil |
|
IsolationLevel |
An isolation level in a table.
|
JavaHash<T> |
|
JavaHashes |
|
JdbcCatalog |
|
JdbcClientPool |
|
JobGroupInfo |
Captures information about the current job, which is used for display in the UI.
|
JobGroupUtils |
|
JsonUtil |
|
JsonUtil.FromJson<T> |
|
JsonUtil.ToJson |
|
KmsClient |
Deprecated.
|
KmsClient.KeyGenerationResult |
For KMS systems that support key generation, this class keeps the key generation result - the
raw secret key, and its wrap.
|
LakeFormationAwsClientFactory |
|
Listener<E> |
A listener interface that can receive notifications.
|
Listeners |
Static registration and notification for listeners.
|
ListNamespacesResponse |
|
ListNamespacesResponse.Builder |
|
ListTablesResponse |
A list of table identifiers in a given namespace.
|
ListTablesResponse.Builder |
|
Literal<T> |
Represents a literal fixed value in an expression predicate.
|
LoadTableResponse |
A REST response that is used when a table is successfully loaded.
|
LoadTableResponse.Builder |
|
LocationProvider |
Interface for providing data file locations to write tasks.
|
LocationProviders |
|
LocationUtil |
|
LockManager |
An interface for locking, used to ensure commit isolation.
|
LockManagers |
|
LockManagers.BaseLockManager |
|
LoggingMetricsReporter |
|
LogicalMap |
|
ManageSnapshots |
API for managing snapshots.
|
ManifestContent |
Content type stored in a manifest file, either DATA or DELETES.
|
ManifestEntriesTable |
A Table implementation that exposes a table's manifest entries as rows, for both delete
and data files.
|
ManifestEvaluator |
|
ManifestFile |
Represents a manifest file that can be scanned to find data files in a table.
|
ManifestFile.PartitionFieldSummary |
Summarizes the values of one partition field stored in a manifest file.
|
ManifestFileBean |
|
ManifestFiles |
|
ManifestFileUtil |
|
ManifestReader<F extends ContentFile<F>> |
Base reader for data and delete manifest files.
|
ManifestReader.FileType |
|
ManifestsTable |
A Table implementation that exposes a table's manifest files as rows.
|
ManifestWriter<F extends ContentFile<F>> |
Writer for manifest files.
|
MappedField |
An immutable mapping between a field ID and a set of names.
|
MappedFields |
|
MappingUtil |
|
MapredIcebergInputFormat<T> |
Generic MR v1 InputFormat API for Iceberg.
|
MapredIcebergInputFormat.CompatibilityTaskAttemptContextImpl |
|
MaxAggregate<T> |
|
MergeableScanTask<ThisT> |
A scan task that can be potentially merged with other scan tasks.
|
MetadataColumns |
|
MetadataLogEntriesTable |
|
MetaDataReaderFunction |
Reads metadata tables (like snapshots, manifests, etc.).
|
MetadataTableType |
|
MetadataTableUtils |
|
MetadataUpdate |
Represents a change to table metadata.
|
MetadataUpdate.AddPartitionSpec |
|
MetadataUpdate.AddSchema |
|
MetadataUpdate.AddSnapshot |
|
MetadataUpdate.AddSortOrder |
|
MetadataUpdate.AssignUUID |
|
MetadataUpdate.RemoveProperties |
|
MetadataUpdate.RemoveSnapshot |
|
MetadataUpdate.RemoveSnapshotRef |
|
MetadataUpdate.RemoveStatistics |
|
MetadataUpdate.SetCurrentSchema |
|
MetadataUpdate.SetDefaultPartitionSpec |
|
MetadataUpdate.SetDefaultSortOrder |
|
MetadataUpdate.SetLocation |
|
MetadataUpdate.SetProperties |
|
MetadataUpdate.SetSnapshotRef |
|
MetadataUpdate.SetStatistics |
|
MetadataUpdate.UpgradeFormatVersion |
|
MetadataUpdateParser |
|
MetastoreUtil |
|
Metrics |
Iceberg file format metrics.
|
MetricsAwareDatumWriter<D> |
Wrapper writer around DatumWriter with metrics support.
|
MetricsConfig |
|
MetricsContext |
Generalized interface for creating telemetry-related instances for tracking operations.
|
MetricsContext.Counter<T extends java.lang.Number> |
Deprecated.
|
MetricsContext.Unit |
|
MetricsModes |
This class defines different metrics modes, which allow users to control the collection of
value_counts, null_value_counts, nan_value_counts, lower_bounds, upper_bounds for different
columns in metadata.
|
MetricsModes.Counts |
Under this mode, only value_counts, null_value_counts, nan_value_counts are persisted.
|
MetricsModes.Full |
Under this mode, value_counts, null_value_counts, nan_value_counts and full lower_bounds,
upper_bounds are persisted.
|
MetricsModes.MetricsMode |
A metrics calculation mode.
|
MetricsModes.None |
Under this mode, value_counts, null_value_counts, nan_value_counts, lower_bounds, upper_bounds
are not persisted.
|
MetricsModes.Truncate |
Under this mode, value_counts, null_value_counts, nan_value_counts and truncated lower_bounds,
upper_bounds are persisted.
|
MetricsReport |
|
MetricsReporter |
This interface defines the basic API for reporting metrics for operations to a Table.
|
MetricsUtil |
|
MetricsUtil.ReadableColMetricsStruct |
A struct of readable metric values for a primitive column.
|
MetricsUtil.ReadableMetricColDefinition |
Fixed definition of a readable metric column, i.e. a mapping of a raw metric to a readable metric.
|
MetricsUtil.ReadableMetricColDefinition.MetricFunction |
|
MetricsUtil.ReadableMetricColDefinition.TypeFunction |
|
MetricsUtil.ReadableMetricsStruct |
|
MicroBatches |
|
MicroBatches.MicroBatch |
|
MicroBatches.MicroBatchBuilder |
|
MigrateTable |
An action that migrates an existing table to Iceberg.
|
MigrateTable.Result |
The action result that contains a summary of the execution.
|
MigrateTableSparkAction |
Takes a Spark table in the source catalog and attempts to transform it into an Iceberg table in
the same location with the same identifier.
|
MinAggregate<T> |
|
Months<T> |
|
MonthsFunction |
A Spark function implementation for the Iceberg month transform.
|
MonthsFunction.DateToMonthsFunction |
|
MonthsFunction.TimestampToMonthsFunction |
|
NamedReference<T> |
|
NameMapping |
Represents a mapping from external schema names to Iceberg type IDs.
|
NameMappingParser |
Parses external name mappings from a JSON representation.
|
Namespace |
|
NamespaceNotEmptyException |
Exception raised when attempting to drop a namespace that is not empty.
|
NaNUtil |
|
NativeFileCryptoParameters |
Barebone encryption parameters, one object per content file.
|
NativeFileCryptoParameters.Builder |
|
NativelyEncryptedFile |
This interface is applied to OutputFile and InputFile implementations, in order to enable
delivery of crypto parameters (such as encryption keys, etc.) from the Iceberg key management
module to the writers/readers of file formats that support encryption natively (Parquet and ORC).
|
NessieCatalog |
Nessie implementation of Iceberg Catalog.
|
NessieIcebergClient |
|
NessieTableOperations |
Nessie implementation of Iceberg TableOperations.
|
NessieUtil |
|
NoSuchIcebergTableException |
NoSuchTableException thrown when a table is found but it is not an Iceberg table.
|
NoSuchNamespaceException |
Exception raised when attempting to load a namespace that does not exist.
|
NoSuchProcedureException |
|
NoSuchTableException |
Exception raised when attempting to load a table that does not exist.
|
NoSuchViewException |
Exception raised when attempting to load a view that does not exist.
|
Not |
|
NotAuthorizedException |
Exception thrown on HTTP 401 Unauthorized.
|
NotFoundException |
Exception raised when attempting to read a file that does not exist.
|
NullabilityHolder |
Instances of this class simply track whether a value at an index is null.
|
NullOrder |
|
NumDeletes |
|
NumSplits |
|
OAuth2Properties |
|
OAuth2Util |
|
OAuth2Util.AuthSession |
Class to handle authorization headers and token refresh.
|
OAuthErrorResponseParser |
|
OAuthTokenResponse |
|
OAuthTokenResponse.Builder |
|
Or |
|
ORC |
|
ORC.DataWriteBuilder |
|
ORC.DeleteWriteBuilder |
|
ORC.ReadBuilder |
|
ORC.WriteBuilder |
|
OrcBatchReader<T> |
Used for implementing ORC batch readers.
|
OrcMetrics |
|
OrcRowReader<T> |
Used for implementing ORC row readers.
|
OrcRowWriter<T> |
Writes data values of a schema.
|
ORCSchemaUtil |
Utilities for mapping Iceberg to ORC schemas.
|
ORCSchemaUtil.BinaryType |
|
ORCSchemaUtil.LongType |
|
OrcSchemaVisitor<T> |
Generic visitor of an ORC Schema.
|
OrcSchemaWithTypeVisitor<T> |
|
OrcValueReader<T> |
|
OrcValueReaders |
|
OrcValueReaders.StructReader<T> |
|
OrcValueWriter<T> |
|
OSSFileIO |
FileIO implementation backed by OSS.
|
OSSOutputStream |
|
OSSURI |
This class represents a fully qualified location in OSS for input/output operations,
expressed as a URI.
|
OutputFile |
|
OutputFileFactory |
Factory responsible for generating unique but recognizable data/delete file names.
|
OutputFileFactory.Builder |
|
OverwriteFiles |
API for overwriting files in a table.
|
Pair<X,Y> |
|
ParallelIterable<T> |
|
Parquet |
|
Parquet.DataWriteBuilder |
|
Parquet.DeleteWriteBuilder |
|
Parquet.ReadBuilder |
|
Parquet.WriteBuilder |
|
ParquetAvroReader |
|
ParquetAvroValueReaders |
|
ParquetAvroValueReaders.TimeMillisReader |
|
ParquetAvroValueReaders.TimestampMillisReader |
|
ParquetAvroWriter |
|
ParquetBloomRowGroupFilter |
|
ParquetCodecFactory |
This class implements a codec factory that is used when reading from Parquet.
|
ParquetDictionaryRowGroupFilter |
|
ParquetIterable<T> |
|
ParquetMetricsRowGroupFilter |
|
ParquetReader<T> |
|
ParquetSchemaUtil |
|
ParquetSchemaUtil.HasIds |
|
ParquetTypeVisitor<T> |
|
ParquetUtil |
|
ParquetValueReader<T> |
|
ParquetValueReaders |
|
ParquetValueReaders.BinaryAsDecimalReader |
|
ParquetValueReaders.ByteArrayReader |
|
ParquetValueReaders.BytesReader |
|
ParquetValueReaders.FloatAsDoubleReader |
|
ParquetValueReaders.IntAsLongReader |
|
ParquetValueReaders.IntegerAsDecimalReader |
|
ParquetValueReaders.ListReader<E> |
|
ParquetValueReaders.LongAsDecimalReader |
|
ParquetValueReaders.MapReader<K,V> |
|
ParquetValueReaders.PrimitiveReader<T> |
|
ParquetValueReaders.RepeatedKeyValueReader<M,I,K,V> |
|
ParquetValueReaders.RepeatedReader<T,I,E> |
|
ParquetValueReaders.ReusableEntry<K,V> |
|
ParquetValueReaders.StringReader |
|
ParquetValueReaders.StructReader<T,I> |
|
ParquetValueReaders.UnboxedReader<T> |
|
ParquetValueWriter<T> |
|
ParquetValueWriters |
|
ParquetValueWriters.PositionDeleteStructWriter<R> |
|
ParquetValueWriters.PrimitiveWriter<T> |
|
ParquetValueWriters.RepeatedKeyValueWriter<M,K,V> |
|
ParquetValueWriters.RepeatedWriter<L,E> |
|
ParquetValueWriters.StructWriter<S> |
|
ParquetWithFlinkSchemaVisitor<T> |
|
ParquetWithSparkSchemaVisitor<T> |
Visitor for traversing a Parquet type with a companion Spark type.
|
ParquetWriteAdapter<D> |
Deprecated.
|
PartitionData |
|
PartitionedFanoutWriter<T> |
|
PartitionedWriter<T> |
|
PartitionField |
|
Partitioning |
|
PartitioningWriter<T,R> |
A writer capable of writing files of a single type (i.e. data/delete) to multiple specs and partitions.
|
PartitionKey |
A struct of partition values.
|
PartitionScanTask |
A scan task for data within a particular partition.
|
PartitionSet |
|
PartitionSpec |
Represents how to produce partition data for a table.
|
PartitionSpec.Builder |
|
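A minimal sketch of building a PartitionSpec with PartitionSpec.Builder, assuming a Schema named schema with the (hypothetical) columns event_ts and user_id:

    import org.apache.iceberg.PartitionSpec;

    PartitionSpec spec = PartitionSpec.builderFor(schema)
        .day("event_ts")        // derive a date partition from a timestamp
        .bucket("user_id", 16)  // hash user_id into 16 buckets
        .build();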
PartitionSpecParser |
|
PartitionSpecVisitor<T> |
|
PartitionsTable |
A Table implementation that exposes a table's partitions as rows.
|
PartitionUtil |
|
PathIdentifier |
|
PendingUpdate<T> |
API for table metadata changes.
|
PigParquetReader |
|
PlaintextEncryptionManager |
|
PositionDelete<R> |
|
PositionDeleteIndex |
|
PositionDeletesScanTask |
|
PositionDeletesTable |
|
PositionDeletesTable.PositionDeletesBatchScan |
|
PositionDeleteWriter<T> |
|
PositionDeltaWriter<T> |
A writer capable of writing data and position deletes that may belong to different specs and
partitions.
|
PositionOutputStream |
|
Predicate<T,C extends Term> |
|
Procedure |
An interface representing a stored procedure available for execution.
|
ProcedureCatalog |
A catalog API for working with stored procedures.
|
ProcedureParameter |
|
ProjectionDatumReader<D> |
|
Projections |
Utils to project expressions on rows to expressions on partitions.
|
Projections.ProjectionEvaluator |
A class that projects expressions for a table's data rows into expressions on the table's
partition values, for a table's partition spec.
|
PropertiesSerDesUtil |
Convert Map properties to bytes.
|
PropertyUtil |
|
PruneColumnsWithoutReordering |
|
PruneColumnsWithReordering |
|
Puffin |
Utility class for reading and writing Puffin files.
|
Puffin.ReadBuilder |
|
Puffin.WriteBuilder |
|
PuffinCompressionCodec |
|
PuffinReader |
|
PuffinWriter |
|
RangeReadable |
RangeReadable is an interface that allows for implementations of InputFile
streams to perform positional, range-based reads, which are more efficient than unbounded reads
in many cloud provider object stores.
|
ReachableFileUtil |
|
ReaderFunction<T> |
|
Record |
|
RecordAndPosition<T> |
A record along with the reader position to be stored in the checkpoint.
|
Reference<T> |
|
RefsTable |
A Table implementation that exposes a table's known snapshot references as rows.
|
RemoveIds |
|
RemoveIds |
|
RemoveOrphanFilesProcedure |
A procedure that removes orphan files in a table.
|
RenameTableRequest |
A REST request to rename a table.
|
RenameTableRequest.Builder |
|
ReplacePartitions |
API for overwriting files in a table by partition.
|
ReplaceSortOrder |
API for replacing table sort order with a newly created order.
|
ReportMetricsRequest |
|
ReportMetricsRequest.ReportType |
|
ReportMetricsRequestParser |
|
ResidualEvaluator |
|
ResolvingFileIO |
FileIO implementation that uses location scheme to choose the correct FileIO implementation.
|
ResourcePaths |
|
RESTCatalog |
|
RESTClient |
Interface for a basic HTTP Client for interfacing with the REST catalog.
|
RESTException |
Base class for REST client exceptions.
|
RESTMessage |
Interface to mark both REST requests and responses.
|
RESTRequest |
Interface to mark a REST request.
|
RESTResponse |
Interface to mark a REST response.
|
RESTSerializers |
|
RESTSerializers.ErrorResponseDeserializer |
|
RESTSerializers.ErrorResponseSerializer |
|
RESTSerializers.MetadataUpdateDeserializer |
|
RESTSerializers.MetadataUpdateSerializer |
|
RESTSerializers.NamespaceDeserializer |
|
RESTSerializers.NamespaceSerializer |
|
RESTSerializers.OAuthTokenResponseDeserializer |
|
RESTSerializers.OAuthTokenResponseSerializer |
|
RESTSerializers.ReportMetricsRequestDeserializer<T extends ReportMetricsRequest> |
|
RESTSerializers.ReportMetricsRequestSerializer<T extends ReportMetricsRequest> |
|
RESTSerializers.SchemaDeserializer |
|
RESTSerializers.SchemaSerializer |
|
RESTSerializers.TableIdentifierDeserializer |
|
RESTSerializers.TableIdentifierSerializer |
|
RESTSerializers.TableMetadataDeserializer |
|
RESTSerializers.TableMetadataSerializer |
|
RESTSerializers.UnboundPartitionSpecDeserializer |
|
RESTSerializers.UnboundPartitionSpecSerializer |
|
RESTSerializers.UnboundSortOrderDeserializer |
|
RESTSerializers.UnboundSortOrderSerializer |
|
RESTSerializers.UpdateRequirementDeserializer |
|
RESTSerializers.UpdateRequirementSerializer |
|
RESTSessionCatalog |
|
RESTSigV4Signer |
Provides a request interceptor for use with the HTTPClient that calculates the required signature
for the SigV4 protocol and adds the necessary headers for all requests created by the client.
|
RESTUtil |
|
RewriteDataFiles |
An action for rewriting data files according to a rewrite strategy.
|
RewriteDataFiles.FileGroupInfo |
A description of a file group, when it was processed, and within which partition.
|
RewriteDataFiles.FileGroupRewriteResult |
For a particular file group, the number of files which are newly created and the number of
files which were formerly part of the table but have been rewritten.
|
RewriteDataFiles.Result |
A map of file group information to the results of rewriting that file group.
|
RewriteDataFilesAction |
|
RewriteDataFilesActionResult |
|
RewriteDataFilesCommitManager |
Functionality used by RewriteDataFile Actions from different platforms to handle commits.
|
RewriteDataFilesSparkAction |
|
RewriteFileGroup |
Container class representing a set of files to be rewritten by a RewriteAction and the new files
which have been written by the action.
|
RewriteFiles |
API for replacing files in a table.
|
RewriteJobOrder |
Enum of supported rewrite job orders; it defines the order in which the file groups should be
written.
|
RewriteManifests |
An action that rewrites manifests.
|
RewriteManifests |
API for rewriting manifests for a table.
|
RewriteManifests.Result |
The action result that contains a summary of the execution.
|
RewriteManifestsSparkAction |
An action that rewrites manifests in a distributed manner and co-locates metadata for partitions.
|
RewritePositionDeleteFiles |
An action for rewriting position delete files.
|
RewritePositionDeleteFiles.Result |
The action result that contains a summary of the execution.
|
RewritePositionDeleteStrategy |
A strategy for an action to rewrite position delete files.
|
RewriteStrategy |
|
RollbackStagedTable |
An implementation of StagedTable that mimics the behavior of Spark's non-atomic CTAS and RTAS.
|
RollingDataWriter<T> |
A rolling data writer that splits incoming data into multiple files within one spec/partition
based on the target file size.
|
RollingEqualityDeleteWriter<T> |
A rolling equality delete writer that splits incoming deletes into multiple files within one
spec/partition based on the target file size.
|
RollingPositionDeleteWriter<T> |
A rolling position delete writer that splits incoming deletes into multiple files within one
spec/partition based on the target file size.
|
RowDataFileScanTaskReader |
|
RowDataProjection |
|
RowDataReaderFunction |
|
RowDataRewriter |
|
RowDataRewriter.RewriteMap |
|
RowDataTaskWriterFactory |
|
RowDataToAvroGenericRecordConverter |
This is not serializable because Avro Schema is not actually serializable, even though it
implements the Serializable interface.
|
RowDataUtil |
|
RowDataWrapper |
|
RowDelta |
API for encoding row-level changes to a table.
|
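A minimal sketch of the RowDelta API, assuming an existing Table named table plus a prepared DataFile named dataFile and DeleteFile named deleteFile (all hypothetical):

    import org.apache.iceberg.RowDelta;

    RowDelta rowDelta = table.newRowDelta();
    rowDelta.addRows(dataFile);      // new data rows
    rowDelta.addDeletes(deleteFile); // position or equality deletes
    rowDelta.commit();               // one snapshot with both changes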
RowLevelOperationMode |
Iceberg supports two ways to modify records in a table: copy-on-write and merge-on-read.
|
RowPositionColumnVector |
|
RuntimeIOException |
Deprecated.
|
RuntimeMetaException |
Exception used to wrap MetaException as a RuntimeException and add context.
|
S3FileIO |
FileIO implementation backed by S3.
|
S3InputFile |
|
S3ObjectMapper |
|
S3ObjectMapper.S3SignRequestDeserializer<T extends S3SignRequest> |
|
S3ObjectMapper.S3SignRequestSerializer<T extends S3SignRequest> |
|
S3ObjectMapper.S3SignResponseDeserializer<T extends S3SignResponse> |
|
S3ObjectMapper.S3SignResponseSerializer<T extends S3SignResponse> |
|
S3OutputFile |
|
S3RequestUtil |
|
S3SignRequest |
|
S3SignRequestParser |
|
S3SignResponse |
|
S3SignResponseParser |
|
S3V4RestSignerClient |
|
Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>> |
Scan objects are immutable and can be shared between threads.
|
ScanContext |
Context object with optional arguments for a Flink Scan.
|
ScanContext.Builder |
|
ScanEvent |
Event sent to listeners when a table scan is planned.
|
ScanMetrics |
Carries all metrics for a particular scan.
|
ScanMetricsResult |
A serializable version of ScanMetrics that carries its results.
|
ScanReport |
A Table Scan report that contains all relevant information from a Table Scan.
|
ScanReportParser |
|
ScanSummary |
|
ScanSummary.Builder |
|
ScanSummary.PartitionMetrics |
|
ScanTask |
A scan task.
|
ScanTaskGroup<T extends ScanTask> |
A scan task that may include partial input files, multiple input files or both.
|
ScanTaskSetManager |
|
Schema |
The schema of a data table.
|
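For illustration, a sketch of declaring a Schema with Types.NestedField; the field IDs and names are hypothetical:

    import org.apache.iceberg.Schema;
    import org.apache.iceberg.types.Types;

    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));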
SchemaParser |
|
SchemaUtil |
|
SchemaWithPartnerVisitor<P,R> |
|
SchemaWithPartnerVisitor.PartnerAccessors<P> |
|
SeekableInputStream |
SeekableInputStream is an interface with the methods needed to read data from a file or
Hadoop data stream.
|
SerializableConfiguration |
Wraps a Configuration object in a Serializable layer.
|
SerializableFunction<S,T> |
A concrete transform function that applies a transform to values of a certain type.
|
SerializableMap<K,V> |
|
SerializableSupplier<T> |
|
SerializableTable |
A read-only serializable table that can be sent to other nodes in a cluster.
|
SerializableTable.SerializableMetadataTable |
|
SerializableTableWithSize |
This class provides a serializable table with a known size estimate.
|
SerializableTableWithSize.SerializableMetadataTableWithSize |
|
SerializationUtil |
|
ServiceFailureException |
Exception thrown on HTTP 5XX Server Error.
|
ServiceUnavailableException |
Exception thrown on HTTP 503 - Service Unavailable.
|
SessionCatalog |
A Catalog API for table and namespace operations that includes session context.
|
SessionCatalog.SessionContext |
Context for a session.
|
SetAccumulator<T> |
|
SetLocation |
|
SetStatistics |
|
SimpleSplitAssigner |
Since all methods are called in the source coordinator thread by the enumerator, there is no
need for locking.
|
SimpleSplitAssignerFactory |
Creates a simple assigner that hands out splits without any guarantee of order or locality.
|
SingleValueParser |
|
Snapshot |
A snapshot of the data in a table at a point in time.
|
SnapshotDeltaLakeTable |
Snapshot an existing Delta Lake table to Iceberg in place.
|
SnapshotDeltaLakeTable.Result |
The action result that contains a summary of the execution.
|
SnapshotIdGeneratorUtil |
|
SnapshotManager |
|
SnapshotParser |
|
SnapshotRef |
|
SnapshotRef.Builder |
|
SnapshotRefParser |
|
SnapshotScan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>> |
This is a common base class to share code between different BaseScan implementations that handle
scans of a particular snapshot.
|
SnapshotsTable |
A Table implementation that exposes a table's known snapshots as rows.
|
SnapshotSummary |
|
SnapshotSummary.Builder |
|
SnapshotTable |
An action that creates an independent snapshot of an existing table.
|
SnapshotTable.Result |
The action result that contains a summary of the execution.
|
SnapshotTableSparkAction |
Creates a new Iceberg table based on a source Spark table.
|
SnapshotUpdate<ThisT,R> |
An action that produces snapshots.
|
SnapshotUpdate<ThisT> |
API for table changes that produce snapshots.
|
SnapshotUpdateAction<ThisT,R> |
|
SnapshotUtil |
|
SnowflakeCatalog |
|
SortDirection |
|
SortedMerge<T> |
An Iterable that merges the items from other Iterables in order.
|
SortField |
|
SortOrder |
A sort order that defines how data and delete files should be ordered in a table.
|
SortOrder.Builder |
|
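A minimal sketch of building a SortOrder with SortOrder.Builder, assuming a Schema named schema with the (hypothetical) columns id and ts:

    import org.apache.iceberg.SortOrder;

    SortOrder order = SortOrder.builderFor(schema)
        .asc("id")   // primary sort key, ascending
        .desc("ts")  // secondary sort key, descending
        .build();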
SortOrderBuilder<R> |
Methods for building a sort order.
|
SortOrderParser |
|
SortOrderUtil |
|
SortOrderVisitor<T> |
|
SortStrategy |
A rewrite strategy for data files which aims to reorder data within data files to optimally
lay them out in relation to a column.
|
Spark3Util |
|
Spark3Util.CatalogAndIdentifier |
This mimics a class inside of Spark which is private inside of LookupCatalog.
|
Spark3Util.DescribeSchemaVisitor |
|
SparkActions |
|
SparkAggregates |
|
SparkAvroReader |
|
SparkAvroWriter |
|
SparkBinPackStrategy |
|
SparkCachedTableCatalog |
An internal table catalog that is capable of loading tables from a cache.
|
SparkCatalog |
A Spark TableCatalog implementation that wraps an Iceberg Catalog.
|
SparkChangelogTable |
|
SparkDataFile |
|
SparkDistributionAndOrderingUtil |
|
SparkExceptionUtil |
|
SparkFilters |
|
SparkFunctions |
|
SparkMetadataColumn |
|
SparkMicroBatchStream |
|
SparkOrcReader |
Converts the OrcIterator, which returns ORC's VectorizedRowBatch, to a set of Spark's UnsafeRows.
|
SparkOrcValueReaders |
|
SparkOrcWriter |
This class acts as an adaptor from an OrcFileAppender to a FileAppender<InternalRow>.
|
SparkParquetReaders |
|
SparkParquetWriters |
|
SparkPartitionedFanoutWriter |
|
SparkPartitionedWriter |
|
SparkProcedures |
|
SparkProcedures.ProcedureBuilder |
|
SparkReadConf |
A class for common Iceberg configs for Spark reads.
|
SparkReadOptions |
Spark DataFrame read options.
|
SparkScanBuilder |
|
SparkSchemaUtil |
Helper methods for working with Spark/Hive metadata.
|
SparkSessionCatalog<T extends org.apache.spark.sql.connector.catalog.TableCatalog & org.apache.spark.sql.connector.catalog.SupportsNamespaces> |
A Spark catalog that can also load non-Iceberg tables.
|
SparkSortStrategy |
|
SparkSQLProperties |
|
SparkStructLike |
|
SparkTable |
|
SparkTableCache |
|
SparkTableUtil |
Java version of the original SparkTableUtil.scala:
https://github.com/apache/iceberg/blob/apache-iceberg-0.8.0-incubating/spark/src/main/scala/org/apache/iceberg/spark/SparkTableUtil.scala
|
SparkTableUtil.SparkPartition |
Class representing a table partition.
|
SparkUtil |
|
SparkV2Filters |
|
SparkValueConverter |
A utility class that converts Spark values to Iceberg's internal representation.
|
SparkValueReaders |
|
SparkValueWriters |
|
SparkWriteConf |
A class for common Iceberg configs for Spark writes.
|
SparkWriteOptions |
Spark DataFrame write options.
|
SparkZOrderStrategy |
|
SplitAssigner |
The SplitAssigner interface is extracted out as a separate component so that we can plug in
different split assignment strategies for different requirements.
|
SplitAssignerFactory |
|
SplitAssignerType |
|
SplitRequestEvent |
We can remove this class once FLINK-21364 is resolved.
|
SplittableScanTask<ThisT> |
A scan task that can be split into smaller scan tasks.
|
SQLViewRepresentation |
|
StagedSparkTable |
|
StandardBlobTypes |
|
StandardPuffinProperties |
|
StaticIcebergEnumerator |
One-time split enumeration at start-up for batch execution.
|
StaticTableOperations |
TableOperations implementation that provides access to metadata for a Table at some point in
time, using a table metadata location.
|
StatisticsFile |
Represents a statistics file in the Puffin format, that can be used to read table data more
efficiently.
|
StatisticsFileParser |
|
StreamingDelete |
Delete implementation that avoids loading full manifests in memory.
|
StreamingMonitorFunction |
This is the single (non-parallel) monitoring task, which takes a FlinkInputFormat; it is
responsible for monitoring snapshots of the Iceberg table.
|
StreamingReaderOperator |
|
StreamingStartingStrategy |
Starting strategy for streaming execution.
|
StrictMetricsEvaluator |
|
StructLike |
Interface for accessing data by position in a schema.
|
StructLikeMap<T> |
|
StructLikeSet |
|
StructLikeWrapper |
Wrapper to adapt StructLike for use in maps and sets by implementing equals and hashCode.
|
StructProjection |
|
StructRowData |
|
SupportsBulkOperations |
|
SupportsDelta |
A mix-in interface for RowLevelOperation.
|
SupportsNamespaces |
Catalog methods for working with namespaces.
|
SupportsPrefixOperations |
This interface is intended as an extension for FileIO implementations to provide additional
prefix-based operations that may be useful in performing supporting operations.
|
SupportsRowPosition |
Interface for readers that accept a callback to determine the starting row position of an Avro
split.
|
SystemProperties |
Configuration properties that are controlled by Java system properties.
|
Table |
Represents a table.
|
TableIdentifier |
Identifies a table in an Iceberg catalog.
|
TableIdentifierParser |
Parses TableIdentifiers from a JSON representation, which is the JSON representation utilized in
the REST catalog.
|
TableLoader |
Serializable loader to load an Iceberg Table.
|
TableLoader.CatalogTableLoader |
|
TableLoader.HadoopTableLoader |
|
TableMetadata |
Metadata for a table.
|
TableMetadata.Builder |
|
TableMetadata.MetadataLogEntry |
|
TableMetadata.SnapshotLogEntry |
|
TableMetadataParser |
|
TableMetadataParser.Codec |
|
TableMigrationUtil |
|
TableOperations |
SPI interface to abstract table metadata access and updates.
|
TableProperties |
|
Tables |
Generic interface for creating and loading a table implementation.
|
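A minimal sketch using the Hadoop-backed implementation of this interface (the warehouse location is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;

// Load a table directly from its filesystem location, without a catalog.
HadoopTables tables = new HadoopTables(new Configuration());
Table table = tables.load("hdfs://warehouse/db/events");  // placeholder location
```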
TableScan |
API for configuring a table scan.
|
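A minimal sketch of configuring and planning a scan, assuming `table` is an already-loaded Table (the column names are placeholders):

```java
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.TableScan;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.io.CloseableIterable;

// Scans are immutable; filter() and select() return refined copies.
TableScan scan = table.newScan()
    .filter(Expressions.equal("region", "us-east-1"))
    .select("id", "region");
try (CloseableIterable<FileScanTask> tasks = scan.planFiles()) {
  tasks.forEach(task -> System.out.println(task.file().path()));
}
```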
TableScanUtil |
|
TaskNumDeletes |
|
TaskNumSplits |
|
Tasks |
|
Tasks.Builder<I> |
|
Tasks.FailureTask<I,E extends java.lang.Exception> |
|
Tasks.Task<I,E extends java.lang.Exception> |
|
Tasks.UnrecoverableException |
|
TaskWriter<T> |
A writer interface that accepts records and provides the generated data files.
|
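A minimal sketch of that lifecycle, assuming `writer` came from an engine-specific TaskWriterFactory and `rows` is an Iterable of records:

```java
import org.apache.iceberg.data.Record;
import org.apache.iceberg.io.WriteResult;

// write() may throw IOException; error handling omitted for brevity.
for (Record row : rows) {
  writer.write(row);
}
WriteResult result = writer.complete();  // the files generated by this task
```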
TaskWriterFactory<T> |
|
Term |
An expression that evaluates to a value.
|
TezUtil |
|
ThreadPools |
|
Timer |
Generalized Timer interface for creating telemetry-related instances that measure the duration
of operations.
|
Timer.Timed |
A timing sample that carries internal state about the Timer's start position.
|
TimerResult |
A serializable version of a Timer that carries its result.
|
Transaction |
A transaction for performing multiple updates to a table.
|
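A minimal sketch, assuming `table` is an already-loaded Table; the pending updates become visible in a single commit (the property value and column name are illustrative):

```java
import org.apache.iceberg.Transaction;
import org.apache.iceberg.types.Types;

Transaction txn = table.newTransaction();
txn.updateProperties().set("commit.retry.num-retries", "5").commit();
txn.updateSchema().addColumn("region", Types.StringType.get()).commit();
txn.commitTransaction();  // applies both updates atomically
```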
Transactions |
|
Transform<S,T> |
A transform function used for partitioning.
|
Transforms |
Factory methods for transforms.
|
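In application code, transforms are usually reached through the PartitionSpec builder rather than constructed directly; a minimal sketch (field names and ids are illustrative):

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

Schema schema = new Schema(
    Types.NestedField.required(1, "id", Types.LongType.get()),
    Types.NestedField.required(2, "ts", Types.TimestampType.withZone()));
PartitionSpec spec = PartitionSpec.builderFor(schema)
    .bucket("id", 16)  // bucket transform
    .day("ts")         // day transform
    .build();
```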
TripleWriter<T> |
|
True |
|
TruncateFunction |
A Spark function implementation for the Iceberg truncate transform.
|
TruncateFunction.TruncateBase<T> |
|
TruncateFunction.TruncateBigInt |
|
TruncateFunction.TruncateBinary |
|
TruncateFunction.TruncateDecimal |
|
TruncateFunction.TruncateInt |
|
TruncateFunction.TruncateSmallInt |
|
TruncateFunction.TruncateString |
|
TruncateFunction.TruncateTinyInt |
|
TruncateUtil |
Contains the logic for the truncate transformations of the supported types.
|
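The underlying arithmetic, shown directly rather than through the TruncateUtil method signatures: truncation snaps a value down to the nearest multiple of the width, with floor semantics for negative values.

```java
// Conceptual sketch of integer truncation with width W = 10
// (not the TruncateUtil API itself).
int width = 10;
int value = -7;
int truncated = value - (((value % width) + width) % width);  // -10; truncate(27) would be 20
```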
Type |
|
Type.NestedType |
|
Type.PrimitiveType |
|
Type.TypeID |
|
Types |
|
Types.BinaryType |
|
Types.BooleanType |
|
Types.DateType |
|
Types.DecimalType |
|
Types.DoubleType |
|
Types.FixedType |
|
Types.FloatType |
|
Types.IntegerType |
|
Types.ListType |
|
Types.LongType |
|
Types.MapType |
|
Types.NestedField |
|
Types.StringType |
|
Types.StructType |
|
Types.TimestampType |
|
Types.TimeType |
|
Types.UUIDType |
|
TypeToMessageType |
|
TypeUtil |
|
TypeUtil.CustomOrderSchemaVisitor<T> |
|
TypeUtil.NextID |
Interface for passing a function that assigns column IDs.
|
TypeUtil.SchemaVisitor<T> |
|
TypeWithSchemaVisitor<T> |
Visitor for traversing a Parquet type with a companion Iceberg type.
|
Unbound<T,B> |
Represents an unbound expression node.
|
UnboundAggregate<T> |
|
UnboundPartitionSpec |
|
UnboundPredicate<T> |
|
UnboundSortOrder |
|
UnboundTerm<T> |
Represents an unbound term.
|
UnboundTransform<S,T> |
|
UncheckedInterruptedException |
|
UncheckedSQLException |
|
UnicodeUtil |
|
UnionByNameVisitor |
Visitor class that accumulates the set of changes needed to evolve an existing schema into the
union of the existing and a new schema.
|
UnknownTransform<S,T> |
|
UnknownViewRepresentation |
|
UnpartitionedWriter<T> |
|
UnprocessableEntityException |
REST exception thrown when a request is well-formed but cannot be applied.
|
UpdateLocation |
API for setting a table's base location.
|
UpdateNamespacePropertiesRequest |
A REST request to set and/or remove properties on a namespace.
|
UpdateNamespacePropertiesRequest.Builder |
|
UpdateNamespacePropertiesResponse |
A REST response to a request to set and/or remove properties on a namespace.
|
UpdateNamespacePropertiesResponse.Builder |
|
UpdatePartitionSpec |
API for partition spec evolution.
|
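A minimal sketch, assuming `table` is an already-loaded Table with an `id` column (the removed field name is a placeholder):

```java
import org.apache.iceberg.expressions.Expressions;

table.updateSpec()
    .addField(Expressions.bucket("id", 16))  // add a bucket partition field
    .removeField("category")                 // placeholder: an existing partition field
    .commit();
```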
UpdateProperties |
API for updating table properties.
|
UpdateRequirementParser |
|
UpdateSchema |
API for schema evolution.
|
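A minimal sketch, assuming `table` is an already-loaded Table (the column names are placeholders):

```java
import org.apache.iceberg.types.Types;

table.updateSchema()
    .addColumn("email", Types.StringType.get())
    .renameColumn("ts", "event_ts")
    .commit();
```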
UpdateStatistics |
API for updating statistics files in a table.
|
UpdateTableRequest |
|
UpdateTableRequest.Builder |
|
UpdateTableRequest.UpdateRequirement |
|
UpdateTableRequest.UpdateRequirement.AssertCurrentSchemaID |
|
UpdateTableRequest.UpdateRequirement.AssertDefaultSortOrderID |
|
UpdateTableRequest.UpdateRequirement.AssertDefaultSpecID |
|
UpdateTableRequest.UpdateRequirement.AssertLastAssignedFieldId |
|
UpdateTableRequest.UpdateRequirement.AssertLastAssignedPartitionId |
|
UpdateTableRequest.UpdateRequirement.AssertRefSnapshotID |
|
UpdateTableRequest.UpdateRequirement.AssertTableDoesNotExist |
|
UpdateTableRequest.UpdateRequirement.AssertTableUUID |
|
UpdateViewProperties |
API for updating view properties.
|
Util |
|
UUIDConversion |
|
UUIDUtil |
|
ValidationException |
Exception raised when arguments are valid in isolation but not in conjunction with other
arguments or state, as opposed to IllegalArgumentException, which is raised when an argument
value is always invalid.
|
ValueReader<T> |
|
ValueReaders |
|
ValueReaders.StructReader<S> |
|
ValuesAsBytesReader |
Implements a ValuesReader specifically to read a given number of bytes from the underlying
ByteBufferInputStream.
|
ValueWriter<D> |
|
ValueWriters |
|
ValueWriters.StructWriter<S> |
|
VectorHolder |
Container class for holding the Arrow vector storing a batch of values along with other state
needed for reading values out of it.
|
VectorHolder.ConstantVectorHolder<T> |
A VectorHolder that does not actually produce values; consumers of this class should use the
constantValue to populate their ColumnVector implementation.
|
VectorHolder.DeletedVectorHolder |
|
VectorHolder.PositionVectorHolder |
|
VectorizedArrowReader |
|
VectorizedArrowReader.ConstantVectorReader<T> |
A dummy vector reader that doesn't actually read files; instead, it returns a dummy
VectorHolder indicating the constant value to use for this column.
|
VectorizedArrowReader.DeletedVectorReader |
A dummy vector reader that doesn't actually read files.
|
VectorizedColumnIterator |
Vectorized version of ColumnIterator that reads the column values from the data pages of a
row group in a batched fashion.
|
VectorizedDictionaryEncodedParquetValuesReader |
This decoder reads Parquet dictionary encoded data in a vectorized fashion.
|
VectorizedPageIterator |
|
VectorizedParquetDefinitionLevelReader |
|
VectorizedParquetReader<T> |
|
VectorizedReader<T> |
Interface for vectorized Iceberg readers.
|
VectorizedReaderBuilder |
|
VectorizedRowBatchIterator |
An adaptor so that the ORC RecordReader can be used as an Iterator.
|
VectorizedSparkOrcReaders |
|
VectorizedSparkParquetReaders |
|
VectorizedSupport |
Copied from Hive for compatibility.
|
VectorizedSupport.Support |
|
VectorizedTableScanIterable |
A vectorized implementation of the Iceberg reader that iterates over the table scan.
|
View |
Interface for view definition.
|
ViewBuilder |
A builder used to create or replace a SQL View.
|
ViewCatalog |
A Catalog API for view create, drop, and load operations.
|
ViewHistoryEntry |
View history entry.
|
ViewRepresentation |
|
ViewRepresentation.Type |
|
ViewVersion |
A version of the view at a point in time.
|
WapUtil |
|
WriteObjectInspector |
Interface for converting Hive primitive objects into objects that can be added to an Iceberg
Record.
|
WriteResult |
|
WriteResult.Builder |
|
YearsFunction |
A Spark function implementation for the Iceberg year transform.
|
YearsFunction.DateToYearsFunction |
|
YearsFunction.TimestampToYearsFunction |
|
Zorder |
|
ZOrderByteUtils |
Within Z-ordering, the byte representations of the objects being compared must be ordered; this
requires several types to be transformed when converted to bytes.
|
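The core trick, sketched outside the ZOrderByteUtils API: a signed int's raw big-endian bytes don't sort correctly as unsigned bytes, but flipping the sign bit makes lexicographic byte order match numeric order.

```java
import java.nio.ByteBuffer;

// Conceptual sketch only; ZOrderByteUtils applies similar per-type
// transformations before interleaving the resulting bytes into a Z-order key.
static byte[] orderedIntBytes(int value) {
  // -1 maps to 0x7FFFFFFF and 0 maps to 0x80000000, so byte-wise
  // unsigned comparison now agrees with signed integer order.
  return ByteBuffer.allocate(Integer.BYTES).putInt(value ^ Integer.MIN_VALUE).array();
}
```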