All Classes
Class | Description
AbstractMapredIcebergRecordReader<T> |
|
Accessor<T> |
|
Accessors |
Position2Accessor and Position3Accessor here are an optimization.
|
Action<ThisT,R> |
An action performed on a table.
|
Actions |
|
ActionsProvider |
An API that should be implemented by query engine integrations for providing actions.
|
AddedRowsScanTask |
A scan task for inserts generated by adding a data file to the table.
|
ADLSFileIO |
FileIO implementation backed by Azure Data Lake Storage Gen2.
|
AesGcmInputFile |
|
AesGcmInputStream |
|
AesGcmOutputFile |
|
AesGcmOutputStream |
|
Aggregate<C extends Term> |
The aggregate functions that can be pushed and evaluated in Iceberg.
|
AggregateEvaluator |
A class for evaluating aggregates.
|
AliyunClientFactories |
|
AliyunClientFactory |
|
AliyunProperties |
|
AllDataFilesTable |
A Table implementation that exposes a table's valid data files as rows.
|
AllDataFilesTable.AllDataFilesTableScan |
|
AllDeleteFilesTable |
A Table implementation that exposes its valid delete files as rows.
|
AllDeleteFilesTable.AllDeleteFilesTableScan |
|
AllEntriesTable |
A Table implementation that exposes a table's manifest entries as rows, for both delete
and data files.
|
AllFilesTable |
A Table implementation that exposes its valid files as rows.
|
AllFilesTable.AllFilesTableScan |
|
AllManifestsTable |
A Table implementation that exposes a table's valid manifest files as rows.
|
AllManifestsTable.AllManifestsTableScan |
|
AlreadyExistsException |
Exception raised when attempting to create a table that already exists.
|
AncestorsOfProcedure |
|
And |
|
AppendFiles |
API for appending new files in a table.
|
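As a brief illustration of the AppendFiles API above — a minimal sketch, assuming `table` is an already-loaded Table and the file path, size, and record count are hypothetical values for a Parquet file written elsewhere:

```java
import org.apache.iceberg.DataFile;
import org.apache.iceberg.DataFiles;
import org.apache.iceberg.FileFormat;

// Describe an already-written data file (hypothetical values).
DataFile dataFile = DataFiles.builder(table.spec())
    .withPath("/warehouse/db/events/data/file-a.parquet")
    .withFormat(FileFormat.PARQUET)
    .withFileSizeInBytes(10_485_760L)
    .withRecordCount(100_000L)
    .build();

// Append the file and commit a new table snapshot.
table.newAppend()
    .appendFile(dataFile)
    .commit();
```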
ApplyNameMapping |
An Avro Schema visitor to apply a name mapping to add Iceberg field IDs.
|
ArrayUtil |
|
ArrowAllocation |
|
ArrowReader |
|
ArrowSchemaUtil |
|
ArrowVectorAccessor<DecimalT,Utf8StringT,ArrayT,ChildVectorT extends java.lang.AutoCloseable> |
|
ArrowVectorAccessors |
|
AssumeRoleAwsClientFactory |
|
AuthConfig |
|
Avro |
|
Avro.DataWriteBuilder |
|
Avro.DeleteWriteBuilder |
|
Avro.ReadBuilder |
|
Avro.WriteBuilder |
|
AvroEncoderUtil |
|
AvroGenericRecordConverter |
|
AvroGenericRecordFileScanTaskReader |
|
AvroGenericRecordReaderFunction |
Deprecated.
|
AvroGenericRecordToRowDataMapper |
This util class converts Avro GenericRecord to Flink RowData.
|
AvroIterable<D> |
|
AvroMetrics |
|
AvroSchemaUtil |
|
AvroSchemaVisitor<T> |
|
AvroSchemaWithTypeVisitor<T> |
|
AvroUtil |
Class for Avro-related utility methods.
|
AvroWithFlinkSchemaVisitor<T> |
|
AvroWithPartnerByStructureVisitor<P,T> |
A abstract avro schema visitor with partner type.
|
AvroWithPartnerVisitor<P,R> |
|
AvroWithPartnerVisitor.FieldIDAccessors |
|
AvroWithPartnerVisitor.PartnerAccessors<P> |
|
AvroWithSparkSchemaVisitor<T> |
|
AvroWithTypeByStructureVisitor<T> |
|
AwsClientFactories |
|
AwsClientFactory |
Interface to customize AWS clients used by Iceberg.
|
AwsClientProperties |
|
AwsProperties |
|
AzureProperties |
|
BadRequestException |
Exception thrown on HTTP 400 - Bad Request
|
BaseBatchReader<T> |
A base BatchReader class that contains common functionality
|
BaseColumnIterator |
|
BaseCombinedScanTask |
|
BaseDeleteLoader |
|
BaseFileScanTask |
|
BaseFileWriterFactory<T> |
A base writer factory to be extended by query engine integrations.
|
BaseMetadataTable |
Base class for metadata tables.
|
BaseMetastoreCatalog |
|
BaseMetastoreOperations |
|
BaseMetastoreOperations.CommitStatus |
|
BaseMetastoreTableOperations |
|
BaseMetastoreViewCatalog |
|
BaseOverwriteFiles |
|
BasePageIterator |
|
BasePageIterator.IntIterator |
|
BaseParquetReaders<T> |
|
BaseParquetWriter<T> |
|
BasePositionDeltaWriter<T> |
|
BaseReplacePartitions |
|
BaseReplaceSortOrder |
|
BaseRewriteDataFilesAction<ThisT> |
|
BaseRewriteManifests |
|
BaseScanTaskGroup<T extends ScanTask> |
|
BaseSessionCatalog |
|
BaseTable |
Base Table implementation.
|
BaseTaskWriter<T> |
|
BaseTransaction |
|
BaseVectorizedParquetValuesReader |
A values reader for Parquet's run-length encoded data that reads column data in batches instead
of one value at a time.
|
BaseView |
|
BaseViewOperations |
|
BaseViewSessionCatalog |
|
BatchScan |
API for configuring a batch scan.
|
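A minimal sketch of configuring a batch scan, assuming `table` is a loaded Table (newBatchScan() is available on recent Iceberg releases; column names and values are hypothetical):

```java
import org.apache.iceberg.BatchScan;
import org.apache.iceberg.ScanTask;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.io.CloseableIterable;

BatchScan scan = table.newBatchScan()
    .filter(Expressions.greaterThan("id", 100)) // push down a row filter
    .select("id", "data");                      // project two columns

// Plan the file scan tasks that match the filter.
try (CloseableIterable<ScanTask> tasks = scan.planFiles()) {
  tasks.forEach(task -> System.out.println(task));
}
```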
BinaryUtil |
|
Binder |
Rewrites expressions by replacing unbound named references with references to
fields in a struct schema.
|
BinPacking |
|
BinPacking.ListPacker<T> |
|
BinPacking.PackingIterable<T> |
|
Blob |
|
BlobMetadata |
Metadata about a statistics or indices blob.
|
BlobMetadata |
|
Bound<T> |
Represents a bound value expression.
|
BoundAggregate<T,C> |
|
BoundLiteralPredicate<T> |
|
BoundPredicate<T> |
|
BoundReference<T> |
|
BoundSetPredicate<T> |
|
BoundTerm<T> |
Represents a bound term.
|
BoundTransform<S,T> |
A transform expression.
|
BoundUnaryPredicate<T> |
|
BucketFunction |
A Spark function implementation for the Iceberg bucket transform.
|
BucketFunction.BucketBase |
|
BucketFunction.BucketBinary |
|
BucketFunction.BucketDecimal |
|
BucketFunction.BucketInt |
|
BucketFunction.BucketLong |
|
BucketFunction.BucketString |
|
BucketUtil |
Contains the logic for hashing various types for use with the bucket partition
transformations
|
BulkDeletionFailureException |
|
ByteBufferInputStream |
|
ByteBuffers |
|
CachedClientPool |
A ClientPool that caches the underlying HiveClientPool instances.
|
CachingCatalog |
Class that wraps an Iceberg Catalog to cache tables.
|
Catalog |
A Catalog API for table create, drop, and load operations.
|
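A minimal sketch of the Catalog API, using HadoopCatalog as the implementation (the warehouse location and table names are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.types.Types;

Catalog catalog = new HadoopCatalog(new Configuration(), "hdfs://nn:8020/warehouse");
TableIdentifier id = TableIdentifier.of("db", "events");

Schema schema = new Schema(
    Types.NestedField.required(1, "id", Types.LongType.get()),
    Types.NestedField.optional(2, "data", Types.StringType.get()));

// Create, load, and drop a table through the same interface.
Table created = catalog.createTable(id, schema, PartitionSpec.unpartitioned());
Table loaded = catalog.loadTable(id);
catalog.dropTable(id);
```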
Catalog.TableBuilder |
|
CatalogHandlers |
|
CatalogLoader |
Serializable loader to load an Iceberg Catalog.
|
CatalogLoader.CustomCatalogLoader |
|
CatalogLoader.HadoopCatalogLoader |
|
CatalogLoader.HiveCatalogLoader |
|
CatalogLoader.RESTCatalogLoader |
|
CatalogProperties |
|
Catalogs |
Class for catalog resolution and accessing the common functions for Catalog API.
|
CatalogUtil |
|
ChangelogIterator |
An iterator that transforms rows from changelog tables within a single Spark task.
|
ChangelogOperation |
An enum representing possible operations in a changelog.
|
ChangelogScanTask |
A changelog scan task.
|
ChangelogUtil |
|
CharSequenceMap<V> |
A map that uses char sequences as keys.
|
CharSequenceSet |
|
CharSequenceUtil |
|
CharSequenceWrapper |
Wrapper class to adapt CharSequence for use in maps and sets.
|
CheckCompatibility |
|
CherrypickAncestorCommitException |
This exception occurs when one cherrypicks an ancestor or when the picked snapshot is already
linked to a published ancestor.
|
Ciphers |
|
Ciphers.AesGcmDecryptor |
|
Ciphers.AesGcmEncryptor |
|
CleanableFailure |
A marker interface for commit exceptions where the state is known to be failure and uncommitted
metadata can be cleaned up.
|
ClientPool<C,E extends java.lang.Exception> |
|
ClientPool.Action<R,C,E extends java.lang.Exception> |
|
ClientPoolImpl<C,E extends java.lang.Exception> |
|
CloseableGroup |
This class acts as a helper for handling the closure of multiple resources.
|
CloseableIterable<T> |
|
CloseableIterable.ConcatCloseableIterable<E> |
|
CloseableIterator<T> |
|
ClosingIterator<T> |
A convenience wrapper around CloseableIterator , providing auto-close functionality when
all of the elements in the iterator are consumed.
|
ClusteredDataWriter<T> |
A data writer capable of writing to multiple specs and partitions that requires the incoming
records to be properly clustered by partition spec and by partition within each spec.
|
ClusteredEqualityDeleteWriter<T> |
An equality delete writer capable of writing to multiple specs and partitions that requires the
incoming delete records to be properly clustered by partition spec and by partition within each
spec.
|
ClusteredPositionDeleteWriter<T> |
A position delete writer capable of writing to multiple specs and partitions that requires the
incoming delete records to be properly clustered by partition spec and by partition within each
spec.
|
ColumnarBatch |
This class is inspired by Spark's ColumnarBatch.
|
ColumnarBatchReader |
VectorizedReader that returns Spark's ColumnarBatch to support Spark's vectorized
read path.
|
ColumnIterator<T> |
|
ColumnStatsWatermarkExtractor |
|
ColumnVector |
This class is inspired by Spark's ColumnVector.
|
ColumnVectorWithFilter |
|
ColumnWriter<T> |
|
CombinedScanTask |
A scan task made of several ranges from files.
|
CommitComplete |
A control event payload for events sent by a coordinator that indicates it has completed a commit
cycle.
|
CommitFailedException |
Exception raised when a commit fails because of out of date metadata.
|
CommitMetadata |
Utility class to accept thread-local commit properties.
|
CommitMetrics |
Carries all metrics for a particular commit
|
CommitMetricsResult |
A serializable version of CommitMetrics that carries its results.
|
CommitReport |
A commit report that contains all relevant information from a Snapshot.
|
CommitReportParser |
|
CommitStateUnknownException |
Exception for a failure to confirm either affirmatively or negatively that a commit was applied.
|
Committer |
|
CommitterImpl |
|
CommitToTable |
A control event payload for events sent by a coordinator that indicates it has completed a commit
cycle.
|
CommitTransactionRequest |
|
CommitTransactionRequestParser |
|
Comparators |
|
ComputeTableStats |
An action that collects statistics of an Iceberg table and writes to Puffin files.
|
ComputeTableStats.Result |
The result of table statistics collection.
|
ComputeTableStatsSparkAction |
Computes the statistics of the given columns and stores them as Puffin files.
|
ComputeUpdateIterator |
An iterator that finds delete/insert rows which represent an update, and converts them into
update records from changelog tables within a single Spark task.
|
ConfigProperties |
|
ConfigResponse |
Represents a response to requesting server-side provided configuration for the REST catalog.
|
ConfigResponse.Builder |
|
ConfigResponseParser |
|
Configurable<C> |
Interface used to avoid runtime dependencies on Hadoop Configurable
|
Container<T> |
A simple container of objects that you can get and set.
|
ContentCache |
Class that provides file-content caching during reading.
|
ContentFile<F> |
|
ContentFileUtil |
|
ContentScanTask<F extends ContentFile<F>> |
A scan task over a range of bytes in a content file.
|
ContinuousIcebergEnumerator |
|
ContinuousSplitPlanner |
This interface is introduced so that different split planners can be plugged in for unit tests.
|
ContinuousSplitPlannerImpl |
|
Conversions |
|
ConvertEqualityDeleteFiles |
An action for converting the equality delete files to position delete files.
|
ConvertEqualityDeleteFiles.Result |
The action result that contains a summary of the execution.
|
ConvertEqualityDeleteStrategy |
A strategy for the action to convert equality delete to position deletes.
|
ConverterReaderFunction<T> |
|
CountAggregate<T> |
|
Counter |
Generalized Counter interface for creating telemetry-related instances when counting events.
|
CounterResult |
A serializable version of a Counter that carries its result.
|
CountNonNull<T> |
|
CountStar<T> |
|
CreateChangelogViewProcedure |
A procedure that creates a view for changed rows.
|
CreateNamespaceRequest |
A REST request to create a namespace, with an optional set of properties.
|
CreateNamespaceRequest.Builder |
|
CreateNamespaceResponse |
Represents a REST response for a request to create a namespace / database.
|
CreateNamespaceResponse.Builder |
|
CreateSnapshotEvent |
|
CreateTableRequest |
A REST request to create a table, either via direct commit or staging the creation of the table
as part of a transaction.
|
CreateTableRequest.Builder |
|
CreateViewRequest |
|
CreateViewRequestParser |
|
Credential |
|
CredentialParser |
|
CredentialSupplier |
Interface used to expose credentials held by a FileIO instance.
|
DataComplete |
A control event payload for events sent by a worker that indicates it has finished sending all
data for a commit request.
|
DataFile |
Interface for data files listed in a table manifest.
|
DataFiles |
|
DataFiles.Builder |
|
DataFileSet |
|
DataFilesTable |
A Table implementation that exposes a table's data files as rows.
|
DataFilesTable.DataFilesTableScan |
|
DataIterator<T> |
|
DataIteratorBatcher<T> |
Batcher converts an iterator of T into an iterator of batched
RecordsWithSplitIds<RecordAndPosition<T>>, as FLIP-27's SplitReader.fetch() returns
batched records.
|
DataIteratorReaderFunction<T> |
|
DataOperations |
Data operations that produce snapshots.
|
DataReader<T> |
|
DataStatisticsCoordinatorProvider |
DataStatisticsCoordinatorProvider provides the method to create new DataStatisticsCoordinator
|
DataStatisticsOperator |
DataStatisticsOperator collects traffic distribution statistics.
|
DataStatisticsOperatorFactory |
|
DataTableScan |
|
DataTask |
A task that returns data as rows instead of where to read data.
|
DataTaskReader |
|
DataWriter<T> |
|
DataWriter<T> |
|
DataWriteResult |
A result of writing data files.
|
DataWritten |
A control event payload for events sent by a worker that contains the table data that has been
written and is ready to commit.
|
DateTimeUtil |
|
Days<T> |
|
DaysFunction |
A Spark function implementation for the Iceberg day transform.
|
DaysFunction.DateToDaysFunction |
|
DaysFunction.TimestampNtzToDaysFunction |
|
DaysFunction.TimestampToDaysFunction |
|
DecimalUtil |
|
DecimalVectorUtil |
|
DecoderResolver |
Resolver to resolve Decoder to a ResolvingDecoder.
|
DefaultCounter |
A default Counter implementation that uses an AtomicLong to count events.
|
DefaultMetricsContext |
A default MetricsContext implementation that uses native Java counters/timers.
|
DefaultSplitAssigner |
Since all methods are called in the source coordinator thread by the enumerator, there is no
need for locking.
|
DefaultTimer |
A default Timer implementation that uses a Stopwatch instance internally to
measure time.
|
DelegateFileIO |
This interface is intended as an extension for FileIO implementations that support being a
delegate target.
|
DelegatingInputStream |
|
DelegatingOutputStream |
|
DeleteCounter |
A counter to be used to count deletes as they are applied.
|
DeletedColumnVector |
|
DeletedDataFileScanTask |
A scan task for deletes generated by removing a data file from the table.
|
DeletedRowsScanTask |
A scan task for deletes generated by adding delete files to the table.
|
DeleteFile |
Interface for delete files listed in a table delete manifest.
|
DeleteFiles |
API for deleting files from a table.
|
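A minimal DeleteFiles sketch, assuming `table` is a loaded Table; data files whose rows match the filter are removed from table metadata (the column name and value are hypothetical):

```java
import org.apache.iceberg.expressions.Expressions;

// Remove data files matching the row filter and commit a new snapshot.
table.newDelete()
    .deleteFromRowFilter(Expressions.equal("event_date", "2024-01-01"))
    .commit();
```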
DeleteFileSet |
|
DeleteFilesTable |
A Table implementation that exposes a table's delete files as rows.
|
DeleteFilesTable.DeleteFilesTableScan |
|
DeleteFilter<T> |
|
DeleteGranularity |
An enum that represents the granularity of deletes.
|
DeleteLoader |
An API for loading delete file content into in-memory data structures.
|
DeleteOrphanFiles |
An action that deletes orphan metadata, data and delete files in a table.
|
DeleteOrphanFiles.PrefixMismatchMode |
Defines the action behavior when location prefixes (scheme/authority) mismatch.
|
DeleteOrphanFiles.Result |
The action result that contains a summary of the execution.
|
DeleteOrphanFilesSparkAction |
An action that removes orphan metadata, data and delete files by listing a given location and
comparing the actual files in that location with content and metadata files referenced by all
valid snapshots.
|
DeleteOrphanFilesSparkAction.FileURI |
|
DeleteReachableFiles |
An action that deletes all files referenced by a table metadata file.
|
DeleteReachableFiles.Result |
The action result that contains a summary of the execution.
|
DeleteReachableFilesSparkAction |
An implementation of DeleteReachableFiles that uses metadata tables in Spark to determine
which files should be deleted.
|
Deletes |
|
DeleteSchemaUtil |
|
DeleteWriteResult |
A result of writing delete files.
|
DellClientFactories |
|
DellClientFactory |
|
DellProperties |
|
DeltaLakeToIcebergMigrationActionsProvider |
An API that provides actions for migrating a Delta Lake table to an Iceberg table.
|
DeltaLakeToIcebergMigrationActionsProvider.DefaultDeltaLakeToIcebergMigrationActions |
|
DictEncodedArrowConverter |
This converts dictionary encoded arrow vectors to a correctly typed arrow vector.
|
DistributionMode |
Enum of supported write distribution modes; it defines the write behavior of batch or streaming
jobs.
|
DoubleFieldMetrics |
Iceberg internally tracked field level metrics, used by Parquet and ORC writers only.
|
DoubleFieldMetrics.Builder |
|
DuplicateWAPCommitException |
This exception occurs when the WAP workflow detects a duplicate wap commit.
|
DynamoDbCatalog |
DynamoDB implementation of Iceberg catalog
|
DynamoDbLockManager |
DynamoDB implementation for the lock manager.
|
DynClasses |
|
DynClasses.Builder |
|
DynConstructors |
Copied from parquet-common
|
DynConstructors.Builder |
|
DynConstructors.Ctor<C> |
|
DynFields |
|
DynFields.BoundField<T> |
|
DynFields.Builder |
|
DynFields.StaticField<T> |
|
DynFields.UnboundField<T> |
Convenience wrapper class around Field.
|
DynMethods |
Copied from parquet-common
|
DynMethods.BoundMethod |
|
DynMethods.Builder |
|
DynMethods.StaticMethod |
|
DynMethods.UnboundMethod |
Convenience wrapper class around Method.
|
EcsCatalog |
|
EcsFileIO |
FileIO implementation backed by Dell EMC ECS.
|
EcsTableOperations |
|
ElapsedTimeGauge |
|
EncryptedFiles |
|
EncryptedInputFile |
Thin wrapper around an InputFile instance that is encrypted.
|
EncryptedOutputFile |
Thin wrapper around an OutputFile that encrypts bytes written to the underlying file
system, via an encryption key that is symbolized by the enclosed EncryptionKeyMetadata.
|
EncryptingFileIO |
|
EncryptionAlgorithm |
Algorithm supported for file encryption.
|
EncryptionKeyMetadata |
Light typedef over a ByteBuffer that indicates that the given bytes represent metadata about an
encrypted data file's encryption key.
|
EncryptionKeyMetadatas |
|
EncryptionManager |
Module for encrypting and decrypting table data files.
|
EncryptionUtil |
|
Endpoint |
Holds an endpoint definition that consists of the HTTP method (GET, POST, DELETE, ...) and the
resource path as defined in the Iceberg OpenAPI REST specification without parameter
substitution, such as /v1/{prefix}/namespaces/{namespace}.
|
EnvironmentContext |
|
EnvironmentUtil |
|
EqualityDeleteFiles |
|
EqualityDeleteRowReader |
|
EqualityDeleteWriter<T> |
|
EqualityDeltaWriter<T> |
A writer capable of writing data and equality deletes that may belong to different specs and
partitions.
|
ErrorHandler |
|
ErrorHandlers |
A set of consumers to handle errors for requests for table entities or for namespace entities, to
throw the correct exception.
|
ErrorResponse |
Standard response body for all API errors
|
ErrorResponse.Builder |
|
ErrorResponseParser |
|
EstimateOrcAvgWidthVisitor |
|
Evaluator |
|
Event |
Class representing all events produced to the control topic.
|
Exceptions |
|
ExceptionUtil |
|
ExceptionUtil.Block<R,E1 extends java.lang.Exception,E2 extends java.lang.Exception,E3 extends java.lang.Exception> |
|
ExceptionUtil.CatchBlock |
|
ExceptionUtil.FinallyBlock |
|
ExpireSnapshots |
An action that expires snapshots in a table.
|
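A minimal sketch of the expire-snapshots action through SparkActions, assuming a SparkSession named `spark` and a loaded `table` (the retention values are hypothetical):

```java
import java.util.concurrent.TimeUnit;
import org.apache.iceberg.spark.actions.SparkActions;

SparkActions.get(spark)
    .expireSnapshots(table)
    .expireOlderThan(System.currentTimeMillis() - TimeUnit.DAYS.toMillis(7))
    .retainLast(5) // always keep the 5 most recent snapshots
    .execute();
```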
ExpireSnapshots |
|
ExpireSnapshots.Result |
The action result that contains a summary of the execution.
|
ExpireSnapshotsProcedure |
A procedure that expires snapshots in a table.
|
ExpireSnapshotsSparkAction |
An action that performs the same operation as ExpireSnapshots but uses
Spark to determine the delta in files between the pre and post-expiration table metadata.
|
Expression |
Represents a boolean expression tree.
|
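A short sketch of building such a tree with the Expressions factory methods (the column names and values are hypothetical):

```java
import org.apache.iceberg.expressions.Expression;
import org.apache.iceberg.expressions.Expressions;

// (category = 'books' AND price >= 10) OR id IS NULL
Expression expr = Expressions.or(
    Expressions.and(
        Expressions.equal("category", "books"),
        Expressions.greaterThanOrEqual("price", 10)),
    Expressions.isNull("id"));
```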
Expression.Operation |
|
ExpressionParser |
|
Expressions |
|
ExpressionUtil |
Expression utility methods.
|
ExpressionVisitors |
|
ExpressionVisitors.BoundExpressionVisitor<R> |
|
ExpressionVisitors.BoundVisitor<R> |
|
ExpressionVisitors.CustomOrderExpressionVisitor<R> |
|
ExpressionVisitors.ExpressionVisitor<R> |
|
ExtendedParser |
|
ExtendedParser.RawOrderField |
|
False |
|
FanoutDataWriter<T> |
A data writer capable of writing to multiple specs and partitions that keeps data writers for
each seen spec/partition pair open until this writer is closed.
|
FanoutPositionOnlyDeleteWriter<T> |
A position delete writer capable of writing to multiple specs and partitions if the incoming
stream of deletes is not ordered.
|
FastForwardBranchProcedure |
|
FieldMetrics<T> |
Iceberg internally tracked field level metrics.
|
FileAppender<D> |
|
FileAppenderFactory<T> |
|
FileContent |
Content type stored in a file, one of DATA, POSITION_DELETES, or EQUALITY_DELETES.
|
FileFormat |
Enum of supported file formats.
|
FileInfo |
|
FileInfo |
|
FileIO |
Pluggable module for reading, writing, and deleting files.
|
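A minimal FileIO sketch using the Hadoop-backed implementation; the paths are hypothetical, and other backends such as S3FileIO expose the same interface:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.hadoop.HadoopFileIO;
import org.apache.iceberg.io.FileIO;
import org.apache.iceberg.io.InputFile;
import org.apache.iceberg.io.OutputFile;
import org.apache.iceberg.io.PositionOutputStream;

FileIO io = new HadoopFileIO(new Configuration());

// Write a file, read its length back, then delete it.
OutputFile out = io.newOutputFile("hdfs://nn:8020/tmp/example.bin");
try (PositionOutputStream stream = out.create()) {
  stream.write(new byte[] {1, 2, 3});
}
InputFile in = io.newInputFile("hdfs://nn:8020/tmp/example.bin");
long length = in.getLength();
io.deleteFile("hdfs://nn:8020/tmp/example.bin");
```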
FileIOMetricsContext |
Extension of MetricsContext for use with FileIO to define standard metrics that should be
reported.
|
FileIOParser |
|
FileIOTracker |
|
FileMetadata |
|
FileMetadata |
|
FileMetadata.Builder |
|
FileMetadataParser |
|
FileRewriteCoordinator |
|
FileRewriter<T extends ContentScanTask<F>,F extends ContentFile<F>> |
A class for rewriting content files.
|
Files |
|
FileScanTask |
A scan task over a range of bytes in a single data file.
|
FileScanTaskParser |
|
FileScanTaskReader<T> |
|
FileScopedPositionDeleteWriter<T> |
A position delete writer that produces a separate delete file for each referenced data file.
|
FilesTable |
A Table implementation that exposes a table's files as rows.
|
FilesTable.FilesTableScan |
|
FileWriter<T,R> |
A writer capable of writing files of a single type (i.e. data or delete files).
|
FileWriterFactory<T> |
A factory for creating data and delete writers.
|
Filter<T> |
A class for generic filters.
|
FilterIterator<T> |
An Iterator that filters another Iterator.
|
FindFiles |
|
FindFiles.Builder |
|
FixedReservoirHistogram |
A Histogram implementation with reservoir sampling.
|
FixupTypes |
This is used to fix primitive types to match a table schema.
|
FlinkAlterTableUtil |
|
FlinkAppenderFactory |
|
FlinkAvroReader |
Deprecated.
|
FlinkAvroWriter |
|
FlinkCatalog |
A Flink Catalog implementation that wraps an Iceberg Catalog.
|
FlinkCatalogFactory |
A Flink Catalog factory implementation that creates FlinkCatalog.
|
FlinkCompatibilityUtil |
A small utility class that tries to hide calls to Flink Internal or PublicEvolving interfaces,
as Flink can change those APIs during minor version releases.
|
FlinkConfigOptions |
When constructing the Flink Iceberg source via the Java API, configs can be set in the
Configuration passed to the source builder.
|
FlinkDynamicTableFactory |
|
FlinkFilters |
|
FlinkInputFormat |
Flink InputFormat for Iceberg.
|
FlinkInputSplit |
|
FlinkOrcReader |
|
FlinkOrcWriter |
|
FlinkPackage |
|
FlinkParquetReaders |
|
FlinkParquetWriters |
|
FlinkPlannedAvroReader |
|
FlinkReadConf |
|
FlinkReadOptions |
Flink source read options
|
FlinkSchemaUtil |
Converter between Flink types and Iceberg types.
|
FlinkSink |
|
FlinkSink.Builder |
|
FlinkSource |
Deprecated.
|
FlinkSource.Builder |
Source builder to build a DataStream.
|
FlinkSourceFilter |
|
FlinkSplitPlanner |
|
FlinkTypeVisitor<T> |
|
FlinkValueReaders |
|
FlinkValueWriters |
|
FlinkWriteConf |
A class for common Iceberg configs for Flink writes.
|
FlinkWriteOptions |
Flink sink write options
|
FlinkWriteResult |
|
FloatFieldMetrics |
Iceberg internally tracked field level metrics, used by Parquet and ORC writers only.
|
FloatFieldMetrics.Builder |
|
ForbiddenException |
Exception thrown on HTTP 403 Forbidden - Failed authorization checks.
|
GCPProperties |
|
GCSFileIO |
FileIO implementation backed by Google Cloud Storage (GCS).
|
GenericAppenderFactory |
|
GenericArrowVectorAccessorFactory<DecimalT,Utf8StringT,ArrayT,ChildVectorT extends java.lang.AutoCloseable> |
|
GenericArrowVectorAccessorFactory.ArrayFactory<ChildVectorT,ArrayT> |
Create an array value of type ArrayT from arrow vector value.
|
GenericArrowVectorAccessorFactory.DecimalFactory<DecimalT> |
Create a decimal value of type DecimalT from arrow vector value.
|
GenericArrowVectorAccessorFactory.StringFactory<Utf8StringT> |
Create a UTF8 String value of type Utf8StringT from arrow vector value.
|
GenericArrowVectorAccessorFactory.StructChildFactory<ChildVectorT> |
Create a struct child vector of type ChildVectorT from arrow vector value.
|
GenericAvroReader<T> |
|
GenericAvroWriter<T> |
|
GenericBlobMetadata |
|
GenericDeleteFilter |
|
GenericManifestFile |
|
GenericManifestFile.CopyBuilder |
|
GenericOrcReader |
|
GenericOrcReaders |
|
GenericOrcWriter |
|
GenericOrcWriters |
|
GenericOrcWriters.StructWriter<S> |
|
GenericParquetReaders |
|
GenericParquetWriter |
|
GenericPartitionFieldSummary |
|
GenericPartitionStatisticsFile |
|
GenericRecord |
|
GenericStatisticsFile |
|
GetNamespaceResponse |
Represents a REST response to fetch a namespace and its metadata properties
|
GetNamespaceResponse.Builder |
|
GetSplitResult |
|
GetSplitResult.Status |
|
GlueCatalog |
|
GuavaClasses |
|
HadoopCatalog |
HadoopCatalog provides a way to use table names like db.table to work with path-based tables
under a common location.
|
HadoopConfigurable |
An interface that extends the Hadoop Configurable interface to offer better serialization
support for customizable Iceberg objects such as FileIO .
|
HadoopFileIO |
|
HadoopInputFile |
InputFile implementation using the Hadoop FileSystem API.
|
HadoopMetricsContext |
FileIO Metrics implementation that delegates to Hadoop FileSystem statistics implementation using
the provided scheme.
|
HadoopOutputFile |
OutputFile implementation using the Hadoop FileSystem API.
|
HadoopStreams |
Convenience methods to get Parquet abstractions for Hadoop data streams.
|
HadoopTableOperations |
TableOperations implementation for file systems that support atomic rename.
|
HadoopTables |
Implementation of Iceberg tables that uses the Hadoop FileSystem to store metadata and manifests.
|
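A minimal HadoopTables sketch for path-based tables, assuming `schema` is a previously defined Schema (the location is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;

HadoopTables tables = new HadoopTables(new Configuration());

// Create a table at a filesystem location, then load it back by path.
Table created = tables.create(schema, PartitionSpec.unpartitioned(),
    "hdfs://nn:8020/warehouse/db/events");
Table loaded = tables.load("hdfs://nn:8020/warehouse/db/events");
```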
HasIcebergCatalog |
|
HasTableOperations |
Used to expose a table's TableOperations.
|
HiddenPathFilter |
A PathFilter that filters out hidden paths.
|
Histogram |
|
Histogram.Statistics |
|
HistoryEntry |
Table history entry.
|
HistoryTable |
A Table implementation that exposes a table's history as rows.
|
HiveCatalog |
|
HiveClientPool |
|
HiveHadoopUtil |
|
HiveIcebergFilterFactory |
|
HiveIcebergInputFormat |
|
HiveIcebergMetaHook |
|
HiveIcebergOutputCommitter |
An Iceberg table committer for adding data files to the Iceberg tables.
|
HiveIcebergOutputFormat<T> |
|
HiveIcebergSerDe |
|
HiveIcebergSplit |
|
HiveIcebergStorageHandler |
|
HiveSchemaUtil |
|
HiveTableOperations |
TODO we should be able to extract some more commonalities to BaseMetastoreTableOperations to
avoid code duplication between this class and Metacat Tables.
|
HiveVersion |
|
Hours<T> |
|
HoursFunction |
A Spark function implementation for the Iceberg hour transform.
|
HoursFunction.TimestampNtzToHoursFunction |
|
HoursFunction.TimestampToHoursFunction |
|
HTTPClient |
An HttpClient for usage with the REST catalog.
|
HTTPClient.Builder |
|
HttpClientProperties |
|
IcebergArrowColumnVector |
Implementation of Spark's ColumnVector interface.
|
IcebergBinaryObjectInspector |
|
IcebergBuild |
Loads iceberg-version.properties with build information.
|
IcebergDateObjectInspector |
|
IcebergDecimalObjectInspector |
|
IcebergDecoder<D> |
|
IcebergEncoder<D> |
|
IcebergEnumeratorState |
Enumerator state for checkpointing
|
IcebergEnumeratorStateSerializer |
|
IcebergFixedObjectInspector |
|
IcebergGenerics |
|
IcebergGenerics.ScanBuilder |
|
IcebergInputFormat<T> |
Generic MR v2 InputFormat API for Iceberg.
|
IcebergObjectInspector |
|
IcebergPigInputFormat<T> |
Deprecated.
|
IcebergRecordObjectInspector |
|
IcebergSink |
The Flink v2 sink offers different hooks to insert custom topologies into the sink.
|
IcebergSink.Builder |
|
IcebergSinkConfig |
|
IcebergSinkConnector |
|
IcebergSinkTask |
|
IcebergSource<T> |
|
IcebergSource |
The IcebergSource loads/writes tables with format "iceberg".
|
IcebergSource.Builder<T> |
|
IcebergSourceReader<T> |
|
IcebergSourceReaderMetrics |
|
IcebergSourceSplit |
|
IcebergSourceSplitSerializer |
|
IcebergSourceSplitState |
|
IcebergSourceSplitStatus |
|
IcebergSpark |
|
IcebergSplit |
|
IcebergSplitContainer |
|
IcebergSqlExtensionsBaseListener |
This class provides an empty implementation of IcebergSqlExtensionsListener,
which can be extended to create a listener which only needs to handle a subset
of the available methods.
|
IcebergSqlExtensionsBaseVisitor<T> |
This class provides an empty implementation of IcebergSqlExtensionsVisitor,
which can be extended to create a visitor which only needs to handle a subset
of the available methods.
|
IcebergSqlExtensionsLexer |
|
IcebergSqlExtensionsListener |
|
IcebergSqlExtensionsParser |
|
IcebergSqlExtensionsParser.AddPartitionFieldContext |
|
IcebergSqlExtensionsParser.ApplyTransformContext |
|
IcebergSqlExtensionsParser.BigDecimalLiteralContext |
|
IcebergSqlExtensionsParser.BigIntLiteralContext |
|
IcebergSqlExtensionsParser.BooleanLiteralContext |
|
IcebergSqlExtensionsParser.BooleanValueContext |
|
IcebergSqlExtensionsParser.BranchOptionsContext |
|
IcebergSqlExtensionsParser.CallArgumentContext |
|
IcebergSqlExtensionsParser.CallContext |
|
IcebergSqlExtensionsParser.ConstantContext |
|
IcebergSqlExtensionsParser.CreateOrReplaceBranchContext |
|
IcebergSqlExtensionsParser.CreateOrReplaceTagContext |
|
IcebergSqlExtensionsParser.CreateReplaceBranchClauseContext |
|
IcebergSqlExtensionsParser.CreateReplaceTagClauseContext |
|
IcebergSqlExtensionsParser.DecimalLiteralContext |
|
IcebergSqlExtensionsParser.DoubleLiteralContext |
|
IcebergSqlExtensionsParser.DropBranchContext |
|
IcebergSqlExtensionsParser.DropIdentifierFieldsContext |
|
IcebergSqlExtensionsParser.DropPartitionFieldContext |
|
IcebergSqlExtensionsParser.DropTagContext |
|
IcebergSqlExtensionsParser.ExponentLiteralContext |
|
IcebergSqlExtensionsParser.ExpressionContext |
|
IcebergSqlExtensionsParser.FieldListContext |
|
IcebergSqlExtensionsParser.FloatLiteralContext |
|
IcebergSqlExtensionsParser.IdentifierContext |
|
IcebergSqlExtensionsParser.IdentityTransformContext |
|
IcebergSqlExtensionsParser.IntegerLiteralContext |
|
IcebergSqlExtensionsParser.MaxSnapshotAgeContext |
|
IcebergSqlExtensionsParser.MinSnapshotsToKeepContext |
|
IcebergSqlExtensionsParser.MultipartIdentifierContext |
|
IcebergSqlExtensionsParser.NamedArgumentContext |
|
IcebergSqlExtensionsParser.NonReservedContext |
|
IcebergSqlExtensionsParser.NumberContext |
|
IcebergSqlExtensionsParser.NumericLiteralContext |
|
IcebergSqlExtensionsParser.NumSnapshotsContext |
|
IcebergSqlExtensionsParser.OrderContext |
|
IcebergSqlExtensionsParser.OrderFieldContext |
|
IcebergSqlExtensionsParser.PositionalArgumentContext |
|
IcebergSqlExtensionsParser.QuotedIdentifierAlternativeContext |
|
IcebergSqlExtensionsParser.QuotedIdentifierContext |
|
IcebergSqlExtensionsParser.RefRetainContext |
|
IcebergSqlExtensionsParser.ReplacePartitionFieldContext |
|
IcebergSqlExtensionsParser.SetIdentifierFieldsContext |
|
IcebergSqlExtensionsParser.SetWriteDistributionAndOrderingContext |
|
IcebergSqlExtensionsParser.SingleOrderContext |
|
IcebergSqlExtensionsParser.SingleStatementContext |
|
IcebergSqlExtensionsParser.SmallIntLiteralContext |
|
IcebergSqlExtensionsParser.SnapshotIdContext |
|
IcebergSqlExtensionsParser.SnapshotRetentionContext |
|
IcebergSqlExtensionsParser.StatementContext |
|
IcebergSqlExtensionsParser.StringArrayContext |
|
IcebergSqlExtensionsParser.StringLiteralContext |
|
IcebergSqlExtensionsParser.StringMapContext |
|
IcebergSqlExtensionsParser.TagOptionsContext |
|
IcebergSqlExtensionsParser.TimeUnitContext |
|
IcebergSqlExtensionsParser.TinyIntLiteralContext |
|
IcebergSqlExtensionsParser.TransformArgumentContext |
|
IcebergSqlExtensionsParser.TransformContext |
|
IcebergSqlExtensionsParser.TypeConstructorContext |
|
IcebergSqlExtensionsParser.UnquotedIdentifierContext |
|
IcebergSqlExtensionsParser.WriteDistributionSpecContext |
|
IcebergSqlExtensionsParser.WriteOrderingSpecContext |
|
IcebergSqlExtensionsParser.WriteSpecContext |
|
IcebergSqlExtensionsVisitor<T> |
|
IcebergStorage |
Deprecated.
|
IcebergTableSink |
|
IcebergTableSource |
Flink Iceberg table source.
|
IcebergTimeObjectInspector |
|
IcebergTimestampObjectInspector |
|
IcebergTimestampWithZoneObjectInspector |
|
IcebergUUIDObjectInspector |
|
IcebergVersionFunction |
A function for use in SQL that returns the current Iceberg version.
|
IcebergWriterResult |
|
IdentityPartitionConverters |
|
InclusiveMetricsEvaluator |
|
IncrementalAppendScan |
API for configuring an incremental table scan over append-only snapshots.
|
IncrementalChangelogScan |
API for configuring a scan for table changes.
|
IncrementalScan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>> |
API for configuring an incremental scan.
|
IncrementalScanEvent |
Event sent to listeners when an incremental table scan is planned.
|
IndexByName |
|
IndexedDeleteFiles |
|
IndexParents |
|
InMemoryCatalog |
Catalog implementation that uses in-memory data-structures to store the namespaces and tables.
|
InMemoryFileIO |
|
InMemoryInputFile |
|
InMemoryMetricsReporter |
|
InMemoryOutputFile |
|
InputFile |
|
InputFilesDecryptor |
|
InputFormatConfig |
|
InputFormatConfig.ConfigBuilder |
|
InputFormatConfig.InMemoryDataModel |
|
InternalReader<T> |
A reader that produces Iceberg's internal in-memory object model.
|
InternalRecordWrapper |
|
IOUtil |
|
IsolationLevel |
An isolation level in a table.
|
JavaHash<T> |
|
JavaHashes |
|
JdbcCatalog |
|
JdbcClientPool |
|
JdbcLockFactory |
|
JdbcViewOperations |
JDBC implementation of Iceberg ViewOperations.
|
JobGroupInfo |
Captures information about the current job, which is used for display in the UI.
|
JobGroupUtils |
|
JsonUtil |
|
JsonUtil.FromJson<T> |
|
JsonUtil.ToJson |
|
KmsClient |
Deprecated.
|
KmsClient.KeyGenerationResult |
For KMS systems that support key generation, this class keeps the key generation result - the
raw secret key, and its wrap.
|
LakeFormationAwsClientFactory |
|
Listener<E> |
A listener interface that can receive notifications.
|
Listeners |
Static registration and notification for listeners.
|
ListNamespacesResponse |
|
ListNamespacesResponse.Builder |
|
ListTablesResponse |
A list of table identifiers in a given namespace.
|
ListTablesResponse.Builder |
|
Literal<T> |
Represents a literal fixed value in an expression predicate
|
LoadCredentialsResponse |
|
LoadCredentialsResponseParser |
|
LoadTableResponse |
A REST response that is used when a table is successfully loaded.
|
LoadTableResponse.Builder |
|
LoadTableResponseParser |
|
LoadViewResponse |
|
LoadViewResponseParser |
|
LocationProvider |
Interface for providing data file locations to write tasks.
|
LocationProviders |
|
LocationUtil |
|
LockManager |
An interface for locking, used to ensure commit isolation.
|
LockManagers |
|
LockManagers.BaseLockManager |
|
LockRemover |
Manages locks and collects metrics for the maintenance tasks.
|
LoggingMetricsReporter |
|
LogicalMap |
|
ManageSnapshots |
API for managing snapshots.
|
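A minimal ManageSnapshots sketch, assuming `table` is a loaded Table and `olderSnapshotId` is a known snapshot ID:

```java
// Roll the table back to an earlier snapshot in one commit.
table.manageSnapshots()
    .rollbackTo(olderSnapshotId)
    .commit();
```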
ManifestContent |
Content type stored in a manifest file, either DATA or DELETES.
|
ManifestEntriesTable |
A Table implementation that exposes a table's manifest entries as rows, for both delete
and data files.
|
ManifestEvaluator |
|
ManifestFile |
Represents a manifest file that can be scanned to find files in a table.
|
ManifestFile.PartitionFieldSummary |
Summarizes the values of one partition field stored in a manifest file.
|
ManifestFileBean |
A serializable bean that contains a bare minimum to read a manifest.
|
ManifestFiles |
|
ManifestFileUtil |
|
ManifestReader<F extends ContentFile<F>> |
Base reader for data and delete manifest files.
|
ManifestReader.FileType |
|
ManifestsTable |
A Table implementation that exposes a table's manifest files as rows.
|
ManifestWriter<F extends ContentFile<F>> |
Writer for manifest files.
|
MappedField |
An immutable mapping between a field ID and a set of names.
|
MappedFields |
|
MappingUtil |
|
MapredIcebergInputFormat<T> |
Generic MR v1 InputFormat API for Iceberg.
|
MapredIcebergInputFormat.CompatibilityTaskAttemptContextImpl |
|
MaxAggregate<T> |
|
MergeableScanTask<ThisT> |
A scan task that can be potentially merged with other scan tasks.
|
MetadataColumns |
|
MetadataLogEntriesTable |
|
MetaDataReaderFunction |
Reads metadata tables (like snapshots, manifests, etc.).
|
MetadataTableType |
|
MetadataTableUtils |
|
MetadataUpdate |
Represents a change to table or view metadata.
|
MetadataUpdate.AddPartitionSpec |
|
MetadataUpdate.AddSchema |
|
MetadataUpdate.AddSnapshot |
|
MetadataUpdate.AddSortOrder |
|
MetadataUpdate.AddViewVersion |
|
MetadataUpdate.AssignUUID |
|
MetadataUpdate.RemovePartitionStatistics |
|
MetadataUpdate.RemoveProperties |
|
MetadataUpdate.RemoveSnapshot |
|
MetadataUpdate.RemoveSnapshotRef |
|
MetadataUpdate.RemoveStatistics |
|
MetadataUpdate.SetCurrentSchema |
|
MetadataUpdate.SetCurrentViewVersion |
|
MetadataUpdate.SetDefaultPartitionSpec |
|
MetadataUpdate.SetDefaultSortOrder |
|
MetadataUpdate.SetLocation |
|
MetadataUpdate.SetPartitionStatistics |
|
MetadataUpdate.SetProperties |
|
MetadataUpdate.SetSnapshotRef |
|
MetadataUpdate.SetStatistics |
|
MetadataUpdate.UpgradeFormatVersion |
|
MetadataUpdateParser |
|
MetastoreUtil |
|
Metrics |
Iceberg file format metrics.
|
MetricsAwareDatumWriter<D> |
Wrapper writer around DatumWriter with metrics support.
|
MetricsConfig |
|
MetricsContext |
Generalized interface for creating telemetry related instances for tracking operations.
|
MetricsContext.Counter<T extends java.lang.Number> |
Deprecated.
|
MetricsContext.Unit |
|
MetricsModes |
This class defines different metrics modes, which allow users to control the collection of
value_counts, null_value_counts, nan_value_counts, lower_bounds, upper_bounds for different
columns in metadata.
|
MetricsModes.Counts |
Under this mode, only value_counts, null_value_counts, nan_value_counts are persisted.
|
MetricsModes.Full |
Under this mode, value_counts, null_value_counts, nan_value_counts and full lower_bounds,
upper_bounds are persisted.
|
MetricsModes.MetricsMode |
A metrics calculation mode.
|
MetricsModes.None |
Under this mode, value_counts, null_value_counts, nan_value_counts, lower_bounds, upper_bounds
are not persisted.
|
MetricsModes.Truncate |
Under this mode, value_counts, null_value_counts, nan_value_counts and truncated lower_bounds,
upper_bounds are persisted.
|
MetricsReport |
|
MetricsReporter |
This interface defines the basic API for reporting metrics for operations to a Table.
|
MetricsReporters |
|
MetricsUtil |
|
MetricsUtil.ReadableColMetricsStruct |
A struct of readable metric values for a primitive column
|
MetricsUtil.ReadableMetricColDefinition |
Fixed definition of a readable metric column, i.e. a mapping of a raw metric to a readable metric.
|
MetricsUtil.ReadableMetricColDefinition.MetricFunction |
|
MetricsUtil.ReadableMetricColDefinition.TypeFunction |
|
MetricsUtil.ReadableMetricsStruct |
|
MicroBatches |
|
MicroBatches.MicroBatch |
|
MicroBatches.MicroBatchBuilder |
|
MigrateTable |
An action that migrates an existing table to Iceberg.
|
MigrateTable.Result |
The action result that contains a summary of the execution.
|
MigrateTableSparkAction |
Takes a Spark table in the source catalog and attempts to transform it into an Iceberg table in
the same location with the same identifier.
|
MinAggregate<T> |
|
Months<T> |
|
MonthsFunction |
A Spark function implementation for the Iceberg month transform.
|
MonthsFunction.DateToMonthsFunction |
|
MonthsFunction.TimestampNtzToMonthsFunction |
|
MonthsFunction.TimestampToMonthsFunction |
|
NamedReference<T> |
|
NameMapping |
Represents a mapping from external schema names to Iceberg type IDs.
|
NameMappingDatumReader<D> |
A delegating DatumReader that applies a name mapping to a file schema to enable reading Avro
files that were written without field IDs.
|
NameMappingParser |
Parses external name mappings from a JSON representation.
|
NameMappingWithAvroSchema |
|
Namespace |
|
NamespaceNotEmptyException |
Exception raised when attempting to drop a namespace that is not empty.
|
NaNUtil |
|
NativeEncryptionInputFile |
|
NativeEncryptionKeyMetadata |
|
NativeEncryptionOutputFile |
|
NativeFileCryptoParameters |
Barebone encryption parameters, one object per content file.
|
NativeFileCryptoParameters.Builder |
|
NativelyEncryptedFile |
This interface is applied to OutputFile and InputFile implementations, in order to enable
delivery of crypto parameters (such as encryption keys etc) from the Iceberg key management
module to the writers/readers of file formats that support encryption natively (Parquet and ORC).
|
NDVSketchUtil |
|
NessieCatalog |
Nessie implementation of Iceberg Catalog.
|
NessieIcebergClient |
|
NessieTableOperations |
Nessie implementation of Iceberg TableOperations.
|
NessieUtil |
|
NessieViewOperations |
|
NoLock |
|
NoSuchIcebergTableException |
NoSuchTableException thrown when a table is found but it is not an Iceberg table.
|
NoSuchIcebergViewException |
NoSuchIcebergViewException thrown when a view is found, but it is not an Iceberg view.
|
NoSuchNamespaceException |
Exception raised when attempting to load a namespace that does not exist.
|
NoSuchProcedureException |
|
NoSuchTableException |
Exception raised when attempting to load a table that does not exist.
|
NoSuchViewException |
Exception raised when attempting to load a view that does not exist.
|
Not |
|
NotAuthorizedException |
Exception thrown on HTTP 401 Unauthorized.
|
NotFoundException |
Exception raised when attempting to read a file that does not exist.
|
NotRunningException |
|
NullabilityHolder |
Instances of this class simply track whether a value at an index is null.
|
NullOrder |
|
NumDeletes |
|
NumSplits |
|
OAuth2Properties |
|
OAuth2RefreshCredentialsHandler |
|
OAuth2Util |
|
OAuth2Util.AuthSession |
Class to handle authorization headers and token refresh.
|
OAuthErrorResponseParser |
|
OAuthTokenResponse |
|
OAuthTokenResponse.Builder |
|
Offset |
|
Or |
|
ORC |
|
ORC.DataWriteBuilder |
|
ORC.DeleteWriteBuilder |
|
ORC.ReadBuilder |
|
ORC.WriteBuilder |
|
OrcBatchReader<T> |
Used for implementing ORC batch readers.
|
OrcMetrics |
|
OrcRowReader<T> |
Used for implementing ORC row readers.
|
OrcRowWriter<T> |
Writes data values of a schema.
|
ORCSchemaUtil |
Utilities for mapping Iceberg to ORC schemas.
|
ORCSchemaUtil.BinaryType |
|
ORCSchemaUtil.LongType |
|
OrcSchemaVisitor<T> |
Generic visitor of an ORC Schema.
|
OrcSchemaWithTypeVisitor<T> |
|
OrcValueReader<T> |
|
OrcValueReaders |
|
OrcValueReaders.StructReader<T> |
|
OrcValueWriter<T> |
|
OrderedSplitAssignerFactory |
Creates a default assigner with a comparator that hands out splits in an order defined by the
SerializableComparator.
|
OSSFileIO |
FileIO implementation backed by OSS.
|
OSSOutputStream |
|
OSSURI |
This class represents a fully qualified location in OSS for input/output operations, expressed
as a URI.
|
OutputFile |
|
OutputFileFactory |
Factory responsible for generating unique but recognizable data/delete file names.
|
OutputFileFactory.Builder |
|
OverwriteFiles |
API for overwriting files in a table.
|
Pair<X,Y> |
|
ParallelIterable<T> |
|
Parquet |
|
Parquet.DataWriteBuilder |
|
Parquet.DeleteWriteBuilder |
|
Parquet.ReadBuilder |
|
Parquet.WriteBuilder |
|
ParquetAvroReader |
|
ParquetAvroValueReaders |
|
ParquetAvroValueReaders.TimeMillisReader |
|
ParquetAvroValueReaders.TimestampMillisReader |
|
ParquetAvroWriter |
|
ParquetBloomRowGroupFilter |
|
ParquetCodecFactory |
This class implements a codec factory that is used when reading from Parquet.
|
ParquetDictionaryRowGroupFilter |
|
ParquetIterable<T> |
|
ParquetMetricsRowGroupFilter |
|
ParquetReader<T> |
|
ParquetSchemaUtil |
|
ParquetSchemaUtil.HasIds |
|
ParquetTypeVisitor<T> |
|
ParquetUtil |
|
ParquetValueReader<T> |
|
ParquetValueReaders |
|
ParquetValueReaders.BinaryAsDecimalReader |
|
ParquetValueReaders.ByteArrayReader |
|
ParquetValueReaders.BytesReader |
|
ParquetValueReaders.FloatAsDoubleReader |
|
ParquetValueReaders.IntAsLongReader |
|
ParquetValueReaders.IntegerAsDecimalReader |
|
ParquetValueReaders.ListReader<E> |
|
ParquetValueReaders.LongAsDecimalReader |
|
ParquetValueReaders.MapReader<K,V> |
|
ParquetValueReaders.PrimitiveReader<T> |
|
ParquetValueReaders.RepeatedKeyValueReader<M,I,K,V> |
|
ParquetValueReaders.RepeatedReader<T,I,E> |
|
ParquetValueReaders.ReusableEntry<K,V> |
|
ParquetValueReaders.StringReader |
|
ParquetValueReaders.StructReader<T,I> |
|
ParquetValueReaders.UnboxedReader<T> |
|
ParquetValueWriter<T> |
|
ParquetValueWriters |
|
ParquetValueWriters.PositionDeleteStructWriter<R> |
|
ParquetValueWriters.PrimitiveWriter<T> |
|
ParquetValueWriters.RepeatedKeyValueWriter<M,K,V> |
|
ParquetValueWriters.RepeatedWriter<L,E> |
|
ParquetValueWriters.StructWriter<S> |
|
ParquetWithFlinkSchemaVisitor<T> |
|
ParquetWithSparkSchemaVisitor<T> |
Visitor for traversing a Parquet type with a companion Spark type.
|
ParquetWriteAdapter<D> |
Deprecated.
|
PartitionData |
|
PartitionedFanoutWriter<T> |
|
PartitionedWriter<T> |
|
PartitionField |
|
Partitioning |
|
PartitioningWriter<T,R> |
A writer capable of writing files of a single type (i.e. data or delete files).
|
PartitionKey |
A struct of partition values.
|
PartitionMap<V> |
A map that uses a pair of spec ID and partition tuple as keys.
|
PartitionScanTask |
A scan task for data within a particular partition
|
PartitionSet |
|
PartitionSpec |
Represents how to produce partition data for a table.
|
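A short sketch of building a spec with PartitionSpec.Builder (the field names are hypothetical):

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

Schema schema = new Schema(
    Types.NestedField.required(1, "id", Types.LongType.get()),
    Types.NestedField.required(2, "ts", Types.TimestampType.withZone()));

// Partition by day(ts) and 16 hash buckets of id.
PartitionSpec spec = PartitionSpec.builderFor(schema)
    .day("ts")
    .bucket("id", 16)
    .build();
```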
PartitionSpec.Builder |
|
PartitionSpecParser |
|
PartitionSpecVisitor<T> |
|
PartitionsTable |
A Table implementation that exposes a table's partitions as rows.
|
PartitionStatisticsFile |
Represents a partition statistics file that can be used to read table data more efficiently.
|
PartitionStatisticsFileParser |
|
PartitionStats |
|
PartitionStatsUtil |
|
PartitionUtil |
|
PathIdentifier |
|
Payload |
Interface for an element that is an event payload.
|
PayloadType |
Control event types.
|
PendingUpdate<T> |
API for table metadata changes.
|
PigParquetReader |
Deprecated.
|
PlaintextEncryptionManager |
|
PlanningMode |
|
PositionalDeleteFiles |
|
PositionDelete<R> |
|
PositionDeleteIndex |
|
PositionDeleteIndexUtil |
|
PositionDeletesRewriteCoordinator |
|
PositionDeletesScanTask |
|
PositionDeletesTable |
|
PositionDeletesTable.PositionDeletesBatchScan |
|
PositionDeleteWriter<T> |
A position delete writer that can handle deletes ordered by file and position.
|
PositionDeltaWriter<T> |
A writer capable of writing data and position deletes that may belong to different specs and
partitions.
|
PositionOutputStream |
|
Predicate<T,C extends Term> |
|
Procedure |
An interface representing a stored procedure available for execution.
|
ProcedureCatalog |
A catalog API for working with stored procedures.
|
ProcedureParameter |
|
ProjectionDatumReader<D> |
|
Projections |
Utils to project expressions on rows to expressions on partitions.
|
Projections.ProjectionEvaluator |
A class that projects expressions for a table's data rows into expressions on the table's
partition values, for a table's partition spec.
|
PropertiesSerDesUtil |
Convert Map properties to bytes.
|
PropertyUtil |
|
PruneColumnsWithoutReordering |
|
PruneColumnsWithReordering |
|
Puffin |
Utility class for reading and writing Puffin files.
|
Puffin.ReadBuilder |
|
Puffin.WriteBuilder |
|
PuffinCompressionCodec |
|
PuffinReader |
|
PuffinWriter |
|
RangePartitioner |
|
RangeReadable |
RangeReadable is an interface that allows for implementations of InputFile
streams to perform positional, range-based reads, which are more efficient than unbounded reads
in many cloud provider object stores.
|
RawDecoder<D> |
|
ReachableFileUtil |
|
ReaderFunction<T> |
|
Record |
|
RecordAndPosition<T> |
A record along with the reader position to be stored in the checkpoint.
|
Reference<T> |
|
RefsTable |
A Table implementation that exposes a table's known snapshot references as rows.
|
RegisterTableRequest |
|
RegisterTableRequestParser |
|
RemoveDanglingDeleteFiles |
An action that removes dangling delete files from the current snapshot.
|
RemoveDanglingDeleteFiles.Result |
An action that removes dangling deletes.
|
RemoveIds |
|
RemoveIds |
|
RemoveNetCarryoverIterator |
This class computes the net changes across multiple snapshots.
|
RemoveOrphanFilesProcedure |
A procedure that removes orphan files in a table.
|
RenameTableRequest |
A REST request to rename a table or a view.
|
RenameTableRequest.Builder |
|
ReplacePartitions |
API for overwriting files in a table by partition.
|
ReplaceSortOrder |
API for replacing table sort order with a newly created order.
|
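A minimal ReplaceSortOrder sketch, assuming `table` is a loaded Table (the column names are hypothetical):

```java
// Replace the table's sort order with category ASC, ts DESC.
table.replaceSortOrder()
    .asc("category")
    .desc("ts")
    .commit();
```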
ReplaceViewVersion |
API for replacing a view's version.
|
ReportMetricsRequest |
|
ReportMetricsRequest.ReportType |
|
ReportMetricsRequestParser |
|
ResidualEvaluator |
|
ResolvingFileIO |
FileIO implementation that uses location scheme to choose the correct FileIO implementation.
|
ResourcePaths |
|
RESTCatalog |
|
RESTClient |
Interface for a basic HTTP Client for interfacing with the REST catalog.
|
RESTException |
Base class for REST client exceptions
|
RESTMessage |
Interface to mark both REST requests and responses.
|
RESTRequest |
Interface to mark a REST request.
|
RESTResponse |
Interface to mark a REST response.
|
RESTSerializers |
|
RESTSerializers.CommitTransactionRequestDeserializer |
|
RESTSerializers.CommitTransactionRequestSerializer |
|
RESTSerializers.ErrorResponseDeserializer |
|
RESTSerializers.ErrorResponseSerializer |
|
RESTSerializers.MetadataUpdateDeserializer |
|
RESTSerializers.MetadataUpdateSerializer |
|
RESTSerializers.NamespaceDeserializer |
|
RESTSerializers.NamespaceSerializer |
|
RESTSerializers.OAuthTokenResponseDeserializer |
|
RESTSerializers.OAuthTokenResponseSerializer |
|
RESTSerializers.RegisterTableRequestDeserializer<T extends RegisterTableRequest> |
|
RESTSerializers.RegisterTableRequestSerializer<T extends RegisterTableRequest> |
|
RESTSerializers.ReportMetricsRequestDeserializer<T extends ReportMetricsRequest> |
|
RESTSerializers.ReportMetricsRequestSerializer<T extends ReportMetricsRequest> |
|
RESTSerializers.SchemaDeserializer |
|
RESTSerializers.SchemaSerializer |
|
RESTSerializers.TableIdentifierDeserializer |
|
RESTSerializers.TableIdentifierSerializer |
|
RESTSerializers.TableMetadataDeserializer |
|
RESTSerializers.TableMetadataSerializer |
|
RESTSerializers.UnboundPartitionSpecDeserializer |
|
RESTSerializers.UnboundPartitionSpecSerializer |
|
RESTSerializers.UnboundSortOrderDeserializer |
|
RESTSerializers.UnboundSortOrderSerializer |
|
RESTSerializers.UpdateTableRequestDeserializer |
|
RESTSerializers.UpdateTableRequestSerializer |
|
RESTSessionCatalog |
|
RESTSigV4Signer |
Provides a request interceptor for use with the HTTPClient that calculates the required signature
for the SigV4 protocol and adds the necessary headers for all requests created by the client.
|
RESTUtil |
|
ResultDataFiles |
|
ResultDeleteFiles |
|
RetryDetector |
Metrics are the only reliable way provided by the AWS SDK to determine if an API call was
retried.
|
RewriteDataFiles |
An action for rewriting data files according to a rewrite strategy.
|
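A minimal sketch of the rewrite-data-files action via SparkActions, assuming a SparkSession named `spark` and a loaded `table` (the option value is hypothetical):

```java
import org.apache.iceberg.actions.RewriteDataFiles;
import org.apache.iceberg.spark.actions.SparkActions;

RewriteDataFiles.Result result = SparkActions.get(spark)
    .rewriteDataFiles(table)
    .option("target-file-size-bytes", String.valueOf(512L * 1024 * 1024))
    .execute();
```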
RewriteDataFiles.FileGroupFailureResult |
For a file group that failed to rewrite.
|
RewriteDataFiles.FileGroupInfo |
A description of a file group, when it was processed, and within which partition.
|
RewriteDataFiles.FileGroupRewriteResult |
For a particular file group, the number of files which are newly created and the number of
files which were formerly part of the table but have been rewritten.
|
RewriteDataFiles.Result |
A map of file group information to the results of rewriting that file group.
|
RewriteDataFilesAction |
|
RewriteDataFilesActionResult |
|
RewriteDataFilesCommitManager |
Functionality used by RewriteDataFile Actions from different platforms to handle commits.
|
RewriteDataFilesSparkAction |
|
RewriteFileGroup |
Container class representing a set of files to be rewritten by a RewriteAction and the new files
which have been written by the action.
|
RewriteFiles |
API for replacing files in a table.
|
RewriteJobOrder |
Enum of supported rewrite job orders; it defines the order in which the file groups should be
written.
|
RewriteManifests |
An action that rewrites manifests.
|
RewriteManifests |
API for rewriting manifests for a table.
|
RewriteManifests.Result |
The action result that contains a summary of the execution.
|
RewriteManifestsSparkAction |
An action that rewrites manifests in a distributed manner and co-locates metadata for partitions.
|
RewritePositionDeleteFiles |
An action for rewriting position delete files.
|
RewritePositionDeleteFiles.FileGroupInfo |
A description of a position delete file group, when it was processed, and within which
partition.
|
RewritePositionDeleteFiles.FileGroupRewriteResult |
For a particular position delete file group, the number of position delete files which are
newly created and the number of files which were formerly part of the table but have been
rewritten.
|
RewritePositionDeleteFiles.Result |
The action result that contains a summary of the execution.
|
RewritePositionDeleteFilesProcedure |
A procedure that rewrites position delete files in a table.
|
RewritePositionDeleteFilesSparkAction |
|
RewritePositionDeletesCommitManager |
|
RewritePositionDeletesGroup |
Container class representing a set of position delete files to be rewritten by a RewritePositionDeleteFiles and the new files which have been written by the action.
|
RewriteTablePath |
An action that rewrites the table's metadata files to a staging directory, replacing all source
prefixes in absolute paths with a specified target prefix.
|
RewriteTablePath.Result |
The action result that contains a summary of the execution.
|
RollbackStagedTable |
An implementation of StagedTable that mimics the behavior of Spark's non-atomic CTAS and RTAS.
|
RollingDataWriter<T> |
A rolling data writer that splits incoming data into multiple files within one spec/partition
based on the target file size.
|
RollingEqualityDeleteWriter<T> |
A rolling equality delete writer that splits incoming deletes into multiple files within one
spec/partition based on the target file size.
|
RollingManifestWriter<F extends ContentFile<F>> |
As opposed to ManifestWriter, a rolling writer could produce multiple manifest files.
|
RollingPositionDeleteWriter<T> |
A rolling position delete writer that splits incoming deletes into multiple files within one
spec/partition based on the target file size.
|
RowDataConverter<T> |
Convert RowData to a different output type.
|
RowDataFileScanTaskReader |
|
RowDataProjection |
|
RowDataReaderFunction |
|
RowDataRewriter |
|
RowDataRewriter.RewriteMap |
|
RowDataTaskWriterFactory |
|
RowDataToAvroGenericRecordConverter |
This is not serializable because Avro Schema is not actually serializable, even though it
implements the Serializable interface.
|
RowDataUtil |
|
RowDataWrapper |
|
RowDelta |
API for encoding row-level changes to a table.
|
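A minimal RowDelta sketch committing new data together with position deletes in a single snapshot, assuming `dataFile` and `posDeleteFile` were produced by Iceberg writers:

```java
import org.apache.iceberg.DataFile;
import org.apache.iceberg.DeleteFile;

table.newRowDelta()
    .addRows(dataFile)          // newly written data
    .addDeletes(posDeleteFile)  // position deletes against existing files
    .commit();
```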
RowLevelOperationMode |
Iceberg supports two ways to modify records in a table: copy-on-write and merge-on-read.
|
RowPositionColumnVector |
|
RuntimeIOException |
Exception used to wrap IOException as a RuntimeException and add context.
|
RuntimeMetaException |
Exception used to wrap MetaException as a RuntimeException and add context.
|
S3FileIO |
FileIO implementation backed by S3.
|
S3FileIOAwsClientFactories |
|
S3FileIOAwsClientFactory |
|
S3FileIOProperties |
|
S3InputFile |
|
S3ObjectMapper |
|
S3ObjectMapper.S3SignRequestDeserializer<T extends S3SignRequest> |
|
S3ObjectMapper.S3SignRequestSerializer<T extends S3SignRequest> |
|
S3ObjectMapper.S3SignResponseDeserializer<T extends S3SignResponse> |
|
S3ObjectMapper.S3SignResponseSerializer<T extends S3SignResponse> |
|
S3OutputFile |
|
S3RequestUtil |
|
S3SignRequest |
|
S3SignRequestParser |
|
S3SignResponse |
|
S3SignResponseParser |
|
S3V4RestSignerClient |
|
Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>> |
Scan objects are immutable and can be shared between threads.
|
ScanContext |
Context object with optional arguments for a Flink Scan.
|
ScanContext.Builder |
|
ScanEvent |
Event sent to listeners when a table scan is planned.
|
ScanMetrics |
Carries all metrics for a particular scan
|
ScanMetricsResult |
A serializable version of ScanMetrics that carries its results.
|
ScanMetricsUtil |
|
ScannedDataManifests |
|
ScannedDeleteManifests |
|
ScanReport |
A Table Scan report that contains all relevant information from a Table Scan.
|
ScanReportParser |
|
ScanSummary |
|
ScanSummary.Builder |
|
ScanSummary.PartitionMetrics |
|
ScanTask |
A scan task.
|
ScanTaskGroup<T extends ScanTask> |
A scan task that may include partial input files, multiple input files, or both.
|
ScanTaskParser |
|
ScanTaskSetManager |
|
Schema |
The schema of a data table.
|
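For illustration, a minimal sketch of declaring a schema with explicit field IDs (names and IDs are illustrative):

    import org.apache.iceberg.Schema;
    import org.apache.iceberg.types.Types;

    // Field IDs are stable identifiers that survive column renames.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));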
SchemaParser |
|
SchemaUtil |
Deprecated.
|
SchemaWithPartnerVisitor<P,R> |
|
SchemaWithPartnerVisitor.PartnerAccessors<P> |
|
SeekableInputStream |
SeekableInputStream is an interface with the methods needed to read data from a file or
Hadoop data stream.
|
SerializableComparator<T> |
|
SerializableConfiguration |
Wraps a Configuration object in a Serializable layer.
|
SerializableFunction<S,T> |
A concrete transform function that applies a transform to values of a certain type.
|
SerializableMap<K,V> |
|
SerializableRecordEmitter<T> |
|
SerializableSupplier<T> |
|
SerializableTable |
A read-only serializable table that can be sent to other nodes in a cluster.
|
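A minimal sketch, assuming `table` is a loaded Table: copyOf captures a read-only, serializable view of the current table state for shipping to workers.

    import org.apache.iceberg.SerializableTable;
    import org.apache.iceberg.Table;

    // The copy is read-only; refresh and update operations are not supported on it.
    Table forWorkers = SerializableTable.copyOf(table);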
SerializableTable.SerializableMetadataTable |
|
SerializableTableWithSize |
This class provides a serializable table with a known size estimate.
|
SerializableTableWithSize.SerializableMetadataTableWithSize |
|
SerializationUtil |
|
ServiceFailureException |
Exception thrown on HTTP 5XX Server Error.
|
ServiceUnavailableException |
Exception thrown on HTTP 503 - Service Unavailable.
|
SessionCatalog |
A Catalog API for table and namespace operations that includes session context.
|
SessionCatalog.SessionContext |
Context for a session.
|
SetAccumulator<T> |
|
SetLocation |
|
SetPartitionStatistics |
|
SetStatistics |
|
SimpleSplitAssignerFactory |
Creates a simple assigner that hands out splits without any guarantee of order or locality.
|
SingleThreadedIteratorSource<T> |
Implementation of the Source V2 API which uses an iterator to read the elements, and uses a
single thread to do so.
|
SingleValueParser |
|
SinkWriter |
|
SinkWriterResult |
|
SizeBasedDataRewriter |
|
SizeBasedFileRewriter<T extends ContentScanTask<F>,F extends ContentFile<F>> |
A file rewriter that determines which files to rewrite based on their size.
|
SizeBasedPositionDeletesRewriter |
|
SkippedDataFiles |
|
SkippedDataManifests |
|
SkippedDeleteFiles |
|
SkippedDeleteManifests |
|
Snapshot |
A snapshot of the data in a table at a point in time.
|
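For illustration, a minimal sketch of inspecting the current snapshot, assuming a loaded table:

    import org.apache.iceberg.Snapshot;
    import org.apache.iceberg.Table;

    void describeCurrentSnapshot(Table table) {
      Snapshot current = table.currentSnapshot(); // null for an empty table
      if (current != null) {
        System.out.printf("id=%d op=%s summary=%s%n",
            current.snapshotId(), current.operation(), current.summary());
      }
    }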
SnapshotDeltaLakeTable |
Snapshot an existing Delta Lake table to Iceberg in place.
|
SnapshotDeltaLakeTable.Result |
The action result that contains a summary of the execution.
|
SnapshotIdGeneratorUtil |
|
SnapshotManager |
|
SnapshotParser |
|
SnapshotRef |
|
SnapshotRef.Builder |
|
SnapshotRefParser |
|
SnapshotScan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>> |
This is a common base class to share code between different BaseScan implementations that handle
scans of a particular snapshot.
|
SnapshotsTable |
A Table implementation that exposes a table's known snapshots as rows.
|
SnapshotSummary |
|
SnapshotSummary.Builder |
|
SnapshotTable |
An action that creates an independent snapshot of an existing table.
|
SnapshotTable.Result |
The action result that contains a summary of the execution.
|
SnapshotTableSparkAction |
Creates a new Iceberg table based on a source Spark table.
|
SnapshotUpdate<ThisT,R> |
An action that produces snapshots.
|
SnapshotUpdate<ThisT> |
API for table changes that produce snapshots.
|
SnapshotUpdateAction<ThisT,R> |
|
SnapshotUtil |
|
SnowflakeCatalog |
|
SortDirection |
|
SortedMerge<T> |
An Iterable that merges the items from other Iterables in order.
|
SortField |
|
SortingPositionOnlyDeleteWriter<T> |
A position delete writer that is capable of handling unordered deletes without rows.
|
SortKey |
A struct of flattened sort field values.
|
SortOrder |
A sort order that defines how data and delete files should be ordered in a table.
|
SortOrder.Builder |
|
SortOrderBuilder<R> |
Methods for building a sort order.
|
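For illustration, a minimal sketch of building a sort order, assuming the schema sketched earlier:

    import org.apache.iceberg.NullOrder;
    import org.apache.iceberg.SortOrder;

    // Sort by id ascending with nulls first, then by data descending.
    SortOrder order = SortOrder.builderFor(schema)
        .asc("id", NullOrder.NULLS_FIRST)
        .desc("data")
        .build();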
SortOrderComparators |
|
SortOrderParser |
|
SortOrderUtil |
|
SortOrderVisitor<T> |
|
Spark3Util |
|
Spark3Util.CatalogAndIdentifier |
This mimics a class inside Spark that is private within LookupCatalog.
|
Spark3Util.DescribeSchemaVisitor |
|
SparkActions |
|
SparkAggregates |
|
SparkAvroReader |
Deprecated.
|
SparkAvroWriter |
|
SparkCachedTableCatalog |
An internal table catalog that is capable of loading tables from a cache.
|
SparkCatalog |
A Spark TableCatalog implementation that wraps an Iceberg Catalog.
|
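For illustration, a minimal sketch of registering a SparkCatalog, assuming a Hadoop-style warehouse; the catalog name "demo" and warehouse path are illustrative:

    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder()
        .appName("iceberg-demo")
        .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.demo.type", "hadoop")
        .config("spark.sql.catalog.demo.warehouse", "file:///tmp/warehouse")
        .getOrCreate();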
SparkChangelogTable |
|
SparkContentFile<F> |
|
SparkDataFile |
|
SparkDeleteFile |
|
SparkDistributedDataScan |
A batch data scan that can utilize Spark cluster resources for planning.
|
SparkExceptionUtil |
|
SparkExecutorCache |
An executor cache for reducing the computation and IO overhead in tasks.
|
SparkFilters |
|
SparkFunctionCatalog |
A function catalog that can be used to resolve Iceberg functions without a metastore connection.
|
SparkFunctions |
|
SparkMetadataColumn |
|
SparkMicroBatchStream |
|
SparkOrcReader |
Converts the OrcIterator, which returns ORC's VectorizedRowBatch, to a set of Spark's UnsafeRows.
|
SparkOrcValueReaders |
|
SparkOrcWriter |
This class acts as an adaptor from an OrcFileAppender to a FileAppender<InternalRow>.
|
SparkParquetReaders |
|
SparkParquetWriters |
|
SparkPartitionedFanoutWriter |
|
SparkPartitionedWriter |
|
SparkPlannedAvroReader |
|
SparkPositionDeletesRewrite |
Write class for rewriting position delete files from Spark.
|
SparkPositionDeletesRewrite.DeleteTaskCommit |
|
SparkPositionDeletesRewriteBuilder |
Builder class for rewrites of position delete files from Spark.
|
SparkProcedures |
|
SparkProcedures.ProcedureBuilder |
|
SparkReadConf |
A class for common Iceberg configs for Spark reads.
|
SparkReadOptions |
Spark DataFrame read options.
|
SparkScanBuilder |
|
SparkSchemaUtil |
Helper methods for working with Spark/Hive metadata.
|
SparkSessionCatalog<T extends org.apache.spark.sql.connector.catalog.TableCatalog & org.apache.spark.sql.connector.catalog.FunctionCatalog & org.apache.spark.sql.connector.catalog.SupportsNamespaces> |
A Spark catalog that can also load non-Iceberg tables.
|
SparkSQLProperties |
|
SparkStructLike |
|
SparkTable |
|
SparkTableCache |
|
SparkTableUtil |
Java version of the original SparkTableUtil.scala
https://github.com/apache/iceberg/blob/apache-iceberg-0.8.0-incubating/spark/src/main/scala/org/apache/iceberg/spark/SparkTableUtil.scala
|
SparkTableUtil.SparkPartition |
Class representing a table partition.
|
SparkUtil |
|
SparkV2Filters |
|
SparkValueConverter |
A utility class that converts Spark values to Iceberg's internal representation.
|
SparkValueReaders |
|
SparkValueWriters |
|
SparkView |
|
SparkWriteConf |
A class for common Iceberg configs for Spark writes.
|
SparkWriteOptions |
Spark DataFrame write options.
|
SparkWriteRequirements |
A set of requirements such as distribution and ordering reported to Spark during writes.
|
SparkWriteUtil |
A utility that contains helper methods for working with Spark writes.
|
SplitAssigner |
The SplitAssigner interface is extracted as a separate component so that different split
assignment strategies can be plugged in for different requirements.
|
SplitAssignerFactory |
|
SplitAssignerType |
|
SplitComparators |
|
SplitRequestEvent |
We can remove this class once FLINK-21364 is resolved.
|
SplittableScanTask<ThisT> |
A scan task that can be split into smaller scan tasks.
|
SplitWatermarkExtractor |
The interface used to extract watermarks from splits.
|
SQLViewRepresentation |
SQLViewRepresentation represents views in SQL with a given dialect.
|
StagedSparkTable |
|
StandardBlobTypes |
|
StandardEncryptionManager |
|
StandardPuffinProperties |
|
StartCommit |
A control event payload for events sent by a coordinator to request workers to send back the
table data that has been written and is ready to commit.
|
StaticIcebergEnumerator |
One-time split enumeration at startup for batch execution.
|
StaticTableOperations |
TableOperations implementation that provides access to metadata for a Table at some point in
time, using a table metadata location.
|
StatisticsFile |
Represents a statistics file in the Puffin format that can be used to read table data more
efficiently.
|
StatisticsFileParser |
|
StatisticsOrRecord |
A wrapper class that holds either data statistics or a record.
|
StatisticsType |
Range distribution requires gathering statistics on the sort keys to determine proper range
boundaries to distribute/cluster rows before writer operators.
|
StreamingDelete |
Delete implementation that avoids loading full manifests in memory.
|
StreamingMonitorFunction |
This is the single (non-parallel) monitoring task that takes a FlinkInputFormat and is
responsible for monitoring snapshots of the Iceberg table.
|
StreamingReaderOperator |
|
StreamingStartingStrategy |
Starting strategy for streaming execution.
|
StrictMetricsEvaluator |
|
StructLike |
Interface for accessing data by position in a schema.
|
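For illustration, a minimal sketch of a positional row; StructLike only requires a size and positional get/set (the class name is illustrative):

    import org.apache.iceberg.StructLike;

    class PairRow implements StructLike {
      private final Object[] values;

      PairRow(Object first, Object second) {
        this.values = new Object[] {first, second};
      }

      @Override
      public int size() {
        return values.length;
      }

      @Override
      public <T> T get(int pos, Class<T> javaClass) {
        return javaClass.cast(values[pos]);
      }

      @Override
      public <T> void set(int pos, T value) {
        values[pos] = value;
      }
    }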
StructLikeMap<T> |
|
StructLikeSet |
|
StructLikeWrapper |
Wrapper to adapt StructLike for use in maps and sets by implementing equals and hashCode.
|
StructProjection |
|
StructRowData |
|
SupportsBulkOperations |
|
SupportsIndexProjection |
|
SupportsNamespaces |
Catalog methods for working with namespaces.
|
SupportsPrefixOperations |
This interface is intended as an extension for FileIO implementations to provide additional
prefix-based operations that may be useful in performing supporting operations.
|
SupportsRecoveryOperations |
This interface is intended as an extension for FileIO implementations to provide additional
best-effort recovery operations that can be useful for repairing corrupted tables where there are
reachable files missing from disk.
|
SupportsReplaceView |
|
SupportsRowPosition |
Interface for readers that accept a callback to determine the starting row position of an Avro
split.
|
SystemConfigs |
Configuration properties that are controlled by Java system properties or environment variables.
|
SystemConfigs.ConfigEntry<T> |
|
SystemProperties |
Deprecated.
|
Table |
Represents a table.
|
TableCommit |
|
TableIdentifier |
Identifies a table in an Iceberg catalog.
|
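For illustration, a minimal sketch tying Table and TableIdentifier together, assuming any Catalog implementation; the namespace and table name are illustrative:

    import org.apache.iceberg.Table;
    import org.apache.iceberg.catalog.Catalog;
    import org.apache.iceberg.catalog.TableIdentifier;

    Table loadEvents(Catalog catalog) {
      TableIdentifier id = TableIdentifier.of("db", "events");
      return catalog.loadTable(id);
    }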
TableIdentifierParser |
Parses TableIdentifiers from the JSON representation used in the REST catalog.
|
TableLoader |
Serializable loader to load an Iceberg Table.
|
TableLoader.CatalogTableLoader |
|
TableLoader.HadoopTableLoader |
|
TableMaintenanceMetrics |
|
TableMetadata |
Metadata for a table.
|
TableMetadata.Builder |
|
TableMetadata.MetadataLogEntry |
|
TableMetadata.SnapshotLogEntry |
|
TableMetadataParser |
|
TableMetadataParser.Codec |
|
TableMigrationUtil |
|
TableOperations |
SPI interface to abstract table metadata access and updates.
|
TableProperties |
|
TableReference |
Element representing a table identifier, with namespace and name.
|
Tables |
Generic interface for creating and loading a table implementation.
|
TableScan |
API for configuring a table scan.
|
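For illustration, a minimal sketch of configuring and planning a scan, assuming a loaded table; the column name and filter value are illustrative:

    import java.io.IOException;
    import org.apache.iceberg.FileScanTask;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.expressions.Expressions;
    import org.apache.iceberg.io.CloseableIterable;

    void planScan(Table table) throws IOException {
      try (CloseableIterable<FileScanTask> tasks = table.newScan()
          .filter(Expressions.equal("category", "books"))
          .planFiles()) {
        for (FileScanTask task : tasks) {
          System.out.println(task.file().path()); // data file behind each task
        }
      }
    }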
TableScanUtil |
|
TableSinkConfig |
|
TaskEqualityDeleteFiles |
|
TaskIndexedDeleteFiles |
|
TaskNumDeletes |
|
TaskNumSplits |
|
TaskPositionalDeleteFiles |
|
TaskResult |
The result of a single Maintenance Task.
|
TaskResultDataFiles |
|
TaskResultDeleteFiles |
|
Tasks |
|
Tasks.Builder<I> |
|
Tasks.FailureTask<I,E extends java.lang.Exception> |
|
Tasks.Task<I,E extends java.lang.Exception> |
|
Tasks.UnrecoverableException |
|
TaskScannedDataManifests |
|
TaskScannedDeleteManifests |
|
TaskSkippedDataFiles |
|
TaskSkippedDataManifests |
|
TaskSkippedDeleteFiles |
|
TaskSkippedDeleteManifests |
|
TaskTotalDataFileSize |
|
TaskTotalDataManifests |
|
TaskTotalDeleteFileSize |
|
TaskTotalDeleteManifests |
|
TaskTotalPlanningDuration |
|
TaskWriter<T> |
A writer interface that accepts records and provides the generated data files.
|
TaskWriterFactory<T> |
|
Term |
An expression that evaluates to a value.
|
TezUtil |
|
ThreadPools |
|
Timer |
Generalized Timer interface for creating telemetry-related instances that measure the duration
of operations.
|
Timer.Timed |
A timing sample that carries internal state about the Timer's start position.
|
TimerResult |
A serializable version of a Timer that carries its result.
|
TopicPartitionOffset |
Element representing an offset, with topic name, partition number, and offset.
|
TotalDataFileSize |
|
TotalDataManifests |
|
TotalDeleteFileSize |
|
TotalDeleteManifests |
|
TotalPlanningDuration |
|
Transaction |
A transaction for performing multiple updates to a table.
|
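A minimal sketch, assuming a table and a data file: individual commit() calls stage changes on the transaction, and nothing is visible until commitTransaction().

    import org.apache.iceberg.DataFile;
    import org.apache.iceberg.Table;
    import org.apache.iceberg.Transaction;

    void appendInTransaction(Table table, DataFile file) {
      Transaction txn = table.newTransaction();
      txn.updateProperties().set("owner", "analytics").commit(); // staged
      txn.newAppend().appendFile(file).commit();                 // staged
      txn.commitTransaction(); // applies both updates atomically
    }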
Transactions |
|
Transform<S,T> |
A transform function used for partitioning.
|
Transforms |
Factory methods for transforms.
|
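For illustration, a minimal sketch of the factory methods; the type parameters shown follow recent releases:

    import org.apache.iceberg.transforms.Transform;
    import org.apache.iceberg.transforms.Transforms;

    Transform<Long, Integer> bucket = Transforms.bucket(16);      // hash into 16 buckets
    Transform<String, String> truncate = Transforms.truncate(10); // keep a 10-char prefix
    Transform<Long, Integer> day = Transforms.day();              // timestamp -> days from epoch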
TriggerLockFactory |
Lock interface for handling locks for the Flink Table Maintenance jobs.
|
TriggerLockFactory.Lock |
|
TripleWriter<T> |
|
True |
|
TruncateFunction |
A Spark function implementation for the Iceberg truncate transform.
|
TruncateFunction.TruncateBase<T> |
|
TruncateFunction.TruncateBigInt |
|
TruncateFunction.TruncateBinary |
|
TruncateFunction.TruncateDecimal |
|
TruncateFunction.TruncateInt |
|
TruncateFunction.TruncateSmallInt |
|
TruncateFunction.TruncateString |
|
TruncateFunction.TruncateTinyInt |
|
TruncateUtil |
Contains the logic for truncate transformations of various types.
|
Type |
|
Type.NestedType |
|
Type.PrimitiveType |
|
Type.TypeID |
|
Types |
|
Types.BinaryType |
|
Types.BooleanType |
|
Types.DateType |
|
Types.DecimalType |
|
Types.DoubleType |
|
Types.FixedType |
|
Types.FloatType |
|
Types.IntegerType |
|
Types.ListType |
|
Types.LongType |
|
Types.MapType |
|
Types.NestedField |
|
Types.NestedField.Builder |
|
Types.StringType |
|
Types.StructType |
|
Types.TimestampNanoType |
|
Types.TimestampType |
|
Types.TimeType |
|
Types.UUIDType |
|
TypeToMessageType |
|
TypeUtil |
|
TypeUtil.CustomOrderSchemaVisitor<T> |
|
TypeUtil.GetID |
Interface for passing a function that assigns column IDs from the previous ID.
|
TypeUtil.NextID |
Interface for passing a function that assigns column IDs.
|
TypeUtil.SchemaVisitor<T> |
|
TypeWithSchemaVisitor<T> |
Visitor for traversing a Parquet type with a companion Iceberg type.
|
Unbound<T,B> |
Represents an unbound expression node.
|
UnboundAggregate<T> |
|
UnboundPartitionSpec |
|
UnboundPredicate<T> |
|
UnboundSortOrder |
|
UnboundTerm<T> |
Represents an unbound term.
|
UnboundTransform<S,T> |
|
UncheckedInterruptedException |
|
UncheckedSQLException |
|
UnicodeUtil |
|
UnionByNameVisitor |
Visitor class that accumulates the set of changes needed to evolve an existing schema into the
union of the existing and a new schema.
|
UnknownTransform<S,T> |
|
UnknownViewRepresentation |
|
UnpartitionedWriter<T> |
|
UnprocessableEntityException |
REST exception thrown when a request is well-formed but cannot be applied.
|
UpdateLocation |
API for setting a table's or view's base location.
|
UpdateNamespacePropertiesRequest |
A REST request to set and/or remove properties on a namespace.
|
UpdateNamespacePropertiesRequest.Builder |
|
UpdateNamespacePropertiesResponse |
A REST response to a request to set and/or remove properties on a namespace.
|
UpdateNamespacePropertiesResponse.Builder |
|
UpdatePartitionSpec |
API for partition spec evolution.
|
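For illustration, a minimal sketch of spec evolution, assuming the table has a timestamp column named ts:

    import org.apache.iceberg.Table;
    import org.apache.iceberg.expressions.Expressions;

    void partitionByDay(Table table) {
      table.updateSpec()
          .addField(Expressions.day("ts")) // add a day(ts) partition field
          .commit();
    }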
UpdatePartitionStatistics |
API for updating partition statistics files in a table.
|
UpdateProperties |
API for updating table properties.
|
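A minimal sketch, assuming a loaded table; the chosen format is illustrative:

    import org.apache.iceberg.Table;
    import org.apache.iceberg.TableProperties;

    void useOrc(Table table) {
      table.updateProperties()
          .set(TableProperties.DEFAULT_FILE_FORMAT, "orc") // write.format.default
          .commit();
    }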
UpdateRequirement |
|
UpdateRequirement.AssertCurrentSchemaID |
|
UpdateRequirement.AssertDefaultSortOrderID |
|
UpdateRequirement.AssertDefaultSpecID |
|
UpdateRequirement.AssertLastAssignedFieldId |
|
UpdateRequirement.AssertLastAssignedPartitionId |
|
UpdateRequirement.AssertRefSnapshotID |
|
UpdateRequirement.AssertTableDoesNotExist |
|
UpdateRequirement.AssertTableUUID |
|
UpdateRequirement.AssertViewUUID |
|
UpdateRequirementParser |
|
UpdateRequirements |
|
UpdateSchema |
API for schema evolution.
|
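For illustration, a minimal sketch of schema evolution, assuming a loaded table; the column name is illustrative:

    import org.apache.iceberg.Table;
    import org.apache.iceberg.types.Types;

    void addCommentColumn(Table table) {
      table.updateSchema()
          .addColumn("comment", Types.StringType.get()) // new columns are optional by default
          .commit();
    }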
UpdateStatistics |
API for updating statistics files in a table.
|
UpdateTableRequest |
|
UpdateTableRequestParser |
|
UpdateViewProperties |
API for updating view properties.
|
Util |
|
UUIDConversion |
|
UUIDUtil |
|
ValidationException |
Exception which is raised when the arguments are valid in isolation, but not in conjunction with
other arguments or state, as opposed to IllegalArgumentException which is raised when an
argument value is always invalid.
|
ValueReader<T> |
|
ValueReaders |
|
ValueReaders.PlannedStructReader<S> |
|
ValueReaders.StructReader<S> |
|
ValuesAsBytesReader |
Implements a ValuesReader specifically to read a given number of bytes from the underlying
ByteBufferInputStream.
|
ValueWriter<D> |
|
ValueWriters |
|
ValueWriters.StructWriter<S> |
|
VectorHolder |
Container class for holding the Arrow vector storing a batch of values along with other state
needed for reading values out of it.
|
VectorHolder.ConstantVectorHolder<T> |
A VectorHolder that does not actually produce values; consumers of this class should use the
constantValue to populate their ColumnVector implementation.
|
VectorHolder.DeletedVectorHolder |
|
VectorHolder.PositionVectorHolder |
|
VectorizedArrowReader |
|
VectorizedArrowReader.ConstantVectorReader<T> |
A dummy vector reader that doesn't actually read files; instead, it returns a dummy
VectorHolder indicating the constant value that should be used for this column.
|
VectorizedArrowReader.DeletedVectorReader |
A dummy vector reader that doesn't actually read files.
|
VectorizedColumnIterator |
Vectorized version of the ColumnIterator that reads column values in data pages of a column in a
row group in a batched fashion.
|
VectorizedDictionaryEncodedParquetValuesReader |
This decoder reads Parquet dictionary encoded data in a vectorized fashion.
|
VectorizedPageIterator |
|
VectorizedParquetDefinitionLevelReader |
|
VectorizedParquetReader<T> |
|
VectorizedReader<T> |
Interface for vectorized Iceberg readers.
|
VectorizedReaderBuilder |
|
VectorizedRowBatchIterator |
An adaptor so that the ORC RecordReader can be used as an Iterator.
|
VectorizedSparkOrcReaders |
|
VectorizedSparkParquetReaders |
|
VectorizedSupport |
Copied here from Hive for compatibility.
|
VectorizedSupport.Support |
|
VectorizedTableScanIterable |
A vectorized implementation of the Iceberg reader that iterates over the table scan.
|
VendedCredentialsProvider |
|
VersionBuilder<T> |
|
View |
Interface for view definition.
|
ViewBuilder |
A builder used to create or replace a SQL View.
|
ViewCatalog |
A Catalog API for view create, drop, and load operations.
|
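For illustration, a minimal sketch of creating a SQL view through a ViewCatalog's builder; the identifiers and query are illustrative, and the API shape follows recent releases:

    import org.apache.iceberg.Schema;
    import org.apache.iceberg.catalog.Namespace;
    import org.apache.iceberg.catalog.TableIdentifier;
    import org.apache.iceberg.catalog.ViewCatalog;
    import org.apache.iceberg.view.View;

    View createView(ViewCatalog catalog, Schema schema) {
      return catalog.buildView(TableIdentifier.of("db", "recent_events"))
          .withSchema(schema)
          .withDefaultNamespace(Namespace.of("db"))
          .withQuery("spark", "SELECT id, data FROM db.events")
          .create();
    }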
ViewHistoryEntry |
View history entry.
|
ViewMetadata |
|
ViewMetadata.Builder |
|
ViewMetadataParser |
|
ViewOperations |
SPI interface to abstract view metadata access and updates.
|
ViewProperties |
View properties that can be set during CREATE/REPLACE view or using the updateProperties API.
|
ViewRepresentation |
|
ViewRepresentation.Type |
|
ViewSessionCatalog |
A session Catalog API for view create, drop, and load operations.
|
ViewUtil |
|
ViewVersion |
A version of the view at a point in time.
|
ViewVersionParser |
|
WapUtil |
|
WriteObjectInspector |
Interface for converting Hive primitive objects into objects that can be added to an
Iceberg Record.
|
WriteResult |
|
WriteResult.Builder |
|
YearsFunction |
A Spark function implementation for the Iceberg year transform.
|
YearsFunction.DateToYearsFunction |
|
YearsFunction.TimestampNtzToYearsFunction |
|
YearsFunction.TimestampToYearsFunction |
|
Zorder |
|
ZOrderByteUtils |
Within Z-ordering, the byte representations of objects being compared must be ordered; this
requires several types to be transformed when converted to bytes.
|
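An illustrative sketch of the underlying idea, not the class's actual API: a signed int's raw bytes don't sort in numeric order, so the sign bit is flipped to make byte order match numeric order before the bytes are interleaved.

    import java.nio.ByteBuffer;

    class OrderedBytesExample {
      // Map signed int ordering onto unsigned byte ordering:
      // Integer.MIN_VALUE -> 0x00000000, -1 -> 0x7FFFFFFF, 0 -> 0x80000000.
      static byte[] orderedBytes(int value) {
        return ByteBuffer.allocate(4).putInt(value ^ Integer.MIN_VALUE).array();
      }
    }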