Package org.apache.iceberg.spark.actions
Class ComputeTableStatsSparkAction
java.lang.Object
org.apache.iceberg.spark.actions.ComputeTableStatsSparkAction
- All Implemented Interfaces:
- Action<ComputeTableStats, ComputeTableStats.Result>, ComputeTableStats
Computes statistics of the given columns and stores them as Puffin files.
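A minimal usage sketch. It assumes Iceberg's Spark runtime is on the classpath, an Iceberg catalog is configured in the Spark session, and `db.events` is a hypothetical table name; this is an illustration, not a runnable standalone program without a Spark environment:

```java
import org.apache.iceberg.Table;
import org.apache.iceberg.actions.ComputeTableStats;
import org.apache.iceberg.spark.Spark3Util;
import org.apache.iceberg.spark.actions.SparkActions;
import org.apache.spark.sql.SparkSession;

public class ComputeStatsExample {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder().appName("compute-stats").getOrCreate();

    // "db.events" is a hypothetical table identifier; assumes a configured Iceberg catalog
    Table table = Spark3Util.loadIcebergTable(spark, "db.events");

    // Compute column statistics and write them to a Puffin file registered on the table
    ComputeTableStats.Result result = SparkActions.get(spark)
        .computeTableStats(table)
        .columns("id", "event_type")  // optional; all columns are analyzed by default
        .execute();

    System.out.println("Statistics file: " + result.statisticsFile().path());
  }
}
```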
- 
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.iceberg.actions.ComputeTableStats: ComputeTableStats.Result
- 
Field Summary
Fields
- protected static final org.apache.iceberg.relocated.com.google.common.base.Joiner COMMA_JOINER
- protected static final org.apache.iceberg.relocated.com.google.common.base.Splitter COMMA_SPLITTER
- protected static final String FILE_PATH
- protected static final String LAST_MODIFIED
- protected static final String MANIFEST
- protected static final String MANIFEST_LIST
- protected static final String OTHERS
- protected static final String STATISTICS_FILES
- 
Method Summary
- ComputeTableStatsSparkAction columns(String... newColumns): Choose the set of columns to collect stats; by default all columns are chosen.
- protected org.apache.spark.sql.Dataset<FileInfo> contentFileDS(Table table)
- protected org.apache.spark.sql.Dataset<FileInfo> contentFileDS(Table table, Set<Long> snapshotIds)
- protected org.apache.iceberg.spark.actions.BaseSparkAction.DeleteSummary deleteFiles(ExecutorService executorService, Consumer<String> deleteFunc, Iterator<FileInfo> files): Deletes files and keeps track of how many files were removed for each file type.
- protected org.apache.iceberg.spark.actions.BaseSparkAction.DeleteSummary deleteFiles(SupportsBulkOperations io, Iterator<FileInfo> files)
- ComputeTableStats.Result execute(): Executes this action.
- protected org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> loadMetadataTable(Table table, MetadataTableType type)
- protected org.apache.spark.sql.Dataset<FileInfo> manifestDS(Table table)
- protected org.apache.spark.sql.Dataset<FileInfo> manifestDS(Table table, Set<Long> snapshotIds)
- protected org.apache.spark.sql.Dataset<FileInfo> manifestListDS(Table table)
- protected org.apache.spark.sql.Dataset<FileInfo> manifestListDS(Table table, Set<Long> snapshotIds)
- protected JobGroupInfo newJobGroupInfo(String groupId, String desc)
- protected Table newStaticTable(TableMetadata metadata, FileIO io)
- options()
- protected org.apache.spark.sql.Dataset<FileInfo> otherMetadataFileDS(Table table)
- protected ComputeTableStatsSparkAction self()
- ComputeTableStatsSparkAction snapshot(long newSnapshotId): Choose the table snapshot to compute stats; by default the current snapshot is used.
- protected org.apache.spark.sql.SparkSession spark()
- protected org.apache.spark.api.java.JavaSparkContext sparkContext()
- protected org.apache.spark.sql.Dataset<FileInfo> statisticsFileDS(Table table, Set<Long> snapshotIds)
- protected <T> T withJobGroupInfo(JobGroupInfo info, Supplier<T> supplier)
- 
Field Details
- MANIFEST
  protected static final String MANIFEST
  See Also: Constant Field Values
- MANIFEST_LIST
  protected static final String MANIFEST_LIST
  See Also: Constant Field Values
- STATISTICS_FILES
  protected static final String STATISTICS_FILES
  See Also: Constant Field Values
- OTHERS
  protected static final String OTHERS
  See Also: Constant Field Values
- FILE_PATH
  protected static final String FILE_PATH
  See Also: Constant Field Values
- LAST_MODIFIED
  protected static final String LAST_MODIFIED
  See Also: Constant Field Values
- COMMA_SPLITTER
  protected static final org.apache.iceberg.relocated.com.google.common.base.Splitter COMMA_SPLITTER
- COMMA_JOINER
  protected static final org.apache.iceberg.relocated.com.google.common.base.Joiner COMMA_JOINER
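COMMA_SPLITTER and COMMA_JOINER are Guava utilities (relocated into the Iceberg jar) that Spark actions use to parse and render comma-separated values. Their behavior can be approximated in plain Java; the helper below is an illustrative stand-in, not the actual implementation, and the trimming/empty-dropping behavior is an assumption:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CommaDemo {
  // Stand-in for COMMA_SPLITTER: split on commas; assumed to trim
  // whitespace and drop empty entries (hypothetical behavior).
  static List<String> split(String csv) {
    return Arrays.stream(csv.split(","))
        .map(String::trim)
        .filter(s -> !s.isEmpty())
        .collect(Collectors.toList());
  }

  // Stand-in for COMMA_JOINER: join parts with a comma.
  static String join(List<String> parts) {
    return String.join(",", parts);
  }

  public static void main(String[] args) {
    List<String> cols = split(" id, name ,, email ");
    System.out.println(cols);        // [id, name, email]
    System.out.println(join(cols));  // id,name,email
  }
}
```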
- 
Method Details
- self
  protected ComputeTableStatsSparkAction self()
- columns
  public ComputeTableStatsSparkAction columns(String... newColumns)
  Description copied from interface: ComputeTableStats
  Choose the set of columns to collect stats; by default all columns are chosen.
  Specified by: columns in interface ComputeTableStats
  Parameters: newColumns - a set of column names to be analyzed
  Returns: this for method chaining
 
- snapshot
  public ComputeTableStatsSparkAction snapshot(long newSnapshotId)
  Description copied from interface: ComputeTableStats
  Choose the table snapshot to compute stats; by default the current snapshot is used.
  Specified by: snapshot in interface ComputeTableStats
  Parameters: newSnapshotId - long ID of the snapshot for which stats need to be computed
  Returns: this for method chaining
 
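For instance, stats can be computed for an older snapshot instead of the current one. A sketch, assuming `spark` is an active SparkSession, `table` is a loaded Iceberg Table, and the snapshot ID (a made-up literal here) was taken from the table's history:

```java
// Hypothetical snapshot ID taken from the table's history
long snapshotId = 4231870060201649998L;

ComputeTableStats.Result result = SparkActions.get(spark)
    .computeTableStats(table)
    .snapshot(snapshotId)
    .execute();
```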
- execute
  public ComputeTableStats.Result execute()
  Description copied from interface: Action
  Executes this action.
  Specified by: execute in interface Action<ComputeTableStats, ComputeTableStats.Result>
  Returns: the result of this action
 
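The returned ComputeTableStats.Result exposes the StatisticsFile that was written and registered on the table. A sketch of inspecting it, assuming `result` came from a prior `execute()` call:

```java
import org.apache.iceberg.BlobMetadata;
import org.apache.iceberg.StatisticsFile;

StatisticsFile statsFile = result.statisticsFile();
if (statsFile != null) {
  System.out.println("path: " + statsFile.path());
  System.out.println("size: " + statsFile.fileSizeInBytes() + " bytes");
  System.out.println("snapshot: " + statsFile.snapshotId());
  // Each blob describes statistics for a set of field IDs in the Puffin file
  for (BlobMetadata blob : statsFile.blobMetadata()) {
    System.out.println("blob: " + blob.type() + " for field IDs " + blob.fields());
  }
}
```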
- spark
  protected org.apache.spark.sql.SparkSession spark()
- sparkContext
  protected org.apache.spark.api.java.JavaSparkContext sparkContext()
- option
- options
- withJobGroupInfo
  protected <T> T withJobGroupInfo(JobGroupInfo info, Supplier<T> supplier)
- newJobGroupInfo
  protected JobGroupInfo newJobGroupInfo(String groupId, String desc)
- newStaticTable
  protected Table newStaticTable(TableMetadata metadata, FileIO io)
- contentFileDS
  protected org.apache.spark.sql.Dataset<FileInfo> contentFileDS(Table table)
- contentFileDS
  protected org.apache.spark.sql.Dataset<FileInfo> contentFileDS(Table table, Set<Long> snapshotIds)
- manifestDS
  protected org.apache.spark.sql.Dataset<FileInfo> manifestDS(Table table)
- manifestDS
  protected org.apache.spark.sql.Dataset<FileInfo> manifestDS(Table table, Set<Long> snapshotIds)
- manifestListDS
  protected org.apache.spark.sql.Dataset<FileInfo> manifestListDS(Table table)
- manifestListDS
  protected org.apache.spark.sql.Dataset<FileInfo> manifestListDS(Table table, Set<Long> snapshotIds)
- statisticsFileDS
  protected org.apache.spark.sql.Dataset<FileInfo> statisticsFileDS(Table table, Set<Long> snapshotIds)
- otherMetadataFileDS
  protected org.apache.spark.sql.Dataset<FileInfo> otherMetadataFileDS(Table table)
- allReachableOtherMetadataFileDS
- loadMetadataTable
  protected org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> loadMetadataTable(Table table, MetadataTableType type)
- deleteFiles
  protected org.apache.iceberg.spark.actions.BaseSparkAction.DeleteSummary deleteFiles(ExecutorService executorService, Consumer<String> deleteFunc, Iterator<FileInfo> files)
  Deletes files and keeps track of how many files were removed for each file type.
  Parameters:
  - executorService - an executor service to use for parallel deletes
  - deleteFunc - a delete function invoked for each file path
  - files - an iterator of Spark rows of the structure (path: String, type: String)
  Returns: stats on which files were deleted
 
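The bookkeeping this method performs can be illustrated with a simplified, sequential stand-in: invoke a delete function per path and tally removals per file type. The `FileInfo` record and map-based summary below are hypothetical simplifications of Iceberg's FileInfo and BaseSparkAction.DeleteSummary, and the real method parallelizes deletes via the executor service:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class DeleteSummaryDemo {
  // Hypothetical stand-in for Iceberg's FileInfo (path plus content type).
  record FileInfo(String path, String type) {}

  // Deletes each file via deleteFunc and counts removals per file type,
  // mirroring the per-type counts that DeleteSummary reports.
  static Map<String, Long> deleteFiles(Consumer<String> deleteFunc, Iterator<FileInfo> files) {
    Map<String, Long> counts = new HashMap<>();
    while (files.hasNext()) {
      FileInfo file = files.next();
      deleteFunc.accept(file.path());
      counts.merge(file.type(), 1L, Long::sum);
    }
    return counts;
  }

  public static void main(String[] args) {
    List<FileInfo> files = List.of(
        new FileInfo("s3://bucket/m1.avro", "Manifest"),
        new FileInfo("s3://bucket/m2.avro", "Manifest"),
        new FileInfo("s3://bucket/snap.avro", "Manifest List"));
    // Delete function is a no-op here; a real one would call the table's FileIO
    Map<String, Long> summary = deleteFiles(path -> {}, files.iterator());
    System.out.println(summary);
  }
}
```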
- deleteFiles
  protected org.apache.iceberg.spark.actions.BaseSparkAction.DeleteSummary deleteFiles(SupportsBulkOperations io, Iterator<FileInfo> files)