Class SparkDistributedDataScan
This scan remotely filters manifests, fetching only the relevant data and delete files to the driver. The delete file assignment is done locally after the remote filtering step. Such an approach is beneficial if the remote parallelism is much higher than the number of driver cores.
This scan is best suited for queries with selective filters on lower/upper bounds across all partitions, or against poorly clustered metadata. This allows job planning to benefit from highly concurrent remote filtering while not incurring high serialization and data transfer costs. This class is also useful for full table scans over large tables, but the cost of bringing data and delete file details to the driver may become noticeable. Make sure to follow the performance tips below in such cases.
Ensure the filtered metadata size doesn't exceed the driver's max result size. For large table scans, consider increasing `spark.driver.maxResultSize` to avoid job failures.
Performance tips:
- Enable Kryo serialization (`spark.serializer`)
- Increase the number of driver cores (`spark.driver.cores`)
- Tune the number of threads used to fetch task results (`spark.resultGetter.threads`)
-
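The tips above map to standard Spark configuration properties. A minimal spark-defaults.conf sketch is shown below; the property names come from this page, while the values are illustrative rather than recommendations and should be tuned per cluster:

```properties
# Serialize planned file metadata with Kryo (faster and more compact than Java serialization)
spark.serializer=org.apache.spark.serializer.KryoSerializer
# More driver cores allow more planning work to proceed concurrently on the driver
spark.driver.cores=4
# Threads used by the driver to fetch task results
spark.resultGetter.threads=8
# Raise the cap on serialized results collected to the driver for large scans
spark.driver.maxResultSize=4g
```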
Field Summary
protected static final boolean PLAN_SCANS_WITH_WORKER_POOL
-
Constructor Summary
SparkDistributedDataScan(org.apache.spark.sql.SparkSession spark, Table table, SparkReadConf readConf)
-
Method Summary
caseSensitive(boolean caseSensitive)
    Create a new scan from this that, if data columns were selected via Scan.select(java.util.Collection), controls whether the match to the schema will be done with case sensitivity.
protected org.apache.iceberg.TableScanContext context()
protected PlanningMode dataPlanningMode()
    Returns which planning mode to use for data.
protected PlanningMode deletePlanningMode()
    Returns which planning mode to use for deletes.
protected CloseableIterable<ScanTask> doPlanFiles()
filter()
    Returns this scan's filter Expression.
filter(Expression expr)
    Create a new scan from the results of this filtered by the Expression.
ignoreResiduals()
    Create a new scan from this that applies data filtering to files but not to rows in those files.
includeColumnStats()
    Create a new scan from this that loads the column stats with each data file.
includeColumnStats(Collection<String> requestedColumns)
    Create a new scan from this that loads the column stats for the specific columns with each data file.
protected FileIO io()
boolean isCaseSensitive()
    Returns whether this scan is case-sensitive with respect to column names.
metricsReporter(MetricsReporter reporter)
    Create a new scan that will report scan metrics to the provided reporter in addition to reporters maintained by the scan.
protected org.apache.iceberg.ManifestGroup newManifestGroup(List<ManifestFile> dataManifests, boolean withColumnStats)
protected org.apache.iceberg.ManifestGroup newManifestGroup(List<ManifestFile> dataManifests, List<ManifestFile> deleteManifests)
protected org.apache.iceberg.ManifestGroup newManifestGroup(List<ManifestFile> dataManifests, List<ManifestFile> deleteManifests, boolean withColumnStats)
protected BatchScan newRefinedScan(Table newTable, Schema newSchema, org.apache.iceberg.TableScanContext newContext)
option(String property, String value)
    Create a new scan from this scan's configuration that will override the Table's behavior based on the incoming pair.
options()
protected Iterable<CloseableIterable<DataFile>> planDataRemotely(List<ManifestFile> dataManifests, boolean withColumnStats)
    Plans data remotely.
protected org.apache.iceberg.DeleteFileIndex planDeletesRemotely(List<ManifestFile> deleteManifests)
    Plans deletes remotely.
protected ExecutorService planExecutor()
planTasks()
    Plan balanced task groups for this scan by splitting large and combining small tasks.
planWith(ExecutorService executorService)
    Create a new scan to use a particular executor to plan.
project(Schema projectedSchema)
    Create a new scan from this with the schema as its projection.
protected int remoteParallelism()
    Returns the cluster parallelism.
protected Expression residualFilter()
schema()
    Returns this scan's projection Schema.
select(Collection<String> columns)
    Create a new scan from this that will read the given data columns.
protected boolean shouldCopyRemotelyPlannedDataFiles()
    Controls whether defensive copies are created for remotely planned data files.
protected boolean shouldIgnoreResiduals()
protected boolean shouldPlanWithExecutor()
protected boolean shouldReturnColumnStats()
int splitLookback()
    Returns the split lookback for this scan.
long splitOpenFileCost()
    Returns the split open file cost for this scan.
table()
protected Schema tableSchema()
long targetSplitSize()
    Returns the target split size for this scan.
protected boolean useSnapshotSchema()
Methods inherited from class org.apache.iceberg.SnapshotScan
asOfTime, planFiles, scanMetrics, snapshot, snapshotId, toString, useRef, useSnapshot
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.iceberg.BatchScan
asOfTime, snapshot, table, useRef, useSnapshot
Methods inherited from interface org.apache.iceberg.Scan
caseSensitive, filter, filter, ignoreResiduals, includeColumnStats, includeColumnStats, isCaseSensitive, metricsReporter, option, planFiles, planWith, project, schema, select, select, splitLookback, splitOpenFileCost, targetSplitSize
-
Field Details
-
SCAN_COLUMNS
-
SCAN_WITH_STATS_COLUMNS
-
DELETE_SCAN_COLUMNS
-
DELETE_SCAN_WITH_STATS_COLUMNS
-
PLAN_SCANS_WITH_WORKER_POOL
protected static final boolean PLAN_SCANS_WITH_WORKER_POOL
-
-
Constructor Details
-
SparkDistributedDataScan
public SparkDistributedDataScan(org.apache.spark.sql.SparkSession spark, Table table, SparkReadConf readConf)
-
-
Method Details
-
newRefinedScan
-
remoteParallelism
protected int remoteParallelism()
Returns the cluster parallelism. This value indicates the maximum number of manifests that can be processed concurrently by the cluster. Implementations should take into account both the currently available processing slots and potential dynamic allocation, if applicable.
The remote parallelism is compared against the size of the thread pool available locally to determine the feasibility of remote planning. This value is ignored if the planning mode is set explicitly as local or distributed.
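The feasibility check described above can be sketched in plain Java. All names below are illustrative, not the actual Iceberg implementation:

```java
// Sketch: decide whether remote (distributed) planning is worthwhile.
// Hypothetical names; the real logic lives inside SparkDistributedDataScan.
public class PlanningModeChooser {

    enum PlanningMode { LOCAL, DISTRIBUTED, AUTO }

    // In AUTO mode, plan remotely only if the cluster can filter manifests
    // with more parallelism than the driver-local worker pool offers.
    // An explicit LOCAL or DISTRIBUTED mode ignores the parallelism comparison.
    static boolean shouldPlanRemotely(PlanningMode mode, int remoteParallelism, int localPoolSize) {
        switch (mode) {
            case LOCAL:
                return false;
            case DISTRIBUTED:
                return true;
            default:
                return remoteParallelism > localPoolSize;
        }
    }
}
```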
-
dataPlanningMode
Returns which planning mode to use for data.
-
shouldCopyRemotelyPlannedDataFiles
protected boolean shouldCopyRemotelyPlannedDataFiles()
Controls whether defensive copies are created for remotely planned data files. By default, this class creates defensive copies for each data file that is planned remotely, assuming the provided iterable can be lazy and may reuse objects. If unnecessary and data file objects can be safely added into a collection, implementations can override this behavior.
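The hazard this flag guards against can be shown with plain Java: a lazy iterator that reuses one mutable holder yields a collection of identical references unless each element is copied first. The types below are hypothetical stand-ins, not Iceberg's DataFile API:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of why defensive copies matter for lazy iterables that reuse objects.
public class DefensiveCopyDemo {

    // Stand-in for a planned data file; a copy() method plays the defensive-copy role.
    static class FileRef {
        String path;
        FileRef copy() {
            FileRef c = new FileRef();
            c.path = this.path;
            return c;
        }
    }

    // A lazy iterator that reuses a single holder object, as remotely
    // planned results may do to reduce allocation.
    static Iterator<FileRef> reusingIterator(List<String> paths) {
        FileRef holder = new FileRef();
        Iterator<String> it = paths.iterator();
        return new Iterator<FileRef>() {
            public boolean hasNext() { return it.hasNext(); }
            public FileRef next() { holder.path = it.next(); return holder; }
        };
    }

    // Collecting with copies preserves each element; without copies, every
    // entry aliases the same holder and ends up holding the last path.
    static List<FileRef> collect(Iterator<FileRef> it, boolean copy) {
        List<FileRef> out = new ArrayList<>();
        while (it.hasNext()) {
            FileRef f = it.next();
            out.add(copy ? f.copy() : f);
        }
        return out;
    }
}
```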
-
planDataRemotely
protected Iterable<CloseableIterable<DataFile>> planDataRemotely(List<ManifestFile> dataManifests, boolean withColumnStats)
Plans data remotely. Implementations are encouraged to return groups of matching data files, enabling this class to process multiple groups concurrently to speed up the remaining work. This is particularly useful when dealing with equality deletes, as delete index lookups with such delete files require comparing bounds and typically benefit from parallelization.
If the result iterable reuses objects, shouldCopyRemotelyPlannedDataFiles() must return true.
The input data manifests have already been filtered to include only potential matches based on the scan filter. Implementations are expected to further filter these manifests and only return files that may hold data matching the scan filter.
Parameters:
    dataManifests - data manifests that may contain files matching the scan filter
    withColumnStats - a flag whether to load column stats
Returns:
    groups of data files planned remotely
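The grouping contract above can be sketched with plain collections: an implementation hands back several groups so the caller can index deletes against each group in parallel. The helper below is hypothetical and uses file paths in place of Iceberg's DataFile objects:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: return matching files as several groups so downstream work
// (e.g. delete index lookups) can proceed per group in parallel.
public class RemotePlanGrouping {

    // Split a flat list of planned file paths into round-robin groups.
    // In a real implementation each group would come from one remote partition.
    static List<List<String>> toGroups(List<String> files, int groupCount) {
        List<List<String>> groups = new ArrayList<>();
        for (int i = 0; i < groupCount; i++) {
            groups.add(new ArrayList<>());
        }
        for (int i = 0; i < files.size(); i++) {
            groups.get(i % groupCount).add(files.get(i));
        }
        return groups;
    }
}
```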
-
deletePlanningMode
Returns which planning mode to use for deletes.
-
planDeletesRemotely
protected org.apache.iceberg.DeleteFileIndex planDeletesRemotely(List<ManifestFile> deleteManifests)
Plans deletes remotely. The input delete manifests have already been filtered to include only potential matches based on the scan filter. Implementations are expected to further filter these manifests and return files that may hold deletes matching the scan filter.
Parameters:
    deleteManifests - delete manifests that may contain files matching the scan filter
Returns:
    a delete file index planned remotely
-
doPlanFiles
Specified by:
    doPlanFiles in class SnapshotScan<BatchScan, ScanTask, ScanTaskGroup<ScanTask>>
-
planTasks
Description copied from interface: Scan
Plan balanced task groups for this scan by splitting large and combining small tasks. Task groups created by this method may read partial input files, multiple input files or both.
-
useSnapshotSchema
protected boolean useSnapshotSchema()
Overrides:
    useSnapshotSchema in class SnapshotScan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
-
newManifestGroup
protected org.apache.iceberg.ManifestGroup newManifestGroup(List<ManifestFile> dataManifests, List<ManifestFile> deleteManifests)
-
newManifestGroup
protected org.apache.iceberg.ManifestGroup newManifestGroup(List<ManifestFile> dataManifests, boolean withColumnStats)
-
newManifestGroup
protected org.apache.iceberg.ManifestGroup newManifestGroup(List<ManifestFile> dataManifests, List<ManifestFile> deleteManifests, boolean withColumnStats)
table
-
io
-
tableSchema
-
context
protected org.apache.iceberg.TableScanContext context()
-
options
-
scanColumns
-
shouldReturnColumnStats
protected boolean shouldReturnColumnStats()
-
columnsToKeepStats
-
shouldIgnoreResiduals
protected boolean shouldIgnoreResiduals()
-
residualFilter
-
shouldPlanWithExecutor
protected boolean shouldPlanWithExecutor()
-
planExecutor
-
option
Description copied from interface: Scan
Create a new scan from this scan's configuration that will override the Table's behavior based on the incoming pair. Unknown properties will be ignored.
Specified by:
    option in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Parameters:
    property - name of the table property to be overridden
    value - value to override with
Returns:
    a new scan based on this with overridden behavior
-
project
Description copied from interface: Scan
Create a new scan from this with the schema as its projection.
Specified by:
    project in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Parameters:
    projectedSchema - a projection schema
Returns:
    a new scan based on this with the given projection
-
caseSensitive
Description copied from interface: Scan
Create a new scan from this that, if data columns were selected via Scan.select(java.util.Collection), controls whether the match to the schema will be done with case sensitivity. Default is true.
Specified by:
    caseSensitive in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Returns:
    a new scan based on this with case sensitivity as stated
-
isCaseSensitive
public boolean isCaseSensitive()
Description copied from interface: Scan
Returns whether this scan is case-sensitive with respect to column names.
Specified by:
    isCaseSensitive in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Returns:
    true if case-sensitive, false otherwise.
-
includeColumnStats
Description copied from interface: Scan
Create a new scan from this that loads the column stats with each data file. Column stats include: value count, null value count, lower bounds, and upper bounds.
Specified by:
    includeColumnStats in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Returns:
    a new scan based on this that loads column stats.
-
includeColumnStats
Description copied from interface: Scan
Create a new scan from this that loads the column stats for the specific columns with each data file. Column stats include: value count, null value count, lower bounds, and upper bounds.
Specified by:
    includeColumnStats in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Parameters:
    requestedColumns - column names for which to keep the stats.
Returns:
    a new scan based on this that loads column stats for specific columns.
-
select
Description copied from interface: Scan
Create a new scan from this that will read the given data columns. This produces an expected schema that includes all fields that are either selected or used by this scan's filter expression.
Specified by:
    select in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Parameters:
    columns - column names from the table's schema
Returns:
    a new scan based on this with the given projection columns
-
filter
Description copied from interface: Scan
Create a new scan from the results of this filtered by the Expression.
Specified by:
    filter in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Parameters:
    expr - a filter expression
Returns:
    a new scan based on this with results filtered by the expression
-
filter
Description copied from interface: Scan
Returns this scan's filter Expression.
Specified by:
    filter in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Returns:
    this scan's filter expression
-
ignoreResiduals
Description copied from interface: Scan
Create a new scan from this that applies data filtering to files but not to rows in those files.
Specified by:
    ignoreResiduals in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Returns:
    a new scan based on this that does not filter rows in files.
-
planWith
Description copied from interface: Scan
Create a new scan to use a particular executor to plan. The default worker pool will be used by default.
Specified by:
    planWith in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Parameters:
    executorService - the provided executor
Returns:
    a table scan that uses the provided executor to access manifests
-
schema
Description copied from interface: Scan
Returns this scan's projection Schema.
If the projection schema was set directly using Scan.project(Schema), returns that schema.
If the projection schema was set by calling Scan.select(Collection), returns a projection schema that includes the selected data fields and any fields used in the filter expression.
Specified by:
    schema in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
Returns:
    this scan's projection schema
-
targetSplitSize
public long targetSplitSize()
Description copied from interface: Scan
Returns the target split size for this scan.
Specified by:
    targetSplitSize in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
-
splitLookback
public int splitLookback()
Description copied from interface: Scan
Returns the split lookback for this scan.
Specified by:
    splitLookback in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
-
splitOpenFileCost
public long splitOpenFileCost()
Description copied from interface: Scan
Returns the split open file cost for this scan.
Specified by:
    splitOpenFileCost in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
-
metricsReporter
Description copied from interface: Scan
Create a new scan that will report scan metrics to the provided reporter in addition to reporters maintained by the scan.
Specified by:
    metricsReporter in interface Scan<ThisT, T extends ScanTask, G extends ScanTaskGroup<T>>
-