Class SparkDistributedDataScan
- java.lang.Object
- org.apache.iceberg.SnapshotScan<BatchScan,ScanTask,ScanTaskGroup<ScanTask>>
- org.apache.iceberg.SparkDistributedDataScan
-
public class SparkDistributedDataScan extends SnapshotScan<BatchScan,ScanTask,ScanTaskGroup<ScanTask>>
A batch data scan that can utilize Spark cluster resources for planning. This scan remotely filters manifests, fetching only the relevant data and delete files to the driver. The delete file assignment is done locally after the remote filtering step. Such an approach is beneficial if the remote parallelism is much higher than the number of driver cores.
This scan is best suited for queries with selective filters on lower/upper bounds across all partitions, or against poorly clustered metadata. This allows job planning to benefit from highly concurrent remote filtering while not incurring high serialization and data transfer costs. This class is also useful for full table scans over large tables, but the cost of bringing data and delete file details to the driver may become noticeable. Make sure to follow the performance tips below in such cases.
Ensure the filtered metadata size doesn't exceed the driver's max result size. For large table scans, consider increasing `spark.driver.maxResultSize` to avoid job failures.
Performance tips (a sample session configuration sketch follows this list):
- Enable Kryo serialization (`spark.serializer`)
- Increase the number of driver cores (`spark.driver.cores`)
- Tune the number of threads used to fetch task results (`spark.resultGetter.threads`)
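As an illustration of these tips, here is a minimal sketch of a SparkSession configured along these lines. The values are assumptions to tune per cluster, and driver-side settings such as `spark.driver.cores` and `spark.driver.maxResultSize` typically must be supplied at submit time (for example via `spark-submit --conf`) rather than from an already-running application.

```java
import org.apache.spark.sql.SparkSession;

public class DistributedPlanningSession {
  public static void main(String[] args) {
    // Illustrative values only; tune for the actual cluster and workload.
    SparkSession spark = SparkSession.builder()
        .appName("iceberg-distributed-planning")
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .config("spark.driver.cores", "4")
        .config("spark.resultGetter.threads", "8")
        .config("spark.driver.maxResultSize", "4g")
        .getOrCreate();

    spark.stop();
  }
}
```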
-
-
Field Summary
- protected static java.util.List<java.lang.String> DELETE_SCAN_COLUMNS
- protected static java.util.List<java.lang.String> DELETE_SCAN_WITH_STATS_COLUMNS
- protected static boolean PLAN_SCANS_WITH_WORKER_POOL
- protected static java.util.List<java.lang.String> SCAN_COLUMNS
- protected static java.util.List<java.lang.String> SCAN_WITH_STATS_COLUMNS
-
Constructor Summary
- SparkDistributedDataScan(org.apache.spark.sql.SparkSession spark, Table table, SparkReadConf readConf)
-
Method Summary
- ThisT caseSensitive(boolean caseSensitive): Create a new scan from this that, if data columns were selected via Scan.select(java.util.Collection), controls whether the match to the schema will be done with case sensitivity.
- protected org.apache.iceberg.TableScanContext context()
- protected PlanningMode dataPlanningMode(): Returns which planning mode to use for data.
- protected PlanningMode deletePlanningMode(): Returns which planning mode to use for deletes.
- protected CloseableIterable<ScanTask> doPlanFiles()
- Expression filter(): Returns this scan's filter Expression.
- ThisT filter(Expression expr): Create a new scan from the results of this filtered by the Expression.
- ThisT ignoreResiduals(): Create a new scan from this that applies data filtering to files but not to rows in those files.
- ThisT includeColumnStats(): Create a new scan from this that loads the column stats with each data file.
- protected FileIO io()
- boolean isCaseSensitive(): Returns whether this scan is case-sensitive with respect to column names.
- ThisT metricsReporter(MetricsReporter reporter): Create a new scan that will report scan metrics to the provided reporter in addition to reporters maintained by the scan.
- protected org.apache.iceberg.ManifestGroup newManifestGroup(java.util.List<ManifestFile> dataManifests, boolean withColumnStats)
- protected org.apache.iceberg.ManifestGroup newManifestGroup(java.util.List<ManifestFile> dataManifests, java.util.List<ManifestFile> deleteManifests)
- protected org.apache.iceberg.ManifestGroup newManifestGroup(java.util.List<ManifestFile> dataManifests, java.util.List<ManifestFile> deleteManifests, boolean withColumnStats)
- protected BatchScan newRefinedScan(Table newTable, Schema newSchema, org.apache.iceberg.TableScanContext newContext)
- ThisT option(java.lang.String property, java.lang.String value): Create a new scan from this scan's configuration that will override the Table's behavior based on the incoming pair.
- protected java.util.Map<java.lang.String,java.lang.String> options()
- protected java.lang.Iterable<CloseableIterable<DataFile>> planDataRemotely(java.util.List<ManifestFile> dataManifests, boolean withColumnStats): Plans data remotely.
- protected org.apache.iceberg.DeleteFileIndex planDeletesRemotely(java.util.List<ManifestFile> deleteManifests): Plans deletes remotely.
- protected java.util.concurrent.ExecutorService planExecutor()
- CloseableIterable<ScanTaskGroup<ScanTask>> planTasks(): Plan balanced task groups for this scan by splitting large and combining small tasks.
- ThisT planWith(java.util.concurrent.ExecutorService executorService): Create a new scan to use a particular executor to plan.
- ThisT project(Schema projectedSchema): Create a new scan from this with the schema as its projection.
- protected int remoteParallelism(): Returns the cluster parallelism.
- protected Expression residualFilter()
- protected java.util.List<java.lang.String> scanColumns()
- Schema schema(): Returns this scan's projection Schema.
- ThisT select(java.util.Collection<java.lang.String> columns): Create a new scan from this that will read the given data columns.
- protected boolean shouldCopyRemotelyPlannedDataFiles(): Controls whether defensive copies are created for remotely planned data files.
- protected boolean shouldIgnoreResiduals()
- protected boolean shouldPlanWithExecutor()
- protected boolean shouldReturnColumnStats()
- int splitLookback(): Returns the split lookback for this scan.
- long splitOpenFileCost(): Returns the split open file cost for this scan.
- Table table()
- protected Schema tableSchema()
- long targetSplitSize(): Returns the target split size for this scan.
- protected boolean useSnapshotSchema()
-
Methods inherited from class org.apache.iceberg.SnapshotScan
asOfTime, planFiles, scanMetrics, snapshot, snapshotId, toString, useRef, useSnapshot
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface org.apache.iceberg.BatchScan
asOfTime, snapshot, table, useRef, useSnapshot
-
Methods inherited from interface org.apache.iceberg.Scan
caseSensitive, filter, filter, ignoreResiduals, includeColumnStats, isCaseSensitive, metricsReporter, option, planFiles, planWith, project, schema, select, select, splitLookback, splitOpenFileCost, targetSplitSize
-
Field Detail
-
SCAN_COLUMNS
protected static final java.util.List<java.lang.String> SCAN_COLUMNS
-
SCAN_WITH_STATS_COLUMNS
protected static final java.util.List<java.lang.String> SCAN_WITH_STATS_COLUMNS
-
DELETE_SCAN_COLUMNS
protected static final java.util.List<java.lang.String> DELETE_SCAN_COLUMNS
-
DELETE_SCAN_WITH_STATS_COLUMNS
protected static final java.util.List<java.lang.String> DELETE_SCAN_WITH_STATS_COLUMNS
-
PLAN_SCANS_WITH_WORKER_POOL
protected static final boolean PLAN_SCANS_WITH_WORKER_POOL
-
-
Constructor Detail
-
SparkDistributedDataScan
public SparkDistributedDataScan(org.apache.spark.sql.SparkSession spark, Table table, SparkReadConf readConf)
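A brief construction sketch (not part of the class documentation): it assumes `table` was loaded from a catalog and that the Iceberg version in use exposes a `SparkReadConf(SparkSession, Table, Map)` constructor.

```java
import java.util.Collections;
import org.apache.iceberg.SparkDistributedDataScan;
import org.apache.iceberg.Table;
import org.apache.iceberg.spark.SparkReadConf;
import org.apache.spark.sql.SparkSession;

public class ScanConstructionExample {
  // Sketch only: readConf carries the read options and session defaults for this scan.
  static SparkDistributedDataScan newDistributedScan(SparkSession spark, Table table) {
    SparkReadConf readConf = new SparkReadConf(spark, table, Collections.emptyMap());
    return new SparkDistributedDataScan(spark, table, readConf);
  }
}
```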
-
-
Method Detail
-
newRefinedScan
protected BatchScan newRefinedScan(Table newTable, Schema newSchema, org.apache.iceberg.TableScanContext newContext)
-
remoteParallelism
protected int remoteParallelism()
Returns the cluster parallelism. This value indicates the maximum number of manifests that can be processed concurrently by the cluster. Implementations should take into account both the currently available processing slots and potential dynamic allocation, if applicable.
The remote parallelism is compared against the size of the thread pool available locally to determine the feasibility of remote planning. This value is ignored if the planning mode is set explicitly as local or distributed.
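For intuition only, the sketch below estimates a comparable figure from Spark's public status tracker; it is not necessarily how this class derives the value, and reading the per-executor core count from `spark.executor.cores` is an assumption.

```java
import org.apache.spark.SparkExecutorInfo;
import org.apache.spark.sql.SparkSession;

public class ParallelismEstimate {
  // Rough estimate: live executors (minus the driver entry) times cores per executor.
  static int estimateRemoteParallelism(SparkSession spark) {
    int coresPerExecutor = spark.sparkContext().conf().getInt("spark.executor.cores", 1);
    SparkExecutorInfo[] executors = spark.sparkContext().statusTracker().getExecutorInfos();
    return Math.max((executors.length - 1) * coresPerExecutor, 1);
  }
}
```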
-
dataPlanningMode
protected PlanningMode dataPlanningMode()
Returns which planning mode to use for data.
-
shouldCopyRemotelyPlannedDataFiles
protected boolean shouldCopyRemotelyPlannedDataFiles()
Controls whether defensive copies are created for remotely planned data files. By default, this class creates defensive copies for each data file that is planned remotely, assuming the provided iterable can be lazy and may reuse objects. If this is unnecessary and data file objects can be safely added to a collection, implementations can override this behavior.
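A hypothetical subclass sketch of such an override; it assumes the class is extensible in the Iceberg version in use and that the remote planner is known to return independent DataFile objects.

```java
import org.apache.iceberg.SparkDistributedDataScan;
import org.apache.iceberg.Table;
import org.apache.iceberg.spark.SparkReadConf;
import org.apache.spark.sql.SparkSession;

// Hypothetical subclass: skips defensive copies; only safe if the remotely planned
// iterable never reuses DataFile instances.
class NoCopyDistributedDataScan extends SparkDistributedDataScan {
  NoCopyDistributedDataScan(SparkSession spark, Table table, SparkReadConf readConf) {
    super(spark, table, readConf);
  }

  @Override
  protected boolean shouldCopyRemotelyPlannedDataFiles() {
    return false;
  }
}
```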
-
planDataRemotely
protected java.lang.Iterable<CloseableIterable<DataFile>> planDataRemotely(java.util.List<ManifestFile> dataManifests, boolean withColumnStats)
Plans data remotely. Implementations are encouraged to return groups of matching data files, enabling this class to process multiple groups concurrently to speed up the remaining work. This is particularly useful when dealing with equality deletes, as delete index lookups with such delete files require comparing bounds and typically benefit from parallelization.
If the result iterable reuses objects, shouldCopyRemotelyPlannedDataFiles() must return true.
The input data manifests have already been filtered to include only potential matches based on the scan filter. Implementations are expected to further filter these manifests and return only files that may hold data matching the scan filter.
- Parameters:
  - dataManifests - data manifests that may contain files matching the scan filter
  - withColumnStats - a flag indicating whether to load column stats
- Returns:
  - groups of data files planned remotely
-
deletePlanningMode
protected PlanningMode deletePlanningMode()
Returns which planning mode to use for deletes.
-
planDeletesRemotely
protected org.apache.iceberg.DeleteFileIndex planDeletesRemotely(java.util.List<ManifestFile> deleteManifests)
Plans deletes remotely. The input delete manifests have already been filtered to include only potential matches based on the scan filter. Implementations are expected to further filter these manifests and return only files that may hold deletes matching the scan filter.
- Parameters:
  - deleteManifests - delete manifests that may contain files matching the scan filter
- Returns:
  - a delete file index planned remotely
-
doPlanFiles
protected CloseableIterable<ScanTask> doPlanFiles()
- Specified by: doPlanFiles in class SnapshotScan<BatchScan,ScanTask,ScanTaskGroup<ScanTask>>
-
planTasks
public CloseableIterable<ScanTaskGroup<ScanTask>> planTasks()
Description copied from interface: Scan
Plan balanced task groups for this scan by splitting large and combining small tasks. Task groups created by this method may read partial input files, multiple input files, or both.
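A brief consumption sketch; `scan` is assumed to be a previously constructed SparkDistributedDataScan (or any other BatchScan), and the returned iterable is closed via try-with-resources.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import org.apache.iceberg.BatchScan;
import org.apache.iceberg.ScanTask;
import org.apache.iceberg.ScanTaskGroup;
import org.apache.iceberg.io.CloseableIterable;

public class PlanTasksExample {
  // Sketch: iterate balanced task groups and report their size.
  static void printTaskGroups(BatchScan scan) {
    try (CloseableIterable<ScanTaskGroup<ScanTask>> taskGroups = scan.planTasks()) {
      for (ScanTaskGroup<ScanTask> group : taskGroups) {
        System.out.printf("group: %d tasks, %d bytes%n", group.tasks().size(), group.sizeBytes());
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```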
-
useSnapshotSchema
protected boolean useSnapshotSchema()
- Overrides: useSnapshotSchema in class SnapshotScan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
-
newManifestGroup
protected org.apache.iceberg.ManifestGroup newManifestGroup(java.util.List<ManifestFile> dataManifests, java.util.List<ManifestFile> deleteManifests)
-
newManifestGroup
protected org.apache.iceberg.ManifestGroup newManifestGroup(java.util.List<ManifestFile> dataManifests, boolean withColumnStats)
-
newManifestGroup
protected org.apache.iceberg.ManifestGroup newManifestGroup(java.util.List<ManifestFile> dataManifests, java.util.List<ManifestFile> deleteManifests, boolean withColumnStats)
-
table
public Table table()
-
io
protected FileIO io()
-
tableSchema
protected Schema tableSchema()
-
context
protected org.apache.iceberg.TableScanContext context()
-
options
protected java.util.Map<java.lang.String,java.lang.String> options()
-
scanColumns
protected java.util.List<java.lang.String> scanColumns()
-
shouldReturnColumnStats
protected boolean shouldReturnColumnStats()
-
shouldIgnoreResiduals
protected boolean shouldIgnoreResiduals()
-
residualFilter
protected Expression residualFilter()
-
shouldPlanWithExecutor
protected boolean shouldPlanWithExecutor()
-
planExecutor
protected java.util.concurrent.ExecutorService planExecutor()
-
option
public ThisT option(java.lang.String property, java.lang.String value)
Description copied from interface: Scan
Create a new scan from this scan's configuration that will override the Table's behavior based on the incoming pair. Unknown properties will be ignored.
- Specified by: option in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Parameters:
  - property - name of the table property to be overridden
  - value - value to override with
- Returns:
  - a new scan based on this with overridden behavior
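As a usage sketch, the per-scan overrides below use the standard Iceberg read-split property names; the values are arbitrary examples.

```java
import org.apache.iceberg.BatchScan;

public class OptionExample {
  // Sketch: tune split planning for this scan only, leaving the table properties untouched.
  static BatchScan withTunedSplits(BatchScan scan) {
    return scan
        .option("read.split.target-size", "268435456")   // 256 MB target splits
        .option("read.split.open-file-cost", "8388608"); // 8 MB per-file open cost
  }
}
```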
-
project
public ThisT project(Schema projectedSchema)
Description copied from interface: Scan
Create a new scan from this with the schema as its projection.
- Specified by: project in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Parameters:
  - projectedSchema - a projection schema
- Returns:
  - a new scan based on this with the given projection
-
caseSensitive
public ThisT caseSensitive(boolean caseSensitive)
Description copied from interface: Scan
Create a new scan from this that, if data columns were selected via Scan.select(java.util.Collection), controls whether the match to the schema will be done with case sensitivity. Default is true.
- Specified by: caseSensitive in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Returns:
- a new scan based on this with case sensitivity as stated
-
isCaseSensitive
public boolean isCaseSensitive()
Description copied from interface: Scan
Returns whether this scan is case-sensitive with respect to column names.
- Specified by: isCaseSensitive in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Returns:
- true if case-sensitive, false otherwise.
-
includeColumnStats
public ThisT includeColumnStats()
Description copied from interface: Scan
Create a new scan from this that loads the column stats with each data file. Column stats include: value count, null value count, lower bounds, and upper bounds.
- Specified by: includeColumnStats in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Returns:
- a new scan based on this that loads column stats.
-
select
public ThisT select(java.util.Collection<java.lang.String> columns)
Description copied from interface: Scan
Create a new scan from this that will read the given data columns. This produces an expected schema that includes all fields that are either selected or used by this scan's filter expression.
- Specified by: select in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Parameters:
  - columns - column names from the table's schema
- Returns:
  - a new scan based on this with the given projection columns
-
filter
public ThisT filter(Expression expr)
Description copied from interface: Scan
Create a new scan from the results of this filtered by the Expression.
- Specified by: filter in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Parameters:
  - expr - a filter expression
- Returns:
  - a new scan based on this with results filtered by the expression
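A combined sketch of select and filter using the expressions API; the column names are hypothetical.

```java
import java.util.Arrays;
import org.apache.iceberg.BatchScan;
import org.apache.iceberg.expressions.Expressions;

public class FilterSelectExample {
  // Sketch: project a few columns and push a bounds-friendly filter;
  // "event_date", "level", and "message" are hypothetical column names.
  static BatchScan errorsSince(BatchScan scan, String isoDate) {
    return scan
        .select(Arrays.asList("event_date", "level", "message"))
        .filter(Expressions.and(
            Expressions.greaterThanOrEqual("event_date", isoDate),
            Expressions.equal("level", "ERROR")));
  }
}
```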
-
filter
public Expression filter()
Description copied from interface: Scan
Returns this scan's filter Expression.
- Specified by: filter in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Returns:
- this scan's filter expression
-
ignoreResiduals
public ThisT ignoreResiduals()
Description copied from interface: Scan
Create a new scan from this that applies data filtering to files but not to rows in those files.
- Specified by: ignoreResiduals in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Returns:
- a new scan based on this that does not filter rows in files.
-
planWith
public ThisT planWith(java.util.concurrent.ExecutorService executorService)
Description copied from interface: Scan
Create a new scan that uses a particular executor to plan. If no executor is provided, the default worker pool is used.
- Specified by: planWith in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Parameters:
  - executorService - the provided executor
- Returns:
  - a table scan that uses the provided executor to access manifests
-
schema
public Schema schema()
Description copied from interface: Scan
Returns this scan's projection Schema.
If the projection schema was set directly using Scan.project(Schema), returns that schema. If the projection schema was set by calling Scan.select(Collection), returns a projection schema that includes the selected data fields and any fields used in the filter expression.
- Specified by: schema in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
- Returns:
- this scan's projection schema
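A short illustration of the two ways the projection can be established (sketch; the column names are hypothetical).

```java
import java.util.Arrays;
import org.apache.iceberg.BatchScan;
import org.apache.iceberg.Schema;

public class ProjectionSchemaExample {
  static void showProjections(BatchScan scan, Schema projectedSchema) {
    // Set the projection directly: schema() returns exactly this schema.
    Schema direct = scan.project(projectedSchema).schema();

    // Select columns: schema() also includes any fields referenced by the scan filter.
    Schema selected = scan.select(Arrays.asList("event_date", "level")).schema();

    System.out.println(direct.asStruct());
    System.out.println(selected.asStruct());
  }
}
```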
-
targetSplitSize
public long targetSplitSize()
Description copied from interface: Scan
Returns the target split size for this scan.
- Specified by: targetSplitSize in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
-
splitLookback
public int splitLookback()
Description copied from interface: Scan
Returns the split lookback for this scan.
- Specified by: splitLookback in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
-
splitOpenFileCost
public long splitOpenFileCost()
Description copied from interface: Scan
Returns the split open file cost for this scan.
- Specified by: splitOpenFileCost in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
-
metricsReporter
public ThisT metricsReporter(MetricsReporter reporter)
Description copied from interface: Scan
Create a new scan that will report scan metrics to the provided reporter in addition to reporters maintained by the scan.
- Specified by: metricsReporter in interface Scan<ThisT,T extends ScanTask,G extends ScanTaskGroup<T>>
-
-