To use Iceberg in Spark, first configure Spark catalogs.
Some plans are only available when using Iceberg SQL extensions in Spark 3.x.
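For example, a minimal session setup might register a catalog named `prod` (matching the table names used below) and enable the Iceberg SQL extensions. This is only a sketch; the `hadoop` catalog type and the warehouse path are placeholder assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: registers an Iceberg catalog named "prod" and the Iceberg SQL
// extensions. The catalog type ("hadoop") and warehouse location are assumptions.
val spark = SparkSession.builder()
  .appName("iceberg-writes")
  .config("spark.sql.extensions",
    "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
  .config("spark.sql.catalog.prod", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.prod.type", "hadoop")
  .config("spark.sql.catalog.prod.warehouse", "hdfs://nn:8020/warehouse/path")
  .getOrCreate()
```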
Iceberg uses Apache Spark’s DataSourceV2 API for data source and catalog implementations. Spark DSv2 is an evolving API with different levels of support in Spark versions:
| Feature support         | Spark 3.0 | Spark 2.4 | Notes                                        |
|-------------------------|-----------|-----------|----------------------------------------------|
| SQL insert into         | ✔️        |           |                                              |
| SQL merge into          | ✔️        |           | ⚠ Requires Iceberg Spark extensions          |
| SQL insert overwrite    | ✔️        |           |                                              |
| SQL delete from         | ✔️        |           | ⚠ Row-level delete requires Spark extensions |
| DataFrame overwrite     | ✔️        | ✔️        | ⚠ Behavior changed in Spark 3.0              |
| DataFrame CTAS and RTAS | ✔️        |           |                                              |
Writing with SQL¶
Spark 3 supports SQL `INSERT INTO`, `MERGE INTO`, and `INSERT OVERWRITE`, as well as the new `DataFrameWriterV2` API.
To append new data to a table, use `INSERT INTO`:

```sql
INSERT INTO prod.db.table VALUES (1, 'a'), (2, 'b')
INSERT INTO prod.db.table SELECT ...
```
Spark 3 added support for `MERGE INTO` queries that can express row-level updates.

Iceberg supports `MERGE INTO` by rewriting data files that contain rows that need to be updated in an overwrite commit.

`MERGE INTO` is recommended instead of `INSERT OVERWRITE` because Iceberg can replace only the affected data files, and because the data overwritten by a dynamic overwrite may change if the table’s partitioning changes.
MERGE INTO syntax¶
`MERGE INTO` updates a table, called the target table, using a set of updates from another query, called the source. The update for a row in the target table is found using the `ON` clause, which is like a join condition.
```sql
MERGE INTO prod.db.target t   -- a target table
USING (SELECT ...) s          -- the source updates
ON t.id = s.id                -- condition to find updates for target rows
WHEN ...                      -- updates
```
Updates to rows in the target table are listed using `WHEN MATCHED ... THEN ...`. Multiple `MATCHED` clauses can be added with conditions that determine when each match should be applied. The first matching expression is used.
```sql
WHEN MATCHED AND s.op = 'delete' THEN DELETE
WHEN MATCHED AND t.count IS NULL AND s.op = 'increment' THEN UPDATE SET t.count = 0
WHEN MATCHED AND s.op = 'increment' THEN UPDATE SET t.count = t.count + 1
```
Source rows (updates) that do not match can be inserted:
```sql
WHEN NOT MATCHED THEN INSERT *
```
Inserts also support additional conditions:
```sql
WHEN NOT MATCHED AND s.event_time > still_valid_threshold THEN INSERT (id, count) VALUES (s.id, 1)
```
Only one record in the source data can update any given row of the target table, or else an error will be thrown.
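Putting the pieces together, a complete statement might look like the sketch below, issued here through `spark.sql`. The source table `prod.db.updates` and its columns (`id`, `op`, `count`) are assumptions made for illustration:

```scala
// Sketch: a full MERGE INTO combining the clause forms shown above.
// prod.db.updates and its columns are hypothetical.
spark.sql("""
  MERGE INTO prod.db.target t
  USING (SELECT id, op, count FROM prod.db.updates) s
  ON t.id = s.id
  WHEN MATCHED AND s.op = 'delete' THEN DELETE
  WHEN MATCHED THEN UPDATE SET t.count = t.count + s.count
  WHEN NOT MATCHED THEN INSERT *
""")
```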
`INSERT OVERWRITE` can replace data in the table with the result of a query. Overwrites are atomic operations for Iceberg tables.

The partitions that will be replaced by `INSERT OVERWRITE` depend on Spark’s partition overwrite mode and the partitioning of the table.
`MERGE INTO` can rewrite only affected data files and has more easily understood behavior, so it is recommended instead of `INSERT OVERWRITE`.
Spark’s default overwrite mode is static, but dynamic overwrite mode is recommended when writing to Iceberg tables. Static overwrite mode determines which partitions to overwrite in a table by converting the `PARTITION` clause to a filter, but the `PARTITION` clause can only reference table columns.
Dynamic overwrite mode is configured by setting `spark.sql.sources.partitionOverwriteMode=dynamic`.
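For example, this can be set on the active session (a sketch; it can equally go in `spark-defaults.conf`):

```scala
// Enable dynamic partition overwrite for subsequent INSERT OVERWRITE statements.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
```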
To demonstrate the behavior of dynamic and static overwrites, consider a `logs` table defined by the following DDL:

```sql
CREATE TABLE prod.my_app.logs (
    uuid string NOT NULL,
    level string NOT NULL,
    ts timestamp NOT NULL,
    message string)
USING iceberg
PARTITIONED BY (level, hours(ts))
```
When Spark’s overwrite mode is dynamic, partitions that have rows produced by the `SELECT` query will be replaced.
For example, this query removes duplicate log events from the example `logs` table:

```sql
INSERT OVERWRITE prod.my_app.logs
SELECT uuid, first(level), first(ts), first(message)
FROM prod.my_app.logs
WHERE cast(ts as date) = '2020-07-01'
GROUP BY uuid
```
In dynamic mode, this will replace any partition with rows in the `SELECT` result. Because the date of all rows is restricted to 1 July, only hours of that day will be replaced.
When Spark’s overwrite mode is static, the `PARTITION` clause is converted to a filter that is used to delete from the table. If the `PARTITION` clause is omitted, all partitions will be replaced.
Because there is no `PARTITION` clause in the query above, it will drop all existing rows in the table when run in static mode, but will only write the logs from 1 July.
To overwrite just the partitions that were loaded, add a `PARTITION` clause that aligns with the `SELECT` query filter:

```sql
INSERT OVERWRITE prod.my_app.logs
PARTITION (level = 'INFO')
SELECT uuid, first(level), first(ts), first(message)
FROM prod.my_app.logs
WHERE level = 'INFO'
GROUP BY uuid
```
Note that this mode cannot replace hourly partitions like the dynamic example query because the `PARTITION` clause can only reference table columns, not hidden partitions.
Spark 3 added support for `DELETE FROM` queries to remove data from tables.
Delete queries accept a filter to match rows to delete.
```sql
DELETE FROM prod.db.table
WHERE ts >= '2020-05-01 00:00:00' and ts < '2020-06-01 00:00:00'
```
If the delete filter matches entire partitions of the table, Iceberg will perform a metadata-only delete. If the filter matches individual rows of a table, then Iceberg will rewrite only the affected data files.
Writing with DataFrames¶
Spark 3 introduced the new `DataFrameWriterV2` API for writing to tables using data frames. The v2 API is recommended for several reasons:
- CTAS, RTAS, and overwrite by filter are supported
- All operations consistently write columns to a table by name
- Hidden partition expressions are supported in `partitionedBy`
- Overwrite behavior is explicit, either dynamic or by a user-supplied filter
- The behavior of each operation corresponds to SQL statements:
    - `df.writeTo(t).create()` is equivalent to `CREATE TABLE AS SELECT`
    - `df.writeTo(t).replace()` is equivalent to `REPLACE TABLE AS SELECT`
    - `df.writeTo(t).append()` is equivalent to `INSERT INTO`
    - `df.writeTo(t).overwritePartitions()` is equivalent to dynamic `INSERT OVERWRITE`
The v1 DataFrame `write` API is still supported, but is not recommended.
When writing with the v1 DataFrame API in Spark 3, use `insertInto` to load tables with a catalog. Using `format("iceberg")` loads an isolated table reference that will not automatically refresh tables used by queries.
To append a dataframe to an Iceberg table, use `append`:

```scala
val data: DataFrame = ...
data.writeTo("prod.db.table").append()
```
In Spark 2.4, use the v1 API with `append` mode and `save`:

```scala
data.write
    .format("iceberg")
    .mode("append")
    .save("db.table")
```
To overwrite partitions dynamically, use `overwritePartitions()`:

```scala
val data: DataFrame = ...
data.writeTo("prod.db.table").overwritePartitions()
```
To explicitly overwrite partitions, use `overwrite` to supply a filter:

```scala
data.writeTo("prod.db.table").overwrite($"level" === "INFO")
```
In Spark 2.4, overwrite values in an Iceberg table with `overwrite` mode and `save`:

```scala
data.write
    .format("iceberg")
    .mode("overwrite")
    .save("db.table")
```
The behavior of overwrite mode changed between Spark 2.4 and Spark 3.
The behavior of DataFrameWriter overwrite mode was undefined in Spark 2.4, but is required to overwrite the entire table in Spark 3. Because of this new requirement, the Iceberg source’s behavior changed in Spark 3. In Spark 2.4, the behavior was to dynamically overwrite partitions. To use the Spark 2.4 behavior, add the write option `overwrite-mode=dynamic`.
To run a CTAS or RTAS, use `create`, `replace`, or `createOrReplace` operations:

```scala
val data: DataFrame = ...
data.writeTo("prod.db.table").create()
```
Create and replace operations support table configuration methods, like `partitionedBy` and `tableProperty`:

```scala
data.writeTo("prod.db.table")
    .tableProperty("write.format.default", "orc")
    .partitionedBy($"level", days($"ts"))
    .createOrReplace()
```
Writing to partitioned tables¶
Iceberg requires the data to be sorted according to the partition spec per task (Spark partition) before writing to a partitioned table. This applies to both writing with SQL and writing with DataFrames.
An explicit sort is necessary because Spark doesn’t allow Iceberg to request a sort before writing as of Spark 3.0. SPARK-23889 has been filed to enable Iceberg to require a specific distribution and sort order from Spark.
Both a global sort (`sort`) and a local sort (`sortWithinPartitions`) satisfy the requirement.
Let’s walk through writing data to the sample table below:
```sql
CREATE TABLE prod.db.sample (
    id bigint,
    data string,
    category string,
    ts timestamp)
USING iceberg
PARTITIONED BY (days(ts), category)
```
To write data to the sample table, your data needs to be sorted by `days(ts), category`.
If you’re inserting data with a SQL statement, you can use `ORDER BY` to achieve this, like below:

```sql
INSERT INTO prod.db.sample
SELECT id, data, category, ts FROM another_table
ORDER BY ts, category
```
If you’re inserting data with a DataFrame, you can use either `sort` to trigger a global sort, or `sortWithinPartitions` to trigger a local sort. A local sort, for example:

```scala
data.sortWithinPartitions("ts", "category")
    .writeTo("prod.db.sample")
    .append()
```
For most partition transformations you can simply add the original column to the sort condition. For the `bucket` partition transformation, however, you need to register the Iceberg transform function in Spark so that it can be used in the sort.
Let’s walk through another sample table that has a bucket partition:

```sql
CREATE TABLE prod.db.sample (
    id bigint,
    data string,
    category string,
    ts timestamp)
USING iceberg
PARTITIONED BY (bucket(16, id))
```
You need to register a function to handle the bucket transform, like below:

```scala
import org.apache.iceberg.spark.IcebergSpark
import org.apache.spark.sql.types.DataTypes

IcebergSpark.registerBucketUDF(spark, "iceberg_bucket16", DataTypes.LongType, 16)
```
Explicit registration of the function is necessary because Spark doesn’t allow Iceberg to provide functions. SPARK-27658 has been filed to enable Iceberg to provide functions that can be used in queries.
Here we registered the bucket function as `iceberg_bucket16`, which can be used in the sort clause.
If you’re inserting data with a SQL statement, you can use the function like below:

```sql
INSERT INTO prod.db.sample
SELECT id, data, category, ts FROM another_table
ORDER BY iceberg_bucket16(id)
```
If you’re inserting data with a DataFrame, you can use the function like below:

```scala
import org.apache.spark.sql.functions.expr

data.sortWithinPartitions(expr("iceberg_bucket16(id)"))
    .writeTo("prod.db.sample")
    .append()
```
Spark and Iceberg support different sets of types. Iceberg does the type conversion automatically, but not for all combinations, so you may want to understand the type conversions in Iceberg before designing the column types of your tables.
Spark type to Iceberg type¶
This type conversion table describes how Spark types are converted to Iceberg types. The conversion applies both when creating an Iceberg table and when writing to an Iceberg table via Spark.
| Spark     | Iceberg                 |
|-----------|-------------------------|
| timestamp | timestamp with timezone |
The table shows the conversion applied when creating a table. In fact, broader conversions are supported on write. Here are some points on writes:
- Iceberg numeric types (`integer`, `long`, `float`, `double`, `decimal`) support promotion during writes. For example, you can write Spark types `byte`, `short`, `int`, and `long` to the Iceberg type `long`.
- You can write to the Iceberg `fixed` type using the Spark `binary` type. Note that an assertion on the length will be performed.
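As a small illustration of the conversion at table creation time (a sketch; the table name `prod.db.events` is an assumption), a Spark `timestamp` column becomes an Iceberg `timestamp with timezone` column:

```scala
import org.apache.spark.sql.functions.current_timestamp

// Sketch: the Spark long column "id" maps to Iceberg long, and the Spark
// timestamp column "ts" is stored as Iceberg timestamp with timezone.
val events = spark.range(3).withColumn("ts", current_timestamp())
events.writeTo("prod.db.events").create()
```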
Iceberg type to Spark type¶
This type conversion table describes how Iceberg types are converted to Spark types. The conversion applies when reading from an Iceberg table via Spark.
| Iceberg                    | Spark         |
|----------------------------|---------------|
| timestamp with timezone    | timestamp     |
| timestamp without timezone | Not supported |