Spark and Iceberg Quickstart
This guide will get you up and running with an Iceberg and Spark environment, including sample code to highlight some powerful features. You can learn more about Iceberg’s Spark runtime by checking out the Spark section.
- Docker-Compose
- Creating a table
- Writing Data to a Table
- Reading Data from a Table
- Adding A Catalog
- Next Steps
Docker-Compose
The fastest way to get started is to use a docker-compose file that uses the tabulario/spark-iceberg image, which contains a local Spark cluster with a configured Iceberg catalog. To use this, you'll need to install the Docker CLI as well as the Docker Compose CLI.
Once you have those, save the yaml below into a file named docker-compose.yml:
version: "3"
services:
  spark-iceberg:
    image: tabulario/spark-iceberg
    depends_on:
      - postgres
    container_name: spark-iceberg
    environment:
      - SPARK_HOME=/opt/spark
      - PYSPARK_PYTHON=/usr/bin/python3.9
      - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/spark/bin
    volumes:
      - ./warehouse:/home/iceberg/warehouse
      - ./notebooks:/home/iceberg/notebooks/notebooks
    ports:
      - 8888:8888
      - 8080:8080
      - 18080:18080
  postgres:
    image: postgres:13.4-bullseye
    container_name: postgres
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=demo_catalog
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
Next, start up the docker containers with this command:
docker-compose up
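If you'd rather keep your terminal free, the containers can also be started in the background with Compose's standard detached flag. This is just a convenience and is not part of the original walkthrough:

# Start the containers in the background and check that both are running
docker-compose up -d
docker-compose ps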
You can then run any of the following commands to start a Spark session.
docker exec -it spark-iceberg spark-sql
docker exec -it spark-iceberg spark-shell
docker exec -it spark-iceberg pyspark
docker exec -it spark-iceberg notebook
The notebook server will be available at http://localhost:8888.

Creating a table
To create your first Iceberg table in Spark, run a CREATE TABLE command. Let's create a table using demo.nyc.taxis where demo is the catalog name, nyc is the database name, and taxis is the table name.
CREATE TABLE demo.nyc.taxis
(
  vendor_id bigint,
  trip_id bigint,
  trip_distance float,
  fare_amount double,
  store_and_fwd_flag string
)
PARTITIONED BY (vendor_id);
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

val schema = StructType(Array(
  StructField("vendor_id", LongType, true),
  StructField("trip_id", LongType, true),
  StructField("trip_distance", FloatType, true),
  StructField("fare_amount", DoubleType, true),
  StructField("store_and_fwd_flag", StringType, true)
))

val df = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], schema)
df.writeTo("demo.nyc.taxis").create()
from pyspark.sql.types import DoubleType, FloatType, LongType, StructType, StructField, StringType

schema = StructType([
  StructField("vendor_id", LongType(), True),
  StructField("trip_id", LongType(), True),
  StructField("trip_distance", FloatType(), True),
  StructField("fare_amount", DoubleType(), True),
  StructField("store_and_fwd_flag", StringType(), True)
])

df = spark.createDataFrame([], schema)
df.writeTo("demo.nyc.taxis").create()
Iceberg catalogs support the full range of SQL DDL commands, including:
- CREATE TABLE ... PARTITIONED BY
- CREATE TABLE ... AS SELECT
- ALTER TABLE
- DROP TABLE
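As a quick, purely illustrative sketch of a few of these commands (the demo.nyc.long_trips table and fare_per_distance_unit column are hypothetical names, not part of the original guide):

-- Create a new table from a query (CTAS); demo.nyc.long_trips is a hypothetical name
CREATE TABLE demo.nyc.long_trips AS
SELECT * FROM demo.nyc.taxis WHERE trip_distance > 5.0;

-- Evolve its schema by adding a column (hypothetical column name)
ALTER TABLE demo.nyc.long_trips ADD COLUMN fare_per_distance_unit float;

-- Drop the scratch table so it doesn't interfere with the rest of this guide
DROP TABLE demo.nyc.long_trips;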
Writing Data to a Table
Once your table is created, you can insert records.
INSERT INTO demo.nyc.taxis
VALUES (1, 1000371, 1.8, 15.32, 'N'), (2, 1000372, 2.5, 22.15, 'N'), (2, 1000373, 0.9, 9.01, 'N'), (1, 1000374, 8.4, 42.13, 'Y');
import org.apache.spark.sql.Row
val schema = spark.table("demo.nyc.taxis").schema
val data = Seq(
  Row(1: Long, 1000371: Long, 1.8f: Float, 15.32: Double, "N": String),
  Row(2: Long, 1000372: Long, 2.5f: Float, 22.15: Double, "N": String),
  Row(2: Long, 1000373: Long, 0.9f: Float, 9.01: Double, "N": String),
  Row(1: Long, 1000374: Long, 8.4f: Float, 42.13: Double, "Y": String)
)
val df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema)
df.writeTo("demo.nyc.taxis").append()
schema = spark.table("demo.nyc.taxis").schema
data = [
  (1, 1000371, 1.8, 15.32, "N"),
  (2, 1000372, 2.5, 22.15, "N"),
  (2, 1000373, 0.9, 9.01, "N"),
  (1, 1000374, 8.4, 42.13, "Y")
]
df = spark.createDataFrame(data, schema)
df.writeTo("demo.nyc.taxis").append()
Reading Data from a Table
To read a table, simply use the Iceberg table’s name.
SELECT * FROM demo.nyc.taxis;
val df = spark.table("demo.nyc.taxis")
df.show()
df = spark.table("demo.nyc.taxis")
df.show()
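Ordinary SQL also works against the table. For example, a simple aggregate over the rows inserted earlier might look like this (the query itself is just an illustration):

-- Trip count and average fare per vendor (illustrative query)
SELECT vendor_id, count(*) AS trips, avg(fare_amount) AS avg_fare
FROM demo.nyc.taxis
GROUP BY vendor_id
ORDER BY vendor_id;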
Adding A Catalog
Iceberg has several catalog back-ends that can be used to track tables, like JDBC, Hive MetaStore and Glue. Catalogs are configured using properties under spark.sql.catalog.(catalog_name). In this guide, we use JDBC, but you can follow these instructions to configure other catalog types. To learn more, check out the Catalog page in the Spark section.
This configuration creates a path-based catalog named demo for tables under $PWD/warehouse and adds support for Iceberg tables to Spark's built-in catalog.
spark-sql --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.14.0 \
    --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
    --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
    --conf spark.sql.catalog.spark_catalog.type=hive \
    --conf spark.sql.catalog.demo=org.apache.iceberg.spark.SparkCatalog \
    --conf spark.sql.catalog.demo.type=hadoop \
    --conf spark.sql.catalog.demo.warehouse=$PWD/warehouse \
    --conf spark.sql.defaultCatalog=demo
spark.jars.packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.14.0
spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.spark_catalog org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type hive
spark.sql.catalog.demo org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.demo.type hadoop
spark.sql.catalog.demo.warehouse $PWD/warehouse
spark.sql.defaultCatalog demo
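The configuration above uses a Hadoop (path-based) catalog for simplicity. If you want a JDBC catalog like the one backing the docker-compose environment instead, the properties would look roughly like the sketch below; the connection URI, user, password, and warehouse path are assumptions taken from the Postgres service defined earlier, and you would also need a PostgreSQL JDBC driver on Spark's classpath:

# Sketch of a JDBC-backed catalog (connection details assumed from the docker-compose file above)
spark.sql.catalog.demo                 org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.demo.catalog-impl    org.apache.iceberg.jdbc.JdbcCatalog
spark.sql.catalog.demo.uri             jdbc:postgresql://postgres:5432/demo_catalog
spark.sql.catalog.demo.jdbc.user       admin
spark.sql.catalog.demo.jdbc.password   password
spark.sql.catalog.demo.warehouse       /home/iceberg/warehouse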
If the demo catalog is not set as your default catalog, switch to it before running queries:

USE demo;
Next steps
Adding Iceberg to Spark
If you already have a Spark environment, you can add Iceberg using the --packages option.
spark-sql --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.14.0
spark-shell --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.14.0
pyspark --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.14.0
If you want to include Iceberg in your Spark installation, you can instead add the iceberg-spark-runtime-3.2_2.12 Jar to Spark's jars folder. You can download the runtime by visiting the Releases page.

Learn More
Now that you're up and running with Iceberg and Spark, check out the Iceberg-Spark docs to learn more!