public static class FlinkSink.Builder
extends java.lang.Object
| Modifier and Type | Method and Description |
|---|---|
| org.apache.flink.streaming.api.datastream.DataStreamSink<java.lang.Void> | append(): Append the iceberg sink operators to write records to the iceberg table. |
| org.apache.flink.streaming.api.datastream.DataStreamSink<org.apache.flink.table.data.RowData> | build(): Deprecated. This will be removed in 0.14.0; use append() instead, because its returned DataStreamSink has a more correct data type. |
| FlinkSink.Builder | distributionMode(DistributionMode mode): Configure the write DistributionMode that the flink sink will use. |
| FlinkSink.Builder | equalityFieldColumns(java.util.List<java.lang.String> columns): Configure the equality field columns for an iceberg table that accepts CDC or UPSERT events. |
| FlinkSink.Builder | overwrite(boolean newOverwrite) |
| FlinkSink.Builder | table(Table newTable) |
| FlinkSink.Builder | tableLoader(TableLoader newTableLoader): The table loader is used for loading tables in IcebergFilesCommitter lazily; this loader is needed because Table is not serializable, so the table loaded via Builder#table cannot simply be reused in the remote task manager. |
| FlinkSink.Builder | tableSchema(org.apache.flink.table.api.TableSchema newTableSchema) |
| FlinkSink.Builder | uidPrefix(java.lang.String newPrefix): Set the uid prefix for FlinkSink operators. |
| FlinkSink.Builder | upsert(boolean enabled): All INSERT/UPDATE_AFTER events from the input stream will be transformed to UPSERT events, which means the sink will DELETE the old record and then INSERT the new record. |
| FlinkSink.Builder | writeParallelism(int newWriteParallelism): Configure the write parallelism for the iceberg stream writer. |
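Taken together, the builder is obtained from FlinkSink.forRowData(...) and finished with append(). A minimal end-to-end sketch, assuming the iceberg-flink runtime is on the classpath; the input stream and the table location are hypothetical placeholders:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class IcebergSinkExample {
  // Wires an existing RowData stream into an Iceberg table.
  static void wireSink(DataStream<RowData> rowDataStream) {
    // Hypothetical Hadoop table location.
    TableLoader tableLoader =
        TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/db/t");

    FlinkSink.forRowData(rowDataStream)
        .tableLoader(tableLoader) // loads the table lazily inside tasks
        .writeParallelism(4)      // four parallel iceberg stream writers
        .append();                // preferred over the deprecated build()
  }
}
```

The sketches in the method details below reuse the hypothetical rowDataStream and loader names from this example.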
public FlinkSink.Builder table(Table newTable)

This iceberg Table instance is used for initializing IcebergStreamWriter, which will write all the records into DataFiles and emit them to the downstream operator. Providing a table avoids loading the table repeatedly from each separate task.

Parameters:
newTable - the loaded iceberg table instance.
Returns:
FlinkSink.Builder to connect the iceberg table.
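A sketch of supplying a pre-loaded Table, continuing the hypothetical names from the sketch above; the loader stays set because Table itself is not serializable:

```java
TableLoader loader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/db/t");
loader.open();                    // open the loader before loading
Table table = loader.loadTable(); // load once on the client side

FlinkSink.forRowData(rowDataStream)
    .table(table)        // avoids re-loading the table in every task
    .tableLoader(loader) // still required for the committer inside tasks
    .append();
```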
public FlinkSink.Builder tableLoader(TableLoader newTableLoader)

The table loader is used for loading tables in IcebergFilesCommitter lazily; this loader is needed because Table is not serializable, so the table loaded via Builder#table cannot simply be reused in the remote task manager.

Parameters:
newTableLoader - to load the iceberg table inside tasks.
Returns:
FlinkSink.Builder to connect the iceberg table.
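Two common ways to construct a TableLoader, sketched under the assumption of a Hadoop warehouse; the catalog name, paths, and table identifier are hypothetical:

```java
// 1) Directly from a Hadoop table location.
TableLoader byPath = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/db/t");

// 2) Via a catalog plus a table identifier.
CatalogLoader catalogLoader = CatalogLoader.hadoop(
    "my_catalog",
    new org.apache.hadoop.conf.Configuration(),
    java.util.Collections.singletonMap("warehouse", "hdfs://nn:8020/warehouse"));
TableLoader byCatalog =
    TableLoader.fromCatalog(catalogLoader, TableIdentifier.of("db", "t"));
```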
public FlinkSink.Builder tableSchema(org.apache.flink.table.api.TableSchema newTableSchema)

public FlinkSink.Builder overwrite(boolean newOverwrite)
public FlinkSink.Builder distributionMode(DistributionMode mode)

Configure the write DistributionMode that the flink sink will use. Currently, flink supports DistributionMode.NONE and DistributionMode.HASH.

Parameters:
mode - to specify the write distribution mode.
Returns:
FlinkSink.Builder to connect the iceberg table.
public FlinkSink.Builder writeParallelism(int newWriteParallelism)

Configure the write parallelism for the iceberg stream writer.

Parameters:
newWriteParallelism - the number of parallel iceberg stream writers.
Returns:
FlinkSink.Builder to connect the iceberg table.
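A short sketch combining the two tuning knobs above, reusing the hypothetical stream and loader from the first example:

```java
FlinkSink.forRowData(rowDataStream)
    .tableLoader(loader)
    .distributionMode(DistributionMode.HASH) // shuffle records to writers by key
    .writeParallelism(8)                     // eight parallel iceberg stream writers
    .append();
```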
public FlinkSink.Builder upsert(boolean enabled)

All INSERT/UPDATE_AFTER events from the input stream will be transformed to UPSERT events, which means the sink will DELETE the old record and then INSERT the new record.

Parameters:
enabled - indicate whether it should transform all INSERT/UPDATE_AFTER events to UPSERT.
Returns:
FlinkSink.Builder to connect the iceberg table.
public FlinkSink.Builder equalityFieldColumns(java.util.List<java.lang.String> columns)

Configure the equality field columns for an iceberg table that accepts CDC or UPSERT events.

Parameters:
columns - defines the iceberg table's key.
Returns:
FlinkSink.Builder to connect the iceberg table.
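Upsert mode and equality field columns are typically configured together: the columns identify which existing row an UPSERT replaces. A sketch with a hypothetical CDC stream and key column "id":

```java
FlinkSink.forRowData(cdcStream) // hypothetical CDC stream of RowData
    .tableLoader(loader)
    .equalityFieldColumns(java.util.Arrays.asList("id")) // the table's key
    .upsert(true) // INSERT/UPDATE_AFTER become DELETE + INSERT pairs
    .append();
```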
public FlinkSink.Builder uidPrefix(java.lang.String newPrefix)

Set the uid prefix for FlinkSink operators. Flink supports pipeline.auto-generate-uid=false to disable uid auto-generation and force explicit setting of all operator uids. If the uid prefix of an existing job is changed, --allowNonRestoredState is needed to ignore the previous sink state. During restore, the Flink sink state is used to check whether the last commit was actually successful or not, so --allowNonRestoredState can lead to data loss if the Iceberg commit failed in the last completed checkpoint.

Parameters:
newPrefix - prefix for Flink sink operator uid and name
Returns:
FlinkSink.Builder to connect the iceberg table.
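A sketch of pinning the sink operator uids, useful before taking savepoints; the prefix value is a hypothetical placeholder:

```java
FlinkSink.forRowData(rowDataStream)
    .tableLoader(loader)
    .uidPrefix("iceberg-orders-sink") // sink operator uids/names get this prefix
    .append();
```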
@Deprecated
public org.apache.flink.streaming.api.datastream.DataStreamSink<org.apache.flink.table.data.RowData> build()

Deprecated. This will be removed in 0.14.0; use append() instead, because its returned DataStreamSink has a more correct data type.

Returns:
DataStreamSink for sink.
public org.apache.flink.streaming.api.datastream.DataStreamSink<java.lang.Void> append()

Append the iceberg sink operators to write records to the iceberg table.

Returns:
DataStreamSink for sink.
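A migration sketch from the deprecated build() to append(); only one of the two should be called on a given builder:

```java
// Before (deprecated, removed in 0.14.0):
//   DataStreamSink<RowData> sink = builder.build();

// After (the element type is the more correct java.lang.Void):
DataStreamSink<Void> sink = builder.append();
```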