Class CreateChangelogViewProcedure
- java.lang.Object
  - org.apache.iceberg.spark.procedures.CreateChangelogViewProcedure

All Implemented Interfaces:
- Procedure
 
public class CreateChangelogViewProcedure
extends java.lang.Object

A procedure that creates a view for changed rows.

The procedure removes carry-over rows by default. If you want to keep them, set "remove_carryovers" to false in the options.

The procedure does not compute the pre/post update images by default. If you want to compute them, set "compute_updates" to true in the options.

Carry-over rows are the result of a removal and insertion of the same row within one operation, caused by the copy-on-write mechanism. For example, given a file that contains row1 (id=1, data='a') and row2 (id=2, data='b'), a copy-on-write delete of row2 requires erasing this file and preserving row1 in a new file. The changelog table would report this as (id=1, data='a', op='DELETE') and (id=1, data='a', op='INSERT'), even though row1 was not actually changed. The procedure finds such carry-over rows and removes them from the result.

Pre/post update images are converted from a pair of a delete row and an insert row. Identifier columns are used to determine whether an insert and a delete record refer to the same row. If the two records share the same values for the identifier columns, they are considered the before and after states of the same row. You can either set identifier fields in the table schema or pass them as procedure parameters. Here is an example of pre/post update images with an identifier column (id). A pair of a delete row and an insert row with the same id:

- (id=1, data='a', op='DELETE')
- (id=1, data='b', op='INSERT')

will be marked as pre/post update images:

- (id=1, data='a', op='UPDATE_BEFORE')
- (id=1, data='b', op='UPDATE_AFTER')
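For orientation, here is a minimal sketch of invoking the procedure from Java, assuming Iceberg's Spark SQL extensions are enabled and a catalog named my_catalog is configured; my_catalog, db.tbl, the snapshot IDs, and the id column are placeholders. The argument and option names follow the Iceberg procedure documentation, and the single output column is assumed to hold the created view's name.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ChangelogViewExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("changelog-view-example")
        .getOrCreate();

    // Create a changelog view for db.tbl between two snapshots, computing
    // pre/post update images with `id` as the identifier column. If the
    // table schema already declares identifier fields, identifier_columns
    // can be omitted.
    Dataset<Row> result = spark.sql(
        "CALL my_catalog.system.create_changelog_view("
            + "table => 'db.tbl', "
            + "options => map('start-snapshot-id', '1', 'end-snapshot-id', '2'), "
            + "compute_updates => true, "
            + "identifier_columns => array('id'))");

    // The procedure reports the name of the view it created (an assumption
    // based on the documented output of create_changelog_view).
    String viewName = result.first().getString(0);
    spark.sql("SELECT * FROM " + viewName + " ORDER BY _change_ordinal").show();
  }
}
```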
 
Field Summary

Fields:
- protected static org.apache.spark.sql.types.DataType STRING_ARRAY
- protected static org.apache.spark.sql.types.DataType STRING_MAP
Method Summary

- protected SparkActions actions()
- static SparkProcedures.ProcedureBuilder builder()
- org.apache.spark.sql.catalyst.InternalRow[] call(org.apache.spark.sql.catalyst.InternalRow args): Executes this procedure.
- protected void closeService(): Closes this procedure's executor service if a new one was created with executorService(int, String).
- java.lang.String description(): Returns the description of this procedure.
- protected java.util.concurrent.ExecutorService executorService(int threadPoolSize, java.lang.String nameFormat): Starts a new executor service which can be used by this procedure in its work.
- protected org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> loadRows(org.apache.spark.sql.connector.catalog.Identifier tableIdent, java.util.Map<java.lang.String,java.lang.String> options)
- protected SparkTable loadSparkTable(org.apache.spark.sql.connector.catalog.Identifier ident)
- protected <T> T modifyIcebergTable(org.apache.spark.sql.connector.catalog.Identifier ident, java.util.function.Function<Table,T> func)
- protected org.apache.spark.sql.catalyst.InternalRow newInternalRow(java.lang.Object... values)
- org.apache.spark.sql.types.StructType outputType(): Returns the type of rows produced by this procedure.
- ProcedureParameter[] parameters(): Returns the input parameters of this procedure.
- protected void refreshSparkCache(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
- protected org.apache.spark.sql.SparkSession spark()
- protected org.apache.spark.sql.connector.catalog.TableCatalog tableCatalog()
- protected Spark3Util.CatalogAndIdentifier toCatalogAndIdentifier(java.lang.String identifierAsString, java.lang.String argName, org.apache.spark.sql.connector.catalog.CatalogPlugin catalog)
- protected org.apache.spark.sql.connector.catalog.Identifier toIdentifier(java.lang.String identifierAsString, java.lang.String argName)
- protected <T> T withIcebergTable(org.apache.spark.sql.connector.catalog.Identifier ident, java.util.function.Function<Table,T> func)
 
Method Detail

builder

public static SparkProcedures.ProcedureBuilder builder()
parameters

public ProcedureParameter[] parameters()

Description copied from interface: Procedure
Returns the input parameters of this procedure.
outputType

public org.apache.spark.sql.types.StructType outputType()

Description copied from interface: Procedure
Returns the type of rows produced by this procedure.
call

public org.apache.spark.sql.catalyst.InternalRow[] call(org.apache.spark.sql.catalyst.InternalRow args)

Description copied from interface: Procedure
Executes this procedure.

Spark will align the provided arguments according to the input parameters defined in Procedure.parameters(), either by position or by name, before execution.

Implementations may provide a summary of execution by returning one or many rows as a result. The schema of output rows must match the output type defined in Procedure.outputType().

Parameters:
- args - input arguments

Returns:
- the result of executing this procedure with the given arguments
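To illustrate the by-position alignment described above, the sketch below invokes the same procedure with a single positional argument and reads the one-row summary whose schema is given by outputType(). It assumes the table identifier is the procedure's first declared parameter and that the output row carries the created view's name; my_catalog and db.tbl are placeholders.

```java
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PositionalCallExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("positional-call-example")
        .getOrCreate();

    // 'db.tbl' is aligned by position to the first declared parameter
    // (assumed to be the table identifier); all other parameters keep
    // their default values.
    Row summary = spark.sql(
        "CALL my_catalog.system.create_changelog_view('db.tbl')").first();

    // The row's schema matches outputType(); we assume one string column
    // holding the created view's name.
    System.out.println("Created view: " + summary.getString(0));
  }
}
```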
 
description

public java.lang.String description()

Description copied from interface: Procedure
Returns the description of this procedure.
spark

protected org.apache.spark.sql.SparkSession spark()
actions

protected SparkActions actions()
tableCatalog

protected org.apache.spark.sql.connector.catalog.TableCatalog tableCatalog()
modifyIcebergTable

protected <T> T modifyIcebergTable(org.apache.spark.sql.connector.catalog.Identifier ident, java.util.function.Function<Table,T> func)
withIcebergTable

protected <T> T withIcebergTable(org.apache.spark.sql.connector.catalog.Identifier ident, java.util.function.Function<Table,T> func)
toIdentifier

protected org.apache.spark.sql.connector.catalog.Identifier toIdentifier(java.lang.String identifierAsString, java.lang.String argName)
toCatalogAndIdentifier

protected Spark3Util.CatalogAndIdentifier toCatalogAndIdentifier(java.lang.String identifierAsString, java.lang.String argName, org.apache.spark.sql.connector.catalog.CatalogPlugin catalog)
loadSparkTable

protected SparkTable loadSparkTable(org.apache.spark.sql.connector.catalog.Identifier ident)
loadRows

protected org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> loadRows(org.apache.spark.sql.connector.catalog.Identifier tableIdent, java.util.Map<java.lang.String,java.lang.String> options)
refreshSparkCache

protected void refreshSparkCache(org.apache.spark.sql.connector.catalog.Identifier ident, org.apache.spark.sql.connector.catalog.Table table)
newInternalRow

protected org.apache.spark.sql.catalyst.InternalRow newInternalRow(java.lang.Object... values)
closeService

protected void closeService()

Closes this procedure's executor service if a new one was created with executorService(int, String). Does not block for any remaining tasks.
executorService

protected java.util.concurrent.ExecutorService executorService(int threadPoolSize, java.lang.String nameFormat)

Starts a new executor service which can be used by this procedure in its work. The pool will be automatically shut down if withIcebergTable(Identifier, Function) or modifyIcebergTable(Identifier, Function) are called. If these methods are not used, the service can be shut down with closeService() or left to be closed when this class is finalized.

Parameters:
- threadPoolSize - number of threads in the service
- nameFormat - name prefix for threads created in this service

Returns:
- the new executor service owned by this procedure
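As a sketch of the lifecycle this method and closeService() describe, the snippet below builds a fixed-size pool whose threads get a readable name (mirroring the nameFormat parameter) and shuts it down explicitly when the work is done. It is an illustration of the pattern, not this class's internal code; the name format and pool size are arbitrary.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedPoolSketch {
  public static void main(String[] args) throws Exception {
    // Mirrors the nameFormat parameter: a prefix used to name pool threads.
    String nameFormat = "changelog-worker-%d";
    AtomicInteger counter = new AtomicInteger();
    ThreadFactory factory = runnable -> {
      Thread thread = new Thread(
          runnable, String.format(nameFormat, counter.getAndIncrement()));
      thread.setDaemon(true);
      return thread;
    };

    ExecutorService service = Executors.newFixedThreadPool(4, factory);
    try {
      Future<String> task =
          service.submit(() -> Thread.currentThread().getName());
      System.out.println("Ran on: " + task.get());
    } finally {
      // Analogous to closeService(): initiate shutdown without waiting
      // for remaining tasks to finish.
      service.shutdown();
    }
  }
}
```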
 
 