Package org.apache.iceberg.spark.source
Class SparkMicroBatchStream
java.lang.Object
org.apache.iceberg.spark.source.SparkMicroBatchStream
All Implemented Interfaces:
org.apache.spark.sql.connector.read.streaming.MicroBatchStream, org.apache.spark.sql.connector.read.streaming.SparkDataStream, org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl
public class SparkMicroBatchStream
extends Object
implements org.apache.spark.sql.connector.read.streaming.MicroBatchStream, org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl
Method Summary

void commit(org.apache.spark.sql.connector.read.streaming.Offset end)
org.apache.spark.sql.connector.read.PartitionReaderFactory createReaderFactory()
org.apache.spark.sql.connector.read.streaming.Offset deserializeOffset(String json)
org.apache.spark.sql.connector.read.streaming.ReadLimit getDefaultReadLimit()
org.apache.spark.sql.connector.read.streaming.Offset initialOffset()
org.apache.spark.sql.connector.read.streaming.Offset latestOffset()
org.apache.spark.sql.connector.read.streaming.Offset latestOffset(org.apache.spark.sql.connector.read.streaming.Offset startOffset, org.apache.spark.sql.connector.read.streaming.ReadLimit limit)
org.apache.spark.sql.connector.read.InputPartition[] planInputPartitions(org.apache.spark.sql.connector.read.streaming.Offset start, org.apache.spark.sql.connector.read.streaming.Offset end)
void stop()

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait

Methods inherited from interface org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl:
reportLatestOffset
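The methods above are driven by Spark's micro-batch engine in a fixed loop: resolve a start and end offset, plan input partitions for that range, execute the batch, then commit. A minimal sketch of that loop, using simplified stand-in types rather than the actual Spark interfaces (the real stream works against org.apache.spark.sql.connector.read.streaming.*):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins only: a real MicroBatchStream returns Spark's
// Offset and InputPartition types, and Spark's driver runs this loop.
public class MicroBatchLoopSketch {
  // An offset here is just a position in the table's snapshot history.
  record Offset(long position) {}

  static final long LATEST = 5; // pretend 5 positions are available

  static Offset initialOffset() { return new Offset(0); }

  static Offset latestOffset() { return new Offset(LATEST); }

  // One "partition" per position between start (exclusive) and end (inclusive).
  static List<String> planInputPartitions(Offset start, Offset end) {
    List<String> partitions = new ArrayList<>();
    for (long p = start.position() + 1; p <= end.position(); p++) {
      partitions.add("partition-" + p);
    }
    return partitions;
  }

  // Mirrors one iteration of the engine's loop: resolve offsets, plan
  // partitions, execute, then commit(end) once the batch succeeds.
  public static int runOneBatch() {
    Offset start = initialOffset();
    Offset end = latestOffset();
    List<String> partitions = planInputPartitions(start, end);
    return partitions.size();
  }

  public static void main(String[] args) {
    System.out.println(runOneBatch()); // prints 5
  }
}
```

User code never constructs SparkMicroBatchStream directly; the connector instantiates it when a streaming read of an Iceberg table is started.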
Method Details

latestOffset

public org.apache.spark.sql.connector.read.streaming.Offset latestOffset()

Specified by:
latestOffset in interface org.apache.spark.sql.connector.read.streaming.MicroBatchStream
 
planInputPartitions

public org.apache.spark.sql.connector.read.InputPartition[] planInputPartitions(org.apache.spark.sql.connector.read.streaming.Offset start, org.apache.spark.sql.connector.read.streaming.Offset end)

Specified by:
planInputPartitions in interface org.apache.spark.sql.connector.read.streaming.MicroBatchStream
 
createReaderFactory

public org.apache.spark.sql.connector.read.PartitionReaderFactory createReaderFactory()

Specified by:
createReaderFactory in interface org.apache.spark.sql.connector.read.streaming.MicroBatchStream
 
initialOffset

public org.apache.spark.sql.connector.read.streaming.Offset initialOffset()

Specified by:
initialOffset in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
 
deserializeOffset

public org.apache.spark.sql.connector.read.streaming.Offset deserializeOffset(String json)

Specified by:
deserializeOffset in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
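Offsets must survive a serialize/deserialize round trip through the streaming checkpoint log as JSON. A minimal sketch of that round trip with a hypothetical snapshot-based offset (the field names and JSON shape here are invented for illustration, not Iceberg's actual offset format):

```java
// Hypothetical offset holding a snapshot id and a file position, serialized
// to JSON by hand to stay dependency-free; real offset JSON differs.
public class OffsetJsonSketch {
  record SketchOffset(long snapshotId, long position) {
    // Serialize to a fixed JSON shape.
    String json() {
      return "{\"snapshot_id\":" + snapshotId + ",\"position\":" + position + "}";
    }

    // Parse exactly the shape produced by json() above.
    static SketchOffset deserialize(String json) {
      String body = json.replaceAll("[{}\"]", "");
      long snapshotId = 0;
      long position = 0;
      for (String field : body.split(",")) {
        String[] kv = field.split(":");
        if (kv[0].equals("snapshot_id")) snapshotId = Long.parseLong(kv[1]);
        if (kv[0].equals("position")) position = Long.parseLong(kv[1]);
      }
      return new SketchOffset(snapshotId, position);
    }
  }

  public static void main(String[] args) {
    SketchOffset original = new SketchOffset(42L, 7L);
    SketchOffset restored = SketchOffset.deserialize(original.json());
    System.out.println(original.equals(restored)); // prints true
  }
}
```

The round-trip property is what matters: whatever deserializeOffset is given back from the checkpoint must reconstruct an offset equal to the one that was serialized.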
 
commit

public void commit(org.apache.spark.sql.connector.read.streaming.Offset end)

Specified by:
commit in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
 
stop

public void stop()

Specified by:
stop in interface org.apache.spark.sql.connector.read.streaming.SparkDataStream
 
latestOffset

public org.apache.spark.sql.connector.read.streaming.Offset latestOffset(org.apache.spark.sql.connector.read.streaming.Offset startOffset, org.apache.spark.sql.connector.read.streaming.ReadLimit limit)

Specified by:
latestOffset in interface org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl
 
getDefaultReadLimit

public org.apache.spark.sql.connector.read.streaming.ReadLimit getDefaultReadLimit()

Specified by:
getDefaultReadLimit in interface org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl
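SupportsAdmissionControl lets the source cap how much data each micro-batch admits, instead of always reading up to the newest available offset. A sketch of that capping logic with a simplified max-files limit (the real stream consults Spark's ReadLimit implementations; the types below are stand-ins):

```java
// Illustrative admission control: cap the end offset of a micro-batch so it
// never admits more than a fixed number of files past the start offset.
public class AdmissionControlSketch {
  record Offset(long position) {}
  record MaxFilesLimit(long maxFiles) {}

  static final long NEWEST = 100; // pretend 100 files are currently available

  // latestOffset(start, limit): advance from start, but admit at most
  // limit.maxFiles() files into this batch.
  static Offset latestOffset(Offset start, MaxFilesLimit limit) {
    long capped = Math.min(NEWEST, start.position() + limit.maxFiles());
    return new Offset(capped);
  }

  public static void main(String[] args) {
    Offset end = latestOffset(new Offset(0), new MaxFilesLimit(10));
    System.out.println(end.position()); // prints 10
  }
}
```

getDefaultReadLimit supplies the limit used when the query does not configure one; Spark then passes that limit into the two-argument latestOffset overload on every trigger.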
 
 