T - the in-memory data model, which can be either Pig tuples or Hive rows; the default is Iceberg records.

public class IcebergInputFormat<T>
extends org.apache.hadoop.mapreduce.InputFormat<java.lang.Void,T>
| Constructor and Description |
|---|
| IcebergInputFormat() |
| Modifier and Type | Method and Description |
|---|---|
| static InputFormatConfig.ConfigBuilder | configure(org.apache.hadoop.mapreduce.Job job) Configures the Job to use the IcebergInputFormat and returns a helper to add further configuration. |
| org.apache.hadoop.mapreduce.RecordReader<java.lang.Void,T> | createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
| java.util.List<org.apache.hadoop.mapreduce.InputSplit> | getSplits(org.apache.hadoop.mapreduce.JobContext context) |
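Because the input format emits Void keys and values of type T, a map-only consumer is straightforward. The sketch below assumes the default Iceberg Record data model noted above; the mapper class name, the "id" column, and the org.apache.iceberg.data.Record import are illustrative assumptions, not taken from this page.

```java
import java.io.IOException;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.iceberg.data.Record;

// Consumes the (Void, Record) pairs produced by IcebergInputFormat in a map-only job.
public class IcebergRecordMapper extends Mapper<Void, Record, Text, NullWritable> {

  @Override
  protected void map(Void key, Record value, Context context)
      throws IOException, InterruptedException {
    // "id" is a hypothetical column; getField(String) looks a field up by name
    // on the default Iceberg Record model.
    Object id = value.getField("id");
    context.write(new Text(String.valueOf(id)), NullWritable.get());
  }
}
```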
public static InputFormatConfig.ConfigBuilder configure(org.apache.hadoop.mapreduce.Job job)

Configures the Job to use the IcebergInputFormat and returns a helper to add further configuration.

Parameters: job - the Job to configure

public java.util.List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext context)

Specified by: getSplits in class org.apache.hadoop.mapreduce.InputFormat<java.lang.Void,T>

public org.apache.hadoop.mapreduce.RecordReader<java.lang.Void,T> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)

Specified by: createRecordReader in class org.apache.hadoop.mapreduce.InputFormat<java.lang.Void,T>
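A minimal driver sketch showing configure(Job) in use. Only configure(...) itself comes from this page; the builder call readFrom(...), the table location, the output path, and the package names are assumptions for illustration and should be checked against InputFormatConfig.ConfigBuilder in the Iceberg MR module.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.iceberg.mr.InputFormatConfig;
import org.apache.iceberg.mr.mapreduce.IcebergInputFormat;

public class IcebergReadDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "iceberg-read");
    job.setJarByClass(IcebergReadDriver.class);

    // configure(job) wires IcebergInputFormat into the Job and returns a helper
    // for further read configuration (see the configure(Job) detail above).
    InputFormatConfig.ConfigBuilder builder = IcebergInputFormat.configure(job);

    // Assumed builder call and table location, for illustration only; consult
    // InputFormatConfig.ConfigBuilder for the builder methods actually available.
    builder.readFrom("hdfs:///warehouse/db/events");

    job.setMapperClass(IcebergRecordMapper.class);  // mapper sketched earlier
    job.setNumReduceTasks(0);                       // map-only read
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);
    FileOutputFormat.setOutputPath(job, new Path("/tmp/iceberg-read-output"));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```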