public class SparkUtil
extends java.lang.Object
| Modifier and Type | Field and Description |
|---|---|
| `static java.lang.String` | `TIMESTAMP_WITHOUT_TIMEZONE_ERROR` |
| Modifier and Type | Method and Description |
|---|---|
| `static <C,T> Pair<C,T>` | `catalogAndIdentifier(java.util.List<java.lang.String> nameParts, java.util.function.Function<java.lang.String,C> catalogProvider, java.util.function.BiFunction<java.lang.String[],java.lang.String,T> identiferProvider, C currentCatalog, java.lang.String[] currentNamespace)`<br>A modified version of Spark's `LookupCatalog.CatalogAndIdentifier.unapply`; attempts to find the catalog and identifier that a multipart identifier represents. |
| `static org.apache.hadoop.conf.Configuration` | `hadoopConfCatalogOverrides(org.apache.spark.sql.SparkSession spark, java.lang.String catalogName)`<br>Pulls any catalog-specific overrides for the Hadoop conf from the current SparkSession, which can be set via `spark.sql.catalog.$catalogName.hadoop.*`; mirrors the `spark.hadoop.*` mechanism for overriding Hadoop configurations in a given Spark session. |
| `static boolean` | `hasTimestampWithoutZone(Schema schema)`<br>Checks whether the table schema has a timestamp without timezone column. |
| `static java.util.List<org.apache.spark.sql.catalyst.expressions.Expression>` | `partitionMapToExpression(org.apache.spark.sql.types.StructType schema, java.util.Map<java.lang.String,java.lang.String> filters)`<br>Gets a list of Spark filter `Expression`s. |
| `static FileIO` | `serializableFileIO(Table table)` |
| `static boolean` | `useTimestampWithoutZoneInNewTables(org.apache.spark.sql.RuntimeConfig sessionConf)`<br>Checks whether timestamp types for new tables should be stored with timezone info. |
| `static void` | `validatePartitionTransforms(PartitionSpec spec)`<br>Checks whether the partition transforms in a spec can be used to write data. |
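To illustrate how `catalogAndIdentifier`'s generic provider parameters fit together, the following is a minimal, self-contained sketch of the resolution contract. The class name `CatalogResolutionSketch`, the `resolve` method, and its simplified resolution rule (single-part names resolve against the current catalog and namespace; otherwise the head part is treated as the catalog name) are illustrative assumptions, with `String` standing in for both type parameters `C` and `T`; this is not Iceberg's implementation and does not call `SparkUtil` itself.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

// Hypothetical stand-in for SparkUtil.catalogAndIdentifier's provider contract,
// with C and T both instantiated as String. The resolution rule below is a
// simplified assumption, not Iceberg's actual logic.
class CatalogResolutionSketch {

  // Simplified rule: a single-part name resolves against the current catalog
  // and namespace; a multipart name treats its head as the catalog name.
  static String resolve(
      List<String> nameParts,
      Function<String, String> catalogProvider,
      BiFunction<String[], String, String> identifierProvider,
      String currentCatalog,
      String[] currentNamespace) {
    if (nameParts.size() == 1) {
      return currentCatalog + ": "
          + identifierProvider.apply(currentNamespace, nameParts.get(0));
    }
    String catalog = catalogProvider.apply(nameParts.get(0));
    String[] namespace =
        nameParts.subList(1, nameParts.size() - 1).toArray(new String[0]);
    return catalog + ": "
        + identifierProvider.apply(namespace, nameParts.get(nameParts.size() - 1));
  }

  public static void main(String[] args) {
    // Catalog provider: look the catalog up by name (identity here).
    Function<String, String> catalogProvider = name -> name;
    // Identifier provider: join namespace parts and table name with dots.
    BiFunction<String[], String, String> identifierProvider =
        (ns, name) -> ns.length == 0 ? name : String.join(".", ns) + "." + name;

    System.out.println(resolve(Arrays.asList("tbl"), catalogProvider,
        identifierProvider, "spark_catalog", new String[] {"db"}));
    // prints: spark_catalog: db.tbl
    System.out.println(resolve(Arrays.asList("iceberg", "db", "tbl"),
        catalogProvider, identifierProvider, "spark_catalog", new String[0]));
    // prints: iceberg: db.tbl
  }
}
```

The real method returns a `Pair<C,T>` of catalog and identifier rather than a joined string; the sketch only shows how the two provider callbacks are invoked during resolution.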
public static final java.lang.String TIMESTAMP_WITHOUT_TIMEZONE_ERROR
public static void validatePartitionTransforms(PartitionSpec spec)

Check whether the partition transforms in a spec can be used to write data.

Parameters:
spec - a PartitionSpec

Throws:
java.lang.UnsupportedOperationException - if the spec contains unknown partition transforms

public static <C,T> Pair<C,T> catalogAndIdentifier(java.util.List<java.lang.String> nameParts,
                                                   java.util.function.Function<java.lang.String,C> catalogProvider,
                                                   java.util.function.BiFunction<java.lang.String[],java.lang.String,T> identiferProvider,
                                                   C currentCatalog,
                                                   java.lang.String[] currentNamespace)

A modified version of Spark's LookupCatalog.CatalogAndIdentifier.unapply; attempts to find the catalog and identifier that a multipart identifier represents.

Parameters:
nameParts - Multipart identifier representing a table

public static boolean hasTimestampWithoutZone(Schema schema)

Checks whether the table schema has a timestamp without timezone column.

Parameters:
schema - table schema to check if it contains a timestamp without timezone column

public static boolean useTimestampWithoutZoneInNewTables(org.apache.spark.sql.RuntimeConfig sessionConf)
Checks whether timestamp types for new tables should be stored with timezone info. The default value is false, and all timestamp fields are stored as Types.TimestampType#withZone(). If enabled, all timestamp fields in new tables will be stored as Types.TimestampType#withoutZone().

Parameters:
sessionConf - a Spark runtime config

public static org.apache.hadoop.conf.Configuration hadoopConfCatalogOverrides(org.apache.spark.sql.SparkSession spark,
                                                                              java.lang.String catalogName)
Pulls any catalog-specific overrides for the Hadoop conf from the current SparkSession, which can be set via spark.sql.catalog.$catalogName.hadoop.*; mirrors the spark.hadoop.* mechanism for overriding Hadoop configurations in a given Spark session.

Parameters:
spark - The current Spark session
catalogName - Name of the catalog to find overrides for

public static java.util.List<org.apache.spark.sql.catalyst.expressions.Expression> partitionMapToExpression(org.apache.spark.sql.types.StructType schema,
                                                                                                            java.util.Map<java.lang.String,java.lang.String> filters)

Get a list of Spark filter Expressions.

Parameters:
schema - table schema
filters - filters in the format of a Map, where the key is one of the table's column names and the value is the specific value to be filtered on that column
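As a hedged illustration of the `spark.sql.catalog.$catalogName.hadoop.*` convention read by hadoopConfCatalogOverrides, the configuration fragment below uses a hypothetical catalog name (`my_catalog`) and a hypothetical Hadoop key; only the key-prefix pattern comes from this page.

```
# spark-defaults.conf (illustrative; catalog name and key are assumptions)
# Per-catalog Hadoop override picked up by hadoopConfCatalogOverrides:
spark.sql.catalog.my_catalog.hadoop.fs.s3a.endpoint   http://localhost:9000

# Session-wide equivalent that Spark itself applies to the Hadoop conf:
spark.hadoop.fs.s3a.endpoint                          http://localhost:9000
```

The first form scopes the override to a single named catalog, while the second applies to the whole session, which is the mirroring the method description refers to.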