Package org.apache.iceberg.hadoop
Class HadoopTables
java.lang.Object
org.apache.iceberg.hadoop.HadoopTables
- All Implemented Interfaces:
- org.apache.hadoop.conf.Configurable, Tables
Implementation of Iceberg tables that uses the Hadoop FileSystem to store metadata and manifests.
- 
Field Summary
Fields
- static final String LOCK_PROPERTY_PREFIX
- 
Constructor Summary
Constructors
- HadoopTables()
- HadoopTables(org.apache.hadoop.conf.Configuration conf)
- 
Method Summary
- Catalog.TableBuilder buildTable(String location, Schema schema)
- Table create(Schema schema, PartitionSpec spec, SortOrder order, Map<String, String> properties, String location): Create a table using the FileSystem implementation resolved from location.
- boolean dropTable(String location): Drop a table and delete all data and metadata files.
- boolean dropTable(String location, boolean purge): Drop a table; optionally delete data and metadata files.
- boolean exists(String location): Check whether a table exists at the given location.
- org.apache.hadoop.conf.Configuration getConf()
- Table load(String location): Loads the table location from a FileSystem path location.
- Transaction newCreateTableTransaction(String location, Schema schema, PartitionSpec spec, Map<String, String> properties): Start a transaction to create a table.
- Transaction newReplaceTableTransaction(String location, Schema schema, PartitionSpec spec, Map<String, String> properties, boolean orCreate): Start a transaction to replace a table.
- void setConf(org.apache.hadoop.conf.Configuration conf)
- 
Field Details
- LOCK_PROPERTY_PREFIX
  public static final String LOCK_PROPERTY_PREFIX
  See Also:
  - Constant Field Values
- 
- 
Constructor Details
- HadoopTables
  public HadoopTables()
- 
- HadoopTables
  public HadoopTables(org.apache.hadoop.conf.Configuration conf)
- 
- 
Method Details
- load
  public Table load(String location)
  Loads the table location from a FileSystem path location.
- 
- exists
  public boolean exists(String location)
  Check whether a table exists at the given location.
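As a minimal sketch of checking for and loading a table, assuming an Iceberg runtime on the classpath and a reachable warehouse path (the location below is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;

public class LoadExample {
  public static void main(String[] args) {
    // Placeholder path; substitute a real FileSystem URI for your warehouse.
    String location = "file:///tmp/warehouse/my_table";

    HadoopTables tables = new HadoopTables(new Configuration());

    // Guard the load: load throws if no table metadata exists at the location.
    if (tables.exists(location)) {
      Table table = tables.load(location);
      System.out.println(table.schema());
    }
  }
}
```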
- 
- create
  public Table create(Schema schema, PartitionSpec spec, SortOrder order, Map<String, String> properties, String location)
  Create a table using the FileSystem implementation resolved from location.
  Specified by:
  - create in interface Tables
  Parameters:
  - schema - an Iceberg schema used to create the table
  - spec - a partition spec; if null, the table will be unpartitioned
  - order - a sort order; if null, the table will be unsorted
  - properties - a string map of table properties, initialized to empty if null
  - location - a path URI (e.g. hdfs:///warehouse/my_table)
  Returns:
  - newly created table implementation
 
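A minimal sketch of creating a table with this method, assuming an Iceberg runtime on the classpath; the schema, bucket spec, and local-filesystem location are illustrative placeholders:

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.SortOrder;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.types.Types;

public class CreateExample {
  public static void main(String[] args) {
    // Example schema: a required long id and an optional string payload.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));

    // Partition by a bucket of id; pass null instead for an unpartitioned table.
    PartitionSpec spec = PartitionSpec.builderFor(schema).bucket("id", 16).build();

    HadoopTables tables = new HadoopTables(new Configuration());

    // The location determines which FileSystem implementation is used
    // (file://, hdfs://, etc.); this local path is a placeholder.
    Table table = tables.create(
        schema, spec, SortOrder.unsorted(), Collections.emptyMap(),
        "file:///tmp/warehouse/my_table");
  }
}
```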
- 
- dropTable
  public boolean dropTable(String location)
  Drop a table and delete all data and metadata files.
  Parameters:
  - location - a path URI (e.g. hdfs:///warehouse/my_table)
  Returns:
  - true if the table was dropped; false if it did not exist
 
- 
- dropTable
  public boolean dropTable(String location, boolean purge)
  Drop a table; optionally delete data and metadata files. If purge is true, the implementation deletes all data and metadata files.
  Parameters:
  - location - a path URI (e.g. hdfs:///warehouse/my_table)
  - purge - if true, delete all data and metadata files in the table
  Returns:
  - true if the table was dropped; false if it did not exist
 
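A minimal sketch of the two-argument overload, assuming an Iceberg runtime on the classpath; the location is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.hadoop.HadoopTables;

public class DropExample {
  public static void main(String[] args) {
    HadoopTables tables = new HadoopTables(new Configuration());
    String location = "file:///tmp/warehouse/my_table"; // placeholder path

    // purge=false removes the table but leaves its data and metadata files;
    // purge=true would delete them as well.
    boolean dropped = tables.dropTable(location, false);
    System.out.println("dropped: " + dropped);
  }
}
```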
- 
- newCreateTableTransaction
  public Transaction newCreateTableTransaction(String location, Schema schema, PartitionSpec spec, Map<String, String> properties)
  Start a transaction to create a table.
  Parameters:
  - location - a location for the table
  - schema - a schema
  - spec - a partition spec
  - properties - a string map of table properties
  Returns:
  - a Transaction to create the table
  Throws:
  - AlreadyExistsException - if the table already exists
 
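A minimal sketch of a create-table transaction, assuming an Iceberg runtime on the classpath; the schema and location are illustrative placeholders:

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Transaction;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.types.Types;

public class CreateTransactionExample {
  public static void main(String[] args) {
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()));

    HadoopTables tables = new HadoopTables(new Configuration());
    Transaction txn = tables.newCreateTableTransaction(
        "file:///tmp/warehouse/txn_table",  // placeholder location
        schema,
        PartitionSpec.unpartitioned(),
        Collections.emptyMap());

    // Stage operations against txn.table() here; nothing is visible to
    // readers until the transaction commits.
    txn.commitTransaction();
  }
}
```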
- 
- newReplaceTableTransaction
  public Transaction newReplaceTableTransaction(String location, Schema schema, PartitionSpec spec, Map<String, String> properties, boolean orCreate)
  Start a transaction to replace a table.
  Parameters:
  - location - a location for the table
  - schema - a schema
  - spec - a partition spec
  - properties - a string map of table properties
  - orCreate - whether to create the table if it does not exist
  Returns:
  - a Transaction to replace the table
  Throws:
  - NoSuchTableException - if the table doesn't exist and orCreate is false
 
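A minimal sketch of a replace-table transaction, assuming an Iceberg runtime on the classpath; the schema and location are illustrative placeholders:

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Transaction;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.types.Types;

public class ReplaceTransactionExample {
  public static void main(String[] args) {
    Schema newSchema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "renamed", Types.StringType.get()));

    HadoopTables tables = new HadoopTables(new Configuration());

    // orCreate=true makes the replace succeed whether or not the table
    // already exists; with orCreate=false a missing table would throw
    // NoSuchTableException.
    Transaction txn = tables.newReplaceTableTransaction(
        "file:///tmp/warehouse/txn_table",  // placeholder location
        newSchema,
        PartitionSpec.unpartitioned(),
        Collections.emptyMap(),
        true);
    txn.commitTransaction();
  }
}
```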
- 
- buildTable
  public Catalog.TableBuilder buildTable(String location, Schema schema)
- 
- setConf
  public void setConf(org.apache.hadoop.conf.Configuration conf)
  Specified by:
  - setConf in interface org.apache.hadoop.conf.Configurable
- 
- getConf
  public org.apache.hadoop.conf.Configuration getConf()
  Specified by:
  - getConf in interface org.apache.hadoop.conf.Configurable
 
 