PXF is compatible with Cloudera, Hortonworks Data Platform, and generic Apache Hadoop distributions. This topic describes how to configure the PXF Hadoop, Hive, and HBase connectors.
Before you configure PXF for access to a secure HDFS filesystem, ensure that you have:
- Configured, initialized, and started PXF as described in [Configuring PXF](instcfg_pxf.html), including enabling PXF and Hadoop user impersonation.
- Enabled Kerberos for your Hadoop cluster per the instructions for your specific distribution and verified the configuration.
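Before proceeding, you can confirm that Kerberos authentication actually works from a Greenplum segment host. The commands below are a sketch only; the principal name and keytab path (`gpadmin@EXAMPLE.COM`, `/etc/security/keytabs/pxf.service.keytab`) are placeholders for your site's values, not paths mandated by PXF:

```shell
# Obtain a ticket using a service keytab (principal and keytab path are examples)
kinit -kt /etc/security/keytabs/pxf.service.keytab gpadmin@EXAMPLE.COM

# List the cached ticket to verify that authentication succeeded
klist

# Confirm the Hadoop client can reach the secure HDFS filesystem with the ticket
hdfs dfs -ls /
```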
Before attempting this exercise, ensure that you have:
- Built the *Demo* connector as described in [Example: Building the Demo Connector JAR File](build_conn.html#demo_buildjar).
- Administrative access to a running Greenplum Database cluster.
- Initialized, configured, and started the PXF agent on each Greenplum Database segment host as described in [Configuring PXF](https://gpdb.docs.pivotal.io/latest/pxf/instcfg_pxf.html).
- Enabled the PXF extension in the database, and optionally granted specific Greenplum Database roles access to the `pxf` protocol; [Enabling/Disabling PXF](https://gpdb.docs.pivotal.io/latest/pxf/using_pxf.html#enable-pxf-ext) and [Granting Access to PXF](https://gpdb.docs.pivotal.io/latest/pxf/using_pxf.html#access_pxf) describe these procedures.
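The last prerequisite can be satisfied with SQL similar to the following, run by a Greenplum superuser. The database and role names (`pxfuser`) are illustrative placeholders; only the statements themselves follow documented Greenplum syntax:

```sql
-- Register the PXF extension in the target database
CREATE EXTENSION pxf;

-- Optionally allow a non-superuser role to use the pxf protocol
GRANT SELECT ON PROTOCOL pxf TO pxfuser;   -- read from PXF external tables
GRANT INSERT ON PROTOCOL pxf TO pxfuser;   -- write to PXF external tables
```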