hadoop - Authentication issue with Hadoop KMS

I am new to Hadoop KMS, and I have started KMS along with Hadoop. Now I am trying to run this curl command:

curl -i --header "Accept:application/json" -H "Content-Type:application/json" --user hdfs:hdfs -X GET http://192.168.23.199:16000/kms/v1/keys/names

hdfs is my Hadoop user, and its password is hdfs.

When I run this curl command, I get the following response:

HTTP/1.1 401 Unauthorized
Server: Apache-Coyote/1.1
WWW-Authenticate: PseudoAuth
Set-Cookie: hadoop.auth=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Type: text/html;charset=utf-8
Content-Length: 997

Here is the content of my kms-site.xml:

<configuration>

<property>
<name>hadoop.kms.key.provider.uri</name>
<value>kms://http@192.168.23.109:16000/kms</value>
<description>
URI of the backing KeyProvider for the KMS.
</description>
</property>

<property>
<name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
<value>none</value>
<description>
If using the JavaKeyStoreProvider, the password for the keystore file.
</description>
</property>

<!-- KMS Cache -->
<property>
<name>hadoop.kms.cache.enable</name>
<value>true</value>
<description>
Whether the KMS will act as a cache for the backing KeyProvider.
When the cache is enabled, operations like getKeyVersion, getMetadata,
and getCurrentKey will sometimes return cached data without consulting
the backing KeyProvider. Cached values are flushed when keys are deleted
or modified.
</description>
</property>

<property>
<name>hadoop.kms.cache.timeout.ms</name>
<value>600000</value>
<description>
Expiry time for the KMS key version and key metadata cache, in
milliseconds. This affects getKeyVersion and getMetadata.
</description>
</property>

<property>
<name>hadoop.kms.current.key.cache.timeout.ms</name>
<value>30000</value>
<description>
Expiry time for the KMS current key cache, in milliseconds. This
affects getCurrentKey operations.
</description>
</property>

<!-- KMS Audit -->

<property>
<name>hadoop.kms.audit.aggregation.window.ms</name>
<value>10000</value>
<description>
Duplicate audit log events within the aggregation window (specified in
ms) are quashed to reduce log traffic. A single message for aggregated
events is printed at the end of the window, along with a count of the
number of aggregated events.
</description>
</property>

<!-- KMS Security -->

<property>
<name>hadoop.kms.authentication.type</name>
<value>simple</value>
<description>
Authentication type for the KMS. Can be either &quot;simple&quot;
or &quot;kerberos&quot;.
</description>
</property>

<property>
<name>hadoop.kms.authentication.kerberos.keytab</name>
<value>${user.home}/kms.keytab</value>
<description>
Path to the keytab with credentials for the configured Kerberos principal.
</description>
</property>

<property>
<name>hadoop.kms.authentication.kerberos.principal</name>
<value>HTTP/localhost</value>
<description>
The Kerberos principal to use for the HTTP endpoint.
The principal must start with 'HTTP/' as per the Kerberos HTTP SPNEGO specification.
</description>
</property>

<property>
<name>hadoop.kms.authentication.kerberos.name.rules</name>
<value>DEFAULT</value>
<description>
Rules used to resolve Kerberos principal names.
</description>
</property>
<!-- Authentication cookie signature source -->

<property>
<name>hadoop.kms.authentication.signer.secret.provider</name>
<value>random</value>
<description>
Indicates how the secret to sign the authentication cookies will be
stored. Options are 'random' (default), 'string' and 'zookeeper'.
If using a setup with multiple KMS instances, 'zookeeper' should be used.
</description>
</property>

<!-- Configuration for 'zookeeper' authentication cookie signature source -->

<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.path</name>
<value>/hadoop-kms/hadoop-auth-signature-secret</value>
<description>
The Zookeeper ZNode path where the KMS instances will store and retrieve
the secret from.
</description>
</property>

<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string</name>
<value>192.168.23.199:2181</value>
<description>
The Zookeeper connection string, a list of hostnames and port comma
separated.
</description>
</property>

<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type</name>
<value>none</value>
<description>
The Zookeeper authentication type, 'none' or 'sasl' (Kerberos).
</description>
</property>

<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab</name>
<value>/etc/hadoop/conf/kms.keytab</value>
<description>
The absolute path for the Kerberos keytab with the credentials to
connect to Zookeeper.
</description>
</property>

<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal</name>
<value>kms/#HOSTNAME#</value>
<description>
The Kerberos service principal used to connect to Zookeeper.
</description>
</property>

</configuration>

Best Answer

If you are not using Kerberos, I usually end up passing the user in the curl command itself. With hadoop.kms.authentication.type set to simple, KMS uses Hadoop's pseudo authentication, which takes the caller's identity from the user.name query parameter rather than from HTTP Basic credentials, which is why the --user hdfs:hdfs request above gets a 401 with WWW-Authenticate: PseudoAuth. The curl looks like:

curl -k http://localhost:16000/kms/v1/keys/names?user.name=hdfs
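Applied to the request from the question, that means dropping --user and appending user.name instead (a sketch, assuming the KMS is still listening at 192.168.23.199:16000 as in the question):

curl -i --header "Accept:application/json" "http://192.168.23.199:16000/kms/v1/keys/names?user.name=hdfs"

If the request is accepted, the KMS REST API returns the key names as a JSON array, e.g. ["mykey1","mykey2"].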

Regarding "hadoop - Authentication issue with Hadoop KMS", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37601763/
