
hadoop - Hive with Tez: Unable to load AWS credentials from any provider in the chain


Environment: Hadoop 2.7.3, hive-2.2.0-SNAPSHOT, Tez 0.8.4

My core-site.xml:

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider,
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider
  </value>
</property>
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>GOODKEYVALUE</value>
  <description>AWS access key ID. Omit for Role-based authentication.</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRETKEYVALUE</value>
  <description>AWS secret key. Omit for Role-based authentication.</description>
</property>

I can access s3a URIs correctly from the hadoop command line. I can create an external table, and statements like:

create external table mytable(a string, b string) location 's3a://mybucket/myfolder/';  
select * from mytable limit 20;

run correctly, but

select count(*) from mytable; 

fails with:

Error: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1489267689011_0001_1_00, diagnostics=[Vertex vertex_1489267689011_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: url_sum_master initializer failed, vertex=vertex_1489267689011_0001_1_00 [Map 1], com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:131)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1110)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:759)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:723)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:4949)
at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:4923)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4178)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1313)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1270)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:365)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:483)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1489267689011_0001_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1489267689011_0001_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:393)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:250)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:353)

The only way I can get it to work is to embed accesskey:secretkey in the URI itself, which is not an option for production code.
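
For reference, the inline-credentials form that does work looks roughly like this (ACCESSKEY and SECRETKEY are placeholders, and this deprecated URI syntax stores the secret in plain text, which is exactly the problem):

-- Works, but exposes the secret in logs and metastore entries (placeholder keys):
create external table mytable(a string, b string)
location 's3a://ACCESSKEY:SECRETKEY@mybucket/myfolder/';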

Thanks.

Best Answer

You are right: you don't want secrets in the URI. Hadoop will soon start warning you against doing this, and at some point it may refuse it entirely.

Have a look at the Troubleshooting S3A section of the latest s3a docs.

If you are building Hadoop yourself (which your choice of SDK version hints at), build Hadoop 2.8/2.9 and turn on debug logging in the s3a package. There is more security-related logging there, though still deliberately less than you might want, so as to keep those keys secret.
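
A minimal sketch of how to enable that, assuming the stock log4j.properties under HADOOP_CONF_DIR:

# In $HADOOP_CONF_DIR/log4j.properties: debug-level logging for the S3A connector
log4j.logger.org.apache.hadoop.fs.s3a=DEBUG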

You could also try setting the AWS environment variables on the target machines. That doesn't fix the problem, but it can help isolate it.
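
For example (placeholder values taken from the question; these are the standard variables that com.amazonaws.auth.EnvironmentVariableCredentialsProvider reads):

# Set on every machine that runs Tez containers, before the services start
export AWS_ACCESS_KEY_ID=GOODKEYVALUE
export AWS_SECRET_ACCESS_KEY=SECRETKEYVALUE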

For hadoop - Hive with Tez: Unable to load AWS credentials from any provider in the chain, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42742086/
