python - How to resolve a proxy error when reading from and writing to HDFS with Python?


I have an HDFS cluster that I want to read from and write to using a Python script.

import requests
import json
import os
import kerberos
import sys

node = os.getenv("namenode").split(",")
print (node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print (local_file_path,remote_file_path)

def check_node_status(node):
    for name in node:
        print(name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"%name,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status == "active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status, name, nnaddress

def kerberos_auth(nnaddress):
    __, krb_context = kerberos.authGSSClientInit("HTTP@%s"%nnaddress)
    kerberos.authGSSClientStep(krb_context, "")
    negotiate_details = kerberos.authGSSClientResponse(krb_context)
    headers = {"Authorization": "Negotiate " + negotiate_details,
               "Content-Type": "application/binary"}
    return headers

def kerberos_hdfs_upload(status, name, headers):
    print("running upload function")
    if status == "active":
        print("if function")
        data = open('%s'%local_file_path, 'rb').read()
        write_req = requests.put("%s/webhdfs/v1%s?op=CREATE&overwrite=true"%(name,remote_file_path),
                                 headers=headers,
                                 verify=False,
                                 allow_redirects=True,
                                 data=data)
        print(write_req.text)

def kerberos_hdfs_read(status, name, headers):
    if status == "active":
        read = requests.get("%s/webhdfs/v1%s?op=OPEN"%(name,remote_file_path),
                            headers=headers,
                            verify=False,
                            allow_redirects=True)

        if read.status_code == 200:
            data = open('%s'%local_file_path, 'wb')
            data.write(read.content)
            data.close()
        else:
            print(read.content)


status, name, nnaddress = check_node_status(node)
headers = kerberos_auth(nnaddress)
if read_or_write == "write":
    kerberos_hdfs_upload(status, name, headers)
elif read_or_write == "read":
    print("fun")
    kerberos_hdfs_read(status, name, headers)

The code works on my own machine, which is not behind any proxy. But when I run it on an office machine that sits behind a proxy, it gives the following proxy error:
$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 555, in urlopen
    self._prepare_proxy(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 753, in _prepare_proxy
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 230, in connect
    self._tunnel()
  File "/usr/lib/python3.5/http/client.py", line 832, in _tunnel
    message.strip()))
OSError: Tunnel connection failed: 504 Unknown Host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='<servername>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "python_hdfs.py", line 68, in <module>
    status, name, nnaddress= check_node_status(node)
  File "python_hdfs.py", line 23, in check_node_status
    verify=False).json()
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='<server_name>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))
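
The key line here is "Tunnel connection failed: 504 Unknown Host": the office proxy cannot resolve the internal NameNode hostname, which suggests requests to internal hosts should bypass the proxy rather than go through it. A minimal check, assuming the NameNode hostname is only resolvable inside the office network, is to exclude it via the no_proxy environment variable (which requests honours) before making any calls:

import os

# Hypothetical: list the internal NameNode hosts that must never go through the proxy.
os.environ["no_proxy"] = "<servername>"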

I tried providing the proxy information in the code, like this:
proxies = {
    "http": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
    "https": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
}

node = os.getenv("namenode").split(",")
print (node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print (local_file_path,remote_file_path)

def check_node_status(node):
    for name in node:
        print(name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"%name, proxies=proxies,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status == "active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status, name, nnaddress
### Rest of the code is the same

Now it gives the following error:
$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
  File "python_hdfs.py", line 73, in <module>
    status, name, nnaddress= check_node_status(node)
  File "python_hdfs.py", line 28, in check_node_status
    verify=False).json()
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 343, in send
    conn = self.get_connection(request.url, proxies)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 254, in get_connection
    proxy_manager = self.proxy_manager_for(proxy)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 160, in proxy_manager_for
    **proxy_kwargs)
  File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 281, in proxy_from_url
    return ProxyManager(proxy_url=url, **kw)
  File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 232, in __init__
    raise ProxySchemeUnknown(proxy.scheme)
requests.packages.urllib3.exceptions.ProxySchemeUnknown: Not supported proxy scheme <proxy_username>
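
This second traceback points at the format of the proxies dict rather than the network: the proxy URLs have no scheme, so urllib3 parses everything before the first colon ("<proxy_username>") as the scheme. The dict needs an explicit "http://" prefix (an HTTPS proxy is also addressed over http:// for the CONNECT tunnel), along these lines:

proxies = {
    "http": "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
    "https": "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
}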

So, my question is: do I need to set up the proxy in Kerberos to make this work? If so, how? I am not very familiar with Kerberos. I run kinit before running the Python code in order to enter the Kerberos realm, which works fine and connects to the appropriate HDFS server when there is no proxy. So I don't know why this error occurs when reading from or writing to the same HDFS server. Any help is appreciated.

I have also set the proxy in /etc/apt/apt.conf, like this:
Acquire::http::proxy  "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";
Acquire::https::proxy "https://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";
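
Note that /etc/apt/apt.conf only configures APT itself; requests ignores it and reads the http_proxy/https_proxy (and no_proxy) environment variables instead. A quick way to see what requests would actually use for a given URL (a diagnostic snippet, not part of the original script):

import requests

# Prints the proxy mapping requests derives from the environment for this URL.
print(requests.utils.get_environ_proxies("https://<servername>:50470"))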

I also tried the following:
$ export http_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTP_PROXY="http://<user>:<pass>@<proxy>:<port>"

$ export https_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTPS_PROXY="http://<user>:<pass>@<proxy>:<port>"

import os

proxy = 'http://<user>:<pass>@<proxy>:<port>'

os.environ['http_proxy'] = proxy
os.environ['HTTP_PROXY'] = proxy
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy

#rest of the code is same

But the error persists.
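
Since the NameNode is on the internal network, another way to rule the environment proxies out entirely is a requests Session with trust_env disabled, so no proxy settings are picked up at all (a sketch, not from the original script):

import requests

session = requests.Session()
session.trust_env = False  # ignore http_proxy/https_proxy/no_proxy from the environment

resp = session.get("https://<servername>:50470/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus",
                   verify=False)
print(resp.json())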

Update: I also tried the following.
  • Someone suggested that the proxies in /etc/apt/apt.conf were set so the machine could connect to the network, but that we may not need a proxy to connect to HDFS, so I should try commenting out the proxies in /etc/apt/apt.conf and running the Python script again. I did that:

    $ env | grep proxy
    http_proxy=http://hfli:Test6969@192.168.44.217:8080
    https_proxy=https://hfli:Test6969@192.168.44.217:8080
    $ unset http_proxy
    $ unset https_proxy
    $ env | grep proxy
    $

  • Then I ran the Python script again, (i) without the proxies defined in the Python script, and (ii) with the proxies defined in the Python script. In both cases I got the same original proxy error.
  • I found the following Java program, which is said to be able to access HDFS:

    import com.sun.security.auth.callback.TextCallbackHandler;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import javax.security.auth.Subject;
    import javax.security.auth.login.LoginContext;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class HDFS_RW_Secure
    {
        public static void main(String[] args) throws Exception
        {
            System.setProperty("java.security.auth.login.config", "/tmp/sc3_temp/hadoop_kdc.txt");
            System.setProperty("java.security.krb5.conf", "/tmp/sc3_temp/hadoop_krb.txt");
            Configuration hadoopConf = new Configuration();
            // This example logs in with a password; it can be changed to use a keytab instead.
            LoginContext lc;
            Subject subject;
            lc = new LoginContext("JaasSample", new TextCallbackHandler());
            lc.login();
            System.out.println("login");
            subject = lc.getSubject();
            UserGroupInformation.setConfiguration(hadoopConf);
            UserGroupInformation ugi = UserGroupInformation.getUGIFromSubject(subject);
            UserGroupInformation.setLoginUser(ugi);

            Path pt = new Path("hdfs://edhcluster" + args[0]);

            FileSystem fs = FileSystem.get(hadoopConf);

            // write
            FSDataOutputStream fin = fs.create(pt);
            fin.writeUTF("Hello!");
            fin.close();

            BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(pt)));
            String line;
            line = br.readLine();
            while (line != null)
            {
                System.out.println(line);
                line = br.readLine();
            }
            fs.close();
            System.out.println("This is the end.");
        }
    }

  • We need to obtain its jar file, HDFS.jar, and run the following shell script so that the Java program can access HDFS.
    nano run.sh
    # contents of the run.sh file:
    /tmp/sc3_temp/jre1.8.0_161/bin/java -Djavax.net.ssl.trustStore=/tmp/sc3_temp/cacerts -Djavax.net.ssl.trustStorePassword=changeit -jar /tmp/sc3_temp/HDFS.jar $1

    So I can run this shell script with a path under /user/testuser as the argument, which lets the Java program access HDFS:
    ./run.sh /user/testuser/test2

    This gives the following output:
    Debug is  true storeKey false useTicketCache false useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
    Kerberos username [testuser]: testuser
    Kerberos password for testuser:
    [Krb5LoginModule] user entered username: testuser

    principal is testuser@KRB.REALM
    Commit Succeeded

    login
    2018-02-08 14:09:30,020 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    Hello!
    This is the end.

    So I guess that works. But how do I write an equivalent shell script to run the Python code?
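
    A wrapper for the Python script could follow the same pattern as run.sh above: export the NameNode list the script reads via os.getenv("namenode"), obtain a Kerberos ticket, then invoke the interpreter. A sketch, where the script path and the namenode value are assumptions based on the snippets above:

    #!/bin/bash
    # contents of a hypothetical run_py.sh
    export namenode="https://<servername>:50470,https://<servername>:50470"
    kinit testuser@KRB.REALM   # obtain a Kerberos ticket first (prompts for the password)
    python3 python_hdfs.py "$1" "$2" "$3"

    It would be invoked the same way as the script itself, e.g. ./run_py.sh ./1.png /user/testuser/2018-02-07_1.png write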

Best answer

I found the solution. It turns out I was looking in the wrong place: the user account had been set up incorrectly. I tried something simpler, like downloading a web page onto the server, and noticed it would download the page but could not fix its permissions. Digging further, I found that when the user account was created, it was not assigned the proper ownership. Once I assigned the correct owner to the user account, the proxy error went away. (Alas, so much time wasted.)

I have written it up in more detail here.

About python - How to resolve a proxy error when reading from and writing to HDFS with Python? — we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48676514/
