
python - How to enable python libraries on EMR core nodes for an EMR Spark application step


I am trying to run an EMR step (1 master node and 2 core nodes) with a very simple python script that I uploaded to S3 for use as an EMR Spark application step. The script reads the data.txt file from S3 and saves it back, as shown below,

from pyspark import SparkContext  # SparkContext must be imported explicitly; 'import pyspark' alone does not expose it
import boto3  # importing boto3 is what makes the step fail on the cluster

# read data.txt from S3 and write it back as a single output file
sc = SparkContext()
text_file = sc.textFile('s3://First_bucket/data.txt')
text_file.repartition(1).saveAsTextFile('s3://First_bucket/logdata')
sc.stop()
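
For reference, the step is submitted through command-runner.jar and spark-submit (as the log below shows); a minimal boto3 sketch of adding such a step, where the cluster id is only a placeholder:

import boto3

emr = boto3.client('emr', region_name='eu-central-1')

# add the uploaded data.py as a Spark application step on an existing cluster
emr.add_job_flow_steps(
    JobFlowId='j-XXXXXXXXXXXXX',  # placeholder cluster id
    Steps=[{
        'Name': 'run data.py',
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['spark-submit', '--deploy-mode', 'cluster',
                     's3://First_bucket/data.py'],
        },
    }],
)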

However, this simple script causes no error as long as import boto3 is not used. To solve the boto3 problem, I tried adding a bootstrap action with a boto.sh file when creating the EMR cluster. The boto.sh file I used is shown below,
#!/bin/bash
# install pip for python 3.6 and put boto3 on Spark's python path
sudo easy_install-3.6 pip
sudo pip install --target /usr/lib/spark/python/ boto3

Unfortunately, this enabled the boto3 library only on the master node, not on the core nodes. The EMR step running the script failed again, and the error log file is:
2020-02-08T20:56:49.698Z INFO Ensure step 4 jar file command-runner.jar
2020-02-08T20:56:49.699Z INFO StepRunner: Created Runner for step 4
INFO startExec 'hadoop jar /var/lib/aws/emr/step-runner/hadoop-jars/command-runner.jar spark-submit --deploy-mode cluster s3://First_bucket/data.py'
INFO Environment:
PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/sbin:/opt/aws/bin
LESS_TERMCAP_md=[01;38;5;208m
LESS_TERMCAP_me=[0m
HISTCONTROL=ignoredups
LESS_TERMCAP_mb=[01;31m
AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
UPSTART_JOB=rc
LESS_TERMCAP_se=[0m
HISTSIZE=1000
HADOOP_ROOT_LOGGER=INFO,DRFA
JAVA_HOME=/etc/alternatives/jre
AWS_DEFAULT_REGION=eu-central-1
AWS_ELB_HOME=/opt/aws/apitools/elb
LESS_TERMCAP_us=[04;38;5;111m
EC2_HOME=/opt/aws/apitools/ec2
TERM=linux
runlevel=3
LANG=en_US.UTF-8
AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
MAIL=/var/spool/mail/hadoop
LESS_TERMCAP_ue=[0m
LOGNAME=hadoop
PWD=/
LANGSH_SOURCED=1
HADOOP_CLIENT_OPTS=-Djava.io.tmpdir=/mnt/var/lib/hadoop/steps/s-2V51S7I25TLLW/tmp
_=/etc/alternatives/jre/bin/java
CONSOLETYPE=serial
RUNLEVEL=3
LESSOPEN=||/usr/bin/lesspipe.sh %s
previous=N
UPSTART_EVENTS=runlevel
AWS_PATH=/opt/aws
USER=hadoop
UPSTART_INSTANCE=
PREVLEVEL=N
HADOOP_LOGFILE=syslog
PYTHON_INSTALL_LAYOUT=amzn
HOSTNAME=ip-***-***-***-***
HADOOP_LOG_DIR=/mnt/var/log/hadoop/steps/s-2V51S7I25TLLW
EC2_AMITOOL_HOME=/opt/aws/amitools/ec2
EMR_STEP_ID=s-2V51S7I25TLLW
SHLVL=5
HOME=/home/hadoop
HADOOP_IDENT_STRING=hadoop
INFO redirectOutput to /mnt/var/log/hadoop/steps/s-2V51S7I25TLLW/stdout
INFO redirectError to /mnt/var/log/hadoop/steps/s-2V51S7I25TLLW/stderr
INFO Working dir /mnt/var/lib/hadoop/steps/s-2V51S7I25TLLW
INFO ProcessRunner started child process 22893
2020-02-08T20:56:49.705Z INFO HadoopJarStepRunner.Runner: startRun() called for s-2V51S7I25TLLW Child Pid: 22893
INFO Synchronously wait child process to complete : hadoop jar /var/lib/aws/emr/step-runner/hadoop-...
INFO waitProcessCompletion ended with exit code 1 : hadoop jar /var/lib/aws/emr/step-runner/hadoop-...
INFO total process run time: 26 seconds
2020-02-08T20:57:15.787Z INFO Step created jobs:
2020-02-08T20:57:15.787Z WARN Step failed with exitCode 1 and took 26 seconds

My question is how to run an EMR Spark application step with a python script that uses libraries such as boto3. Thanks in advance.

Best Answer

The answer is bootstrap actions.

If you add a bootstrap action [1] when you create the cluster, the boto3 package will be installed on every node. Otherwise, on an already running cluster, you have to install boto3 on all nodes manually, either by connecting to each node or with a tool such as Chef or Ansible.

The bootstrap action would be something like:

sudo pip-3.6 install boto3 

or
sudo pip install boto3 

From the documentation: bootstrap actions run before Amazon EMR installs the applications that you specify when you create the cluster, and before the cluster nodes begin processing data.

The logs of the bootstrap action run will be located in '/mnt/var/log/bootstrap-actions' on all nodes.
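
A minimal boto3 sketch of attaching such a bootstrap action when the cluster is created; the release label, instance types, IAM roles and script path are assumptions, not values taken from the question:

import boto3

emr = boto3.client('emr', region_name='eu-central-1')

emr.run_job_flow(
    Name='spark-with-boto3',
    ReleaseLabel='emr-5.29.0',          # assumed release label
    Applications=[{'Name': 'Spark'}],
    Instances={
        'MasterInstanceType': 'm5.xlarge',   # assumed instance types
        'SlaveInstanceType': 'm5.xlarge',
        'InstanceCount': 3,                  # 1 master + 2 core nodes
        'KeepJobFlowAliveWhenNoSteps': True,
    },
    # runs on every node (master and core) before any step executes
    BootstrapActions=[{
        'Name': 'install boto3',
        'ScriptBootstrapAction': {'Path': 's3://First_bucket/boto.sh'},  # assumed script location
    }],
    JobFlowRole='EMR_EC2_DefaultRole',   # assumed default roles
    ServiceRole='EMR_DefaultRole',
)

Because the bootstrap action runs on every node, boto3 ends up on the core nodes as well, so the spark-submit step no longer fails on the import.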

[1] - Create bootstrap actions to install additional software - https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html

Regarding python - how to enable python libraries on EMR core nodes for an EMR Spark application step, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60131607/
