python - Unable to create a Dataproc cluster

I am trying to create a Dataproc cluster, both through Airflow and through the Google Cloud UI, but cluster creation always fails at the end. Below is the Airflow code I am using to create the cluster:

# STEP 1: Libraries needed
from datetime import timedelta, datetime
from airflow import models
from airflow.operators.bash_operator import BashOperator
from airflow.contrib.operators import dataproc_operator
from airflow.utils import trigger_rule
from poc.utils.transform import main
from airflow.contrib.hooks.gcp_dataproc_hook import DataProcHook
from airflow.operators.python_operator import BranchPythonOperator

import os

YESTERDAY = datetime.combine(
    datetime.today() - timedelta(1),
    datetime.min.time())
project_name = os.environ['GCP_PROJECT']

# Can pull in spark code from a gcs bucket
# SPARK_CODE = ('gs://us-central1-cl-composer-tes-fa29d311-bucket/spark_files/transformation.py')
dataproc_job_name = 'spark_job_dataproc'

default_dag_args = {
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'start_date': YESTERDAY,
    'retry_delay': timedelta(minutes=5),
    'project_id': project_name,
    'owner': 'DataProc',
}

with models.DAG(
        'dataproc-poc',
        description='Dag to run a simple dataproc job',
        schedule_interval=timedelta(days=1),
        default_args=default_dag_args) as dag:

    CLUSTER_NAME = 'dataproc-cluster'

    def ensure_cluster_exists(ds, **kwargs):
        cluster = DataProcHook().get_conn().projects().regions().clusters().get(
            projectId=project_name,
            region='us-east1',
            clusterName=CLUSTER_NAME
        ).execute(num_retries=5)
        print(cluster)
        if cluster is None or len(cluster) == 0 or 'clusterName' not in cluster:
            return 'create_dataproc'
        else:
            return 'run_spark'

    # start = BranchPythonOperator(
    #     task_id='start',
    #     provide_context=True,
    #     python_callable=ensure_cluster_exists,
    # )

    print_date = BashOperator(
        task_id='print_date',
        bash_command='date'
    )

    create_dataproc = dataproc_operator.DataprocClusterCreateOperator(
        task_id='create_dataproc',
        cluster_name=CLUSTER_NAME,
        num_workers=2,
        use_if_exists='true',
        zone='us-east1-b',
        master_machine_type='n1-standard-1',
        worker_machine_type='n1-standard-1')

    # Run the PySpark job
    run_spark = dataproc_operator.DataProcPySparkOperator(
        task_id='run_spark',
        main=main,
        cluster_name=CLUSTER_NAME,
        job_name=dataproc_job_name
    )

    # Delete Cloud Dataproc cluster.
    # delete_dataproc = dataproc_operator.DataprocClusterDeleteOperator(
    #     task_id='delete_dataproc',
    #     cluster_name='dataproc-cluster-demo-{{ ds_nodash }}',
    #     trigger_rule=trigger_rule.TriggerRule.ALL_DONE)

    # STEP 6: Set DAGs dependencies
    # Each task should run after have finished the task before.
    print_date >> create_dataproc >> run_spark
    # print_date >> start >> create_dataproc >> run_spark
    # start >> run_spark

I checked the cluster logs and saw the following errors:

  1. Failed to store master key 1
  2. Failed to store master key 2
  3. Initialization failed. Exiting 125 to prevent restart
  4. Failed to start master: Timed out waiting for 2 datanodes and nodemanagers. Operation timed out: Only 0 out of 2 minimum required datanodes running. Operation timed out: Only 0 out of 2 minimum required node managers running.

Best answer

Failed to start master: Timed out waiting for 2 datanodes and nodemanagers. Operation timed out: Only 0 out of 2 minimum required datanodes running. Operation timed out: Only 0 out of 2 minimum required node managers running.

This error indicates that the worker nodes were unable to communicate with the master node. When the workers fail to report to the master within the given time frame, cluster creation fails.
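
To see which rules are currently in effect, you can list the firewall rules attached to the cluster's network. A minimal sketch, assuming the cluster runs on the default network:

gcloud compute firewall-rules list --filter="network:default"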

Please check that you have set up the correct firewall rules to allow communication between the cluster VMs.
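
For example, a rule along the following lines allows full internal traffic between VMs on a default auto-mode network. This is only a sketch: the rule name allow-dataproc-internal is illustrative, and the 10.128.0.0/9 source range must be adjusted to match your own network's subnet CIDR ranges:

gcloud compute firewall-rules create allow-dataproc-internal \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:0-65535,udp:0-65535,icmp \
    --source-ranges=10.128.0.0/9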

You can refer to the networking configuration best practices here: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/network#overview

Regarding "python - Unable to create a Dataproc cluster", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63907419/
