
linux - Ambari blueprint installation (HDP) using local repositories


I have created internal repositories for HDP and HDP-UTILS-1.1.0.21 and registered their mappings as follows:

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://ambari-server-hostname:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/redhat7/repositories/HDP-2.6 -d @repo.json

payload:

{
  "Repositories" : {
    "base_url" : "http://ip-address/repo/HDP/centos7/2.6.3.0-235",
    "verify_base_url" : true
  }
}

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/redhat7/repositories/HDP-UTILS-1.1.0.21 -d @hdputils.json

payload:

{
  "Repositories" : {
    "base_url" : "http://ip-address/repo/HDP_UTILS",
    "verify_base_url" : true
  }
}
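
As a quick sanity check (the hostname and admin credentials are the same placeholders used in the PUT calls above), the registered values can be read back with a GET against the same endpoints; the response should show the local base_url rather than the Hortonworks public one:

curl -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/redhat7/repositories/HDP-2.6

curl -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/redhat7/repositories/HDP-UTILS-1.1.0.21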

However, during the installation the components pull from the public repositories instead of the local repositories I registered via the REST API. The installation log is below:

    2017-11-27 17:00:33,237 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-27 17:00:33,243 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-27 17:00:33,244 - Group['hdfs'] {}
2017-11-27 17:00:33,244 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2017-11-27 17:00:33,245 - FS Type:
2017-11-27 17:00:33,245 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-27 17:00:33,245 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-27 17:00:33,264 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-11-27 17:00:33,310 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': InlineTemplate(...)}
2017-11-27 17:00:33,311 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because it doesn't exist
2017-11-27 17:00:33,312 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-11-27 17:00:33,315 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-27 17:00:33,315 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-11-27 17:00:33,316 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-27 17:00:33,530 - Skipping installation of existing package unzip
2017-11-27 17:00:33,531 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-27 17:00:33,551 - Skipping installation of existing package curl
2017-11-27 17:00:33,551 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-27 17:00:33,571 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2017-11-27 17:00:35,393 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. Error downloading packages:
hdp-select-2.6.3.0-235.noarch: [Errno 256] No more mirrors to try.
2017-11-27 17:00:35,393 - Failed to install package hdp-select. Executing '/usr/bin/yum clean metadata'
2017-11-27 17:00:35,653 - Retrying to install package hdp-select after 30 seconds
2017-11-27 17:01:10,397 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-11-27 17:01:10,406 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-27 17:01:10,407 - Group['hdfs'] {}
2017-11-27 17:01:10,408 - Group['hadoop'] {}
2017-11-27 17:01:10,408 - Group['users'] {}
2017-11-27 17:01:10,409 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-11-27 17:01:10,410 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-11-27 17:01:10,411 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2017-11-27 17:01:10,412 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-11-27 17:01:10,413 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-11-27 17:01:10,414 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-27 17:01:10,415 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-27 17:01:10,420 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-27 17:01:10,421 - Group['hdfs'] {}
2017-11-27 17:01:10,421 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2017-11-27 17:01:10,422 - FS Type:
2017-11-27 17:01:10,422 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-27 17:01:10,423 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
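
Judging from the Repository[...] and File['/etc/yum.repos.d/ambari-hdp-1.repo'] lines above, the agent writes the public public-repo-1.hortonworks.com baseurls into the repo file instead of the local ones. On a failing host this can be confirmed directly, for example with:

cat /etc/yum.repos.d/ambari-hdp-1.repo
yum repolist enabled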

How can I force the use of the local repositories when performing the cluster installation via a blueprint?

Thanks, Sampath

Best Answer

Try restarting the Ambari server.
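
For reference, assuming Ambari was installed from the standard packages so that the ambari-server command is available on the server host:

ambari-server restart

After the restart, re-reading the repository URLs with the GET calls shown in the question should confirm that the local base_url values are still registered before retrying the blueprint installation.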

Answer from the community!

Regarding "linux - Ambari blueprint installation (HDP) using local repositories", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47510547/
