
azure - Kubernetes: External IP for some services (LoadBalancer) on AKS stays pending


I have a k8s template for deploying a Pod and a Service. I use this template to deploy different services on AKS with different parameters (names, labels).

Some services get their external IP, while for a few of them the external IP stays pending forever:

NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                       AGE
service/ca1st-orgc            LoadBalancer   10.0.25.227    <pending>     7054:30907/TCP                                                17m
service/ca1st-orgc-db-mysql   LoadBalancer   10.0.97.81     52.13.67.9    3306:31151/TCP                                                17m
service/kafka1st              ClusterIP      10.0.15.90     <none>        9092/TCP,9093/TCP                                             17m
service/kafka2nd              ClusterIP      10.0.17.22     <none>        9092/TCP,9093/TCP                                             17m
service/kafka3rd              ClusterIP      10.0.02.07     <none>        9092/TCP,9093/TCP                                             17m
service/kubernetes            ClusterIP      10.0.0.1       <none>        443/TCP                                                       20m
service/orderer1st-orgc       LoadBalancer   10.0.17.19     <pending>     7050:30971/TCP                                                17m
service/orderer2nd-orgc       LoadBalancer   10.0.02.15     13.06.27.31   7050:31830/TCP                                                17m
service/peer1st-orga          LoadBalancer   10.0.10.19     <pending>     7051:31402/TCP,7052:32368/TCP,7053:31786/TCP,5984:30721/TCP   17m
service/peer1st-orgb          LoadBalancer   10.0.218.48    13.06.25.13   7051:31892/TCP,7052:30326/TCP,7053:31419/TCP,5984:31882/TCP   17m
service/peer2nd-orga          LoadBalancer   10.0.86.64     <pending>     7051:30590/TCP,7052:31870/TCP,7053:30362/TCP,5984:30036/TCP   17m
service/peer2nd-orgb          LoadBalancer   10.0.195.212   52.13.58.3    7051:30476/TCP,7052:30091/TCP,7053:30099/TCP,5984:32614/TCP   17m
service/zookeeper1st          ClusterIP      10.0.57.192    <none>        2888/TCP,3888/TCP,2181/TCP                                    17m
service/zookeeper2nd          ClusterIP      10.0.174.25    <none>        2888/TCP,3888/TCP,2181/TCP                                    17m
service/zookeeper3rd          ClusterIP      10.0.210.166   <none>        2888/TCP,3888/TCP,2181/TCP                                    17m

The interesting thing is that all of these related services are deployed from the same template. For example, every service prefixed with peer is deployed from the same template.

Has anyone run into this before?

Deployment template for an orderer Pod

apiVersion: v1
kind: Pod
metadata:
  name: {{ orderer.name }}
  labels:
    k8s-app: {{ orderer.name }}
    type: orderer
{% if (project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %}
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /metrics
    prometheus.io/port: '8443'
    prometheus.io/scheme: 'http'
{% endif %}
spec:
{% if creds %}
  imagePullSecrets:
    - name: regcred
{% endif %}
  restartPolicy: OnFailure
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: fabriccerts
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: type
                  operator: In
                  values:
                    - orderer
            topologyKey: kubernetes.io/hostname
  containers:
    - name: {{ orderer.name }}
      image: {{ fabric.repo.url }}fabric-orderer:{{ fabric.baseimage_tag }}
{% if 'latest' in project_version or 'stable' in project_version %}
      imagePullPolicy: Always
{% else %}
      imagePullPolicy: IfNotPresent
{% endif %}
      env:
{% if project_version is version('1.3.0','<') %}
        - { name: "ORDERER_GENERAL_LOGLEVEL", value: "{{ fabric.logging_level | default('ERROR') | lower }}" }
{% elif project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version %}
        - { name: "FABRIC_LOGGING_SPEC", value: "{{ fabric.logging_level | default('ERROR') | lower }}" }
{% endif %}
        - { name: "ORDERER_GENERAL_LISTENADDRESS", value: "0.0.0.0" }
        - { name: "ORDERER_GENERAL_GENESISMETHOD", value: "file" }
        - { name: "ORDERER_GENERAL_GENESISFILE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/genesis.block" }
        - { name: "ORDERER_GENERAL_LOCALMSPID", value: "{{ orderer.org }}" }
        - { name: "ORDERER_GENERAL_LOCALMSPDIR", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/msp" }
        - { name: "ORDERER_GENERAL_TLS_ENABLED", value: "{{ tls | lower }}" }
{% if tls %}
        - { name: "ORDERER_GENERAL_TLS_PRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" }
        - { name: "ORDERER_GENERAL_TLS_CERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" }
        - { name: "ORDERER_GENERAL_TLS_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" }
{% endif %}
{% if (project_version is version_compare('2.0.0','>=') or ('stable' in project_version or 'latest' in project_version)) and fabric.consensus_type is defined and fabric.consensus_type == 'etcdraft' %}
        - { name: "ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" }
        - { name: "ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" }
        - { name: "ORDERER_GENERAL_CLUSTER_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" }
{% elif fabric.consensus_type | default('kafka') == 'kafka' %}
        - { name: "ORDERER_KAFKA_RETRY_SHORTINTERVAL", value: "1s" }
        - { name: "ORDERER_KAFKA_RETRY_SHORTTOTAL", value: "30s" }
        - { name: "ORDERER_KAFKA_VERBOSE", value: "true" }
{% endif %}
{% if mutualtls %}
{% if project_version is version('1.1.0','>=') or 'stable' in project_version or 'latest' in project_version %}
        - { name: "ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED", value: "true" }
{% else %}
        - { name: "ORDERER_GENERAL_TLS_CLIENTAUTHENABLED", value: "true" }
{% endif %}
        - { name: "ORDERER_GENERAL_TLS_CLIENTROOTCAS", value: "[{{ rootca | list | join (", ")}}]" }
{% endif %}
{% if (project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %}
        - { name: "ORDERER_OPERATIONS_LISTENADDRESS", value: ":8443" }
        - { name: "ORDERER_OPERATIONS_TLS_ENABLED", value: "false" }
        - { name: "ORDERER_METRICS_PROVIDER", value: "prometheus" }
{% endif %}
{% if fabric.orderersettings is defined and fabric.orderersettings.ordererenv is defined %}
{% for pkey, pvalue in fabric.orderersettings.ordererenv.items() %}
        - { name: "{{ pkey }}", value: "{{ pvalue }}" }
{% endfor %}
{% endif %}
{% include './resource.j2' %}
      volumeMounts:
        - { mountPath: "/etc/hyperledger/fabric/artifacts", name: "task-pv-storage" }
      command: ["orderer"]

Service template for the LoadBalancer

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: {{ orderer.name }}
  name: {{ orderer.name }}
spec:
  selector:
    k8s-app: {{ orderer.name }}
{% if fabric.k8s.exposeserviceport %}
  type: LoadBalancer
{% endif %}
  ports:
    - name: port1
      port: 7050
{% if fabric.metrics is defined and fabric.metrics %}
    - name: scrapeport
      port: 8443
{% endif %}
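
Rendered for the same illustrative orderer (assuming fabric.k8s.exposeserviceport is true and metrics are enabled), the Service comes out roughly as:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: orderer1st-orgc
  name: orderer1st-orgc
spec:
  selector:
    k8s-app: orderer1st-orgc
  type: LoadBalancer
  ports:
    - name: port1
      port: 7050
    - name: scrapeport
      port: 8443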

Interestingly, I don't see any Events (when running, e.g., kubectl describe service orderer1st-orgc) for the services that haven't got their External-IP:

Session Affinity:          None
External Traffic Policy:   Cluster
Events:                    <none>
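
Since the Service itself shows no Events, here is a rough checklist of other places I can look (resource group and cluster names below are placeholders, not my actual ones):

# Watch whether the external IP ever gets assigned
kubectl get service orderer1st-orgc -o wide -w

# Check events across the whole namespace, not just on the Service object
kubectl get events --sort-by=.metadata.creationTimestamp

# Inspect the load balancer and public IPs that AKS manages in the node
# resource group (usually named MC_<resourceGroup>_<clusterName>_<region>)
az network lb list --resource-group MC_myRG_myAKS_westus -o table
az network public-ip list --resource-group MC_myRG_myAKS_westus -o table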

Please share your thoughts.

Best Answer

Something was wrong with my cluster. I'm not sure what it was, but the same set of LoadBalancer services never got their public IPs, no matter how many times I cleaned up all the PVCs, services and pods. I deleted the cluster and created a new one, and in the new cluster everything runs as expected.

All the LoadBalancers get their public IPs.
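
For anyone who wants to do the same, deleting and recreating the AKS cluster can be done with the standard Azure CLI commands, roughly like this (resource group, cluster name and node count are placeholders, not my actual values):

az aks delete --resource-group myRG --name myAKS

az aks create \
  --resource-group myRG \
  --name myAKS \
  --node-count 3 \
  --generate-ssh-keys

az aks get-credentials --resource-group myRG --name myAKS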

Original question on Stack Overflow: https://stackoverflow.com/questions/55980984/
