
kubernetes - Basic networking through Kubernetes Services not working in Minikube


I am running:

  • 3 Services, each in front of a Deployment: MongoDB, Postgres, and a REST server
  • The Mongo and Postgres Services are ClusterIP, but the REST server uses NodePort
  • When I kubectl exec into a pod and get a shell, I can reach Mongo/Postgres, but only by using the Docker network IP addresses
  • When I try to use the Kubernetes Service IP addresses (the ClusterIPs assigned on Minikube), I cannot get through

  • Here are some example commands that show the problem

    Shelling into the pod:
    HOST$ kubectl exec -it my-system-mongo-54b8c75798-lptzq /bin/bash

    Once inside, I connect to Mongo using the Docker network IP:
    MONGO-POD# mongo mongodb://172.17.0.6
    Welcome to the MongoDB shell.
    > exit
    bye

    Now I try using the Kubernetes Service IP (DNS is working, since the name resolves to 10.96.154.36 as shown below):
    MONGO-POD# mongo mongodb://my-system-mongo
    MongoDB shell version v3.6.3
    connecting to: mongodb://my-system-mongo
    2020-01-03T02:39:55.883+0000 W NETWORK [thread1] Failed to connect to 10.96.154.36:27017 after 5000ms milliseconds, giving up.
    2020-01-03T02:39:55.903+0000 E QUERY [thread1] Error: couldn't connect to server my-system-mongo:27017, connection attempt failed :
    connect@src/mongo/shell/mongo.js:251:13
    @(connect):1:6
    exception: connect failed

    ping does not work either:
    MONGO-POD# ping my-system-mongo
    PING my-system-mongo.default.svc.cluster.local (10.96.154.36) 56(84) bytes of data.
    --- my-system-mongo.default.svc.cluster.local ping statistics ---
    112 packets transmitted, 0 received, 100% packet loss, time 125365ms
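
    One check that is not in the transcript above, sketched here rather than actually run: see which pod IPs the Service has actually picked up as its endpoints, and compare them with the pods' own addresses.

    HOST$ kubectl get endpoints my-system-mongo
    HOST$ kubectl get pods -o wide
    HOST$ kubectl describe service my-system-mongo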

    My setup is Minikube 1.6.2 running Kubernetes 1.17, with Helm 3.0.2. Here is my complete (Helm-generated) dry-run YAML:
    NAME: mysystem-1578018793
    LAST DEPLOYED: Thu Jan 2 18:33:13 2020
    NAMESPACE: default
    STATUS: pending-install
    REVISION: 1
    HOOKS:
    ---
    # Source: mysystem/templates/tests/test-connection.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: "my-system-test-connection"
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
      annotations:
        "helm.sh/hook": test-success
    spec:
      containers:
        - name: wget
          image: busybox
          command: ['wget']
          args: ['my-system:']
      restartPolicy: Never
    MANIFEST:
    ---
    # Source: mysystem/templates/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-system-configmap
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
    data:
      _lots_of_key_value_pairs: here-I-shortened-it
    ---
    # Source: mysystem/templates/my-system-mongo-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-system-mongo
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: mongo
    spec:
      type: ClusterIP
      ports:
        - port: 27017
          targetPort: 27017
          protocol: TCP
          name: mongo
      selector:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: mongo
    ---
    # Source: mysystem/templates/my-system-pg-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-system-postgres
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: postgres
    spec:
      type: ClusterIP
      ports:
        - port: 5432
          targetPort: 5432
          protocol: TCP
          name: postgres
      selector:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: postgres
    ---
    # Source: mysystem/templates/my-system-restsrv-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-system-rest-server
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: rest-server
    spec:
      type: NodePort
      ports:
        #- port: 8009
        #  targetPort: 8009
        #  protocol: TCP
        #  name: jpda
        - port: 8080
          targetPort: 8080
          protocol: TCP
          name: http
      selector:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: rest-server
    ---
    # Source: mysystem/templates/my-system-mongo-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-system-mongo
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: mongo
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: mysystem
          app.kubernetes.io/instance: mysystem-1578018793
          app.kubernetes.io/component: mongo
      template:
        metadata:
          labels:
            app.kubernetes.io/name: mysystem
            app.kubernetes.io/instance: mysystem-1578018793
            app.kubernetes.io/component: mongo
        spec:
          imagePullSecrets:
            - name: regcred
          serviceAccountName: default
          securityContext:
            {}
          containers:
            - name: my-system-mongo-pod
              securityContext:
                {}
              image: private.hub.net/my-system-mongo:latest
              imagePullPolicy: Always
              envFrom:
                - configMapRef:
                    name: my-system-configmap
              ports:
                - name: "mongo"
                  containerPort: 27017
                  protocol: TCP
              resources:
                {}
    ---
    # Source: mysystem/templates/my-system-pg-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-system-postgres
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: postgres
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: mysystem
          app.kubernetes.io/instance: mysystem-1578018793
          app.kubernetes.io/component: postgres
      template:
        metadata:
          labels:
            app.kubernetes.io/name: mysystem
            app.kubernetes.io/instance: mysystem-1578018793
            app.kubernetes.io/component: postgres
        spec:
          imagePullSecrets:
            - name: regcred
          serviceAccountName: default
          securityContext:
            {}
          containers:
            - name: mysystem
              securityContext:
                {}
              image: private.hub.net/my-system-pg:latest
              imagePullPolicy: Always
              envFrom:
                - configMapRef:
                    name: my-system-configmap
              ports:
                - name: postgres
                  containerPort: 5432
                  protocol: TCP
              resources:
                {}
    ---
    # Source: mysystem/templates/my-system-restsrv-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-system-rest-server
      labels:
        helm.sh/chart: mysystem-0.1.0
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: rest-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: mysystem
          app.kubernetes.io/instance: mysystem-1578018793
          app.kubernetes.io/component: rest-server
      template:
        metadata:
          labels:
            app.kubernetes.io/name: mysystem
            app.kubernetes.io/instance: mysystem-1578018793
            app.kubernetes.io/component: rest-server
        spec:
          imagePullSecrets:
            - name: regcred
          serviceAccountName: default
          securityContext:
            {}
          containers:
            - name: mysystem
              securityContext:
                {}
              image: private.hub.net/my-system-restsrv:latest
              imagePullPolicy: Always
              envFrom:
                - configMapRef:
                    name: my-system-configmap
              ports:
                - name: rest-server
                  containerPort: 8080
                  protocol: TCP
                #- name: "jpda"
                #  containerPort: 8009
                #  protocol: TCP
              resources:
                {}

    NOTES:
    1. Get the application URL by running these commands:
      export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mysystem,app.kubernetes.io/instance=mysystem-1578018793" -o jsonpath="{.items[0].metadata.name}")
      echo "Visit http://127.0.0.1:8080 to use your application"
      kubectl --namespace default port-forward $POD_NAME 8080:80

    My best theory (partly after working through this) is that kube-proxy is not working correctly in Minikube, but I am not sure how to fix it. When I shell into Minikube and grep the journalctl output for proxy:
    # grep proxy journal.log
    Jan 03 02:16:02 minikube sudo[2780]: docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05808666 -0800 /var/lib/minikube/certs/proxy-client.crt
    Jan 03 02:16:02 minikube sudo[2784]: docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05908666 -0800 /var/lib/minikube/certs/proxy-client.key
    Jan 03 02:16:15 minikube kubelet[2821]: E0103 02:16:15.423027 2821 reflector.go:156] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
    Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503466 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-n78g9" (UniqueName: "kubernetes.io/secret/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy-token-n78g9") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
    Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503965 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-xtables-lock") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
    Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.530948 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-lib-modules") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
    Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.538938 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
    Jan 03 02:16:15 minikube systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/50fbf70b-724a-4b76-af7f-5f4b91735c84/volumes/kubernetes.io~secret/kube-proxy-token-n78g9.
    Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670527 2821 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
    Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670670 2821 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\" (\"50fbf70b-724a-4b76-af7f-5f4b91735c84\")" failed. No retries permitted until 2020-01-03 02:16:17.170632812 +0000 UTC m=+13.192986021 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\") pod \"kube-proxy-pbs6s\" (UID: \"50fbf70b-724a-4b76-af7f-5f4b91735c84\") : failed to sync configmap cache: timed out waiting for the condition"

    While this does show some problems, I am not sure how to act on them or correct them.
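
    To dig further into the kube-proxy theory, the checks I would expect to run next (a sketch only, not output I have yet) are to look at the kube-proxy pod itself and at the NAT rules it should have programmed for the Service's ClusterIP:

    HOST$ kubectl -n kube-system get pods -l k8s-app=kube-proxy
    HOST$ kubectl -n kube-system logs -l k8s-app=kube-proxy
    HOST$ minikube ssh "sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.154.36"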

    Update:

    I found this while browsing the journal:
    # grep conntrack journal.log
    Jan 03 02:16:04 minikube kubelet[2821]: W0103 02:16:04.286682 2821 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.

    Investigating conntrack now, although the Minikube VM has no yum or apt!
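
    As a quick check (a sketch I have not run yet; the paths are guesses), the binary can be looked for from the host without needing a package manager:

    HOST$ minikube ssh "ls -l /usr/sbin/conntrack /sbin/conntrack"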

    Best Answer

    Let's look at the relevant Service:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-system-mongo
    spec:
      ports:
        - port: 27017 # note typo here, see @aviator's answer
          targetPort: 27017
          protocol: TCP
          name: mongo
      selector:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793

    Pay particular attention to the selector:; it will route traffic to any pod that has these two labels. For example, this is a valid target:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-system-postgres
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: mysystem
          app.kubernetes.io/instance: mysystem-1578018793
      template:
        metadata:
          labels:
            app.kubernetes.io/name: mysystem
            app.kubernetes.io/instance: mysystem-1578018793

    Since every pod has the same pair of labels, any Service can send traffic to any pod. Your "MongoDB" Service will not necessarily target the actual MongoDB container. Your Deployment specs have the same problem, and I would not be surprised if the kubectl get pods output is a little confusing too.

    The right answer is to add another label that distinguishes the different parts of the application. The Helm docs recommend

    app.kubernetes.io/component: mongodb

    It needs to appear in the labels of the pod spec embedded in the Deployment, in the matching Deployment selector, and in the matching Service selector; simply setting it on all of the related objects (including the Deployment and Service labels) is fine.
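
    As a sketch of where that label has to land (using the mongo objects as the example; the exact value only has to be consistent across the three places), the same component label goes on the Service selector, the Deployment selector, and the pod template:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-system-mongo
    spec:
      selector:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: mongodb   # 1. Service selector
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-system-mongo
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: mysystem
          app.kubernetes.io/instance: mysystem-1578018793
          app.kubernetes.io/component: mongodb   # 2. Deployment selector
      template:
        metadata:
          labels:
            app.kubernetes.io/name: mysystem
            app.kubernetes.io/instance: mysystem-1578018793
            app.kubernetes.io/component: mongodb   # 3. pod template labels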

    Regarding kubernetes - Basic networking through Kubernetes Services not working in Minikube, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59572721/
