
kubernetes - VerneMQ on a Kubernetes cluster


I am trying to install VerneMQ on a Kubernetes cluster on Oracle OCI, using the Helm chart.

The Kubernetes infrastructure itself seems to be up and running, and I can deploy my custom microservices without any problem.

I am following the instructions at https://github.com/vernemq/docker-vernemq

The steps are:

  • helm install --name="broker" ./ (run from the helm/vernemq directory)

The output is:

NAME:   broker
LAST DEPLOYED: Fri Mar 1 11:07:37 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/RoleBinding
NAME            AGE
broker-vernemq  1s

==> v1/Service
NAME                     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
broker-vernemq-headless  ClusterIP  None          <none>       4369/TCP  1s
broker-vernemq           ClusterIP  10.96.120.32  <none>       1883/TCP  1s

==> v1/StatefulSet
NAME            DESIRED  CURRENT  AGE
broker-vernemq  3        1        1s

==> v1/Pod(related)
NAME              READY  STATUS             RESTARTS  AGE
broker-vernemq-0  0/1    ContainerCreating  0         1s

==> v1/ServiceAccount
NAME            SECRETS  AGE
broker-vernemq  1        1s

==> v1/Role
NAME            AGE
broker-vernemq  1s


NOTES:
1. Check your VerneMQ cluster status:
kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show

2. Get VerneMQ MQTT port
echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883"
kubectl port-forward svc/broker-vernemq 1883:1883
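
For reference, once the port-forward is up I would exercise the broker roughly like this (a sketch assuming the mosquitto command-line clients are installed locally; the -u/-P credentials are placeholders and only needed when anonymous access is disabled):

# terminal 1: forward the MQTT port of the broker service
kubectl port-forward svc/broker-vernemq 1883:1883

# terminal 2: subscribe, then publish a test message
mosquitto_sub -h 127.0.0.1 -p 1883 -t test/topic -u myuser -P mypassword &
mosquitto_pub -h 127.0.0.1 -p 1883 -t test/topic -u myuser -P mypassword -m "hello"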

But when I run the check

kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show

I get

Node 'VerneMQ@broker-vernemq-0..default.svc.cluster.local' not responding to pings.
command terminated with exit code 1

I think something is wrong with the subdomain (there is nothing between the two dots).
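
If discovery worked, I would expect the node name to contain the headless service name, i.e. something like broker-vernemq-0.broker-vernemq-headless.default.svc.cluster.local. That DNS record can be checked directly (a sketch using a throwaway busybox pod, as in the Kubernetes DNS-debugging docs):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup broker-vernemq-0.broker-vernemq-headless.default.svc.cluster.local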

Running this command

kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns

the last log line is

I0301 10:07:38.366826       1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.
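
That log line suggests the headless Service has no endpoints yet, presumably because the broker pod is not (yet) Ready. Both can be checked with plain kubectl (names as in the helm output above):

kubectl get endpoints broker-vernemq-headless --namespace default
kubectl describe pod broker-vernemq-0 --namespace default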

I have also tried this custom YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: vernemq
  labels:
    app: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        env:
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
          value: "off"
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
        - name: vernemq-passwd
          mountPath: /etc/vernemq-passwd
          readOnly: true
      volumes:
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts
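
For reference, a vernemq-passwd Secret of the shape referenced above can be created along these lines (a sketch; myuser is a placeholder username and vmq-passwd is VerneMQ's password-file tool):

# create a password file and add a user (prompts for the password)
vmq-passwd -c vmq.passwd myuser
# store it as the Secret mounted by the StatefulSet
kubectl create secret generic vernemq-passwd --from-file=vmq.passwd --namespace default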

Any suggestions?

Many thanks,
jack

Best answer

This seems to be a bug in the Docker image. The advice on GitHub is to build the image yourself, or to use a later VerneMQ image (after 1.6.x) in which it is already fixed.

The advice is mentioned here: https://github.com/vernemq/docker-vernemq/pull/92

The pull request that probably fixes it: https://github.com/vernemq/docker-vernemq/pull/97

Edit:

I only got it working without Helm, with kubectl create -f ./cluster.yaml, using the following cluster.yaml:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
  namespace: default
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        ports:
        - containerPort: 1883
          name: mqttlb
        - containerPort: 1883
          name: mqtt
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
          value: "1"
        # only allow anonymous access for development / testing purposes!
        # - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
        #   value: "on"
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
  - port: 44053
    name: vmq
---
apiVersion: v1
kind: Service
metadata:
  name: mqttlb
  labels:
    app: mqttlb
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqttlb
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: NodePort
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["endpoints", "deployments", "replicasets", "pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
subjects:
- kind: ServiceAccount
  name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader

It takes a few seconds for the pods to become ready.
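
Once they are ready, the cluster state can be verified with something like this (a sketch; the pod name follows the StatefulSet naming convention vernemq-0, and the same vmq-admin check from the question applies):

kubectl get pods -l app=vernemq --namespace default
kubectl exec --namespace default vernemq-0 -- vmq-admin cluster show

All three VerneMQ nodes should be listed as part of the cluster.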

Regarding kubernetes - VerneMQ on a Kubernetes cluster, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/54942723/
