
azure - KEDA Service Bus: All the messages getting processed by some pods of keda scaledJob


I am using the Azure Kubernetes Service (AKS) platform with a KEDA "ScaledJob" to run long-running jobs. An Azure Service Bus queue trigger is used to trigger the jobs automatically. When I add messages to the Azure Service Bus queue, KEDA triggers the jobs and creates nodes/pods according to the configuration. But in this case, all the messages end up being picked up and processed by just a few of the pods. The expectation is that each scaled pod processes a single message and then terminates.

Below is my yml file:

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: {{ .Chart.Name }}
spec:
  jobTargetRef:
    backoffLimit: 4
    parallelism: 1
    completions: 1
    activeDeadlineSeconds: 300
    template:
      spec:
        imagePullSecrets:
          - name: {{ .Values.image.imagePullSecrets }}
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirst
        volumes:
          - name: azure
            azureFile:
              shareName: sharenameone
              secretName: secret-sharenameone
              readOnly: true
          - name: one-storage
            emptyDir: {}
          - name: "logging-volume-file"
            persistentVolumeClaim:
              claimName: "azure-file-logging"
        initContainers:
          - name: test-java-init
            image: {{ .Values.global.imageRegistryURI }}/{{ .Values.image.javaInitImage.name}}:{{ .Values.image.javaInitImage.tag }}
            imagePullPolicy: {{ .Values.image.pullPolicy }}
            securityContext:
              readOnlyRootFilesystem: true
            resources:
              requests:
                cpu: 100m
                memory: 300Mi
              limits:
                cpu: 200m
                memory: 400Mi
            volumeMounts:
              - name: azure
                mountPath: /mnt/azure
              - name: one-storage
                mountPath: /certs
        containers:
          - name: {{ .Chart.Name }}
            image: {{ .Values.global.imageRegistryURI }}/tests/{{ .Chart.Name }}:{{ .Values.version }}
            imagePullPolicy: {{ .Values.image.pullPolicy }}
            env:
              {{- include "chart.envVars" . | nindent 14 }}
              - name: JAVA_OPTS
                value: >-
                  {{ .Values.application.javaOpts }}
              - name: application_name
                value: "test_application"
              - name: queueName
                value: "test-queue-name"
              - name: servicebusconnstrenv
                valueFrom:
                  secretKeyRef:
                    name: secrets-service-bus
                    key: service_bus_conn_str
            volumeMounts:
              - name: one-storage
                mountPath: /certs
              - name: "logging-volume-file"
                mountPath: "/mnt/logging"
            resources:
              {{- toYaml .Values.resources | nindent 14 }}
  pollingInterval: 30
  maxReplicaCount: 5
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: "test-queue-name"
        connectionFromEnv: servicebusconnstrenv
        messageCount: "1"

Here is my Azure Function listener:

@FunctionName("TestServiceBusTrigger")
public void TestServiceBusTriggerHandler(
    @ServiceBusQueueTrigger(
            name = "msg",
            queueName = "%TEST_QUEUE_NAME%",
            connection = "ServiceBusConnectionString")
        final String inputMessage,
    final ExecutionContext context) {

  final java.util.logging.Logger contextLogger = context.getLogger();
  System.setProperty("javax.net.ssl.trustStore", "/certs/cacerts");

  try {
    // all the processing goes here
  } catch (Exception e) {
    // Exception handling
  }
}

What configuration needs to be added so that each scaled pod processes a single message and then terminates?

Best Answer

That is not how Azure Functions is designed to work, nor how KEDA is generally used. The more ideal setup is for an already-running container, once configured, to process as many messages as it can.

That said, if your scenario still requires this, you can write a simple script that uses the Azure Service Bus SDK directly: receive just one message, process it, and then terminate, as sketched below.
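For reference, here is a minimal sketch of such a worker built on the azure-messaging-servicebus SDK. The SingleMessageWorker class name and the process() helper are hypothetical placeholders; the connection string and queue name are assumed to come from the servicebusconnstrenv and queueName environment variables defined in the ScaledJob above.

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.models.ServiceBusReceiveMode;

import java.time.Duration;

public class SingleMessageWorker {

    public static void main(String[] args) {
        // Connection details injected by the ScaledJob's pod spec (assumed env var names).
        String connectionString = System.getenv("servicebusconnstrenv");
        String queueName = System.getenv("queueName");

        // Synchronous receiver in PEEK_LOCK mode so the message is only removed
        // from the queue after it has been processed successfully.
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .receiver()
                .queueName(queueName)
                .receiveMode(ServiceBusReceiveMode.PEEK_LOCK)
                .buildClient();

        try {
            // Fetch at most one message, waiting up to 30 seconds for it to arrive.
            for (ServiceBusReceivedMessage message : receiver.receiveMessages(1, Duration.ofSeconds(30))) {
                try {
                    process(message.getBody().toString()); // hypothetical processing step
                    receiver.complete(message);             // remove the message from the queue
                } catch (Exception e) {
                    receiver.abandon(message);              // let another pod retry the message
                }
            }
        } finally {
            receiver.close(); // the process then exits and the Job pod terminates
        }
    }

    private static void process(String body) {
        // application-specific processing goes here
    }
}

With parallelism: 1 and completions: 1 in the jobTargetRef above, each Job that KEDA creates would then handle exactly one message before its pod exits.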

Regarding azure - KEDA Service Bus: All the messages getting processed by some pods of keda scaledJob, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/75976349/
