
Db2 on OpenShift

Reposted · Author: 行者123 · Updated: 2023-12-04 09:31:51

I am new to OpenShift and am trying to deploy IBM Db2 on OCP,
following https://github.com/IBM/charts/tree/master/stable/ibm-db2.
But once deployed, the pods stay in Pending forever, showing this error:

Events:

Type     Reason            Age                From               Message
----     ------            ----               ----               -------
Warning  FailedScheduling  8s (x65 over 95m)  default-scheduler  0/5 nodes are available: 5 node(s) didn't match node selector.
I get the same error even if I set dedicated to false (a parameter described under the Configuration section of the README.md at the GitHub link above), or if I label the nodes (3 masters, 2 workers) with icp4data=icp4data. Both attempts are sketched below.
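For reference, a minimal sketch of those two attempts (Helm 2 / Tiller syntax, since the pod labels below show heritage: Tiller; the chart path is an assumption, and dedicated is the README parameter mentioned above):

# attempt 1: deploy with the dedicated flag disabled
helm install ibm-db2 --name db2u-release-2 --namespace db2-spm --set dedicated=false

# attempt 2: label every node and deploy again (node names as in the oc get nodes output below)
oc label node ip-10-0-51-114.eu-west-2.compute.internal icp4data=icp4data --overwrite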
Here is the output of "oc get po -o yaml db2u-release-2-db2u-engn-update-job-8k7h8" for one of the pods (8 in total):
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: db2oltp-scc
    productID: 5737-K75
    productName: Db2 Community Edition
    productVersion: 11.5.4.0
  creationTimestamp: "2020-07-09T12:27:00Z"
  generateName: db2u-release-2-db2u-engn-update-job-
  labels:
    app: db2u-release-2
    chart: ibm-db2
    controller-uid: 4631c618-9904-4978-b5c9-43edd827ce9e
    heritage: Tiller
    icpdsupport/app: db2u-release-2
    icpdsupport/serviceInstanceId: db2u-relea-ibm-db2
    job-name: db2u-release-2-db2u-engn-update-job
    release: db2u-release-2
  name: db2u-release-2-db2u-engn-update-job-8k7h8
  namespace: db2-spm
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: db2u-release-2-db2u-engn-update-job
    uid: 4631c618-9904-4978-b5c9-43edd827ce9e
  resourceVersion: "2173118"
  selfLink: /api/v1/namespaces/db2-spm/pods/db2u-release-2-db2u-engn-update-job-8k7h8
  uid: 09ef3921-63e0-4761-83fe-8ad5986c59d4
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: beta.kubernetes.io/arch
            operator: In
            values:
            - unknown
  containers:
  - command:
    - /bin/sh
    - -c
    - "DETERMINATION_FILE=/mnt/blumeta0/nodeslist\nCAT_NODE=$(head -1 $DETERMINATION_FILE)\ncmd=\"\"\nupdt_upgrd_opt=\"-all\"\nhadr_enabled=\"false\"\nRC=0\n\nkubectl
      exec -it -n db2-spm ${CAT_NODE?} -- bash -c '[[ -f /mnt/blumeta0/vrmf/change.out
      ]] || exit 0; exit $(cat /mnt/blumeta0/vrmf/change.out)' 2>/dev/null\nvrmf_chk=$?\necho
      \"VRMF check status bit: ${vrmf_chk}\"\n\n# If HADR is enabled dont run the
      DB update/upgrade scripts. This will be handled\n# by external mechanics to
      work around rolling updates.\nkubectl exec -it -n db2-spm ${CAT_NODE?} -- bash
      -c 'grep -qE \"^HADR_ENABLED.*true\" /mnt/blumeta0/configmap/hadr/*' 2>/dev/null\n[[
      $? -eq 0 ]] && hadr_enabled=\"true\"\n[[ \"${hadr_enabled}\" == \"true\" ]]
      && updt_upgrd_opt=\"-inst\"\n\n# Check VRMF change bit and execute Db2 update
      or upgrade process\nif [[ $vrmf_chk -ne 0 ]]; then\n if [[ $vrmf_chk -eq
      1 ]]; then\n echo \"Running the Db2 engine update script ...\"\n cmd=\"su
      - db2inst1 -c '/db2u/scripts/db2u_update.sh ${updt_upgrd_opt}'\"\n elif [[
      $vrmf_chk -eq 2 ]]; then\n echo \"Running the Db2 engine upgrade script
      ...\"\n cmd=\"su - db2inst1 -c '/db2u/scripts/db2u_upgrade.sh ${updt_upgrd_opt}'\"\n
      \ fi\n [[ -n \"$cmd\" ]] && kubectl exec -it -n db2-spm ${CAT_NODE?} --
      bash -c \"$cmd\"\n RC=$?\n [[ $RC -ne 0 ]] && exit $RC\n\n # If HADR
      is enabled, dont start Woliverine HA\n [[ \"${hadr_enabled}\" == \"true\"
      ]] && exit $RC\n\n # For all other Db2 engine update/upgrade scenarios, start
      Woliverine HA on all Db2U PODs now\n echo \"Starting Wolverine HA ...\"\n
      \ cmd=\"source /db2u/scripts/include/common_functions.sh && start_wvha_allnodes\"\n
      \ kubectl exec -it -n db2-spm ${CAT_NODE?} -- bash -c \"$cmd\"\n RC=$?\nfi\nexit
      $RC \n"
    image: icr.io/obs/hdm/db2u/db2u.tools:11.5.4.0-56-unknown
    imagePullPolicy: IfNotPresent
    name: engn-update
    resources:
      limits:
        cpu: 200m
        memory: 250Mi
      requests:
        cpu: 100m
        memory: 200Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsNonRoot: true
      runAsUser: 500
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/blumeta0
      name: metavol
    - mountPath: /mnt/blumeta0/configmap/hadr
      name: db2u-release-2-db2u-hadr-config-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: db2u-token-sv5sw
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: db2u-dockercfg-ndhhw
  - name: ibm-registry
  initContainers:
  - args:
    - -cx
    - /tools/post-install/db2u_ready.sh --replicas 1 --template db2u-release-2 --namespace
      db2-spm --dbType db2oltp
    command:
    - /bin/sh
    image: icr.io/obs/hdm/db2u/db2u.tools:11.5.4.0-56-unknown
    imagePullPolicy: IfNotPresent
    name: condition-ready
    resources:
      limits:
        cpu: 200m
        memory: 250Mi
      requests:
        cpu: 100m
        memory: 200Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsNonRoot: true
      runAsUser: 500
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: db2u-token-sv5sw
      readOnly: true
  - command:
    - /bin/sh
    - -ec
    - |
      DETERMINATION_FILE=/mnt/blumeta0/nodeslist
      CAT_NODE=$(head -1 $DETERMINATION_FILE)
      # After INSTDB job completes, Db2 instance home is persisted on disk. Which is a
      # prerequisite for the VRMF detection code, since it depends on ~/sqllib/.instuse file.
      kubectl wait --for=condition=complete job/db2u-release-2-db2u-sqllib-shared-job -n db2-spm
      kubectl exec -it -n db2-spm ${CAT_NODE?} -- bash -c "sudo /db2u/scripts/detect_db2_vrmf_change.sh -file"
    image: icr.io/obs/hdm/db2u/db2u.tools:11.5.4.0-56-unknown
    imagePullPolicy: IfNotPresent
    name: detect-vrmf-change
    resources:
      limits:
        cpu: 200m
        memory: 500Mi
      requests:
        cpu: 100m
        memory: 250Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsNonRoot: true
      runAsUser: 500
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/blumeta0
      name: metavol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: db2u-token-sv5sw
      readOnly: true
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    runAsNonRoot: true
    seLinuxOptions:
      level: s0:c24,c14
  serviceAccount: db2u
  serviceAccountName: db2u
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - name: metavol
    persistentVolumeClaim:
      claimName: db2u-release-2-db2u-meta-storage
  - configMap:
      defaultMode: 420
      name: db2u-release-2-db2u-hadr-config
    name: db2u-release-2-db2u-hadr-config-volume
  - name: db2u-token-sv5sw
    secret:
      defaultMode: 420
      secretName: db2u-token-sv5sw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-07-09T12:27:00Z"
    message: '0/5 nodes are available: 5 node(s) didn''t match node selector.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
And the output of "oc get nodes --show-labels":
NAME                                        STATUS   ROLES    AGE   VERSION           LABELS
ip-10-0-51-114.eu-west-2.compute.internal   Ready    worker   10d   v1.17.1+912792b   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-2,failure-domain.beta.kubernetes.io/zone=eu-west-2a,icp4data=icp4data,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-51-114,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=m4.xlarge,node.openshift.io/os_id=rhcos,topology.kubernetes.io/region=eu-west-2,topology.kubernetes.io/zone=eu-west-2a
ip-10-0-52-157.eu-west-2.compute.internal   Ready    master   10d   v1.17.1+912792b   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-2,failure-domain.beta.kubernetes.io/zone=eu-west-2a,icp4data=icp4data,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-52-157,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=m4.xlarge,node.openshift.io/os_id=rhcos,topology.kubernetes.io/region=eu-west-2,topology.kubernetes.io/zone=eu-west-2a
ip-10-0-56-116.eu-west-2.compute.internal   Ready    worker   10d   v1.17.1+912792b   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-2,failure-domain.beta.kubernetes.io/zone=eu-west-2a,icp4data=icp4data,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-56-116,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=m4.xlarge,node.openshift.io/os_id=rhcos,topology.kubernetes.io/region=eu-west-2,topology.kubernetes.io/zone=eu-west-2a
ip-10-0-60-205.eu-west-2.compute.internal   Ready    master   10d   v1.17.1+912792b   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-2,failure-domain.beta.kubernetes.io/zone=eu-west-2a,icp4data=icp4data,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-60-205,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=m4.xlarge,node.openshift.io/os_id=rhcos,topology.kubernetes.io/region=eu-west-2,topology.kubernetes.io/zone=eu-west-2a
ip-10-0-63-107.eu-west-2.compute.internal   Ready    master   10d   v1.17.1+912792b   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-2,failure-domain.beta.kubernetes.io/zone=eu-west-2a,icp4data=icp4data,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-63-107,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=m4.xlarge,node.openshift.io/os_id=rhcos,topology.kubernetes.io/region=eu-west-2,topology.kubernetes.io/zone=eu-west-2a

Best Answer

The pod is not being scheduled because no node label matches what the pod's affinity section is looking for:

nodeSelectorTerms:
- matchExpressions:
  - key: beta.kubernetes.io/arch
    operator: In
    values:
    - unknown
This looks like it is searching for the label beta.kubernetes.io/arch=unknown. Assuming the pod is created by a Deployment, a ReplicaSet, or even a Job, you need to oc edit that controller resource and change the nodeSelectorTerms value to amd64, after which the scheduling constraint should be satisfied.
oc describe on your pod should tell you the controller resource, i.e. the value of Controlled By:. The whole check is sketched below.
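A minimal sketch of that diagnosis and fix, using the pod, Job, and namespace names from the output above (the grep filter and the oc edit target are illustrative assumptions):

# Confirm that no node carries the label the affinity requires,
# while all five nodes are beta.kubernetes.io/arch=amd64
oc get nodes -l beta.kubernetes.io/arch=unknown   # returns nothing
oc get nodes -l beta.kubernetes.io/arch=amd64     # lists all 5 nodes

# Identify the owning controller of the stuck pod ("Controlled By:")
oc describe pod db2u-release-2-db2u-engn-update-job-8k7h8 -n db2-spm | grep 'Controlled By'

# Edit the controller so its pod template requires amd64 instead of unknown:
#   values:
#   - amd64
oc edit job db2u-release-2-db2u-engn-update-job -n db2-spm

Note that a Job's pod template is immutable after creation, so for the Job-owned pods this edit may be rejected; in that case the architecture value has to be corrected in the chart configuration and the release redeployed.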

Regarding Db2 on OpenShift, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/62810901/
