I'm deploying HA Vault on k8s (EKS), and I'm getting this error on one of the Vault pods, which I believe is also causing the other pods to fail. Here is the output of kubectl get events:
26m Normal Created pod/vault-1 Created container vault
26m Normal Started pod/vault-1 Started container vault
26m Normal Pulled pod/vault-1 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
7m40s Warning BackOff pod/vault-1 Back-off restarting failed container
2m38s Normal Scheduled pod/vault-1 Successfully assigned vault-foo/vault-1 to ip-10-101-0-103.ec2.internal
2m35s Normal SuccessfulAttachVolume pod/vault-1 AttachVolume.Attach succeeded for volume "pvc-acfc7e26-3616-4075-ab79-0c3f7b0f6470"
2m35s Normal SuccessfulAttachVolume pod/vault-1 AttachVolume.Attach succeeded for volume "pvc-19d03d48-1de2-41f8-aadf-02d0a9f4bfbd"
48s Normal Pulled pod/vault-1 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
48s Normal Created pod/vault-1 Created container vault
99s Normal Started pod/vault-1 Started container vault
60s Warning BackOff pod/vault-1 Back-off restarting failed container
27m Normal TaintManagerEviction pod/vault-2 Cancelling deletion of Pod vault-foo/vault-2
28m Warning FailedScheduling pod/vault-2 0/4 nodes are available: 1 Insufficient memory, 4 Insufficient cpu.
28m Warning FailedScheduling pod/vault-2 0/5 nodes are available: 1 Insufficient memory, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 4 Insufficient cpu.
27m Normal Scheduled pod/vault-2 Successfully assigned vault-foo/vault-2 to ip-10-101-0-103.ec2.internal
27m Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-fb91141d-ebd9-4767-b122-da8c98349cba"
27m Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-95effe76-6e01-49ad-9bec-14e091e1a334"
27m Normal Pulling pod/vault-2 Pulling image "hashicorp/vault-enterprise:1.5.0_ent"
27m Normal Pulled pod/vault-2 Successfully pulled image "hashicorp/vault-enterprise:1.5.0_ent"
26m Normal Created pod/vault-2 Created container vault
26m Normal Started pod/vault-2 Started container vault
26m Normal Pulled pod/vault-2 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
7m26s Warning BackOff pod/vault-2 Back-off restarting failed container
2m36s Warning FailedScheduling pod/vault-2 0/7 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 4 Insufficient cpu.
114s Warning FailedScheduling pod/vault-2 0/8 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 4 Insufficient cpu.
104s Warning FailedScheduling pod/vault-2 0/9 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 4 Insufficient cpu.
93s Normal Scheduled pod/vault-2 Successfully assigned vault-foo/vault-2 to ip-10-101-0-82.ec2.internal
88s Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-fb91141d-ebd9-4767-b122-da8c98349cba"
88s Normal SuccessfulAttachVolume pod/vault-2 AttachVolume.Attach succeeded for volume "pvc-95effe76-6e01-49ad-9bec-14e091e1a334"
83s Normal Pulling pod/vault-2 Pulling image "hashicorp/vault-enterprise:1.5.0_ent"
81s Normal Pulled pod/vault-2 Successfully pulled image "hashicorp/vault-enterprise:1.5.0_ent"
38s Normal Created pod/vault-2 Created container vault
37s Normal Started pod/vault-2 Started container vault
38s Normal Pulled pod/vault-2 Container image "hashicorp/vault-enterprise:1.5.0_ent" already present on machine
4s Warning BackOff pod/vault-2 Back-off restarting failed container
2m38s Normal Scheduled pod/vault-agent-injector-d54bdc675-qwsmz Successfully assigned vault-foo/vault-agent-injector-d54bdc675-qwsmz to ip-10-101-2-91.ec2.internal
2m37s Normal Pulling pod/vault-agent-injector-d54bdc675-qwsmz Pulling image "hashicorp/vault-k8s:latest"
2m36s Normal Pulled pod/vault-agent-injector-d54bdc675-qwsmz Successfully pulled image "hashicorp/vault-k8s:latest"
2m36s Normal Created pod/vault-agent-injector-d54bdc675-qwsmz Created container sidecar-injector
2m35s Normal Started pod/vault-agent-injector-d54bdc675-qwsmz Started container sidecar-injector
28m Normal Scheduled pod/vault-agent-injector-d54bdc675-wz9ws Successfully assigned vault-foo/vault-agent-injector-d54bdc675-wz9ws to ip-10-101-0-87.ec2.internal
28m Normal Pulled pod/vault-agent-injector-d54bdc675-wz9ws Container image "hashicorp/vault-k8s:latest" already present on machine
28m Normal Created pod/vault-agent-injector-d54bdc675-wz9ws Created container sidecar-injector
28m Normal Started pod/vault-agent-injector-d54bdc675-wz9ws Started container sidecar-injector
3m22s Normal Killing pod/vault-agent-injector-d54bdc675-wz9ws Stopping container sidecar-injector
3m22s Warning Unhealthy pod/vault-agent-injector-d54bdc675-wz9ws Readiness probe failed: Get https://10.101.0.73:8080/health/ready: dial tcp 10.101.0.73:8080: connect: connection refused
3m18s Warning Unhealthy pod/vault-agent-injector-d54bdc675-wz9ws Liveness probe failed: Get https://10.101.0.73:8080/health/ready: dial tcp 10.101.0.73:8080: connect: no route to host
28m Normal SuccessfulCreate replicaset/vault-agent-injector-d54bdc675 Created pod: vault-agent-injector-d54bdc675-wz9ws
2m38s Normal SuccessfulCreate replicaset/vault-agent-injector-d54bdc675 Created pod: vault-agent-injector-d54bdc675-qwsmz
28m Normal ScalingReplicaSet deployment/vault-agent-injector Scaled up replica set vault-agent-injector-d54bdc675 to 1
2m38s Normal ScalingReplicaSet deployment/vault-agent-injector Scaled up replica set vault-agent-injector-d54bdc675 to 1
28m Normal EnsuringLoadBalancer service/vault-ui Ensuring load balancer
28m Normal EnsuredLoadBalancer service/vault-ui Ensured load balancer
26m Normal UpdatedLoadBalancer service/vault-ui Updated load balancer with new hosts
3m24s Normal DeletingLoadBalancer service/vault-ui Deleting load balancer
3m23s Warning PortNotAllocated service/vault-ui Port 32476 is not allocated; repairing
3m23s Warning ClusterIPNotAllocated service/vault-ui Cluster IP 172.20.216.143 is not allocated; repairing
3m22s Warning FailedToUpdateEndpointSlices service/vault-ui Error updating Endpoint Slices for Service vault-foo/vault-ui: failed to update vault-ui-crtg4 EndpointSlice for Service vault-foo/vault-ui: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "vault-ui-crtg4": the object has been modified; please apply your changes to the latest version and try again
3m16s Warning FailedToUpdateEndpoint endpoints/vault-ui Failed to update endpoint vault-foo/vault-ui: Operation cannot be fulfilled on endpoints "vault-ui": the object has been modified; please apply your changes to the latest version and try again
2m52s Normal DeletedLoadBalancer service/vault-ui Deleted load balancer
2m39s Normal EnsuringLoadBalancer service/vault-ui Ensuring load balancer
2m36s Normal EnsuredLoadBalancer service/vault-ui Ensured load balancer
96s Normal UpdatedLoadBalancer service/vault-ui Updated load balancer with new hosts
28m Normal NoPods poddisruptionbudget/vault No matching pods found
28m Normal SuccessfulCreate statefulset/vault create Pod vault-0 in StatefulSet vault successful
28m Normal SuccessfulCreate statefulset/vault create Pod vault-1 in StatefulSet vault successful
28m Normal SuccessfulCreate statefulset/vault create Pod vault-2 in StatefulSet vault successful
2m40s Normal NoPods poddisruptionbudget/vault No matching pods found
2m38s Normal SuccessfulCreate statefulset/vault create Pod vault-0 in StatefulSet vault successful
2m38s Normal SuccessfulCreate statefulset/vault create Pod vault-1 in StatefulSet vault successful
2m38s Normal SuccessfulCreate statefulset/vault create Pod vault-2 in StatefulSet vault successful
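To see which objects in a stream like the one above are actually failing, the Warning events can be counted per object. This is a minimal sketch, assuming the whitespace-separated LAST-SEEN TYPE REASON OBJECT MESSAGE column layout shown above; the sample lines are abbreviated copies of the output, and warning_reasons is a hypothetical helper, not a kubectl feature:

```python
from collections import Counter

# Abbreviated sample of `kubectl get events` output in the
# LAST-SEEN TYPE REASON OBJECT MESSAGE layout shown above.
sample = """\
7m40s Warning BackOff pod/vault-1 Back-off restarting failed container
28m Warning FailedScheduling pod/vault-2 0/4 nodes are available: 1 Insufficient memory, 4 Insufficient cpu.
2m38s Normal Scheduled pod/vault-1 Successfully assigned vault-foo/vault-1 to ip-10-101-0-103.ec2.internal
"""

def warning_reasons(lines: str) -> Counter:
    """Count Warning events by (reason, object) to see what is failing."""
    counts = Counter()
    for line in lines.splitlines():
        parts = line.split(None, 4)  # AGE, TYPE, REASON, OBJECT, MESSAGE
        if len(parts) >= 4 and parts[1] == "Warning":
            counts[(parts[2], parts[3])] += 1
    return counts

print(warning_reasons(sample))
```

In practice you would feed it the real output, e.g. from kubectl get events --no-headers.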
Here are my Helm value overrides:
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"

  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m

server:
  # Use the Enterprise Image
  image:
    repository: "hashicorp/vault-enterprise"
    tag: "1.5.0_ent"

  # These Resource Limits are in line with node requirements in the
  # Vault Reference Architecture for a Small Cluster
  resources:
    requests:
      memory: 8Gi
      cpu: 2000m
    limits:
      memory: 16Gi
      cpu: 2000m

  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  # extraEnvironmentVars is a list of extra environment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path .
  #extraVolumes:
  #  - type: secret
  #    name: tls-server
  #  - type: secret
  #    name: tls-ca
  #  - type: secret
  #    name: kms-creds
  extraVolumes:
    - type: secret
      name: vault-server-tls

  # This configures the Vault Statefulset to create a PVC for audit logs.
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
          tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          tls_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
        }

        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "http://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
            leader_api_addr = "http://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
            leader_api_addr = "http://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.ca"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
        }

        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200

  # For Added Security, edit the below
  #loadBalancerSourceRanges:
  #  - < Your IP RANGE Ex. 10.0.0.0/16 >
  #  - < YOUR SINGLE IP Ex. 1.78.23.3/32 >
What have I configured wrong?
Best answer
There are several issues here, all of them indicated by error messages like this one:
0/9 nodes are available: 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict, 1 node(s) were unschedulable, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 4 Insufficient cpu.
You have 9 nodes, but none of them is available for scheduling due to various conditions. Note that each node can be affected by more than one issue, so the numbers can add up to more than the total number of nodes.
Let's break them down one by one:
Insufficient memory: run kubectl describe node <node-name> to check how much allocatable memory is left, and compare that with your pods' requests and limits. Note that Kubernetes sets aside the full amount of memory a pod requests, regardless of how much of it the pod actually uses.
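The comparison the scheduler makes can be sketched numerically. This is a simplified illustration, not the actual Kubernetes resource model: parse_quantity and fits are hypothetical helpers for reasoning about quantities such as the 8Gi request and 2000m CPU in the values above.

```python
# Multipliers for common Kubernetes quantity suffixes
# (binary suffixes like Gi, decimal like M, and milli-units like m).
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
         "k": 10**3, "M": 10**6, "G": 10**9, "m": 1e-3}

def parse_quantity(q: str) -> float:
    """Convert a quantity string like '8Gi' or '2000m' to a plain number."""
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

def fits(allocatable: str, already_requested: str, pod_request: str) -> bool:
    """True if the pod's request fits into the node's remaining capacity."""
    remaining = parse_quantity(allocatable) - parse_quantity(already_requested)
    return parse_quantity(pod_request) <= remaining

# A node with 15Gi allocatable and 9Gi already requested cannot take
# a pod requesting 8Gi -> the scheduler reports "Insufficient memory".
print(fits("15Gi", "9Gi", "8Gi"))  # False
```

The same arithmetic applies to CPU, where 2000m means 2 cores.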
Insufficient cpu: the same as above, applied to CPU.
node(s) didn't match pod affinity/anti-affinity: check your affinity/anti-affinity rules.
node(s) didn't satisfy existing pods anti-affinity rules: same as above.
node(s) had volume node affinity conflict: happens when a pod cannot be scheduled because it cannot reach its volume from another availability zone. You can fix this by creating a storageclass for a single zone and then using that storageclass in your PVC.
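As an illustration of the single-zone storageclass approach, here is a hedged sketch of what that could look like on EKS. The name ebs-single-zone, the provisioner, and the zone us-east-1a are placeholders to adapt, not values taken from the question:

```yaml
# Hypothetical StorageClass pinned to one availability zone;
# reference it via storageClassName in your PVC so the volume
# and the pod end up in the same zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-single-zone
provisioner: kubernetes.io/aws-ebs
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a
```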
node(s) were unschedulable: the node was marked as Unschedulable, which brings us to the next issue below:
node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate: this corresponds to the NodeCondition Ready = False. You can check a node's taints with kubectl describe node, and remove a taint with kubectl taint nodes <node-name> <taint-name>-. See Taints and Tolerations for more details.
There is also a GitHub thread about a similar issue that you may find useful.
Try checking/eliminating these issues one by one, starting from the first one listed above, as in some cases they can set off a "chain reaction".
For more on "kubernetes - k8s: getting error 1 Insufficient memory, 1 node(s) didn't match pod affinity/anti-affinity for HashiCorp Vault", see the similar question on Stack Overflow: https://stackoverflow.com/questions/65841399/