So I have this project already deployed in GKE, and I'm trying to set up CI/CD for it from GitHub Actions. I added a workflow file containing:

name: Build and Deploy to GKE
on:
  push:
    branches:
      - main

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }}    # Add your cluster name here.
  GKE_ZONE: ${{ secrets.GKE_ZONE }}          # Add your cluster zone here.
  DEPLOYMENT_NAME: ems-app                   # Add your deployment name here.
  IMAGE: ciputra-ems-backend

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      # Setup gcloud CLI
      - uses: google-github-actions/setup-gcloud@94337306dda8180d967a56932ceb4ddcf01edae7
        with:
          service_account_key: ${{ secrets.GKE_SA_KEY }}
          project_id: ${{ secrets.GKE_PROJECT }}

      # Configure Docker to use the gcloud command-line tool as a credential
      # helper for authentication
      - run: |-
          gcloud --quiet auth configure-docker

      # Get the GKE credentials so we can deploy to the cluster
      - uses: google-github-actions/get-gke-credentials@fb08709ba27618c31c09e014e1d8364b02e5042e
        with:
          cluster_name: ${{ env.GKE_CLUSTER }}
          location: ${{ env.GKE_ZONE }}
          credentials: ${{ secrets.GKE_SA_KEY }}

      # Build the Docker image
      - name: Build
        run: |-
          docker build \
            --tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
            --build-arg GITHUB_SHA="$GITHUB_SHA" \
            --build-arg GITHUB_REF="$GITHUB_REF" \
            .

      # Push the Docker image to Google Container Registry
      - name: Publish
        run: |-
          docker push "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA"

      # Set up kustomize
      - name: Set up Kustomize
        run: |-
          curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
          chmod u+x ./kustomize

      # Deploy the Docker image to the GKE cluster
      - name: Deploy
        run: |-
          ./kustomize edit set image LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG=$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:$GITHUB_SHA
          ./kustomize build . | kubectl apply -k ./
          kubectl rollout status deployment/$DEPLOYMENT_NAME
          kubectl get services -o wide
But when the workflow reaches the Deploy step, it shows this error:

The Service "ems-app-service" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update

Now, from what I've found, that message isn't really accurate, since resourceVersion is supposed to change with every update, so I just removed it.
Here is my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: ems-app
  name: ems-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ems-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ems-app
    spec:
      containers:
      - image: gcr.io/ciputra-nusantara/ems@sha256:70c34c5122039cb7fa877fa440fc4f98b4f037e06c2e0b4be549c4c992bcc86c
        imagePullPolicy: IfNotPresent
        name: ems-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
And my service.yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: ems-app
  name: ems-app-service
  namespace: default
spec:
  clusterIP: 10.88.10.114
  clusterIPs:
  - 10.88.10.114
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30261
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ems-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 34.143.255.159
Best Answer
Since the title of this question is more about Kubernetes than about GCP, I'll answer, because I ran into the same problem while using AWS EKS.
How to fix metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

is an error that can appear when using kubectl apply.
kubectl apply performs a three-way merge between the local file, the live Kubernetes object manifest, and the kubectl.kubernetes.io/last-applied-configuration annotation on that live object manifest.
So, for some reason, a resourceVersion value made it into your last-applied-configuration, probably because someone exported the live manifest to a file, modified it, and applied it again.
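That diagnosis fits the service.yaml in the question: it carries server-managed fields (status, clusterIP/clusterIPs, nodePort, the load-balancer cleanup finalizer) that are typical of an exported live object. A hand-maintained version of that Service would keep only the declarative fields; a sketch, reusing just the names and ports from the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  labels:
    app: ems-app
  name: ems-app-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ems-app
  type: LoadBalancer
```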
When you then apply a new local file without that value (and it shouldn't have it), while the value is still present in last-applied-configuration, kubectl decides the field should be removed from your live manifest, so in the subsequent patch operation it sends the field explicitly as resourceVersion: null, which ought to get rid of it. But that doesn't work: such a patch breaks the rules (as far as I understand them) and is rejected as invalid.
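The deletion half of that merge can be illustrated with a toy sketch (this is not kubectl's actual code, just the idea): any key present in last-applied-configuration but absent from the new local file is emitted in the patch with a null value.

```python
def three_way_delete_patch(last_applied: dict, local: dict) -> dict:
    """Toy model of the deletion part of kubectl's three-way merge:
    keys present in last-applied-configuration but absent from the new
    local file are sent as `key: null` so the server removes them."""
    return {key: None for key in last_applied if key not in local}

last_applied = {"resourceVersion": "12345", "sessionAffinity": "None"}
local = {"sessionAffinity": "None"}  # new file correctly omits resourceVersion

# The patch tries to null out resourceVersion -- exactly the value the
# API server refuses to accept as empty, hence the error message.
print(three_way_delete_patch(last_applied, local))  # {'resourceVersion': None}
```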
As feichashao mentioned, the way to fix it is to delete the last-applied-configuration annotation and re-apply your local file.
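In practice that can be done with kubectl alone; a sketch, using the Service and namespace from the question (adjust to your resource). The trailing `-` on the annotation name tells kubectl annotate to remove it:

```shell
# Remove the stale last-applied-configuration annotation from the live object
kubectl -n default annotate service ems-app-service \
  kubectl.kubernetes.io/last-applied-configuration-

# Re-apply the local file; kubectl warns about the missing annotation
# and patches a fresh one back in automatically
kubectl -n default apply -f service.yaml
```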
Once you've done that, the output of kubectl apply will look like this:

Warning: resource <your_resource> is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

and your live manifest will be updated.
Regarding kubernetes - How to fix metadata.resourceVersion: Invalid value: "": must be specified for an update, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/71581444/