
Getting the K8s API server health status from Go

Reposted. Author: 行者123. Updated: 2023-12-02 11:24:16

I have a Go program, and I need to add a new call to the K8s API server's status (/livez) API to fetch its health state:
https://kubernetes.io/docs/reference/using-api/health-checks/
The program runs on the same cluster as the API server and needs to read the /livez status. I tried to find this API in the client-go library, but could not find a method that implements it...
https://github.com/kubernetes/client-go
Is there a way to do this from a Go program running on the same cluster as the API server?

Best answer

Update (final answer)
Addendum
The OP asked me to amend the answer to show a configuration with a "fine-tuned" or "specific" service account, rather than using cluster-admin.
As far as I can tell, every pod has permission to read /healthz by default. For example, the CronJob below works fine without any ServiceAccount at all:

# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok-no-svc
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          ######### serviceAccountName: health-reader-sa
          containers:
          - name: is-healthz-ok-no-svc
            image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
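This likely works because, since Kubernetes 1.14, the default RBAC bootstrap ships a `system:public-info-viewer` ClusterRole bound to both authenticated and unauthenticated users. Its approximate shape (reproduced from memory, not from the original answer) is:

```yaml
# Shipped by default in recent Kubernetes versions (approximate shape);
# grants read access to the unauthenticated-safe endpoints for all requests.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:public-info-viewer
rules:
- nonResourceURLs: ["/healthz", "/livez", "/readyz", "/version"]
  verbs: ["get"]
```

You can inspect the actual role on your own cluster with `kubectl get clusterrole system:public-info-viewer -o yaml` to confirm what it grants.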
Original
I went ahead and wrote a proof of concept for this. You can find the full repo here, but the code is below.
main.go
package main

import (
    "context"
    "errors"
    "fmt"
    "os"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    path := "/healthz"
    // Note: client-go v0.18+ requires a context argument to DoRaw
    content, err := client.Discovery().RESTClient().Get().AbsPath(path).DoRaw(context.TODO())
    if err != nil {
        fmt.Printf("ErrorBadRequest : %s\n", err.Error())
        os.Exit(1)
    }

    contentStr := string(content)
    if contentStr != "ok" {
        fmt.Printf("ErrorNotOk : response != 'ok' : %s\n", contentStr)
        os.Exit(1)
    }

    fmt.Println("Success : ok!")
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, errors.New("failed loading client config")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, errors.New("failed getting clientset")
    }
    return clientset, nil
}
Dockerfile
FROM golang:latest
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]
deployment.yaml
(as a CronJob)
# cronjob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: is-healthz-ok
spec:
  schedule: "*/5 * * * *" # at every fifth minute
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: is-healthz-ok
          containers:
          - name: is-healthz-ok
            image: oze4/is-healthz-ok:latest
          restartPolicy: OnFailure
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: is-healthz-ok
  namespace: default
---
# cluster role binding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: is-healthz-ok
subjects:
- kind: ServiceAccount
  name: is-healthz-ok
  namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
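As a sketch of the "create your own ClusterRole" suggestion in the comment above: the health endpoints are non-resource URLs, so a tightly scoped role only needs `get` on those paths. The role name below is my own invention, not from the original answer:

```yaml
# A minimal ClusterRole granting read-only access to the health endpoints,
# as an alternative to binding cluster-admin.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: health-reader # hypothetical name
rules:
- nonResourceURLs: ["/healthz", "/livez", "/readyz"]
  verbs: ["get"]
---
# Bind it to the CronJob's ServiceAccount in place of cluster-admin.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: health-reader
subjects:
- kind: ServiceAccount
  name: is-healthz-ok
  namespace: default
roleRef:
  kind: ClusterRole
  name: health-reader
  apiGroup: rbac.authorization.k8s.io
```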
Screenshot
Successful CronJob run

Update 1
The OP asked how to deploy an "in-cluster-client-config", so I am providing an example deployment (one that I am actually using).
You can find the repo here
Example deployment (I am using a CronJob, but it could be anything):
cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: remove-terminating-namespaces-cronjob
spec:
  schedule: "0 */1 * * *" # at minute 0 of each hour aka once per hour
  #successfulJobsHistoryLimit: 0
  #failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: svc-remove-terminating-namespaces
          containers:
          - name: remove-terminating-namespaces
            image: oze4/service.remove-terminating-namespaces:latest
          restartPolicy: OnFailure
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: svc-remove-terminating-namespaces
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crb-namespace-reader-writer
subjects:
- kind: ServiceAccount
  name: svc-remove-terminating-namespaces
  namespace: default
roleRef:
  kind: ClusterRole
  ##########################################################################
  # Instead of assigning cluster-admin you can create your own ClusterRole #
  # I used cluster-admin because this is a homelab                         #
  ##########################################################################
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---

Original answer
It sounds like what you are looking for is the "in-cluster-client-config" from client-go.
It is important to remember that when using the "in-cluster-client-config", the API calls in your Go code use *that* pod's service account. Just make sure you are testing with an account that has permission to read "/livez".
I tested the following code and was able to get the "livez" status:
package main

import (
    "context"
    "errors"
    "flag"
    "fmt"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // I find it easiest to use "out-of-cluster" for testing
    // client, err := newOutOfClusterClient()

    client, err := newInClusterClient()
    if err != nil {
        panic(err.Error())
    }

    livez := "/livez"
    // Note: client-go v0.18+ requires a context argument to DoRaw
    content, err := client.Discovery().RESTClient().Get().AbsPath(livez).DoRaw(context.TODO())
    if err != nil {
        panic(err.Error())
    }

    fmt.Println(string(content))
}

func newInClusterClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, errors.New("failed loading client config")
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, errors.New("failed getting clientset")
    }
    return clientset, nil
}

// I find it easiest to use "out-of-cluster" for testing
func newOutOfClusterClient() (*kubernetes.Clientset, error) {
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        return nil, err
    }

    // create the clientset
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }

    return client, nil
}

A similar question about getting the K8s API server health status from Go can be found on Stack Overflow: https://stackoverflow.com/questions/64113932/
