My previous single-node k8s test setup, built with kind, recently started failing: the VM crashes after running for just a few minutes and I haven't found the root cause. kind also makes it awkward to apply custom configuration when deploying k8s, so I decided to rebuild a single-node k8s from binaries instead.
Since this cluster is only for development and testing, the control plane is not made highly available: etcd, apiserver, controller-manager and scheduler each run as a single instance.
Environment information (as used throughout this article): host 192.168.0.31 with hostname k8s-node1, an apt-based Linux distribution, kubernetes v1.30.5, containerd 1.7.22.
Most of the configuration files in this article have been uploaded to gitee - k8s-note under the directory "安装k8s/二进制单机部署k8s-v1.30.5"; clone the repo if you need them.
Most commands in this section require root privileges. If a command fails with a permission error, switch to the root user or use sudo.
hostnamectl set-hostname k8s-node1
Add the node to the /etc/hosts file (skip this if your intranet has its own DNS):
192.168.0.31 k8s-node1
sudo apt install -y chrony
sudo systemctl start chrony
Edit /etc/fstab and delete or comment out the swap line, then turn swap off immediately:
sudo swapoff -a
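To confirm swap is fully disabled, check that the Swap row reads zero:
free -h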
# 1. Add the config so the modules load at boot
cat <<EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# 2. Load them immediately
modprobe overlay
modprobe br_netfilter
# 3. Verify; no output means the module did not load
lsmod | grep br_netfilter
Next, configure kernel parameters. The three parameters net.bridge.bridge-nf-call-ip6tables, net.bridge.bridge-nf-call-iptables and net.ipv4.ip_forward are required; adjust the others to suit your environment.
# 1. Add the config file
cat << EOF > /etc/sysctl.d/k8s-sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
vm.swappiness = 0
EOF
# 2. Apply the configuration
sysctl -p /etc/sysctl.d/k8s-sysctl.conf
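A quick way to confirm the settings took effect is to query one of them back:
sysctl net.ipv4.ip_forward
# expected output: net.ipv4.ip_forward = 1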
# 1. Install dependencies
apt install -y ipset ipvsadm
# 2. Load immediately
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# 3. Persist in a config file so the modules load at boot
cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# 4. Verify the modules are loaded
lsmod | grep ip_vs
Since v1.24, k8s no longer supports Docker directly as a container runtime, so this article uses containerd. The binary package can be downloaded from GitHub - containerd; make sure to pick the cri-containerd-cni variant.
tar xf cri-containerd-cni-1.7.22-linux-amd64.tar.gz -C /
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
Edit /etc/containerd/config.toml and change the following:
# For distributions that use systemd as the init system, the official recommendation is systemd as the container cgroup driver
# change false to true
SystemdCgroup = true
# Point the pause image at your own mirror on Aliyun; in an intranet environment, use your internal registry instead
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/rainux/pause:3.9"
systemctl start containerd
systemctl enable containerd
crictl images
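As an optional sanity check that containerd's CRI interface works end to end, pull an image through it; here I reuse the pause image mirror configured above:
# if crictl complains about the runtime endpoint, point it at containerd's socket first:
# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl pull registry.cn-hangzhou.aliyuncs.com/rainux/pause:3.9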
Both the k8s components and the etcd cluster below use the CA certificate. If your organization provides a central CA, simply use certificates issued by it; otherwise, a self-signed CA is enough for a secure setup. Here we generate our own CA certificate.
# Generate the private key ca.key
openssl genrsa -out ca.key 2048
# Generate the root certificate ca.crt from the private key
# /CN is the master's hostname or IP address
# days sets the certificate validity period
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-node1" -days 36500 -out ca.crt
# Copy the CA certificate to /etc/kubernetes/pki
mkdir -p /etc/kubernetes/pki
cp ca.crt ca.key /etc/kubernetes/pki/
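If you want to inspect the generated CA before moving on, openssl can print its subject and validity period:
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -dates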
The etcd package can be downloaded from the official site; after extracting it, put the etcd and etcdctl binaries in a directory on PATH.
Create the certificate config file etcd_ssl.cnf. The IP address is the etcd node's.
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.0.31
# Generate the etcd server certificate
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
# Generate the etcd client certificate
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
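Optionally verify that the server certificate carries the SAN entry from etcd_ssl.cnf (it should list IP Address:192.168.0.31):
openssl x509 -in etcd_server.crt -noout -text | grep -A1 'Subject Alternative Name'
With the certificates ready, create the etcd configuration file /home/rainux/apps/etcd/conf/etcd.conf (the path referenced by the systemd unit below):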
ETCD_NAME=etcd1
ETCD_DATA_DIR=/home/rainux/apps/etcd/data
ETCD_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.0.31:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.0.31:2379
ETCD_PEER_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_PEER_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.31:2380"
ETCD_INITIAL_CLUSTER_STATE=new
Create /etc/systemd/system/etcd.service; adjust the paths of the config file and the etcd binary to match your setup.
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target
[Service]
User=rainux
EnvironmentFile=/home/rainux/apps/etcd/conf/etcd.conf
ExecStart=/home/rainux/apps/etcd/etcd
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
# Check the service status
systemctl status etcd
# Check cluster health with the client certificate
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 endpoint health
# A healthy endpoint prints something like:
https://192.168.0.31:2379 is healthy: successfully committed proposal: took = 13.705325ms
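Beyond endpoint health, a simple put/get round trip confirms that reads and writes work; the key name here is arbitrary:
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 put /test/hello world
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 get /test/hello
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 del /test/hello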
The k8s binaries can be downloaded from GitHub: https://github.com/kubernetes/kubernetes/releases .
Find the download links in the changelog and grab the server binary package, which contains both the master and node binaries.
After extracting, move the binaries to /usr/local/bin.
The apiserver's core job is to expose HTTP REST interfaces for creating, reading, updating, deleting and watching every kind of k8s resource object, making it the hub through which all other modules exchange data and the data bus of the whole system. Beyond that, it is the API entry point for cluster management, the enforcement point for resource quotas, and the place where the cluster's security mechanisms are implemented.
Create the certificate config file master_ssl.cnf. DNS.5 is the hostname configured earlier in /etc/hosts; IP.1 is the Cluster IP of the Master Service virtual service; IP.2 is the apiserver host's IP.
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-node1
IP.1 = 169.169.0.1
IP.2 = 192.168.0.31
# Generate the apiserver server certificate signed by the CA
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=k8s-node1" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
# Create the CSR config for the service account signing key pair (requires cfssl and cfssljson)
cat <<EOF > sa-csr.json
{
  "CN": "sa",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate sa.pem and sa-key.pem
cfssl gencert -initca sa-csr.json | cfssljson -bare sa -
# Extract the public key for the apiserver's --service-account-key-file
openssl x509 -in sa.pem -pubkey -noout > sa.pub
Create the apiserver config file /home/rainux/apps/kubernetes/conf/apiserver.conf (the path referenced by the systemd unit below):
KUBE_API_ARGS="--secure-port=6443 \
--tls-cert-file=/home/rainux/apps/certs/apiserver.crt \
--tls-private-key-file=/home/rainux/apps/certs/apiserver.key \
--client-ca-file=/home/rainux/apps/certs/ca.crt \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-key-file=/home/rainux/apps/certs/sa.pub \
--service-account-signing-key-file=/home/rainux/apps/certs/sa-key.pem \
--apiserver-count=1 \
--endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.0.31:2379 \
--etcd-cafile=/home/rainux/apps/certs/ca.crt \
--etcd-certfile=/home/rainux/apps/certs/etcd_client.crt \
--etcd-keyfile=/home/rainux/apps/certs/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--audit-log-maxsize=100 \
--audit-log-maxage=15 \
--audit-log-path=/home/rainux/apps/kubernetes/logs/apiserver.log \
--v=2"
/etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
# Check the service status
systemctl status kube-apiserver
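Since this setup keeps the apiserver's default authorization mode (AlwaysAllow) and anonymous auth, the health endpoint should answer even without a client certificate; -k skips verification of the self-signed server cert:
curl -k https://192.168.0.31:6443/healthz
# expected output: ok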
Next, generate a client certificate for kubectl and other clients.
openssl genrsa -out client.key 2048
# The /CN value identifies the user name the client presents to the apiserver
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
Create the kubeconfig file $HOME/.kube/config:
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://192.168.0.31:6443
    certificate-authority: /home/rainux/apps/certs/ca.crt
users:
- name: admin
  user:
    client-certificate: /home/rainux/apps/certs/client.crt
    client-key: /home/rainux/apps/certs/client.key
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
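With the kubeconfig in place, kubectl should now be able to reach the apiserver:
kubectl cluster-info
kubectl get ns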
The controller-manager watches the state of specific cluster resources in real time through the apiserver's interfaces; whenever a resource object drifts from its desired state, the controller-manager tries to reconcile it back to the expected state.
/home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/home/rainux/apps/certs/apiserver.key \
--root-ca-file=/home/rainux/apps/certs/ca.crt \
--v=0"
/etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
/home/rainux/apps/kubernetes/conf/kube-scheduler.conf
KUBE_SCHEDULER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--v=0"
/etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
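Both components use leader election, so a quick way to confirm they are talking to the apiserver is to look at their Lease objects in kube-system:
kubectl -n kube-system get leases
# expect leases named kube-controller-manager and kube-scheduler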
Create the kubelet config file /home/rainux/apps/kubernetes/conf/kubelet.conf; pay particular attention to hostname-override and kubeconfig. Note the container-runtime-endpoint value must not carry its own quotes, or they would terminate the KUBELET_ARGS string early.
KUBELET_ARGS="--kubeconfig=/home/rainux/.kube/config \
--config=/home/rainux/apps/kubernetes/conf/kubelet.config \
--hostname-override=k8s-node1 \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--v=0"
Create the /home/rainux/apps/kubernetes/conf/kubelet.config file:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0              # listen address
port: 10250                   # listen port
cgroupDriver: systemd         # cgroup driver; defaults to cgroupfs, systemd is recommended
clusterDNS: ["169.169.0.100"] # cluster DNS address
clusterDomain: cluster.local  # DNS domain suffix for services
authentication:               # whether anonymous access or webhook authentication is allowed
  anonymous:
    enabled: true
/etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
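Once kubelet registers, the node should appear, though it will stay NotReady until a CNI plugin (calico, installed below) is running:
kubectl get nodes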
Create /home/rainux/apps/kubernetes/conf/kube-proxy.conf. The proxy-mode parameter defaults to iptables; since ipvs was set up earlier, set it to ipvs.
KUBE_PROXY_ARGS="--kubeconfig=/home/rainux/.kube/config \
--hostname-override=k8s-node1 \
--proxy-mode=ipvs \
--v=0"
/etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=kubelet.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
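To confirm kube-proxy is really using ipvs, dump the ipvs rule table; entries appear as Services are created:
ipvsadm -Ln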
wget https://docs.projectcalico.org/manifests/calico.yaml
Replace the image references in calico.yaml with mirrored copies (here, images re-uploaded to Aliyun):
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:cni-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:node-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:kube-controllers-v3.25.0
kubectl create -f calico.yaml
# Watch until the calico pods are Running
kubectl get pods -A
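After the calico pods reach Running, the node should flip to Ready:
kubectl get nodes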
Create coredns.yaml. Note that the Service's clusterIP (169.169.0.100) must match the clusterDNS address configured in kubelet.config.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    cluster.local {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local 169.169.0.0/16 {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    . {
        cache 30
        loadbalance
        forward . /etc/resolv.conf
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/rainux/coredns:1.11.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
kubectl create -f coredns.yaml
Create test-dns.yaml to verify that DNS resolution works. After creating the test objects, install nslookup in the debian pod and check whether svc-nginx resolves.
# Create the DNS test objects
kubectl create -f test-dns.yaml
# Install nslookup and curl inside the debian pod
apt update -y
apt install -y dnsutils curl
# Verify the nginx service is reachable by name
nslookup svc-nginx
curl http://svc-nginx
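The contents of test-dns.yaml are not shown above (they live in the gitee repo); a minimal sketch matching the names used in the test (an nginx Deployment, a Service named svc-nginx and a debian pod) could look like the following — the images and resource values here are my own assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25        # assumed image; swap in a mirror if Docker Hub is unreachable
        resources:
          requests:
            cpu: 100m            # a CPU request is also a prerequisite for the HPA example later
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: debian
spec:
  containers:
  - name: debian
    image: debian:12             # assumed image
    command: ["sleep", "infinity"]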
In recent k8s versions, both system resource metrics collection and the HPA rely on metrics-server. Create metrics-server.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls # added so the self-signed certificates are accepted
        image: registry.cn-hangzhou.aliyuncs.com/rainux/metrics-server:v0.7.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 10250
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
kubectl create -f metrics-server.yaml
# After a minute or so, metrics should be available
kubectl top node
kubectl top pod
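As a quick HPA smoke test (assuming the nginx Deployment from the DNS test above, which declares a CPU request), create an autoscaler and check that it reads live metrics:
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=3
kubectl get hpa
# the TARGETS column should show a live CPU percentage instead of <unknown>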
With the steps above complete, a single-node k8s for development and testing is up and running. Adding more nodes later is straightforward, and the binary deployment also makes it easy to tweak cluster parameters.