Description
I'm fairly new to Kubernetes. I can run my cluster when I use the default socket (/var/run/dockershim.sock), but when I tried pulling images from my private repository through the crio socket, I noticed the speed is not even comparable.
I'm trying to configure all nodes to use crio.socket, but I can't get the master node to start with this socket.
I followed the Kubernetes documentation Configuring each kubelet in your cluster using kubeadm and the cri-o documentation on GitHub.
Unfortunately, I can't get it to work, since it seems to ignore the private repository flag.
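(A hedged aside: to double-check which CRI endpoint a node is actually using, one option is to look at the flags kubeadm writes out for the kubelet; the file path below is the standard kubeadm location, which also appears in the init output further down, while the grep pattern is just an illustrative helper.)
# cat /var/lib/kubelet/kubeadm-flags.env
# systemctl status kubelet -l | grep -o 'container-runtime-endpoint=[^ ]*'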
Steps to reproduce the issue:
kubeadm init \
--upload-certs \
--cri-socket=/var/run/crio/crio.sock \
--node-name=my_node_name \
--image-repository=my.private.repo \
--pod-network-cidr=10.96.0.0/16 \
--kubernetes-version=v1.18.2 \
--control-plane-endpoint=ip:6443 \
--apiserver-cert-extra-sans=ip \
--apiserver-advertise-address=ip
journalctl -xeu crio -f
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043499089+02:00" level=debug msg="Trying to access \"k8s.gcr.io/pause:3.2\"" file="docker/docker_image_src.go:68"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043547722+02:00" level=debug msg="Credentials not found" file="config/config.go:123"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043576124+02:00" level=debug msg="Using registries.d directory /etc/containers/registries.d for sigstore configuration" file="docker/lookaside.go:51"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043706369+02:00" level=debug msg=" Using \"default-docker\" configuration" file="docker/lookaside.go:169"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043736378+02:00" level=debug msg=" No signature storage configuration found for k8s.gcr.io/pause:3.2" file="docker/lookaside.go:174"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043769424+02:00" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/k8s.gcr.io" file="tlsclientconfig/tlsclientconfig.go:21"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043858410+02:00" level=debug msg="GET https://k8s.gcr.io/v2/" file="docker/docker_client.go:516"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.046154250+02:00" level=debug msg="Ping https://k8s.gcr.io/v2/ err Get \"https://k8s.gcr.io/v2/\": dial tcp 10.254.3.15:443: connect: connection refused (&url.Error{Op:\"Get\", URL:\"https://k8s.gcr.io/v2/\", Err:(*net.OpError)(0xc00084d5e0)})" file="docker/docker_client.go:708"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.046239456+02:00" level=debug msg="GET https://k8s.gcr.io/v1/_ping" file="docker/docker_client.go:516"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.048653448+02:00" level=debug msg="Ping https://k8s.gcr.io/v1/_ping err Get \"https://k8s.gcr.io/v1/_ping\": dial tcp 10.254.3.15:443: connect: connection refused (&url.Error{Op:\"Get\", URL:\"https://k8s.gcr.io/v1/_ping\", Err:(*net.OpError)(0xc0006b0690)})" file="docker/docker_client.go:735"
Describe the results you expected:
# kubeadm init \
--upload-certs \
--cri-socket=/var/run/dockershim.sock \
--node-name=my_node_name \
--image-repository=my.private.repo \
--pod-network-cidr=10.96.0.0/16 \
--kubernetes-version=v1.18.2 \
--control-plane-endpoint=ip:6443 \
--apiserver-cert-extra-sans=ip \
--apiserver-advertise-address=ip
W0630 20:24:33.223266 29033 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0630 20:24:35.839949 29033 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0630 20:24:35.841420 29033 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.003647 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
key
[mark-control-plane] Marking the node hostname as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hostname as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: token
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join ip:6443 --token token \
--discovery-token-ca-cert-hash sha256:hash \
--control-plane --certificate-key key
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join ip:6443 --token token \
--discovery-token-ca-cert-hash sha256:hash
If I start the node with the crio socket instead:
# kubeadm init \
--upload-certs \
--cri-socket=/var/run/crio/crio.sock \
--node-name=my_node_name \
--image-repository=my.private.repo \
--pod-network-cidr=10.96.0.0/16 \
--kubernetes-version=v1.18.2 \
--control-plane-endpoint=ip:6443 \
--apiserver-cert-extra-sans=ip \
--apiserver-advertise-address=ip
W0630 20:32:33.827957 2916 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hostname kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.96.134.57 10.96.134.57 10.96.134.57]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hostname localhost] and IPs [10.96.134.57 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hostname localhost] and IPs [10.96.134.57 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0630 20:32:37.829806 2916 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0630 20:32:37.830826 2916 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I can see that localhost is listening on port 10248:
# curl -sSL http://localhost:10248/healthz
ok
Example against the crio socket (as described in the docs):
# curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info | jq
* About to connect() to localhost port 80 (#0)
* Trying /var/run/crio/crio.sock...
* Failed to set TCP_KEEPIDLE on fd 3
* Failed to set TCP_KEEPINTVL on fd 3
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to localhost (/var/run/crio/crio.sock) port 80 (#0)
> GET /info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 30 Jun 2020 18:36:35 GMT
< Content-Length: 240
<
{ [data not shown]
100 240 100 240 0 0 144k 0 --:--:-- --:--:-- --:--:-- 234k
* Connection #0 to host localhost left intact
{
"storage_driver": "overlay2",
"storage_root": "/var/lib/containers/storage",
"cgroup_driver": "systemd",
"default_id_mappings": {
"uids": [
{
"container_id": 0,
"host_id": 0,
"size": 4294967295
}
],
"gids": [
{
"container_id": 0,
"host_id": 0,
"size": 4294967295
}
]
}
}
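As a cross-check (hedged; crictl ships separately from CRI-O), the same socket can also be queried through the CRI API itself, which is the path the kubelet actually uses:
# crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
# crictl --runtime-endpoint unix:///var/run/crio/crio.sock info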
Output of kubelet status:
# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2020-06-30 20:39:49 CEST; 6s ago
Docs: https://kubernetes.io/docs/
Main PID: 8502 (kubelet)
Tasks: 15
Memory: 20.1M
CGroup: /system.slice/kubelet.service
└─8502 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=hostname
Jun 30 20:39:55 hostname kubelet[8502]: I0630 20:39:55.369441 8502 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Jun 30 20:39:55 hostname kubelet[8502]: I0630 20:39:55.399015 8502 kubelet_node_status.go:70] Attempting to register node hostname
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.403707 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.503871 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.604115 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.704324 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.769448 8502 kubelet_node_status.go:92] Unable to register node "hostname" with API server: Post https://ip:6443/api/v1/nodes: dial tcp ip:6443: connect: connection refused
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.805779 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.906014 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:56 hostname kubelet[8502]: E0630 20:39:56.007272 8502 kubelet.go:2267] node "hostname" not found
As far as I know, the networking errors don't matter, because I haven't deployed the network pods yet, so errors at this point are expected.
Output of crio --version:
# crio --version
crio version 1.18.2
Version: 1.18.2
GitCommit: 7f261aeebffed079b4475dde8b9d602b01973d33
GitTreeState: clean
BuildDate: 2020-06-18T21:05:27Z
GoVersion: go1.14
Compiler: gc
Platform: linux/amd64
Linkmode: static
Output of kubelet --version:
# kubelet --version
Kubernetes v1.18.2
Output of the Linux OS version:
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.8 (Maipo)
Additional environment details (AWS, VirtualBox, physical, etc.):
# cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m
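Whenever /etc/default/kubelet (or the systemd drop-in) changes, the kubelet has to be restarted for the extra args to take effect; a minimal sketch, assuming the systemd-managed setup shown in the status output above:
# systemctl daemon-reload
# systemctl restart kubelet
# systemctl status kubelet -l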
Update: I have opened a ticket on GitHub: Kubernetes v1.18.2 with crio version 1.18.2 failing to sync with kubelet on RH7 #3915. It looks like there is a bug, since cri-o cannot handle the remote repository and keeps trying to pull from the default registry k8s.gcr.io. I will update the ticket as soon as I have more information.
Best Answer
So the problem is not exactly a bug in CRI-O, as we (and the CRI-O dev team) initially thought. It turns out quite a lot of configuration has to be applied if a user wants to use CRI-O as the CRI for Kubernetes and also pull from a private repo.
I won't repeat the CRI-O configuration here, since it is already documented in the ticket I raised with the team: Kubernetes v1.18.2 with crio version 1.18.2 failing to sync with kubelet on RH7 #3915.
The first piece of configuration to apply is the container registries that images will be pulled from:
$ cat /etc/containers/registries.conf
[[registry]]
prefix = "k8s.gcr.io"
insecure = false
blocked = false
location = "k8s.gcr.io"
[[registry.mirror]]
location = "my.private.repo"
CRI-O recommends passing this configuration (the kubelet cgroup driver) to the kubelet as a flag (haircommander/cri-o-kubeadm), but for me the flag alone was not enough: at runtime the kubelet also reads /var/lib/kubelet/config.yaml. That route was not an option for me, because the node has to be bootstrapped against the CRI-O socket and not any other socket (ref: Configure cgroup driver used by kubelet on control-plane node). So the whole configuration is passed to kubeadm in a single config file instead:
$ cat /tmp/config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: node.name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
controlPlaneEndpoint: 1.2.3.4:6443
imageRepository: my.private.repo
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.85.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
Then the user can simply start the master/worker node with the flag --config <file.yml>, and the node will boot successfully.
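For reference, a hedged example of the resulting invocation, assuming the config above is saved as /tmp/config.yaml (--upload-certs is optional and mirrors the original command line):
# kubeadm init --config /tmp/config.yaml --upload-certs
Note that kubeadm generally refuses to mix --config with the other init flags, so the CRI socket, image repository and CIDRs all have to live in the file.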
Regarding kubernetes - kubeadm v1.18.2 with crio version 1.18.2 fails to start the master node from a private repo on CentOS7/RH7, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62675268/