I have an existing microservice infrastructure and I use Keycloak as the authentication provider. Everything worked fine until I enabled the Istio service mesh for the k8s namespace. Now every container works except Keycloak. My Keycloak Deployment looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: keycloak
  namespace: lms
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          env:
            - name: DB_DATABASE
              value: lms
            - name: DB_USER
              value: root
            - name: DB_PASSWORD
              value: "some pass"
            - name: DB_ADDR
              value: mysql
            - name: DB_PORT
              value: "3306"
            - name: KEYCLOAK_USER
              value: admin
            - name: KEYCLOAK_PASSWORD
              value: "some pass"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
          readinessProbe:
            httpGet:
              path: /auth/
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /auth/
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 10
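For context, "enabling the mesh for the namespace" normally means labeling the namespace for automatic sidecar injection and then recreating the pods (the injected istio/proxy_init and proxyv2 containers in the events below suggest that is what happened here, but this is an assumption). A typical way to do it:

kubectl label namespace lms istio-injection=enabled
# existing pods only pick up the sidecar once they are recreated
kubectl -n lms delete pod -l app=keycloak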
The pod events show the readiness probe failing over and over:

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned lms/keycloak-8cccb54c6-7czmq to gke-ing-standard-cluster-default-pool-59e1dee5-d4sn
Normal Pulled 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Container image "docker.io/istio/proxy_init:1.2.4" already present on machine
Normal Created 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Created container
Normal Started 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Started container
Normal Pulling 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn pulling image "jboss/keycloak"
Normal Pulled 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Successfully pulled image "jboss/keycloak"
Normal Created 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Created container
Normal Started 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Started container
Normal Pulled 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Container image "docker.io/istio/proxyv2:1.2.4" already present on machine
Normal Created 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Created container
Normal Started 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Started container
Warning Unhealthy 8m30s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34376->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 8m20s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34432->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 8m10s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34490->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 8m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34548->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m50s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34616->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m40s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34676->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m30s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34736->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m20s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34808->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m10s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34866->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 30s (x31 over 7m) kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn (combined from similar events): Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:37320->10.8.0.67:8080: read: connection reset by peer
The istio-proxy sidecar container logs the following:

2020-02-28T10:55:25.880386Z info FLAG: --zipkinAddress="zipkin.istio-system:9411"
2020-02-28T10:55:25.880485Z info Version root@ubuntu-docker.io/istio-94746ccd404a8e056483dd02e4e478097b950da6-dirty-94746ccd404a8e056483dd02e4e478097b950da6-dirty-Modified
2020-02-28T10:55:25.880701Z info Obtained private IP [10.8.0.67]
2020-02-28T10:55:25.881093Z info Proxy role: &model.Proxy{ClusterID:"", Type:"sidecar", IPAddresses:[]string{"10.8.0.67", "10.8.0.67"}, ID:"keycloak-8cccb54c6-7czmq.lms", Locality:(*core.Locality)(nil), DNSDomain:"lms.svc.cluster.local", TrustDomain:"cluster.local", PilotIdentity:"", MixerIdentity:"", ConfigNamespace:"", Metadata:map[string]string{}, SidecarScope:(*model.SidecarScope)(nil), ServiceInstances:[]*model.ServiceInstance(nil), WorkloadLabels:model.LabelsCollection(nil)}
2020-02-28T10:55:25.881244Z info PilotSAN []string{"spiffe://cluster.local/ns/istio-system/sa/istio-pilot-service-account"}
2020-02-28T10:55:25.882101Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: /etc/istio/proxy
connectTimeout: 10s
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istio-pilot.istio-system:15011
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: keycloak.lms
statNameLength: 189
tracing:
zipkin:
address: zipkin.istio-system:9411
2020-02-28T10:55:25.882325Z info Monitored certs: []string{"/etc/certs/key.pem", "/etc/certs/root-cert.pem", "/etc/certs/cert-chain.pem"}
2020-02-28T10:55:25.882399Z info waiting 2m0s for /etc/certs/key.pem
2020-02-28T10:55:25.882487Z info waiting 2m0s for /etc/certs/root-cert.pem
2020-02-28T10:55:25.882556Z info waiting 2m0s for /etc/certs/cert-chain.pem
2020-02-28T10:55:25.882634Z info PilotSAN []string{"spiffe://cluster.local/ns/istio-system/sa/istio-pilot-service-account"}
2020-02-28T10:55:25.882932Z info Opening status port 15020
2020-02-28T10:55:25.883419Z info Starting proxy agent
2020-02-28T10:55:25.883790Z info watching /etc/certs for changes
2020-02-28T10:55:25.883872Z info Received new config, resetting budget
2020-02-28T10:55:25.883950Z info Reconciling retry (budget 10)
2020-02-28T10:55:25.884016Z info Epoch 0 starting
2020-02-28T10:55:25.914398Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster keycloak.lms --service-node sidecar~10.8.0.67~keycloak-8cccb54c6-7czmq.lms~lms.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --allow-unknown-fields -l warning --component-log-level misc:error --concurrency 2]
[2020-02-28 10:55:25.976][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, no healthy upstream
[2020-02-28 10:55:25.976][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:49] Unable to establish new stream
2020-02-28T10:55:28.462367Z info Envoy proxy is ready
[2020-02-28 10:56:01.031][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
[2020-02-28T10:57:00.599Z] "- - -" 0 - "-" "-" 1432 1491 226010 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:49060 10.8.2.28:3306 10.8.0.67:49058 -
[2020-02-28T10:56:51.178Z] "- - -" 0 - "-" "-" 48058 142271 235436 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:48996 10.8.2.28:3306 10.8.0.67:48994 -
[2020-02-28T10:56:50.072Z] "- - -" 0 - "-" "-" 2350 2551 236544 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:48980 10.8.2.28:3306 10.8.0.67:48978 -
[2020-02-28 11:01:02.564][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
[2020-02-28T11:02:08.889Z] "- - -" 0 - "-" "-" 1432 1491 247625 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:50964 10.8.2.28:3306 10.8.0.67:50962 -
[2020-02-28T11:01:59.772Z] "- - -" 0 - "-" "-" 48058 142271 256746 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:50908 10.8.2.28:3306 10.8.0.67:50906 -
[2020-02-28T11:01:58.988Z] "- - -" 0 - "-" "-" 2350 2551 257532 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:50902 10.8.2.28:3306 10.8.0.67:50900 -
[2020-02-28 11:07:05.000][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
Best Answer
After a lot of headache I found out that the server actually starts fine and is reachable from inside the pod. The real root cause was obscured, because the failing health checks kept forcing the container to be restarted. In the end, the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" solved the problem:
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
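Why this helps: the kubelet sends its HTTP probes straight to the pod IP, and once the sidecar enforces mutual TLS on inbound traffic those plain-HTTP requests are reset, which matches the "connection reset by peer" events above. With the annotation set, the sidecar injector rewrites the probes to go through the pilot-agent status port (15020, the "Opening status port 15020" line in the sidecar log), and the agent forwards them to the application over localhost. A quick way to confirm the rewrite after redeploying (the pod name below is just the example from the events, and the exact rewritten path can vary by Istio version):

kubectl -n lms get pod keycloak-8cccb54c6-7czmq -o yaml | grep -A 4 readinessProbe
# the injected spec should now point at the pilot-agent instead of port 8080,
# roughly like:
#   readinessProbe:
#     httpGet:
#       path: /app-health/keycloak/readyz
#       port: 15020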
Regarding "kubernetes - Keycloak and Istio service mesh not working", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60450545/