I have an existing microservice infrastructure that uses Keycloak as the authentication provider. Everything ran fine until I enabled the Istio service mesh for the Kubernetes namespace. Now every other container works, but Keycloak does not. My Keycloak deployment looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: keycloak
  namespace: lms
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          env:
            - name: DB_DATABASE
              value: lms
            - name: DB_USER
              value: root
            - name: DB_PASSWORD
              value: "some pass"
            - name: DB_ADDR
              value: mysql
            - name: DB_PORT
              value: "3306"
            - name: KEYCLOAK_USER
              value: admin
            - name: KEYCLOAK_PASSWORD
              value: "some pass"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
          readinessProbe:
            httpGet:
              path: /auth/
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /auth/
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 10
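Sidecar injection for the lms namespace was presumably enabled with the standard injection label; a minimal sketch, assuming Istio 1.2.x label-based automatic injection (the exact mechanism is an assumption, not stated in the question):

# Assumption: automatic sidecar injection enabled via the standard namespace label
kubectl label namespace lms istio-injection=enabled

With the sidecar injected, the Keycloak pod never becomes ready; kubectl describe pod keycloak-8cccb54c6-7czmq -n lms reports the following events: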
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned lms/keycloak-8cccb54c6-7czmq to gke-ing-standard-cluster-default-pool-59e1dee5-d4sn
Normal Pulled 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Container image "docker.io/istio/proxy_init:1.2.4" already present on machine
Normal Created 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Created container
Normal Started 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Started container
Normal Pulling 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn pulling image "jboss/keycloak"
Normal Pulled 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Successfully pulled image "jboss/keycloak"
Normal Created 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Created container
Normal Started 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Started container
Normal Pulled 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Container image "docker.io/istio/proxyv2:1.2.4" already present on machine
Normal Created 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Created container
Normal Started 10m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Started container
Warning Unhealthy 8m30s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34376->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 8m20s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34432->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 8m10s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34490->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 8m kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34548->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m50s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34616->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m40s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34676->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m30s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34736->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m20s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34808->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 7m10s kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:34866->10.8.0.67:8080: read: connection reset by peer
Warning Unhealthy 30s (x31 over 7m) kubelet, gke-ing-standard-cluster-default-pool-59e1dee5-d4sn (combined from similar events): Readiness probe failed: Get http://10.8.0.67:8080/health: read tcp 10.8.0.1:37320->10.8.0.67:8080: read: connection reset by peer
2020-02-28T10:55:25.880386Z info FLAG: --zipkinAddress="zipkin.istio-system:9411"
2020-02-28T10:55:25.880485Z info Version root@ubuntu-docker.io/istio-94746ccd404a8e056483dd02e4e478097b950da6-dirty-94746ccd404a8e056483dd02e4e478097b950da6-dirty-Modified
2020-02-28T10:55:25.880701Z info Obtained private IP [10.8.0.67]
2020-02-28T10:55:25.881093Z info Proxy role: &model.Proxy{ClusterID:"", Type:"sidecar", IPAddresses:[]string{"10.8.0.67", "10.8.0.67"}, ID:"keycloak-8cccb54c6-7czmq.lms", Locality:(*core.Locality)(nil), DNSDomain:"lms.svc.cluster.local", TrustDomain:"cluster.local", PilotIdentity:"", MixerIdentity:"", ConfigNamespace:"", Metadata:map[string]string{}, SidecarScope:(*model.SidecarScope)(nil), ServiceInstances:[]*model.ServiceInstance(nil), WorkloadLabels:model.LabelsCollection(nil)}
2020-02-28T10:55:25.881244Z info PilotSAN []string{"spiffe://cluster.local/ns/istio-system/sa/istio-pilot-service-account"}
2020-02-28T10:55:25.882101Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: /etc/istio/proxy
connectTimeout: 10s
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istio-pilot.istio-system:15011
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: keycloak.lms
statNameLength: 189
tracing:
zipkin:
address: zipkin.istio-system:9411
2020-02-28T10:55:25.882325Z info Monitored certs: []string{"/etc/certs/key.pem", "/etc/certs/root-cert.pem", "/etc/certs/cert-chain.pem"}
2020-02-28T10:55:25.882399Z info waiting 2m0s for /etc/certs/key.pem
2020-02-28T10:55:25.882487Z info waiting 2m0s for /etc/certs/root-cert.pem
2020-02-28T10:55:25.882556Z info waiting 2m0s for /etc/certs/cert-chain.pem
2020-02-28T10:55:25.882634Z info PilotSAN []string{"spiffe://cluster.local/ns/istio-system/sa/istio-pilot-service-account"}
2020-02-28T10:55:25.882932Z info Opening status port 15020
2020-02-28T10:55:25.883419Z info Starting proxy agent
2020-02-28T10:55:25.883790Z info watching /etc/certs for changes
2020-02-28T10:55:25.883872Z info Received new config, resetting budget
2020-02-28T10:55:25.883950Z info Reconciling retry (budget 10)
2020-02-28T10:55:25.884016Z info Epoch 0 starting
2020-02-28T10:55:25.914398Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster keycloak.lms --service-node sidecar~10.8.0.67~keycloak-8cccb54c6-7czmq.lms~lms.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --allow-unknown-fields -l warning --component-log-level misc:error --concurrency 2]
[2020-02-28 10:55:25.976][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, no healthy upstream
[2020-02-28 10:55:25.976][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:49] Unable to establish new stream
2020-02-28T10:55:28.462367Z info Envoy proxy is ready
[2020-02-28 10:56:01.031][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
[2020-02-28T10:57:00.599Z] "- - -" 0 - "-" "-" 1432 1491 226010 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:49060 10.8.2.28:3306 10.8.0.67:49058 -
[2020-02-28T10:56:51.178Z] "- - -" 0 - "-" "-" 48058 142271 235436 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:48996 10.8.2.28:3306 10.8.0.67:48994 -
[2020-02-28T10:56:50.072Z] "- - -" 0 - "-" "-" 2350 2551 236544 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:48980 10.8.2.28:3306 10.8.0.67:48978 -
[2020-02-28 11:01:02.564][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
[2020-02-28T11:02:08.889Z] "- - -" 0 - "-" "-" 1432 1491 247625 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:50964 10.8.2.28:3306 10.8.0.67:50962 -
[2020-02-28T11:01:59.772Z] "- - -" 0 - "-" "-" 48058 142271 256746 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:50908 10.8.2.28:3306 10.8.0.67:50906 -
[2020-02-28T11:01:58.988Z] "- - -" 0 - "-" "-" 2350 2551 257532 - "-" "-" "-" "-" "10.8.2.28:3306" outbound|3306||mysql.lms.svc.cluster.local 10.8.0.67:50902 10.8.2.28:3306 10.8.0.67:50900 -
[2020-02-28 11:07:05.000][12][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
Best Answer
After a lot of headaches I found that the server actually starts fine and is reachable from inside the pod. The real root cause was obscured because the failing health checks kept forcing the container to be recreated. In the end, adding the annotation sidecar.istio.io/rewriteAppHTTPProbers: "true" solved the problem.
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
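A minimal way to verify the fix, assuming the pod name from the events above (a sketch, not from the original answer):

# Watch until the Keycloak pod reports Ready
kubectl -n lms get pods -l app=keycloak -w

# Inspect the readiness probe of the running pod; with probe rewriting
# enabled, the sidecar injector redirects it to the pilot-agent status
# port (15020) so the kubelet no longer hits 8080 through the sidecar
kubectl -n lms get pod keycloak-8cccb54c6-7czmq \
  -o jsonpath='{.spec.containers[?(@.name=="keycloak")].readinessProbe}'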
Regarding "kubernetes - Keycloak and Istio service mesh not working", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60450545/