I'm trying to create an AWS EKS cluster with an ALB ingress using Terraform resources.
The documentation states that the ingress will automatically create a load balancer with an associated listener and target groups.
The Kubernetes ingress creates the ALB load balancer, a security group and rules, but it does not create the target groups or listeners. I have tried using either the gateway or the application subnets, but it makes no difference. I tried setting a security group, but the ALB set up and used its own self-managed security group.
I'm relying on this guide.
A curl to the ALB gives me:
Failed to connect to de59ecbf-default-mainingre-8687-1051686593.ap-southeast-1.elb.amazonaws.com port 80: Connection refused
I tried applying the Kubernetes ingress separately with kubectl, but the result was the same: it creates the ALB and a security group with the port rules, but no target groups or listeners.
aws eks describe-cluster --name my-tf-eks-cluster --query "cluster.endpoint"
Pasting the cluster endpoint into a browser, I get this:
{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": {}, "code": 403 }
kubectl describe ingresses
Name:             main-ingress
Namespace:        default
Address:
Default backend:  go-hello-world:8080 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     go-hello-world:8080 (<none>)
aws eks describe-cluster --name my-tf-eks-cluster --query "cluster.endpoint"
"https://88888888B.gr7.ap-southeast-1.eks.amazonaws.com"
curl https://88888888B.gr7.ap-southeast-1.eks.amazonaws.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
Edit: the IAM cluster policy was missing these permissions. I have since decided it may be better to use ELBs instead, since they can terminate SSL certificates and I can then use Traefik as a backend proxy, so I can't really test this right now. Can anyone confirm whether the ALB needs these permissions?
"elasticloadbalancing:DescribeListenerCertificates",
"elasticloadbalancing:AddListenerCertificates",
"elasticloadbalancing:RemoveListenerCertificates"
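For context, these actions would sit alongside the other elasticloadbalancing permissions in the policy attached to the ALB ingress controller's role. A minimal illustrative fragment (the surrounding actions shown here come from the upstream aws-alb-ingress-controller example policy; compare against the policy shipped with the controller version you actually deploy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:DescribeListenerCertificates",
        "elasticloadbalancing:AddListenerCertificates",
        "elasticloadbalancing:RemoveListenerCertificates",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DescribeTargetGroups"
      ],
      "Resource": "*"
    }
  ]
}
```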
Here is my EKS master resource:
data "aws_iam_role" "tf-eks-master" {
  name = "terraform-eks-cluster"
}

resource "aws_eks_cluster" "tf_eks" {
  name     = var.cluster_name
  role_arn = data.aws_iam_role.tf-eks-master.arn

  vpc_config {
    security_group_ids      = [aws_security_group.master.id]
    subnet_ids              = var.application_subnet_ids
    endpoint_private_access = true
    endpoint_public_access  = true
  }
}
The ALB ingress controller:
output "vpc_id" {
  value = data.aws_vpc.selected
}

data "aws_subnet_ids" "selected" {
  vpc_id = data.aws_vpc.selected.id

  tags = map(
    "Name", "application",
  )
}
resource "kubernetes_deployment" "alb-ingress" {
  metadata {
    name = "alb-ingress-controller"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
    namespace = "kube-system"
  }

  spec {
    selector {
      match_labels = {
        "app.kubernetes.io/name" = "alb-ingress-controller"
      }
    }

    template {
      metadata {
        labels = {
          "app.kubernetes.io/name" = "alb-ingress-controller"
        }
      }

      spec {
        volume {
          name = kubernetes_service_account.alb-ingress.default_secret_name
          secret {
            secret_name = kubernetes_service_account.alb-ingress.default_secret_name
          }
        }

        container {
          # This is where you change the version when Amazon comes out with a new version of the ingress controller
          image = "docker.io/amazon/aws-alb-ingress-controller:v1.1.8"
          name  = "alb-ingress-controller"

          args = [
            "--ingress-class=alb",
            "--cluster-name=${var.cluster_name}",
            "--aws-vpc-id=${data.aws_vpc.selected.id}",
            "--aws-region=${var.aws_region}"
          ]

          volume_mount {
            name       = kubernetes_service_account.alb-ingress.default_secret_name
            mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
            read_only  = true
          }
        }

        service_account_name = "alb-ingress-controller"
      }
    }
  }
}
resource "kubernetes_service_account" "alb-ingress" {
  metadata {
    name      = "alb-ingress-controller"
    namespace = "kube-system"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
  }

  automount_service_account_token = true
}
kubernetes_ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/subnets: 'subnet-0ab65d9cec9451287, subnet-034bf8856ab9157b7, subnet-0c16b1d382fadd0b4'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
spec:
  backend:
    serviceName: go-hello-world
    servicePort: 8080
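As an aside, `extensions/v1beta1` Ingress is deprecated (and removed in Kubernetes 1.22). On newer clusters, an equivalent of the manifest above under `networking.k8s.io/v1` would look roughly like this (same names and annotations, untested against this particular setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
spec:
  defaultBackend:
    service:
      name: go-hello-world
      port:
        number: 8080
```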
The roles:
resource "kubernetes_cluster_role" "alb-ingress" {
  metadata {
    name = "alb-ingress-controller"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
  }

  rule {
    api_groups = ["", "extensions"]
    resources  = ["configmaps", "endpoints", "events", "ingresses", "ingresses/status", "services"]
    verbs      = ["create", "get", "list", "update", "watch", "patch"]
  }

  rule {
    api_groups = ["", "extensions"]
    resources  = ["nodes", "pods", "secrets", "services", "namespaces"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "alb-ingress" {
  metadata {
    name = "alb-ingress-controller"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "alb-ingress-controller"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "alb-ingress-controller"
    namespace = "kube-system"
  }
}
Some code from the VPC:
data "aws_availability_zones" "available" {}
resource "aws_subnet" "gateway" {
  count             = var.subnet_count
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "10.0.1${count.index}.0/24"
  vpc_id            = aws_vpc.tf_eks.id

  tags = map(
    "Name", "gateway",
  )
}
resource "aws_subnet" "application" {
  count             = var.subnet_count
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "10.0.2${count.index}.0/24"
  vpc_id            = aws_vpc.tf_eks.id

  tags = map(
    "Name", "application",
    "kubernetes.io/cluster/${var.cluster_name}", "shared",
    "kubernetes.io/role/elb", "1",
  )
}
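One thing worth checking here: the ALB ingress controller discovers subnets by tag. For an internet-facing ALB it expects public subnets tagged `kubernetes.io/role/elb = 1` (internal ALBs use `kubernetes.io/role/internal-elb`), along with the `kubernetes.io/cluster/<name> = shared` tag. In this layout those tags sit on the private application subnets behind the NAT gateways. A sketch of how the public gateway subnets could be tagged instead, assuming the gateway subnets are the ones with the internet gateway route (illustrative only, adapt to your layout):

```hcl
resource "aws_subnet" "gateway" {
  count             = var.subnet_count
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "10.0.1${count.index}.0/24"
  vpc_id            = aws_vpc.tf_eks.id

  tags = map(
    "Name", "gateway",
    "kubernetes.io/cluster/${var.cluster_name}", "shared",
    "kubernetes.io/role/elb", "1",
  )
}
```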
resource "aws_subnet" "database" {
  count             = var.subnet_count
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "10.0.3${count.index}.0/24"
  vpc_id            = aws_vpc.tf_eks.id

  tags = map(
    "Name", "database"
  )
}
resource "aws_route_table" "application" {
  count  = var.subnet_count
  vpc_id = aws_vpc.tf_eks.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.tf_eks.*.id[count.index]
  }

  tags = {
    Name = "application"
  }
}

resource "aws_route_table" "database" {
  vpc_id = aws_vpc.tf_eks.id

  tags = {
    Name = "database"
  }
}

resource "aws_route_table" "gateway" {
  vpc_id = aws_vpc.tf_eks.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.tf_eks.id
  }

  tags = {
    Name = "gateway"
  }
}

resource "aws_route_table_association" "application" {
  count          = var.subnet_count
  subnet_id      = aws_subnet.application.*.id[count.index]
  route_table_id = aws_route_table.application.*.id[count.index]
}

resource "aws_route_table_association" "database" {
  count          = var.subnet_count
  subnet_id      = aws_subnet.database.*.id[count.index]
  route_table_id = aws_route_table.database.id
}

resource "aws_route_table_association" "gateway" {
  count          = var.subnet_count
  subnet_id      = aws_subnet.gateway.*.id[count.index]
  route_table_id = aws_route_table.gateway.id
}

resource "aws_internet_gateway" "tf_eks" {
  vpc_id = aws_vpc.tf_eks.id

  tags = {
    Name = "internet_gateway"
  }
}

resource "aws_eip" "nat_gateway" {
  count = var.subnet_count
  vpc   = true
}

resource "aws_nat_gateway" "tf_eks" {
  count         = var.subnet_count
  allocation_id = aws_eip.nat_gateway.*.id[count.index]
  subnet_id     = aws_subnet.gateway.*.id[count.index]

  tags = {
    Name = "nat_gateway"
  }

  depends_on = [aws_internet_gateway.tf_eks]
}
Security groups:
resource "aws_security_group" "eks" {
  name        = "tf-eks-master"
  description = "Cluster communication with worker nodes"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "node" {
  name        = "tf-eks-node"
  description = "Security group for all nodes in the cluster"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group_rule" "main-node-ingress-self" {
  type              = "ingress"
  description       = "Allow node to communicate with each other"
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.node.id
  to_port           = 65535
  cidr_blocks       = var.subnet_cidrs
}

resource "aws_security_group_rule" "main-node-ingress-cluster" {
  type                     = "ingress"
  description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  from_port                = 1025
  protocol                 = "tcp"
  security_group_id        = aws_security_group.node.id
  source_security_group_id = aws_security_group.eks.id
  to_port                  = 65535
}
kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/go-hello-world-68545f84bc-5st4s 1/1 Running 0 35s
default pod/go-hello-world-68545f84bc-bkwpb 1/1 Running 0 35s
default pod/go-hello-world-68545f84bc-kmfbq 1/1 Running 0 35s
kube-system pod/alb-ingress-controller-5f9cb4b7c4-w858g 1/1 Running 0 2m7s
kube-system pod/aws-node-8jfkf 1/1 Running 0 67m
kube-system pod/aws-node-d7s7w 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-g5fmj 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-q5tz5 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-tmzmr 1/1 Running 0 67m
kube-system pod/aws-node-vswpf 1/1 Running 0 67m
kube-system pod/coredns-5c4dd4cc7-sk474 1/1 Running 0 71m
kube-system pod/coredns-5c4dd4cc7-zplwg 1/1 Running 0 71m
kube-system pod/kube-proxy-5m9dn 1/1 Running 0 67m
kube-system pod/kube-proxy-8tn9l 1/1 Running 0 67m
kube-system pod/kube-proxy-qs652 1/1 Running 0 67m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 71m
kube-system service/kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 71m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 71m
kube-system daemonset.apps/aws-node-termination-handler 3 3 3 3 3 <none> 68m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 71m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/go-hello-world 3/3 3 3 37s
kube-system deployment.apps/alb-ingress-controller 1/1 1 1 2m9s
kube-system deployment.apps/coredns 2/2 2 2 71m
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/go-hello-world-68545f84bc 3 3 3 37s
kube-system replicaset.apps/alb-ingress-controller-5f9cb4b7c4 1 1 1 2m9s
kube-system replicaset.apps/coredns-5c4dd4cc7 2 2
Best answer
Can you try adding these lines and running the kubectl command again?
# ALB's Target Group Configurations
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
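If it helps, a sketch of where those annotations would sit in the ingress manifest above (assuming the backend really serves HTTPS; if go-hello-world only speaks plain HTTP, then HTTP would be the value to use instead):

```yaml
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    # ALB's Target Group Configurations
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
```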
Then check the controller logs:
kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
Regarding "Terraform AWS EKS ALB Kubernetes Ingress won't create listeners or target groups", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62619448/