I'm trying to create an AWS EKS cluster with an ALB ingress using Terraform resources.

This document states that the ingress will automatically create a load balancer with associated listeners and target groups.

The Kubernetes Ingress creates the ALB load balancer, security group and rules, but does not create the target group or listeners. I have tried using either the gateway or the application subnets, but it makes no difference. I tried setting the security group, but the ALB set up and used its own self-managed security group.

I'm relying on this guide.

A curl to the ALB gets me:

Failed to connect to de59ecbf-default-mainingre-8687-1051686593.ap-southeast-1.elb.amazonaws.com port 80: Connection refused

I also tried applying just the Kubernetes ingress separately with kubectl, but got the same result. It creates the ALB and a security group with the port rules, but no target group or listeners.

When I paste the cluster endpoint from aws eks describe-cluster --name my-tf-eks-cluster --query "cluster.endpoint" into the browser, I get this:

{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": {}, "code": 403 }
kubectl describe ingresses
Name: main-ingress
Namespace: default
Address:
Default backend: go-hello-world:8080 (<none>)
Rules:
Host Path Backends
---- ---- --------
* * go-hello-world:8080 (<none>)
aws eks describe-cluster --name my-tf-eks-cluster --query "cluster.endpoint"
"https://88888888B.gr7.ap-southeast-1.eks.amazonaws.com"
curl https://88888888B.gr7.ap-southeast-1.eks.amazonaws.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
Edit: the IAM cluster policy was missing these permissions. I have since decided I may be better off using ELBs, since they can terminate SSL certificates and I can then use traefik as a backend proxy, so I can't really test this any more. Can anyone confirm whether the ALB needs these permissions?
"elasticloadbalancing:DescribeListenerCertificates",
"elasticloadbalancing:AddListenerCertificates",
"elasticloadbalancing:RemoveListenerCertificates"
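If those permissions do turn out to be required, one way to grant them is an inline policy on the role in question. This is only a sketch: the resource name is made up, and in many setups it is the worker-node role (or the controller pod's role via IRSA), not the cluster role, that needs the elasticloadbalancing permissions.

```hcl
# Hypothetical inline policy granting the listener-certificate actions.
# The name and the attachment to the cluster role are assumptions.
resource "aws_iam_role_policy" "alb_listener_certs" {
  name = "alb-listener-certificates"
  role = data.aws_iam_role.tf-eks-master.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "elasticloadbalancing:DescribeListenerCertificates",
        "elasticloadbalancing:AddListenerCertificates",
        "elasticloadbalancing:RemoveListenerCertificates"
      ]
      Resource = "*"
    }]
  })
}
```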
Here is my EKS master resource:
data "aws_iam_role" "tf-eks-master" {
  name = "terraform-eks-cluster"
}

resource "aws_eks_cluster" "tf_eks" {
  name     = var.cluster_name
  role_arn = data.aws_iam_role.tf-eks-master.arn

  vpc_config {
    security_group_ids      = [aws_security_group.master.id]
    subnet_ids              = var.application_subnet_ids
    endpoint_private_access = true
    endpoint_public_access  = true
  }
}
The ALB ingress controller:
output "vpc_id" {
  value = data.aws_vpc.selected
}

data "aws_subnet_ids" "selected" {
  vpc_id = data.aws_vpc.selected.id

  tags = map(
    "Name", "application",
  )
}
resource "kubernetes_deployment" "alb-ingress" {
  metadata {
    name = "alb-ingress-controller"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
    namespace = "kube-system"
  }

  spec {
    selector {
      match_labels = {
        "app.kubernetes.io/name" = "alb-ingress-controller"
      }
    }

    template {
      metadata {
        labels = {
          "app.kubernetes.io/name" = "alb-ingress-controller"
        }
      }

      spec {
        volume {
          name = kubernetes_service_account.alb-ingress.default_secret_name
          secret {
            secret_name = kubernetes_service_account.alb-ingress.default_secret_name
          }
        }

        container {
          # This is where you change the version when Amazon comes out with a new version of the ingress controller
          image = "docker.io/amazon/aws-alb-ingress-controller:v1.1.8"
          name  = "alb-ingress-controller"

          args = [
            "--ingress-class=alb",
            "--cluster-name=${var.cluster_name}",
            "--aws-vpc-id=${data.aws_vpc.selected.id}",
            "--aws-region=${var.aws_region}"
          ]

          volume_mount {
            name       = kubernetes_service_account.alb-ingress.default_secret_name
            mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
            read_only  = true
          }
        }

        service_account_name = "alb-ingress-controller"
      }
    }
  }
}

resource "kubernetes_service_account" "alb-ingress" {
  metadata {
    name      = "alb-ingress-controller"
    namespace = "kube-system"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
  }

  automount_service_account_token = true
}
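A frequent cause of this controller version creating the ALB and security group but never the listeners or target groups is that the IAM role its pod ultimately uses (the worker-node role, unless IRSA is configured) lacks the controller's published IAM policy. A sketch only, assuming a node role named terraform-eks-node and the policy JSON downloaded from the kubernetes-sigs/aws-alb-ingress-controller repository:

```hcl
# Sketch: "terraform-eks-node" and the iam-policy.json path are assumptions.
# The policy document is the one published in the
# kubernetes-sigs/aws-alb-ingress-controller repo (docs/examples/iam-policy.json).
resource "aws_iam_policy" "alb_ingress_controller" {
  name   = "ALBIngressControllerIAMPolicy"
  policy = file("${path.module}/iam-policy.json")
}

resource "aws_iam_role_policy_attachment" "alb_ingress_controller" {
  role       = "terraform-eks-node" # assumed worker-node role name
  policy_arn = aws_iam_policy.alb_ingress_controller.arn
}
```

If permissions are the problem, the controller's logs would show AccessDenied errors on elasticloadbalancing calls, which would match the symptom of an ALB with no listeners.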
kubernetes_ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/subnets: 'subnet-0ab65d9cec9451287, subnet-034bf8856ab9157b7, subnet-0c16b1d382fadd0b4'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
spec:
  backend:
    serviceName: go-hello-world
    servicePort: 8080
Roles:
resource "kubernetes_cluster_role" "alb-ingress" {
  metadata {
    name = "alb-ingress-controller"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
  }

  rule {
    api_groups = ["", "extensions"]
    resources  = ["configmaps", "endpoints", "events", "ingresses", "ingresses/status", "services"]
    verbs      = ["create", "get", "list", "update", "watch", "patch"]
  }

  rule {
    api_groups = ["", "extensions"]
    resources  = ["nodes", "pods", "secrets", "services", "namespaces"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "alb-ingress" {
  metadata {
    name = "alb-ingress-controller"
    labels = {
      "app.kubernetes.io/name" = "alb-ingress-controller"
    }
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "alb-ingress-controller"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "alb-ingress-controller"
    namespace = "kube-system"
  }
}
Some code from the VPC:
data "aws_availability_zones" "available" {}

resource "aws_subnet" "gateway" {
  count             = var.subnet_count
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "10.0.1${count.index}.0/24"
  vpc_id            = aws_vpc.tf_eks.id
  tags = map(
    "Name", "gateway",
  )
}

resource "aws_subnet" "application" {
  count             = var.subnet_count
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "10.0.2${count.index}.0/24"
  vpc_id            = aws_vpc.tf_eks.id
  tags = map(
    "Name", "application",
    "kubernetes.io/cluster/${var.cluster_name}", "shared",
    "kubernetes.io/role/elb", "1",
  )
}

resource "aws_subnet" "database" {
  count             = var.subnet_count
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "10.0.3${count.index}.0/24"
  vpc_id            = aws_vpc.tf_eks.id
  tags = map(
    "Name", "database"
  )
}

resource "aws_route_table" "application" {
  count  = var.subnet_count
  vpc_id = aws_vpc.tf_eks.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.tf_eks.*.id[count.index]
  }
  tags = {
    Name = "application"
  }
}

resource "aws_route_table" "database" {
  vpc_id = aws_vpc.tf_eks.id
  tags = {
    Name = "database"
  }
}

resource "aws_route_table" "gateway" {
  vpc_id = aws_vpc.tf_eks.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.tf_eks.id
  }
  tags = {
    Name = "gateway"
  }
}

resource "aws_route_table_association" "application" {
  count          = var.subnet_count
  subnet_id      = aws_subnet.application.*.id[count.index]
  route_table_id = aws_route_table.application.*.id[count.index]
}

resource "aws_route_table_association" "database" {
  count          = var.subnet_count
  subnet_id      = aws_subnet.database.*.id[count.index]
  route_table_id = aws_route_table.database.id
}

resource "aws_route_table_association" "gateway" {
  count          = var.subnet_count
  subnet_id      = aws_subnet.gateway.*.id[count.index]
  route_table_id = aws_route_table.gateway.id
}

resource "aws_internet_gateway" "tf_eks" {
  vpc_id = aws_vpc.tf_eks.id
  tags = {
    Name = "internet_gateway"
  }
}

resource "aws_eip" "nat_gateway" {
  count = var.subnet_count
  vpc   = true
}

resource "aws_nat_gateway" "tf_eks" {
  count         = var.subnet_count
  allocation_id = aws_eip.nat_gateway.*.id[count.index]
  subnet_id     = aws_subnet.gateway.*.id[count.index]
  tags = {
    Name = "nat_gateway"
  }
  depends_on = [aws_internet_gateway.tf_eks]
}
Security groups:
resource "aws_security_group" "eks" {
  name        = "tf-eks-master"
  description = "Cluster communication with worker nodes"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "node" {
  name        = "tf-eks-node"
  description = "Security group for all nodes in the cluster"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group_rule" "main-node-ingress-self" {
  type              = "ingress"
  description       = "Allow node to communicate with each other"
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.node.id
  to_port           = 65535
  cidr_blocks       = var.subnet_cidrs
}

resource "aws_security_group_rule" "main-node-ingress-cluster" {
  type                     = "ingress"
  description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  from_port                = 1025
  protocol                 = "tcp"
  security_group_id        = aws_security_group.node.id
  source_security_group_id = aws_security_group.eks.id
  to_port                  = 65535
}
kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/go-hello-world-68545f84bc-5st4s 1/1 Running 0 35s
default pod/go-hello-world-68545f84bc-bkwpb 1/1 Running 0 35s
default pod/go-hello-world-68545f84bc-kmfbq 1/1 Running 0 35s
kube-system pod/alb-ingress-controller-5f9cb4b7c4-w858g 1/1 Running 0 2m7s
kube-system pod/aws-node-8jfkf 1/1 Running 0 67m
kube-system pod/aws-node-d7s7w 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-g5fmj 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-q5tz5 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-tmzmr 1/1 Running 0 67m
kube-system pod/aws-node-vswpf 1/1 Running 0 67m
kube-system pod/coredns-5c4dd4cc7-sk474 1/1 Running 0 71m
kube-system pod/coredns-5c4dd4cc7-zplwg 1/1 Running 0 71m
kube-system pod/kube-proxy-5m9dn 1/1 Running 0 67m
kube-system pod/kube-proxy-8tn9l 1/1 Running 0 67m
kube-system pod/kube-proxy-qs652 1/1 Running 0 67m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 71m
kube-system service/kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 71m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 71m
kube-system daemonset.apps/aws-node-termination-handler 3 3 3 3 3 <none> 68m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 71m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/go-hello-world 3/3 3 3 37s
kube-system deployment.apps/alb-ingress-controller 1/1 1 1 2m9s
kube-system deployment.apps/coredns 2/2 2 2 71m
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/go-hello-world-68545f84bc 3 3 3 37s
kube-system replicaset.apps/alb-ingress-controller-5f9cb4b7c4 1 1 1 2m9s
kube-system replicaset.apps/coredns-5c4dd4cc7 2 2
Best Answer

Could you try adding these lines and then running the kubectl command again?
# ALB's Target Group Configurations
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
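For placement, these annotations would be merged into the metadata.annotations block of the existing kubernetes_ingress.yml. A sketch only: the HTTPS values are the answer's suggestion and assume the go-hello-world backend actually serves TLS.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    # ALB's Target Group Configurations (the two suggested lines)
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
spec:
  backend:
    serviceName: go-hello-world
    servicePort: 8080
```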
Then check the controller logs:
kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
A similar question about Terraform AWS EKS ALB Kubernetes Ingress not creating listeners or target groups can be found on Stack Overflow: https://stackoverflow.com/questions/62619448/