
kubernetes - Running Nexus 3 with Docker in a Kubernetes cluster

Reposted · Author: 行者123 · Updated: 2023-12-02 11:33:56

What is the best setup for running sonatype/nexus3 in Kubernetes that allows using a Docker repository?

Currently I have a basic setup:

  • A Deployment of sonatype/nexus3
  • An internal Service exposing ports 80 and 5000
  • Ingress + kube-lego providing HTTPS access to the Nexus UI

  • How do I get around the Ingress limitation of not allowing multiple ports?

    Best Answer

    tl;dr

    Nexus needs to be served over SSL, otherwise docker will refuse to connect to it. This can be achieved with a k8s Ingress + kube-lego for Let's Encrypt certificates. Any other real certificate works as well. However, in order to serve both the Nexus UI and the docker registry through a single Ingress (and therefore a single port), a reverse proxy is needed behind the Ingress to detect the docker user agent and forward those requests to the registry.

                                                                             --(IF user agent docker) --> [nexus service]nexus:5000 --> docker registry
                                                                            |
    [nexus ingress]nexus.example.com:80/ --> [proxy service]internal-proxy:80 -->|
                                                                            |
                                                                             --(ELSE) --> [nexus service]nexus:80 --> nexus UI

    Start the nexus server

    nexus-deployment.yaml
    This uses an azureFile volume, but you can use any volume. The secret is not shown, for obvious reasons.
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nexus
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nexus
        spec:
          containers:
          - name: nexus
            image: sonatype/nexus3:3.3.1
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8081
            - containerPort: 5000
            volumeMounts:
            - name: nexus-data
              mountPath: /nexus-data
            resources:
              requests:
                cpu: 440m
                memory: 3.3Gi
              limits:
                cpu: 440m
                memory: 3.3Gi
          volumes:
          - name: nexus-data
            azureFile:
              secretName: azure-file-storage-secret
              shareName: nexus-data
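    The azure-file-storage-secret referenced by the azureFile volume is deliberately not shown; as a hedged sketch, it could look like the fragment below (the azurestorageaccountname / azurestorageaccountkey keys follow the azureFile volume plugin's convention; the values are placeholders, not real credentials):

    ```yaml
    # Hypothetical Secret backing the azureFile volume above.
    # Replace the placeholder values with your base64-encoded credentials.
    apiVersion: v1
    kind: Secret
    metadata:
      name: azure-file-storage-secret
      namespace: default
    type: Opaque
    data:
      azurestorageaccountname: <base64-encoded storage account name>
      azurestorageaccountkey: <base64-encoded storage account key>
    ```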

    It is always a good idea to add liveness and readiness probes so that kubernetes can detect when the app goes down. Hitting the index.html page does not always work well, so I use the REST API instead. This requires adding an Authorization header for a user with the nx-script-*-browse privilege. Obviously you will have to start the system without the probes first to set up the user, then update your deployment afterwards.
          readinessProbe:
            httpGet:
              path: /service/siesta/rest/v1/script
              port: 8081
              httpHeaders:
              - name: Authorization
                # The authorization token is simply the base64 encoding of the `healthprobe` user's credentials:
                # $ echo -n user:password | base64
                value: Basic dXNlcjpwYXNzd29yZA==
            initialDelaySeconds: 900
            timeoutSeconds: 60
          livenessProbe:
            httpGet:
              path: /service/siesta/rest/v1/script
              port: 8081
              httpHeaders:
              - name: Authorization
                value: Basic dXNlcjpwYXNzd29yZA==
            initialDelaySeconds: 900
            timeoutSeconds: 60
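    The Authorization value is plain HTTP Basic auth, so the token can be derived in one line of shell (user:password here is a placeholder for the real healthprobe credentials):

    ```shell
    # Base64-encode the probe user's credentials for the Authorization header.
    # 'user:password' is a placeholder; substitute the real healthprobe credentials.
    token=$(printf '%s' 'user:password' | base64)
    echo "Basic $token"
    # -> Basic dXNlcjpwYXNzd29yZA==
    ```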

    Because nexus can sometimes take a long time to start, I use a very generous initial delay and timeout.

    nexus-service.yaml exposes port 80 for the UI and port 5000 for the registry. The registry port must match the port configured for the docker repository through the UI.
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nexus
      name: nexus
      namespace: default
      selfLink: /api/v1/namespaces/default/services/nexus
    spec:
      ports:
      - name: http
        port: 80
        targetPort: 8081
      - name: docker
        port: 5000
        targetPort: 5000
      selector:
        app: nexus
      type: ClusterIP

    Start the reverse proxy (nginx)

    proxy-configmap.yaml adds nginx.conf as a ConfigMap data volume. It includes the rule for detecting the docker user agent, and it relies on kubernetes DNS to reach nexus as an upstream service.
    apiVersion: v1
    data:
      nginx.conf: |
        worker_processes auto;

        events {
          worker_connections 1024;
        }

        http {
          error_log /var/log/nginx/error.log warn;
          access_log /dev/null;
          proxy_intercept_errors off;
          proxy_send_timeout 120;
          proxy_read_timeout 300;

          upstream nexus {
            server nexus:80;
          }

          upstream registry {
            server nexus:5000;
          }

          server {
            listen 80;
            server_name nexus.example.com;

            keepalive_timeout 5 5;
            proxy_buffering off;

            # allow large uploads
            client_max_body_size 1G;

            location / {
              # redirect to docker registry
              if ($http_user_agent ~ docker ) {
                proxy_pass http://registry;
              }
              proxy_pass http://nexus;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto "https";
            }
          }
        }
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: internal-proxy-conf
      namespace: default
      selfLink: /api/v1/namespaces/default/configmaps/internal-proxy-conf
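    The routing decision made by the `if ($http_user_agent ~ docker)` rule can be sanity-checked outside the cluster; here is a minimal shell mimic of it (the user-agent strings are illustrative examples, not values Nexus requires):

    ```shell
    # Mimic nginx's substring match on the User-Agent header:
    # docker clients go to the registry upstream, everything else to the UI.
    route() {
      case "$1" in
        *docker*) echo "registry" ;;  # e.g. "docker/17.03.1-ce go/go1.7.5"
        *)        echo "nexus"    ;;
      esac
    }

    route "docker/17.03.1-ce go/go1.7.5"    # -> registry
    route "Mozilla/5.0 (X11; Linux x86_64)" # -> nexus
    ```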

    proxy-deployment.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: internal-proxy
      namespace: default
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            proxy: internal
        spec:
          containers:
          - name: nginx
            image: nginx:1.11-alpine
            imagePullPolicy: IfNotPresent
            lifecycle:
              preStop:
                exec:
                  command: ["/usr/sbin/nginx","-s","quit"]
            volumeMounts:
            - name: internal-proxy-conf
              mountPath: /etc/nginx/
            env:
            # This is a workaround to easily force a restart by incrementing the value (numbers must be quoted)
            # NGINX needs to be restarted for configuration changes, especially DNS changes, to be detected
            - name: RESTART_
              value: "0"
          volumes:
          - name: internal-proxy-conf
            configMap:
              name: internal-proxy-conf
              items:
              - key: nginx.conf
                path: nginx.conf

    proxy-service.yaml The proxy's type is ClusterIP because the Ingress will forward traffic to it. Port 443 is not used in this example.
    kind: Service
    apiVersion: v1
    metadata:
      name: internal-proxy
      namespace: default
    spec:
      selector:
        proxy: internal
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443
      type: ClusterIP

    Create the Ingress

    nexus-ingress.yaml This step assumes you have an nginx ingress controller. If you already have a certificate you don't need an Ingress and can instead expose the proxy service directly, but you will lose the automation benefits of kube-lego.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nexus
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
        kubernetes.io/tls-acme: "true"
    spec:
      tls:
      - hosts:
        - nexus.example.com
        secretName: nexus-tls
      rules:
      - host: nexus.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: internal-proxy
              servicePort: 80

    For kubernetes - Running Nexus 3 with Docker in a Kubernetes cluster, see the similar question on Stack Overflow: https://stackoverflow.com/questions/42766349/
