
node.js - DataDog GKE NestJS integration using DataDog's Helm chart


I am trying to deploy my service and have the Datadog agent read my local log file from inside the pod.

I am using DataDog's Helm chart with the following values:

## Default values for Datadog Agent
## See Datadog helm documentation to learn more:
## https://docs.datadoghq.com/agent/kubernetes/helm/

## @param image - object - required
## Define the Datadog image to work with.
#
image:

  ## @param repository - string - required
  ## Define the repository to use:
  ## use "datadog/agent" for Datadog Agent 6
  ## use "datadog/dogstatsd" for Standalone Datadog Agent DogStatsD6
  #
  repository: datadog/agent

  ## @param tag - string - required
  ## Define the Agent version to use.
  ## Use 6.13.0-jmx to enable jmx fetch collection
  #
  tag: 6.13.0

  ## @param pullPolicy - string - required
  ## The Kubernetes pull policy.
  #
  pullPolicy: IfNotPresent

  ## @param pullSecrets - list of key:value strings - optional
  ## It is possible to specify docker registry credentials
  ## See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  #
  # pullSecrets:
  #   - name: "<REG_SECRET>"

nameOverride: ""
fullnameOverride: ""

datadog:

  ## @param apiKey - string - required
  ## Set this to your Datadog API key before the Agent runs.
  ## ref: https://app.datadoghq.com/account/settings#agent/kubernetes
  #
  apiKey: "xxxxxxxx"

  ## @param apiKeyExistingSecret - string - optional
  ## Use existing Secret which stores API key instead of creating a new one.
  ## If set, this parameter takes precedence over "apiKey".
  #
  # apiKeyExistingSecret: <DATADOG_API_KEY_SECRET>

  ## @param appKey - string - optional
  ## If you are using clusterAgent.metricsProvider.enabled = true, you must set
  ## a Datadog application key for read access to your metrics.
  #
  appKey: "xxxxxx"

  ## @param appKeyExistingSecret - string - optional
  ## Use existing Secret which stores APP key instead of creating a new one.
  ## If set, this parameter takes precedence over "appKey".
  #
  # appKeyExistingSecret: <DATADOG_APP_KEY_SECRET>

  ## @param securityContext - object - optional
  ## You can modify the security context used to run the containers by
  ## modifying the label type below:
  #
  # securityContext:
  #   seLinuxOptions:
  #     seLinuxLabel: "spc_t"

  ## @param clusterName - string - optional
  ## Set a unique cluster name to allow scoping hosts and Cluster Checks easily
  #
  # clusterName: <CLUSTER_NAME>

  ## @param name - string - required
  ## Daemonset/Deployment container name
  ## See clusterAgent.containerName if clusterAgent.enabled = true
  #
  name: datadog

  ## @param site - string - optional - default: 'datadoghq.com'
  ## The site of the Datadog intake to send Agent data to.
  ## Set to 'datadoghq.eu' to send data to the EU site.
  #
  # site: datadoghq.com

  ## @param dd_url - string - optional - default: 'https://app.datadoghq.com'
  ## The host of the Datadog intake server to send Agent data to, only set this option
  ## if you need the Agent to send data to a custom URL.
  ## Overrides the site setting defined in "site".
  #
  # dd_url: https://app.datadoghq.com

  ## @param logLevel - string - required
  ## Set logging verbosity, valid log levels are:
  ## trace, debug, info, warn, error, critical, and off
  #
  logLevel: INFO

  ## @param podLabelsAsTags - list of key:value strings - optional
  ## Provide a mapping of Kubernetes Labels to Datadog Tags.
  #
  # podLabelsAsTags:
  #   app: kube_app
  #   release: helm_release
  #   <KUBERNETES_LABEL>: <DATADOG_TAG_KEY>

  ## @param podAnnotationsAsTags - list of key:value strings - optional
  ## Provide a mapping of Kubernetes Annotations to Datadog Tags
  #
  # podAnnotationsAsTags:
  #   iam.amazonaws.com/role: kube_iamrole
  #   <KUBERNETES_ANNOTATIONS>: <DATADOG_TAG_KEY>

  ## @param tags - list of key:value elements - optional
  ## List of tags to attach to every metric, event and service check collected by this Agent.
  ##
  ## Learn more about tagging: https://docs.datadoghq.com/tagging/
  #
  # tags:
  #   - <KEY_1>:<VALUE_1>
  #   - <KEY_2>:<VALUE_2>

  ## @param useCriSocketVolume - boolean - required
  ## Enable container runtime socket volume mounting
  #
  useCriSocketVolume: true

  ## @param dogstatsdOriginDetection - boolean - optional
  ## Enable origin detection for container tagging
  ## https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging
  #
  # dogstatsdOriginDetection: true

  ## @param useDogStatsDSocketVolume - boolean - optional
  ## Enable dogstatsd over Unix Domain Socket
  ## ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/
  #
  # useDogStatsDSocketVolume: true

  ## @param nonLocalTraffic - boolean - optional - default: false
  ## Enable this to make each node accept non-local statsd traffic.
  ## ref: https://github.com/DataDog/docker-dd-agent#environment-variables
  #
  nonLocalTraffic: true

  ## @param collectEvents - boolean - optional - default: false
  ## Enable this to start event collection from the kubernetes API.
  ## ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/
  #
  collectEvents: true

  ## @param leaderElection - boolean - optional - default: false
  ## Enables the leader election mechanism for event collection.
  #
  # leaderElection: false

  ## @param leaderLeaseDuration - integer - optional - default: 60
  ## Set the lease time for leader election, in seconds.
  #
  # leaderLeaseDuration: 60

  ## @param logsEnabled - boolean - optional - default: false
  ## Enable this to activate Datadog Agent log collection.
  ## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
  #
  logsEnabled: true

  ## @param logsConfigContainerCollectAll - boolean - optional - default: false
  ## Enable this to allow log collection for all containers.
  ## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
  #
  logsConfigContainerCollectAll: true

  ## @param containerLogsPath - string - optional - default: /var/lib/docker/containers
  ## Allows log collection from the container log path. Set to a different path if not
  ## using the docker runtime.
  ## ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest
  #
  containerLogsPath: /var/lib/docker/containers


  ## @param apmEnabled - boolean - optional - default: false
  ## Enable this to enable APM and tracing, on port 8126
  ## ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host
  #
  apmEnabled: true

  ## @param processAgentEnabled - boolean - optional - default: false
  ## Enable this to activate live process monitoring.
  ## Note: /etc/passwd is automatically mounted to allow username resolution.
  ## ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset
  #
  processAgentEnabled: true

  ## @param env - list of object - optional
  ## The dd-agent supports many environment variables
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#environment-variables
  #
  # env:
  #   - name: <ENV_VAR_NAME>
  #     value: <ENV_VAR_VALUE>

  ## @param volumes - list of objects - optional
  ## Specify additional volumes to mount in the dd-agent container
  #
  # volumes:
  #   - hostPath:
  #       path: <HOST_PATH>
  #     name: <VOLUME_NAME>

  ## @param volumeMounts - list of objects - optional
  ## Specify additional volumes to mount in the dd-agent container
  #
  # volumeMounts:
  #   - name: <VOLUME_NAME>
  #     mountPath: <CONTAINER_PATH>
  #     readOnly: true

  ## @param confd - list of objects - optional
  ## Provide additional check configurations (static and Autodiscovery)
  ## Each key becomes a file in /conf.d
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#optional-volumes
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/
  #
  confd:
    conf.yaml: |-
      init_config:
      instances:
      logs:
        - type: "file"
          path: "/app/logs/service.log"
          service: nodejs
          source: nodejs
          sourcecategory: sourcecode
    # kubernetes_state.yaml: |-
    #   ad_identifiers:
    #     - kube-state-metrics
    #   init_config:
    #   instances:
    #     - kube_state_url: http://%%host%%:8080/metrics

  ## @param checksd - list of key:value strings - optional
  ## Provide additional custom checks as python code
  ## Each key becomes a file in /checks.d
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#optional-volumes
  #
  # checksd:
  #   service.py: |-

  ## @param criSocketPath - string - optional
  ## Path to the container runtime socket (if different from Docker)
  ## This is supported starting from agent 6.6.0
  #
  # criSocketPath: /var/run/containerd/containerd.sock

  ## @param dogStatsDSocketPath - string - optional
  ## Path to the DogStatsD socket
  #
  # dogStatsDSocketPath: /var/run/datadog/dsd.socket

  ## @param livenessProbe - object - optional
  ## Override the agent's liveness probe logic from the default:
  ## In case of issues with the probe, you can disable it with the
  ## following values, to allow easier investigation:
  #
  # livenessProbe:
  #   exec:
  #     command: ["/bin/true"]

  ## @param resources - object - required
  ## datadog-agent resource requests and limits
  ## Make sure to keep requests and limits equal to keep the pods in the Guaranteed QoS class
  ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
  #
  resources: {}
  #   requests:
  #     cpu: 200m
  #     memory: 256Mi
  #   limits:
  #     cpu: 200m
  #     memory: 256Mi

## @param clusterAgent - object - required
## This is the Datadog Cluster Agent implementation that handles cluster-wide
## metrics more cleanly, separates concerns for better rbac, and implements
## the external metrics API so you can autoscale HPAs based on datadog metrics
## ref: https://docs.datadoghq.com/agent/kubernetes/cluster/
#
clusterAgent:

  ## @param enabled - boolean - required
  ## Set this to true to enable Datadog Cluster Agent
  #
  enabled: false

  containerName: cluster-agent
  image:
    repository: datadog/cluster-agent
    tag: 1.3.2
    pullPolicy: IfNotPresent

  ## @param token - string - required
  ## This needs to be at least 32 characters a-zA-Z
  ## It is a preshared key between the node agents and the cluster agent
  ## ref:
  #
  token: ""

  replicas: 1

  ## @param metricsProvider - object - required
  ## Enable the metricsProvider to be able to scale based on metrics in Datadog
  #
  metricsProvider:
    enabled: true

  ## @param clusterChecks - object - required
  ## Enable the Cluster Checks feature on both the cluster-agents and the daemonset
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/
  ## Autodiscovery via Kube Service annotations is automatically enabled
  #
  clusterChecks:
    enabled: true

  ## @param confd - list of objects - optional
  ## Provide additional cluster check configurations
  ## Each key will become a file in /conf.d
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/
  #
  # confd:
  #   mysql.yaml: |-
  #     cluster_check: true
  #     instances:
  #       - server: '<EXTERNAL_IP>'
  #         port: 3306
  #         user: datadog
  #         pass: '<YOUR_CHOSEN_PASSWORD>'

  ## @param resources - object - required
  ## Datadog cluster-agent resource requests and limits.
  #
  resources: {}
  #   requests:
  #     cpu: 200m
  #     memory: 256Mi
  #   limits:
  #     cpu: 200m
  #     memory: 256Mi

  ## @param priorityClassName - string - optional
  ## Name of the priorityClass to apply to the Cluster Agent
  #
  # priorityClassName: system-cluster-critical

  ## @param livenessProbe - object - optional
  ## Override the agent's liveness probe logic from the default:
  ## In case of issues with the probe, you can disable it with the
  ## following values, to allow easier investigation:
  #
  # livenessProbe:
  #   exec:
  #     command: ["/bin/true"]

  ## @param podAnnotations - list of key:value strings - optional
  ## Annotations to add to the cluster-agent's pod(s)
  #
  # podAnnotations:
  #   key: "value"

  ## @param readinessProbe - object - optional
  ## Override the cluster-agent's readiness probe logic from the default:
  #
  # readinessProbe:

rbac:

  ## @param create - boolean - required
  ## If true, create & use RBAC resources
  #
  create: true

  ## @param serviceAccountName - string - required
  ## Ignored if rbac.create is true
  #
  serviceAccountName: default

tolerations: []

kubeStateMetrics:

  ## @param enabled - boolean - required
  ## If true, deploys the kube-state-metrics deployment.
  ## ref: https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics
  #
  enabled: true

kube-state-metrics:
  rbac:
    ## @param create - boolean - required
    ## If true, create & use RBAC resources
    #
    create: true

  serviceAccount:
    ## @param create - boolean - required
    ## If true, create a ServiceAccount; requires kube-state-metrics.rbac.create to be true
    #
    create: true
    ## @param name - string - required
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    #
    name: coupon-service-account

  ## @param resources - object - optional
  ## Resource requests and limits for the kube-state-metrics container.
  #
  # resources:
  #   requests:
  #     cpu: 200m
  #     memory: 256Mi
  #   limits:
  #     cpu: 200m
  #     memory: 256Mi

daemonset:

  ## @param enabled - boolean - required
  ## You should keep the Datadog DaemonSet enabled!
  ## The exceptional case could be a situation when you need to run a
  ## single Datadog pod per every namespace, but you do not need to
  ## re-create a DaemonSet for every non-default namespace install.
  ## Note: StatsD and DogStatsD work over UDP, so you may not
  ## get guaranteed delivery of the metrics in a Datadog-per-namespace setup!
  #
  enabled: true

  ## @param useDedicatedContainers - boolean - optional
  ## Deploy each datadog agent process in a separate container. Allows fine-grained
  ## control over allocated resources and better isolation.
  #
  # useDedicatedContainers: false

  containers:

    agent:
      ## @param env - list - required
      ## Additional environment variables for the agent container.
      #
      # env:

      ## @param logLevel - string - optional
      ## Set logging verbosity, valid log levels are:
      ## trace, debug, info, warn, error, critical, and off.
      ## If not set, fall back to the value of datadog.logLevel.
      #
      logLevel: INFO

      ## @param resources - object - required
      ## Resource requests and limits for the agent container.
      #
      resources:
        requests:
          cpu: 200m
          memory: 256Mi
        limits:
          cpu: 200m
          memory: 256Mi

    processAgent:
      ## @param env - list - required
      ## Additional environment variables for the process-agent container.
      #
      # env:

      ## @param logLevel - string - optional
      ## Set logging verbosity, valid log levels are:
      ## trace, debug, info, warn, error, critical, and off.
      ## If not set, fall back to the value of datadog.logLevel.
      #
      logLevel: INFO

      ## @param resources - object - required
      ## Resource requests and limits for the process-agent container.
      #
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
        limits:
          cpu: 100m
          memory: 200Mi

    traceAgent:
      ## @param env - list - required
      ## Additional environment variables for the trace-agent container.
      #
      # env:

      ## @param logLevel - string - optional
      ## Set logging verbosity, valid log levels are:
      ## trace, debug, info, warn, error, critical, and off.
      ## If not set, fall back to the value of datadog.logLevel.
      #
      logLevel: INFO

      ## @param resources - object - required
      ## Resource requests and limits for the trace-agent container.
      #
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
        limits:
          cpu: 100m
          memory: 200Mi

  ## @param useHostNetwork - boolean - optional
  ## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
  ## not be supported. The ports need to be available on all hosts. It can be
  ## used for custom metrics instead of a service endpoint.
  ##
  ## WARNING: Make sure that hosts using this are properly firewalled, otherwise
  ## metrics and traces are accepted from any host able to connect to this host.
  #
  useHostNetwork: true

  ## @param useHostPort - boolean - optional
  ## Sets the hostPort to the same value as the container port. Needs to be used
  ## to receive traces in a standard APM setup. Can also be used for sending custom metrics.
  ## The ports need to be available on all hosts.
  ##
  ## WARNING: Make sure that hosts using this are properly firewalled, otherwise
  ## metrics and traces are accepted from any host able to connect to this host.
  #
  useHostPort: true

  ## @param useHostPID - boolean - optional
  ## Run the agent in the host's PID namespace. This is required for Dogstatsd origin
  ## detection to work. See https://docs.datadoghq.com/developers/dogstatsd/unix_socket/
  #
  # useHostPID: true

  ## @param podAnnotations - list of key:value strings - optional
  ## Annotations to add to the DaemonSet's Pods
  #
  # podAnnotations:
  #   <POD_ANNOTATION>: '[{"key": "<KEY>", "value": "<VALUE>"}]'

  ## @param tolerations - array - optional
  ## Allow the DaemonSet to schedule on tainted nodes (requires Kubernetes >= 1.6)
  #
  # tolerations: []

  ## @param nodeSelector - object - optional
  ## Allow the DaemonSet to schedule on selected nodes
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  #
  # nodeSelector: {}

  ## @param affinity - object - optional
  ## Allow the DaemonSet to schedule using affinity rules
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  #
  # affinity: {}

  ## @param updateStrategy - string - optional
  ## Allow the DaemonSet to perform a rolling update on helm update
  ## ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
  #
  # updateStrategy: RollingUpdate

  ## @param priorityClassName - string - optional
  ## Sets the PriorityClassName if defined.
  #
  # priorityClassName:

  ## @param podLabels - object - optional
  ## Sets podLabels if defined.
  #
  # podLabels: {}

  ## @param useConfigMap - boolean - optional
  ## Configures a configmap to provide the agent configuration
  #
  # useConfigMap: false

deployment:

  ## @param enabled - boolean - required
  ## Apart from the DaemonSet, deploy Datadog agent pods and a related service for
  ## applications that want to send custom metrics. Provides the DogStatsD service.
  #
  enabled: false

  ## @param replicas - integer - required
  ## If you want to use datadog.collectEvents, keep deployment.replicas set to 1.
  #
  replicas: 1

  ## @param affinity - object - required
  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  #
  affinity: {}

  ## @param tolerations - array - required
  ## Tolerations for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  #
  tolerations: []

  ## @param dogstatsdNodePort - integer - optional
  ## If you're using a NodePort-type service and need a fixed port, set this parameter.
  #
  # dogstatsdNodePort: 8125

  ## @param traceNodePort - integer - optional
  ## If you're using a NodePort-type service and need a fixed port, set this parameter.
  #
  # traceNodePort: 8126

  ## @param service - object - required
  ##
  #
  service:
    type: ClusterIP
    annotations: {}

  ## @param priorityClassName - string - optional
  ## Sets the PriorityClassName if defined.
  #
  # priorityClassName:

clusterchecksDeployment:

  ## @param enabled - boolean - required
  ## If true, deploys an agent dedicated to running the Cluster Checks instead of running them in the DaemonSet's agents.
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/
  #
  enabled: false

  rbac:
    ## @param dedicated - boolean - required
    ## If true, use a dedicated RBAC resource for the cluster checks agent(s)
    #
    dedicated: false
    ## @param serviceAccountName - string - required
    ## Ignored if rbac.create is true
    #
    serviceAccountName: default

  ## @param replicas - integer - required
  ## If you want to deploy the clusterchecks agent in HA, keep clusterchecksDeployment.replicas set to at least 2,
  ## and increase clusterchecksDeployment.replicas according to the number of Cluster Checks.
  #
  replicas: 2

  ## @param resources - object - required
  ## Datadog clusterchecks-agent resource requests and limits.
  #
  resources: {}
  #   requests:
  #     cpu: 200m
  #     memory: 500Mi
  #   limits:
  #     cpu: 200m
  #     memory: 500Mi

  ## @param affinity - object - optional
  ## Allow the ClusterChecks Deployment to schedule using affinity rules.
  ## By default, ClusterChecks Deployment Pods are forced to run on different Nodes.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  #
  # affinity:

  ## @param nodeSelector - object - optional
  ## Allow the ClusterChecks Deployment to schedule on selected nodes
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  #
  # nodeSelector: {}

  ## @param tolerations - array - required
  ## Tolerations for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  #
  # tolerations: []

  ## @param livenessProbe - object - optional
  ## Override the agent's liveness probe logic from the default:
  ## In case of issues with the probe, you can disable it with the
  ## following values, to allow easier investigation:
  #
  # livenessProbe:
  #   exec:
  #     command: ["/bin/true"]

  ## @param env - list of object - optional
  ## The dd-agent supports many environment variables
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#environment-variables
  #
  # env:
  #   - name: <ENV_VAR_NAME>
  #     value: <ENV_VAR_VALUE>

As you can see, I expect my logs to be found at /app/logs/service.log, and this is what I provide in my conf.d:

confd:
  conf.yaml: |-
    init_config:
    instances:
    logs:
      - type: "file"
        path: "/app/logs/service.log"
        service: nodejs
        source: nodejs
        sourcecategory: sourcecode

In my service, I use a Winston logger with a file transport in JSON format:

const { createLogger, transports, format } = require('winston');

const logger = createLogger({
  transports: [
    new transports.File({
      format: format.json(),
      filename: `${process.env.LOGS_PATH}/service.log`,
    }),
  ],
});

process.env.LOGS_PATH = '/app/logs'

After exec-ing into my pod and running tail -f on service.log in the expected /app/logs folder, I can confirm that the application does write its logs in JSON format as expected.

However, DataDog is not picking up the logs, and they do not show up in the Logs section.

Note: I am not mounting any volumes in my service.

What am I missing? Should I mount my local logs to /var/log/pods/[service_name]/?

Best answer

The main problem is that you have to mount the volume in both the application container and the agent container for the log file to be visible to both. This also means you have to keep track of where each log file is stored before the agent can pick it up. Doing this for every container can become hard to maintain and time-consuming.
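For reference, a minimal sketch of that mount-everything approach, using the chart's datadog.volumes and datadog.volumeMounts parameters shown above; the volume name and host path here are illustrative, not from the question:

# In the application Deployment (illustrative), write logs to a hostPath volume:
#   containers:
#     - name: my-service
#       volumeMounts:
#         - name: app-logs
#           mountPath: /app/logs
#   volumes:
#     - name: app-logs
#       hostPath:
#         path: /var/log/my-service

# In the Datadog Helm values, mount the same hostPath into the agent
# so the file check can tail /app/logs/service.log:
datadog:
  volumes:
    - hostPath:
        path: /var/log/my-service
      name: app-logs
  volumeMounts:
    - name: app-logs
      mountPath: /app/logs
      readOnly: true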

An alternative approach is to send the logs to stdout and let the agent collect them through the Docker integration. Since you set logsConfigContainerCollectAll to true, the agent is already configured to collect logs from every container's output, so configuring Winston to log to stdout should just work.
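As a sketch of that stdout approach (not the poster's exact code), the file transport is simply replaced with a console transport:

const { createLogger, transports, format } = require('winston');

// JSON to stdout; the node agent tails the container output,
// so no volume mounts or file-based conf.d entries are needed.
const logger = createLogger({
  format: format.json(),
  transports: [new transports.Console()],
});

logger.info('service started');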

See: https://docs.datadoghq.com/agent/docker/log/
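If you also want the stdout logs tagged with the correct source and service, Datadog's autodiscovery annotations can be added to the application pod template; assuming the container is named my-service:

metadata:
  annotations:
    ad.datadoghq.com/my-service.logs: '[{"source": "nodejs", "service": "nodejs"}]'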

A similar question about "node.js - DataDog GKE NestJS integration using DataDog's Helm chart" was found on Stack Overflow: https://stackoverflow.com/questions/58267953/
