
docker - How to launch my jar application from the Flink Docker image in Kubernetes?


I am trying to use the image felipeogutierrez/explore-flink:1.11.1-scala_2.12 available here for a Kubernetes cluster setup, as described here. I compile my project https://github.com/felipegutierrez/explore-flink with Maven and extend the default Flink image flink:1.11.1-scala_2.12 with this Dockerfile:

FROM maven:3.6-jdk-8-slim AS builder
# get explore-flink job and compile it
COPY ./java/explore-flink /opt/explore-flink
WORKDIR /opt/explore-flink
RUN mvn clean install

FROM flink:1.11.1-scala_2.12
WORKDIR /opt/flink/usrlib
COPY --from=builder /opt/explore-flink/target/explore-flink.jar /opt/flink/usrlib/explore-flink.jar
ADD /opt/flink/usrlib/explore-flink.jar /opt/flink/usrlib/explore-flink.jar
#USER flink
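
For context, a build-and-push sketch for this image could look roughly like the following, assuming it is run from the directory that contains the java/explore-flink sources referenced by the COPY instruction and that you can push to the target repository:

docker build -t felipeogutierrez/explore-flink:1.11.1-scala_2.12 .
docker push felipeogutierrez/explore-flink:1.11.1-scala_2.12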

Then tutorial 2 says to create the common cluster components:

kubectl create -f k8s/flink-configuration-configmap.yaml
kubectl create -f k8s/jobmanager-service.yaml
kubectl proxy
kubectl create -f k8s/jobmanager-rest-service.yaml
kubectl get svc flink-jobmanager-rest

Then create the jobmanager-job.yaml:

kubectl create -f k8s/jobmanager-job.yaml

I get a CrashLoopBackOff status on the flink-jobmanager pod, and the logs show that it cannot find the class org.sense.flink.examples.stream.tpch.TPCHQuery03 in the flink-dist_2.12-1.11.1.jar:1.11.1 jar file. However, I would expect Kubernetes to also look into the /opt/flink/usrlib/explore-flink.jar file. I am copying and ADDing this jar in my image's Dockerfile, but it does not seem to work. What am I missing here? Below is my jobmanager-job.yaml file:

apiVersion: batch/v1
kind: Job
metadata:
  name: flink-jobmanager
spec:
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      restartPolicy: OnFailure
      containers:
        - name: jobmanager
          image: felipeogutierrez/explore-flink:1.11.1-scala_2.12
          imagePullPolicy: Always
          env:
          args: ["standalone-job", "--job-classname", "org.sense.flink.examples.stream.tpch.TPCHQuery03"]
          ports:
            - containerPort: 6123
              name: rpc
            - containerPort: 6124
              name: blob-server
            - containerPort: 8081
              name: webui
          livenessProbe:
            tcpSocket:
              port: 6123
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf
            - name: job-artifacts-volume
              mountPath: /opt/flink/usrlib
          securityContext:
            runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
        - name: job-artifacts-volume
          hostPath:
            path: /host/path/to/job/artifacts
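
As a side check, one way to see what the image itself ships in /opt/flink/usrlib, independently of any Kubernetes volume mounts, is to list that directory in a throwaway container, for example:

docker run --rm --entrypoint ls felipeogutierrez/explore-flink:1.11.1-scala_2.12 -l /opt/flink/usrlib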

And here is my complete log:

$ kubectl logs flink-jobmanager-qfkjl
Starting Job Manager
sed: couldn't open temporary file /opt/flink/conf/sedSg30ro: Read-only file system
sed: couldn't open temporary file /opt/flink/conf/sed1YrBco: Read-only file system
/docker-entrypoint.sh: 72: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml: Permission denied
/docker-entrypoint.sh: 91: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
Starting standalonejob as a console application on host flink-jobmanager-qfkjl.
2020-09-21 08:08:29,528 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-09-21 08:08:29,531 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Preconfiguration:
2020-09-21 08:08:29,532 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] -


JM_RESOURCE_PARAMS extraction logs:
jvm_params: -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456
logs: INFO [] - Loading configuration property: jobmanager.rpc.address, flink-jobmanager
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 4
INFO [] - Loading configuration property: blob.server.port, 6124
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123
INFO [] - Loading configuration property: taskmanager.rpc.port, 6122
INFO [] - Loading configuration property: queryable-state.proxy.ports, 6125
INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m
INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m
INFO [] - Loading configuration property: parallelism.default, 2
INFO [] - The derived from fraction jvm overhead memory (160.000mb (167772162 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
INFO [] - Final Master Memory configuration:
INFO [] - Total Process Memory: 1.563gb (1677721600 bytes)
INFO [] - Total Flink Memory: 1.125gb (1207959552 bytes)
INFO [] - JVM Heap: 1024.000mb (1073741824 bytes)
INFO [] - Off-heap: 128.000mb (134217728 bytes)
INFO [] - JVM Metaspace: 256.000mb (268435456 bytes)
INFO [] - JVM Overhead: 192.000mb (201326592 bytes)

2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneApplicationClusterEntryPoint (Version: 1.11.1, Scala: 2.12, Rev:7eb514a, Date:2020-07-15T07:02:09+02:00)
2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - OS current user: flink
2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Current Hadoop/Kerberos user: <no hadoop dependency found>
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.265-b01
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Maximum heap size: 989 MiBytes
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JAVA_HOME: /usr/local/openjdk-8
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - No Hadoop Dependency available
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM Options:
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xmx1073741824
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xms1073741824
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:MaxMetaspaceSize=268435456
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog.file=/opt/flink/log/flink--standalonejob-0-flink-jobmanager-qfkjl.log
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-console.properties
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Program Arguments:
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --configDir
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - /opt/flink/conf
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --job-classname
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - org.sense.flink.examples.stream.tpch.TPCHQuery03
2020-09-21 08:08:29,537 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Classpath: /opt/flink/lib/flink-csv-1.11.1.jar:/opt/flink/lib/flink-json-1.11.1.jar:/opt/flink/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/lib/flink-table-blink_2.12-1.11.1.jar:/opt/flink/lib/flink-table_2.12-1.11.1.jar:/opt/flink/lib/log4j-1.2-api-2.12.1.jar:/opt/flink/lib/log4j-api-2.12.1.jar:/opt/flink/lib/log4j-core-2.12.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.12.1.jar:/opt/flink/lib/flink-dist_2.12-1.11.1.jar:::
2020-09-21 08:08:29,538 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-09-21 08:08:29,540 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Registered UNIX signal handlers for [TERM, HUP, INT]
2020-09-21 08:08:29,577 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Could not create application program.
org.apache.flink.util.FlinkException: Could not find the provided job class (org.sense.flink.examples.stream.tpch.TPCHQuery03) in the user lib directory (/opt/flink/usrlib).
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getJobClassNameOrScanClassPath(ClassPathPackagedProgramRetriever.java:140) ~[flink-dist_2.12-1.11.1.jar:1.11.1]
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getPackagedProgram(ClassPathPackagedProgramRetriever.java:123) ~[flink-dist_2.12-1.11.1.jar:1.11.1]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:110) ~[flink-dist_2.12-1.11.1.jar:1.11.1]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:78) [flink-dist_2.12-1.11.1.jar:1.11.1]

Best Answer

There were two problems in my configuration. First, the Dockerfile was not copying explore-flink.jar to the right location; placing it in /opt/flink/lib puts it on the default classpath of the Flink processes. Second, I did not need to mount the job-artifacts-volume volume in the Kubernetes file jobmanager-job.yaml; the hostPath volume mounted at /opt/flink/usrlib hides whatever the image copied into that directory. Here is my Dockerfile:

FROM maven:3.6-jdk-8-slim AS builder
# get explore-flink job and compile it
COPY ./java/explore-flink /opt/explore-flink
WORKDIR /opt/explore-flink
RUN mvn clean install

FROM flink:1.11.1-scala_2.12
WORKDIR /opt/flink/lib
COPY --from=builder --chown=flink:flink /opt/explore-flink/target/explore-flink.jar /opt/flink/lib/explore-flink.jar

And the jobmanager-job.yaml file:

apiVersion: batch/v1
kind: Job
metadata:
  name: flink-jobmanager
spec:
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      restartPolicy: OnFailure
      containers:
        - name: jobmanager
          image: felipeogutierrez/explore-flink:1.11.1-scala_2.12
          imagePullPolicy: Always
          env:
          #command: ["ls"]
          args: ["standalone-job", "--job-classname", "org.sense.flink.App", "-app", "36"] #, <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
          #args: ["standalone-job", "--job-classname", "org.sense.flink.examples.stream.tpch.TPCHQuery03"] #, <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
          ports:
            - containerPort: 6123
              name: rpc
            - containerPort: 6124
              name: blob-server
            - containerPort: 8081
              name: webui
          livenessProbe:
            tcpSocket:
              port: 6123
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf
          securityContext:
            runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
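
After rebuilding and pushing the image and updating the manifest, a typical redeploy-and-verify sequence would be something like the following (the pod name at the end is a placeholder):

kubectl delete job flink-jobmanager
kubectl create -f k8s/jobmanager-job.yaml
kubectl get pods
kubectl logs -f <flink-jobmanager-pod-name>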

Regarding "docker - How to launch my jar application from the Flink Docker image in Kubernetes?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/63988675/
