While setting up my own Kubeflow pipeline, I ran into a problem at the point where a step completes and its outputs should be saved. After the step finishes, Kubeflow always fails with the message:
This step is in Error state with this message: failed to save outputs: Error response from daemon: No such container: <container-id>
At first I assumed my own pipeline contained a mistake, but the same thing happens with the bundled sample pipelines: for "[Sample] Basic - Conditional execution" I get this message after the first step (flip-coin) completes.
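The failing step itself is trivial. From the template shown in the executor log below, the flip-coin container's entrypoint is equivalent to this Python snippet (the real command pipes the output through tee to /tmp/output, which the wait sidecar later tries to collect):

```python
import random

# Equivalent of the flip-coin container command:
#   python -c "import random; result = 'heads' if random.randint(0,1) == 0
#              else 'tails'; print(result)" | tee /tmp/output
result = 'heads' if random.randint(0, 1) == 0 else 'tails'
print(result)  # the pipeline reads this value back from /tmp/output
```

So the step runs and prints its result correctly; the failure happens afterwards, when the executor tries to copy /tmp/output out of a container it can no longer find.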
The main container shows the output:
heads
time="2019-06-07T11:41:35Z" level=info msg="Creating a docker executor"
time="2019-06-07T11:41:35Z" level=info msg="Executor (version: v2.2.0, build_date: 2018-08-30T08:52:54Z) initialized with template:\narchiveLocation:\n s3:\n accessKeySecret:\n key: accesskey\n name: mlpipeline-minio-artifact\n bucket: mlpipeline\n endpoint: minio-service.kubeflow:9000\n insecure: true\n key: artifacts/conditional-execution-pipeline-vmdhx/conditional-execution-pipeline-vmdhx-2104306666\n secretKeySecret:\n key: secretkey\n name: mlpipeline-minio-artifact\ncontainer:\n args:\n - python -c \"import random; result = 'heads' if random.randint(0,1) == 0 else 'tails';\n print(result)\" | tee /tmp/output\n command:\n - sh\n - -c\n image: python:alpine3.6\n name: \"\"\n resources: {}\ninputs: {}\nmetadata: {}\nname: flip-coin\noutputs:\n artifacts:\n - name: mlpipeline-ui-metadata\n path: /mlpipeline-ui-metadata.json\n - name: mlpipeline-metrics\n path: /mlpipeline-metrics.json\n parameters:\n - name: flip-coin-output\n valueFrom:\n path: /tmp/output\n"
time="2019-06-07T11:41:35Z" level=info msg="Waiting on main container"
time="2019-06-07T11:41:36Z" level=info msg="main container started with container ID: 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c"
time="2019-06-07T11:41:36Z" level=info msg="Starting annotations monitor"
time="2019-06-07T11:41:36Z" level=info msg="docker wait 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c"
time="2019-06-07T11:41:36Z" level=info msg="Starting deadline monitor"
time="2019-06-07T11:41:37Z" level=error msg="`docker wait 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c` failed: Error response from daemon: No such container: 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c\n"
time="2019-06-07T11:41:37Z" level=info msg="Main container completed"
time="2019-06-07T11:41:37Z" level=info msg="No sidecars"
time="2019-06-07T11:41:37Z" level=info msg="Saving output artifacts"
time="2019-06-07T11:41:37Z" level=info msg="Annotations monitor stopped"
time="2019-06-07T11:41:37Z" level=info msg="Saving artifact: mlpipeline-ui-metadata"
time="2019-06-07T11:41:37Z" level=info msg="Archiving 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-ui-metadata.json to /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz"
time="2019-06-07T11:41:37Z" level=info msg="sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-ui-metadata.json - | gzip > /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz"
time="2019-06-07T11:41:37Z" level=info msg="Archiving completed"
time="2019-06-07T11:41:37Z" level=info msg="Creating minio client minio-service.kubeflow:9000 using static credentials"
time="2019-06-07T11:41:37Z" level=info msg="Saving from /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz to s3 (endpoint: minio-service.kubeflow:9000, bucket: mlpipeline, key: artifacts/conditional-execution-pipeline-vmdhx/conditional-execution-pipeline-vmdhx-2104306666/mlpipeline-ui-metadata.tgz)"
time="2019-06-07T11:41:37Z" level=info msg="Successfully saved file: /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz"
time="2019-06-07T11:41:37Z" level=info msg="Saving artifact: mlpipeline-metrics"
time="2019-06-07T11:41:37Z" level=info msg="Archiving 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-metrics.json to /argo/outputs/artifacts/mlpipeline-metrics.tgz"
time="2019-06-07T11:41:37Z" level=info msg="sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-metrics.json - | gzip > /argo/outputs/artifacts/mlpipeline-metrics.tgz"
time="2019-06-07T11:41:37Z" level=info msg="Archiving completed"
time="2019-06-07T11:41:37Z" level=info msg="Creating minio client minio-service.kubeflow:9000 using static credentials"
time="2019-06-07T11:41:37Z" level=info msg="Saving from /argo/outputs/artifacts/mlpipeline-metrics.tgz to s3 (endpoint: minio-service.kubeflow:9000, bucket: mlpipeline, key: artifacts/conditional-execution-pipeline-vmdhx/conditional-execution-pipeline-vmdhx-2104306666/mlpipeline-metrics.tgz)"
time="2019-06-07T11:41:37Z" level=info msg="Successfully saved file: /argo/outputs/artifacts/mlpipeline-metrics.tgz"
time="2019-06-07T11:41:37Z" level=info msg="Saving output parameters"
time="2019-06-07T11:41:37Z" level=info msg="Saving path output parameter: flip-coin-output"
time="2019-06-07T11:41:37Z" level=info msg="[sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/tmp/output - | tar -ax -O]"
time="2019-06-07T11:41:37Z" level=error msg="`[sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/tmp/output - | tar -ax -O]` stderr:\nError: No such container:path: 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/tmp/output\ntar: This does not look like a tar archive\ntar: Exiting with failure status due to previous errors\n"
time="2019-06-07T11:41:37Z" level=info msg="Alloc=4338 TotalAlloc=11911 Sys=10598 NumGC=4 Goroutines=11"
time="2019-06-07T11:41:37Z" level=fatal msg="exit status 2\ngithub.com/argoproj/argo/errors.Wrap\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:87\ngithub.com/argoproj/argo/errors.InternalWrapError\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:70\ngithub.com/argoproj/argo/workflow/executor/docker.(*DockerExecutor).GetFileContents\n\t/root/go/src/github.com/argoproj/argo/workflow/executor/docker/docker.go:40\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).SaveParameters\n\t/root/go/src/github.com/argoproj/argo/workflow/executor/executor.go:343\ngithub.com/argoproj/argo/cmd/argoexec/commands.waitContainer\n\t/root/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:49\ngithub.com/argoproj/argo/cmd/argoexec/commands.glob..func4\n\t/root/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:19\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:766\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:852\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).Execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:800\nmain.main\n\t/root/go/src/github.com/argoproj/argo/cmd/argoexec/main.go:15\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:198\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2361"
The output of kubectl describe pods is as follows:
Name: conditional-execution-pipeline-vmdhx-2104306666
Namespace: kubeflow
Priority: 0
PriorityClassName: <none>
Node: root-nuc8i5beh/9.233.5.90
Start Time: Fri, 07 Jun 2019 13:41:29 +0200
Labels: workflows.argoproj.io/completed=true
workflows.argoproj.io/workflow=conditional-execution-pipeline-vmdhx
Annotations: workflows.argoproj.io/node-message:
Error response from daemon: No such container: 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c
workflows.argoproj.io/node-name: conditional-execution-pipeline-vmdhx.flip-coin
workflows.argoproj.io/template:
{"name":"flip-coin","inputs":{},"outputs":{"parameters":[{"name":"flip-coin-output","valueFrom":{"path":"/tmp/output"}}],"artifacts":[{"na...
Status: Failed
IP: 10.1.1.30
Controlled By: Workflow/conditional-execution-pipeline-vmdhx
Containers:
main:
Container ID: containerd://7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c
Image: python:alpine3.6
Image ID: docker.io/library/python@sha256:766a961bf699491995cc29e20958ef11fd63741ff41dcc70ec34355b39d52971
Port: <none>
Host Port: <none>
Command:
sh
-c
Args:
python -c "import random; result = 'heads' if random.randint(0,1) == 0 else 'tails'; print(result)" | tee /tmp/output
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 07 Jun 2019 13:41:35 +0200
Finished: Fri, 07 Jun 2019 13:41:35 +0200
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from pipeline-runner-token-xh2p7 (ro)
wait:
Container ID: containerd://f0449dc70c0a651c09aeb883edda9ce0ec5e415fa15a5468fe5b360fb06637c2
Image: argoproj/argoexec:v2.2.0
Image ID: docker.io/argoproj/argoexec@sha256:eea81e0b0d8899a0b7f9815c9c7bd89afa73ab32e5238430de82342b3bb7674a
Port: <none>
Host Port: <none>
Command:
argoexec
Args:
wait
State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 07 Jun 2019 13:41:35 +0200
Finished: Fri, 07 Jun 2019 13:41:37 +0200
Ready: False
Restart Count: 0
Environment:
ARGO_POD_NAME: conditional-execution-pipeline-vmdhx-2104306666 (v1:metadata.name)
Mounts:
/argo/podmetadata from podmetadata (rw)
/var/lib/docker from docker-lib (ro)
/var/run/docker.sock from docker-sock (ro)
/var/run/secrets/kubernetes.io/serviceaccount from pipeline-runner-token-xh2p7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
podmetadata:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.annotations -> annotations
docker-lib:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker
HostPathType: Directory
docker-sock:
Type: HostPath (bare host directory volume)
Path: /var/run/docker.sock
HostPathType: Socket
pipeline-runner-token-xh2p7:
Type: Secret (a volume populated by a Secret)
SecretName: pipeline-runner-token-xh2p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m1s default-scheduler Successfully assigned kubeflow/conditional-execution-pipeline-vmdhx-2104306666 to root-nuc8i5beh
Normal Pulling 8m1s kubelet, root-nuc8i5beh Pulling image "python:alpine3.6"
Normal Pulled 7m56s kubelet, root-nuc8i5beh Successfully pulled image "python:alpine3.6"
Normal Created 7m56s kubelet, root-nuc8i5beh Created container main
Normal Started 7m55s kubelet, root-nuc8i5beh Started container main
Normal Pulled 7m55s kubelet, root-nuc8i5beh Container image "argoproj/argoexec:v2.2.0" already present on machine
Normal Created 7m55s kubelet, root-nuc8i5beh Created container wait
Normal Started 7m55s kubelet, root-nuc8i5beh Started container wait
When I read that file with cat, I get "No such device or address", even though I can see the file with ls /var/run. When I try to open it with vi, it says permission denied, so I cannot look inside the file. Is this the usual behavior for this file, or does something look wrong here?
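If the file being read is /var/run/docker.sock (an assumption here, but it is the one mounted into the wait container above), the "No such device or address" error is expected: that path is a Unix domain socket, not a regular file, so cat cannot read it like one. A small stdlib sketch illustrates the distinction, using a temporary socket as a stand-in since inspecting docker.sock needs root:

```python
import os
import socket
import stat
import tempfile

# Create a Unix domain socket at a temporary path, standing in for
# /var/run/docker.sock.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)

mode = os.stat(path).st_mode
print(stat.S_ISSOCK(mode))  # True: it is a socket, so cat fails with ENXIO
print(stat.S_ISREG(mode))   # False: not a regular file with readable contents

srv.close()
os.unlink(path)
```

So the behavior described is normal for a socket file and not a sign of a separate problem.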
Best answer
The problem is an upstream issue: Kubeflow Pipelines does not work properly on microk8s: https://github.com/kubeflow/kubeflow/issues/2347
This is consistent with the pod description above: the container runtime is containerd (Container ID: containerd://...), while the Argo executor tries to collect outputs through the Docker daemon (docker wait / docker cp), which is why the daemon reports "No such container".
I switched to Minikube, where the Kubeflow pipelines run fine.
Regarding docker - Kubeflow pipeline error: failed to save outputs: no such container, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56494402/