python - AWS Fargate Task - awslogs driver - intermittent logs


I'm running a one-off Fargate task that runs a small Python script. The task definition is configured to use awslogs to send the logs to CloudWatch, but I've hit a very strange intermittent problem.

The logs sometimes appear in the newly created CloudWatch stream, and sometimes they don't. I've tried removing parts of my code, and for now, this is what I have.

When I strip out the asyncio/aiohttp fetching logic, the print statements appear normally in the CloudWatch logs. Though, since the problem is intermittent, I can't be 100% sure this will always happen.

With the fetching logic included, however, I sometimes get completely empty log streams after the Fargate task exits. No logs saying "Job starting", "Job complete", or "Putting file into S3". No error logs either. Despite this, when I check the S3 bucket, the file was created with the corresponding timestamp, indicating that the script did run to completion. I can't understand how this is possible.
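A quick way to confirm whether anything reached CloudWatch at all, independent of the console, is a boto3 check along the lines of the sketch below; the log group name and region are taken from the task definition further down, everything else is illustrative:

import boto3

LOG_GROUP = "test-fargate-logging-stg-log-group"  # from the task definition below

logs = boto3.client("logs", region_name="ap-northeast-1")

# Find the most recently written stream in the log group.
streams = logs.describe_log_streams(
    logGroupName=LOG_GROUP,
    orderBy="LastEventTime",
    descending=True,
    limit=1,
)["logStreams"]

if not streams:
    print("No log streams exist in the group")
else:
    name = streams[0]["logStreamName"]
    # Dump whatever events actually made it into the stream.
    events = logs.get_log_events(
        logGroupName=LOG_GROUP,
        logStreamName=name,
        startFromHead=True,
    )["events"]
    print(f"{name}: {len(events)} event(s)")
    for event in events:
        print(event["message"])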

dostuff.py

#!/usr/bin/env python3.6

import asyncio
import datetime
import time

from aiohttp import ClientSession
import boto3


def s3_put(bucket, key, body):
    try:
        print(f"Putting file into {bucket}/{key}")
        client = boto3.client("s3")
        client.put_object(Bucket=bucket, Key=key, Body=body)
    except Exception:
        print(f"Error putting object into S3 Bucket: {bucket}/{key}")
        raise


async def fetch(session, number):
    url = f'https://jsonplaceholder.typicode.com/todos/{number}'
    try:
        async with session.get(url) as response:
            return await response.json()
    except Exception as e:
        print(f"Failed to fetch {url}")
        print(e)
        return None


async def fetch_all():
    tasks = []
    async with ClientSession() as session:
        for x in range(1, 6):
            for number in range(1, 200):
                task = asyncio.ensure_future(fetch(session=session, number=number))
                tasks.append(task)
        responses = await asyncio.gather(*tasks)
    return responses


def main():
    try:
        loop = asyncio.get_event_loop()
        future = asyncio.ensure_future(fetch_all())
        responses = list(filter(None, loop.run_until_complete(future)))
    except Exception:
        print("uh oh")
        raise

    # do stuff with responses

    body = "whatever"
    key = f"{datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d_%H-%M-%S')}_test"
    s3_put(bucket="my-s3-bucket", key=key, body=body)


if __name__ == "__main__":
    print("Job starting")
    main()
    print("Job complete")

Dockerfile

FROM python:3.6-alpine
COPY docker/test_fargate_logging/requirements.txt /
COPY docker/test_fargate_logging/dostuff.py /
WORKDIR /
RUN pip install --upgrade pip && \
    pip install -r requirements.txt
ENTRYPOINT python dostuff.py

Task definition

{
  "ipcMode": null,
  "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsInstanceRole",
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "test-fargate-logging-stg-log-group",
          "awslogs-region": "ap-northeast-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": null,
      "portMappings": [],
      "command": null,
      "linuxParameters": null,
      "cpu": 256,
      "environment": [],
      "ulimits": null,
      "dnsServers": null,
      "mountPoints": [],
      "workingDirectory": null,
      "secrets": null,
      "dockerSecurityOptions": null,
      "memory": 512,
      "memoryReservation": null,
      "volumesFrom": [],
      "image": "xxxxxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/test-fargate-logging-stg-ecr-repository:xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "disableNetworking": null,
      "interactive": null,
      "healthCheck": null,
      "essential": true,
      "links": null,
      "hostname": null,
      "extraHosts": null,
      "pseudoTerminal": null,
      "user": null,
      "readonlyRootFilesystem": null,
      "dockerLabels": null,
      "systemControls": null,
      "privileged": null,
      "name": "test_fargate_logging"
    }
  ],
  "placementConstraints": [],
  "memory": "512",
  "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsInstanceRole",
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "taskDefinitionArn": "arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task-definition/test-fargate-logging-stg-task-definition:2",
  "family": "test-fargate-logging-stg-task-definition",
  "requiresAttributes": [
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.execution-role-ecr-pull"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.task-eni"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.task-iam-role"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.execution-role-awslogs"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    }
  ],
  "pidMode": null,
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "256",
  "revision": 2,
  "status": "ACTIVE",
  "volumes": []
}

Observations

  • When I reduce the number of tasks (URLs to fetch) to 10 instead of ~1000, the logs seem to appear most/all(?) of the time. Again, the problem is intermittent, so I can't be 100% sure.
  • My original script had additional logic to retry fetches on failure, and parsing logic, both of which I removed while troubleshooting. The logging behavior back then at least had the "Job starting" log and logs during the async aiohttp requests, but the log for writing to S3 and the final "Job complete" log showed up intermittently. With the simplified script above, I seem to get either all of the logs or none at all.
  • The problem also occurred with Python's logging library; I changed it to print to rule logging out.

Best Answer

The problem

I've run into the same issue: intermittently missing logs in CloudWatch for ECS Fargate tasks.

While I can't answer why this happens, I can offer a workaround that I've just tested.

What worked for me:

Upgrading to Python 3.7 (I see you're on 3.6; so was I when I hit the same problem).

I can now see all of my logs, with the added benefit of being on the latest version of Python.
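If you take the same route, the only change the Dockerfile above should need is the base image (e.g. FROM python:3.7-alpine). As a side benefit, 3.7 lets you drop the manual event-loop handling in main() in favor of asyncio.run(), which was added in 3.7; an untested sketch of that change:

import asyncio

def main():
    try:
        # asyncio.run() (new in Python 3.7) creates the event loop, runs the
        # coroutine to completion, and closes the loop afterwards, replacing
        # the get_event_loop()/ensure_future()/run_until_complete() dance.
        responses = list(filter(None, asyncio.run(fetch_all())))
    except Exception:
        print("uh oh")
        raise
    # ... the S3 upload stays the same as in dostuff.py ...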

I hope this helps.

Regarding python - AWS Fargate Task - awslogs driver - intermittent logs, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54084601/
