
amazon-web-services - Task times out after 23.02 seconds when connecting AWS Lambda to Redis

Reprinted — Author: 行者123, updated 2023-12-03 06:44:42

In my project I want to connect a Lambda function to a Redis store, but I get a task timeout error while establishing the connection, even though I have attached the private subnet to a NAT gateway.
Python code:

import json
import math

import boto3
import redis
# from sklearn.model_selection import train_test_split

# Avoid shadowing the redis module with the client instance
redis_client = redis.Redis(
    host='redisconnection.sxxqwc.ng.0001.use1.cache.amazonaws.com',
    port=6379,
)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # bucket = event['Records'][0]['s3']['bucket']['name']  # if dynamic allocation
    # key = event['Records'][0]['s3']['object']['key']      # if dynamic searching
    bucket = "aws-trigger1"
    key = "unigram1.csv"

    response = s3.head_object(Bucket=bucket, Key=key)
    fileSize = response['ContentLength']
    fileSize = fileSize / 1048576  # bytes -> MB
    print("FileSize = " + str(fileSize) + " MB")
    # redis_client.rpush(fileSize)
    redis_client.ping()
    redis_client.set('foo', 'bar')

    obj = s3.get_object(Bucket=bucket, Key=key)
    file_content = obj["Body"].read().decode("utf-8")

    # Calculate the chunk size
    MAPPERNUMBER = 2
    MINBLOCKSIZE = 1024
    chunkSize = int(fileSize / MAPPERNUMBER)
    numberMappers = MAPPERNUMBER
    if chunkSize < MINBLOCKSIZE:
        print("chunk size too small (" + str(chunkSize) + " bytes), changing to " + str(MINBLOCKSIZE) + " bytes")
        chunkSize = MINBLOCKSIZE
        numberMappers = int(fileSize / chunkSize) + 1
    residualData = fileSize - (MAPPERNUMBER - 1) * chunkSize
    # print("numberMappers--", residualData)

    # Ensure that chunk size is smaller than the Lambda function's memory
    MEMORY = 1536
    memoryLimit = 0.30
    secureMemorySize = int(MEMORY * memoryLimit)
    if chunkSize > secureMemorySize:
        print("chunk size too large (" + str(chunkSize) + " bytes), changing to " + str(secureMemorySize) + " bytes")
        chunkSize = secureMemorySize
        numberMappers = int(fileSize / chunkSize) + 1

    # print("Using chunk size of " + str(chunkSize) + " bytes, and " + str(numberMappers) + " nodes")

    # Remove the first (header) row from the data
    file_content = file_content.split('\n', 1)[-1]
    # X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
    train_pct_index = int(0.5 * len(file_content))

    X_Map1, X_Map2 = file_content[:train_pct_index], file_content[train_pct_index:]
    # print("the size is--------------", X_Map1)
    # print("the size is--------------", X_Map2)

    linelen = file_content.find('\n')
    if linelen < 0:
        print("\\n not found in mapper chunk")
        return
    extraRange = 2 * (linelen + 20)
    initRange = fileSize + 1
    limitRange = fileSize + extraRange

    # chunkRange = 'bytes=' + str(initRange) + '-' + str(limitRange)
    # print(chunkRange)

    # Invoke mappers
    invokeLam = boto3.client("lambda", region_name="us-east-1")
    payload = X_Map1
    payload2 = X_Map2
    print(X_Map1)
    # resp = invokeLam.invoke(FunctionName="map1", InvocationType="RequestResponse", Payload=json.dumps(payload))
    # resp2 = invokeLam.invoke(FunctionName="map2", InvocationType="RequestResponse", Payload=json.dumps(payload2))

    return file_content
(Screenshot: VPC configuration of the Lambda function)
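One way to see why the invocation hangs for the full 23 seconds: when a security group or missing route silently drops the TCP SYN, the connect blocks until Lambda kills the invocation. A quick connectivity probe with an explicit deadline (a sketch using only the standard library; `tcp_reachable` is a name I made up) surfaces the problem immediately. The redis-py client offers the same protection via its `socket_connect_timeout` parameter.

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Attempt a TCP connect with an explicit deadline; True if it succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and unreachable networks
        return False

# A dropped SYN shows up as False after `timeout` seconds,
# instead of hanging until the Lambda timeout kills the invocation.
```

Calling this against the ElastiCache endpoint from inside the function is a cheap way to distinguish "Redis unreachable" from "S3 unreachable" before blaming either side.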

Best answer

The timeout is probably occurring when you try to retrieve the object from S3, not when connecting to Redis.
Check whether an Amazon S3 endpoint is configured in your VPC:
https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html
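As a sketch, a gateway endpoint for S3 can be created with the AWS CLI; the VPC ID, route-table ID, and region below are placeholders you would replace with your own:

```shell
# Placeholders: substitute your own vpc-... and rtb-... IDs and region.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc12345 \
    --service-name com.amazonaws.us-east-1.s3 \
    --vpc-endpoint-type Gateway \
    --route-table-ids rtb-0def67890
```

A gateway endpoint routes S3 traffic privately, so the Lambda function in the private subnet no longer depends on the NAT gateway to reach S3.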

Regarding "amazon-web-services - Task times out after 23.02 seconds when connecting AWS Lambda to Redis", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63150557/
