
python - PyCurl request hangs infinitely on perform


I wrote a script to fetch scan results from Qualys, to be run once a week to gather metrics.

The first part of the script fetches a list of references for every scan run in the past week, for further processing.

The problem is that while this works perfectly some of the time, the script occasionally gets stuck on the c.perform() line. That is manageable when running the script manually, since it can simply be re-run until it works, but I want to run it as a weekly scheduled task without any manual interaction.

Is there a foolproof way to detect whether a hang has occurred and resend the PyCurl request until it succeeds?

I tried setting the c.TIMEOUT and c.CONNECTTIMEOUT options, but they don't seem to have any effect. Also, since no exception is raised, simply wrapping the call in a try-except block won't work either.

The relevant function is below:

import datetime as DT
from io import BytesIO

import certifi
import pycurl

# Retrieve a list of all scans conducted in the past week
# Save this to refs_raw.txt
def getScanRefs(usr, pwd):

    print("getting scan references...")

    with open('refs_raw.txt', 'wb') as refsraw:
        today = DT.date.today()
        week_ago = today - DT.timedelta(days=7)
        strtoday = str(today)
        strweek_ago = str(week_ago)

        c = pycurl.Curl()

        c.setopt(c.URL, 'https://qualysapi.qualys.eu/api/2.0/fo/scan/?action=list&launched_after_datetime=' + strweek_ago + '&launched_before_datetime=' + strtoday)
        c.setopt(c.HTTPHEADER, ['X-Requested-With: pycurl', 'Content-Type: text/xml'])
        c.setopt(c.USERPWD, usr + ':' + pwd)
        c.setopt(c.POST, 1)
        c.setopt(c.PROXY, 'companyproxy.net:8080')
        c.setopt(c.CAINFO, certifi.where())
        c.setopt(c.SSL_VERIFYPEER, 0)
        c.setopt(c.SSL_VERIFYHOST, 0)
        c.setopt(c.CONNECTTIMEOUT, 3)
        c.setopt(c.TIMEOUT, 3)

        refsbuffer = BytesIO()
        c.setopt(c.WRITEDATA, refsbuffer)
        c.perform()

        body = refsbuffer.getvalue()
        refsraw.write(body)
        c.close()

    print("Got em!")

Best Answer

I solved this myself by using multiprocessing to launch the API call in a separate process, killing and restarting it if it runs for more than 5 seconds. It isn't pretty, but it is cross-platform. For those looking for a more elegant solution that is *nix-only, look into the signal library, specifically SIGALRM.

The code:

import datetime as DT
import multiprocessing
import time
from io import BytesIO

import certifi
import pycurl

# As this request for scan references sometimes hangs, it is run in a separate process here
# The process is terminated and relaunched if no response is received within 5 seconds
def performRequest(usr, pwd):
    today = DT.date.today()
    week_ago = today - DT.timedelta(days=7)
    strtoday = str(today)
    strweek_ago = str(week_ago)

    c = pycurl.Curl()

    c.setopt(c.URL, 'https://qualysapi.qualys.eu/api/2.0/fo/scan/?action=list&launched_after_datetime=' + strweek_ago + '&launched_before_datetime=' + strtoday)
    c.setopt(c.HTTPHEADER, ['X-Requested-With: pycurl', 'Content-Type: text/xml'])
    c.setopt(c.USERPWD, usr + ':' + pwd)
    c.setopt(c.POST, 1)
    c.setopt(c.PROXY, 'companyproxy.net:8080')
    c.setopt(c.CAINFO, certifi.where())
    c.setopt(c.SSL_VERIFYPEER, 0)
    c.setopt(c.SSL_VERIFYHOST, 0)

    refsBuffer = BytesIO()
    c.setopt(c.WRITEDATA, refsBuffer)
    c.perform()
    c.close()
    body = refsBuffer.getvalue()
    with open('refs_raw.txt', 'wb') as refsraw:
        refsraw.write(body)

# Retrieve a list of all scans conducted in the past week
# Save this to refs_raw.txt
def getScanRefs(usr, pwd):

    print("Getting scan references...")

    # Occasionally the request will hang indefinitely. Launch it in a separate
    # process and retry if there is no response within 5 seconds
    success = False
    while not success:
        sendRequest = multiprocessing.Process(target=performRequest, args=(usr, pwd))
        sendRequest.start()

        # Give the child process up to 5 seconds to finish
        for _ in range(5):
            print("...")
            time.sleep(1)

        if sendRequest.is_alive():
            print("Maximum allocated time reached... Resending request")
            sendRequest.terminate()
            sendRequest.join()  # reap the terminated child process
        else:
            success = True

    print("Got em!")

Regarding python - PyCurl request hangs infinitely on perform, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46706265/
