
python - Search for a word in URLs

Reposted. Author: 太空宇宙. Updated: 2023-11-03 11:21:48

I have around a million URLs and search terms, each with a unique ID, in a text file. I need to open each URL and search for its search terms, writing 1 if a term is present and 0 otherwise.

Input file:

"ID" "URL","SearchTerm1","Searchterm2"
"1","www.google.com","a","b"
"2","www.yahoo.com","f","g"
"3","www.att.net","k"
"4" , "www.facebook.com","cs","ee"

Code snippet:

import urllib2
import re
import csv
import datetime
from BeautifulSoup import BeautifulSoup

with open('txt.txt') as inputFile, open('results.txt', 'w+') as proc_seqf:
    header = 'Id' + '\t' + 'URL' + '\t'
    for i in range(1, 3):
        header += 'Found_Search' + str(i) + '\t'
    header += '\n'
    proc_seqf.write(header)
    for line in inputFile:
        line = line.split(",")
        url = 'http://' + line[1]
        req = urllib2.Request(url, headers={'User-Agent': "Magic Browser"})
        html_content = urllib2.urlopen(req).read()
        soup = BeautifulSoup(html_content)
        if line[2][0:1] == '"' and line[2][-1:] == '"':
            line[2] = line[2][1:-1]
        matches = soup(text=re.compile(line[2]))
        #print soup(text=re.compile(line[2]))
        #print matches
        if len(matches) == 0 or line[2].isspace() == True:
            output_1 = 0
        else:
            output_1 = 1
        #print output_1
        #print line[2]
        if line[3][0:1] == '"' and line[3][-1:] == '"':
            line[3] = line[3][1:-1]
        matches = soup(text=re.compile(line[3]))
        if len(matches) == 0 or line[3].isspace() == True:
            output_2 = 0
        else:
            output_2 = 1
        #print output_2
        #print line[3]

        proc_seqf.write("{}\t{}\t{}\t{}\n".format(line[0], url, output_1, output_2))

Output file:

ID,SearchTerm1,Searchterm2
1,0,1
2,1,0
3,0
4,1,1

Two problems with the code:

  1. When I run around 200 URLs at once, it gives me urlopen error [Errno 11004] getaddrinfo failed

  2. Is there a way to search for something that closely matches but is not an exact match?

Best Answer

when I run around 200 urls at once it gives me urlopen error [Errno 11004] getaddrinfo failed error.

This error message tells you that the DNS lookup for the server hosting the URL failed.

This is outside your program's control, but you can decide how to handle the situation.

The simplest approach is to catch the error, log it, and continue:

try:
    html_content = urllib2.urlopen(req).read()
except urllib2.URLError as ex:
    print 'Could not fetch {} because of {}, skipping.'.format(url, ex)
    # skip the rest of the loop
    continue

However, the error may be transient, and the lookup may succeed if you try again later; for example, a DNS server may be configured to refuse incoming requests if it receives too many in too short a time.
In that case, you can write a function that retries with a delay:

import time

class FetchFailedException(Exception):
    pass

def fetch_url(req, retries=5):
    for i in range(1, retries + 1):
        try:
            html_content = urllib2.urlopen(req).read()
        except urllib2.URLError as ex:
            print 'Could not fetch {} because of {}, retrying.'.format(req.get_full_url(), ex)
            time.sleep(1 * i)
            continue
        else:
            return html_content
    # if we reach here then all lookups have failed
    raise FetchFailedException()

# In your main code
try:
    html_content = fetch_url(req)
except FetchFailedException:
    print 'Could not fetch {}, skipping.'.format(url)
    # skip the rest of the loop
    continue
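The delay above grows linearly (1 * i seconds). A common refinement is exponential backoff with jitter, so many clients hammering the same DNS server don't all retry in lockstep. As a sketch (the `backoff_delays` helper is hypothetical, not part of the original answer), the delay schedule can be computed separately from the fetching logic:

```python
import random

def backoff_delays(retries=5, base=1.0, cap=30.0):
    """Yield one delay per retry: doubled each attempt, capped at `cap`,
    with up to 10% random jitter added so concurrent clients desynchronize.
    """
    for i in range(retries):
        delay = min(cap, base * (2 ** i))
        yield delay + random.uniform(0, delay / 10.0)
```

A retry loop would then iterate over `backoff_delays(retries)` and sleep for each yielded value instead of `1 * i`.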

Is there a way to search something which closely matches but not exact match?

If you want to match strings with an optional trailing dot, use the ? modifier.

From the docs:

Causes the resulting RE to match 0 or 1 repetitions of the preceding RE. ab? will match either ‘a’ or ‘ab’.

>>> s = 'Abc In'
>>> m = re.match(r'Abc In.', s)
>>> m is None
True

# Surround `.` with brackets so that `?` only applies to the `.`
>>> m = re.match(r'Abc In(.)?', s)
>>> m.group()
'Abc In'
>>> m = re.match(r'Abc In(.)?', 'Abc In.')
>>> m.group()
'Abc In.'

Note the r character in front of the regex patterns. This indicates a raw string. It is good practice to use raw strings in regex patterns because they make it easier to handle the backslash (\) character, which is common in regexes.
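One related caveat: your search terms come straight from the input file, so they may contain characters that are special in a regex. Passing a term through re.escape() makes it match literally; a small sketch (the term here is a made-up example, not from your data):

```python
import re

# A hypothetical search term containing regex metacharacters:
# '(' and ')' would otherwise be interpreted as a capture group.
term = 'AT&T (official)'

escaped = re.compile(re.escape(term))    # matches the term literally
unescaped = re.compile(term)             # parens silently become a group

text = 'Visit AT&T (official) today'
print(escaped.search(text) is not None)    # True
print(unescaped.search(text) is not None)  # False - '(' in text never matches
```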

So you can build a regex that matches an optional trailing dot like this:

matches = soup(text=re.compile(r'{}(.)?'.format(line[2])))
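If "closely matches" means more than an optional trailing character — say, small spelling differences — the standard library's difflib can score similarity instead of using a regex at all. A sketch, where the 0.8 threshold is an arbitrary choice you would tune for your data:

```python
import difflib

def is_close_match(term, text, threshold=0.8):
    """Return True if any whitespace-separated word in `text` has a
    difflib similarity ratio of at least `threshold` against `term`.
    Comparison is case-insensitive; ratio() ranges from 0.0 to 1.0.
    """
    term = term.lower()
    return any(
        difflib.SequenceMatcher(None, term, word.lower()).ratio() >= threshold
        for word in text.split()
    )

print(is_close_match('facebook', 'Welcome to Faceboook'))  # True (typo tolerated)
print(is_close_match('facebook', 'Welcome to Yahoo'))      # False
```

You could apply this to the page text extracted from the soup rather than building a fuzzy regex, which is hard to get right.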

Regarding "python - Search for a word in URLs", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/41472676/
