
python - Only scrape paragraphs that contain certain words from a PDF-embedded URL


I'm currently working on some code to scrape text from websites. I'm not interested in scraping the entire page, only the parts of the page that contain certain words. I've managed to do this for most URLs with the .find_all("p") command, but it doesn't work for URLs that point to a PDF.

I can't seem to find a way to open a PDF as text and then split that text into paragraphs. This is what I want to do: 1) open a URL with an embedded PDF as text, and 2) split that text into paragraphs. That way, I can scrape only the paragraphs that contain certain words.

Below is the code I currently use to scrape paragraphs containing certain words from "normal" URLs. Any tips on making this work for PDF-embedded URLs (such as the variable "url2" in the code below) are much appreciated!

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
import re

url1 = "https://brainybackpackers.com/best-places-for-whale-watching-in-the-world/"
url2 = "https://www.environment.gov.au/system/files/resources/7f15bfc1-ed3d-40b6-a177-c81349028ef6/files/aust-national-guidelines-whale-dolphin-watching-2017.pdf"
url = url1
req = Request(url, headers={"User-Agent": 'Mozilla/5.0'})
page = urlopen(req, timeout=5)  # Open page within 5 seconds. This line skips 'empty' websites
htmlParse = BeautifulSoup(page.read(), 'lxml')
SearchWords = ["orca", "killer whale", "humpback"]  # text must contain these words

# Check if the article text mentions the SearchWord(s). If so, continue the analysis.
if any(word in htmlParse.text for word in SearchWords):
    textP = ""
    text = ""

    # Look for paragraphs ("p") that contain a SearchWord
    for word in SearchWords:
        print(word)
        for para in htmlParse.find_all("p", text=re.compile(word)):
            textParagraph = para.get_text()
            textP = textP + textParagraph
            text = text + textP
    print(text)

Best answer

One thing you could try is the pdfminer.six package. By importing it, we can use the pdfminer.high_level.extract_text() function to scrape a pdf:

import pdfminer.high_level as pdfminer

infile = "my/file/path.pdf" # file you want to turn into text

out_text = pdfminer.extract_text(infile) # extract the text into the out_text variable

# out_text now contains a string of your pdf contents

It should be noted that the extract_text function works on local files, so we need to save the pdf to some local buffer that you can delete later. If you're on a Unix-like OS, I'd suggest somewhere like /tmp/.
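As an alternative to a fixed buffer folder, here is a minimal sketch of my own (not part of the original answer) using Python's standard tempfile module, which creates and cleans up the buffer file for you. Re-opening the temporary file by name while it is still open works on Unix-like systems but may fail on Windows:

import tempfile
import requests
import pdfminer.high_level as pdfminer

url = "https://www.environment.gov.au/system/files/resources/7f15bfc1-ed3d-40b6-a177-c81349028ef6/files/aust-national-guidelines-whale-dolphin-watching-2017.pdf"
response = requests.get(url)

# NamedTemporaryFile creates the buffer file and deletes it when the block exits
with tempfile.NamedTemporaryFile(suffix=".pdf") as tmp:
    tmp.write(response.content)
    tmp.flush()  # make sure the bytes are on disk before pdfminer reads them
    out_text = pdfminer.extract_text(tmp.name)  # re-open by name (Unix-like systems)

# out_text now contains a string of your pdf contents; the temporary file is gone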

Moving on to your implementation, I believe you'll want something like this:

import pdfminer.high_level as pdfminer
import requests

# get the pdf and save it
url = "https://www.environment.gov.au/system/files/resources/7f15bfc1-ed3d-40b6-a177-c81349028ef6/files/aust-national-guidelines-whale-dolphin-watching-2017.pdf"
response = requests.get(url)
pdf_name = url.split('/')[-1] # everything right of the last slash
pdf_path = "/tmp/" + pdf_name # CHANGE TO WHATEVER "BUFFER" FOLDER YOU WANT

# save the pdf locally to be used with the pdf parser
with open(pdf_path, 'wb') as outfile:
    outfile.write(response.content)

# read the contents of the pdf into the out_text var
out_text = pdfminer.extract_text(pdf_path)

# out_text now contains a string of your pdf contents

From here you should be free to scrape whatever you want.
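To tie this back to the paragraph filtering in the question, here is a minimal sketch of my own (assuming paragraphs in the extracted text are separated by blank lines, which will not hold for every PDF layout):

# split the extracted text into rough "paragraphs" on blank lines
paragraphs = [p.strip() for p in out_text.split("\n\n") if p.strip()]

SearchWords = ["orca", "killer whale", "humpback"]  # text must contain these words

# keep only the paragraphs that mention at least one SearchWord (case-insensitive)
matching = [p for p in paragraphs if any(word in p.lower() for word in SearchWords)]

text = "\n\n".join(matching)
print(text)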

About python - Only scrape paragraphs that contain certain words from a PDF-embedded URL, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67268377/
