
python - How to make a web crawler that parses links named "patch" or "fix"?


I'm trying to program an application task for a Debian GSoC project. I've been able to parse a text file downloaded from the internet, but I'm having a hard time downloading patches from links by scraping the pages, especially the first one, the BugZilla site from sourceware.org.

Here is the code I have tried:

#!/usr/bin/env python3
# This program uses Python 3, don't use with 2.
import requests
from bs4 import BeautifulSoup
import re
import os


PAGES_CAH = ["https://sourceware.org/bugzilla/show_bug.cgi?id=23685", "https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=f055032e4e922f1e1a5e11026c7c2669fa2a7d19", "https://github.com/golang/net/commit/4b62a64f59f73840b9ab79204c94fee61cd1ba2c", "http://www.ca.tcpdump.org/cve/0002-test-case-files-for-CVE-2015-2153-2154-2155.patch"]
patches = []


def searchy(pages):
    for link in pages:
        global patches
        if "github.com" in link and "commit" in link:  # detect that the page is from GitHub
            if 'patch' not in link:  # detect whether it is already a patch page
                link = link + '.patch'  # add .patch to the link if it lacks it
            request = requests.get(link)  # connect to the page
            patches.append(request.text)  # download the patch into the patches list
        elif ".patch" in link:  # any other link ending in ".patch" is downloaded like a GitHub patch by default
            request = requests.get(link)  # connect to the page
            patches.append(request.text)  # download the patch into the patches list
        else:
            request = requests.get(link)  # connect to the page
            soup = BeautifulSoup(request.text, "lxml")  # turn the page into something parsable
            if "sourceware.org/git" in link:  # if it's from sourceware.org's git
                patch_link = soup.find_all('a', string="patch")  # find all patch links
                patch_request = requests.get(patch_link[0])  # connect to the patch link
                patches.append(patch_request.text)  # download the patch
            elif "sourceware.org/bugzilla" in link:  # if it's from sourceware's Bugzilla
                patch_link_possibilities = soup.find('a', id="attachment_table")  # find all links from the attachment table
                local_patches_links = patch_link_possibilities.find_all(string="patch")  # find all links named "patch"
                local_fixes_links = patch_link_possibilities.find_all(string="fix")  # find all links named "fix"
                for lolpatch in local_patches_links:  # for each patch in the local patch links list
                    patch_request = requests.get(lolpatch)  # connect to the page
                    patches.append(patch_request.text)  # download the patch
                for fix in local_fixes_links:  # for each fix in the local fix links list
                    patch_request = requests.get(fix)  # connect to the page
                    patches.append(patch_request.text)  # download the patch


searchy(PAGES_CAH)
print(patches)

Best answer

You can try adding the :contains pseudo-class selector to look for "patch" in the link text. This requires BeautifulSoup 4.7.1.

import requests
from bs4 import BeautifulSoup
url = 'https://sourceware.org/bugzilla/show_bug.cgi?id=23685'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'lxml')

links = [item['href'] for item in soup.select('a:contains(patch)')]
print(links)

You can extend this with the CSS "or" syntax:

links = [item['href'] for item in soup.select('a:contains(patch), a:contains(fix)')]
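If the goal is to download the matched attachments, as in the searchy function above, keep in mind that the hrefs on the Bugzilla page are relative. The following is only a minimal sketch under the same assumptions as the answer (BeautifulSoup 4.7.1 and the example bug URL), resolving each href with urllib.parse.urljoin before fetching it:

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

url = 'https://sourceware.org/bugzilla/show_bug.cgi?id=23685'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'lxml')

patches = []
# match links whose text contains "patch" or "fix" (needs BeautifulSoup 4.7.1+)
for item in soup.select('a:contains(patch), a:contains(fix)'):
    href = item.get('href')
    if href is None:
        continue
    # attachment links on the Bugzilla page are relative (e.g. "attachment.cgi?id=..."),
    # so resolve them against the page URL before requesting them
    patch_request = requests.get(urljoin(url, href))
    patches.append(patch_request.text)

print(len(patches))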

Regarding "python - How to make a web crawler that parses links named "patch" or "fix"?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55554363/
