
Python href extraction problem


I am trying to get all the hrefs from a URL. The problem is that I can't extract this href as it is written:

<a href="#!DetalleNorma/203906/20190322" title="" data-bind="html: organismo, attr: {href: $root.crearHrefDetalleNorma(idTamite,fechaPublicacion)} ">SECRETARÍA GENERAL</a>

I can only extract: #!

from bs4 import BeautifulSoup
import urllib.request as urllib2

html_page = urllib2.urlopen('https://www.boletinoficial.gob.ar/')
soup = BeautifulSoup(html_page, "html.parser")
for link in soup.findAll('a'):
    print(link.get('href'))

Here is the version that parses with requests. It doesn't work either:

import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.boletinoficial.gob.ar/')
soup = BeautifulSoup(r.content, "html.parser")

for td in soup.findAll("div", class_="itemsection"):
    for a in td.findAll("a", href=True):
        print(a.text)

Best Answer

I had to use Selenium with a wait condition: the href attributes are filled in client-side by the knockout data-bind shown in your markup, so the static HTML downloaded by urllib/requests only contains the #! placeholder.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://www.boletinoficial.gob.ar/')

# wait up to 20 s for the JavaScript to populate the href attributes
items = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".itemsection [href]"))
)
links = [item.get_attribute('href') for item in items]
print(links)
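
Since the question started from BeautifulSoup, the rendered page can also be handed back to it once Selenium has executed the JavaScript. A minimal sketch, assuming a BeautifulSoup version with CSS-selector support and reusing the driver from above:

from bs4 import BeautifulSoup

# driver.page_source holds the HTML after knockout has filled in the hrefs;
# these hrefs come straight from the markup, so they may still be relative,
# unlike get_attribute('href'), which returns resolved URLs
soup = BeautifulSoup(driver.page_source, "html.parser")
links = [a.get('href') for a in soup.select(".itemsection [href]")]
print(links)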

Text and links as tuples:

data = [
    (item.get_attribute('href'), item.text)
    for item in WebDriverWait(driver, 20).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".itemsection [href]"))
    )
]
print(data)
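
If this needs to run unattended, here is a minimal self-contained sketch with headless Chrome and explicit cleanup, using the same selector and wait as above and assuming chromedriver is available on PATH:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument('--headless')  # no visible browser window

driver = webdriver.Chrome(options=options)
try:
    driver.get('https://www.boletinoficial.gob.ar/')
    # wait until the href attributes have been populated client-side
    items = WebDriverWait(driver, 20).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".itemsection [href]"))
    )
    data = [(item.get_attribute('href'), item.text) for item in items]
    print(data)
finally:
    driver.quit()  # always release the browser process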

Regarding this Python href extraction problem, there is a matching question on Stack Overflow: https://stackoverflow.com/questions/55294672/
