php - Search for a specific string in a remote web page


I want to create a PHP script that goes to another website (given a URL) and checks the page source of that page for a specific string of data.

I actually have a way to do it right now, but I'm looking for alternatives.

Right now I'm using the file_get_contents PHP function to read the page source of the URL into a variable:

$link = "http://www.example.com"; // file_get_contents needs the protocol prefix
$linkcontents = file_get_contents($link); // returns false on failure

Then I use the strpos PHP function to search the page for the string I'm looking for:

$needle = "<div>find me</div>";
// Note: compare with ===, because strpos() returns 0 (which is falsy)
// when the needle sits at the very start of the haystack
if (strpos($linkcontents, $needle) === false) {
    echo "String not found";
} else {
    echo "String found";
}

I've heard that cURL is good at handling anything URL-related, I'm just not sure how I would use it to do what I described above.

Or if there's another way, I'm all ears :-)

Best Answer

First we build a cURL function like this:

function Visit($irc_server){
    // Pick a user agent: reuse the browser's when available, otherwise
    // fall back to a generic string (e.g. when running from the CLI,
    // where $_SERVER['HTTP_USER_AGENT'] is not set)
    $user_agent = isset($_SERVER['HTTP_USER_AGENT'])
        ? $_SERVER['HTTP_USER_AGENT']
        : 'Mozilla/5.0 (compatible; PageChecker/1.0)';
    $port = '80'; // note: forcing port 80 assumes plain HTTP

    // Open the connection
    $ch = curl_init(); // initialize curl handle
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
    curl_setopt($ch, CURLOPT_URL, $irc_server);
    curl_setopt($ch, CURLOPT_FAILONERROR, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50);
    curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($ch, CURLOPT_PORT, $port);

    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);

    if ($curl_errno > 0) {
        $return = "cURL Error ($curl_errno): $curl_error\n";
    } else {
        $return = $data;
    }
    curl_close($ch);

    /*
    if ($httpcode >= 200 && $httpcode < 300) {
        $return = 'OK';
    } else {
        $return = 'Nok';
    }
    */

    return $return;
}
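On its own, Visit() is already enough to answer the original question. A minimal sketch (the URL and needle are placeholders; keep in mind that Visit() returns an error string rather than false when the request fails):

$source = Visit("http://www.example.com");

if (strpos($source, "<div>find me</div>") !== false) {
    echo "String found";
} else {
    echo "String not found";
}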

Then another function to handle our URL:

function tenta($url){
    // Create an instance of your class, define the behaviour
    // of the crawler (see class-reference for more options and details)
    // and start the crawling-process.
    $crawler = new MyCrawler();

    // URL to crawl
    $crawler->setURL($url);

    // Only receive content of files with content-type "text/html"
    $crawler->addContentTypeReceiveRule("#text/html#");

    // Ignore links to pictures, don't even request pictures
    $crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");

    // Store and send cookie-data like a browser does
    $crawler->enableCookieHandling(true);

    // Set the traffic-limit to 1 MB (in bytes;
    // for testing we don't want to "suck" in the whole site)
    $crawler->setTrafficLimit(1000 * 1024);

    // That's enough, now here we go
    $crawler->go();

    // At the end, after the process is finished, print a short
    // report (see method getProcessReport() for more information)
    $report = $crawler->getProcessReport();

    // Detect the linebreak for output ("\n" in CLI-mode, otherwise "<br />")
    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";

    echo "Summary:".$lb;
    echo "Links followed: ".$report->links_followed.$lb;
    echo "Documents received: ".$report->files_received.$lb;
    echo "Bytes received: ".$report->bytes_received." bytes".$lb;
    echo "Process runtime: ".$report->process_runtime." sec".$lb;
}

Next we build the crawler class itself:

// It may take a while to crawl a site ...
set_time_limit(110000);

// Include the phpcrawl main class
include("libs/PHPCrawler.class.php");

// Extend the class and override the handleDocumentInfo()-method
class MyCrawler extends PHPCrawler
{
    function handleDocumentInfo($DocInfo)
    {
        global $find;

        // Just detect linebreak for output ("\n" in CLI-mode, otherwise "<br />").
        if (PHP_SAPI == "cli") $lb = "\n";
        else $lb = "<br />";

        // Print the URL and the HTTP status code
        echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb;

        // Check the page for each of the keywords we are looking for
        foreach ($find as $matche) {
            $matchb = implode(',', $matche);
            if (preg_match("/(".$matchb.")/i", Visit($DocInfo->url))) {
                echo "<a href=".$DocInfo->url." target=_blank>".$DocInfo->url."</a><b style='color:red;'>".$matche['word']."</b>".$lb;
            }
        }

        // Print the referring URL
        echo "Referer-page: ".$DocInfo->referer_url.$lb;

        // Print whether the content of the document was received or not
        if ($DocInfo->received == true)
            echo "Content received: ".$DocInfo->bytes_received." bytes".$lb;
        else
            echo "Content not received".$lb;

        // Now you should do something with the content of the actual
        // received page or file ($DocInfo->source); we skip it in this example

        echo $lb;
        flush();
    }
}
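One refinement worth noting: PHPCrawler already hands you the downloaded page body in $DocInfo->source, so calling Visit($DocInfo->url) inside the loop fetches every page a second time. A sketch of the same keyword check without the extra round trip:

// Inside handleDocumentInfo(): search the already-received source
// instead of re-requesting the page through Visit()
foreach ($find as $matche) {
    $word = preg_quote($matche['word'], '/'); // escape regex metacharacters
    if (preg_match("/(".$word.")/i", $DocInfo->source)) {
        echo "<a href=".$DocInfo->url." target=_blank>".$DocInfo->url."</a><b style='color:red;'>".$matche['word']."</b>".$lb;
    }
}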

The URLs we are going to crawl, stored in an array:

$url = array(
    array("id" => 7, "name" => "soltechit", "url" => "soltechit.co.uk"),
    array("id" => 5, "name" => "CNN", "url" => "cnn.com", "description" => "A social utility that connects people, to keep up with friends, upload photos, share links")
);

And the strings we are looking for:

$find = array(
    array("word" => "routers"),
    array("word" => "Moose"),
    array("word" => "worm"),
    array("word" => "kenya"),
    array("word" => "alshabaab"),
    array("word" => "ISIS"),
    array("word" => "security"),
    array("word" => "windows 10 release"),
    array("word" => "hacked")
);
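Since each entry holds a single word, the per-word preg_match() calls can also be collapsed into one alternation pattern built from the same $find array (a sketch; $source is assumed to hold the page body):

// Build one case-insensitive pattern like /(routers|Moose|worm|...)/i
$words = array();
foreach ($find as $f) {
    $words[] = preg_quote($f['word'], '/');
}
$pattern = '/(' . implode('|', $words) . ')/i';

if (preg_match($pattern, $source, $m)) {
    echo "Found keyword: " . $m[1];
}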

And we call everything like this:

foreach ($url as $site) {
    echo '<h2>'.$site['name'].'</h2>';
    if (isset($site['description'])) { // not every entry has a description
        echo $site['description'].'<br>';
    }
    tenta($site['url']); // tenta() echoes its own output and returns nothing
    echo '<br>';
}
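One assembly note: when you put all the pieces into a single script, the include and the MyCrawler class (together with Visit()) must be defined before tenta() is first called, and $find must be populated before the crawl starts, because handleDocumentInfo() reads it through global $find.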

Regarding php - searching for a specific string in a remote web page, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5861662/
