wangshaofei (Shenzhen)
(Repost) A roundup of ways to fetch web page content with PHP


①、Fetching web page content with PHP
http://hi.baidu.com/quqiufeng/blog/item/7e86fb3f40b598c67d1e7150.html
header("Content-type: text/html; charset=utf-8");
1、Via the MSXML2.XMLHTTP COM object (Windows-only):
$xhr = new COM("MSXML2.XMLHTTP");
$xhr->open("GET", "http://localhost/xxx.php?id=2", false);
$xhr->send();
echo $xhr->responseText;

2、Using file_get_contents:
<?php
$url="http://www.blogjava.net/pts";
echo file_get_contents( $url );
?>
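file_get_contents() can also take a stream context to control the timeout and request headers; a minimal sketch (the header value below is illustrative, not from the original):

```php
<?php
// Sketch: a stream context lets file_get_contents() set a timeout
// and custom headers instead of using the defaults.
$context = stream_context_create([
    'http' => [
        'method'  => 'GET',
        'timeout' => 10,                              // seconds
        'header'  => "User-Agent: Mozilla/5.0\r\n",   // illustrative value
    ],
]);
// $html = file_get_contents('http://www.blogjava.net/pts', false, $context);
```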

3、Using fopen():
<?php
if ($stream = fopen('http://www.sohu.com', 'r')) {
    // print all the page starting at the offset 10
    echo stream_get_contents($stream, -1, 10);
    fclose($stream);
}

if ($stream = fopen('http://www.sohu.net', 'r')) {
    // print the first 5 bytes
    echo stream_get_contents($stream, 5);
    fclose($stream);
}
?>
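The snippets above rely either on a Windows-only COM object or on allow_url_fopen being enabled. As a sketch of the same fetch with PHP's cURL extension (the function name fetch_url_curl is my own, not from the original):

```php
<?php
// Sketch: fetching a page with the cURL extension instead of the
// fopen wrappers. Function name is illustrative.
function fetch_url_curl($url, $timeout = 10) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body === false ? null : $body;
}
// echo fetch_url_curl('http://www.sohu.com');
```

cURL gives finer control over timeouts, redirects, and headers than file_get_contents() or fopen().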

②、Fetching web page content with PHP
http://www.blogjava.net/pts/archive/2007/08/26/99188.html
The simple approach:
<?php
$url="http://www.blogjava.net/pts";
echo file_get_contents( $url );
?>
Or:
<?php
if ($stream = fopen('http://www.sohu.com', 'r')) {
    // print all the page starting at the offset 10
    echo stream_get_contents($stream, -1, 10);
    fclose($stream);
}

if ($stream = fopen('http://www.sohu.net', 'r')) {
    // print the first 5 bytes
    echo stream_get_contents($stream, 5);
    fclose($stream);
}
?>

③、Fetching site content and saving it to a TXT file (source)
http://blog.chinaunix.net/u1/44325/showart_348444.html
<?php
$my_book_url = 'http://book.yunxiaoge.com/files/article/html/4/4550/index.html';
// ereg() was removed in PHP 7; preg_match() is the equivalent here
preg_match('#http://book\.yunxiaoge\.com/files/article/html/[0-9]+/[0-9]+/#', $my_book_url, $myBook);
$my_book_txt = $myBook[0];
$file_handle = fopen($my_book_url, "r");                  // open the book's index page
if (file_exists("test.txt")) unlink("test.txt");
$handle = fopen("test.txt", 'a');                         // open the output file once
while (!feof($file_handle)) {                             // loop until end of file
    $line = fgets($file_handle);                          // read one line
    // find links to the book's chapter pages
    if (preg_match('/href="[0-9]+\.html/', $line, $reg)) {
        $my_book_txt_url = str_replace('href="', '', $reg[0]);
        $my_book_txt_over_url = $my_book_txt . $my_book_txt_url;  // build the chapter URL
        echo "$my_book_txt_over_url</p>";                 // show progress
        $file_handle_txt = fopen($my_book_txt_over_url, "r");  // open the chapter page
        while (!feof($file_handle_txt)) {
            $line_txt = fgets($file_handle_txt);
            // on this site, chapter text lines start with &nbsp;
            if (preg_match('/^&nbsp.+/', $line_txt, $reg2)) {
                $my_over_txt = $reg2[0];
                $my_over_txt = str_replace("&nbsp;&nbsp;&nbsp;&nbsp;", "    ", $my_over_txt); // strip markup
                $my_over_txt = str_replace("<br />", "", $my_over_txt);
                $my_over_txt = str_replace("<script language=\"javascript\">", "", $my_over_txt);
                $my_over_txt = str_replace("&quot;", "", $my_over_txt);
                fwrite($handle, "$my_over_txt\n");        // write to the output file
            }
        }
        fclose($file_handle_txt);                         // close this chapter page
    }
}
fclose($handle);
fclose($file_handle);                                     // close the index page
echo "Done</p>";
?>
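The chapter-link matching step above can be exercised offline; a minimal sketch with preg_match() against a sample line (the sample HTML is invented for illustration):

```php
<?php
// Sketch: matching a chapter link of the form NNNN.html in one line
// of HTML, as the scraper above does. The input line is made up.
$line = '<li><a href="123456.html">Chapter 1</a></li>';
if (preg_match('/href="([0-9]+\.html)/', $line, $reg)) {
    $chapter = $reg[1];          // the captured file name
    echo $chapter . "\n";
}
```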


A more heavy-handed approach uses a class called Snoopy.
I first saw it here:
Snoopy, for fetching web page content in PHP
http://blog.declab.com/read.php/27.htm
Snoopy's official site:
http://sourceforge.net/projects/snoopy/
Some brief notes here:
Code collection - the Snoopy class and simple usage
http://blog.passport86.com/?p=161
Download: http://sourceforge.net/projects/snoopy/


I only discovered this gem today and downloaded it right away to take a look; it is built on parse_url.
I am still more comfortable with curl myself.

Snoopy is a PHP class that mimics a web browser: it can fetch page content and submit forms.
Some of its features:
1、easily fetch the contents of a page
2、easily fetch the text of a page (HTML stripped)
3、easily fetch the links in a page
4、proxy host support
5、basic user/password authentication
6、custom user agent, referer, cookies and header content
7、browser redirects, with controllable redirect depth
8、expands links in the page into fully qualified URLs (default)
9、easy form submission and retrieval of the results
10、follows HTML frames (added in v0.92)
11、passes cookies on redirects

See the README in the download for details.

<?php
include "Snoopy.class.php";
$snoopy = new Snoopy;
$snoopy->fetchform("http://www.phpx.com/happy/logging.php?action=login");
print $snoopy->results;
?>
<?php
include "Snoopy.class.php";
$snoopy = new Snoopy;
$submit_url = "http://www.phpx.com/happy/logging.php?action=login";
$submit_vars["loginmode"] = "normal";
$submit_vars["styleid"] = "1";
$submit_vars["cookietime"] = "315360000";
$submit_vars["loginfield"] = "username";
$submit_vars["username"] = "********"; // your username
$submit_vars["password"] = "*******";  // your password
$submit_vars["questionid"] = "0";
$submit_vars["answer"] = "";
$submit_vars["loginsubmit"] = "提&nbsp;交";
$snoopy->submit($submit_url, $submit_vars);
print $snoopy->results;
?>


Below is the Snoopy README:
NAME:

    Snoopy - the PHP net client v1.2.4
   
SYNOPSIS:

    include "Snoopy.class.php";
    $snoopy = new Snoopy;
   
    $snoopy->fetchtext("http://www.php.net/");
    print $snoopy->results;
   
    $snoopy->fetchlinks("http://www.phpbuilder.com/");
    print $snoopy->results;
   
    $submit_url = "http://lnk.ispi.net/texis/scripts/msearch/netsearch.html";
   
    $submit_vars["q"] = "amiga";
    $submit_vars["submit"] = "Search!";
    $submit_vars["searchhost"] = "Altavista";
       
    $snoopy->submit($submit_url,$submit_vars);
    print $snoopy->results;
   
    $snoopy->maxframes=5;
    $snoopy->fetch("http://www.ispi.net/");
    echo "<PRE>\n";
    echo htmlentities($snoopy->results[0]);
    echo htmlentities($snoopy->results[1]);
    echo htmlentities($snoopy->results[2]);
    echo "</PRE>\n";

    $snoopy->fetchform("http://www.altavista.com");
    print $snoopy->results;

DESCRIPTION:

    What is Snoopy?
   
    Snoopy is a PHP class that simulates a web browser. It automates the
    task of retrieving web page content and posting forms, for example.

    Some of Snoopy's features:
   
    * easily fetch the contents of a web page
    * easily fetch the text from a web page (strip html tags)
    * easily fetch the links from a web page
    * supports proxy hosts
    * supports basic user/pass authentication
    * supports setting user_agent, referer, cookies and header content
    * supports browser redirects, and controlled depth of redirects
    * expands fetched links to fully qualified URLs (default)
    * easily submit form data and retrieve the results
    * supports following html frames (added v0.92)
    * supports passing cookies on redirects (added v0.92)
   
   
REQUIREMENTS:

    Snoopy requires PHP with PCRE (Perl Compatible Regular Expressions),
    which should be PHP 3.0.9 and up. For read timeout support, it requires
    PHP 4 Beta 4 or later. Snoopy was developed and tested with PHP 3.0.12.

CLASS METHODS:

    fetch($URI)
    -----------
   
    This is the method used for fetching the contents of a web page.
    $URI is the fully qualified URL of the page to fetch.
    The results of the fetch are stored in $this->results.
    If you are fetching frames, then $this->results
    contains each frame fetched in an array.
       
    fetchtext($URI)
    ---------------   
   
    This behaves exactly like fetch() except that it only returns
    the text from the page, stripping out html tags and other
    irrelevant data.       

    fetchform($URI)
    ---------------   
   
    This behaves exactly like fetch() except that it only returns
    the form elements from the page, stripping out html tags and other
    irrelevant data.

    fetchlinks($URI)
    ----------------

    This behaves exactly like fetch() except that it only returns
    the links from the page. By default, relative links are
    converted to their fully qualified URL form.

    submit($URI,$formvars)
    ----------------------
   
    This submits a form to the specified $URI. $formvars is an
    array of the form variables to pass.
       
       
    submittext($URI,$formvars)
    --------------------------

    This behaves exactly like submit() except that it only returns
    the text from the page, stripping out html tags and other
    irrelevant data.       

    submitlinks($URI)
    ----------------

    This behaves exactly like submit() except that it only returns
    the links from the page. By default, relative links are
    converted to their fully qualified URL form.


CLASS VARIABLES:    (default value in parenthesis)

    $host            the host to connect to
    $port            the port to connect to
    $proxy_host        the proxy host to use, if any
    $proxy_port        the proxy port to use, if any
    $agent            the user agent to masquerade as (Snoopy v0.1)
    $referer        referer information to pass, if any
    $cookies        cookies to pass if any
    $rawheaders        other header info to pass, if any
    $maxredirs        maximum redirects to allow. 0=none allowed. (5)
    $offsiteok        whether or not to allow redirects off-site. (true)
    $expandlinks    whether or not to expand links to fully qualified URLs (true)
    $user            authentication username, if any
    $pass            authentication password, if any
    $accept            http accept types (image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*)
    $error            where errors are sent, if any
    $response_code    response code returned from server
    $headers        headers returned from server
    $maxlength        max return data length
    $read_timeout    timeout on read operations (requires PHP 4 Beta 4+)
                    set to 0 to disallow timeouts
    $timed_out        true if a read operation timed out (requires PHP 4 Beta 4+)
    $maxframes        number of frames we will follow
    $status            http status of fetch
    $temp_dir        temp directory that the webserver can write to. (/tmp)
    $curl_path        system path to cURL binary, set to false if none
   

EXAMPLES:

    Example:     fetch a web page and display the return headers and
                the contents of the page (html-escaped):
   
    include "Snoopy.class.php";
    $snoopy = new Snoopy;
   
    $snoopy->user = "joe";
    $snoopy->pass = "bloe";
   
    if($snoopy->fetch("http://www.slashdot.org/"))
    {
        echo "response code: ".$snoopy->response_code."<br>\n";
        while(list($key,$val) = each($snoopy->headers))
            echo $key.": ".$val."<br>\n";
        echo "<p>\n";
       
        echo "<PRE>".htmlspecialchars($snoopy->results)."</PRE>\n";
    }
    else
        echo "error fetching document: ".$snoopy->error."\n";



    Example:    submit a form. and print out the result headers
                and html-escaped page:

    include "Snoopy.class.php";
    $snoopy = new Snoopy;
   
    $submit_url = "http://lnk.ispi.net/texis/scripts/msearch/netsearch.html";
   
    $submit_vars["q"] = "amiga";
    $submit_vars["submit"] = "Search!";
    $submit_vars["searchhost"] = "Altavista";

       
    if($snoopy->submit($submit_url,$submit_vars))
    {
        while(list($key,$val) = each($snoopy->headers))
            echo $key.": ".$val."<br>\n";
        echo "<p>\n";
       
        echo "<PRE>".htmlspecialchars($snoopy->results)."</PRE>\n";
    }
    else
        echo "error fetching document: ".$snoopy->error."\n";



    Example:    showing functionality of all the variables:
   

    include "Snoopy.class.php";
    $snoopy = new Snoopy;

    $snoopy->proxy_host = "my.proxy.host";
    $snoopy->proxy_port = "8080";
   
    $snoopy->agent = "(compatible; MSIE 4.01; MSN 2.5; AOL 4.0; Windows 98)";
    $snoopy->referer = "http://www.microsnot.com/";
   
    $snoopy->cookies["SessionID"] = 238472834723489l;
    $snoopy->cookies["favoriteColor"] = "RED";
   
    $snoopy->rawheaders["Pragma"] = "no-cache";
   
    $snoopy->maxredirs = 2;
    $snoopy->offsiteok = false;
    $snoopy->expandlinks = false;
   
    $snoopy->user = "joe";
    $snoopy->pass = "bloe";
   
    if($snoopy->fetchtext("http://www.phpbuilder.com"))
    {
        while(list($key,$val) = each($snoopy->headers))
            echo $key.": ".$val."<br>\n";
        echo "<p>\n";
       
        echo "<PRE>".htmlspecialchars($snoopy->results)."</PRE>\n";
    }
    else
        echo "error fetching document: ".$snoopy->error."\n";


    Example:     fetched framed content and display the results
   
    include "Snoopy.class.php";
    $snoopy = new Snoopy;
   
    $snoopy->maxframes = 5;
   
    if($snoopy->fetch("http://www.ispi.net/"))
    {
        echo "<PRE>".htmlspecialchars($snoopy->results[0])."</PRE>\n";
        echo "<PRE>".htmlspecialchars($snoopy->results[1])."</PRE>\n";
        echo "<PRE>".htmlspecialchars($snoopy->results[2])."</PRE>\n";
    }
    else
        echo "error fetching document: ".$snoopy->error."\n";

 

 

<?php

//collect every content URL and save them to a file
function get_index($save_file, $prefix="index_"){
    $count = 68;
    $i = 1;
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open ". $save_file ." failed");
    while($i<$count){
        $url = $prefix . $i .".htm";
        echo "Get ". $url ."...";
        // pass the page URL as the base for the relative links it contains
        $url_str = get_content_url($url, get_url($url));
        echo " OK\n";
        fwrite($fp, $url_str);
        ++$i;
    }
    fclose($fp);
}

//fetch the target multimedia objects
function get_object($url_file, $save_file, $split="|--:**:--|"){
    if (!file_exists($url_file)) die($url_file ." not exist");
    $file_arr = file($url_file);
    if (!is_array($file_arr) || empty($file_arr)) die($url_file ." has no content");
    $url_arr = array_unique($file_arr);
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open save file ". $save_file ." failed");
    foreach($url_arr as $url){
        if (empty($url)) continue;
        echo "Get ". $url ."...";
        $html_str = get_url($url);
        $obj_str = get_content_object($html_str);
        echo " OK\n";
        fwrite($fp, $obj_str);
    }
    fclose($fp);
}

//walk a directory and process each file's contents
function get_dir($save_file, $dir){
    $dp = opendir($dir);
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open save file ". $save_file ." failed");
    while(($file = readdir($dp)) !== false){
        if ($file!="." && $file!=".."){
            echo "Read file ". $file ."...";
            $file_content = file_get_contents($dir . $file);
            $obj_str = get_content_object($file_content);
            echo " OK\n";
            fwrite($fp, $obj_str);
        }
    }
    fclose($fp);
}


//fetch the content of the given url
function get_url($url){
    $reg = '#^http://[^/].+$#';
    if (!preg_match($reg, $url)) die($url ." invalid");
    $fp = fopen($url, "r") or die("Open url: ". $url ." failed.");
    $content = '';
    while($fc = fread($fp, 8192)){
        $content .= $fc;
    }
    fclose($fp);
    if (empty($content)){
        die("Get url: ". $url ." content failed.");
    }
    return $content;
}

//fetch the given page over a raw socket
function get_content_by_socket($url, $host){
    $fp = fsockopen($host, 80) or die("Open ". $url ." failed");
    $header = "GET /".$url ." HTTP/1.1\r\n";
    $header .= "Accept: */*\r\n";
    $header .= "Accept-Language: zh-cn\r\n";
    $header .= "Accept-Encoding: gzip, deflate\r\n";
    $header .= "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Maxthon; InfoPath.1; .NET CLR 2.0.50727)\r\n";
    $header .= "Host: ". $host ."\r\n";
    //$header .= "Cookie: cnzz02=2; rtime=1; ltime=1148456424859; cnzz_eid=56601755-\r\n";
    $header .= "Connection: Close\r\n\r\n";

    fwrite($fp, $header);
    $contents = '';
    while (!feof($fp)) {
        $contents .= fgets($fp, 8192);
    }
    fclose($fp);
    return $contents;
}


//extract the urls contained in the given content
function get_content_url($host_url, $file_contents){

    //$reg = '/^(#|javascript.*?|ftp:\/\/.+|http:\/\/.+|.*?href.*?|play.*?|index.*?|.*?asp)+$/i';
    //$reg = '/^(down.*?\.html|\d+_\d+\.htm.*?)$/i';
    $rex = "/([hH][rR][eE][Ff])\s*=\s*['\"]*([^>'\"\s]+)[\"'>]*\s*/i";
    $reg = '/^(down.*?\.html)$/i';
    preg_match_all ($rex, $file_contents, $r);
    $result = ""; //array();
    foreach($r as $c){
        if (is_array($c)){
            foreach($c as $d){
                if (preg_match($reg, $d)){ $result .= $host_url . $d."\n"; }
            }
        }
    }
    return $result;
}

//extract the multimedia files from the given content
function get_content_object($str, $split="|--:**:--|"){
    $regx = "/href\s*=\s*['\"]*([^>'\"\s]+)[\"'>]*\s*(<b>.*?<\/b>)/i";
    preg_match_all($regx, $str, $result);

    if (count($result) == 3){
        $result[2] = str_replace("<b>多媒体: ", "", $result[2]);
        $result[2] = str_replace("</b>", "", $result[2]);
        $result = $result[1][0] . $split . $result[2][0] . "\n";
    }
    return $result;
}

?> 
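The href-extraction idea in get_content_url() can be tried offline; a minimal sketch against an invented HTML fragment:

```php
<?php
// Sketch: pull every href out of a fragment, then keep only the
// down*.html links, mirroring the filter used above. Input is made up.
$html = '<a href="down1.html">A</a> <a href="index.html">B</a> <a href="down2.html">C</a>';
preg_match_all('/href\s*=\s*[\'"]([^\'">\s]+)[\'"]/i', $html, $m);
$links = array_filter($m[1], function ($u) {
    return preg_match('/^down.*?\.html$/i', $u);  // keep only down*.html
});
print_r(array_values($links));
```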





Fetching a specific div block and images from a page with PHP

(2009-06-05 09:56:23)

1. Get every image in a given page:
<?php
// fetch the content at the given address into $text
$text=file_get_contents('http://andy.diimii.com/');

// grab every img tag and store the matches in the array $match
preg_match_all('/<img[^>]*>/Ui', $text, $match);

// print $match
print_r($match);
?>

-----------------
2. Get the first image in a given page:
<?php
// fetch the content at the given address into $text
$text=file_get_contents('http://andy.diimii.com/');

// grab the first img tag and store it in the array $match
preg_match('/<img[^>]*>/Ui', $text, $match);

// print $match
print_r($match);
?>
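As an alternative to regexes, PHP's bundled DOM extension parses the markup properly; a sketch against an invented fragment:

```php
<?php
// Sketch: extracting <img> tags with DOMDocument instead of a regex.
// The HTML fragment here is made up for illustration.
$html = '<p><img src="a.jpg"><img src="b.png"></p>';
$doc = new DOMDocument();
@$doc->loadHTML($html);            // @ silences warnings on sloppy HTML
$srcs = [];
foreach ($doc->getElementsByTagName('img') as $img) {
    $srcs[] = $img->getAttribute('src');
}
print_r($srcs);
```

Unlike a regex, the DOM approach is not thrown off by attribute order, quoting style, or nesting.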

------------------------------------


3. Get a specific div block (selected by id) from a given page:
<?php
// fetch the content at the given address into $text
$text=file_get_contents('http://andy.diimii.com/2009/01/seo%e5%8c%96%e7%9a%84%e9%97%9c%e9%8d%b5%e5%ad%97%e5%bb%a3%e5%91%8a%e9%80%a3%e7%b5%90/');

// strip newlines and whitespace (only needed for serialized content)
//$text=str_replace(array("\r","\n","\t","\s"), '', $text);

// capture the content of the div whose id is PostContent into the array $match
preg_match('/<div[^>]*id="PostContent"[^>]*>(.*?)<\/div>/si',$text,$match);

// print $match[0]
print($match[0]);
?>
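The same id-based capture can be exercised offline against an invented fragment:

```php
<?php
// Sketch: capture the inner HTML of the div with id="PostContent".
// The input fragment is made up for illustration.
$text = '<body><div id="PostContent"><p>hello</p></div><div>other</div></body>';
preg_match('/<div[^>]*id="PostContent"[^>]*>(.*?)<\/div>/si', $text, $match);
echo $match[1];                    // inner HTML of the PostContent div
```

Note that the lazy (.*?) stops at the first closing tag, so this breaks on divs nested inside PostContent; a DOM parser handles that case.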

-------------------------------------------
4. Combining 2 and 3 above:
<?php
// fetch the content at the given address into $text
$text=file_get_contents('http://andy.diimii.com/2009/01/seo%e5%8c%96%e7%9a%84%e9%97%9c%e9%8d%b5%e5%ad%97%e5%bb%a3%e5%91%8a%e9%80%a3%e7%b5%90/');

// capture the content of the div whose id is PostContent into the array $match
preg_match('/<div[^>]*id="PostContent"[^>]*>(.*?)<\/div>/si',$text,$match);

// grab the first img tag inside it and store it in the array $match2
preg_match('/<img[^>]*>/Ui', $match[0], $match2);

// print $match2
print_r($match2);
?>



  


  
分享到:
评论

相关推荐

    C#使用WebClient登录网站并抓取登录后的网页信息实现方法

    C#使用WebClient登录网站并抓取登录后的网页信息实现方法是一个非常有用的技术,可以帮助我们模拟浏览器的行为来登录网站和抓取页面内容。但是,我们需要注意相关的法律和道德规范,避免引起不良后果。 更多关于C#...

    php通过curl添加cookie伪造登陆抓取数据的方法

    在PHP开发中,有时我们需要从特定的网页抓取数据,尤其是那些需要用户登录后才能访问的页面。这种情况下,可以通过模拟登录,即“伪造登录”来实现数据抓取。本篇将详细介绍如何利用PHP的cURL库添加cookie来实现这个...

    PHP常用正则表达式汇总

    4. 提取网页内容:如抓取网页中的链接、标题等。 了解并熟练掌握这些正则表达式和PHP的正则函数,能极大地提高代码的效率和灵活性,让开发者在处理字符串时更加得心应手。在实际开发中,不断实践和积累,才能更好地...

    百度蜘蛛爬行跟踪统计系统 PHP v1.0.rar

    百度蜘蛛是百度搜索引擎的网络爬虫,它负责抓取互联网上的网页,并将这些网页的内容纳入百度的索引库。通过分析百度蜘蛛的爬行行为,网站管理员可以判断网站的可见性、更新频率以及内容对搜索引擎的友好程度。 3. ...

    php面试\笔试题汇总

    php笔试题汇总 1、抓取远程图片到本地,你会用什么函数? fsockopen, A 2、用最少的代码写一个求3值最大值的函数. function($a,$b,$c){ return $a&gt;$b? ($a&gt;$c? $a : $c) : ($b&gt;$c? $b : $c ); }

    php 远程下载图片的6种方法汇总源代码

    在PHP编程中,远程下载图片是一项常见的任务,尤其在网站内容抓取、数据备份或集成社交媒体功能时。这里,我们将详细探讨六种不同的方法来实现这个功能,每种方法都有其适用场景和优缺点。 1. **file_get_contents*...

    php笔试题汇总(超级精华的试题)

    1. **抓取远程图片到本地**:通常会使用`file_get_contents()`或`cURL`库来实现,但题目中提到了`fsockopen`,这是一个低级网络I/O函数,可以创建TCP/IP套接字连接。通过`fsockopen`配合`fread`和`fclose`等函数,...

    php知识点大总结

    php中从数据类型和变量定义,各种函数应用,常用的框架介绍,数据库联系,文件上传和下载,缓存,xhtml,服务器配置,文字处理,图片处Sphinx/Coreseek 特性,php检索,页面抓取数据

    myfirstspider.zip_WEB开发_PHP_

    PHP提供了诸如cURL库这样的工具,可以方便地发送HTTP请求,抓取网页内容。此外,我们可能还会接触到正则表达式或者DOM解析库(如PHP-HTML_Parser),用于提取有用的数据。 分布式爬虫的关键在于任务调度和结果汇总...

    毕业设计:基于PHP学生考勤系统(PHP项目).zip

    考勤记录模块是系统的心脏,它允许教师录入或自动抓取学生的出勤情况,如迟到、早退、请假等。最后,统计分析功能通过汇总考勤数据,生成各类报表,为教师评估学生表现、学校管理决策提供依据。 在数据库设计方面,...

    php使用curl详细解析及问题汇总

    以下是一个简单的CURL使用示例,用于从百度首页抓取内容: ```php &lt;?php // 初始化CURL会话 $ch = curl_init(); // 设置URL curl_setopt($ch, CURLOPT_URL, "http://baidu.com"); // 将响应内容保存在变量中,而...

    使用pthreads实现真正的PHP多线程(需PHP5.3以上版本)

    这段代码定义了一个名为`TestThreadRun`的类,继承自`Thread`,并在`run()`方法中实现了具体的线程任务,即通过curl获取URL对应的网页内容。`model_thread_result_get`函数接收一个URL数组,为每个URL创建一个新的`...

    基于Python的新闻聚合系统网络爬虫研究.pdf

    其中,网络爬虫负责从不同网站抓取数据,调度器负责管理爬虫的工作流程,而服务器则将爬取到的数据进行汇总处理,并输出到文件或数据库系统中。 Python作为一门广泛应用于数据处理、网络编程等领域的语言,为网络...

    PHP小偷程序的设计与实现方法详解

    通过这篇文章,我们可以了解到如何使用PHP编写小偷程序来抓取网页图片,并对程序的结构、功能和实现细节有了清晰的认识。对于对PHP编程和网络爬虫技术感兴趣的朋友,可以进一步研究相关专题文章,如《phpcurl用法...

    泛目录聚合版MIP泛目录视频导航泛目录

    - `tongji.php`、`tongjis.php`、`zong.php`:可能是统计脚本或者数据汇总页面,用于追踪网站流量和用户行为。 综合以上,这个压缩包提供了一个基础的MIP泛目录视频导航站点的构建框架,结合SEO策略,有望提升网站...

    discuz目录文件资料汇总

    - `robots.txt`:控制搜索引擎抓取。 - `rss.php`:RSS订阅功能。 - `search.php`:论坛搜索功能。 - `seccode.php`:生成验证码。 - `sitemap.php`:生成SiteMap。 - `space.php`:个人资料查看。 - `stats.php`:...

    Zblog模板SEO优化汇总终极版-xyklj-快乐界博客.doc

    【Zblog模板SEO优化汇总终极版】是一份针对Zblog平台的模板进行搜索引擎优化的综合指南,由快乐界博客的作者xyklj整理发布。Zblog是一个流行的PHP博客系统,适用于个人和技术爱好者创建和管理博客。SEO(搜索引擎...

Global site tag (gtag.js) - Google Analytics