# How to Scrape Pages for a Given Keyword with Python

In today's era of information overload, web scraping has become an essential skill for data analysis, market research, and academic work. This article explains how to use Python to collect web content that matches a given keyword, covering tool selection, working code, and solutions to common problems.

---

## 1. Preparation

### 1.1 Tools and Libraries

- **Requests**: send HTTP requests and fetch page content
- **BeautifulSoup**: parse HTML/XML documents
- **Scrapy** (optional): a full-featured crawling framework
- **Selenium**: handle dynamically loaded pages
- **Regular expressions**: assist with text matching

Install the required libraries:

```bash
pip install requests beautifulsoup4 selenium
```
### 1.2 Check robots.txt

Before scraping, review the target site's robots.txt file (e.g. https://example.com/robots.txt) to confirm which paths may be crawled.
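If it helps, Python's standard library can parse robots.txt directly; a minimal sketch (the URL and path below are placeholders):

```python
from urllib import robotparser

# Parse the site's robots.txt and check whether a path may be crawled
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "https://example.com/news"))  # True if crawling is allowed
```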
## 2. A Basic Keyword Scraper

The following function fetches a static page and collects every text node that contains the keyword:

```python
import requests
from bs4 import BeautifulSoup

def scrape_with_keyword(url, keyword):
    headers = {'User-Agent': 'Mozilla/5.0'}
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()  # raise an error if the request failed
        soup = BeautifulSoup(response.text, 'html.parser')

        # find the text nodes that contain the keyword
        results = []
        for element in soup.find_all(string=lambda text: text and keyword in text):
            parent = element.parent
            results.append({
                'text': element.strip(),
                'tag': parent.name,
                'attributes': parent.attrs
            })
        return results
    except Exception as e:
        print(f"Error: {e}")
        return []

# example usage (the keyword '人工智能' means "artificial intelligence")
data = scrape_with_keyword('https://example.com/news', '人工智能')
print(data)
```
## 3. Handling Dynamic Pages

When the content is rendered by JavaScript, drive a real browser with Selenium:
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

def scrape_dynamic_page(url, keyword):
    driver = webdriver.Chrome()
    driver.get(url)
    try:
        # explicit wait until the page body is present
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.TAG_NAME, "body"))
        )
        page_source = driver.page_source
        soup = BeautifulSoup(page_source, 'html.parser')
        # ... (further processing is the same as for static pages)
    finally:
        driver.quit()
```
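If no visible browser window is needed, Chrome can also be run headless; a minimal sketch, assuming Selenium 4+ and a local Chrome installation:

```python
from selenium import webdriver

# Run Chrome without opening a window (useful on servers)
options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
```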
## 4. Scraping Multiple Pages

Search results are often split across several pages; a simple loop over the page number handles this:

```python
def paginated_scraper(base_url, keyword, pages=3):
    all_results = []
    for page in range(1, pages + 1):
        url = f"{base_url}?page={page}"
        print(f"Scraping page {page}...")
        all_results.extend(scrape_with_keyword(url, keyword))
    return all_results
```
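As a quick sanity check, the helper can be called directly; the URL below is a placeholder, and the `?page=` query format must match how the real site paginates:

```python
# Hypothetical call; adjust base_url and the page parameter to the target site
results = paginated_scraper("https://example.com/articles", "Python", pages=2)
print(f"Collected {len(results)} matches")
```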
## 5. Saving the Results

The matched items can be written to CSV or JSON for later analysis:

```python
import csv
import json

def save_to_csv(data, filename):
    if not data:
        return  # nothing to write
    with open(filename, 'w', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(f, fieldnames=data[0].keys())
        writer.writeheader()
        writer.writerows(data)

def save_to_json(data, filename):
    with open(filename, 'w', encoding='utf-8') as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
```
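For example, assuming `data` is the non-empty list returned by `scrape_with_keyword`:

```python
# Hypothetical usage of the helpers above
data = scrape_with_keyword("https://example.com/news", "Python")
save_to_csv(data, "results.csv")
save_to_json(data, "results.json")
```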
## 6. Dealing with Anti-Scraping Measures

- Add a random delay between requests:

  ```python
  import time
  import random

  time.sleep(random.uniform(1, 3))
  ```

- Use proxy IPs:

  ```python
  proxies = {
      'http': 'http://your_proxy:port',
      'https': 'https://your_proxy:port'
  }
  requests.get(url, proxies=proxies)
  ```
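Rotating the `User-Agent` header per request is another commonly used countermeasure; a minimal sketch (the strings below are ordinary browser User-Agents used purely for illustration):

```python
import random
import requests

# Small illustrative pool of desktop User-Agent strings; pick one per request
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
]

headers = {"User-Agent": random.choice(USER_AGENTS)}
response = requests.get("https://example.com", headers=headers, timeout=10)
```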
## 7. Complete Example

As an example, here is a scraper for articles on a news site that contain the keyword "区块链" (blockchain):
```python
def news_scraper():
    base_url = "https://news.example.com/search"
    keyword = "区块链"  # "blockchain"
    max_pages = 5

    results = paginated_scraper(base_url, keyword, max_pages)
    print(f"Found {len(results)} results")
    save_to_json(results, "blockchain_news.json")

if __name__ == "__main__":
    news_scraper()
```
## 8. Notes

- **Legal compliance**: follow the rules set out in the site's robots.txt.
- **Exception handling**: wrap scraping code in targeted try/except blocks:
```python
import requests

try:
    ...  # scraping code goes here
except requests.exceptions.RequestException as e:
    print(f"Network error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
```
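Transient network errors can also be retried automatically; a sketch using `requests` with urllib3's `Retry` (the retry count and status codes below are reasonable defaults, not values from this article):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry failed requests up to 3 times with exponential backoff
session = requests.Session()
retry = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
session.mount("http://", HTTPAdapter(max_retries=retry))
session.mount("https://", HTTPAdapter(max_retries=retry))

response = session.get("https://example.com", timeout=10)
```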
- **Performance optimization**: use `aiohttp` for asynchronous scraping, as sketched below.
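A minimal sketch of concurrent fetching with `aiohttp` and `asyncio`; the URLs are placeholders, and the keyword filtering reuses the same BeautifulSoup approach shown earlier:

```python
import asyncio
import aiohttp
from bs4 import BeautifulSoup

async def fetch_matches(session, url, keyword):
    # Fetch one page and return the text snippets that contain the keyword
    async with session.get(url) as resp:
        html = await resp.text()
    soup = BeautifulSoup(html, "html.parser")
    return [t.strip() for t in soup.find_all(string=lambda s: s and keyword in s)]

async def main(urls, keyword):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_matches(session, url, keyword) for url in urls]
        return await asyncio.gather(*tasks)

# Placeholder URLs, for illustration only
pages = asyncio.run(main(["https://example.com/a", "https://example.com/b"], "Python"))
```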
## 9. Summary

Building a keyword-targeted scraper in Python requires mastering:

1. Sending requests and handling responses
2. HTML parsing
3. Strategies for working around anti-scraping measures
4. Data storage and cleaning
Start by practicing on simple static pages and work up to more complex dynamic sites. For complete project code, see the Scrapy example projects on GitHub.
Tip: the example code in this article needs its selectors and processing logic adapted to the actual structure of each target site; run a small-scale test before scraping at larger volume.