In Python, you can use the requests and random libraries to rotate proxy IPs in a crawler. Here is a simple example:

1. Install the requests library:

```
pip install requests
```
2. Create a file named proxies.txt with one proxy IP per line:

```
http://proxy1.example.com:8080
http://proxy2.example.com:8080
http://proxy3.example.com:8080
```
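Before wiring the list into the crawler, it can help to make the file loader a bit more forgiving. As a small sketch (the blank-line and `#`-comment handling is an assumption, not part of the original file format), a dedicated loader might look like:

```python
def load_proxies(path="proxies.txt"):
    """Load proxy IPs, one per line.

    Blank lines and lines starting with '#' are skipped, so the
    file can hold comments (an assumed extension of the format).
    """
    with open(path) as f:
        return [
            line.strip()
            for line in f
            if line.strip() and not line.lstrip().startswith("#")
        ]
```

This keeps file-format concerns in one place, so the crawler itself only ever sees a clean list of proxy URLs.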
3. Create dynamic_crawler.py, which uses the requests and random libraries to pick a random proxy IP from the list and send the HTTP request through it:

```python
import requests
import random

# Read the proxy IP list from the file
with open('proxies.txt', 'r') as f:
    proxies = [line.strip() for line in f]

def get_proxy():
    # Randomly choose a proxy IP from the list
    return random.choice(proxies)

def fetch_url(url):
    proxy = get_proxy()
    try:
        response = requests.get(
            url,
            proxies={"http": proxy, "https": proxy},
            timeout=5,
        )
        response.raise_for_status()
        return response.text
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url} using proxy {proxy}: {e}")
        return None

if __name__ == "__main__":
    url = "https://example.com"
    content = fetch_url(url)
    if content:
        print(f"Content of {url}:")
        print(content)
```
In this example, the get_proxy function randomly selects a proxy IP from the list loaded from proxies.txt, and the fetch_url function sends the HTTP request through that proxy. If the request succeeds, the response body is returned; otherwise an error message is printed.
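One failing proxy shouldn't abort the whole fetch. A common refinement (a sketch, not part of the original script) is to try several distinct proxies before giving up. Sampling without replacement avoids hitting the same dead proxy twice, and injecting the HTTP call as a parameter (`do_get` below, a hypothetical hook around `requests.get`) keeps the retry logic testable without a network:

```python
import random

def proxy_attempts(proxies, max_attempts=3):
    # Yield up to max_attempts distinct proxies in random order;
    # sampling without replacement so no proxy is retried twice.
    k = min(max_attempts, len(proxies))
    yield from random.sample(proxies, k)

def fetch_with_retries(url, proxies, do_get, max_attempts=3):
    # do_get(url, proxy) performs the actual HTTP request (e.g. a
    # thin wrapper around requests.get); it is injected so this
    # rotation logic can be exercised without real proxies.
    for proxy in proxy_attempts(proxies, max_attempts):
        try:
            return do_get(url, proxy)
        except Exception as e:
            print(f"Proxy {proxy} failed for {url}: {e}")
    return None
```

In the script above, `do_get` would simply call `requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=5)` and return `response.text`.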