
Recently my cheap little lightweight server kept redlining on resources. At first I thought traffic was surging and I was about to hit it big.
Then I checked the logs: it turned out all kinds of AI crawlers were "dropping by" every day, bleeding the machine dry...
So how do you politely turn these uninvited "guests" away?
This is where the robots.txt file comes in handy.
What is robots.txt?
robots.txt is a plain-text file placed in a website's root directory that tells web crawlers and bots what they are allowed to fetch. With it, you can specify which content may be crawled and which should be off-limits.
Keep in mind that while most legitimate bots follow the rules in robots.txt, some malicious crawlers simply ignore them, so you will still want to pair it with other defenses.
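A minimal example: the following lets every crawler (the * wildcard) roam the whole site except the /admin/ path (the path here is purely illustrative):

User-agent: *
Disallow: /admin/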
How do you block a specific crawler?
Add a directive pair like this to robots.txt:
User-agent: [crawler name]
Disallow: /
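For instance, to shut OpenAI's GPTBot out of the entire site:

User-agent: GPTBot
Disallow: /

The Disallow value is a path prefix, and / matches every URL on the site, so the named crawler is denied everywhere.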
A block list for common AI bots
Here is a ready-to-use robots.txt example that denies access to a number of well-known AI crawlers:
User-agent: AddSearchBot
User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: aiHitBot
User-agent: AmazonBuyForMe
User-agent: atlassian-bot
User-agent: amazon-kendra-
User-agent: Amazonbot
User-agent: Andibot
User-agent: Anomura
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: Awario
User-agent: bedrockbot
User-agent: bigsur.ai
User-agent: Bravebot
User-agent: Brightbot 1.0
User-agent: BuddyBot
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT Agent
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
User-agent: Claude-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: Cloudflare-AutoRAG
User-agent: CloudVertexBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Cotoyogi
User-agent: Crawlspace
User-agent: Datenbank Crawler
User-agent: DeepSeekBot
User-agent: Devin
User-agent: Diffbot
User-agent: DuckAssistBot
User-agent: Echobot Bot
User-agent: EchoboxBot
User-agent: FacebookBot
User-agent: facebookexternalhit
User-agent: Factset_spyderbot
User-agent: FirecrawlAgent
User-agent: FriendlyCrawler
User-agent: Gemini-Deep-Research
User-agent: Google-CloudVertexBot
User-agent: Google-Extended
User-agent: Google-Firebase
User-agent: Google-NotebookLM
User-agent: GoogleAgent-Mariner
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iaskspider/2.0
User-agent: IbouBot
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: LinerBot
User-agent: Linguee Bot
User-agent: meta-externalagent
User-agent: Meta-ExternalAgent
User-agent: meta-externalfetcher
User-agent: Meta-ExternalFetcher
User-agent: meta-webindexer
User-agent: MistralAI-User
User-agent: MistralAI-User/1.0
User-agent: MyCentralAIScraperBot
User-agent: netEstate Imprint Crawler
User-agent: NovaAct
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: OpenAI
User-agent: Operator
User-agent: PanguBot
User-agent: Panscient
User-agent: panscient.com
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: PhindBot
User-agent: Poseidon Research Crawler
User-agent: QualifiedBot
User-agent: QuillBot
User-agent: quillbot.com
User-agent: SBIntuitionsBot
User-agent: Scrapy
User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SWA
User-agent: ShapBot
User-agent: Sidetrade indexer bot
User-agent: TerraCotta
User-agent: Thinkbot
User-agent: TikTokSpider
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: WARDBot
User-agent: Webzio-Extended
User-agent: wpbot
User-agent: YaK
User-agent: YandexAdditional
User-agent: YandexAdditionalBot
User-agent: YouBot
Disallow: /
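A note on why a single Disallow: / at the end is enough: under the robots.txt standard (RFC 9309), consecutive User-agent lines form one group, and the rules that follow apply to every agent in that group. The same pattern in miniature, blocking two AI bots while leaving everyone else unrestricted, would look like this:

User-agent: GPTBot
User-agent: CCBot
Disallow: /

User-agent: *
Disallow: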
How to use it
Save the content above as a file named robots.txt and upload it to your site's root directory, and it will take effect.
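If you want to double-check that the file parses the way you expect, Python's standard library ships a robots.txt parser; a quick sanity check might look like the sketch below (example.com stands in for your own domain):

from urllib.robotparser import RobotFileParser

# Point the parser at the live robots.txt (example.com is a placeholder)
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# A listed AI crawler should be refused; an agent not in the list is allowed
print(rp.can_fetch("GPTBot", "https://example.com/"))       # expected: False
print(rp.can_fetch("RandomAgent", "https://example.com/"))  # expected: True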
