SEO & Search

Term #85

Robots.txt

What it is

Robots.txt is a simple text file that lives at the root of your website and tells search engine crawlers which pages or sections they're allowed to crawl. It's a set of rules — "you can crawl this, but stay away from that." It doesn't actually block access the way a password does (and a page blocked from crawling can still be indexed if other sites link to it), but well-behaved crawlers like Googlebot respect it.
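A minimal robots.txt might look like this (the paths here are illustrative, not a recommendation for any particular site):

```
User-agent: *
Disallow: /admin/
Disallow: /checkout/

Sitemap: https://www.example.com/sitemap.xml
```

The `User-agent: *` line means the rules apply to all crawlers; each `Disallow` line names a path prefix they should stay away from.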

Why it matters

Robots.txt lets you prevent search engines from wasting crawl budget on pages that don't need to rank — admin pages, checkout flows, duplicate content, internal search results. By directing crawlers toward the pages that matter, you help Google understand your site more efficiently.
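To see how a well-behaved crawler applies these rules, here's a short sketch using Python's standard-library `urllib.robotparser`. The rules and URLs are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, as a crawler would have fetched them
rules = """User-agent: *
Disallow: /admin/
Disallow: /checkout/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved crawler checks before fetching each URL
print(parser.can_fetch("Googlebot", "https://example.com/admin/login"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/blog/post"))    # True
```

In a real crawler you'd call `parser.set_url(".../robots.txt")` and `parser.read()` to fetch the live file instead of parsing a string.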

The mistake most people make

Accidentally blocking important pages — or the entire site — through a misconfigured robots.txt file. This happens more than you'd think, especially after a website migration or platform switch. It's a small file that can cause enormous damage if it's wrong, and it should be checked any time the site undergoes major changes.
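The worst-case version of this mistake is just two lines — a staging-site configuration accidentally carried over to production:

```
User-agent: *
Disallow: /
```

That single `Disallow: /` tells every crawler to stay away from the entire site, which is why robots.txt deserves a check after any migration or platform switch.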

Want help with this?

Knowing what robots.txt means is useful. Having someone implement it correctly for your business is better. Let's have a real conversation — no pitch, no menu.