Robots.txt Generator

Create perfect robots.txt files to control how search engines crawl your website. Set custom rules, block unwanted bots, and optimize your site's crawl budget with our easy-to-use generator.

✓ Real-time Preview ✓ Syntax Validation ✓ Custom Rules ✓ Copy & Download

Basic Configuration

Enter your website URL for sitemap references

Optional: Delay in seconds between crawler requests (0 = no delay)

Search Engine Specific Rules

Directory & File Rules

Use * as a wildcard (e.g., Disallow: /*.pdf blocks all PDF files)
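For example, a short sketch of wildcard rules (the paths are illustrative): * matches any sequence of characters, and $ anchors a pattern to the end of the URL.

User-agent: *
# Block every URL ending in .pdf ($ anchors the match to the end of the URL)
Disallow: /*.pdf$
# Block any "drafts" directory at any depth
Disallow: /*/drafts/

Wildcard support is honored by major crawlers such as Googlebot and Bingbot, though the original robots.txt standard did not define it.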

Sitemap Configuration

robots.txt Preview

# Generated by SevenZerosClub Robots.txt Generator
# https://sevenzerosclub.com/robots-txt-generator.html

User-agent: *
Allow: /

🚀 Quick Templates

WordPress Site

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://yoursite.com/sitemap.xml

E-commerce Store

User-agent: *
Disallow: /admin/
Disallow: /cart/
Disallow: /checkout/
Disallow: /search/
Sitemap: https://yoursite.com/sitemap.xml

Blog/News Site

User-agent: *
Allow: /
Disallow: /private/
Disallow: /search/
Crawl-delay: 10
Sitemap: https://yoursite.com/sitemap.xml

Corporate Website

User-agent: *
Allow: /
Disallow: /admin/
Disallow: /private/
Disallow: /internal/
Sitemap: https://yoursite.com/sitemap.xml

Restrictive (Private)

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /
Allow: /public/

Fully Open

User-agent: *
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
Sitemap: https://yoursite.com/sitemap-images.xml

How to Use Your robots.txt File

1. Generate File: Configure your rules and generate the robots.txt file using our tool.

2. Download File: Copy the content or download the robots.txt file to your computer.

3. Upload to Root: Upload the file to your website's root directory (yoursite.com/robots.txt).

4. Test & Monitor: Test your robots.txt file and monitor crawler behavior in Search Console.

Frequently Asked Questions

What is a robots.txt file and why do I need one?

A robots.txt file is a text file that tells search engine crawlers which pages or sections of your website they can or cannot access. It's placed in your website's root directory and helps you control how search engines crawl your site, manage crawl budget, and protect sensitive areas. While not mandatory, it's considered a best practice for SEO and website management.
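A minimal example (the domain is a placeholder):

User-agent: *
Disallow: /admin/
Sitemap: https://yoursite.com/sitemap.xml

This tells every crawler to skip the /admin/ directory and points it at the sitemap.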

Where should I place my robots.txt file?

The robots.txt file must be placed in the root directory of your website, accessible at yourwebsite.com/robots.txt. Search engines only look for it at that exact location: a file in a subdirectory is ignored, and each subdomain (e.g., blog.yoursite.com) needs its own robots.txt at its own root. Make sure the file is named exactly "robots.txt" (lowercase) and is accessible via HTTP/HTTPS.
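For example, with a placeholder domain:

https://example.com/robots.txt        ✓ applies to all of example.com
https://example.com/seo/robots.txt    ✗ ignored (not at the root)
https://blog.example.com/robots.txt   ✓ separate file governing the subdomain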

What's the difference between "Disallow" and "Allow" directives?

Disallow: Tells crawlers NOT to access specific paths or files. For example, "Disallow: /admin/" blocks access to the admin directory. Allow: Explicitly permits access to specific paths, often used to override broader Disallow rules. For example, you might disallow an entire directory but allow access to specific files within it. If no rules are specified, crawlers assume they can access everything.
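For example, this sketch mirrors the WordPress template above. Major crawlers apply the most specific (longest) matching rule, so the Allow overrides the broader Disallow for that one file:

User-agent: *
# Block the whole admin directory...
Disallow: /wp-admin/
# ...but permit this one file inside it
Allow: /wp-admin/admin-ajax.php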

Can robots.txt completely block access to my website?

No, robots.txt is not a security measure and cannot completely block access to your website. It's more like a "please don't enter" sign that well-behaved crawlers respect, but malicious bots or users can ignore it. If you need to truly restrict access, use server-level authentication, password protection, or IP blocking. Also, blocked pages might still appear in search results if they're linked from other sites.

Should I include my sitemap in robots.txt?

Yes, it's highly recommended to include your sitemap URL in your robots.txt file. This helps search engines discover and crawl your content more efficiently. Use the "Sitemap:" directive followed by the full URL to your sitemap (e.g., "Sitemap: https://yoursite.com/sitemap.xml"). You can include multiple sitemap references if you have different sitemaps for different content types.
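For example (placeholder URLs):

Sitemap: https://yoursite.com/sitemap.xml
Sitemap: https://yoursite.com/sitemap-news.xml
Sitemap: https://yoursite.com/sitemap-images.xml

Sitemap lines are independent of User-agent groups, so they can appear anywhere in the file.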

What is crawl delay and when should I use it?

Crawl delay specifies the minimum time (in seconds) that crawlers should wait between requests to your server. Use it if your server has limited resources or if you notice crawling is affecting site performance. However, be cautious: setting it too high can slow down indexing of your content. Most modern websites don't need crawl delay, and Google ignores this directive, preferring to manage crawl rate automatically through Search Console.
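For example, Bing documents support for the directive, so a sketch like this asks Bingbot to wait at least ten seconds between requests:

User-agent: Bingbot
Crawl-delay: 10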

How do I test if my robots.txt file is working correctly?

You can test your robots.txt file in several ways:

1. Direct access: Visit yoursite.com/robots.txt to confirm the file loads correctly.

2. Google Search Console: Use the robots.txt Tester tool to check syntax and test specific URLs.

3. Online validators: Run the file through a robots.txt validation tool to catch errors.

4. Monitor crawling: Check your server logs or Search Console to see if crawlers are respecting your rules.