Your WordPress robots.txt Is Blocking Google — Fix It Now in 2026

✍️ By Vikas Rohilla 📅 Updated: April 2026 ⏱️ 11 min read 🏷️ WordPress SEO

Open your browser right now and go to yourdomain.com/robots.txt.

What you see there — or do not see — is directly controlling what Google can and cannot crawl on your WordPress site. A single wrong line in that file can block your entire site from Google. A missing line means Googlebot is crawling pages that should never be indexed. And most WordPress site owners have never looked at theirs.

This guide covers everything about WordPress robots.txt: what it does, how WordPress generates it, what a correct robots.txt looks like, the most dangerous mistakes to avoid, and how to optimise yours to protect important pages while directing Googlebot exactly where you want it to go.

🚨 Check this first: Visit yourdomain.com/robots.txt in your browser right now. If you see Disallow: / — your entire site is blocked from Google. If you see no Sitemap: line — Google does not know where your sitemap is. Both are critical fixes that take under 5 minutes.
Your WordPress robots.txt file viewed in browser at yourdomain.com/robots.txt — this file controls what Googlebot can and cannot crawl. A misconfigured WordPress robots.txt is one of the most damaging and most overlooked SEO issues on WordPress sites.

What Is WordPress robots.txt — And Why Does It Matter?

The robots.txt file is a plain text file that sits at the root of your domain. Every time Googlebot visits your site, it reads robots.txt first — before crawling a single page. The file tells crawlers which URLs they are allowed to visit and which they should skip.

On a WordPress site, robots.txt serves several critical purposes. It prevents Googlebot from wasting crawl budget on pages that should never rank — admin pages, login pages, internal search results, tag archives. It points Googlebot directly to your sitemap. And it can block specific bots — like the Mediapartners-Google AdSense bot — from areas of your site that should not be evaluated for ad serving.
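For instance, a per-bot rule group looks like this (a minimal sketch; the blocked path is a hypothetical placeholder, not a rule every site needs):

User-agent: Mediapartners-Google
Disallow: /members-only/

A User-agent line opens a rule group, and the Disallow/Allow lines beneath it apply only to crawlers matching that name; every other bot keeps following your generic User-agent: * group.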

The reason WordPress robots.txt matters so much is that WordPress does not ship with a perfectly optimised robots.txt by default. The default WordPress robots.txt is functional but minimal — it does not block the many low-value URL patterns that WordPress generates automatically, and the only sitemap it can reference is WordPress's own barebones wp-sitemap.xml (appended automatically since WordPress 5.5), not the richer sitemap your SEO plugin generates. Every hour you spend optimising other parts of your SEO without checking robots.txt might be partially undermined by Googlebot crawling pages it should be skipping.

📖 Related: The robots.txt file and your sitemap work as a pair — robots.txt tells Google what NOT to crawl, your sitemap tells it what TO prioritise. If your sitemap is not working correctly, read the WordPress XML Sitemap Not Working guide to fix both together.

How WordPress Generates robots.txt

WordPress handles robots.txt in a way that surprises many site owners: by default, WordPress generates a virtual robots.txt dynamically — there is no actual robots.txt file in your root directory. WordPress intercepts requests to /robots.txt and generates the output on the fly from its internal settings.

This means if you look in cPanel File Manager → public_html and do not see a robots.txt file — that is normal for a default WordPress install. The file is being served virtually.

However, once you use an SEO plugin like RankMath or Yoast, the robots.txt management shifts to the plugin. Both RankMath and Yoast give you a built-in editor to modify your WordPress robots.txt without touching any server files directly. This is the recommended approach — it keeps robots.txt edits inside WordPress and prevents the file from being accidentally overwritten during updates.

If a physical robots.txt file exists in your public_html folder, it takes precedence over the virtual WordPress version. This can cause conflicts if both exist — the physical file may not match what your SEO plugin thinks the robots.txt contains.

💡 Check which version you have: Go to cPanel File Manager → public_html. If you see a robots.txt file there — you have a physical file. If you do not see one — WordPress is generating it virtually. Either can work, but you should only be editing one of them. If you use RankMath or Yoast, manage it through the plugin editor and delete any conflicting physical file.

The Default WordPress robots.txt — What It Contains

WordPress’s default virtual robots.txt looks like this:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

This blocks all crawlers from the WordPress admin area, with one exception: admin-ajax.php, which needs to be accessible because it handles AJAX requests that plugins use for frontend functionality.

That is it. That is the default. It does not block tag archives. It does not block search result pages. It does not block the login page directly. The only sitemap reference it can carry is the Sitemap: line WordPress 5.5+ appends for its own wp-sitemap.xml, not the sitemap your SEO plugin builds (SEO plugins disable the core sitemap). For a simple personal blog, this might be acceptable. For any site actively trying to rank and control its crawl budget, it is not enough.

The Correct WordPress robots.txt — Complete Template

This is the WordPress robots.txt template that covers the most important optimisations for a standard WordPress blog or content site:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-login.php
Disallow: /?s=
Disallow: /search/
Disallow: /feed/
Disallow: /comments/feed/
Disallow: /trackback/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://yourdomain.com/sitemap_index.xml

Replace yourdomain.com with your actual domain. Both RankMath and Yoast use /sitemap_index.xml by default, but confirm the exact URL by visiting yourdomain.com/sitemap_index.xml and checking that it loads.

✅ What each line does:
Disallow: /wp-admin/ — blocks admin pages (the default rule; keep it)
Disallow: /wp-login.php — keeps the login page out of the crawl
Disallow: /?s= and /search/ — block internal search results (an unlimited supply of crawlable URLs otherwise)
Disallow: /feed/ and /comments/feed/ — keep RSS feed URLs out of the crawl
Disallow: /trackback/ — blocks legacy pingback/trackback URLs
Allow: /wp-admin/admin-ajax.php — the exception that keeps plugin AJAX working; it wins over the /wp-admin/ block because Google applies the most specific (longest) matching rule
Sitemap: — tells every crawler exactly where your sitemap is

How to Edit WordPress robots.txt — 3 Methods

Method 1 — RankMath (Recommended)

  1. Go to WordPress Dashboard → RankMath → General Settings
  2. Click the “Edit robots.txt” button (found in the General tab)
  3. The current robots.txt content appears in an editable text box
  4. Replace the content with the template above (adjusted for your domain)
  5. Click Save Changes
  6. Verify by visiting yourdomain.com/robots.txt in your browser
RankMath → General Settings → Edit robots.txt — the built-in robots.txt editor lets you manage your WordPress robots.txt directly inside the dashboard without touching any server files. This is the recommended approach for any WordPress site using RankMath.

Method 2 — Yoast SEO

  1. Go to SEO → Tools → File Editor
  2. The robots.txt file content is shown in an editable section
  3. Make your changes → click Save changes to robots.txt
  4. Verify at yourdomain.com/robots.txt

Method 3 — cPanel File Manager (Direct Edit)

  1. Log in to your hosting control panel (cPanel, or hPanel on Hostinger) and open File Manager
  2. Navigate to public_html
  3. If a robots.txt file exists — right-click → Edit
  4. If no file exists — right-click → New File → name it robots.txt → Edit
  5. Paste the template above → Save Changes
  6. Verify at yourdomain.com/robots.txt
⚠️ Do not maintain two robots.txt files. If you edit via cPanel and also have RankMath generating a virtual robots.txt, they can conflict. Choose one method and stick to it. The safest approach: use RankMath’s editor, and if a physical robots.txt exists in cPanel, delete it after confirming RankMath’s version is being served correctly.
cPanel / Hostinger File Manager → public_html — if a physical robots.txt file exists here, it takes precedence over WordPress’s virtual version. Check for conflicts between this physical file and what your SEO plugin is generating.

How to Test Your WordPress robots.txt

After editing your WordPress robots.txt, always test it before assuming it is working correctly. A single syntax error can block Googlebot from your entire site.

Test 1 — Browser Check

Visit yourdomain.com/robots.txt in your browser. The file content should load as plain text. Verify that every rule you added is visible and correctly formatted — no extra spaces before Disallow:, correct URL patterns, sitemap line present at the bottom.
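If you prefer the terminal, the same check can be scripted. A minimal sketch using only Python's standard library, with yourdomain.com as a placeholder to replace:

import urllib.request

# Fetch the live robots.txt exactly as crawlers see it.
url = "https://yourdomain.com/robots.txt"  # placeholder - use your real domain
with urllib.request.urlopen(url) as response:
    body = response.read().decode("utf-8", errors="replace")

print(body)

# Flag the two critical problems this guide highlights.
lines = [line.strip() for line in body.splitlines()]
if "Disallow: /" in lines:
    print("WARNING: 'Disallow: /' found - your entire site is blocked.")
if not any(line.lower().startswith("sitemap:") for line in lines):
    print("WARNING: no 'Sitemap:' line found - add one.")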

Test 2 — Google Search Console robots.txt Report

  1. Open Google Search Console → click Settings (gear icon, bottom of left sidebar)
  2. Under “Crawling” — find “robots.txt” and click “Open Report”
  3. The report shows the robots.txt versions Google has fetched for your site, when each was last crawled, and any parse errors or warnings
  4. Google retired the standalone robots.txt Tester in late 2023, so test individual URLs with URL Inspection: paste the full URL (for example https://yourdomain.com/?s=test) into the search bar at the top of GSC
  5. A URL your rules block should report “Blocked by robots.txt” under “Crawl allowed?” in the Page indexing section
  6. Inspect a page you DO want indexed — like a blog post URL — and confirm crawling is allowed
Google Search Console → Settings → robots.txt report — shows the robots.txt file Google has fetched for your WordPress site and any errors it found. Pair it with URL Inspection to confirm whether specific URLs are allowed or blocked, and always re-check after any robots.txt edit.
✅ Key URLs to verify (via URL Inspection, or the script after this list):
/?s=test → should show Blocked
/wp-admin/ → should show Blocked
/wp-admin/admin-ajax.php → should show Allowed
/your-post-slug/ → should show Allowed
/feed/ → should show Blocked
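To script these spot-checks, Python's standard-library robots.txt parser can run them against your live file. A sketch with a placeholder domain and post slug; one caveat is that Python's parser applies rules in file order rather than Googlebot's most-specific-rule-wins logic, so the Allow: admin-ajax.php exception is left out here and is better verified in GSC:

from urllib.robotparser import RobotFileParser

SITE = "https://yourdomain.com"  # placeholder - use your real domain
parser = RobotFileParser(SITE + "/robots.txt")
parser.read()  # fetches and parses the live file

# (path, expected_allowed) pairs mirroring the checklist above.
checks = [
    ("/?s=test", False),          # internal search: should be Blocked
    ("/wp-admin/", False),        # admin area: should be Blocked
    ("/your-post-slug/", True),   # placeholder post URL: should be Allowed
    ("/feed/", False),            # RSS feed: should be Blocked
]

for path, expected in checks:
    allowed = parser.can_fetch("Googlebot", SITE + path)
    status = "OK  " if allowed == expected else "FAIL"
    print(f"{status} {path}: {'Allowed' if allowed else 'Blocked'}")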

WordPress robots.txt for WooCommerce

WooCommerce sites need additional robots.txt rules because WooCommerce generates many URL patterns that should not be indexed — cart pages, checkout pages, account pages, and product filter combinations that create thousands of near-duplicate URLs.

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-login.php
Disallow: /?s=
Disallow: /search/
Disallow: /feed/
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Disallow: /order-received/
Disallow: /shop/?orderby=
Disallow: /shop/?filter_
Allow: /wp-admin/admin-ajax.php

Sitemap: https://yourdomain.com/sitemap_index.xml

The Disallow: /shop/?orderby= and Disallow: /shop/?filter_ lines are particularly important — they prevent Googlebot from crawling thousands of product filter and sorting URL combinations that create crawl budget waste without ranking value.
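One refinement worth knowing: those two rules only match URLs that begin exactly with /shop/?, so filter parameters on paginated or nested shop URLs (for example /shop/page/2/?orderby=price) slip through. Googlebot supports * wildcards in robots.txt patterns, so a broader variant can catch those too. An illustrative sketch, to be tested against your store's real URLs in GSC before you rely on it:

Disallow: /*?orderby=
Disallow: /*?*filter_
Disallow: /*?*add-to-cart=

The last line targets WooCommerce's ?add-to-cart= query URLs, another common source of crawl waste; treat all three patterns as candidates to verify, not guaranteed drop-in replacements.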

📖 Related: robots.txt blocking is one layer of crawl budget optimisation — but it works best when combined with noindex settings for archive pages and clean sitemap submission. Read the complete WordPress Crawl Budget Guide for the full stack of optimisations that work alongside robots.txt.

The Most Dangerous WordPress robots.txt Mistakes

🔴 Disallow: /

Blocks your ENTIRE site from Google. Classic causes: a Disallow: / line added manually or by a plugin, or, on older WordPress versions, the “Discourage search engines” setting left on after development (current WordPress handles that setting with a noindex directive instead, with the same end result of keeping your site out of Google). Check: Settings → Reading → “Discourage search engines” must be UNCHECKED.

🔴 Blocking CSS and JS

Old advice was to block /wp-content/ in robots.txt. This blocks your CSS and JavaScript from Googlebot — preventing it from rendering your pages correctly, so Google evaluates broken, unstyled versions of your pages when assessing mobile usability and page experience.

🟡 No Sitemap Line

Without “Sitemap: https://yourdomain.com/sitemap_index.xml” in robots.txt, many crawlers never discover your sitemap URL unless you submit it manually in GSC. Always include it.

🟡 Blocking /uploads/

Some security plugins block /wp-content/uploads/ in robots.txt. This prevents Google from crawling your image files at all — keeping them out of Google Images and out of the rendered versions of your pages that Googlebot evaluates.

🔵 Wildcard Blocking /*.xml

Blocking all .xml files with a wildcard rule also blocks your sitemap. If you see “Disallow: /*.xml” in your robots.txt — this is preventing Googlebot from reading your sitemap file entirely.

🟣 Two Conflicting Files

Physical robots.txt in cPanel + virtual version from SEO plugin = the physical file wins. Your SEO plugin edits do nothing. Check cPanel for a physical file and delete it if using plugin management.

WordPress robots.txt and the “Discourage Search Engines” Setting

WordPress has a built-in setting that tells search engines to stay away from your entire site — older WordPress versions did this by adding Disallow: / to robots.txt, while current versions output a noindex directive instead. It exists for development environments where you do not want Google indexing a site that is not ready.

The problem is that many WordPress sites are launched with this setting accidentally left on. The site goes live. Traffic never comes. The owner spends months wondering why nothing ranks — and the answer has been sitting in Settings → Reading the entire time.

  1. Go to WordPress Dashboard → Settings → Reading
  2. Find “Search engine visibility”
  3. The checkbox should say “Discourage search engines from indexing this site”
  4. This checkbox must be UNCHECKED on any live site
  5. If it was checked — uncheck it → Save Changes → immediately verify that yourdomain.com/robots.txt contains no Disallow: / line and that your homepage source no longer carries a noindex robots meta tag
🚨 Was your site blocking Google? If you just unchecked “Discourage search engines” — go immediately to Google Search Console → URL Inspection → enter your homepage URL → Request Indexing. Then submit your sitemap fresh: GSC → Sitemaps → delete old submission → resubmit. This starts the recovery process for any rankings lost while the site was blocked.

robots.txt vs noindex — What Is the Difference?

This is one of the most important distinctions in technical SEO, and it is commonly misunderstood. robots.txt and noindex both control what Google sees — but they work in completely different ways.

robots.txt Disallow vs noindex meta tag:
  • What it does: Disallow tells Googlebot not to crawl the URL; noindex tells Google not to index it.
  • Does Google crawl the page? Disallow: no, Googlebot never visits the URL. noindex: yes, it visits the page but does not add it to the index.
  • Can the page still appear in results? Disallow: yes; if other sites link to it, Google may still show the bare URL with no description. noindex: no; it is a direct instruction not to index.
  • Use for: Disallow suits admin pages, login pages, search results, and feeds (pages you never want crawled). noindex suits tag archives, author pages, and paginated pages (pages you want accessible but not indexed).
  • Wrong use case: do not use Disallow on pages you want noindexed; if Googlebot cannot crawl the page, it cannot see the noindex tag. Do not use noindex for security; Google respects it but other crawlers may not.

The critical rule: never use robots.txt to block a page you want to noindex. If Googlebot cannot crawl the page, it cannot see the noindex meta tag, and may still show the URL in search results based on links from other pages. For pages you want deindexed — use noindex and allow crawling.
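To verify that combination on a specific page, a standard-library Python sketch like the following can help. The domain and tag path are placeholders, and the noindex detection is a crude string match rather than a proper HTML parse:

import urllib.request
from urllib.robotparser import RobotFileParser

SITE = "https://yourdomain.com"      # placeholder - use your real domain
PAGE = SITE + "/tag/example-tag/"    # placeholder - a page you want deindexed

# 1. Googlebot must be ALLOWED to crawl it, or it can never see the noindex.
robots = RobotFileParser(SITE + "/robots.txt")
robots.read()
crawlable = robots.can_fetch("Googlebot", PAGE)

# 2. The page (or its X-Robots-Tag header) must carry a noindex directive.
request = urllib.request.Request(PAGE, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(request) as response:
    header_noindex = "noindex" in (response.headers.get("X-Robots-Tag") or "")
    html = response.read().decode("utf-8", errors="replace")
meta_noindex = 'name="robots"' in html and "noindex" in html  # crude check

print(f"Crawlable:       {crawlable}   (must be True)")
print(f"noindex present: {header_noindex or meta_noindex}   (must be True)")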

robots.txt optimised — now optimise your server too

Controlling what Googlebot crawls is one layer. How fast it can crawl is another. Hostinger Business and Cloud plans with LiteSpeed servers deliver pages faster to Googlebot — more pages crawled per session, better indexing speed for all the pages your optimised robots.txt is now pointing crawlers towards.

Explore Hostinger →

How to Monitor Your robots.txt Going Forward

  1. Check after every plugin update: Security plugins, SEO plugins, and caching plugins can all modify your robots.txt. After any major plugin update, visit yourdomain.com/robots.txt and verify the content is still correct — the sketch after this list automates the comparison.
  2. Check after any site migration: Domain changes, HTTP to HTTPS moves, and server migrations can reset or overwrite your WordPress robots.txt. Post-migration robots.txt verification is non-negotiable.
  3. Monitor GSC “Blocked by robots.txt” count: GSC → Pages → “Blocked by robots.txt” shows URLs Googlebot tried to crawl but was blocked by your file. Check this monthly — unexpected pages appearing here signal a misconfigured rule.
  4. Spot-check in GSC quarterly: Run URL Inspection on your 5 most important page types to confirm they are still crawlable after any changes to your setup.
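A minimal drift monitor for point 1: a Python standard-library sketch that saves a baseline copy on first run and reports any change afterwards. The domain and baseline filename are placeholders, and you would schedule it with cron or Task Scheduler:

import pathlib
import urllib.request

URL = "https://yourdomain.com/robots.txt"       # placeholder - your real domain
BASELINE = pathlib.Path("robots_baseline.txt")  # local copy of the known-good file

# Fetch the live file exactly as crawlers see it.
with urllib.request.urlopen(URL) as response:
    live = response.read().decode("utf-8", errors="replace")

if not BASELINE.exists():
    BASELINE.write_text(live, encoding="utf-8")
    print("Baseline saved - rerun after plugin updates or migrations.")
elif live != BASELINE.read_text(encoding="utf-8"):
    print("CHANGED: live robots.txt no longer matches the saved baseline!")
else:
    print("OK: robots.txt unchanged.")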
Google Search Console → Pages → “Blocked by robots.txt” — after optimising your WordPress robots.txt, this count should only include genuinely low-value pages you intentionally blocked. Any important content pages appearing here signal a misconfigured Disallow rule that needs immediate fixing.
🔍 ToolXray Free WordPress Technical Audit

Check robots.txt health alongside broken links, crawl budget, Core Web Vitals and 80+ SEO signals — free scan, no signup required.

Run Free Audit →

Complete WordPress robots.txt Checklist

  • Visit yourdomain.com/robots.txt right now — confirm it loads and does not contain Disallow: /
  • Settings → Reading — “Discourage search engines” is UNCHECKED on your live site
  • wp-admin/ is blocked — default WordPress rule, must remain
  • admin-ajax.php is allowed — required for plugin AJAX functionality
  • /?s= is blocked — prevents unlimited internal search result URLs from being crawled
  • Sitemap: line is present at the bottom pointing to your correct sitemap URL
  • No /wp-content/ block — blocking this breaks CSS/JS rendering for Googlebot
  • No /*.xml wildcard block — this blocks your sitemap
  • No /uploads/ block — this prevents image indexing
  • Only ONE robots.txt in effect — either physical file OR plugin-generated, not both
  • Verify in GSC — robots.txt report fetches without errors, and URL Inspection spot-checks on allowed and blocked URLs return the expected results
  • WooCommerce sites: /cart/ /checkout/ /my-account/ blocked
  • Monitor monthly — check after every plugin update and migration

The Bottom Line

Your WordPress robots.txt file is one of the first things Googlebot reads when it visits your site — and for most WordPress sites, it has never been deliberately optimised. The default rules are minimal. The common mistakes range from accidentally blocking your entire site to leaving internal search results open for unlimited crawling.

A well-configured WordPress robots.txt takes 15 minutes to set up and requires a quick check after every major plugin update or migration. The payoff is direct: Googlebot stops wasting crawl time on admin pages, login forms, and internal search results, and spends more of each visit crawling the content you actually want ranked.

Start by visiting yourdomain.com/robots.txt right now. Verify what is there. Update it with the template in this guide. Test it in Google Search Console’s robots.txt tester. Then add a monthly calendar reminder to check it after any significant site changes. That combination — a correct robots.txt plus consistent monitoring — is all it takes to eliminate this as an SEO liability permanently.

🔍 Check Your WordPress robots.txt Health

Free technical audit — robots.txt, crawl health, broken links, Core Web Vitals and 80+ signals

Run Free Audit at ToolXray →

Related Articles

🤖 WordPress Crawl Budget Guide — robots.txt is one layer of crawl control; combine it with noindex settings and sitemap optimisation for maximum crawl efficiency.

🗺️ WordPress XML Sitemap Not Working — robots.txt and your sitemap work as a pair. Fix both together for complete crawl control.

🔁 WordPress Duplicate Content Fix — robots.txt blocks crawling; canonical tags handle duplicate indexing. Use both together for clean site architecture.

🔬 Complete Technical SEO Audit — robots.txt is one checkpoint in an 80+ signal audit. Run the full check after optimising your crawl setup.

🔗 WordPress Permalink Settings — clean permalinks reduce the URL patterns you need to block in robots.txt. Fix both for the cleanest crawl architecture.

📐 Technical SEO for Beginners — robots.txt, canonical tags, sitemap, crawl budget: the complete technical SEO foundation in one guide.

Frequently Asked Questions

❓ Does WordPress create a robots.txt file automatically?
Yes — WordPress generates a virtual robots.txt dynamically when no physical file exists in your root directory. This virtual file contains the minimal default rules blocking /wp-admin/ and allowing admin-ajax.php. However, once you install an SEO plugin like RankMath or Yoast, robots.txt management shifts to the plugin, which gives you a built-in editor. If a physical robots.txt file exists in your public_html folder, it overrides the virtual WordPress version — check cPanel to confirm you only have one robots.txt in effect.
❓ Should I block tag and category pages in robots.txt?
No — do not block tag and category pages in robots.txt. If Googlebot cannot crawl these pages, it cannot see any noindex meta tags you have set on them, and may still show the URLs in search results based on links from other pages. The correct approach is to allow crawling of these pages but set them to noindex in your SEO plugin (RankMath → Titles & Meta → Tags → noindex). This way Googlebot crawls the page, reads the noindex instruction, and correctly removes it from the index. robots.txt blocking is for pages you never want crawled at all — not for pages you want deindexed.
❓ How do I check if my WordPress robots.txt is blocking Google?
Three checks: (1) Visit yourdomain.com/robots.txt in your browser — look for Disallow: / which blocks everything. (2) WordPress Dashboard → Settings → Reading — ensure “Discourage search engines” is unchecked. (3) In Google Search Console, run URL Inspection on your homepage and important post URLs to confirm crawling is allowed; the Settings → robots.txt report shows the file Google fetched and flags any errors. If any important content shows as blocked, you have a misconfigured Disallow rule that needs immediate fixing.
❓ Can robots.txt block Googlebot from my entire WordPress site?
Yes — and it does happen, most commonly in two ways. First, leaving WordPress’s “Discourage search engines from indexing this site” checkbox enabled in Settings → Reading tells search engines to skip your entire site (older WordPress versions did this by adding Disallow: / to robots.txt; current versions output a noindex directive instead). Second, a Disallow: / line manually added to robots.txt (or added by a security plugin) blocks all compliant crawlers from everything. Both are easily fixed — uncheck the WordPress setting or remove the Disallow: / line — but any period when the site was blocked means Google may have dropped pages from its index, requiring fresh indexing requests after the fix.
❓ Should the Sitemap line be in robots.txt?
Yes — including Sitemap: https://yourdomain.com/sitemap_index.xml in your robots.txt is best practice. It tells every crawler — not just Googlebot — exactly where your sitemap is, without requiring manual submission to each search engine separately. Bing, Yandex, and other search engines use the Sitemap directive in robots.txt to discover sitemaps automatically. Even if you have already submitted your sitemap in Google Search Console, the robots.txt Sitemap line is still worth including because it serves other crawlers and acts as a persistent, self-hosted reference to your sitemap location.
