Blocked by robots.txt despite allowing all URLs

  • Hi.

    Google Search Console won’t index my homepage or any other pages because they are being blocked by robots.txt.

    While developing my site, I had ‘Disallow: *’ in the robots.txt file, and under WordPress Settings > Reading I checked ‘Discourage search engines from indexing this site’.
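
    For reference, the development robots.txt looked roughly like this (from memory; I gather ‘Disallow: /’ is the standard form for blocking everything, but I had used the wildcard):

        User-agent: *
        Disallow: *

    And as I understand it, the ‘Discourage search engines’ setting also adds a noindex robots meta tag to every page, along the lines of:

        <meta name="robots" content="noindex, nofollow">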

    Now that the site is finished, I’ve removed ‘Disallow: *’ from the robots.txt file and unchecked ‘Discourage search engines from indexing this site’, but when I ask Google Search Console to index URLs, I’m still getting the error messages below:

    Crawl allowed?
    No: blocked by robots.txt
    Page fetch
    Failed: Blocked by robots.txt
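
    The live robots.txt now allows everything, i.e. it reads something like:

        User-agent: *
        Disallow: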

    Is it just a case of giving Google time to update its copy of my current robots.txt file, or are there other settings in WordPress I need to change to help get pages indexed?

    Thanks in advance

  • Hi there,

    On what site are you working? The account you’re using to post here does not own any sites on WordPress.com.

    If you’re using the open source WordPress software at another host, please ask for help from the WordPress.org community at https://wordpress.org/support/forums instead.

    Oh, I’m using the open source software on my own hosting. The issue is now resolved; it just took a couple of days for Google to update its copy of the robots.txt file.
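
    For anyone who finds this later: you can check what a crawler will actually fetch by requesting the file directly (with your own domain in place of example.com):

        curl -s https://example.com/robots.txt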

    I’ll know for future reference to visit the .org site.

    Thanks.

  • The topic ‘Blocked by robots.txt despite allowing all URLs’ is closed to new replies.