Why Pages Aren’t Indexed in Google Search Console

In Google Search Console → Indexing → Pages you’ll find a section titled “Why pages aren’t indexed.” These statuses explain why specific URLs aren’t eligible to appear in Google’s results. In many cases, there is no need to panic as it is totally normal to have pages on your site which are not indexed. However, it is important to know what each report means so that you know what action to take, if any.

Below you’ll find a plain-English explanation for each reason and a practical checklist of what to do next.

Before you start: These reports are not updated live, so they can often contain outdated information. In order to decide on next steps, take the following actions:

  • Open each Pages report to get an overview of the pages which are flagged. Determine whether the pages which are not indexed need to be, so you can prioritise your actions.
  • Use the Inspect URL tool, or the Google Search Console API integration in Screaming Frog, to confirm whether the pages are actually indexed or not (a scripted approach using the URL Inspection API is sketched below).
Once you have verified the status of your non-indexed pages, use the following guide to determine what next steps, if any, need to be taken.
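
If you have a large number of URLs to verify, the URL Inspection API can be scripted rather than checking each page by hand. Below is a minimal sketch in Python using the google-api-python-client and google-auth packages; it assumes you have a service-account JSON key that has been added as a user on the Search Console property, and the key path, property URL and page URLs are all placeholders to replace with your own.

```python
# Minimal sketch: bulk-check index status via the GSC URL Inspection API.
# Assumes a service-account key that has been granted access to the property.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
KEY_FILE = "service-account.json"        # placeholder: path to your key file
SITE_URL = "https://www.example.com/"    # placeholder: your GSC property

creds = service_account.Credentials.from_service_account_file(KEY_FILE, scopes=SCOPES)
service = build("searchconsole", "v1", credentials=creds)

urls_to_check = [
    "https://www.example.com/some-page/",      # placeholder URLs from the reports
    "https://www.example.com/another-page/",
]

for url in urls_to_check:
    body = {"inspectionUrl": url, "siteUrl": SITE_URL}
    result = service.urlInspection().index().inspect(body=body).execute()
    status = result["inspectionResult"]["indexStatusResult"]
    # coverageState mirrors the reasons shown in the Pages report,
    # e.g. "Submitted and indexed" or "Crawled - currently not indexed".
    print(url, "-", status.get("verdict"), "-", status.get("coverageState"))
```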

Excluded by ‘noindex’ tag

What it means

The page includes a <meta name="robots" content="noindex"> or an HTTP X-Robots-Tag: noindex header. You’re telling Google not to index it.

What you need to do

In most cases, there is nothing you need to do here, as these are pages which you are telling Google not to index. However, it is worth reviewing the URLs in the report to ensure that none have been marked as noindex by mistake; a script for spot-checking this in bulk is sketched after the list below.

  • If you want the page indexed, remove the noindex directive and ensure the page is crawlable and internally linked.
  • If exclusion is intentional (e.g., category filters, admin, cart, thank-you pages, etc.), do nothing.
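
To spot-check a batch of URLs from the report, the short Python sketch below reports whether each one currently serves a noindex directive in either the X-Robots-Tag header or the meta robots tag. It uses the requests and beautifulsoup4 packages, and the URLs are placeholders; pages that only add noindex via JavaScript will not be caught by a raw-HTML check like this.

```python
# Minimal sketch: report whether each URL serves a noindex directive,
# either via an X-Robots-Tag header or a <meta name="robots"> tag.
import requests
from bs4 import BeautifulSoup

urls = [
    "https://www.example.com/thank-you/",    # placeholder URLs from the report
    "https://www.example.com/blog/post/",
]

for url in urls:
    resp = requests.get(url, timeout=10)
    sources = []

    # Check the HTTP header.
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        sources.append("X-Robots-Tag header")

    # Check the meta robots tag in the raw HTML.
    soup = BeautifulSoup(resp.text, "html.parser")
    meta = soup.find("meta", attrs={"name": "robots"})
    if meta and "noindex" in (meta.get("content") or "").lower():
        sources.append("meta robots tag")

    print(url, "->", ", ".join(sources) if sources else "no noindex found")
```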


Page with redirect

What it means

The URL responds with a redirect (e.g., a 301/302/307/308 status code). Google indexes the final destination, not the redirecting URL. In most cases, this is nothing to worry about, but a high number of redirected URLs on your website can slow down search crawling. If the destination URL is not relevant, such as a redirect to the homepage, then this can negatively impact performance.

What you need to do

Any action should be split into two stages: checking that redirects are correct, and updating internal links. For the first stage, follow these steps (a short script for tracing redirect chains in bulk is sketched after this list):

  • Confirm that the redirect for each URL is intended and goes to the best canonical URL.
  • Remove any redirects which have been placed in error.
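
To see where each redirecting URL actually ends up, and whether it passes through several hops on the way, you can trace the chains in bulk. This is a minimal Python sketch using the requests package; the URLs are placeholders for the ones in your report.

```python
# Minimal sketch: trace each redirect chain and report the hops and final URL.
import requests

urls = [
    "http://www.example.com/old-page/",           # placeholder URLs from the report
    "https://www.example.com/another-old-page/",
]

for url in urls:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    # resp.history holds each hop in the chain; resp.url is the final destination.
    hops = [f"{r.status_code} {r.url}" for r in resp.history]
    chain = " -> ".join(hops + [f"{resp.status_code} {resp.url}"])
    print(chain)
    if len(resp.history) > 1:
        print("  Note: multi-hop chain - point internal links straight at the final URL")
```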

For the second stage, follow these steps:

  • Crawl your website using a tool like Screaming Frog to identify all pages where redirected internal links are found.
  • Update content on the site so that internal links point directly at the final URL, or remove the redirected links.
  • If you use WordPress, you can use a plugin like Link Replacer to update links in bulk.

For more information, view our guide on how to fix redirected internal links in WordPress.


Blocked due to access forbidden (403)

What it means

A 403 (Forbidden) response means the server refused access to the page. These errors often appear where content is password protected, such as admin pages. In rare circumstances, Googlebot may receive a 403 error when a normal visitor does not.
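
If you suspect the last case, a rough first check is to request the page with and without a Googlebot user-agent string and compare the status codes. This is only indicative: some servers verify Googlebot by IP address, so a spoofed user agent will not reproduce every setup, and the Inspect URL live test remains the definitive check. A minimal Python sketch with the requests package, using a placeholder URL:

```python
# Rough check: compare the status code served to a browser-like request
# with the one served to a request carrying a Googlebot user-agent string.
# Indicative only - servers that verify Googlebot by IP will not be fooled.
import requests

url = "https://www.example.com/some-page/"   # placeholder URL from the report

user_agents = {
    "browser":   "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for label, ua in user_agents.items():
    resp = requests.get(url, headers={"User-Agent": ua}, timeout=10)
    print(f"{label}: {resp.status_code}")
```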

What you need to do

  • Review the 403 pages in the report and confirm if they are correctly blocking access.
  • If there are a large number of pages, consider updating your robots.txt file to prevent their crawling.
  • If pages are present in the report which should be indexed, contact your web developer or hosting company to ask them to whitelist Googlebot crawlers for those pages.


Alternate page with proper canonical tag

What it means

This URL signals that another URL is the preferred version (via rel="canonical"), and Google agreed. Variants (tracking parameters, printer versions, hreflang alternates) are excluded. In most cases, this simply means that the canonical tag is working correctly. However, if there are a considerable number of internal links to non-canonical URLs, then this may prevent the canonical tag from working correctly.

Ideally, the canonical version of a URL should be the only one which is internally linked to on your website.
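
To spot-check how a handful of URLs canonicalise without running a full crawl, you can pull the rel="canonical" value from each page and compare it against the URL itself. A minimal Python sketch using the requests and beautifulsoup4 packages; the URLs are placeholders, and the trailing-slash comparison is deliberately rough.

```python
# Minimal sketch: report each URL's rel="canonical" target and whether it self-references.
import requests
from bs4 import BeautifulSoup

urls = [
    "https://www.example.com/product/?colour=blue",   # placeholder URLs
    "https://www.example.com/product/",
]

for url in urls:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    link = soup.find("link", attrs={"rel": "canonical"})
    canonical = link.get("href") if link else None
    if canonical is None:
        status = "no canonical tag found"
    elif canonical.rstrip("/") == url.rstrip("/"):   # rough comparison
        status = "self-referencing"
    else:
        status = f"canonicalises to {canonical}"
    print(url, "->", status)
```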

What you need to do

  • In most cases, no action is needed. Keep canonicals, internal links, and sitemaps pointing at the preferred URL.
  • If a large number of pages are appearing in the report, crawl the site using Screaming Frog to identify all pages flagged as "canonicalized" under "Indexability Status".
  • Update content on the site to update or remove internal links to non-canonical URLs.
  • If you use WordPress, you can use a tool like Link Replacer to update links in bulk.


Not found (404)

What it means

These are pages which have been removed from your site. Google has stated that these do not cause any problems, but I prefer to redirect them to a relevant page since they may still hold some PageRank or relevance.

What you need to do

  • Ensure that flagged pages have actually been removed. Any which should be live should be re-enabled.
  • Where relevant pages on the site exist, redirect 404 pages to them.
  • Otherwise, leave the page as a 404 (or apply a 410 status, which indicates that the page has been permanently removed) and remove any internal links to it.


Duplicate without user-selected canonical

What it means

Google found duplicates but you didn’t specify a canonical, so it excluded this version and chose one on its own. This requires action, since you have no control over which version of the page appears in search results, which can directly hamper SEO performance.

What you need to do

  • Add a self-referencing canonical on the preferred URL; add canonicals on variants pointing to it.
  • Align internal links and sitemaps with the preferred URL.


Blocked by robots.txt

What it means

The /robots.txt file prevents Googlebot from crawling this path. If Google can’t crawl a page, it usually won’t index it. In most cases, pages in this report will not be an issue. However, mistakes in the /robots.txt file can unintentionally block pages from being crawled.

What you need to do

  • Check the URLs in the report and ensure that none which should be crawled are blocked; a script for testing URLs against your live robots.txt is sketched after this list.
  • If any pages are blocked which should not be, the rules in the /robots.txt file should be updated to allow crawling of those pages.
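
You can test specific URLs against your live robots.txt using Python's built-in urllib.robotparser, as in the minimal sketch below (the site and URLs are placeholders). Note that Python's parser does not handle every pattern Google's own parser does, so treat it as a first pass and confirm anything surprising with the robots.txt report in Search Console.

```python
# Minimal sketch: test which URLs your live robots.txt blocks for Googlebot.
from urllib.robotparser import RobotFileParser

robots_url = "https://www.example.com/robots.txt"    # placeholder site
urls = [
    "https://www.example.com/category/widgets/",     # placeholder URLs from the report
    "https://www.example.com/cart/",
]

parser = RobotFileParser()
parser.set_url(robots_url)
parser.read()

for url in urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "->", "allowed" if allowed else "blocked by robots.txt")
```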


Server error (5xx)

What it means

Server errors (e.g., 500, 502, 503 status codes) occur when something goes wrong on the server while a page is being fetched.

What you need to do

  • Most websites will occasionally encounter temporary server errors which fix themselves in time, and no action is needed; a quick way to re-check the flagged URLs is sketched after this list.
  • If individual URLs which are not important return server errors, consider disallowing them using robots.txt.
  • If large numbers of pages are returning server errors, contact your web developer or hosting company for support.
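
Because the report can lag behind reality, a quick re-check of the flagged URLs will tell you whether the errors are still happening. A minimal Python sketch with the requests package; the URLs are placeholders.

```python
# Minimal sketch: re-check whether URLs flagged with server errors still return 5xx.
import requests

urls = [
    "https://www.example.com/page-one/",   # placeholder URLs from the report
    "https://www.example.com/page-two/",
]

for url in urls:
    try:
        status = requests.get(url, timeout=15).status_code
    except requests.RequestException as exc:
        print(url, "-> request failed:", exc)
        continue
    note = "still erroring" if status >= 500 else "now responding"
    print(url, "->", status, f"({note})")
```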


Blocked due to other 4xx issue

What it means

The URL returned a non-403/404 4xx (e.g., 410 Gone, 429 Too Many Requests). It is worth familiarising yourself with what the different status codes mean so that you can take appropriate action.

What you need to do

The most common 4xx status codes that you will encounter are 410 and 429.

  • 410: This means the page has been permanently removed, and Google will not apply any PageRank to it. Since a 410 must be deliberately applied, in most cases there is no action to take. If any pages have been mistakenly flagged as 410, removing the status will not allow the page to rank again, and the content should be republished on a new URL.
  • 429: This means too many requests; the server is limiting requests from a single source to prevent it from being overwhelmed. In these cases, contact your web developer or hosting company to whitelist Googlebot or relax its rate limiting.


Crawled – currently not indexed

What it means

This can mean one of three things:

  1. Google sometimes needs to crawl a page several times before indexing it. In many cases, the page will simply take more time to be indexed.
  2. Google crawled the page but decided not to index it, often due to low value, duplication, poor UX, or thin/boilerplate content.
  3. The page was previously indexed, but Google changed its mind and decided that the page no longer meets its criteria for indexing.

This is one of the hardest statuses to diagnose, as there could be several reasons why it appears.

What you need to do

The first step is to diagnose whether this is actually an issue, and the first place to look is the last crawl dates. If Google is not crawling the pages, it means one of three things: there are issues with crawl budget (i.e. Google is spending so much time crawling the rest of the site that it has not had a chance to re-crawl those pages yet), there are internal linking issues, or Google has decided the pages are of such poor quality that there is no point recrawling them. You can test this by requesting indexing of the pages via the Inspect URL tool. If, after a few days, the pages are still not indexed, then we have to assume that there is a quality issue.

If the pages have not been crawled for a long time but requesting indexing via the Inspect URL tool does get them indexed, then there may be either a crawl budget or an internal linking issue. Crawl budget is only likely to come into play if there are tens of thousands or more pages being crawled on the site. If you suspect this is the case, then it is worth getting the opinion of an experienced technical SEO. However, the first step is to ensure that the internal linking on the site is robust, that no page is more than 3-4 clicks away from the homepage, and that there is a clear and logical way to navigate to every page on the site; a rough way to check click depth is sketched below.
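
If you don't have a crawler to hand, a small breadth-first crawl from the homepage gives a rough picture of click depth (Screaming Frog's Crawl Depth column provides the same information). This is a minimal Python sketch using the requests and beautifulsoup4 packages; the homepage URL and page cap are placeholder assumptions, and it deliberately stops after a few hundred pages so it stays quick.

```python
# Minimal sketch: breadth-first crawl from the homepage to estimate click depth.
from collections import deque
from urllib.parse import urljoin, urldefrag, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://www.example.com/"   # placeholder homepage
MAX_PAGES = 500                      # placeholder cap to keep the sketch quick

domain = urlparse(START).netloc
depths = {START: 0}
queue = deque([START])

while queue and len(depths) < MAX_PAGES:
    url = queue.popleft()
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        link = urldefrag(urljoin(url, a["href"])).url
        if urlparse(link).netloc == domain and link not in depths:
            depths[link] = depths[url] + 1   # one click deeper than the current page
            queue.append(link)

# Pages more than 3-4 clicks from the homepage are worth reviewing.
for page, depth in sorted(depths.items(), key=lambda item: item[1], reverse=True)[:20]:
    print(depth, page)
```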

If you suspect that the issue is quality-based, you may need to rewrite your content using the following guidelines:

  • Improve uniqueness, depth, E-E-A-T signals (expertise/experience), and media.
  • Strengthen internal links; link to the page from other relevant pages on your site.
  • Building authoritative backlinks from relevant sites to your target pages can also help them to get indexed.
  • Ensure the page targets a distinct query with clear intent match.

In some cases, rewriting the content will not help it get recrawled. If Google is refusing to crawl the pages, you may need to change their URLs and add 301 redirects from the old versions to the new.


Discovered – currently not indexed

What it means

This is similar to the above, except that Google is aware of the pages but has not crawled them yet. In many cases, it's just a case of being patient, but if pages have been published for a long time without being crawled then it could be a sign of a more serious problem.

In some cases, Google will assess the quality of the whole site, and if it does not meet its threshold then it can stop crawling pages altogether.

What you need to do

The steps to take are pretty much the same as for "Crawled – currently not indexed". However, if there is deemed to be a sitewide quality issue then you may need to rethink your content strategy across the whole site.


Duplicate, Google chose different canonical than user

What it means

You set a canonical URL, but Google selected a different page for search results. Remember, the canonical tag is only a hint, which Google can ignore if other signals, such as internal links and sitemaps, disagree.

What you need to do

  • Make all signals consistent across internal links, sitemap, redirects and canonical tags.
  • Ensure your preferred URL is the strongest (content, links, performance).


Soft 404

What it means

The page returns 200 OK but looks like an error/empty page (no content, zero-result listings, “product unavailable”).

What you need to do

  • The first step is to check whether the page is actually empty or thin; a rough word-count check is sketched after this list. If the pages do have relevant content, then I recommend getting the advice of an experienced technical SEO to determine if there are any rendering issues.
  • For pages that do not provide value, disable the pages so that they return a 404 status, or add meaningful content and proper templating.
  • For out-of-stock items, show alternative products and provide information about when the product will return rather than a near-empty page.
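
To check whether the flagged pages really are thin, you can pull a rough visible word count for each one. This Python sketch uses the requests and beautifulsoup4 packages; the URLs and the 100-word threshold are placeholder assumptions, and because the count includes boilerplate such as menus and footers it should only be treated as a rough signal.

```python
# Minimal sketch: rough visible word count per URL, to spot genuinely thin pages.
import requests
from bs4 import BeautifulSoup

urls = [
    "https://www.example.com/out-of-stock-product/",   # placeholder URLs from the report
    "https://www.example.com/empty-category/",
]

THRESHOLD = 100   # placeholder cut-off for "thin" - adjust to suit your templates

for url in urls:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    words = soup.get_text(separator=" ").split()
    flag = "possibly thin" if len(words) < THRESHOLD else "has content"
    print(url, "->", resp.status_code, f"{len(words)} words", f"({flag})")
```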


Blocked by page removal tool

What it means

A temporary removal request was submitted in Search Console, hiding the URL from results.

What you need to do

  • Cancel the request if it was accidental, or wait for expiry while fixing underlying issues (content/privacy).


Crawl anomaly

What it means

Google encountered an unspecified error fetching the page (timeouts, malformed responses, TLS issues). Most of the time, these will be temporary issues which will resolve by themselves.

What you need to do

  • Confirm if Google is unable to fetch the pages using the Inspect URL tool.
  • If pages can be fetched without issue, then no further action needs to be taken.
  • If pages cannot be fetched on request, you may need to consult your web developer or hosting company.


After You Fix Issues: Validation Workflow

  • Use URL Inspection to confirm indexation status for high-priority URLs.
  • Click Validate fix on the specific Indexing > Pages reports in GSC. Google will re-crawl a sample of the affected URLs.
  • Monitor the Indexing > Pages report for changes in indexing status.