Meta robots

Meta robots directives tell search engines, at page level, how a page should be indexed and served. In a WordPress launch or handoff workflow, this check helps confirm that the visible page is not carrying an accidental noindex or another page-level directive that conflicts with the intended public setup.

This page explains what meta robots means, why it matters, and how PreFlight evaluates the result before launch.

Why it matters

A page can be publicly reachable and still instruct search engines not to index it. That is why meta robots is a separate technical check from homepage accessibility or robots.txt: it works at page level and can directly determine whether search engines are allowed to index the content.

This matters because an accidental noindex on key public pages can quietly block visibility even when the site looks live. Google also makes an important distinction here: robots.txt is mainly for crawl control, while noindex is the correct mechanism when the goal is to keep a page out of search results.
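Page-level robots directives can appear in two places: a meta tag in the page's HTML head, or an X-Robots-Tag HTTP response header. For example:

```html
<!-- In the HTML <head>: tells search engines not to index this page -->
<meta name="robots" content="noindex, follow">
```

```http
X-Robots-Tag: noindex
```

Both forms carry the same weight, which is why a check has to look at the HTML and the response headers together.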

What to review

Before marking this check as correct, review the following points:

Public pages that should rank should not carry an accidental noindex.

The page should not send conflicting robots directives in HTML or HTTP headers.

Search visibility settings should match the real publication status of the site.

noindex should be intentional when used, not a leftover from staging or private review.

robots.txt should not block pages that rely on noindex, because crawlers need to access the page to see that directive.
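The points above can be reviewed programmatically. The sketch below is a minimal, stdlib-only Python illustration (not PreFlight's actual implementation) of how page-level robots directives can be collected from both the HTML and the HTTP headers, so that an accidental noindex is easy to spot:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content values of <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots" and attrs.get("content"):
            self.directives.append(attrs["content"].lower())


def page_robots_directives(html, headers=None):
    """Return every page-level robots directive found in the HTML
    and in X-Robots-Tag response headers."""
    parser = RobotsMetaParser()
    parser.feed(html)
    directives = list(parser.directives)
    for name, value in (headers or {}).items():
        if name.lower() == "x-robots-tag":
            directives.append(value.lower())
    return directives


def has_noindex(directives):
    """True if any directive blocks indexing."""
    return any("noindex" in d for d in directives)
```

For instance, a staging leftover like `<meta name="robots" content="noindex, follow">` would be surfaced by `has_noindex(page_robots_directives(html, headers))` even though the page loads normally for visitors.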

How PreFlight runs this check

PreFlight inspects the page-level robots directives exposed by the public page and reviews whether indexing instructions appear coherent with a live production setup. The goal is to detect accidental noindex, conflicting signals, or directives that do not match the intended state of the site.

This check is especially useful before delivery because page-level indexing controls are easy to forget during migrations, staging reviews, or temporary privacy setups. It helps surface silent visibility issues before the site is considered technically ready.

PASS / WARN / FAIL

PASS

The page-level robots directives are coherent with a public site, and there are no obvious indexing instructions blocking pages that should be visible in search.

WARN

The directives exist, but something deserves review, such as unexpected combinations, unusual behavior, or settings that may not match the intended launch state.

FAIL

The page exposes a blocking or conflicting robots directive, such as an accidental noindex, in a context where the page should be indexable and publicly available.
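One hypothetical way to map directives to these three statuses, assuming a simple token-based rule set (this is an illustration, not PreFlight's actual logic):

```python
def classify_robots(directives, expect_indexable=True):
    """Map page-level robots directives to PASS / WARN / FAIL.
    A simplified, hypothetical rule set for illustration only."""
    tokens = set()
    for d in directives:
        tokens.update(t.strip() for t in d.lower().split(","))

    # "noindex" and "none" (shorthand for noindex, nofollow) block indexing.
    blocking = bool(tokens & {"noindex", "none"})

    if expect_indexable and blocking:
        return "FAIL"
    # Mixed signals or restrictive-but-not-blocking directives deserve review.
    if ("index" in tokens and blocking) or tokens & {"nofollow", "noarchive"}:
        return "WARN"
    return "PASS"
```

With this rule set, `["index, follow"]` passes, `["index, nofollow"]` warns, and `["noindex"]` fails on a page expected to be indexable but passes when `expect_indexable=False`, matching an intentional privacy setup.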

Common mistakes

Leaving noindex active on public pages after staging or review.

Assuming robots.txt can replace a page-level noindex.

Sending conflicting directives through HTML and HTTP headers.

Applying visibility rules globally without checking key entry pages.

Forgetting that crawlers must access a page in order to see its noindex directive.

FAQ

What is the difference between meta robots and robots.txt?

Meta robots works at page level and can control indexing directly. robots.txt works at crawl level and is not the correct method for reliably keeping a page out of search results.
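To make the distinction concrete, the two mechanisms live in different places and use different syntax:

```
# robots.txt — crawl control, served from the site root
User-agent: *
Disallow: /private/
```

```html
<!-- meta robots — page-level indexing control, inside each page -->
<meta name="robots" content="noindex">
```

The Disallow rule only asks crawlers not to fetch the path; it does not prevent the URL from being indexed if it is discovered elsewhere. The noindex directive does prevent indexing, but only if the crawler is allowed to fetch the page and see it.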

Can a page be accessible and still fail this check?

Yes. A page may load normally for users and still include a noindex directive that tells search engines not to index it.

Can blocking a page in robots.txt make noindex useless?

Yes. If the crawler cannot access the page, it may never see the noindex directive, which is why crawl blocking and page-level indexing control should not be confused.

Check your WordPress site before delivery

Reduce rework, catch last-minute issues, and review critical points before launch.

Run analysis