Key Takeaways

  • Google’s docs now show three key limits: 2MB, 15MB, and 64MB.
  • For Google Search, the documentation says Googlebot crawls the first 2MB of supported text/HTML files and the first 64MB of PDF files.
  • In Google’s crawling infrastructure docs, the default limit is 15MB for crawlers and fetchers.
  • These limits use uncompressed bytes, not the smaller compressed transfer size.
  • Most sites are safe, because the median HTML file is tiny: about 22KB on modern pages.

Google just updated its docs about how much content Googlebot can fetch. This matters if your pages are very large, or if you add lots of inline code. However, most pages will never get close to these limits.

What this update means

This update is about the Googlebot file size limit. That limit is the maximum amount of a file Googlebot will fetch and pass along for indexing.

Also, Google moved some details to a new “crawling infrastructure” section of its documentation. Therefore, you now see different limits in different places.

Why it matters

If your important text sits after the cutoff, Google may not see it. Therefore, rankings and indexing can suffer.

Also, big files can waste crawl budget. That can slow down how fast Google finds new or updated pages.

Finally, the new docs can confuse teams. One page says 2MB, while another says 15MB. So you need a clear “safe rule” for your site.

The key numbers (with real-world context)

Here are the limits that matter today:

  • 2MB for HTML and supported text files (Google Search view).
    That is about 2,048KB.

  • 64MB for PDFs (Google Search view).
    That is about 65,536KB.

  • 15MB default limit (Google crawling infrastructure view).
    That is about 15,360KB.

Also, Google applies the cutoff to uncompressed data. So compression does not “hide” a huge page.

Now compare that with typical pages:

  • The median home page weight in 2025 was 2.86MB on desktop and 2.56MB on mobile.
  • However, the median HTML portion was only about 22KB.
  • Therefore, a 2MB HTML limit is roughly 93x bigger than the median HTML file.
  • Also, a 15MB limit is roughly 698x bigger than the median HTML file.

So yes, most sites are fine. However, some pages can still hit limits when they include huge inline scripts, giant JSON blobs, or embedded data URIs.

Step-by-step: how to stay under the limits

Use these steps in a technical SEO audit.

  1. Measure your real HTML size
    First, check the HTML document size in browser DevTools.
    Then, confirm the server-side size with a command-line tool or a short script if possible (see the sketch after this list).
    Also, try to note resource size (uncompressed) and transfer size (compressed).

  2. Set a “safe target” size
    Next, treat 2MB of uncompressed HTML as your safe cap for Google Search.
    Also, treat 15MB as a broad default you should never approach.

  3. Move heavy inline code out of HTML
    If you have large inline JavaScript or CSS, move it to external files.
    That shrinks your HTML document quickly.

  4. Watch your CSS and JS file sizes too
    Google fetches CSS and JS separately.
    Also, older guidance said each subresource fetch can have its own cutoff.
    So keep individual CSS/JS files well below 15MB, and ideally far smaller (the sketch after this list checks those files too).

  5. Split very long pages
    If one page tries to do everything, break it into sections.
    Also, use internal links to connect them.
    Therefore, Google can crawl more cleanly.

  6. Treat PDFs differently
    If you publish PDFs, remember the Search doc mentions 64MB for PDFs.
    However, smaller is still better for users and crawling speed.
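
To make steps 1, 2, and 4 concrete, here is a minimal Python sketch of the kind of check you might run. It is only an illustration, not a reproduction of how Googlebot measures anything: the URL is a placeholder, it uses only the standard library, and the 2MB and 15MB thresholds are simply the figures discussed above.

  # A rough size check, not a reproduction of Googlebot's behavior.
  # Assumptions: the URL is a placeholder, and the 2MB / 15MB thresholds
  # are the documented figures discussed in this article.
  from html.parser import HTMLParser
  from urllib.parse import urljoin
  from urllib.request import urlopen

  HTML_CAP = 2 * 1024 * 1024       # 2MB safe cap for HTML (Search docs)
  DEFAULT_CAP = 15 * 1024 * 1024   # 15MB default (crawling infrastructure docs)

  class AssetCollector(HTMLParser):
      # Collects stylesheet and external script URLs found in the page.
      def __init__(self):
          super().__init__()
          self.assets = []

      def handle_starttag(self, tag, attrs):
          attrs = dict(attrs)
          if tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
              self.assets.append(attrs["href"])
          if tag == "script" and attrs.get("src"):
              self.assets.append(attrs["src"])

  def fetch_bytes(url):
      # urlopen does not request gzip/Brotli, so len() below reflects uncompressed size.
      with urlopen(url) as resp:
          return resp.read()

  page_url = "https://example.com/"  # placeholder URL
  html_bytes = fetch_bytes(page_url)
  status = "OK" if len(html_bytes) <= HTML_CAP else "over the 2MB safe cap"
  print(f"HTML: {len(html_bytes):,} bytes ({status})")

  collector = AssetCollector()
  collector.feed(html_bytes.decode("utf-8", errors="replace"))
  for asset in collector.assets:
      size = len(fetch_bytes(urljoin(page_url, asset)))
      flag = "OK" if size <= DEFAULT_CAP else "over the 15MB default"
      print(f"{asset}: {size:,} bytes ({flag})")

A crawler audit tool will do the same thing at scale, but a quick script like this is enough to confirm whether a single template is anywhere near the caps.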

Benefits and tips

  • You reduce crawl waste. Therefore, Google can discover more URLs.
  • You improve rendering reliability. Also, you lower the risk of partial indexing.
  • You ship faster pages. That helps users and performance signals.
  • You make debugging easier. Finally, you avoid “why is Google missing this text?” moments.

Did You Know?

Google’s crawling infrastructure supports common compression methods like gzip and Brotli, but the file size cutoff is based on uncompressed bytes. So a heavily compressed response can still hit the limit after it is unpacked.
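
As a small illustration of that point (the page content here is made up, and this only mimics the idea rather than Googlebot itself), a highly repetitive 3MB page can compress to a tiny transfer size and still sit over the 2MB HTML cap once unpacked:

  import gzip

  # Roughly 3MB of repetitive HTML (a made-up example).
  html = ("<div>" + "x" * 1000 + "</div>") * 3000
  raw = html.encode("utf-8")
  compressed = gzip.compress(raw)

  print(f"Transfer size (gzip): {len(compressed):,} bytes")   # small on the wire
  print(f"Uncompressed size:    {len(raw):,} bytes")          # what the cutoff applies to
  print("Over the 2MB HTML cap:", len(raw) > 2 * 1024 * 1024)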

Conclusion

Google’s docs now point to three numbers: 2MB, 15MB, and 64MB. Google says this is a documentation clarification, not a behavior announcement. However, the safest move is simple. Keep your HTML lean, keep key content early, and avoid massive inline code. Therefore, Googlebot can fetch what matters and index it reliably.

FAQs

Did Google change Googlebot behavior or only the docs?

Google described it as a documentation clarification. Therefore, there is no confirmed behavior change announcement.

Which limit should I follow for SEO work?

Follow 2MB uncompressed HTML as a safe target for Google Search. Also, treat 15MB as a hard “never get close” default.

Does compression help me stay under the limit?

Compression helps transfer speed. However, the limit is based on uncompressed size, so you still need to reduce real HTML bytes.

Do images and videos count inside the HTML limit?

Images and videos linked by URL are fetched separately. However, data embedded inside HTML (like data URIs) does count.

How do I know if my HTML is too large?

Check your page in browser DevTools and look at the HTML request size. Also, run a crawler audit to flag oversized documents.
