Key Takeaways

  • Google now gives clearer rules for “Read more” deep links in snippets.
  • Robots.txt may get more documentation, but unsupported rules are still risky.
  • The EU may force Google to share some search data with rivals and some AI chatbots.
  • Search is moving closer to task completion, not just answer delivery.

These Google SEO updates matter because they turn fuzzy advice into written rules. Google has now documented what makes “Read more” deep links more likely to appear, and it may expand its robots.txt guidance next. At the same time, the European Commission is pushing Google to share some search data with rivals, including AI chatbots that qualify as search engines. Together, these changes tell SEOs one thing: structure, crawl rules, and search access are getting more formal.

What changed in this week’s Google SEO updates

This week’s changes are not one big algorithm shock. Instead, they are a set of smaller moves that make SEO more rule-based.

At a glance:

  • Read more deep links: Google added best practices to its snippet docs. Page structure now matters more for snippet jumps.
  • Robots.txt guidance: Google may explain more unsupported rules. Sites can spot dead directives faster.
  • EU data sharing: the Commission proposed data sharing measures for Google Search. AI search competition could change in Europe.
  • Task-based Search: Search keeps adding tools that help users complete actions. More user journeys may stay inside Google.

Google documents best practices for “Read more” deep links

A “Read more” deep link is a link in a Google snippet that sends users to a specific section on a page. This is not a new feature, but Google has now published clearer guidance on how to improve the chance of getting it.

The new guidance is simple.

  • Make sure the content is visible right away on page load
  • Do not force the page to scroll to the top with JavaScript on load
  • Do not remove the URL hash on load if your page uses deep linking

This matters because many sites hide key copy inside tabs, accordions, or click-to-expand areas. That design may still work for users. But it can lower the chance that Google shows a section jump in the snippet.
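
For front-end teams, a minimal sketch of a load handler that respects this guidance might look like the following. This is illustrative TypeScript, not code from Google’s documentation, and the handler logic is an assumption about a typical setup.

```ts
// Illustrative sketch: keep "Read more" deep links working after page load.
window.addEventListener("DOMContentLoaded", () => {
  // Anti-pattern 1: stripping the URL fragment breaks deep linking.
  //   history.replaceState(null, "", window.location.pathname);
  // A visit to /guide#setup would then land at the top instead of #setup.

  // Anti-pattern 2: forcing scroll to the top overrides the browser's
  // built-in scroll-to-fragment behavior.
  //   window.scrollTo(0, 0);

  // Safer: only touch the scroll position when there is no fragment to honor.
  if (!window.location.hash) {
    window.scrollTo({ top: 0, behavior: "auto" });
  }
});
```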

The lesson is bigger than deep links alone. Google is again rewarding pages where important content is easy to reach, easy to load, and easy to map to a specific section.

What to check on your pages

Start with pages that already rank well. Then look for these issues:

  1. Important text hidden inside tabs or accordions
  2. JavaScript that resets scroll position on load
  3. URL hashes that vanish before the page finishes loading
  4. Core answers placed too far down the page

If a page already earns strong snippet visibility, use it as your model. Then copy that layout pattern to similar pages.
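
To find the first issue at scale, a rough browser-console check can help. The sketch below assumes the main content lives in a `<main>` element, and the 200-character threshold is an arbitrary cutoff for “meaningful” copy, not a Google rule.

```ts
// Rough DevTools sketch: flag sizable text blocks hidden on load
// (e.g., inside collapsed tabs or accordions). Illustrative only.
const hiddenBlocks = Array.from(
  document.querySelectorAll<HTMLElement>("main *")
).filter((el) => {
  const style = window.getComputedStyle(el);
  const collapsed =
    style.display === "none" || style.visibility === "hidden";
  // 200 characters is an arbitrary threshold, not a Google rule.
  return collapsed && (el.textContent ?? "").trim().length > 200;
});

console.log(`${hiddenBlocks.length} hidden blocks with 200+ characters`);
hiddenBlocks.slice(0, 10).forEach((el) => {
  console.log(el.tagName, el.id || el.className);
});
```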

Robots.txt guidance may get broader, but the basics stay the same

Robots.txt is still a crawl control file, not an indexing control tool. Google says this clearly. If you want to keep a page out of Google Search, robots.txt is not the right method on its own.

That is why this update matters. Search Engine Journal reported that Google may expand its robots.txt documentation after studying real-world usage data from HTTP Archive. The goal appears to be clearer documentation around unsupported rules.

That would help many site owners. Old robots.txt files often carry dead directives from plugins, old advice, or copied templates. In many cases, those rules never worked for Google in the first place.

Google has already said that unsupported robots.txt rules such as noindex, nofollow, and crawl-delay were never documented for Googlebot. In 2019, Google said the usage of those rules was contradicted by other rules in all but 0.001% of the robots.txt files it analyzed.

What this means in practice

This is not a green light to wait for Google to become more forgiving. It is the opposite.

Now is the time to clean up your file.

  • Keep supported directives clear
  • Remove old custom rules you do not understand
  • Do not rely on robots.txt noindex
  • Use noindex meta tags or X-Robots-Tag headers when you need deindexing
  • Test important rules before pushing them live

A smaller, cleaner robots.txt file is often the safer file.
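
As a hypothetical before-and-after, the cleanup might look like this. The paths and sitemap URL are invented for illustration.

```
# Before: a copied template carrying rules Googlebot never supported
User-agent: *
Crawl-delay: 10
Noindex: /drafts/
Disallow: /cart/

# After: only supported directives remain
User-agent: *
Disallow: /cart/

Sitemap: https://www.example.com/sitemap.xml
```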

The EU may force Google to share search data

The European Commission has opened a consultation on proposed measures that would require Alphabet to share Google Search data with third-party online search engines under the Digital Markets Act.

The proposal covers four data types:

  • ranking data
  • query data
  • click data
  • view data

The consultation opened on 16 April 2026. The deadline is 1 May 2026. The Commission says it plans to adopt a final decision by 27 July 2026.

This part is especially important for AI. The proposal says Alphabet should not exclude third parties such as AI chatbots with online search engine functions, as long as they meet the legal definition of an online search engine.

Why this matters for SEO

Right now, most SEO teams treat search engines and AI chatbots as separate channels. The EU proposal blurs that line.

If the final rules stay close to the draft, some AI products in the EU and EEA could gain access to anonymized Google Search data on fair, reasonable, and non-discriminatory terms.

That does not change rankings tomorrow. But it could change how competing search tools improve retrieval, ranking, and citation systems over time.

For publishers and brands, the message is clear: search data access may become part of the AI search race.

Search keeps moving from answers to actions

This week’s roundup also points to another trend. Google Search keeps adding more task-based features.

That includes tools like hotel price tracking and more agent-like actions inside AI Mode. The big shift is not just about one feature. It is about where the user journey ends.

In the past, Search mostly helped people find a page. Now, Google wants to help people finish a task.

This changes SEO in two ways.

First, the value of being visible inside Google-owned surfaces gets higher. Second, pages need to be machine-friendly enough to support crawlers, snippets, and agent-like systems.

What stronger pages now look like

Pages that fit this new model usually have:

  • clear headings
  • visible answers near the top
  • stable URLs and section anchors
  • important text in HTML
  • clean crawl rules
  • accurate structured data
  • content that is easy to quote and summarize

That structure helps both classic search and AI-driven discovery.
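
A page section built this way can stay simple. The sketch below is a hypothetical layout; the headline, anchor id, and schema values are placeholders.

```html
<!-- Hypothetical layout: visible answer, stable anchor, plain HTML text -->
<article>
  <h1>How to export your account data</h1>

  <!-- A stable section anchor that a "Read more" deep link could target -->
  <section id="export-steps">
    <h2>Export steps</h2>
    <p>Open Settings, choose Export, and download the CSV file.</p>
  </section>

  <!-- Minimal structured data; values are placeholders -->
  <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "How to export your account data"
    }
  </script>
</article>
```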

What site owners should do next

Wondering where to start? Focus on the fixes that remove guesswork.

1. Audit snippet-ready content

Check your top landing pages for hidden text, weak section anchors, and JavaScript that breaks deep links.

2. Clean your robots.txt file

Remove old rules that are unsupported, unclear, or copied from outdated templates.

3. Separate crawling from indexing

Use robots.txt for crawl control. Use noindex, X-Robots-Tag, removals, or password protection when you need index control.
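
For reference, the two standard noindex mechanisms look like the sketch below. Either one works on its own, and the URL must stay crawlable so Googlebot can actually see the directive.

```
# Option 1: a robots meta tag in the page's <head>
<meta name="robots" content="noindex">

# Option 2: an HTTP response header, which also covers non-HTML files such as PDFs
X-Robots-Tag: noindex
```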

4. Watch the EU timeline

If your business depends on search visibility in Europe, track the DMA process closely. The final decision may shape how AI search competitors improve.

5. Build for machine readability

Keep key content visible, structured, and easy to fetch. That helps snippets today and agentic systems tomorrow.

Did You Know?

Google says robots.txt is not a way to keep a normal web page out of Google Search. A blocked page can still appear as a URL-only result if other pages link to it.
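
That is also why pairing a robots.txt block with a noindex tag backfires: Googlebot cannot fetch a blocked page, so it never sees the tag. A hypothetical broken combination, with invented paths:

```
# robots.txt blocks crawling of everything under /private/
User-agent: *
Disallow: /private/

# /private/report.html contains <meta name="robots" content="noindex">,
# but Googlebot never fetches the page, so the tag goes unseen and the
# URL can still appear as a URL-only result if other pages link to it.
```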

Conclusion

The biggest lesson from these Google SEO updates is simple: the rules are becoming easier to see. Google is writing down more guidance. Regulators are defining who counts as a search engine. And Search itself is moving closer to task completion. For SEOs, that means fewer excuses for messy page structure, weak crawl rules, and hidden content. The teams that win will be the ones that make their sites easy to crawl, easy to quote, and easy to act on.

FAQs

What are “Read more” deep links?

Read more deep links are snippet links that jump users to a specific section on a page. Google says they are more likely to appear when content is visible on load, deep linking works correctly, and JavaScript does not break the page position or URL fragment.

Does hidden content hurt rankings?

Not always. But hidden or collapsed content can lower the chance of getting certain snippet features, including Read more deep links. The safer approach is to keep the most important answer visible right away, especially on pages that target informational queries.

Can robots.txt remove a page from Google’s index?

No, not reliably. Google says robots.txt is mainly for crawl control, not for keeping normal web pages out of search results. If you need deindexing, use noindex, an X-Robots-Tag header, password protection, or another proper removal method.

What search data could Google have to share in the EU?

Under the European Commission’s draft measures, the data categories include ranking, query, click, and view data. The proposal says this data sharing would apply to eligible third-party online search engines on fair, reasonable, and non-discriminatory terms, with anonymization safeguards.

What should SEOs do first after these updates?

Start with a technical audit. Review snippet-target pages, check deep link behavior, clean up robots.txt, and make sure important content is visible without extra clicks. Then monitor the EU DMA process if your traffic or product strategy depends on European search markets.
