A recently published deposition from a Google engineer has given the SEO community something rare: an official glimpse into the inner workings of how Google ranks pages. While heavily redacted, the document—shared as part of the ongoing DOJ antitrust case—outlines several important components of Google’s ranking system, including hand-crafted signals, static page quality scores, and a Chrome-based popularity metric.

This article breaks down what we learned and what it means for marketers and SEOs today.

Hand-Crafted Signals: Why Google Avoids Full Automation

According to the engineer, Google still relies on hand-crafted signal development. This doesn’t mean manual ranking—it means that engineers use structured data (like click patterns and rater feedback) to build algorithms they can troubleshoot and improve.

The benefit? Control and explainability. Google prefers models where, if something goes wrong, its engineers know exactly what to fix. In contrast, fully automated systems (like those used by Bing) are harder to debug when things break.

This is a key insight: Google designs for reliability and auditability, not just AI novelty.

The ABC Model of Topicality

One of the clearest reveals in the deposition is the mention of “ABC” signals:

  • A – Anchors (external links pointing to a page)
  • B – Body (how well the content matches query terms)
  • C – Clicks (user dwell time and return behavior)

Together, these signals feed into a Topicality score (T*), which estimates how relevant a document is to a search query. While this is just one of many layers in Google's ranking model, it underscores the continued role of links, content match, and user behavior.
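To make that concrete, here is a minimal sketch of how A, B, and C might blend into a single T* score. The weights and the linear form are assumptions for illustration; the deposition confirms the inputs, not the math.

    # Hypothetical sketch: combining normalized ABC signals into a
    # topicality score T*. Weights and the linear form are invented.

    def topicality_score(anchors: float, body: float, clicks: float,
                         w_a: float = 0.3, w_b: float = 0.4, w_c: float = 0.3) -> float:
        """Blend normalized ABC signals (each in [0, 1]) into T*."""
        return w_a * anchors + w_b * body + w_c * clicks

    # Example: strong content match, moderate links, weak click history.
    print(topicality_score(anchors=0.5, body=0.9, clicks=0.2))  # 0.57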

Static Page Quality Score: The Trust Factor

Google's engineer confirmed that page quality is mostly static. Once a site earns a reputation as trustworthy, it retains that quality across many queries. This trust score isn't recalculated in real time for every search.

Here’s the key distinction:

  • Topical relevance is query-dependent
  • Trustworthiness is site-wide and relatively fixed

This means that if your site is perceived as a reliable source, Google will prefer it—even in searches where topical signals alone aren’t strong. It also suggests that building long-term authority matters more than ever.
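A toy sketch of that distinction, assuming a simple multiplicative blend (the deposition confirms the static/query-dependent split, not any formula; the sites and numbers are invented):

    # Illustrative only: blending a static, site-wide trust score with a
    # per-query topicality score. Names and formula are hypothetical.

    SITE_TRUST = {                 # static, recomputed rarely
        "example-journal.org": 0.9,
        "new-blog.net": 0.3,
    }

    def ranking_score(site: str, topicality: float) -> float:
        """Blend query-dependent topicality with a site's fixed trust score."""
        trust = SITE_TRUST.get(site, 0.5)   # default for unknown sites
        return topicality * trust

    # A trusted site with modest relevance can outrank a newcomer with more:
    print(ranking_score("example-journal.org", topicality=0.6))  # 0.54
    print(ranking_score("new-blog.net", topicality=0.8))         # 0.24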

Quality Complaints and AI’s Impact

Interestingly, the Googler noted that AI has made quality perception worse. Even with static quality scores in place, users still complain about result quality, and the recent wave of generative AI content is amplifying that dissatisfaction.

This reinforces what many SEOs are seeing: LLMs can generate plausible but shallow content, and the line between “fast answers” and “trustworthy insights” is blurring.

Google is working hard to manage that tension.

eDeepRank: Making LLMs Transparent

Among the more futuristic components is eDeepRank, described as an LLM-based system using BERT and Transformers. Its job? To break down complex language understanding models into simpler components for analysis.

Why this matters: transparency.

Search engineers want to understand why an LLM ranks a page the way it does. Decomposing the model allows Google to audit machine learning outcomes, a necessary step for both relevance and accountability.
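The deposition offers no implementation detail, but the idea of decomposition can be pictured with a toy example: approximate an opaque score as a sum of named, auditable components (everything below is an assumption, not eDeepRank's actual design).

    # Toy illustration of "decomposing" a model score into simpler,
    # auditable components. Component names and values are invented.

    def explainable_score(components: dict[str, float]) -> tuple[float, dict[str, float]]:
        """Return a total score plus the per-component breakdown behind it."""
        total = sum(components.values())
        return total, components

    total, breakdown = explainable_score({
        "query_term_match": 0.35,
        "semantic_similarity": 0.25,
        "passage_quality": 0.15,
    })
    print(total)      # 0.75
    print(breakdown)  # engineers can see which component moved the score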

PageRank and Distance from Authority

Google hasn't abandoned its roots. The engineer confirmed that PageRank still plays a role, specifically as a measure of distance from trusted seed sites.

This means that even in 2025, link-based authority matters: not just who links to you, but how far you are (in hops) from topically authoritative sources. If you're building links, focus on proximity to topic hubs, not just third-party Domain Authority (DA) scores.
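Here's a minimal sketch of hop counting from trusted seeds, using breadth-first search over a made-up link graph (the deposition confirms the concept; the graph, seed set, and function are illustrative):

    # Minimal sketch: minimum link hops from any trusted seed to a target.
    from collections import deque

    LINK_GRAPH = {
        "seed.edu": ["hub.org"],
        "hub.org": ["yoursite.com"],
        "yoursite.com": [],
    }

    def hops_from_seeds(graph: dict, seeds: set[str], target: str) -> int | None:
        """Breadth-first search; returns None if unreachable from any seed."""
        queue = deque((s, 0) for s in seeds)
        seen = set(seeds)
        while queue:
            node, dist = queue.popleft()
            if node == target:
                return dist
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return None

    print(hops_from_seeds(LINK_GRAPH, {"seed.edu"}, "yoursite.com"))  # 2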

Chrome-Based Popularity Signal (Unnamed)

One of the more intriguing—and redacted—reveals is a popularity signal derived from Chrome data. While the document doesn’t explain how it’s used, it does confirm that some measure of user behavior collected via Chrome feeds into Google’s ranking system.

What it might mean:

  • Engagement metrics like time-on-site
  • Topic scroll patterns in Discover
  • Click-through or repeat visit behavior
  • Core Web Vitals or real-user performance data

The signal is controversial, but it suggests that real-world behavior in Chrome informs Google's understanding of content value.

SEOs, Pay Attention to These Takeaways

Here’s what the modern SEO strategy should embrace based on this deposition:

1. Prioritize Page Trust and Consistency

Build long-term quality. Static page quality scores reflect your site’s overall authority and trust. Regularly updated content, expert contributors, and citation-worthy material go a long way.

2. Know That User Behavior Is Modeled—Not Just Measured

Click signals (like dwell time) are not treated equally. Google corrects for positional bias: a click on the top result carries less weight than a click further down the page, because top positions attract clicks regardless of relevance. In other words, why someone clicked matters as much as whether they did.
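One common way to model this correction, shown here purely as an illustration, is inverse-propensity weighting: divide the observed click-through rate by the estimated probability that a result at that position gets examined at all. The prior values below are invented.

    # Hypothetical position-bias correction. A click at rank 1 is
    # discounted more than a click at rank 5; priors are made up.

    EXAMINATION_PRIOR = {1: 0.95, 2: 0.70, 3: 0.50, 4: 0.35, 5: 0.25}

    def debiased_ctr(clicks: int, impressions: int, position: int) -> float:
        """Inverse-propensity-weighted CTR: observed CTR / examination prior."""
        raw_ctr = clicks / impressions
        return raw_ctr / EXAMINATION_PRIOR.get(position, 0.15)

    # The same raw CTR means far more at position 5 than at position 1:
    print(debiased_ctr(10, 100, position=1))  # ~0.105
    print(debiased_ctr(10, 100, position=5))  # 0.4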

3. Embrace Semantic Structure and Topical Proximity

Google's ranking systems now rely on vectors and embeddings (a small similarity sketch follows the list below). Use:

  • Related topics
  • Structured headings
  • Clear definitions
  • Internal links clustered by topic
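In embedding terms, "topical proximity" is just vector similarity. Here is a minimal cosine-similarity sketch with tiny stand-in vectors (real systems use high-dimensional embeddings from a trained model):

    # Minimal sketch: topical proximity as cosine similarity of embeddings.
    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    query_vec = [0.9, 0.1, 0.3]  # e.g., embedding of the search query
    page_vec = [0.8, 0.2, 0.4]   # e.g., embedding of your page

    print(round(cosine_similarity(query_vec, page_vec), 3))  # 0.984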

4. Expect More AI, But With Guardrails

LLMs are here to stay, but Google is working to make them explainable. This balance of innovation and control is core to how Google thinks about scaling generative models in ranking.

Final Thoughts

This deposition doesn’t reveal everything, but it pulls back the curtain just enough. Google’s ranking system is a layered mix of:

  • Hand-crafted signal engineering
  • Topicality (ABC signals)
  • Trust (static quality scores)
  • AI transparency (eDeepRank)
  • Behavioral data (Clicks + Chrome-based insights)

If you’re optimizing for search in 2025, your job isn’t just to chase rankings—it’s to be the content Google can explain, trust, and elevate.

As the search engine evolves, so must we.