Link Parity Affects Crawl Budget
Google has updated one of its Search Central documentation pages related to crawl budget.
In the updated documentation, Google says that a lack of link parity between the desktop and mobile versions of a webpage can affect the crawl budget of a website.
Do note that, back in September 2020, Google rolled out mobile-first indexing, which means Google predominantly uses the mobile version of a webpage for indexing and ranking.
Now, reinforcing mobile-first indexing further, Google says: “If your website uses separate HTML for mobile and desktop versions, then provide the same set of links on the mobile version, and ensure that all these links are included in the sitemap for better discovery.”
In other words, there should be link parity between the desktop and mobile versions of a webpage. A minimal way to check this is sketched below.
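To check this on your own site, here is a minimal, hypothetical Python sketch (the example.com URL and the user-agent strings are placeholders) that fetches the same URL as a desktop and as a mobile client and compares the two sets of links:

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    """Collects every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.add(value)

def extract_links(url, user_agent):
    """Fetch a page with the given user agent and return its set of links."""
    req = Request(url, headers={"User-Agent": user_agent})
    html = urlopen(req).read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

# Placeholder URL; only meaningful if the server returns separate
# HTML for desktop and mobile user agents.
url = "https://example.com/"
desktop = extract_links(url, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
mobile = extract_links(url, "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)")

# Links present on desktop but missing on mobile break link parity.
print("Missing on mobile:", desktop - mobile)
print("Missing on desktop:", mobile - desktop)
```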
OpenAI Launches ChatGPT Search Engine
After beta testing SearchGPT (also called ChatGPT Search) for a few months, OpenAI has finally made it live in ChatGPT.
For those of you who don’t know, SearchGPT is a new search experience integrated into ChatGPT itself. It provides real-time and localized information in response to the user’s query, and gives proper attribution to the sources with which OpenAI has partnered or whose websites allow crawling by ChatGPT’s bots.
Here is a list of some of the biggest news organizations with which OpenAI has partnered for sourcing information:
- Associated Press
- Axel Springer
- The Financial Times
- Reuters
Once the user enters a query, SearchGPT gives a response, and below each response it shows a Sources button; clicking it reveals the list of sources from which ChatGPT extracted the information used to compose the response.
Apart from that, users can ask follow-up questions about the response provided.
For now, this new search experience is available to all ChatGPT Plus and Team users; however, it will soon be made available to free users as well.
How to Track Traffic Coming from ChatGPT?
Currently, ChatGPT has around 200 million weekly active users, a number that is growing day by day.
Also, ChatGPT gives attribution to the sources from which it extracts information, so those sources can receive traffic from ChatGPT as well.
Hence, tracking traffic coming from ChatGPT is essential for keeping your analytics reports clear (especially when you are optimizing your website for AI-powered search engines).
Helpfully, ChatGPT appends a UTM parameter to the links it cites, which you can use to track this traffic.
Now, suppose you are using GA4 to track the traffic coming to your website from different sources.
Then you can simply filter for the UTM tracking parameter “utm_source=chatgpt.com” to isolate traffic coming to your website from chatgpt.com.
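If you export your landing-page URLs, a quick way to isolate ChatGPT visits is to check the query string. Here is a minimal Python sketch (the URLs in the list are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical landing-page URLs, e.g. exported from your analytics tool.
landing_urls = [
    "https://example.com/blog/post?utm_source=chatgpt.com",
    "https://example.com/pricing",
]

def is_chatgpt_visit(url):
    """Return True when the URL carries utm_source=chatgpt.com."""
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

chatgpt_visits = [u for u in landing_urls if is_chatgpt_visit(u)]
print(chatgpt_visits)
```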
Google Search Removes Support for Sitelinks Search Box
Starting November 21, 2024, Google Search is removing support for the Sitelinks Search Box.
Google Search introduced this feature ten years ago, but has now noticed that its usage has dropped, and has therefore decided to retire it.
This change is rolling out globally, across all countries and languages, and will not affect the rankings or the other sitelinks visual elements.
Once Google Search stops showing the Sitelinks Search Box, it will also be removed from the Google Search Console reports.
Now, while the Sitelinks Search Box has been removed from Google Search and its report from Google Search Console, you can still keep its structured data on your pages, since leaving it in place causes no issues. For reference, the markup is shown below.
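This is the JSON-LD markup the Sitelinks Search Box feature used (example.com and the search URL pattern are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://www.example.com/search?q={search_term_string}"
    },
    "query-input": "required name=search_term_string"
  }
}
```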
Google Rolling Out AI-Organized Search Results in the US
To improve the user experience and the ease of accessing accurate information, Google has started rolling out AI-organized search results in the US. But what is so special about these results, and how will they differ from the existing results? Let’s explore.
What Are AI-Organized Search Results?
As the name hints, it means the search results are organized by artificial intelligence.
This new Google Search feature will give you personalized results based on the query you enter.
For example, if you search for a keyword such as “vegetarian pizza recipe”, you will get different results grouped by category/section. Each of these categories showcases results from a different perspective: for example, a section dedicated to top vegetarian pizza recipes, another section for easy vegetarian dips, another titled “Explore by Ingredient”, and so on. Do check the image below to get a better understanding of this feature.
Note: At the time of writing, this new feature has been rolled out for queries related to recipes and meal inspiration, but it likely won’t take long for Google to roll it out for other niche keywords as well.
Well, this is not exactly a new feature, as the Google Search team has been testing it since 2007. However, it is now rolling out, starting with the US for select niche keywords.
If you have any helpful information about this new feature, feel free to let me know in the comments down below.
How to Efficiently Crawl a Next.js Website in Screaming Frog?
While doing SEO for a website built on Next.js (or any other JavaScript technology), it’s important to crawl the website efficiently in Screaming Frog. But JavaScript websites are a little more complicated to crawl than websites built with technologies like PHP, plain HTML, and so on.
Below I have shared a few configurations that you can use to efficiently crawl a Next.js website in Screaming Frog.
Note: You can use the configurations below for any other JavaScript website as well.
- Rendering with JavaScript: To enable this option, visit Configuration > Spider > Rendering > JavaScript. Do note that JavaScript crawling is slower than plain text rendering.
- Crawl Linked XML Sitemaps: To enable this option, visit Configuration > Crawl Configuration > Crawl Linked XML Sitemaps.
- Auto Discover XML Sitemaps via robots.txt: You can find this option at Configuration > Crawl Configuration. Enabling it will automatically fetch the website’s sitemap URL from the website’s robots.txt file (see the robots.txt snippet after this list).
- Crawl These Sitemaps: This option lets you feed the website’s sitemaps manually. It is also located at Configuration > Crawl Configuration.
- Crawl and Store JavaScript Resource Links: While crawling a JavaScript-based website in Screaming Frog, make sure to enable the options to crawl and store JavaScript resource links. You can find them at Configuration > Crawl Configuration.
- User Agent: You can also change Screaming Frog’s crawling user agent to Googlebot (Smartphone) via Configuration > User Agent.
- Speed: As I said, crawling a JavaScript-based website is comparatively slower, but you can increase the crawl speed by tweaking this option. All you need to do is visit Configuration > Speed, where you can increase the maximum number of threads and maximum URLs crawled per second.
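For reference, the sitemap auto-discovery option above depends on the Sitemap directive in the site’s robots.txt file, which typically looks like this (example.com is a placeholder):

```
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```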
If I forgot to mention any configuration, please let me know in the comments down below.
GSC Performance Report Filters Are Now Sticky
GSC Performance report filters are now sticky, which means the filters will now stick to the last settings where you left off.
In GSC, there is now a Reset Filters button; clicking it resets all the applied filters.
In fact, if you have set filters for Search performance, Discover performance, or News performance (in the Performance report), the filters will remain in place unless and until you reset them.
In SEO, the GSC Performance report filters play an important role, as they help narrow down to the results we actually want.
And now that the filters are sticky, finding the results with the filters you applied earlier has never been easier.
What Will Happen if We Use Both the Canonical and Noindex Tag on a Webpage?
Have you ever wondered, “What will happen if we use both the canonical and noindex tag on a webpage?”
To understand this, let me explain the two:
- Noindex: It is a directive that must be obeyed by Google.
- Canonical: It is a link annotation that gives Google additional information about the preferred version of a page. It is a strong signal, but only a hint rather than a directive, so the crawler may ignore it (and that’s why we may sometimes get duplicate-content indexing issues in GSC).
Used together, the two send conflicting signals: the canonical says “this page duplicates the preferred version”, while the noindex says “keep this page out of the index entirely”. Since noindex is a directive and canonical only a hint, the noindex will generally win, but Google advises against combining them because the mixed signals can make canonicalization unpredictable.
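As a minimal illustration, here is the kind of conflicting head markup in question (example.com is a placeholder):

```html
<head>
  <!-- Hint: "treat https://example.com/page-a as the preferred version" -->
  <link rel="canonical" href="https://example.com/page-a" />
  <!-- Directive: "do not index this page" — conflicts with the canonical hint -->
  <meta name="robots" content="noindex" />
</head>
```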
Google Search Has Removed the Cache Search Operator
Deeming it redundant now that webpages load far more reliably than they once did, Google has removed the cache: search operator.
The cache: operator no longer works on Google Search.
Google has essentially replaced the Cache Search Operator with an Internet Archive (Wayback Machine) link shown in a search result’s “About this result” section. You can read more about it in this blog post.
After removing the functionality of the Cache Search Operator, Google Search has also removed its documentation.
Now, to see an older version of a webpage, users have to check the Internet Archive link available for each search result.
SearchGPT Optimization for Websites 2024
SearchGPT is an AI-powered search engine that combines the power of traditional search engines with the conversational abilities of Large Language Models.
SearchGPT works on Retrieval-Augmented Generation (RAG), which is used by Perplexity and Google AI Overviews as well.
Retrieval-Augmented Generation works by integrating retrieved information from an external source into the LLM’s response (for enhanced accuracy); a minimal sketch of the pattern follows.
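As a rough illustration (not OpenAI’s actual implementation), here is a minimal, hypothetical Python sketch of the RAG pattern: retrieve the documents most relevant to a query, then ground the generation step in them. The retrieve and generate functions are toy stand-ins for a real search index and a real LLM call:

```python
# A toy corpus standing in for a real search index or database.
DOCUMENTS = [
    "SearchGPT cites the sources it pulls information from.",
    "RAG grounds LLM answers in retrieved documents.",
    "Vegetarian pizza recipes often use fresh basil.",
]

def retrieve(query, k=2):
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, context_docs):
    """Stand-in for an LLM call: build the augmented prompt it would receive."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer the query using only this context:\n{context}\n\nQuery: {query}"

# Retrieval happens first; the generation step is then grounded in the results.
query = "How does RAG ground LLM answers?"
print(generate(query, retrieve(query)))
```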