SEO

    Link Parity Affects Crawl Budget

    Google has updated one of its Search Central documentation pages related to crawl budget.

    In the updated documentation, Google says that if there is a lack of link parity between the desktop and mobile versions of a webpage, it will affect the crawl budget of the website.

    Do note that back in September 2020, Google rolled out mobile-first indexing, which means Google predominantly uses the mobile version of a webpage for indexing and ranking.

    Now, reinforcing mobile-first indexing further, Google says: “If your website uses separate HTML for mobile and desktop versions, then provide the same set of links on the mobile version and ensure that all these links are included in the sitemap for better discovery”.

    Which means there should be link parity between the desktop and mobile versions of a webpage.
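    Since the updated guidance also asks for these links to be included in the sitemap, here is a minimal sketch of how that could look on a Next.js site using the framework's built-in sitemap convention. The URLs are hypothetical placeholders, not from Google's guidance.

    ```typescript
    // app/sitemap.ts: Next.js App Router sitemap convention.
    // The URLs below are hypothetical placeholders; list every page that is
    // linked from both the desktop and mobile versions of your site.
    import type { MetadataRoute } from 'next'

    export default function sitemap(): MetadataRoute.Sitemap {
      return [
        { url: 'https://example.com/', lastModified: new Date() },
        { url: 'https://example.com/blog', lastModified: new Date() },
        { url: 'https://example.com/contact', lastModified: new Date() },
      ]
    }
    ```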

    Google Search Removes Support for Sitelinks Search Box

    Starting November 21, 2024, Google Search is going to remove support for the Sitelinks Search Box.

    Google Search introduced this feature ten years ago, but has since noticed that its usage has dropped, and hence decided to remove it.

    This change has been rolled out globally for all countries and languages, and it will not affect the rankings of other sitelinks visual elements.

    Once Google Search stops showing the Sitelinks Search Box, it will also be removed from the Google Search Console reports.

    Now, while the Sitelinks Search Box has been removed from Google Search and its report from Google Search Console, you can still keep its structured data in place, as it remains valid structured data for your website.
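    For reference, the structured data in question is the WebSite/SearchAction markup. Here is a minimal sketch of how it might be rendered in a Next.js root layout; the domain and search URL template are hypothetical placeholders.

    ```tsx
    // app/layout.tsx (excerpt): a sketch of the WebSite/SearchAction
    // structured data that powered the Sitelinks Search Box. The domain
    // and search URL template are hypothetical placeholders.
    import type { ReactNode } from 'react'

    const websiteSchema = {
      '@context': 'https://schema.org',
      '@type': 'WebSite',
      url: 'https://example.com/',
      potentialAction: {
        '@type': 'SearchAction',
        target: 'https://example.com/search?q={search_term_string}',
        'query-input': 'required name=search_term_string',
      },
    }

    export default function RootLayout({ children }: { children: ReactNode }) {
      return (
        <html lang="en">
          <head>
            {/* Embed the JSON-LD as a script tag in the page head. */}
            <script
              type="application/ld+json"
              dangerouslySetInnerHTML={{ __html: JSON.stringify(websiteSchema) }}
            />
          </head>
          <body>{children}</body>
        </html>
      )
    }
    ```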

    How to Efficiently Crawl a Next.js Website in Screaming Frog?

    While doing SEO for a website built on Next.js (or any other JavaScript technology), it's important to crawl the website efficiently in Screaming Frog. But JavaScript websites are a little more complicated to crawl than websites built on other technologies like PHP, plain HTML, and so on.

    Below, I have shared a few configurations that you can use to efficiently crawl a Next.js website in Screaming Frog.

    Note: You can use the below configurations for any other JavaScript website as well.

    1. JavaScript Rendering: To enable this option, visit Configuration > Spider > Rendering > JavaScript. Do note that JavaScript crawling is slower than text-only rendering.

    2. Crawl Linked XML Sitemaps: To enable this option, visit Configuration > Crawl Configuration > Crawl Linked XML Sitemaps.

    3. Auto Discover XML Sitemaps via robots.txt: You can find this option at Configuration > Crawl Configuration. Enabling this option will automatically fetch the website’s sitemap URL from the website’s robots.txt file (see the robots.txt example after this list).

    4. Crawl These Sitemaps: This option lets you feed the website’s sitemap URLs manually. It is also located at Configuration > Crawl Configuration.

    5. Crawl and Store JavaScript Resource Links: While crawling a JavaScript-based website in Screaming Frog, make sure to enable the options to crawl and store JavaScript resource links. You can find these options in Configuration > Crawl Configuration.

    6. User Agent: You can also change Screaming Frog’s crawling user agent to Googlebot Smartphone via Configuration > User Agent.

    7. Speed: Now, as I said, crawling a JavaScript-based website is comparatively slower, but you can increase the crawl speed by tweaking this option. All you need to do is visit Configuration > Speed, where you can increase the maximum number of threads and maximum URLs per second.
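    To illustrate configuration 3, here is a minimal robots.txt sketch showing the Sitemap directive that Screaming Frog reads when auto-discovery is enabled. The domain is a hypothetical placeholder.

    ```
    # robots.txt: the Sitemap directive below is what Screaming Frog reads
    # when "Auto Discover XML Sitemaps via robots.txt" is enabled.
    # example.com is a hypothetical placeholder.
    User-agent: *
    Allow: /

    Sitemap: https://example.com/sitemap.xml
    ```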

    If I forgot to mention any configuration, please let me know in the comments down below.

    GSC Performance Report Filter is now Sticky

    The GSC Performance report filters are now sticky, which means the filters will stick to the last settings where you left off.

    GSC now also has a reset filter button; clicking it will reset all the applied filters.

    In fact, if you have set the filters for search performance, Discover performance, or news performance (in the Performance report), the filters will remain there until you reset them.

    In SEO, the GSC Performance report filters play an important role, as they help narrow down the results we actually want.

    And now that the filters have become sticky, finding results with the filters you applied earlier has never been easier.

    What will happen if we use both the Canonical and No-Index Tag on a Webpage?

    Have you ever wondered, “What will happen if we use both the Canonical and No-Index tag on a webpage?”

    To understand this, let me explain the two:

    1. No-Index: It is a directive that must be obeyed by Google.
    2. Canonical: It is an attribute that adds extra information to an HTML element. It is a strong signal, but not a directive, so the crawler may still ignore it (and that’s why we may sometimes get duplicate content indexing issues in GSC).
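    As a concrete illustration, here is a minimal Next.js sketch of a page that sends both signals at once, which is exactly the setup the question describes. The file path and URLs are hypothetical placeholders.

    ```tsx
    // app/duplicate-page/page.tsx: a hypothetical page declaring both signals.
    import type { Metadata } from 'next'

    export const metadata: Metadata = {
      // Directive: Google must obey this and keep the page out of the index.
      robots: { index: false, follow: true },
      // Strong signal: points Google at the preferred URL, but as a hint
      // rather than a directive, it may be ignored.
      alternates: { canonical: 'https://example.com/original-page' },
    }

    export default function Page() {
      return <h1>Duplicate page</h1>
    }
    ```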

    Read More β†’

    Spam Score in SEO: What it is and Should You Consider it?

    Spam Score is a metric created by various third-party SEO tools that indicates how many spammy websites are linking to a particular website.

    However, Spam Score isn’t an official metric used by Google for ranking (as confirmed by John Mueller in a Reddit post), and you can safely ignore it.

    Read More β†’

    What is Canonical Tag in SEO?

    When we create a website, it may happen that two or more pages have the same structure and content with only slight differences. In such cases, when search engines crawl the website and find these pages, they flag them as duplicate content, which ultimately hampers the website’s traffic. To make certain that this issue doesn’t happen, there is a tag called the canonical tag that you can define in the page code. It basically tells the search engine that the URL present in the canonical tag is the original URL, and that the search engine should consider indexing the canonical URL instead of the URL of the current page the crawler is on.
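    In code, the canonical tag is a link element in the page’s head. Here is a minimal sketch using Next.js’s Metadata API; the file path and URL are hypothetical placeholders.

    ```tsx
    // app/variant-page/page.tsx (excerpt): declaring the canonical URL for
    // a near-duplicate page. Next.js renders this in the page's <head> as:
    //   <link rel="canonical" href="https://example.com/original-page" />
    // The URL is a hypothetical placeholder.
    import type { Metadata } from 'next'

    export const metadata: Metadata = {
      alternates: { canonical: 'https://example.com/original-page' },
    }
    ```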

    Do we Lose the Google Search Console Data after Domain Expiration?

    Have you ever been curious to know what happens to the Google Search Console data after a domain expires?

    Well, the data will remain present in Search Console, and the new domain owner can claim it after verifying the domain.

    So, the GSC data transfers from one owner to another.

    How to Remove Unwanted Content from Google Search?

    You can add a noindex meta tag to the webpage, but it can take up to 6 months for the webpage to disappear from the SERP.
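    On a Next.js site, for instance, noindex can also be applied as an HTTP response header. Here is a minimal sketch; the path is a hypothetical placeholder, and next.config.ts requires a recent Next.js version (use next.config.js otherwise).

    ```typescript
    // next.config.ts: a sketch of applying noindex via an X-Robots-Tag
    // response header. '/old-page' is a hypothetical placeholder path.
    import type { NextConfig } from 'next'

    const nextConfig: NextConfig = {
      async headers() {
        return [
          {
            source: '/old-page',
            headers: [{ key: 'X-Robots-Tag', value: 'noindex' }],
          },
        ]
      },
    }

    export default nextConfig
    ```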

    However, there are two alternative ways, mentioned below:

    1. Removals Tool: If you are the owner of the website.

    2. Refresh Outdated Content: If you are not the owner of the website.

    Keyword Difficulty

    Keyword difficulty is a metric that shows how difficult it is to rank for a keyword on the search engine results page.

    While keyword difficulty is not an official Google metric, many SEO tools provide it by analyzing the number of pages created for a particular keyword relative to those webpages’ authority.