How to Efficiently Crawl a Next.js Website in Screaming Frog?
While doing SEO for a website built on Next.js (or any other JavaScript framework), it's important to crawl the website efficiently in Screaming Frog. But JavaScript websites are a little more complicated to crawl than websites built with technologies like PHP, plain HTML, and so on, because much of their content is rendered in the browser rather than served as ready-made HTML.
Below, I have shared a few configurations that you can use to efficiently crawl a Next.js website in Screaming Frog.
Note: You can use the configurations below for any other JavaScript-based website as well.
- Rendering with JavaScript: To enable this option, visit Configuration > Spider > Rendering and select JavaScript. Do note that JavaScript rendering is slower than text-only crawling, since every page has to be loaded in a headless browser before its content can be parsed (see the Next.js page sketch after this list for why this matters).
- Crawl Linked XML Sitemaps: To enable this option, visit Configuration > Crawl Configuration and tick Crawl Linked XML Sitemaps, so the URLs listed in the site's sitemaps are included in the crawl.
- Auto Discover XML Sitemaps via robots.txt: You can find this option at Configuration > Crawl Configuration. Enabling it makes Screaming Frog automatically fetch the website's sitemap URL from the Sitemap directive in the website's robots.txt file (see the robots.txt example after this list).
- Crawl These Sitemaps: This option lets you feed in the website's sitemap URLs manually, which is useful when they aren't referenced in robots.txt. It is also located at Configuration > Crawl Configuration (a minimal sitemap is shown after this list).
- Crawl and Store JavaScript Resource Links: While crawling a JavaScript-based website in Screaming Frog, make sure the options to crawl and store JavaScript resource links are enabled, so the script files the pages depend on (for Next.js, the bundles under /_next/static/) are fetched during rendering. You can find this option in Configuration > Crawl Configuration (see the markup sketch after this list).
- User Agent: You can also change Screaming Frog's crawling user agent to Googlebot Smartphone via Configuration > User Agent, so the site responds the way it would to Google's mobile-first crawler (the full user-agent string is shown after this list).
- Speed: As I said, crawling a JavaScript-based website is comparatively slow, but you can increase the crawl speed by tweaking this option. Visit Configuration > Speed and raise the maximum number of threads and the maximum URLs crawled per second. Just don't push these so high that you overload the website's server.
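To see why JavaScript rendering is needed in the first place, here is a minimal sketch of a hypothetical Next.js page (the /api/products endpoint is made up for illustration). Its content only exists after the browser runs the JavaScript bundle, so a text-only crawl would see an empty list:

```tsx
// pages/products.tsx — a hypothetical Next.js page that renders
// its content entirely on the client.
import { useEffect, useState } from "react";

export default function Products() {
  const [products, setProducts] = useState<string[]>([]);

  useEffect(() => {
    // The list items only appear after this fetch runs in the browser,
    // which is why a text-only crawler sees an empty <ul>.
    fetch("/api/products")
      .then((res) => res.json())
      .then((data: string[]) => setProducts(data));
  }, []);

  return (
    <ul>
      {products.map((name) => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}
```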
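Auto discovery works because robots.txt can declare sitemap locations using the standard Sitemap directive. A typical file (example.com is a placeholder) looks like this:

```
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```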
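And if you feed sitemaps in manually via Crawl These Sitemaps, this is the format Screaming Frog expects — the standard sitemaps.org XML schema (the URL and date here are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```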
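As for JavaScript resource links, this is roughly the markup a Next.js page uses to load its bundles; the chunk filenames below are invented, but /_next/static/ is where Next.js serves them. If these resources aren't crawled and stored, rendering fails and content gets missed:

```html
<!-- Simplified; real chunk filenames contain build-specific hashes. -->
<script src="/_next/static/chunks/main-5f8a2c91.js" defer></script>
<script src="/_next/static/chunks/pages/index-0b31d7aa.js" defer></script>
```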
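Finally, the Googlebot Smartphone user agent looks like this (Google documents it with a placeholder Chrome version, since Googlebot keeps its Chrome version up to date):

```
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
```

You don't need to type this out yourself; just pick the Googlebot Smartphone preset in the User Agent configuration.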
If I forgot to mention any configuration, please let me know in the comments below.