The evergreen Googlebot was a huge leap forward in Google's ability to crawl and render content. Prior to this update, Googlebot was based on Chrome 41 (released in 2015) so that the search engine could index pages that would still work for users on older versions of Chrome. The drawback, however, was that sites built with modern features might not be supported. This discrepancy created more work for site owners who wanted to take advantage of modern frameworks while still maintaining compatibility with Google's web crawler.
Always up-to-date. “Now, whenever there is an update, it pretty much automatically updates to the latest stable version, rather than us having to work years on actually making one version jump,” said Martin Splitt, search developer advocate at Google, during our crawling and indexing session of Live with Search Engine Land. Splitt was part of the team that worked on making Googlebot “evergreen,” meaning that the crawler will always be up-to-date with the latest version of Chromium; he also unveiled it at the company’s I/O developer conference in 2019.
Twice the work. Before the advent of the evergreen Googlebot, one common workaround was to use modern frameworks to build a site for users, but to serve alternate code for Googlebot. This was achieved by identifying Googlebot’s user agent, which included “41” to represent the version of Chrome it was using.
This compromise meant that site owners had to create and maintain an alternate version of their content specifically for Googlebot, which was laborious and time-consuming.
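To make that workaround concrete, here is a minimal sketch of the kind of user-agent sniffing sites relied on. This is not code from the article or from Google; it is a hypothetical Node server, and `renderedSnapshot` and `spaShell` are placeholder strings standing in for a pre-rendered HTML snapshot and the normal JavaScript app shell.

```ts
// Hypothetical sketch of the pre-evergreen workaround (dynamic rendering):
// serve a pre-rendered snapshot when the request looks like the old
// Chrome 41-based Googlebot, and the normal JS app to everyone else.
import * as http from "node:http";

// Placeholder content; a real site would generate these from its build
// or a prerendering service.
const renderedSnapshot =
  "<html><body><h1>Pre-rendered content for the crawler</h1></body></html>";
const spaShell =
  '<html><body><div id="app"></div><script src="/bundle.js"></script></body></html>';

const server = http.createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";

  // The brittle check many sites used: match the pinned Chrome version
  // ("Chrome/41") that Googlebot advertised at the time.
  const looksLikeOldGooglebot =
    ua.includes("Googlebot") && ua.includes("Chrome/41");

  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(looksLikeOldGooglebot ? renderedSnapshot : spaShell);
});

server.listen(3000);
```

Maintaining both branches is exactly the duplicated effort described above: every content or layout change has to be reflected in the snapshot as well as the app.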
Googlebot’s user agent, revisited. Part of the challenge of updating Googlebot’s user agent to reflect the latest version of Chromium was that some sites were using the above-mentioned technique to identify the web crawler. An updated user agent could have meant that a site owner who wasn’t aware of the change served no code to Googlebot, which could have resulted in their site not getting crawled properly and, subsequently, not being indexed or ranked.
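The risk comes down to how the user agent is matched. As an illustrative sketch (again, not Google's code, and the user-agent string below is an approximation with a made-up Chrome version), a check pinned to "Chrome/41" silently stops matching once the version number changes, while matching on the "Googlebot" product token keeps working across Chromium updates:

```ts
// Illustrative only: two ways of detecting Googlebot from a user-agent string.
// The string below approximates an evergreen Googlebot smartphone UA;
// the Chrome version shown is hypothetical.
const evergreenUA =
  "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36 " +
  "(compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

const pinnedToChrome41 = (ua: string) => ua.includes("Chrome/41");    // breaks after the update
const matchesGooglebotToken = (ua: string) => /Googlebot/i.test(ua);  // survives version bumps

console.log(pinnedToChrome41(evergreenUA));      // false -> site might serve Googlebot nothing
console.log(matchesGooglebotToken(evergreenUA)); // true  -> site keeps serving its content
```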
To prevent disruption of its services, Google communicated the user agent change in advance and worked with technology providers to ensure that sites would still get crawled as usual. “When we actually flipped . . . pretty much no fires broke out,” Splitt said.
Why we care. The evergreen Googlebot can access more of your content without the need for workarounds. That also means fewer indexing issues for sites running modern JavaScript. This enables site owners and SEOs to spend more of their time creating content instead of splitting their attention between supporting users and an outdated version of Chrome.
Want more Live with Search Engine Land? Get it here:
- Click here for the full session.
- SEOs and developers: Why they’re better together [Video]
- Common oversights that can impede Google from crawling your content [Video]
- How Google crawls and indexes: a non-technical explanation [Video]
- Don’t try to reinvent the SEO wheel, says Google’s Martin Splitt
- You can also find a full list of all our Live with Search Engine Land sessions on YouTube.