The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.
Introduction to Googlebot spoofing
In this article, I’ll describe how and why to use Google Chrome (or Chrome Canary) to view a website as Googlebot.
We’ll set up a web browser specifically for Googlebot browsing. Using a user-agent browser extension is often close enough for SEO audits, but extra steps are needed to get as close as possible to emulating Googlebot.
Originally, web servers sent complete websites (fully rendered HTML) to web browsers. These days, many websites are rendered client-side (in the web browser itself) – whether that’s Chrome, Safari, or whatever browser a search bot uses – meaning the user’s browser and device must do the work to render a webpage.
Attempting to get around potential SEO issues, some websites use dynamic rendering, so each page has two versions: a client-side rendered page for people using browsers, and a server-side rendered (static HTML) page for search bots.
Viewing a website as Googlebot means we can see discrepancies between what a person sees and what a search bot sees. What Googlebot sees doesn’t need to be identical to what a person using a browser sees, but main navigation and the content you want the page to rank for should be the same.
That’s where this article comes in. For a proper technical SEO audit, we need to see what the most common search engine sees. In most English-speaking countries, at least, that’s Google.
Why use Chrome (or Chrome Canary) to view websites as Googlebot?
Can we see exactly what Googlebot sees?
When auditing, I use my Googlebot browser alongside Screaming Frog SEO Spider’s Googlebot spoofing and rendering, and Google’s own tools such as URL Inspection in Search Console (which can be automated using SEO Spider), and the render screenshot and code from the Mobile Friendly Test.
Even Google’s own publicly available tools aren’t 100% accurate in showing what Googlebot sees. But along with the Googlebot browser and SEO Spider, they can point towards issues and help with troubleshooting.
Why use a separate browser to view websites as Googlebot?
1. Time-saving

Having a dedicated browser saves time. Without relying on or waiting for other tools, I get an idea of how Googlebot sees a website in seconds.
While auditing a website that served different content to browsers and Googlebot, and where issues included inconsistent server responses, I needed to switch between the default browser user-agent and Googlebot more often than usual. But constant user-agent switching using a Chrome browser extension was inefficient.
Short of having a developer build a headless Chrome solution, the “Googlebot browser” setup is an easy way to spoof Googlebot.
2. Improved accuracy
Browser extensions can impact how websites look and perform. This approach keeps the number of extensions in the Googlebot browser to a minimum.
3. Fewer mistakes

It’s easy to forget to switch Googlebot spoofing off between browsing sessions, which can lead to websites not working as expected. I’ve even been blocked from websites for spoofing Googlebot, and had to email them with my IP address to ask for the block to be removed.
For which SEO audits are a Googlebot browser useful?
The most common use-case for SEO audits is likely websites using client-side rendering or dynamic rendering. You can easily compare what Googlebot sees to what a general website visitor sees.
Even with websites that don’t use dynamic rendering, you never know what you might find by spoofing Googlebot. After over eight years auditing e-commerce websites, I’m still surprised by issues I haven’t come across before.
Example Googlebot comparisons for technical SEO and content audits:
Is the main navigation different?
Is Googlebot seeing the content you want indexed?
Do URLs return different server responses? For example, incorrect URLs can return 200 OK for Googlebot but 404 Not Found for general website visitors.
Is the page layout different to what the general website visitor sees? For example, I often see links as blue text on a black background when spoofing Googlebot. While machines can read such text, we want to present something that looks user-friendly to Googlebot. If it can’t render your client-side website, how will it know what your pages contain? (Note: a website might display as expected in Google’s cache, but that isn’t the same as what Googlebot sees.)
Do websites redirect based on location? Googlebot mostly crawls from US-based IPs.
It depends how in-depth you want to go, but Chrome itself has many useful features for technical SEO audits. I sometimes compare its Console and Network tab data for a general visitor vs. a Googlebot visit (e.g. Googlebot might be blocked from files that are essential for page layout or are required to display certain content).
How to set up your Googlebot browser
Once set up (which takes about half an hour), the Googlebot browser solution makes it easy to quickly view webpages as Googlebot.
Step 1: Download and install Chrome or Canary
If Chrome isn’t your default browser, use it as your Googlebot browser.
If Chrome is your default browser, download and install Chrome Canary. Canary is a development version of Chrome where Google tests new features, and it can be installed and run separately to Chrome’s default version.
As Canary is a development version of Chrome, Google warns that it “can be unstable.” But I’ve yet to have issues using it as my Googlebot browser.
Step 2: Install browser extensions
I installed five browser extensions and a bookmarklet on my Googlebot browser. I’ll list the extensions, then advise on settings and why I use them.
For emulating Googlebot (the links are the same whether you use Chrome or Canary):
User-Agent Switcher extension
User-Agent Switcher does what it says on the tin: switches the browser’s user-agent. Chrome and Canary have a user-agent setting, but it only applies to the tab you’re using and resets if you close the browser.
I take the Googlebot user-agent string from Chrome’s browser settings, which at the time of writing will be the latest version of Chrome (note that below, I’m taking the user-agent from Chrome and not Canary).
To get the user-agent, access Chrome DevTools (by pressing F12 or using the hamburger menu to the top-right of the browser window, then navigating to More tools > Developer tools). See the screenshot below or follow these steps:
Go to the Network tab
From the top-right Network hamburger menu: More tools > Network conditions
Click the Network conditions tab that appears lower down the window
Untick “Use browser default”
Select “Googlebot Smartphone” from the list, then copy and paste the user-agent from the field below the list into the User-Agent Switcher extension list (another screenshot below). Don’t forget to switch Chrome back to its default user-agent if it’s your main browser.
At this stage, if you’re using Chrome (and not Canary) as your Googlebot browser, you may as well tick “Disable cache” (more on that later).
To access User-Agent Switcher’s list, right-click its icon in the browser toolbar and click Options (see screenshot below). “Indicator Flag” is text that appears in the browser toolbar to show which user-agent has been selected — I chose GS to mean “Googlebot Smartphone:”
I added Googlebot Desktop and the bingbots to my list, too.
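For reference, Google documents its crawler user-agent strings. At the time of writing, the Googlebot Smartphone user-agent looks like this (W.X.Y.Z is a placeholder that Google uses for the current Chrome version number, which changes over time):

```
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
```

Copying the string from Chrome’s Network conditions panel, as described above, gets you the version with a real Chrome version number filled in.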
Why spoof Googlebot’s user agent?
Short answer: web servers detect what is browsing a website from its user-agent string, and some serve different content depending on that string. For example, the user-agent for a Windows 10 device using the Chrome browser at the time of writing is:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.115 Safari/537.36
Long answer: that would be a whole other article.
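To make that detection concrete, here’s a hypothetical, self-contained Python sketch: a tiny local web server that returns 200 OK to a Googlebot user-agent but 404 to everyone else — the kind of inconsistent server response mentioned earlier. The server, handler, and behavior are invented for the demo; real detection logic varies by site.

```python
# Hypothetical demo: a tiny local server that, like some real websites,
# returns different server responses depending on the user-agent string.
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class UASensitiveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # 200 OK for Googlebot, 404 for everyone else -- exactly the
        # kind of inconsistency a Googlebot browser helps you catch.
        self.send_response(200 if "Googlebot" in ua else 404)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def status_for(user_agent, port):
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/", headers={"User-Agent": user_agent})
    try:
        return urllib.request.urlopen(req).status
    except urllib.error.HTTPError as err:
        return err.code

server = HTTPServer(("127.0.0.1", 0), UASensitiveHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

googlebot_status = status_for(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    port)
browser_status = status_for(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/102.0.5005.115 Safari/537.36",
    port)

print(googlebot_status, browser_status)  # 200 404
server.shutdown()
```

Spoofing the user-agent in your browser is the client-side counterpart: you’re changing which branch of logic like this the server takes.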
Windscribe (or another VPN)
Windscribe (or your choice of VPN) is used to spoof Googlebot’s US location. I use a pro Windscribe account, but the free account allows up to 2GB data transfer a month and includes US locations.
I don’t think the specific US location matters, but I pretend Gotham is a real place (in a time when Batman and co. have eliminated all villains):
Ensure settings that may impact how webpages display are disabled — Windscribe’s extension blocks ads by default. The two icons to the top-right should show a zero.
For the Googlebot browser scenario, I prefer a VPN browser extension to an application, because the extension is specific to my Googlebot browser.
Why spoof Googlebot’s location?
Googlebot mostly crawls websites from US IPs, and there are many reasons for spoofing Googlebot’s primary location.
Some websites block or show different content based on geolocation. If a website blocks US IPs, for example, Googlebot may never see the website and therefore cannot index it.
Another example: some websites redirect to different websites or URLs based on location. If a company had a website for customers in Asia and a website for customers in America, and redirected all US IPs to the US website, Googlebot would never see the Asian version of the website.
Link Redirect Trace extension

With Link Redirect Trace, I see at a glance what server response a URL returns.

View Rendered Source extension

The View Rendered Source extension enables easy comparison of raw HTML (what the web server delivers to the browser) and rendered HTML (the code rendered in the browser, client-side).
Step 3: Configure browser settings to emulate Googlebot
Next, we’ll configure the Googlebot browser settings in line with what Googlebot doesn’t support when crawling a website.
What doesn’t Googlebot support when crawling?
Service workers (because people clicking to a page from search results may never have visited before, so it doesn’t make sense to cache data for later visits).
Permission requests (e.g. push notifications, webcam, geolocation). If content relies on any of these, Googlebot will not see that content.
Googlebot is stateless, so it doesn’t support cookies, session storage, local storage, or IndexedDB. Data can be stored in these mechanisms, but it will be cleared before Googlebot crawls the next URL on a website.
These bullet points are summarized from an interview by Eric Enge with Google’s Martin Splitt:
Step 3a: DevTools settings
To open Developer Tools in Chrome or Canary, press F12 or, via the hamburger menu to the top-right, navigate to More tools > Developer tools:
The Developer Tools window is generally docked within the browser window, but I sometimes prefer it in a separate window. For that, change the “Dock side” in the second hamburger menu:
Disable cache

If using normal Chrome as your Googlebot browser, you may have done this already.

Otherwise, via the DevTools hamburger menu, navigate to More tools > Network conditions and tick the “Disable cache” option:
Block service workers
To block service workers, go to the Application tab > Service Workers > tick “Bypass for network”:
Step 3b: General browser settings
In your Googlebot browser, navigate to Settings > Privacy and security > Cookies (or visit chrome://settings/cookies directly) and choose the “Block all cookies (not recommended)” option (isn’t it fun to do something “not recommended?”):
Also in the “Privacy and security” section, choose “Site settings” (or visit chrome://settings/content) and individually block Location, Camera, Microphone, Notifications, and Background sync (and likely anything that appears there in future versions of Chrome):
Step 4: Emulate a mobile device
Finally, as our aim is to emulate Googlebot’s mobile-first crawling, emulate a mobile device within your Googlebot browser.
Towards the top-left of DevTools, click the device toolbar toggle, then choose a device to emulate in the browser (you can add other devices too):
Whichever device you choose, note that Googlebot doesn’t scroll webpages; instead, it renders using a window with a long vertical height.
I recommend testing websites in desktop view, too, and on actual mobile devices if you have access to them.
How about viewing a website as bingbot?
To create a bingbot browser, use a recent version of Microsoft Edge with the bingbot user agent.
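For reference, Bing also documents its crawler user-agents. At the time of writing, the bingbot desktop user-agent looks like this (W.X.Y.Z again stands in for the current browser version number):

```
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/W.X.Y.Z Safari/537.36
```

It’s worth checking Bing’s webmaster documentation for the current strings before relying on one.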
Yahoo! Search, DuckDuckGo, Ecosia, and other search engines are either powered by or based on Bing search, so Bing is responsible for a higher percentage of search than many people realize.
Summary and closing notes
So, there you have your very own Googlebot emulator.
Using an existing browser to emulate Googlebot is the easiest method to quickly view webpages as Googlebot. It’s also free, assuming you already use a desktop device that can install Chrome and/or Canary.
Questions? Something I missed? Tweet me @AlexHarfordSEO. Thanks for reading!
We’re back with another SEO recap with Tom Capper! As you’ve probably noticed, ChatGPT has taken the search world by storm. But does GPT-3 mean the end of SEO as we know it, or are there ways to incorporate the AI model into our daily work?
Tom tries to tackle this question by demonstrating how he plans to use ChatGPT, along with other natural language processing systems, in his own work.
Be sure to check out the commentary on ChatGPT from our other Moz subject matter experts, Dr. Pete Meyers and Miriam Ellis:
Hello, I’m Tom Capper from Moz, and today I want to talk about how I’m going to use ChatGPT and NLP, natural language processing apps in general in my day-to-day SEO tasks. This has been a big topic recently. I’ve seen a lot of people tweeting about this. Some people saying SEO is dead. This is the beginning of the end. As always, I think that’s maybe a bit too dramatic, but there are some big ways that this can be useful and that this will affect SEOs in their industry I think.
The first question I want to ask is, “Can we use this instead of Google? Are people going to start using NLP-powered assistants instead of search engines in a big way?”
So just being meta here, I asked ChatGPT to write a song about Google’s search results being ruined by an influx of AI content. This is obviously something that Google themselves is really concerned about, right? They talked about it with the helpful content update. Now I think the fact that we can be concerned about AI content ruining search results suggests there might be some problem with an AI-powered search engine, right?
No, AI-powered is maybe the wrong term because, obviously, Google themselves are to some degree AI-powered, but I mean pure, AI-written results. So for example, I stole this from a tweet and I’ve credited the account below, but if you ask it, “What is the fastest marine mammal,” the fastest marine mammal is the peregrine falcon. That is not a mammal.
Then it mentions the sailfish, which is not a mammal, and marlin, which is not a mammal. This is a particularly bad result. Whereas if I google this, great, that is an example of a fast mammal. We’re at least on the right track. Similarly, if I’m looking for a specific article on a specific web page, I’ve searched Atlantic article about the declining quality of search results, and even though clearly, if you look at the other information that it surfaces, clearly this has consumed some kind of selection of web pages, it’s refusing to acknowledge that here.
Whereas obviously, if I google that, very easy. I can find what I’m looking for straightaway. So yeah, maybe I’m not going to just replace Google with ChatGPT just yet. What about writing copy though? What about I’m fed up of having to manually write blog posts about content that I want to rank for or that I think my audience want to hear about?
So I’m just going to outsource it to a robot. Well, here’s an example. “Write a blog post about the future of NLP in SEO.” Now, at first glance, this looks okay. But actually, when you look a little bit closer, it’s a bluff. It’s vapid. It doesn’t really use any concrete examples.
It doesn’t really read the room. It doesn’t talk about sort of how our industry might be affected more broadly. It just uses some quick tactical examples. It’s not the worst article you could find. I’m sure if you pulled a teenager off the street who knew nothing about this and asked them to write about it, they would probably produce something worse than this.
But on the other hand, if you saw an article on the Moz blog or on another industry credible source, you’d expect something better than this. So yeah, I don’t think that we’re going to be using ChatGPT as our copywriter right away, but there may be some nuance, which I’ll get to in just a bit. What about writing descriptions though?
I thought this was pretty good. “Write a meta description for my Moz blog post about SEO predictions in 2023.” Now I could do a lot better with the query here. I could tell it what my post is going to be about for starters so that it could write a more specific description. But this is already quite good. It’s the right length for a meta description. It covers the bases.
It’s inviting people to click. It makes it sound exciting. This is pretty good. Now you’d obviously want a human to review these for the factual issues we talked about before. But I think a human plus the AI is going to be more effective here than just the human or at least more time efficient. So that’s a potential use case.
What about ideating copy? So I said that the pure ChatGPT written blog post wasn’t great. But one thing I could do is get it to give me a list of subtopics or subheadings that I might want to include in my own post. So here, although it is not the best blog post in the world, it has covered some topics that I might not have thought about.
So I might want to include those in my own post. So instead of asking it “write a blog post about the future of NLP in SEO,” I could say, “Write a bullet point list of ways NLP might affect SEO.” Then I could steal some of those, if I hadn’t thought of them myself, as potential topics that my own ideation had missed. Similarly you could use that as a copywriter’s brief or something like that, again in addition to human participation.
Even experienced coders often find themselves falling back to Stack Overflow and this kind of thing. So here’s an example. “Write an SQL query that extracts all the rows from table2 where column A also exists as a row in table1.” So that’s quite complex. I’ve not really made an effort to make that query very easy to understand, but the result is actually pretty good.
It’s a working piece of SQL with an explanation below. This is much quicker than me figuring this out from first principles, and I can take that myself and work it into something good. So again, this is AI plus human rather than just AI or just human being the most effective. I could get a lot of value out of this, and I definitely will. I think in the future, rather than starting by going to Stack Overflow or googling something where I hope to see a Stack Overflow result, I think I would start just by asking here and then work from there.
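For instance, one way to express that request is an `IN` subquery. Here it is running against an in-memory SQLite database, with table and column names invented for the demo (the source doesn’t show ChatGPT’s actual output, so treat this as one plausible answer):

```python
# Runnable sketch of the query described above: select the rows from
# table2 whose value in column "a" also appears in table1.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (a TEXT);
CREATE TABLE table2 (a TEXT, b TEXT);
INSERT INTO table1 VALUES ('x'), ('y');
INSERT INTO table2 VALUES ('x', 'keep'), ('z', 'drop'), ('y', 'keep');
""")

rows = conn.execute(
    "SELECT * FROM table2 WHERE a IN (SELECT a FROM table1)"
).fetchall()
print(rows)  # only the rows whose column a exists in table1
```

A `JOIN` or `EXISTS` would work too; the point is that the AI gives you a working starting point to verify and refine, not a finished answer.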
That’s all. So that’s how I think I’m going to be using ChatGPT in my day-to-day SEO tasks. I’d love to hear what you’ve got planned. Let me know. Thanks.
This afternoon, HubSpot announced it would be making cuts in its workforce during Q1 2023. In a Securities and Exchange Commission filing it put the scale of the cuts at 7%. This would mean losing around 500 employees from its workforce of over 7,000.
The reasons cited were a downward trend in business and a “faster deceleration” than expected following positive growth during the pandemic.
Layoffs follow swift growth. Indeed, the layoffs need to be seen against the background of very rapid growth at the company. The size of the workforce at HubSpot grew over 40% between the end of 2020 and today.
In 2022 it announced a major expansion of its international presence with new operations in Spain and the Netherlands and a plan to expand its Canadian presence in 2023.
Why we care. The current cool down in the martech space, and in tech generally, does need to be seen in the context of startling leaps forward made under pandemic conditions. As the importance of digital marketing and the digital environment in general grew at an unprecedented rate, vendors saw opportunities for growth.
The world is re-adjusting. We may not be seeing a bubble burst, but we are seeing a bubble undergoing some slight but predictable deflation.
Kim Davis is the Editorial Director of MarTech. Born in London, but a New Yorker for over two decades, Kim started covering enterprise software ten years ago. His experience encompasses SaaS for the enterprise, digital-ad-data-driven urban planning, and applications of SaaS, digital technology, and data in the marketing space.
He first wrote about marketing technology as editor of Haymarket’s The Hub, a dedicated marketing tech website, which subsequently became a channel on the established direct marketing brand DMN. Kim joined DMN proper in 2016, as a senior editor, becoming Executive Editor, then Editor-in-Chief, a position he held until January 2020.
Prior to working in tech journalism, Kim was Associate Editor at a New York Times hyper-local news site, The Local: East Village, and has previously worked as an editor of an academic publication, and as a music journalist. He has written hundreds of New York restaurant reviews for a personal blog, and has been an occasional guest contributor to Eater.