SEARCHENGINES
Google On The Most Expensive Pages To Crawl

There is a fun Twitter thread where Google’s John Mueller and SEO Elmer Boutin talk about which pages are expensive for Google to crawl. John starts off by explaining that Google doesn’t think about it in terms of expense; what matters is whether the page is relevant and useful – that is what Google cares about.
John goes on to say that the “most ‘expensive’ pages are those we’ve been crawling & indexing for years, and nobody has looked for them ever.” But Google crawls and indexes those pages because “what if tomorrow is different? It’s good to be prepared,” he added. He then threw out the line “15% of all searches are new every day, you never know.” Indeed.
Here are those tweets:
All of web-search costs time & resources. I don’t think it’s useful to compare the processing effort of specific media types, the important part is that relevant & useful content can be linked to for users who are looking for it.
— John Mueller is mostly not here 🐀 (@JohnMu) November 15, 2022
Thank you for the response. You bring up a very good point that I believe we SEOs talk about often, but our advice is not heeded as much as it should be.
— Elmer Boutin (@rehor) November 15, 2022
Sure, Google tries to be as efficient as possible in everything it does. That means discovering, crawling, indexing, ranking and serving are all done with efficiency in mind. But not at the expense of providing the most useful and relevant search results to searchers.
Forum discussion at Twitter.
Source: www.seroundtable.com
SEARCHENGINES
Google Business Profile Services Showing Incorrect Pricing

Google Business Profiles lets you see what services business listings offer; Google has been doing this for a while now, and those services now seem to impact local rankings. But what is new and scary is that Google is making up pricing for services – pricing that is almost always incorrect and sometimes damaging for those businesses.
Carrie Hill and Sukhjit S Matharu spotted this and posted a couple of examples, one from a client and one from a random business. In both cases, the pricing Google listed is incorrect. Carrie said her client is not offering these services for free, despite what Google says. And her client’s competitors are not offering bed bug inspection for only $1 and $100.
Here is what Sukhjit S Matharu shared on Twitter, saying “when looking into a client’s services in their GBP, we noticed that some of the predefined services had a “free” label which we nor the client added.”
Here is what Carrie Hill shared on Twitter saying “Here’s another where pricing is arbitrarily added in – not from the client… certainly not correct!”
Joy Hawkins, a local SEO, also confirmed this is new.
I wonder if this is easy for the business to fix by going into their Google Business Profiles and editing their services. But I suspect most of these businesses have no clue Google added these prices to their services and it might lead to some bad reviews if a customer is charged or quoted more than what is listed in Google Search.
This reminds me of when Google Local Service Ads estimated pricing, which upset many businesses.
Forum discussion at Twitter.
SEARCHENGINES
Google’s 15MB Googlebot Limit Applies To Each Individual Subresource

Google has updated the Googlebot help document on crawling to clarify that the 15MB fetch size limit applies to each fetch of the individual subresources referenced in the HTML as well, such as JavaScript and CSS files.
Google initially added information about this 15MB limit several months ago, which caused a lot of concern in the SEO industry, so make sure to read that story.
The help document now reads:
Googlebot can crawl the first 15MB of an HTML file or supported text-based file. Each resource referenced in the HTML such as CSS and JavaScript is fetched separately, and each fetch is bound by the same file size limit. After the first 15MB of the file, Googlebot stops crawling and only considers the first 15MB of the file for indexing. The file size limit is applied on the uncompressed data. Other Google crawlers, for example Googlebot Video and Googlebot Image, may have different limits.
It previously read:
Googlebot can crawl the first 15MB of an HTML file or supported text-based file. Any resources referenced in the HTML such as images, videos, CSS, and JavaScript are fetched separately. After the first 15MB of the file, Googlebot stops crawling and only considers the first 15MB of the file for indexing. The file size limit is applied on the uncompressed data. Other Google crawlers may have different limits.
Gary added on LinkedIn, “PSA from my inbox: The 15MB resource fetch threshold applies on the JavaScript resources, too, so if your JavaScript files are larger than that, your site might have a bad time in Google Search. See googlebot.com for information about the fetch features and limitations. Yes, there are JavaScript resources out there that are larger than 15MB. No, it’s not a good idea to have JavaScript files that are that large.”
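Since the limit applies to each fetch separately, a page is not penalized for the combined weight of its resources – only for any single file that crosses the 15MB uncompressed threshold on its own. Here is a minimal sketch of that per-resource check; the function name and the size map are hypothetical, not part of any Google tool.

```python
# Hypothetical sketch: flag resources that exceed Googlebot's 15MB
# (uncompressed) per-fetch limit described above. Each file is fetched
# separately, so each is checked against the limit on its own.
GOOGLEBOT_FETCH_LIMIT = 15 * 1024 * 1024  # 15MB, applied per fetch

def oversized_resources(resources):
    """Return the names of resources whose uncompressed size exceeds the limit.

    `resources` maps a resource name (e.g. a JS or CSS URL) to its
    uncompressed size in bytes.
    """
    return [name for name, size in resources.items()
            if size > GOOGLEBOT_FETCH_LIMIT]

# Example: the HTML and each subresource are judged independently, so
# only the oversized JavaScript bundle is flagged.
sizes = {
    "index.html": 2 * 1024 * 1024,      # 2MB  - under the limit
    "app.bundle.js": 18 * 1024 * 1024,  # 18MB - exceeds the limit
    "styles.css": 300 * 1024,           # 300KB - under the limit
}
print(oversized_resources(sizes))  # -> ['app.bundle.js']
```

Note the check is strictly greater-than: a file at exactly 15MB is still within the stated crawl window, and the sizes compared are uncompressed, matching the help document.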
Here are some comments on early feedback from the SEO community on this:
Are you running into 15mb DOM size? That’s quite a bit.
— johnmu is not a chatbot yet 🐀 (@JohnMu) March 19, 2023
The 15mb is just about fetching, it’s totally separate from the indexing side.
— johnmu is not a chatbot yet 🐀 (@JohnMu) March 19, 2023
Forum discussion at LinkedIn.
SEARCHENGINES
Sam Michelson On Google Algorithm Updates With Reputation Management

In part one, we spoke about who Sam Michelson is and his business development techniques. In part two, we dove into CRM software and how they use it, including looking at a new piece of software for partner management. In part three, we went into pricing your services, and in part four, we talked about human resources. In part five, we talk about Google algorithm updates.
He said that in the old days of SEO, you lived and died by these Google algorithm updates. With reputation management, it is about managing the first ten to twenty results, and it is about doing SEO not just for your own site but for other sites you don’t manage.
They take a data-centric approach to Google algorithms. They basically study the search results and the sites competing within them. Then they try to replicate those results based on what Google tends to rank for those queries. They find a pattern, and then they try to repeat that pattern.
Reputation management aims to build a search result set that Google wants to show, people want to see, and your clients approve of. And that type of content is things you own, earned media, profiles, and it tends to be all very deliberate.
For more on Sam Michelson, visit Five Blocks and find him on LinkedIn.
You can subscribe to our YouTube channel by clicking here so you don’t miss the next vlog. I do have a nice lineup of interviews scheduled with SEOs and SEMs, many of which you don’t want to miss – and I promise to continue to make these vlogs better over time. If you want to be interviewed, please fill out this form with your details.
Forum discussion at YouTube.