Google clarified that the Search Console Index Coverage Report does not show up-to-the-minute coverage data. Google recommends the URL Inspection Tool for those who need the most current confirmation of whether a URL is indexed.
Google Clarifies Index Coverage Report Data
There have been a number of tweets noting what seemed like an error in the Index Coverage Report that caused it to report that a URL was crawled but not indexed.
Turns out that this isn’t a bug but rather a limitation of the Index Coverage report.
Google explained it in a series of tweets.
Reports of Search Console Report Bug
“A few Google Search Console users reported that they saw URLs in the Index Coverage report marked as “Crawled – currently not indexed” that, when inspected with the URL Inspection tool, were listed as “Submitted and indexed” or some other status.”
Google Explains the Index Coverage Report
Google then shared in a series of tweets how the Index Coverage report works.
“This is because the Index Coverage report data is refreshed at a different (and slower) rate than the URL Inspection.
The results shown in URL Inspection are more recent, and should be taken as authoritative when they conflict with the Index Coverage report. (2/4)
Data shown in Index Coverage should reflect the accurate status of a page within a few days, when the status changes. (3/4)
As always, thanks for the feedback 🙏, we’ll look for ways to decrease this discrepancy so our reports and tools are always aligned and fresh! (4/4)”
John Mueller Answers Question About Index Coverage Report
Google’s John Mueller had answered a question about this issue on October 8, 2021. This was before it was understood that there wasn’t an error in the Index Coverage Report, but rather a mismatch between the expectation of data freshness in the Index Coverage Report and the reality that the data is refreshed at a slower pace.
The person asking the question related that in July 2021 they noticed that URLs submitted through Google Search Console reported the error of submitted but not indexed, even though the pages didn’t have a noindex tag.
Thereafter Google would return to the website, crawl the page and index it normally.
“The problem is we get 300 errors/no index and then on subsequent crawls only five get crawled before they re-crawl so many more.
So, given that they are noindexed and granted if things can’t render or they can’t find the page, they’re directed to our page not found, which does have a no-index.
And so I know somehow they’re getting directed there.
Is this just a memory issue or since they’re subsequently crawled fine, is it just a…”
John Mueller answered:
“It’s hard to say without looking at the pages.
So I would really try to double-check if this was a problem then and is not a problem anymore or if it’s still something that kind of intermittently happens.
Because if it doesn’t matter, if it doesn’t kind of take place now anymore then like whatever…”
The person asking the question responded by insisting that it still takes place and that it continues to be an ongoing problem.
John Mueller responded by saying that his hunch is that something with the rendering might be going wrong.
“And if that’s something that still takes place, I would try to figure out what might be causing that.
And it might be that when you test the page in Search Console, nine times out of ten it works well. But kind of that one time out of ten when it doesn’t work well and redirects to the error page or we think it redirects to the error page.”
Mueller next explained how the crawling and rendering process happens on Google’s side.
He makes reference to a “Chrome-type” browser which might be a reference to Google’s headless Chrome bot which is essentially a Chrome browser that is missing the front end user interface.
“What happens on our side is we crawl the HTML page and then we try to process the HTML page in kind of the Chromium kind of Chrome-type browser.
And for that we try to pull in all of the resources that are mentioned on there.
So if you go to the Developer Console in Chrome and you look at the network section, it shows you a waterfall diagram of everything that it loads to render the page.
And if there are lots of things that need to be loaded, then it can happen that things time out and then we might run into that error situation.”
Mueller’s suggestion is related to Rendering SEO, a topic discussed by Google’s Martin Splitt, in which the technical aspects of how a web page is downloaded and rendered in a browser are optimized for fast and efficient performance.
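As a rough illustration of the diagnostic step Mueller describes, the sketch below enumerates the sub-resources (stylesheets, scripts, images) a renderer would need to fetch for a page, using only the Python standard library. The HTML snippet is hypothetical; in practice the page source would be fetched from the live site, and each resource URL could then be requested and timed to spot the slow responses that can trip up rendering.

```python
from html.parser import HTMLParser

class ResourceExtractor(HTMLParser):
    """Collects the sub-resource URLs a browser would fetch while rendering a page."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img") and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.resources.append(attrs["href"])

# Hypothetical page HTML; in a real audit this would come from fetching the page.
html = """
<html><head>
<link rel="stylesheet" href="/css/main.css">
<script src="/js/app.js"></script>
</head><body><img src="/img/hero.png"></body></html>
"""

parser = ResourceExtractor()
parser.feed(html)
print(parser.resources)
# Next step (not shown): fetch and time each URL, flagging any that
# approach a timeout, similar to the waterfall view in Chrome DevTools.
```

This mirrors, in a very simplified way, what the network waterfall in Chrome’s Developer Console shows: the more resources a page pulls in, the more opportunities there are for one of them to time out during Google’s rendering.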
Some Crawl Errors Are Server Related
Mueller’s answer was not precisely relevant to this specific situation, because the problem was one of data freshness expectations, not indexing.
However his advice is still accurate for the many times that there is a server-related issue that is causing resource serving timeouts that block the proper rendering of a web page.
This can happen in the early morning hours when rogue bots swarm a website and slow it down.
A site that doesn’t have optimized resources, particularly one on a shared server, can experience dramatic slowdowns where the server begins showing 500 error response codes.
Speaking from experience in maintaining a dedicated server, misconfiguration in Nginx, Apache or PHP at the server level or a failing hard drive can also contribute to the website failing to show requested pages to Google or to website visitors.
Some of these issues can creep in unnoticed when server software is updated to less-than-optimal settings, requiring troubleshooting to identify the errors.
Fortunately, server administration software like Plesk has diagnostic and repair tools that can help fix these problems when they arise.
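One way to spot the kind of overnight 500-error spikes described above is to count error responses per hour in the server’s access log. The sketch below does this for a few hypothetical log lines in common log format; on a real server the lines would be read from the access log file, whose path varies by setup.

```python
from collections import Counter

# Hypothetical access-log lines in common log format; in practice these
# would be read from the web server's access log (path varies by setup).
log_lines = [
    '203.0.113.5 - - [14/Nov/2022:03:12:01 +0000] "GET /page HTTP/1.1" 500 0',
    '203.0.113.5 - - [14/Nov/2022:03:12:02 +0000] "GET /other HTTP/1.1" 200 512',
    '198.51.100.7 - - [14/Nov/2022:03:13:10 +0000] "GET /page HTTP/1.1" 500 0',
]

errors_by_hour = Counter()
for line in log_lines:
    status = line.split('"')[2].split()[0]  # status code follows the quoted request
    hour = line.split("[")[1][:14]          # e.g. '14/Nov/2022:03'
    if status == "500":
        errors_by_hour[hour] += 1

print(dict(errors_by_hour))  # hours with 500 errors, e.g. {'14/Nov/2022:03': 2}
```

A cluster of 500s concentrated in the same early-morning hour would point toward the bot-swarm or resource-exhaustion scenario rather than a Google-side problem.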
This time the problem was that Google hadn’t set the correct expectation for the Index Coverage Report.
But next time it could be a server or rendering issue.
Google Index Coverage Report and Reported Indexing Errors
Watch at the 6:00 Minute Mark
Google to Pay $391.5 Million Settlement Over Location Tracking, State AGs Say
Google has agreed to pay a $391.5 million settlement to 40 states to resolve accusations that it tracked people’s locations in violation of state laws, including snooping on consumers’ whereabouts even after they told the tech behemoth to bug off.
Louisiana Attorney General Jeff Landry said it is time for Big Tech to recognize state laws that limit data collection efforts.
“I have been ringing the alarm bell on big tech for years, and this is why,” Mr. Landry, a Republican, said in a statement Monday. “Citizens must be able to make informed decisions about what information they release to big tech.”
The attorneys general said the investigation resulted in the largest-ever multistate privacy settlement. Connecticut Attorney General William Tong, a Democrat, said Google’s penalty is a “historic win for consumers.”
“Location data is among the most sensitive and valuable personal information Google collects, and there are so many reasons why a consumer may opt out of tracking,” Mr. Tong said. “Our investigation found that Google continued to collect this personal information even after consumers told them not to. That is an unacceptable invasion of consumer privacy, and a violation of state law.”
Location tracking can help tech companies sell digital ads to marketers looking to connect with consumers within their vicinity. It’s another tool in a data-gathering toolkit that generates more than $200 billion in annual ad revenue for Google, accounting for most of the profits pouring into the coffers of its corporate parent, Alphabet, which has a market value of $1.2 trillion.
The settlement is part of a series of legal challenges to Big Tech in the U.S. and around the world, which include consumer protection and antitrust lawsuits.
Though Google, based in Mountain View, California, said it fixed the problems several years ago, the company’s critics remained skeptical. State attorneys general who also have tussled with Google have questioned whether the tech company will follow through on its commitments.
The states aren’t dialing back their scrutiny of Google’s empire.
Last month, Texas Attorney General Ken Paxton said he was filing a lawsuit over reports that Google unlawfully collected millions of Texans’ biometric data such as “voiceprints and records of face geometry.”
The states began investigating Google’s location tracking after The Associated Press reported in 2018 that Android devices and iPhones were storing location data despite the activation of privacy settings intended to prevent the company from following along.
Arizona Attorney General Mark Brnovich went after the company in May 2020. The state’s lawsuit charged that the company had defrauded its users by misleading them into believing they could keep their whereabouts private by turning off location tracking in the settings of their software.
Arizona settled its case with Google for $85 million last month. By then, attorneys general in several other states and the District of Columbia had pounced with their own lawsuits seeking to hold Google accountable.
Along with the hefty penalty, the state attorneys general said, Google must not hide key information about location tracking, must give users detailed information about the types of location tracking information Google collects, and must show additional information to people when users turn location-related account settings to “off.”
States will receive differing sums from the settlement. Mr. Landry’s office said Louisiana would receive more than $12.7 million, and Mr. Tong’s office said Connecticut would collect more than $6.5 million.
The financial penalty will not cripple Google’s business. The company raked in $69 billion in revenue for the third quarter of 2022, according to reports, yielding about $13.9 billion in profit.
Google downplayed its location-tracking tools Monday and said it changed the products at issue long ago.
“Consistent with improvements we’ve made in recent years, we have settled this investigation which was based on outdated product policies that we changed years ago,” Google spokesman Jose Castaneda said in a statement.
Google product managers Marlo McGriff and David Monsees defended their company’s Search and Maps products’ usage of location information.
“Location information lets us offer you a more helpful experience when you use our products,” the two men wrote on Google’s blog. “From Google Maps’ driving directions that show you how to avoid traffic to Google Search surfacing local restaurants and letting you know how busy they are, location information helps connect experiences across Google to what’s most relevant and useful.”
The blog post touted transparency tools and auto-delete controls that Google has developed in recent years and said the private browsing Incognito mode prevents Google Maps from saving an account’s search history.
Mr. McGriff and Mr. Monsees said Google would make changes to its products as part of the settlement. The changes include simplifying the process for deleting location data, updating the method to set up an account and revamping information hubs.
“We’ll provide a new control that allows users to easily turn off their Location History and Web & App Activity settings and delete their past data in one simple flow,” Mr. McGriff and Mr. Monsees wrote. “We’ll also continue deleting Location History data for users who have not recently contributed new Location History data to their account.”
• This article is based in part on wire service reports.