GOOGLE

Google Explains Why Index Coverage Report is Slow


Google clarified that the Search Console Index Coverage Report does not show up-to-the-minute coverage data. For those who need the most current confirmation of whether a URL is indexed, Google recommends using the URL Inspection Tool.

Google Clarifies Index Coverage Report Data

There have been a number of tweets about what seemed like an error in the Index Coverage Report that was causing it to show that a URL was crawled but not indexed.

It turns out this isn’t a bug but rather a limitation of the Index Coverage report.

Google explained it in a series of tweets.

Reports of Search Console Report Bug

“A few Google Search Console users reported that they saw URLs in the Index Coverage report marked as “Crawled – currently not indexed” that, when inspected with the URL Inspection tool, were listed as “Submitted and indexed” or some other status.”

Google Explains the Index Coverage Report

Google then shared in a series of tweets how the Index Coverage report works.

“This is because the Index Coverage report data is refreshed at a different (and slower) rate than the URL Inspection.

The results shown in URL Inspection are more recent, and should be taken as authoritative when they conflict with the Index Coverage report. (2/4)

Data shown in Index Coverage should reflect the accurate status of a page within a few days, when the status changes. (3/4)

As always, thanks for the feedback 🙏, we’ll look for ways to decrease this discrepancy so our reports and tools are always aligned and fresh! (4/4)”

John Mueller Answers Question About Index Coverage Report

Google’s John Mueller had answered a question about this issue on October 8, 2021. This was before it was understood that there was no error in the Index Coverage Report, but rather a mismatch between the expected freshness of the report’s data and the reality that the data is refreshed at a slower pace.

The person asking the question related that in July 2021 they noticed that URLs submitted through Google Search Console were reported with a submitted-but-not-indexed error, even though the pages didn’t have a noindex tag.

Thereafter Google would return to the website, crawl the page and index it normally.

“The problem is we get 300 errors/no index and then on subsequent crawls only five get crawled before they re-crawl so many more.

So, given that that they are noindexed and granted if things can’t render or they can’t find the page, they’re directed to our page not found, which does have a no-index.

And so I know somehow they’re getting directed there.

Is this just a memory issue or since they’re subsequently crawled fine, is it just a…”

John Mueller answered:

“It’s hard to say without looking at the pages.

So I would really try to double-check if this was a problem then and is not a problem anymore or if it’s still something that kind of intermittently happens.
Because if it doesn’t matter, if it doesn’t kind of take place now anymore then like whatever…”

The person asking the question responded by insisting that it still takes place and that it continues to be an ongoing problem.

John Mueller responded that his hunch was that something might be going wrong with rendering.

“And if that’s something that still takes place, I would try to figure out what might be causing that.

And it might be that when you test the page in Search Console, nine times out of ten it works well. But kind of that one time out of ten when it doesn’t work well and redirects to the error page or we think it redirects to the error page.

That’s kind of the case I would try to drill down into and try to figure out is it that there are too many requests to render this page or there’s something complicated with the JavaScript that sometimes takes too long and sometimes works well and then try to narrow things down from that point of view.”

Mueller next explained how crawling and rendering happen on Google’s side.

He made reference to a “Chrome-type” browser, which might be a reference to Google’s headless Chrome bot, essentially a Chrome browser that is missing the front-end user interface.

“What happens on our side is we crawl the HTML page and then we try to process the HTML page in kind of the Chromium kind of Chrome-type browser.

And for that we try to pull in all of the resources that are mentioned on there.

So if you go to the Developer Console in Chrome and you look at the network section, it shows you a waterfall diagram of everything that it loads to render the page.

And if there are lots of things that need to be loaded, then it can happen that things time out and then we might run into that error situation.”

Mueller next suggested reducing the number of resource requests made for JavaScript and CSS files, combining them where possible, and minimizing images, which is always a good practice.
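A quick way to gauge how many requests a page triggers is to count the external scripts, stylesheets, and images referenced in its HTML. Here is a minimal sketch using only Python’s standard library; the sample markup is hypothetical, and in practice you would feed in the fetched page source:

```python
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    """Counts external resources a browser would fetch to render a page."""
    def __init__(self):
        super().__init__()
        self.counts = {"script": 0, "stylesheet": 0, "image": 0}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and attrs.get("src"):
            self.counts["script"] += 1
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.counts["stylesheet"] += 1
        elif tag == "img" and attrs.get("src"):
            self.counts["image"] += 1

# Hypothetical sample markup; inline scripts cost no extra request.
sample = """
<html><head>
<link rel="stylesheet" href="/a.css"><link rel="stylesheet" href="/b.css">
<script src="/app.js"></script>
</head><body>
<img src="/hero.png"><img src="/logo.png"><img src="/footer.png">
<script>inlineOnly();</script>
</body></html>
"""
counter = ResourceCounter()
counter.feed(sample)
print(counter.counts)  # {'script': 1, 'stylesheet': 2, 'image': 3}
```

Each count here corresponds to one entry in the DevTools network waterfall Mueller described, so lowering these numbers directly shortens the waterfall.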

Mueller’s suggestion relates to Rendering SEO, discussed by Google’s Martin Splitt, in which the technical aspects of how a web page is downloaded and rendered in a browser are optimized for fast and efficient performance.

Some Crawl Errors Are Server Related

Mueller’s answer was not entirely relevant to this specific situation, because the problem was one of data-freshness expectations, not indexing.

However, his advice is still accurate for the many cases where a server-related issue causes resource-serving timeouts that block the proper rendering of a web page.

This can happen overnight and in the early morning hours, when rogue bots swarm a website and slow it down.

A site that doesn’t have optimized resources, particularly one on a shared server, can experience dramatic slowdowns where the server begins showing 500 error response codes.
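One way to spot this kind of slowdown after the fact is to scan the server’s access log for spikes in 5xx responses by hour of day. A minimal sketch in Python, assuming common-log-format lines (the sample entries below are made up):

```python
import re
from collections import Counter

# Captures the day, the hour, and the status code from a common-log-format line.
LOG_RE = re.compile(r'\[(\d{2})/\w{3}/\d{4}:(\d{2}):\d{2}:\d{2}[^\]]*\] "[^"]*" (\d{3})')

def server_errors_by_hour(lines):
    """Counts 5xx responses per hour of day across the given log lines."""
    errors = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m and m.group(3).startswith("5"):
            errors[m.group(2)] += 1  # group(2) is the hour, e.g. "03"
    return errors

# Hypothetical log lines; in practice, read from the web server's access log.
sample_log = [
    '1.2.3.4 - - [10/Oct/2021:03:12:01 +0000] "GET /page HTTP/1.1" 500 123',
    '1.2.3.4 - - [10/Oct/2021:03:12:05 +0000] "GET /page HTTP/1.1" 503 123',
    '5.6.7.8 - - [10/Oct/2021:14:02:11 +0000] "GET /page HTTP/1.1" 200 456',
]
print(server_errors_by_hour(sample_log))  # Counter({'03': 2})
```

A cluster of 5xx counts in the overnight hours would be consistent with the bot-swarm scenario described above.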

Speaking from experience in maintaining a dedicated server, misconfiguration in Nginx, Apache or PHP at the server level or a failing hard drive can also contribute to the website failing to show requested pages to Google or to website visitors.

Some of these issues can creep in unnoticed when various software packages are updated with less-than-optimal settings, requiring troubleshooting to identify the errors.

Fortunately, server management software like Plesk has diagnostic and repair tools that can help fix these problems when they arise.

This time the problem was that Google hadn’t adequately set the correct expectation for the Index Coverage Report.

But next time it could be a server or rendering issue.

Citations

Google Search Central Tweets Explanation of Index Coverage Report

Google Index Coverage Report and Reported Indexing Errors

Watch at the 6:00 Minute Mark



AI

Exploring the Evolution of Language Translation: A Comparative Analysis of AI Chatbots and Google Translate


According to an article on PCMag, while Google Translate makes translating sentences into over 100 languages easy, regular users acknowledge that there’s still room for improvement.

In theory, large language models (LLMs) such as ChatGPT are expected to bring about a new era in language translation. These models consume vast amounts of text-based training data and real-time feedback from users worldwide, enabling them to quickly learn to generate coherent, human-like sentences in a wide range of languages.

However, despite the anticipation that ChatGPT would revolutionize translation, previous experiences have shown that such expectations are often inaccurate, posing challenges for translation accuracy. To put these claims to the test, PCMag conducted a blind test, asking fluent speakers of eight non-English languages to evaluate the translation results from various AI services.

The test compared ChatGPT (both the free and paid versions) to Google Translate, as well as to other competing chatbots such as Microsoft Copilot and Google Gemini. The evaluation involved comparing the translation quality for two test paragraphs across different languages, including Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic.

In the first test conducted in June 2023, participants consistently favored AI chatbots over Google Translate. ChatGPT, Google Bard (now Gemini), and Microsoft Bing outperformed Google Translate, with ChatGPT receiving the highest praise. ChatGPT demonstrated superior performance in converting colloquialisms, while Google Translate often provided literal translations that lacked cultural nuance.

For instance, ChatGPT accurately translated colloquial expressions like “blow off steam,” whereas Google Translate produced more literal translations that failed to resonate across cultures. Participants appreciated ChatGPT’s ability to maintain consistent levels of formality and its consideration of gender options in translations.

The success of AI chatbots like ChatGPT can be attributed to reinforcement learning with human feedback (RLHF), which allows these models to learn from human preferences and produce culturally appropriate translations, particularly for non-native speakers. However, it’s essential to note that while AI chatbots outperformed Google Translate, they still had limitations and occasional inaccuracies.

In a subsequent test, PCMag evaluated different versions of ChatGPT, including the free and paid versions, as well as language-specific AI agents from OpenAI’s GPTStore. The paid version of ChatGPT, known as ChatGPT Plus, consistently delivered the best translations across various languages. However, Google Translate also showed improvement, performing surprisingly well compared to previous tests.

Overall, while ChatGPT Plus emerged as the preferred choice for translation, Google Translate demonstrated notable improvement, challenging the notion that AI chatbots are always superior to traditional translation tools.


Source: https://www.pcmag.com/articles/google-translate-vs-chatgpt-which-is-the-best-language-translator


GOOGLE

Google Implements Stricter Guidelines for Mass Email Senders to Gmail Users


Beginning in April, Gmail senders bombarding users with unwanted mass emails will encounter a surge in message rejections unless they comply with the freshly minted Gmail email sender protocols, Google cautions.

Fresh Guidelines for Dispatching Mass Emails to Gmail Inboxes

In a piece featured on Forbes, it was highlighted that new rules are being ushered in to shield Gmail users from the deluge of unsolicited mass emails. Initially, reports surfaced about certain marketers receiving error notifications pertaining to messages dispatched to Gmail accounts. Nonetheless, a Google representative clarified that these specific errors, denoted as 550-5.7.56, weren’t new but rather stemmed from existing authentication requirements.

Moreover, Google has verified that commencing from April, they will initiate “the rejection of a portion of non-compliant email traffic, progressively escalating the rejection rate over time.” Google elaborates that, for instance, if 75% of the traffic adheres to the new email sender authentication criteria, then a portion of the remaining non-conforming 25% will face rejection. The exact proportion remains undisclosed. Google does assert that the implementation of the new regulations will be executed in a “step-by-step fashion.”

This cautious and methodical strategy seems to have already kicked off, with transient errors affecting a “fraction of their non-compliant email traffic” coming into play this month. Additionally, Google stipulates that bulk senders will be granted until June 1 to integrate “one-click unsubscribe” in all commercial or promotional correspondence.

Exclusively Personal Gmail Accounts Subject to Rejection

These alterations exclusively affect bulk emails dispatched to personal Gmail accounts. Entities sending out mass emails, specifically those transmitting a minimum of 5,000 messages daily to Gmail accounts, will be mandated to authenticate outgoing emails and “refrain from dispatching unsolicited emails.” The 5,000-message threshold is tallied based on emails transmitted from the same principal domain, irrespective of the use of subdomains. Once the threshold is met, the domain is categorized as a permanent bulk sender.
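The threshold logic described above (5,000 daily messages counted per principal domain, subdomains included) could be sketched as follows. This is a simplified illustration: the `base_domain` helper naively takes the last two labels, whereas determining a real registrable domain requires the Public Suffix List.

```python
from collections import Counter

def base_domain(host):
    """Naively collapses subdomains to the last two labels.
    Real registrable-domain logic needs the Public Suffix List
    (this helper would mishandle e.g. 'example.co.uk')."""
    return ".".join(host.lower().split(".")[-2:])

def bulk_senders(daily_sends, threshold=5000):
    """Flags principal domains whose combined daily Gmail volume meets the threshold."""
    totals = Counter()
    for sending_host, count in daily_sends.items():
        totals[base_domain(sending_host)] += count
    return {domain for domain, n in totals.items() if n >= threshold}

# Hypothetical volumes: subdomains count toward the same principal domain.
sends = {"mail.example.com": 3000, "news.example.com": 2500, "other.net": 100}
print(bulk_senders(sends))  # {'example.com'} - 5,500 combined across subdomains
```

The key point the sketch illustrates is that splitting traffic across subdomains does not keep a sender under the threshold.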

These guidelines do not extend to communications directed at Google Workspace accounts, although all senders, including those utilizing Google Workspace, are required to adhere to the updated criteria.

Augmented Security and Enhanced Oversight for Gmail Users

A Google spokesperson emphasized that these requisites are being rolled out to “fortify sender-side security and augment user control over inbox contents even further.” For the recipient, this translates to heightened trust in the authenticity of the email sender, thus mitigating the risk of falling prey to phishing attempts, a tactic frequently exploited by malevolent entities capitalizing on authentication vulnerabilities. “If anything,” the spokesperson concludes, “meeting these stipulations should facilitate senders in reaching their intended recipients more efficiently, with reduced risks of spoofing and hijacking by malicious actors.”


GOOGLE

Google’s Next-Gen AI Chatbot, Gemini, Faces Delays: What to Expect When It Finally Launches


In an unexpected turn of events, Google has chosen to postpone the much-anticipated debut of its revolutionary generative AI model, Gemini. Initially poised to make waves this week, the unveiling has now been rescheduled for early next year, specifically in January.

Gemini is set to redefine the landscape of conversational AI, representing Google’s most potent endeavor in this domain to date. Positioned as a multimodal AI chatbot, Gemini boasts the capability to process diverse data types. This includes a unique proficiency in comprehending and generating text, images, and various content formats, even going so far as to create an entire website based on a combination of sketches and written descriptions.

Originally, Google had planned an elaborate series of launch events spanning California, New York, and Washington. Regrettably, these events have been canceled due to concerns about Gemini’s responsiveness to non-English prompts. According to anonymous sources cited by The Information, Google’s Chief Executive, Sundar Pichai, personally decided to postpone the launch, acknowledging the importance of global support as a key feature of Gemini’s capabilities.

Gemini is expected to surpass the renowned ChatGPT, powered by OpenAI’s GPT-4 model, and preliminary private tests have shown promising results. Fueled by significantly enhanced computing power, Gemini has outperformed GPT-4, particularly in FLOPS (Floating Point Operations Per Second), owing to its access to a multitude of high-end AI accelerators through the Google Cloud platform.

SemiAnalysis, a research firm that publishes on Substack, expressed in an August blog post that Gemini appears poised to “blow OpenAI’s model out of the water.” The extensive compute power at Google’s disposal has evidently contributed to Gemini’s superior performance.

Google’s Vice President and Manager of Bard and Google Assistant, Sissie Hsiao, offered insights into Gemini’s capabilities, citing examples like generating novel images in response to specific requests, such as illustrating the steps to ice a three-layer cake.

While Google’s current generative AI offering, Bard, has showcased noteworthy accomplishments, it has struggled to achieve the same level of consumer awareness as ChatGPT. Gemini, with its unparalleled capabilities, is expected to be a game-changer, demonstrating impressive multimodal functionalities never seen before.

During the initial announcement at Google’s I/O developer conference in May, the company emphasized Gemini’s multimodal prowess and its developer-friendly nature. An application programming interface (API) is under development, allowing developers to seamlessly integrate Gemini into third-party applications.

As the world awaits the delayed unveiling of Gemini, the stakes are high, with Google aiming to revolutionize the AI landscape and solidify its position as a leader in generative artificial intelligence. The postponed launch only adds to the anticipation surrounding Gemini’s eventual debut in the coming year.

