Google FAQ Provides Core Web Vitals Insights

Core Web Vitals (CWV) is a set of metrics developed by Google to help website publishers improve page performance for the benefit of site visitors.

Webpage performance matters to publishers because fast pages generate more leads, sales, and advertising revenue.

Google recently published a document that provides insights into how CWVs work and their value for ranking purposes. This article discusses it.
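For context, the three Core Web Vitals are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). As a minimal sketch of how a site might collect these metrics from real visitors, the example below assumes the open-source web-vitals package (with its v3-style onLCP/onFID/onCLS API) and a hypothetical /analytics endpoint:

```typescript
// Minimal field-measurement sketch. Assumes the open-source 'web-vitals' package
// (v3-style API) and a hypothetical /analytics reporting endpoint.
import { onLCP, onFID, onCLS, type Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  // sendBeacon keeps working while the page unloads, unlike a plain fetch.
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  navigator.sendBeacon('/analytics', body);
}

onLCP(sendToAnalytics); // Largest Contentful Paint: main-content render time
onFID(sendToAnalytics); // First Input Delay: delay before the first interaction is handled
onCLS(sendToAnalytics); // Cumulative Layout Shift: how much visible content shifts around
```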

Core Web Vitals Intended to Encourage a Healthy Web Experience

Page performance is important to site visitors because it reduces the time it takes for them to get what they want.

Beginning in mid-June 2021, Core Web Vitals became a minor ranking factor. Some articles have overstated CWV as a critical ranking factor, but that's not accurate.

Relevance has always been the most important ranking factor, even more important than page speed.

Statements from Google's John Mueller make clear that relevance will continue to be the stronger influence.

According to Mueller:

“…relevance is still by far much more important. So just because your website is faster with regards to Core Web Vitals than some competitors doesn’t necessarily mean that come May, you will jump to position number one in the search results.”

While Core Web Vitals may not have a noticeable impact on rankings, it is still inadvisable to ignore the metrics. A poor-performing webpage puts a site at a disadvantage in other ways, such as lower earnings and possibly reduced popularity.

Popularity feeds important ranking factors like links. So improving Core Web Vitals could help rankings indirectly, in addition to the direct ranking boost given by Google's algorithm.

The goal for Core Web Vitals is to have a shared metric for all sites in order to improve user experience across the web.

“Q: Is Google recommending that all my pages hit these thresholds? What’s the benefit?

A: We recommend that websites use these three thresholds as a guidepost for optimal user experience across all pages.

Core Web Vitals thresholds are assessed at the per-page level, and you might find that some pages are above and others below these thresholds.

The immediate benefit will be a better experience for users that visit your site, but in the long-term we believe that working towards a shared set of user experience metrics and thresholds across all websites, will be critical in order to sustain a healthy web ecosystem.”

AMP Is a Fairly Reliable Way to Score Well

AMP is an acronym for Accelerated Mobile Pages. It's an HTML framework for delivering slimmed-down, fast-loading, attractive webpages to mobile devices.

AMP was originally developed by Google but is open source. AMP can accommodate ecommerce sites as well as informational sites.

There are, for example, apps for the Shopify ecommerce platform as well as plugins for WordPress sites that make it easy to add AMP functionality to a website.

Google will show preference to a website’s AMP version for the purposes of calculating a CWV score. So if a site is having a difficult time optimizing for CWV, using AMP is a fast and easy way to gain a high score.

Nevertheless, Google cautioned that factors like a slow server or poorly optimized images can still negatively impact the Core Web Vitals score.

“Q: If I built AMP pages, do they meet the recommended thresholds?

A: There is a high likelihood that AMP pages will meet the thresholds. AMP is about delivering high-quality, user-first experiences; its initial design goals are closely aligned with what Core Web Vitals measure today.

This means that sites built using AMP likely can easily meet Web Vitals thresholds.

Furthermore, AMP’s evergreen release enables site owners to get these performance improvements without having to change their codebase or invest in additional resources.

It is important to note that there are things outside of AMP’s control which can result in pages not meeting the thresholds, such as slow server response times and un-optimized images.”

First Input Delay Does Not Consider Scrolling or Bounce/Abandon

First Input Delay (FID) is a metric that measures the time it takes from when a site visitor interacts with a site to when the browser responds to that interaction.

Once a page appears to have loaded and its interactive elements look ready, a user should ideally be able to start clicking around without delay.
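As a rough sketch of what FID captures, the browser's native PerformanceObserver API exposes a 'first-input' entry; the delay is the gap between when the input happened and when the browser could start processing it. Note that scrolling does not produce a first-input entry, which is relevant to the next point:

```typescript
// Sketch: observing First Input Delay with the native PerformanceObserver API.
// 'first-input' fires for clicks, taps, and key presses; scrolling does not count,
// so a scroll-only visit reports no FID at all.
const fidObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    const fid = entry.processingStart - entry.startTime; // delay in milliseconds
    console.log(`FID: ${fid.toFixed(1)} ms (event: ${entry.name})`);
  }
});
fidObserver.observe({ type: 'first-input', buffered: true });
```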

A bounce is when a visitor lands on a page but abandons it soon after, presumably returning to the search results.

The question is about bounced sessions, but the answer addresses scrolling as well.

Google answers that bounce and abandonment are not a part of the FID metric, presumably because there was no interaction.

“Q: Can sessions that don’t report FID be considered “bounced” sessions?

A: No, FID excludes scrolls, and there are legitimate sessions with no non-scroll input. Bounce Rate and Abandonment Rate may be defined as part of your analytics suite of choice and are not considered in the design of CWV metric.”

Core Web Vitals Impacts Ranking

This section reiterates and confirms that Core Web Vitals became a ranking signal in June 2021.

“…Core Web vitals will be included in page experience signals together with existing search signals including mobile-friendliness, safe-browsing, HTTPS-security, and intrusive interstitial guidelines.”

Importance of the Core Web Vitals Ranking Signal

Ranking signals are said to have different weights, which is a reflection that some signals have more importance than others.

When a ranking signal is weighted more heavily than another, it has more influence on rankings.
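As a purely illustrative toy (the signal names and weights below are invented and are not Google's actual formula), a heavier weight simply means a signal contributes more to the final score:

```typescript
// Purely illustrative toy: not Google's algorithm; signal names and weights are invented.
interface PageSignals {
  relevance: number;      // 0..1, how well the page matches the query
  pageExperience: number; // 0..1, Core Web Vitals / page experience
}

function toyRankingScore(s: PageSignals): number {
  // A heavily weighted relevance signal dominates a lightly weighted experience signal.
  return 0.9 * s.relevance + 0.1 * s.pageExperience;
}

// A highly relevant but slow page still outscores a fast but less relevant one.
console.log(toyRankingScore({ relevance: 0.9, pageExperience: 0.2 })); // ≈ 0.83
console.log(toyRankingScore({ relevance: 0.4, pageExperience: 1.0 })); // ≈ 0.46
```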

This is an interesting section of the FAQ because it deals with how much weight the Core Web Vitals ranking signal has compared to other ranking signals.

Google appears to say that the Core Web Vitals ranking signal is weaker than other ranking signals that are directly related to satisfying a user query.

It’s almost as though there is a hierarchy of signals, with intent-related signals given more importance than user experience signals.

Here’s how Google explains it:

“Q: How does Google determine which pages are affected by the assessment of Page Experience and usage as a ranking signal?

A: Page experience is just one of many signals that are used to rank pages. Keep in mind that intent of the search query is still a very strong signal, so a page with a subpar page experience may still rank highly if it has great, relevant content.

Q: What can site owners expect to happen to their traffic if they don’t hit Core Web Vitals performance metrics?

A: It’s difficult to make any kind of general prediction. We may have more to share in the future when we formally announce the changes are coming into effect. Keep in mind that the content itself and its match to the kind of information a user is seeking remains a very strong signal as well.”

Field Data in Search Console Core Web Vitals Reporting

This next section explains possible discrepancies between what a publisher experiences in terms of download speed and what users on different devices and Internet connections might experience.

That’s why Google Search Console may report that a site scores low on Core Web Vitals despite the site being perceived as fast by the publisher.

More importantly, the Core Web Vitals metric is concerned with more than just speed.

Furthermore, the Search Console report is based on real-world data whereas Lighthouse data is based on simulated users on simulated devices and simulated internet connections.

Real-world data is called Field Data, while testing based on simulations is called Lab Data.
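Field data of this kind can also be queried directly from the Chrome UX Report (CrUX), the same dataset behind the Search Console report. The sketch below assumes a valid API key and the public CrUX records:queryRecord endpoint; treat the exact field names as something to verify against the API documentation:

```typescript
// Sketch: querying real-user ("field") Core Web Vitals from the CrUX API.
// Assumes a valid API key; verify metric and response field names against the CrUX docs.
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function fetchFieldData(origin: string, apiKey: string): Promise<void> {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin,
      formFactor: 'PHONE',
      metrics: ['largest_contentful_paint', 'first_input_delay', 'cumulative_layout_shift'],
    }),
  });
  const data = await res.json();
  // Each metric reports an aggregate over real visits, including a 75th-percentile value.
  for (const [name, metric] of Object.entries<any>(data.record.metrics)) {
    console.log(`${name}: p75 = ${metric.percentiles.p75}`);
  }
}

fetchFieldData('https://www.example.com', 'YOUR_API_KEY').catch(console.error);
```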

“Q: My page is fast. Why do I see warnings on the Search Console Core Web Vitals report?

A: Different devices, network connections, geography, and other factors may contribute to how a page loads and is experienced by a particular user. While some users, in certain conditions, can observe a good experience, this may not be indicative of other user’s experience.

Core Web Vitals look at the full body of user visits and its thresholds are assessed at the 75th percentile across the body of users. The SC CWV report helps report on this data.

…remember that Core Web Vitals is looking at more than speed. For instance, Cumulative Layout Shift describes user annoyances like content moving around…

Q: When I look at Lighthouse, I see no errors. Why do I see errors on the Search Console report?

A: The Search Console Core Web Vitals report shows how your pages are performing based on real world usage data from the CrUX report (sometimes called “field data”). Lighthouse, on the other hand, shows data based on what is called “lab data”. Lab data is useful for debugging performance issues while developing a website, as it is collected in a controlled environment. However, it may not capture real-world bottlenecks.”
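To make the 75th-percentile assessment mentioned above concrete, here is a small sketch of how a p75 could be taken from a batch of per-visit measurements (the sample values are invented for illustration):

```typescript
// Sketch: nearest-rank 75th percentile over invented per-visit LCP samples.
// Even when most visits are fast, the p75 reflects the slower quarter of visits.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const lcpSamplesMs = [1200, 1500, 1800, 2100, 2300, 2600, 4800, 5200]; // 8 visits
console.log(`p75 LCP: ${percentile(lcpSamplesMs, 75)} ms`); // 2600 ms, just over the 2.5 s "good" threshold
```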

Google's Frequently Asked Questions document about Core Web Vitals answers many more questions than those covered here.

The questions above are the ones I found particularly interesting, but do take a moment to review the rest of the FAQ, as there is much more information there.

Citation:

Core Web Vitals & Page Experience FAQs



Source: Searchenginejournal.com

Exploring the Evolution of Language Translation: A Comparative Analysis of AI Chatbots and Google Translate

According to an article on PCMag, while Google Translate makes translating sentences into over 100 languages easy, regular users acknowledge that there’s still room for improvement.

In theory, large language models (LLMs) such as ChatGPT are expected to bring about a new era in language translation. These models consume vast amounts of text-based training data and real-time feedback from users worldwide, enabling them to quickly learn to generate coherent, human-like sentences in a wide range of languages.

However, despite the anticipation that ChatGPT would revolutionize translation, past experience has shown that such expectations are often overblown, and translation accuracy remains a challenge. To put these claims to the test, PCMag conducted a blind test, asking fluent speakers of eight non-English languages to evaluate the translation results from various AI services.

The test compared ChatGPT (both the free and paid versions) to Google Translate, as well as to other competing chatbots such as Microsoft Copilot and Google Gemini. The evaluation involved comparing the translation quality for two test paragraphs across different languages, including Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic.

In the first test conducted in June 2023, participants consistently favored AI chatbots over Google Translate. ChatGPT, Google Bard (now Gemini), and Microsoft Bing outperformed Google Translate, with ChatGPT receiving the highest praise. ChatGPT demonstrated superior performance in converting colloquialisms, while Google Translate often provided literal translations that lacked cultural nuance.

For instance, ChatGPT accurately translated colloquial expressions like “blow off steam,” whereas Google Translate produced more literal translations that failed to resonate across cultures. Participants appreciated ChatGPT’s ability to maintain consistent levels of formality and its consideration of gender options in translations.
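As a rough illustration of why chatbots handle register and idiom differently than a fixed translation engine, a request to an LLM can state the desired formality directly in the prompt. The sketch below assumes the official OpenAI Node SDK, an OPENAI_API_KEY in the environment, and an illustrative model name:

```typescript
// Sketch: steering an LLM translation with explicit formality and idiom instructions.
// Assumes the official OpenAI Node SDK and OPENAI_API_KEY in the environment;
// the model name is illustrative.
import OpenAI from 'openai';

const client = new OpenAI();

async function translate(text: string): Promise<string | null> {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content:
          'Translate the user text into Spanish. Use the formal "usted" register ' +
          'and render idioms naturally rather than literally.',
      },
      { role: 'user', content: text },
    ],
  });
  return completion.choices[0].message.content;
}

translate('I just need to blow off some steam after work.').then(console.log);
```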

The success of AI chatbots like ChatGPT can be attributed to reinforcement learning with human feedback (RLHF), which allows these models to learn from human preferences and produce culturally appropriate translations, particularly for non-native speakers. However, it’s essential to note that while AI chatbots outperformed Google Translate, they still had limitations and occasional inaccuracies.

In a subsequent test, PCMag evaluated different versions of ChatGPT, including the free and paid versions, as well as language-specific AI agents from OpenAI’s GPTStore. The paid version of ChatGPT, known as ChatGPT Plus, consistently delivered the best translations across various languages. However, Google Translate also showed improvement, performing surprisingly well compared to previous tests.

Overall, while ChatGPT Plus emerged as the preferred choice for translation, Google Translate demonstrated notable improvement, challenging the notion that AI chatbots are always superior to traditional translation tools.


Source: https://www.pcmag.com/articles/google-translate-vs-chatgpt-which-is-the-best-language-translator

Google Implements Stricter Guidelines for Mass Email Senders to Gmail Users

Beginning in April, senders bombarding Gmail users with unwanted mass emails will encounter a surge in message rejections unless they comply with Google's freshly minted email sender guidelines, the company cautions.

Fresh Guidelines for Dispatching Mass Emails to Gmail Inboxes

As highlighted in a piece on Forbes, new rules are being introduced to shield Gmail users from the deluge of unsolicited mass email. Initially, reports surfaced of certain marketers receiving error notifications for messages dispatched to Gmail accounts. Nonetheless, a Google representative clarified that these specific errors, denoted 550-5.7.56, weren't new but stemmed from existing authentication requirements.

Moreover, Google has verified that commencing from April, they will initiate “the rejection of a portion of non-compliant email traffic, progressively escalating the rejection rate over time.” Google elaborates that, for instance, if 75% of the traffic adheres to the new email sender authentication criteria, then a portion of the remaining non-conforming 25% will face rejection. The exact proportion remains undisclosed. Google does assert that the implementation of the new regulations will be executed in a “step-by-step fashion.”

This cautious and methodical strategy seems to have already kicked off, with transient errors affecting a “fraction of their non-compliant email traffic” coming into play this month. Additionally, Google stipulates that bulk senders will be granted until June 1 to integrate “one-click unsubscribe” in all commercial or promotional correspondence.
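For bulk senders wondering what “one-click unsubscribe” looks like in practice, it is typically implemented with the List-Unsubscribe and List-Unsubscribe-Post headers defined in RFC 8058. The sketch below assumes the nodemailer package; the SMTP settings, domain, and unsubscribe endpoint are hypothetical placeholders:

```typescript
// Sketch: adding RFC 8058 one-click unsubscribe headers to a bulk message.
// Assumes the nodemailer package; SMTP settings, domain, and the unsubscribe
// endpoint are hypothetical placeholders.
import nodemailer from 'nodemailer';

const transporter = nodemailer.createTransport({
  host: 'smtp.example.com',
  port: 587,
  auth: { user: 'newsletter@example.com', pass: process.env.SMTP_PASS },
});

async function sendExample(): Promise<void> {
  await transporter.sendMail({
    from: 'Example News <newsletter@example.com>',
    to: 'subscriber@gmail.com',
    subject: 'March product update',
    html: '<p>Hello!</p>',
    headers: {
      // Both headers together let mailbox providers offer one-click unsubscribe.
      'List-Unsubscribe': '<https://example.com/unsubscribe?u=12345>, <mailto:unsubscribe@example.com>',
      'List-Unsubscribe-Post': 'List-Unsubscribe=One-Click',
    },
  });
}

sendExample().catch(console.error);
```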

Exclusively Personal Gmail Accounts Subject to Rejection

These changes affect only bulk emails dispatched to personal Gmail accounts. Entities sending out mass emails, specifically those transmitting at least 5,000 messages daily to Gmail accounts, will be required to authenticate outgoing email and “refrain from dispatching unsolicited emails.” The 5,000-message threshold is tallied based on emails transmitted from the same principal domain, irrespective of whether subdomains are used. Once the threshold is met, the domain is categorized as a permanent bulk sender.

These guidelines do not extend to communications directed at Google Workspace accounts, although all senders, including those utilizing Google Workspace, are required to adhere to the updated criteria.

Augmented Security and Enhanced Oversight for Gmail Users

A Google spokesperson emphasized that these requirements are being rolled out to “fortify sender-side security and augment user control over inbox contents even further.” For the recipient, this translates to heightened trust in the authenticity of the email sender, thus mitigating the risk of falling prey to phishing attempts, a tactic frequently exploited by malevolent entities capitalizing on authentication vulnerabilities. “If anything,” the spokesperson concludes, “meeting these stipulations should facilitate senders in reaching their intended recipients more efficiently, with reduced risks of spoofing and hijacking by malicious actors.”

Google’s Next-Gen AI Chatbot, Gemini, Faces Delays: What to Expect When It Finally Launches

In an unexpected turn of events, Google has chosen to postpone the much-anticipated debut of its revolutionary generative AI model, Gemini. Initially poised to make waves this week, the unveiling has now been rescheduled for early next year, specifically in January.

Gemini is set to redefine the landscape of conversational AI, representing Google’s most potent endeavor in this domain to date. Positioned as a multimodal AI chatbot, Gemini boasts the capability to process diverse data types. This includes a unique proficiency in comprehending and generating text, images, and various content formats, even going so far as to create an entire website based on a combination of sketches and written descriptions.

Originally, Google had planned an elaborate series of launch events spanning California, New York, and Washington. Regrettably, these events have been canceled due to concerns about Gemini’s responsiveness to non-English prompts. According to anonymous sources cited by The Information, Google’s Chief Executive, Sundar Pichai, personally decided to postpone the launch, acknowledging the importance of global support as a key feature of Gemini’s capabilities.

Gemini is expected to surpass the renowned ChatGPT, powered by OpenAI's GPT-4 model, and preliminary private tests have shown promising results. Fueled by significantly greater computing power, Gemini reportedly exceeds GPT-4 in the raw compute (FLOPS, or floating point operations per second) used to train it, owing to Google's access to a multitude of high-end AI accelerators through the Google Cloud platform.

SemiAnalysis, a research firm that publishes on Substack, expressed in an August blog post that Gemini appears poised to “blow OpenAI's model out of the water.” The extensive compute power at Google's disposal has evidently contributed to Gemini's superior performance.

Google’s Vice President and Manager of Bard and Google Assistant, Sissie Hsiao, offered insights into Gemini’s capabilities, citing examples like generating novel images in response to specific requests, such as illustrating the steps to ice a three-layer cake.

While Google’s current generative AI offering, Bard, has showcased noteworthy accomplishments, it has struggled to achieve the same level of consumer awareness as ChatGPT. Gemini, with its unparalleled capabilities, is expected to be a game-changer, demonstrating impressive multimodal functionalities never seen before.

During the initial announcement at Google’s I/O developer conference in May, the company emphasized Gemini’s multimodal prowess and its developer-friendly nature. An application programming interface (API) is under development, allowing developers to seamlessly integrate Gemini into third-party applications.

As the world awaits the delayed unveiling of Gemini, the stakes are high, with Google aiming to revolutionize the AI landscape and solidify its position as a leader in generative artificial intelligence. The postponed launch only adds to the anticipation surrounding Gemini’s eventual debut in the coming year.
