Surprising Facts About E-A-T & SEO

Want to know what Google wants?

Google recommends that publishers review its Quality Raters Guidelines.

SEO professionals have been doing that for years, looking for any clues to unlock some secrets of Google’s algorithm.

But here’s why much of what you’ve read about optimizing for E-A-T may need an update.

What Is E-A-T?

E-A-T is an acronym for Expertise, Authoritativeness, and Trustworthiness. It is a concept created by Google for third-party quality raters as a standardized method for judging search results.

Google also recommends it to publishers as a way to measure the quality of their content.

Google created E-A-T strictly as a way to measure content quality, particularly for its third-party quality raters.

According to Google’s Search Quality Guidelines:

Unless your rating task indicates otherwise, your ratings should be based on the instructions and examples given in these guidelines.

Ratings should not be based on your personal opinions, preferences, religious beliefs, or political views.

Personal opinions would make the ratings submitted to Google unreliable. That’s why the concept of E-A-T was developed.

The search quality raters guidelines and the concept of E-A-T reflect the kinds of sites Google’s algorithm attempts to rank.

E-A-T As Ranking Factors – Is It Possible?

There are no actual patents or research papers that establish the existence of those three concepts (expertise, authoritativeness, trustworthiness) as ranking factors.

Google has admitted that there are signals indicating a site is trustworthy, but it has never said what those signals are.

It must be repeated that the Quality Raters Guidelines do not provide hints for what those signals may be.

If the guidelines instruct the rater to review a page for an author, that does not mean that Google uses an “author signal” in the algorithm.

The guidelines ask the rater to do that in order to be a better judge of website authority. That’s all.

The concepts represented by E-A-T can, however, be expressed through real factors such as links.

Expertise, authoritativeness, trustworthiness are not actual ranking factors or ranking metrics in use by Google.

How Does Google Know if Content Is Authoritative?

Real factors such as links have traditionally been used to establish expertise and authority, along with an understanding of what users want to see.

If a webpage receives many links, particularly from webpages about similar topics, then the webpage receiving the links can be understood as being authoritative for that topic.
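
To make that idea concrete, here is a toy sketch of how links from topically related pages could be counted as a rough signal of topical authority. It is purely illustrative: the function name, the scoring, and the example data are invented and do not represent any actual Google metric.

```python
# Hypothetical illustration only: counting how many inbound links come from
# topically related pages. This is not a real Google metric or algorithm.
def topical_link_score(inbound_links, page_topics, topic):
    """inbound_links: IDs of pages linking to the target page.
    page_topics: maps page ID -> set of topics that page covers."""
    related = [page for page in inbound_links if topic in page_topics.get(page, set())]
    return len(related)

page_topics = {
    "vet-blog": {"pet health"},
    "recipe-site": {"cooking"},
    "dog-forum": {"pet health", "dogs"},
}
inbound = ["vet-blog", "recipe-site", "dog-forum"]
print(topical_link_score(inbound, page_topics, "pet health"))  # 2 of 3 links are on-topic
```

The point of the sketch is simply that links from pages on the same topic say more about a page’s authority for that topic than links from unrelated pages; Google’s actual signals remain undisclosed.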

There is no actual metric called “authority” that Google uses. Authority is simply a quality of a webpage that Google can guess at based on (undisclosed) signals.

Links are the best-known signal that can indicate a webpage is authoritative.

But they are not the only one. In April 2021, Google disclosed that AI is used to identify whether or not content is authoritative.

Google Uses AI to Understand Expertise and Authority

Did you know Google relies on AI technologies to better understand content?

Google is using AI to weed out low-quality content related to shopping and product reviews.

“…we wanted to make sure that you’re getting the most useful information for your next purchase by rewarding content that has more in-depth research and useful information.”

According to that statement, Google is using AI to understand whether web content is superficial or whether it has the contours and features of “in-depth research” and other qualities typical of sites that are useful to users.

Google Research & E-A-T

Ultimately, Google’s search results pages are about showing users what they expect to see.

Many of Google’s patents and research papers that describe link analysis, content analysis, and natural language processing all revolve around understanding what users want and understanding what webpages are about.

  • Links can communicate which pages are expert.
  • AI helps Google understand which webpages are authoritative.
  • Content analysis by AI, together with links, communicates which webpages are trustworthy.
  • On-page signals may indicate expertise, authoritativeness, and trustworthiness… as well as their opposites.

How the E-A-T Concept Translates to Better Ranking

E-A-T is an abstract idea created to teach the quality raters how to judge a site.

The search quality guidelines do not provide clues to ranking factors.

The concepts of expertise, authoritativeness, and trustworthiness need to be defined in order to be understood.

Once E-A-T is understood, publishers will have a firm idea of how to improve and optimize content.

Expertise

Qualities of Expertise

Expertise is the quality of competence and technical skill. Expertise demonstrates a mastery of the topic, depth of knowledge, and hands-on experience.

As an example, when a webpage is about curing an ailment, the topic must generally be approached from a scientific point of view in order to qualify as expert content.

An expert page teaches, reveals, and provides knowledge. An expert webpage will demonstrate qualities of depth of knowledge that can be signaled by the subtopics it raises or maybe by the citations it makes to other work.

Depth of Knowledge Is Not Comprehensiveness

Do not confuse depth of knowledge with being comprehensive. Depth of knowledge means that a topic is deeply understood.

Comprehensiveness is concerned with how broad the scope of the content is.

When evaluating a webpage for expertise, it may be helpful to ask: How does this webpage signal that it communicates a depth of knowledge?

Content is expert if it contains the specific kinds of information expected for a given topic. For example, it is almost required for an article about headaches to mention aspirin.

Understand Depth of Knowledge in Order to Understand Expertise

Adding “expertise” to an article is more than the laughably simplistic practice of adding an author box with the author’s academic credentials.

Expertise in webpage content is the expression of the depth of knowledge and experience.

One simply cannot add an author biography and expect the article to magically become expert.

The first step toward adding expertise to webpages is understanding what depth of knowledge actually is.

What Is Expertise?

Expertise has been studied in a number of disciplines. Some researchers state that “expertise results from practice and experience, built on a foundation of talent, or innate ability.”

The educational field has a system for measuring students’ depth of knowledge called Webb’s Depth of Knowledge, which defines four levels.

The beginner level starts with the ability to remember facts. The fourth level consists of the ability to bring together facts and ideas from different areas and stitch them together into a coherent thesis.

A scientific research organization called Global Cognition states that there are two kinds of expertise. One kind of expertise (Routine Expertise) is the ability to solve problems using similar routines and solutions over and over.

The second kind of expertise is called Adaptive Expertise, which is characterized by the ability to formulate solutions for problems that are changing or have not been seen before.

In both cases the results are:

“…the thinking and qualities that lead to consistently superior performance.”

Expertise is generally defined as the result of:

  • Practice.
  • Feedback.
  • Analysis.

What Does It Mean to Have Content With Expertise?

Given what is known about expertise and depth of knowledge, it can be said that expert content contains evidence that the author has physically handled the subject of the article, has actual experience in the topic, and provides analysis, measurements, and comparisons.

Example of Expertise in Content

I wrote an article about structured data. None of the top-ranked articles on the topic mentioned that structured data is a markup language (like HTML is).

Google’s machine learning (and whatever else they use to understand a topic) probably knew that and may have responded favorably to that expert observation.

It’s not that my observation was good because it was different than the top-ranked pages. It’s that my observation demonstrated a deep understanding of what Schema.org structured data is.
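
As an aside, the “markup language” point can be shown concretely. The snippet below is a minimal, hypothetical example of Schema.org structured data expressed as JSON-LD, the format Google documents for structured data; all of the values are placeholders.

```python
import json

# Illustrative only: structured data is markup. This builds a minimal
# Schema.org Article object as JSON-LD; all values are hypothetical.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Surprising Facts About E-A-T & SEO",
    "author": {"@type": "Person", "name": "Example Author"},  # placeholder author
}

# On a real page this JSON would be embedded in a
# <script type="application/ld+json"> tag in the HTML.
print(json.dumps(article, indent=2))
```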

Authoritativeness

Being authoritative is not the same thing as being comprehensive. This is a common mistake that publishers make when attempting to create authoritative content.

The Difference Between Authoritativeness and Comprehensiveness

  • Authoritativeness has to do with being reliable, trustworthy, and accurate.
  • Comprehensiveness has to do with the quality of having a wide scope.

Accuracy (authoritativeness) and a wide scope (comprehensiveness) are not the same thing.

Elements of Authoritative Content

So when reviewing content for authoritativeness, go back to the definition of authoritativeness and review the content for qualities such as accuracy, soundness of ideas, and validity.

Can You Optimize for Authoritativeness?

What is authority? The closest thing to a confirmed metric for authority is the links that point to your site. That’s pretty much all that is known for certain.

But authority and authoritativeness are just concepts and are not actual ranking factors or metrics that Google uses. There is no “authority” metric at Google unless you call PageRank an authority metric.

So if you talk about “optimizing for authority,” in a way you’re really talking about how to optimize for PageRank, which is kind of silly. One does not optimize for PageRank. PageRank is something that is accumulated by a webpage.
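
For readers unfamiliar with the concept, here is a textbook-style toy version of the PageRank iteration. It is a sketch of the classic published algorithm, not Google’s current implementation, and the tiny link graph is made up; it simply shows that a page’s score is accumulated from the pages linking to it rather than set directly.

```python
# A minimal, illustrative PageRank sketch (not Google's implementation).
# "graph" maps each page to the pages it links to.
def pagerank(graph, damping=0.85, iterations=50):
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in graph.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Made-up three-page site: scores emerge from the link structure.
graph = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
print(pagerank(graph))
```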

Related: The Three Pillars of SEO: Authority, Relevance, and Trust

Trustworthiness

People will link to your page, talk about your site on social media, and cite a wide range of pages from your site if your webpages satisfy users on a consistent basis.

That kind of user satisfaction on a wide scale can cause individuals to regard your site as a trustworthy source of information, services, or products.

It is generally understood that Google does not use social signals for ranking purposes. If Google uses them for anything, it isn’t publicly known.

But social signals can be the smoke that points to a fire: an indication that you are doing something right.

Optimizing for Trustworthiness

Googlers have made references to the trustworthiness of a website. Research papers and patents have made references to trustworthiness.

Interesting research into trustworthiness relates to link analysis (Read: Link Distance Ranking Algorithms for more information).

Another line of research is Knowledge-based Trust. But Bill Slawski, an expert on Google patents, said it’s unlikely that Google uses it.

A specific metric in which a site accumulates “trust points” to indicate trustworthiness isn’t something that Google has researched.

Link distance ranking is the closest thing that Google might be using that approximates trust, but there is no actual trust score. Link distance ranking can identify spammy sites as well as quality sites.
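
The general idea behind link distance ranking can be sketched as a breadth-first search that measures how many link hops separate a page from a set of trusted seed sites. The sketch below is only an illustration of that concept: the seed list and graph are invented, and as noted above there is no confirmed trust score at Google.

```python
from collections import deque

# Rough sketch of the idea behind link-distance ranking: pages fewer link
# "hops" away from trusted seed sites are treated as more trustworthy.
# The seeds and graph here are made up for illustration.
def link_distance(graph, seeds):
    dist = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in dist:
                dist[target] = dist[page] + 1
                queue.append(target)
    return dist  # pages absent from the result are unreachable from any seed

graph = {
    "trusted-news": ["local-blog"],
    "local-blog": ["small-shop"],
    "spam-network": ["small-shop"],
}
print(link_distance(graph, ["trusted-news"]))
# {'trusted-news': 0, 'local-blog': 1, 'small-shop': 2}
```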

Aside from being careful about where you get links (which you should be doing anyway!), there’s no way to “optimize” for trustworthiness.

You just have to be a reliable and trustworthy source of information. If people notice then Google might also notice, perhaps by the way other sites link to your webpages.

E-A-T Is Not an Algorithm

In October 2019, at Pubcon, Gary Illyes confirmed that E-A-T is not an algorithm.

Gary Illyes was asked about E-A-T point-blank, and everything he said matches up with what Googlers have been saying about the Quality Raters Guidelines and E-A-T.

Optimizing for E-A-T

You can build expertise, authoritativeness, and trustworthiness using all of the above approaches that focus on excellence.

Expertise, authoritativeness, and trustworthiness in content are more than just descriptions and perceptions of your site. They are qualities that your content can contain.

So it makes sense to think hard about what those words expertise, authoritativeness, and trustworthiness mean and apply your insights to every webpage that you publish.

Featured image: Paulo Bobita/SearchEngineJournal

Source: Searchenginejournal.com

Exploring the Evolution of Language Translation: A Comparative Analysis of AI Chatbots and Google Translate

According to an article on PCMag, while Google Translate makes translating sentences into over 100 languages easy, regular users acknowledge that there’s still room for improvement.

In theory, large language models (LLMs) such as ChatGPT are expected to bring about a new era in language translation. These models consume vast amounts of text-based training data and real-time feedback from users worldwide, enabling them to quickly learn to generate coherent, human-like sentences in a wide range of languages.

However, despite the anticipation that ChatGPT would revolutionize translation, previous experiences have shown that such expectations are often inaccurate, posing challenges for translation accuracy. To put these claims to the test, PCMag conducted a blind test, asking fluent speakers of eight non-English languages to evaluate the translation results from various AI services.

The test compared ChatGPT (both the free and paid versions) to Google Translate, as well as to other competing chatbots such as Microsoft Copilot and Google Gemini. The evaluation involved comparing the translation quality for two test paragraphs across different languages, including Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic.

In the first test conducted in June 2023, participants consistently favored AI chatbots over Google Translate. ChatGPT, Google Bard (now Gemini), and Microsoft Bing outperformed Google Translate, with ChatGPT receiving the highest praise. ChatGPT demonstrated superior performance in converting colloquialisms, while Google Translate often provided literal translations that lacked cultural nuance.

For instance, ChatGPT accurately translated colloquial expressions like “blow off steam,” whereas Google Translate produced more literal translations that failed to resonate across cultures. Participants appreciated ChatGPT’s ability to maintain consistent levels of formality and its consideration of gender options in translations.

The success of AI chatbots like ChatGPT can be attributed to reinforcement learning with human feedback (RLHF), which allows these models to learn from human preferences and produce culturally appropriate translations, particularly for non-native speakers. However, it’s essential to note that while AI chatbots outperformed Google Translate, they still had limitations and occasional inaccuracies.

In a subsequent test, PCMag evaluated different versions of ChatGPT, including the free and paid versions, as well as language-specific AI agents from OpenAI’s GPTStore. The paid version of ChatGPT, known as ChatGPT Plus, consistently delivered the best translations across various languages. However, Google Translate also showed improvement, performing surprisingly well compared to previous tests.

Overall, while ChatGPT Plus emerged as the preferred choice for translation, Google Translate demonstrated notable improvement, challenging the notion that AI chatbots are always superior to traditional translation tools.


Source: https://www.pcmag.com/articles/google-translate-vs-chatgpt-which-is-the-best-language-translator

Google Implements Stricter Guidelines for Mass Email Senders to Gmail Users

Beginning in April, senders of unwanted mass emails to Gmail users will see a growing share of their messages rejected unless they comply with Google’s new email sender guidelines, the company cautions.

New Guidelines for Sending Mass Emails to Gmail Inboxes

As reported by Forbes, new rules are being introduced to shield Gmail users from unsolicited mass emails. Reports initially surfaced of some marketers receiving error notifications for messages sent to Gmail accounts. However, a Google representative clarified that these errors, labeled 550-5.7.56, weren’t new but stemmed from existing authentication requirements.

Google has also confirmed that, beginning in April, it will start “the rejection of a portion of non-compliant email traffic, progressively escalating the rejection rate over time.” For example, if 75% of a sender’s traffic meets the new email authentication criteria, then a portion of the remaining non-compliant 25% will be rejected. The exact proportion remains undisclosed, but Google says the new rules will be enforced in a “step-by-step fashion.”

This gradual rollout appears to have already begun, with temporary errors affecting a “fraction of their non-compliant email traffic” this month. Additionally, Google says bulk senders have until June 1 to implement “one-click unsubscribe” in all commercial or promotional messages.
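
For context, one-click unsubscribe is commonly implemented with the List-Unsubscribe and List-Unsubscribe-Post headers described in RFC 8058. The snippet below is a minimal, hypothetical sketch of setting those headers in Python; the addresses and URL are placeholders, and this is not Google’s official guidance or sample code.

```python
from email.message import EmailMessage

# Minimal, hypothetical sketch of RFC 8058 one-click unsubscribe headers.
# The addresses and URL are placeholders, not Google's sample code.
msg = EmailMessage()
msg["From"] = "newsletter@example.com"
msg["To"] = "subscriber@gmail.com"
msg["Subject"] = "March newsletter"
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe?user=123>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Newsletter body goes here.")

print(msg)  # shows the headers a compliant bulk message would carry
```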

Only Personal Gmail Accounts Are Affected

These changes apply only to bulk emails sent to personal Gmail accounts. Bulk senders, defined as those sending at least 5,000 messages per day to Gmail accounts, will be required to authenticate outgoing email and “refrain from dispatching unsolicited emails.” The 5,000-message threshold is counted across emails sent from the same primary domain, regardless of any subdomains used. Once the threshold is reached, the domain is classified as a permanent bulk sender.

The guidelines do not apply to messages sent to Google Workspace accounts, although all senders, including those using Google Workspace, must meet the updated requirements.

Stronger Security and More Control for Gmail Users

A Google spokesperson emphasized that the requirements are being rolled out to “fortify sender-side security and augment user control over inbox contents even further.” For recipients, this means greater confidence that an email’s sender is authentic, reducing the risk of phishing attacks, which frequently exploit authentication gaps. “If anything,” the spokesperson concludes, “meeting these stipulations should facilitate senders in reaching their intended recipients more efficiently, with reduced risks of spoofing and hijacking by malicious actors.”

Google’s Next-Gen AI Chatbot, Gemini, Faces Delays: What to Expect When It Finally Launches

In an unexpected turn of events, Google has chosen to postpone the much-anticipated debut of its revolutionary generative AI model, Gemini. Initially poised to make waves this week, the unveiling has now been rescheduled for early next year, specifically in January.

Gemini is set to redefine the landscape of conversational AI, representing Google’s most potent endeavor in this domain to date. Positioned as a multimodal AI chatbot, Gemini boasts the capability to process diverse data types. This includes a unique proficiency in comprehending and generating text, images, and various content formats, even going so far as to create an entire website based on a combination of sketches and written descriptions.

Originally, Google had planned an elaborate series of launch events spanning California, New York, and Washington. Regrettably, these events have been canceled due to concerns about Gemini’s responsiveness to non-English prompts. According to anonymous sources cited by The Information, Google’s Chief Executive, Sundar Pichai, personally decided to postpone the launch, acknowledging the importance of global support as a key feature of Gemini’s capabilities.

Gemini is expected to surpass the renowned ChatGPT, powered by OpenAI’s GPT-4 model, and preliminary private tests have reportedly shown promising results. Google’s advantage comes from significantly greater computing power, measured in FLOPS (Floating Point Operations Per Second), thanks to its access to a multitude of high-end AI accelerators through the Google Cloud platform.

SemiAnalysis, a research firm that publishes on Substack, wrote in an August blog post that Gemini appears poised to “blow OpenAI’s model out of the water.” The extensive compute power at Google’s disposal has evidently contributed to Gemini’s superior performance.

Google’s Vice President and Manager of Bard and Google Assistant, Sissie Hsiao, offered insights into Gemini’s capabilities, citing examples like generating novel images in response to specific requests, such as illustrating the steps to ice a three-layer cake.

While Google’s current generative AI offering, Bard, has showcased noteworthy accomplishments, it has struggled to achieve the same level of consumer awareness as ChatGPT. Gemini, with its unparalleled capabilities, is expected to be a game-changer, demonstrating impressive multimodal functionalities never seen before.

During the initial announcement at Google’s I/O developer conference in May, the company emphasized Gemini’s multimodal prowess and its developer-friendly nature. An application programming interface (API) is under development, allowing developers to seamlessly integrate Gemini into third-party applications.

As the world awaits the delayed unveiling of Gemini, the stakes are high, with Google aiming to revolutionize the AI landscape and solidify its position as a leader in generative artificial intelligence. The postponed launch only adds to the anticipation surrounding Gemini’s eventual debut in the coming year.
