NEWS

Google KELM Reduces Bias and Improves Factual Accuracy via @sejournal, @martinibuster


The Google AI Blog announced KELM, an approach that could be used to reduce bias and toxic content in search (open-domain question answering). It uses a method called TEKGEN to convert Knowledge Graph facts into natural language text that can then be used to improve natural language processing models.

What is KELM?

KELM is an acronym for Knowledge-Enhanced Language Model Pre-training. Natural language processing models like BERT are typically trained on web documents and other text. KELM proposes adding trustworthy factual content (knowledge enhancement) to language model pre-training in order to improve factual accuracy and reduce bias.

TEKGEN converts knowledge graph structured data into natural language text known as the KELM Corpus

KELM Uses Trustworthy Data

The Google researchers proposed using knowledge graphs for improving factual accuracy because they’re a trusted source of facts.


“Alternate sources of information are knowledge graphs (KGs), which consist of structured data. KGs are factual in nature because the information is usually extracted from more trusted sources, and post-processing filters and human editors ensure inappropriate and incorrect content are removed.”

Is Google Using KELM?

Google has not indicated whether KELM is in use. KELM is an approach to language model pre-training that shows strong promise and was summarized on the Google AI Blog.

Bias, Factual Accuracy and Search Results

According to the research paper this approach improves factual accuracy:

“It carries the further advantages of improved factual accuracy and reduced toxicity in the resulting language model.”

This research is important because reducing bias and increasing factual accuracy could impact how sites are ranked.

But until KELM is put in use there is no way to predict what kind of impact it would have.

Google doesn’t currently fact check search results.

KELM, should it be introduced, could conceivably have an impact on sites that promote factually incorrect statements and ideas.


KELM Could Impact More than Search

The KELM Corpus has been released under a Creative Commons license (CC BY-SA 2.0).

That means, in theory, any other company (like Bing, Facebook or Twitter) can use it to improve their natural language processing pre-training as well.

It’s possible then that the influence of KELM could extend across many search and social media platforms.

Indirect Ties to MUM

Google has also indicated that the next-generation MUM algorithm will not be released until Google is satisfied that bias does not negatively impact the answers it gives.

According to the Google MUM announcement:

“Just as we’ve carefully tested the many applications of BERT launched since 2019, MUM will undergo the same process as we apply these models in Search.
Specifically, we’ll look for patterns that may indicate bias in machine learning to avoid introducing bias into our systems.”

The KELM approach specifically targets bias reduction, which could make it valuable for developing the MUM algorithm.


Machine Learning Can Generate Biased Results

The research paper states that the data that natural language models like BERT and GPT-3 use for training can result in “toxic content” and biases.

In computing there is an old acronym, GIGO, which stands for Garbage In, Garbage Out. It means the quality of the output is determined by the quality of the input.

If what you’re training the algorithm with is high quality then the result is going to be high quality.

What the researchers are proposing is to improve the quality of the data that technologies like BERT and MUM are trained on in order to remove biases.

Knowledge Graph

The knowledge graph is a collection of facts in a structured data format. Structured data is a markup format that communicates specific information in a manner easily consumed by machines.

In this case the information is facts about people, places and things.

The Google Knowledge Graph was introduced in 2012 as a way to help Google understand the relationships between things. So when someone asks about Washington, Google can discern whether the person is asking about Washington the person, the state, or the District of Columbia.
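To make the disambiguation idea concrete, here is a minimal sketch using invented (subject, relation, object) triples in the style of a knowledge graph. The entity labels and the helper function are illustrative assumptions, not Google's actual Knowledge Graph data or API.

```python
# Hypothetical knowledge graph triples (subject, relation, object).
# Three distinct entities share the surface name "Washington", but each
# has a different type, which is what lets a system tell them apart.
triples = [
    ("George Washington", "instance of", "human"),
    ("George Washington", "occupation", "president"),
    ("Washington (state)", "instance of", "U.S. state"),
    ("Washington (state)", "capital", "Olympia"),
    ("Washington, D.C.", "instance of", "capital city"),
    ("Washington, D.C.", "country", "United States"),
]

def entities_named(name, kg):
    """Return the distinct entities whose label contains the query term."""
    return sorted({subj for subj, _, _ in kg if name.lower() in subj.lower()})

print(entities_named("washington", triples))
```

The ambiguous query maps to three candidate entities; the "instance of" facts attached to each one are what a disambiguation step would use to pick the intended meaning.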


Google announced that its knowledge graph comprises data from trusted sources of facts.

Google’s 2012 announcement characterized the knowledge graph as a first step towards building the next generation of search, which we are currently enjoying.

Knowledge Graph and Factual Accuracy

Knowledge graph data is used in this research paper for improving Google’s algorithms because the information is trustworthy and reliable.

The Google research paper proposes integrating knowledge graph information into the training process to remove the biases and increase factual accuracy.

What the Google research proposes is two-fold.

  1. First, they need to convert knowledge bases into natural language text.
  2. Second, the resulting corpus, named Knowledge-Enhanced Language Model Pre-training (KELM), can then be integrated into algorithm pre-training to reduce biases.

The researchers explain the problem like this:

“Large pre-trained natural language processing (NLP) models, such as BERT, RoBERTa, GPT-3, T5 and REALM, leverage natural language corpora that are derived from the Web and fine-tuned on task specific data…

However, natural language text alone represents a limited coverage of knowledge… Furthermore, existence of non-factual information and toxic content in text can eventually cause biases in the resulting models.”


From Knowledge Graph Structured Data to Natural Language Text

The researchers state that a problem with integrating knowledge base information into the training is that the knowledge base data is in the form of structured data.

The solution is to convert the knowledge graph structured data into natural language text using a natural language task called data-to-text generation.

They explained that because data-to-text generation is challenging, they created a new “pipeline” called “Text from KG Generator (TEKGEN)” to solve the problem.

Citation: Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training (PDF) 

TEKGEN Natural Language Text Improved Factual Accuracy

TEKGEN is the technology the researchers created to convert structured data to natural language text. It is this end result, factual text, that can be used to create the KELM corpus which can then be used as part of machine learning pre-training to help prevent bias from making its way into algorithms.

The researchers noted that adding this additional knowledge graph information (corpora) into the training data resulted in improved factual accuracy.


The TEKGEN/KELM paper states:

“We further show that verbalizing a comprehensive, encyclopedic KG like Wikidata can be used to integrate structured KGs and natural language corpora.

…our approach converts the KG into natural text, allowing it to be seamlessly integrated into existing language models. It carries the further advantages of improved factual accuracy and reduced toxicity in the resulting language model.”

The KELM article published an illustration showing how one structured data node is concatenated and then converted to natural text (verbalized).

I broke up the illustration into two parts.

Below is an image representing knowledge graph structured data. The data is concatenated into text.

Screenshot of First Part of TEKGEN Conversion Process

Google KELM Concatenation

The image below represents the next step of the TEKGEN process that takes the concatenated text and converts it to a natural language text.


Screenshot of Text Turned to Natural Language Text

Google KELM Verbalized Knowledge Graph Data
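The two illustrated steps can be sketched in a few lines. The triple format and the example entity below are assumptions for illustration, and in TEKGEN the second step (verbalization) is performed by a fine-tuned T5 model, not a template as hinted in the comment.

```python
# Sketch of the concatenation step, assuming Wikidata-style triples.
# Triples sharing a subject are flattened into one text string that a
# seq2seq model (T5 in the paper) can then verbalize into fluent prose.
triples = [
    ("Zubin Mehta", "occupation", "conductor"),
    ("Zubin Mehta", "date of birth", "29 April 1936"),
]

def concatenate(subject, kg):
    """Join all (relation, object) pairs for one subject into flat text."""
    parts = [f"{rel}: {obj}" for subj, rel, obj in kg if subj == subject]
    return f"{subject}, " + ", ".join(parts)

print(concatenate("Zubin Mehta", triples))
```

The verbalization model would then turn the flat string into a natural sentence along the lines of "Zubin Mehta is a conductor who was born on 29 April 1936."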

Generating the KELM Corpus

There is another illustration that shows how the KELM natural language text used for pre-training is generated.


The TEKGEN paper shows this illustration plus description:

How TEKGEN works

  • “In Step 1, KG triples are aligned with Wikipedia text using distant supervision.
  • In Steps 2 & 3, T5 is fine-tuned sequentially, first on this corpus, followed by a small number of steps on the WebNLG corpus.
  • In Step 4, BERT is fine-tuned to generate a semantic quality score for generated sentences w.r.t. triples.
  • Steps 2, 3 & 4 together form TEKGEN.
  • To generate the KELM corpus, in Step 5, entity subgraphs are created using the relation pair alignment counts from the training corpus generated in Step 1.
    The subgraph triples are then converted into natural text using TEKGEN.”
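As a rough structural sketch of those five steps, the toy pipeline below substitutes a string template for the fine-tuned T5 verbalizer and a simple containment check for the BERT quality score. Every name here is an illustrative assumption; only the data flow mirrors the paper.

```python
# Toy sketch of the TEKGEN/KELM flow. Real TEKGEN fine-tunes T5 and
# BERT; stand-ins are used here purely to show how verbalization and
# quality filtering fit together.

def verbalize(subject, pairs):
    # Stand-in for the fine-tuned T5 verbalizer (Steps 2 & 3).
    facts = " and ".join(f"{rel} {obj}" for rel, obj in pairs)
    return f"{subject} has {facts}."

def quality_score(sentence, pairs):
    # Stand-in for the BERT semantic-quality filter (Step 4): a crude
    # check that every object value survived into the generated text.
    return all(obj in sentence for _, obj in pairs)

def generate_corpus(subgraphs):
    # Step 5: verbalize each entity subgraph and keep only sentences
    # that pass the quality filter.
    corpus = []
    for subject, pairs in subgraphs:
        sentence = verbalize(subject, pairs)
        if quality_score(sentence, pairs):
            corpus.append(sentence)
    return corpus

subgraphs = [("Olympia", [("role", "capital of Washington state")])]
print(generate_corpus(subgraphs))
```

The important design point the paper makes is that the filtering step keeps low-fidelity generations out of the pre-training corpus, which is what supports the factual-accuracy claim.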


KELM Works to Reduce Bias and Promote Accuracy

The KELM article published on Google’s AI blog states that KELM has real-world applications, particularly for question answering tasks which are explicitly related to information retrieval (search) and natural language processing (technologies like BERT and MUM).

Google researches many things, some of which seem to be explorations into what is possible but otherwise seem like dead-ends. Research that probably won’t make it into Google’s algorithm usually concludes with a statement that more research is needed because the technology doesn’t fulfill expectations in one way or another.

But that is not the case with the KELM and TEKGEN research. The article is in fact optimistic about real-world application of the discoveries. That tends to give it a higher probability that KELM could eventually make it into search in one form or another.

This is how the researchers concluded the article on KELM for reducing bias:

“This has real-world applications for knowledge-intensive tasks, such as question answering, where providing factual knowledge is essential. Moreover, such corpora can be applied in pre-training of large language models, and can potentially reduce toxicity and improve factuality.”


Will KELM Be Used Soon?

Google’s recently announced MUM algorithm requires factual accuracy, something the KELM corpus was created to provide. But the application of KELM is not limited to MUM.

The fact that reducing bias and improving factual accuracy are critical concerns today, and that the researchers are optimistic about their results, gives KELM a higher probability of being used in search in some form in the future.

Citations

Google AI Article on KELM
KELM: Integrating Knowledge Graphs with Language Model Pre-training Corpora

KELM Research Paper (PDF) 
Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training

TEKGEN Training Corpus at GitHub

Searchenginejournal.com


Google December Product Reviews Update Affects More Than English Language Sites? via @sejournal, @martinibuster


Google’s Product Reviews update was announced as rolling out to English language pages. No mention was made as to whether or when it would roll out to other languages. Mueller answered a question about whether it is rolling out to other languages.

Google December 2021 Product Reviews Update

On December 1, 2021, Google announced on Twitter that a Product Review update would be rolling out that would focus on English language web pages.

The focus of the update was for improving the quality of reviews shown in Google search, specifically targeting review sites.

A Googler tweeted a description of the kinds of sites that would be targeted for demotion in the search rankings:

“Mainly relevant to sites that post articles reviewing products.

Think of sites like “best TVs under $200″.com.

Goal is to improve the quality and usefulness of reviews we show users.”


Google also published a blog post with more guidance on the product review update that introduced two new best practices that Google’s algorithm would be looking for.

The first best practice was a requirement of evidence that a product was actually handled and reviewed.

The second best practice was to provide links to more than one place that a user could purchase the product.

The Twitter announcement stated that the update was rolling out to English language websites. The blog post did not mention which languages it was rolling out to, nor did it specify that the product reviews update was limited to the English language.


Google’s Mueller Thinking About Product Reviews Update

Screenshot of Google's John Mueller trying to recall if the December Product Reviews Update affects more than the English language

Product Review Update Targets More Languages?

The person asking the question was rightly under the impression that the product review update only affected English language search results.


But he asserted that he was seeing search volatility in the German language that appeared to be related to Google’s December 2021 Product Reviews Update.

This is his question:

“I was seeing some movements in German search as well.

So I was wondering if there could also be an effect on websites in other languages by this product reviews update… because we had lots of movement and volatility in the last weeks.

…My question is, is it possible that the product reviews update affects other sites as well?”

John Mueller answered:

“I don’t know… like other languages?

My assumption was this was global and and across all languages.

But I don’t know what we announced in the blog post specifically.

But usually we try to push the engineering team to make a decision on that so that we can document it properly in the blog post.

I don’t know if that happened with the product reviews update. I don’t recall the complete blog post.

But it’s… from my point of view it seems like something that we could be doing in multiple languages and wouldn’t be tied to English.

And even if it were English initially, it feels like something that is relevant across the board, and we should try to find ways to roll that out to other languages over time as well.

So I’m not particularly surprised that you see changes in Germany.

But I also don’t know what we actually announced with regards to the locations and languages that are involved.”

Does Product Reviews Update Affect More Languages?

While the tweeted announcement specified that the product reviews update was limited to the English language, the official blog post did not mention any such limitation.


Google’s John Mueller offered his opinion that the product reviews update is something that Google could do in multiple languages.

One must wonder if the tweet was meant to communicate that the update was rolling out first in English and subsequently to other languages.

It’s unclear if the product reviews update was rolled out globally to more languages. Hopefully Google will clarify this soon.

Citations

Google Blog Post About Product Reviews Update

Product reviews update and your site

Google’s New Product Reviews Guidelines

Write high quality product reviews

John Mueller Discusses If Product Reviews Update Is Global

Watch Mueller answer the question at the 14:00 minute mark




Survey says: Amazon, Google more trusted with your personal data than Apple is



MacRumors reveals that more people feel better with their personal data in the hands of Amazon and Google than in Apple’s. Companies that the public really doesn’t trust with their personal data include Facebook, TikTok, and Instagram.

The survey asked over 1,000 internet users in the U.S. how much they trusted certain companies such as Facebook, TikTok, Instagram, WhatsApp, YouTube, Google, Microsoft, Apple, and Amazon to handle their user data and browsing activity responsibly.

Amazon and Google are considered by survey respondents to be more trustworthy than Apple

Those surveyed were asked whether they trusted these firms with their personal data “a great deal,” “a good amount,” “not much,” or “not at all.” Respondents could also answer that they had no opinion about a particular company. 18% of those polled said that they trust Apple “a great deal” which topped the 14% received by Google and Amazon.

However, 39% said that they trust Amazon “a good amount,” with Google picking up 34% of the votes in that same category. Only 26% of those answering said that they trust Apple “a good amount.” The first two responses, “a great deal” and “a good amount,” are considered positive replies for a company. “Not much” and “not at all” are considered negative responses.

By adding up the scores in the positive categories, Apple tallied 44% (18% said they trusted Apple with their personal data “a great deal” while 26% said “a good amount”). But that placed the tech giant third, after Amazon’s 53% and Google’s 48%. After Apple, Microsoft finished fourth with 43%, YouTube (which is owned by Google) was fifth with 35%, and Facebook was sixth at 20%.
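The positive score is simple addition over the two favorable response buckets; a quick sketch using the percentages reported above:

```python
# Each company's positive score is the sum of its two favorable
# response percentages, as reported in the survey discussed above.
responses = {               # (great deal %, good amount %)
    "Amazon": (14, 39),
    "Google": (14, 34),
    "Apple":  (18, 26),
}

positive = {company: a + b for company, (a, b) in responses.items()}
ranking = sorted(positive, key=positive.get, reverse=True)

print(positive)   # {'Amazon': 53, 'Google': 48, 'Apple': 44}
print(ranking)    # ['Amazon', 'Google', 'Apple']
```

Note how Apple leads in the strongest category ("a great deal") yet still finishes third once the two positive buckets are combined.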


Rounding out the remainder of the nine firms in the survey, Instagram placed seventh with a positive score of 19%, WhatsApp was eighth with a score of 15%, and TikTok was last at 12%.

Looking at the scoring for the two negative responses (“not much” or “not at all”), Facebook had a combined negative score of 72%, making it the least trusted company in the survey. TikTok was next at 63%, with Instagram following at 60%. WhatsApp and YouTube were both in the middle of the pack at 53%, followed by Google and Microsoft at 47% and 42% respectively. Apple and Amazon had the lowest combined negative scores at 40% each.

74% of those surveyed called targeted online ads invasive

The survey also found that a whopping 82% of respondents found targeted online ads annoying, and 74% called them invasive. Just 27% found such ads helpful. This response doesn’t exactly track with the 62% of iOS users who have used Apple’s App Tracking Transparency feature to opt out of being tracked while browsing websites and using apps. The tracking allows third-party firms to send users targeted ads online, which is something they cannot do to users who have opted out.

The 38% of iOS users who decided not to opt out of being tracked might have done so because they find it convenient to receive targeted ads about a certain product that they looked up online. But is ATT actually doing anything?

Marketing strategy consultant Eric Seufert said last summer, “Anyone opting out of tracking right now is basically having the same level of data collected as they were before. Apple hasn’t actually deterred the behavior that they have called out as being so reprehensible, so they are kind of complicit in it happening.”


The Financial Times says that iPhone users are being lumped together by certain behaviors instead of unique ID numbers in order to send targeted ads. Facebook chief operating officer Sheryl Sandberg says that the company is working to rebuild its ad infrastructure “using more aggregate or anonymized data.”

Aggregated data is a collection of individual data that is used to create high-level data. Anonymized data is data that removes any information that can be used to identify the people in a group.

When consumers were asked how often they think their phones or other tech devices are listening to them in ways they didn’t agree to, 72% answered “very often” or “somewhat often,” while 28% said “rarely” or “never.”


Google’s John Mueller on Brand Mentions via @sejournal, @martinibuster


Google’s John Mueller was asked if “brand mentions” helped with SEO and rankings. Mueller explained, in detail, that brand mentions are not something Google uses.

What’s A Brand Mention?

A brand mention is when one website mentions another website. There is an idea in the SEO community that when a website mentions another website’s domain name or URL that Google will see this and count it the same as a link.

Brand mentions are also known as implied links. Much was written about this ten years ago after a Google patent that mentions “implied links” surfaced.

There has never been a solid explanation of why the idea of “brand mentions” has nothing to do with this patent, but I’ll provide a shortened version later in this article.

John Mueller Discussing Brand Mentions

John Mueller Brand Mentions

Do Brand Mentions Help With Rankings?

The person asking the question wanted to know about brand mentions for the purpose of ranking, and with good reason: the idea of “brand mentions” has never been definitively reviewed.


The person asked the question:

“Do brand mentions without a link help with SEO rankings?”

Google Does Not Use Brand Mentions

Google’s John Mueller answered that Google does not use brand mentions for any link-related purpose.

Mueller explained:

“From my point of view, I don’t think we use those at all for things like PageRank or understanding the link graph of a website.

And just a plain mention is sometimes kind of tricky to figure out anyway.”

That part about it being tricky is interesting.

He didn’t elaborate on why it’s tricky until later in the video where he says it’s hard to understand the subjective context of a website mentioning another website.

Brand Mentions Are Useful For Building Awareness

Mueller next says that brand mentions may be useful for helping to get the word out about a site, which is about building popularity.

Mueller continued:

“But it can be something that makes people aware of your brand, and from that point of view, could be something where indirectly you might have some kind of an effect from that in that they search for your brand and then …obviously, if they’re searching for your brand then hopefully they find you right away and then they can go to your website.

And if they like what they see there, then again, they can go off and recommend that to other people as well.”


“Brand Mentions” Are Problematic

Later on, at the 58-minute mark, another person brings the topic back up and asks how Google could handle spam sites that mention a brand in a negative way.

The person said that one can disavow links but one cannot disavow a “brand mention.”

Mueller agreed and said that’s one of the things that makes brand mentions difficult to use for ranking purposes.

John Mueller explained:

“Kind of understanding the almost the subjective context of the mention is really hard.

Is it like a positive mention or a negative mention?

Is it a sarcastic positive mention or a sarcastic negative mention? How can you even tell?

And all of that, together with the fact that there are lots of spammy sites out there and sometimes they just spin content, sometimes they’re malicious with regards to the content that they create…

All of that, I think, makes it really hard to say we can just use that as the same as a link.

…It’s just, I think, too confusing to use as a clear signal.”

Where “Brand Mentions” Come From

The idea of “brand mentions” has bounced around for over ten years.

There were no research papers or patents to support it. “Brand mentions” is literally an idea that someone invented out of thin air.

However, the “brand mention” idea took off in 2012 when a patent surfaced that seemed to confirm the idea of brand mentions.

There’s a whole long story to this so I’m just going to condense it.

There’s a patent from 2012 that was misinterpreted in several different ways because most people at the time, myself included, did not read the entire patent from beginning to end.


The patent itself is about ranking web pages.

The structure of most Google patents consists of introductory paragraphs that discuss what the patent is about, followed by pages of in-depth description of the details.

The introductory paragraphs that explain what the patent is about state:

“Methods, systems, and apparatus, including computer programs… for ranking search results.”


Pretty much nobody read that beginning part of the patent.

Everyone focused on a single paragraph in the middle of the patent (page 9 out of 16 pages).

In that paragraph there is a mention of something called “implied links.”

The word “implied” is only mentioned four times in the entire patent and all four times are contained within that single paragraph.

So when this patent was discovered, the SEO industry focused on that single paragraph as proof that Google uses brand mentions.

In order to understand what an “implied link” is, you have to scroll all the way back up to the opening paragraphs where the Google patent authors describe something called a “reference query” that is not a link but is nevertheless used for ranking purposes just like a link.

What Is A Reference Query?

A reference query is a search query that contains a reference to a URL or a domain name.

The patent states:

“A reference query for a particular group of resources can be a previously submitted search query that has been categorized as referring to a resource in the particular group of resources.”


Elsewhere the patent provides a more specific explanation:

“A query can be classified as referring to a particular resource if the query includes a term that is recognized by the system as referring to the particular resource.

…search queries including the term “example.com” can be classified as referring to that home page.”

The summary of the patent, which comes at the beginning of the document, states that it’s about establishing which links to a website are independent, counting reference queries, and using that information to create a “modification factor” that is used to rank web pages.

“…determining, for each of the plurality of groups of resources, a respective count of reference queries; determining, for each of the plurality of groups of resources, a respective group-specific modification factor, wherein the group-specific modification factor for each group is based on the count of independent links and the count of reference queries for the group;”

The entire patent largely rests on those two very important factors: a count of independent inbound links and a count of reference queries. The phrases “reference query” and “reference queries” are used 39 times in the patent.
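To illustrate how the two counts might fit together, here is a hypothetical sketch. The patent describes classifying a query as a reference query when it contains a term referring to the resource, but it does not publish a formula for the modification factor, so the weighting below is invented purely for illustration.

```python
# Illustrative sketch only: the patent bases a group-specific
# "modification factor" on a count of independent links and a count of
# reference queries, without disclosing how they are combined.

def is_reference_query(query, domain):
    """Classify a query as a reference query if it contains the domain."""
    return domain.lower() in query.lower()

queries = ["example.com reviews", "best widgets", "visit example.com"]
ref_count = sum(is_reference_query(q, "example.com") for q in queries)

independent_links = 120                       # hypothetical count
modification_factor = independent_links + 10 * ref_count  # invented weighting

print(ref_count, modification_factor)         # 2 140
```

The point is that searches naming a domain act as a ranking input alongside links, even though no link is involved; the actual combination Google might use is not public.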


As noted above, the reference query is used for ranking purposes like a link, but it’s not a link.

The patent states:

“An implied link is a reference to a target resource…”

It’s clear that in this patent, when it mentions the implied link, it’s talking about reference queries, which as explained above simply means when people search using keywords and the domain name of a website.

Idea of Brand Mentions Is False

The whole idea of “brand mentions” became a part of SEO belief systems because of how that patent was misinterpreted.

But now you have the facts and know why “brand mentions” is not a real thing.

Plus John Mueller confirmed it.

“Brand mentions” is something completely random that someone in the SEO community invented out of thin air.

Citations

Ranking Search Results Patent

Watch John Mueller discuss “brand mentions” at the 44:10 mark; the second part on brand mentions begins at the 58:12 mark



