

Bill Lambert is Not Real – Claims of Google Filters Likely False


There’s been discussion about a Google insider who claims Google filters traffic from publishers. Is Google filtering website traffic? As fringe as that idea sounds, there actually is a Google patent on filtering web results.

Bill Lambert is Not Real

Update: This article was published several hours before John Mueller tweeted the following denial.

As I suspected and this article concluded, John Mueller confirmed that the persona known as Bill Lambert is not a Google contractor with access to inside information.

Consider anything sourced from that persona to be misinformation. 

Screenshot of John Mueller’s tweet

Claim that Google Filters Traffic

A WebmasterWorld discussion about Google’s broad core algorithm update quoted someone who claims inside knowledge about filters designed to take away publisher traffic.

In the discussion, the person posting as “Bill Lambert” said:

“As I explained, prior to an update you will see traffic & metrics like you are used to. This is because while the core algo is updated the various ‘filters’ designed to take your traffic away are not live. While the update rolls you will see flux.

Post update you will see traffic levels around where they were prior to the update BUT these will slowly drop away. We are still in filter drop mode. Geo targeting will be out for the next few days too (as it was last week with the test).”

If you listen to Googlers and read patents and research papers, you will not find evidence of a Google filter that takes “your traffic away.”

But there is a patent about filtering the search results.

Google Filter Algorithm Patent

The Google patent is called “Filtering in Search Engines.”

The patent focuses on satisfying user intent and information needs by filtering the search results.

This is how it describes the problem:

“…given the same query by two different users, a given set of search results can be relevant to one user and irrelevant to another, entirely because of the different intent and information needs.”

The patent then explains why inferring user intent is difficult:

“Most attempts at solving the problem of inferring a user’s intent typically depend on relatively weak indicators, such as static user preferences, or predefined methods of query reformulation that may be educated guesses about what the user is interested in based on the query terms.

Approaches such as these cannot fully capture user intent because such intent is itself highly variable and dependent on numerous situational facts that cannot be extrapolated from typical query terms.”

The proposed solution is to filter the results “based on content sought by a user.”

It then describes a method of filtering results based on content in the URL, like the word “reviews,” which signals that the page may satisfy the user intent.

The patent proposes creating an annotation database, where each annotation pairs a label representing a kind of search query with a URL pattern:

“Annotation database… may contain a large collection of annotations. Generally, an annotation includes a pattern for a uniform resource locator (URL) for the URLs of documents, and a label to be applied to a document whose URL matches the URL pattern. Schematically, an annotation may take the form:

<label, URL pattern>

where label is a term or phrase, and URL pattern is a specification of a pattern for a URL.

For example, the annotation

<“professional review”, www.digitalcameraworld.com/review/>

would be used to apply the label “professional review” to any document whose URL includes a prefix matching the network location “www.digitalcameraworld.com/review/”. All documents in this particular host’s directory are considered by the provider of the annotation to be “professional review(s)” of digital cameras.”
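As a rough, hypothetical sketch of the <label, URL pattern> idea, the snippet below applies a label to any URL that matches a stored prefix pattern. The annotation list and function name are my own illustrative assumptions; only the “professional review” entry comes from the patent’s example.

```python
# Hypothetical sketch of the patent's <label, URL pattern> annotations.
# Only the "professional review" entry is taken from the patent's example;
# everything else here is an illustrative assumption.
ANNOTATIONS = [
    ("professional review", "www.digitalcameraworld.com/review/"),
]

def labels_for_url(url: str) -> list[str]:
    """Return every label whose URL pattern is a prefix of the given URL."""
    # Drop the scheme so prefixes match the patent's example form.
    bare = url.removeprefix("https://").removeprefix("http://")
    return [label for label, pattern in ANNOTATIONS if bare.startswith(pattern)]

print(labels_for_url("https://www.digitalcameraworld.com/review/camera-x"))
# ['professional review']
```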

I recall seeing something like this with two-word search queries shortly after the Caffeine update. It looked like Google was excluding commercial sites from a specific SERP that used to feature them, preferring government and educational research-related pages instead.

Except for one commercial site, which was still ranking.

That site had the word “/research” in the URL. Strange, right?

Google Filter Algorithm is Not About Taking Away Traffic

That patent is about satisfying user intent. It’s not about taking traffic away from publishers.

The idea of a Google algorithm that deprives publishers of traffic is part of an old myth, which holds that Google purposely makes its search results poor in order to increase ad clicks.

That myth is typically spread by SEOs whose sites have suffered in the search results and haven’t recovered. In my opinion, it’s easier to blame Google for bad intent than to admit that maybe the SEO strategy is lacking. After all, commercial sites still rank for commercial queries.

The idea that Google filters traffic to help ad clicks is a myth. All information retrieval research and patents I have read focus on satisfying users. Google can’t win by being a poor search engine.

Only SEOs Talk About Google Filters

Filtering is an SEO viewpoint, a way of framing what SEOs see from their side of the search results.

Remember the theory of the Sandbox? SEOs believed that the algorithm was filtering websites that were new, or sites that carried affiliate ads.

At the time, the SEO strategy was to create a website and link it up with directory links, reciprocal links, and link-bait viral links.

As we know now, Google was neutralizing the influence of those kinds of links. So it’s likely, in my opinion, that the filtering that SEOs thought they were looking at was really the effect of their outdated link building strategies.

The point is that “filtering” is culturally an SEO way of seeing things. It’s related to the word “targeting” in that SEOs tend to believe that Google “targets” certain kinds of sites in order to “filter” them out of the SERPs.

That is not how Google describes its algorithms.

How Google Works

Google’s research and patents tend to focus on relevance:

  1. Understanding search queries
  2. Matching web pages to search queries

To drastically simplify how Google works, Google’s search engine has at least these four basic parts:

  1. Crawl Engine
    The crawl engine is what crawls your site.
  2. Indexing Engine
    The indexing engine represents the document set.
  3. Ranking Engine
    The ranking engine is where the ranking factors live.
  4. Modification Engine
    The modification engine is where other factors are applied, such as personalization, geographic and time-related factors, user intent, and possibly fact checking.

All of those engines do many things but filtering traffic to web publishers is not one of them.
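As a toy illustration of that four-part division of labor, here is a minimal sketch. Every name and scoring rule in it is my own assumption; real search infrastructure is vastly more complex.

```python
# Toy sketch of the four-part division of labor described above.
# All names and scoring logic are illustrative assumptions.

def crawl(urls):
    """Crawl engine: fetch pages (stubbed here with canned text)."""
    return {url: f"content of {url}" for url in urls}

def index(pages):
    """Indexing engine: represent the document set as an inverted index."""
    inverted = {}
    for url, text in pages.items():
        for word in text.lower().split():
            inverted.setdefault(word, set()).add(url)
    return inverted

def rank(inverted, query):
    """Ranking engine: score documents by how many query terms they match."""
    scores = {}
    for word in query.lower().split():
        for url in inverted.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

def modify(ranked, user_context):
    """Modification engine: re-order results for personalization, geography, etc."""
    preferred = user_context.get("preferred_site")
    return sorted(ranked, key=lambda url: 0 if preferred and preferred in url else 1)

pages = crawl(["example.com/a", "example.com/b"])
results = modify(rank(index(pages), "content example"), {"preferred_site": "example.com/b"})
print(results)
```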

Google Filters are an SEO Idea

From the outside, to an SEO, it might look like Google is filtering. But when you read the research papers and patents, it’s really about satisfying user intent, about relevance.

Maybe the link algorithm contains processes that identify suspicious link patterns. But that’s not the “filter” being talked about.

It’s a misnomer to call them filters. Research papers and patents, especially the most recent ones, do not refer to filters or filtering.

Filters are exclusively an SEO idea. It’s how some in the SEO community perceive what Google is doing.

That is one reason among many why it seems likely to me that the so-called insider is not really an insider.

Relevance is a Google Focus

Research and patents focus on relevance. The idea that Google filters search results to keep “traffic away” is outside of accepted knowledge about Google.

The claim can be accurately described as surreal.

It’s safe to say that Google ranks websites from the point of view of relevance, not filtering.


What can ChatGPT do?


ChatGPT Explained

ChatGPT is a large language model developed by OpenAI that is trained on a massive amount of text data. It is capable of generating human-like text and has been used in a variety of applications, such as chatbots, language translation, and text summarization.

One of the key features of ChatGPT is its ability to generate text that is similar to human writing. This is achieved through the use of a transformer architecture, which allows the model to understand the context and relationships between words in a sentence. The transformer architecture is a type of neural network that is designed to process sequential data, such as natural language.

Another important aspect of ChatGPT is its ability to generate text that is contextually relevant. This means that the model is able to understand the context of a conversation and generate responses that are appropriate to the conversation. This is accomplished by the use of a technique called “masked language modeling,” which allows the model to predict the next word in a sentence based on the context of the previous words.

One of the most popular applications of ChatGPT is in the creation of chatbots. Chatbots are computer programs that simulate human conversation and can be used in customer service, sales, and other applications. ChatGPT is particularly well-suited for this task because of its ability to generate human-like text and understand context.

Another application of ChatGPT is language translation. By training the model on a large amount of text data in multiple languages, it can be used to translate text from one language to another. The model is able to understand the meaning of the text and generate a translation that is grammatically correct and semantically equivalent.

In addition to chatbots and language translation, ChatGPT can also be used for text summarization. This is the process of taking a large amount of text and condensing it into a shorter, more concise version. ChatGPT is able to understand the main ideas of the text and generate a summary that captures the most important information.

Despite its many capabilities and applications, ChatGPT is not without its limitations. One of the main challenges with using language models like ChatGPT is the risk of generating text that is biased or offensive. This can occur when the model is trained on text data that contains biases or stereotypes. To address this, OpenAI has implemented a number of techniques to reduce bias in the training data and in the model itself.

In conclusion, ChatGPT is a powerful language model that is capable of generating human-like text and understanding context. It has a wide range of applications, including chatbots, language translation, and text summarization. While there are limitations to its use, ongoing research and development is aimed at improving the model’s performance and reducing the risk of bias.

** The above article was written 100% by ChatGPT, as an example of the advanced text that automated AI can produce.
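As a hands-on follow-up to the generated article above, here is a minimal sketch of two of the capabilities it describes, text generation and summarization, using small open models from the Hugging Face transformers library as stand-ins. ChatGPT itself is only accessible through OpenAI’s service, so the model names below (gpt2 and a distilled BART summarizer) are substitutions chosen for illustration, not what ChatGPT runs on.

```python
# Sketch of text generation and summarization with small open models.
# These are stand-ins for ChatGPT, which is not publicly downloadable.
# pip install transformers torch
from transformers import pipeline

# Text generation: a causal language model continues a prompt one token
# at a time, each prediction conditioned on the words before it.
generator = pipeline("text-generation", model="gpt2")
print(generator("Chatbots are useful because", max_new_tokens=25)[0]["generated_text"])

# Summarization: condense a longer passage into a shorter one that
# keeps the main ideas.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
article = (
    "ChatGPT is a large language model developed by OpenAI that is trained "
    "on a massive amount of text data. It can generate human-like text and "
    "has been used in chatbots, language translation, and text summarization."
)
print(summarizer(article, max_length=25, min_length=5)[0]["summary_text"])
```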


Google December Product Reviews Update Affects More Than English Language Sites?


Google announced that its Product Reviews update was rolling out to English language pages, with no mention of if or when it would roll out to other languages. John Mueller has now answered a question about whether it is rolling out to other languages as well.

Google December 2021 Product Reviews Update

On December 1, 2021, Google announced on Twitter that a Product Reviews update would be rolling out, focused on English language web pages.

The update was aimed at improving the quality of reviews shown in Google Search, specifically targeting review sites.

A Googler tweeted a description of the kinds of sites that would be targeted for demotion in the search rankings:

“Mainly relevant to sites that post articles reviewing products.

Think of sites like “best TVs under $200”.com.

Goal is to improve the quality and usefulness of reviews we show users.”


Google also published a blog post with more guidance on the Product Reviews update, introducing two new best practices that Google’s algorithm would be looking for.

The first best practice was to provide evidence that a product was actually handled and reviewed.

The second best practice was to provide links to more than one place that a user could purchase the product.

The Twitter announcement stated that the update was rolling out to English language websites, but the blog post neither mentioned which languages it applied to nor specified that the update was limited to the English language.

Google’s Mueller Thinking About Product Reviews Update

Screenshot of Google’s John Mueller trying to recall if the December Product Reviews update affects more than the English language

Product Review Update Targets More Languages?

The person asking the question was rightly under the impression that the Product Reviews update only affected English language search results.


But he asserted that he was seeing search volatility in the German language that appeared to be related to Google’s December 2021 Product Reviews update.

This is his question:

“I was seeing some movements in German search as well.

So I was wondering if there could also be an effect on websites in other languages by this product reviews update… because we had lots of movement and volatility in the last weeks.

…My question is, is it possible that the product reviews update affects other sites as well?”

John Mueller answered:

“I don’t know… like other languages?

My assumption was this was global and across all languages.

But I don’t know what we announced in the blog post specifically.

But usually we try to push the engineering team to make a decision on that so that we can document it properly in the blog post.

I don’t know if that happened with the product reviews update. I don’t recall the complete blog post.

But it’s… from my point of view it seems like something that we could be doing in multiple languages and wouldn’t be tied to English.

And even if it were English initially, it feels like something that is relevant across the board, and we should try to find ways to roll that out to other languages over time as well.

So I’m not particularly surprised that you see changes in Germany.

But I also don’t know what we actually announced with regards to the locations and languages that are involved.”

Does Product Reviews Update Affect More Languages?

While the tweeted announcement specified that the Product Reviews update was limited to the English language, the official blog post did not mention any such limitation.

Google’s John Mueller offered his opinion that the product reviews update is something that Google could do in multiple languages.

One must wonder if the tweet was meant to communicate that the update was rolling out first in English and subsequently to other languages.

It’s unclear if the product reviews update was rolled out globally to more languages. Hopefully Google will clarify this soon.

Citations

Google Blog Post About Product Reviews Update

Product reviews update and your site

Google’s New Product Reviews Guidelines

Write high quality product reviews

John Mueller Discusses If Product Reviews Update Is Global

Watch Mueller answer the question at the 14:00 Minute Mark


Searchenginejournal.com


Survey says: Amazon, Google more trusted with your personal data than Apple is



A survey covered by MacRumors reveals that more people feel comfortable with their personal data in the hands of Amazon and Google than in Apple’s. Companies that the public really doesn’t trust with their personal data include Facebook, TikTok, and Instagram.

The survey asked over 1,000 internet users in the U.S. how much they trusted certain companies such as Facebook, TikTok, Instagram, WhatsApp, YouTube, Google, Microsoft, Apple, and Amazon to handle their user data and browsing activity responsibly.

Amazon and Google are considered by survey respondents to be more trustworthy than Apple

Those surveyed were asked whether they trusted these firms with their personal data “a great deal,” “a good amount,” “not much,” or “not at all.” Respondents could also answer that they had no opinion about a particular company. 18% of those polled said that they trust Apple “a great deal,” which topped the 14% received by Google and Amazon.

However, 39% said that they trust Amazon “a good amount,” with Google picking up 34% in that same category. Only 26% of respondents said that they trust Apple “a good amount.” The first two responses, “a great deal” and “a good amount,” are considered positive replies for a company; “not much” and “not at all” are considered negative responses.

By adding up the scores in the positive categories, Apple tallied 44% (18% “a great deal” plus 26% “a good amount”). But that placed the tech giant third, after Amazon’s 53% and Google’s 48%. Microsoft finished fourth with 43%, YouTube (which is owned by Google) was fifth with 35%, and Facebook was sixth at 20%.

Rounding out the remainder of the nine firms in the survey, Instagram placed seventh with a positive score of 19%, WhatsApp was eighth with a score of 15%, and TikTok was last at 12%.

Looking at the scoring for the two negative responses (“not much” or “not at all”), Facebook had a combined negative score of 72%, making it the least trusted company in the survey. TikTok was next at 63%, with Instagram following at 60%. WhatsApp and YouTube were both in the middle of the pack at 53%, followed by Google at 47% and Microsoft at 42%. Apple and Amazon had the lowest combined negative scores, at 40% each.
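To make the tallying method explicit, here is the arithmetic behind the combined scores reported above, as a small sketch. It covers only the three most trusted companies, using the percentages as reported.

```python
# Survey percentages as reported above, for the three most trusted companies.
# "a great deal" and "a good amount" are the two positive response categories.
reported = {
    "Amazon": {"great_deal": 14, "good_amount": 39, "negative": 40},
    "Google": {"great_deal": 14, "good_amount": 34, "negative": 47},
    "Apple":  {"great_deal": 18, "good_amount": 26, "negative": 40},
}

for company, r in reported.items():
    combined_positive = r["great_deal"] + r["good_amount"]  # Apple: 18 + 26 = 44
    print(f"{company}: positive {combined_positive}%, negative {r['negative']}%")
```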

74% of those surveyed called targeted online ads invasive

The survey also found that a whopping 82% of respondents found targeted online ads annoying, and 74% called them invasive. Just 27% found such ads helpful. This response doesn’t exactly track with the 62% of iOS users who have used Apple’s App Tracking Transparency (ATT) feature to opt out of being tracked while browsing websites and using apps. Tracking allows third-party firms to send users targeted ads online, which is something they cannot do to users who have opted out.

The 38% of iOS users who decided not to opt out of being tracked might have done so because they find it convenient to receive targeted ads about a certain product that they looked up online. But is ATT actually doing anything?

Marketing strategy consultant Eric Seufert said last summer, “Anyone opting out of tracking right now is basically having the same level of data collected as they were before. Apple hasn’t actually deterred the behavior that they have called out as being so reprehensible, so they are kind of complicit in it happening.”

The Financial Times says that iPhone users are being lumped together by certain behaviors instead of unique ID numbers in order to send targeted ads. Facebook chief operating officer Sheryl Sandberg says that the company is working to rebuild its ad infrastructure “using more aggregate or anonymized data.”

Aggregated data is a collection of individual data that is used to create high-level data. Anonymized data is data that removes any information that can be used to identify the people in a group.

When consumers were asked how often they think their phones or other tech devices are listening in on them in ways they didn’t agree to, 72% answered “very often” or “somewhat often,” while 28% said “rarely” or “never.”
