What The Google Antitrust Verdict Could Mean For The Future Of SEO


In August 2024, Google lost its first major U.S. antitrust case, U.S. Department of Justice v. Google.

While we all gained some interesting insights into how Google’s algorithm works (hello, NavBoost!), the implications of this loss for Google as a business are harder to unravel. Hence, this article.

There’s still plenty we don’t know about Google’s future as a result of this trial, but it’s clear there will be consequences ahead.

Even though Google representatives have said they will appeal the decision, both sides are already working on proposals for how to restore competition, which will be decided by August 2025.

My significant other is a corporate lawyer, and this trial has been a frequent topic at the dinner table over the course of the last year.


We come from different professional backgrounds, but we have been equally invested in the outcome – both for our respective careers and industries. His perspective has helped me better grasp the potential legal and business outcomes that could be ahead for Google.

I will break that down for you in this article, along with what it could mean for the SEO industry and search at large.

Background: The Case Against Google

In August 2024, Federal Judge Amit Mehta ruled that Google violated U.S. antitrust law by maintaining an illegal monopoly through exclusive agreements with companies like Apple to be the default search engine on smartphones and web browsers worldwide.

During the case, we learned that Google paid Apple $20 billion in 2022 to be the default search engine on its Safari browser, thus making it impossible for other search engines like DuckDuckGo or Bing to compete.

The ruling also found that Google illegally monopolized general search text advertising, since Google was able to raise prices on ad products above what would have been possible in a competitive market.

Those ads are sold via Google Ads (formerly AdWords) and allow marketers to run ads against search keywords related to their business.


Note: A second antitrust case is still underway over whether Google has also created illegal monopolies with open web display ad technology. Closing arguments in that case will be heard in November 2024, with a verdict to follow.

Remedies Proposed By The DOJ

On Oct. 8, 2024, the DOJ filed proposed antitrust remedies for Google. Until that point, there had been plenty of speculation about potential solutions.

Now, we know that the DOJ will be seeking remedies in four “categories of harm”:

  1. Search Distribution and Revenue Sharing.
  2. Accumulation and Use of Data.
  3. Generation and Display of Search Results.
  4. Advertising Scale and Monetization.

The following sections highlight potential remedies the DOJ proposed in that filing.

Ban On Exclusive Contracts

To address Google’s search distribution and revenue sharing, it is likely that we will see a ban on exclusive contracts for Google going forward.

In the Oct. 8 filing, the DOJ outlined exploring limiting or prohibiting default agreements, pre-installation agreements, and other revenue-sharing agreements related to search and search-related products.

Given that this is what the case centered on, it seems most likely that we will see some flavor of this outcome, and that could provide new incentives for innovation around search at Apple.


Apple Search Engine?

Judge Mehta noted in his judgment that Apple had periodically considered building its own search technology, but decided against it when a 2018 analysis concluded Apple would lose more than $12 billion in revenue during the first five years if it broke with Google.

If Google were no longer able to have agreements of this nature, we may finally see Apple emerge with a search engine of its own.

According to a Bloomberg report in October 2023, Apple has been “tinkering” with search technology for years.

It has a large search team dedicated to a next-generation search engine for Apple’s apps called “Pegasus,” which has already rolled out in some apps.

Apple has also started adding web results to Spotlight, its tool for helping users find things across their devices, pointing them to sites that answer their search queries.

Apple already has a web crawler called Applebot that finds sites it can provide users in Siri and Spotlight. It has also built its own search engines for some of its services like the App Store, Maps, Apple TV, and News.


In 2019, Apple purchased Laserlike, an AI-based search engine company founded by former Google employees. Apple’s machine learning team has also been seeking new engineers to work on search technologies.

All of these could be important infrastructure for a new search engine.

Implications For SEO

If users are given more choices in their default search engine, some may stray away from Google, which could cut its market share.

However, as of now, Google is still thought of as the leader in search quality, so it’s hard to gauge how much would realistically change if exclusive contracts were banned.

A new search engine from Apple would obviously be an interesting development. It would be a new algorithm to test, understand, and optimize for.

Knowing that users are hungry for another quality option, people would likely embrace Apple in this space, and it could attract a significant number of users, provided the results are of high enough quality. Quality is really key.


Search is the most used tool on smartphones, tablets, and computers. Apple has the users that Google needs.

Without Apple’s partnership with Google, Apple has the potential to disrupt this space. It can offer a more integrated search experience than any other company out there. And its commitment to privacy is appealing to many long-time Google users.

The DOJ would likely view this as a win as well because Apple is one of the few companies large enough to fully compete across the search space with Google.

Required Sharing Of Data To Competitors

To address the harm caused by Google’s accumulation and use of data, the DOJ is considering a remedy that would force Google to license its data to competitors like Bing or DuckDuckGo.

The antitrust ruling found that Google’s contracts ensure it gets the most user data, and that those data streams keep competitors from improving their search results enough to compete.

In the Oct. 8 filing, the DOJ is considering forcing Google to make available via API: 1) the indexes, data, feeds, and models used for Google Search, including those used in AI-assisted search features, and 2) Google search results, features, and ads, including the underlying ranking signals.


Believe it or not, this solution has precedent, although certainly not at the same scale as what is being proposed for Google.

The DOJ required AT&T to provide royalty-free licenses to its patents in 1956, and required Microsoft to make some of its APIs available to third parties for free after it lost an antitrust case in 1999.

Google has argued that there are user privacy concerns related to data sharing. The DOJ’s response is that it is considering prohibiting Google from using or retaining data that cannot be shared with others because of privacy concerns.

Implications For SEO

Should Google be required to do any of this, it would be an unprecedented victory for the open web. It is overwhelming to think of the possibilities if any of these remedies were to come to fruition.

We would finally be able to see behind the curtain of the algorithm and ranking signals at play. There would be a true open competition to build rival search engines.

If Google were no longer to use personalized data, we might see the end of personalized search results based on your search history, which has pros and cons.


I would also be curious what would happen to Google Discover since that product provides content based on your browsing history.

The flip side of this potential outcome is that it would be easier than ever to gamify search results again, at least in the short term.

If everyone knew what makes pages rank in Google, we would be back in the early days of SEO, when we could easily manipulate rank.

But if others take the search algorithm and build upon it in different ways, maybe that wouldn’t be as big of a concern in the long term.

Opting Out Of SERP Features

The DOJ filing briefly touched on one intriguing remedy for the harm Google has caused regarding the generation and display of search results.

The DOJ lawyers are proposing that website publishers be given the ability to opt out of any Google features or products they wish.


This would include Google’s AI Overviews, which the filing gives as an example, but it could also include all other SERP features where Google relies on websites and other content created by third parties – in other words, all of them.

Because Google has held this monopoly, publishers have had virtually no bargaining power over how they are included in SERP features without risking complete exclusion from Google.

This solution would help publishers have more control over how they show up in the search results.

Implications For SEO

This could be huge for SEO if the DOJ does indeed move forward with requiring Google to let publishers opt out of any and all features and products they wish without being excluded from Google’s results altogether.

There are plenty of website publishers who do not want Google to be able to use their content to train its AI products, and wish to opt out of AI Overviews.

When featured snippets first came about, there was a similar reaction.


Depending on the query, featured snippets and AI Overviews can either help or harm website traffic, but it’s intriguing to think publishers could have a choice in whether they are included.

Licensing Of Ad Feeds

To address advertising scale and monetization harm caused by Google, the DOJ filing provided a few half-baked solutions related to search text advertising.

Because Google holds a 91% market share of search in the U.S., other search engines have struggled to monetize through advertising.

One solution is to require Google to license or syndicate its ad feed independent of its search results. This way, other search engines could better monetize by utilizing Google’s advertising feed.

The DOJ is also looking at remedies to provide more transparent and detailed reporting to advertisers about search text ad auctions and monetization, as well as the ability to opt out of Google search features, like keyword expansion and broad match, that advertisers don’t want to participate in.

Implications For SEO

I don’t see obvious implications for SEO, but there are plenty for our friends in PPC.


While licensing the Google ad feed to help other search engines monetize is intriguing, it doesn’t get at the issue of Google overcharging advertisers in its auctions.

More thought and creativity might be needed here to find a solution that would make sense for both creating more competition in search and fairness for advertisers.

The DOJ is certainly on the right track with more transparency in reporting and allowing advertisers to opt out of programs they don’t want to be part of.

Breaking Up Of Google

The DOJ lawyers are also considering “structural remedies” like forcing Google to sell off parts of its business, like the Chrome browser or the Android operating system.

Divesting Android is the remedy that has been discussed the most. It would be another way to prevent Google from using its position of power over device makers to require them to enter agreements for access to other Google apps like Gmail or Google Play.

If the DOJ forced Google to sell Chrome, that would just be another way to stop Google from using Chrome data to inform its search algorithm.


The behavioral remedies already mentioned could arguably accomplish the same thing, without the stock market-shattering impact of a forced breakup.

That said, depending on the outcome of the U.S. election, we could see a DOJ that feels empowered to take bigger swings, so this may still be on the table.

The primary issue with this remedy is that Google’s revenue largely comes from search advertising. So, if the goal is to reduce its market share, would breaking up smaller areas of the business really accomplish that?

Implications For SEO

If Android became a stand-alone business, I don’t see implications for SEO because it isn’t directly related to search.

Also, Apple controls so much of the relevant mobile market that spinning Android off would have little to no effect on addressing monopolistic practices.

If Chrome were sold, Google would lose the valuable user signals that feed NavBoost in the algorithm.


That would have some larger implications for the quality of its results since we know, through trial testimony, that those Chrome user signals are heavily weighted in the algorithm.

How much of an impact that would have on the results may only be known inside Google, or maybe not even there, but it could be material.

Final Thoughts

There is so much to be decided in the year (potentially years) to come regarding Google’s fate.

While all of the recent headlines focus on the possibility of Google being broken up, I think this is a less likely outcome.

While divesting Chrome may be on the table, it seems like there are easier ways to accomplish the government’s goals.

And Android and Google Play are both free to customers and rely on open-source code, so mandating changes to them doesn’t seem the most logical way to solve monopolistic practices.


I suspect we’ll see some creative behavioral remedies instead. The banning of exclusive contracts feels like a no-brainer.

Of all the solutions out there, requiring Google to provide APIs of Google search results, ranking signals, etc. is by far the most intriguing idea.

I cannot even imagine a world where we have access to that information right now. And I can only hope that we do see the emergence of an Apple search engine. It feels long overdue for it to enter this space and start disrupting.

Even with Google appealing Mehta’s decision, the remedy proposals will continue ahead.

In November, the DOJ will file a more refined framework, and then Google will propose its own remedies in December.



How to Revive an Old Blog Article for SEO


Quick question: What do you typically do with your old blog posts? Most likely, the answer is: Not much.

If that’s the case, you’re not alone. Many of us in SEO and content marketing tend to focus on continuously creating new content, rather than leveraging our existing blog posts.

However, here’s the reality—Google is becoming increasingly sophisticated in evaluating content quality, and we need to adapt accordingly. Just as it’s easier to encourage existing customers to make repeat purchases, updating old content on your website is a more efficient and sustainable strategy in the long run.

Ways to Optimize Older Content 

Some of your old content might not be well optimized for SEO, might rank for irrelevant keywords, or might drive no traffic at all. If the quality is still decent, however, you should be able to optimize it properly with little effort.


Refresh Content 

If your blog post contains a specific year or mentions current events, it may become outdated over time. If the rest of the content is still relevant (like if it’s targeting an evergreen topic), simply updating the date might be all you need to do.

Rewrite Old Blog Posts 

When the content quality is low (you might have greatly improved your writing skills since you wrote the post) but the potential is still there, there’s not much you can do apart from rewriting the old blog post completely.

This is not a waste—you’re saving time on brainstorming since the basic structure is already in place. Now, focus on improving the quality.

Delete Old Blog Posts 

You might find a blog post that just seems unusable. Should you delete your old content? It depends. If it’s completely outdated, of low quality, and irrelevant to any valuable keywords for your website, it’s better to remove it. 

Once you decide to delete the post, don’t forget to set up a 301 redirect to a related post or page, or to your homepage.
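
If you want to confirm the redirect is working, a quick script can help. Here’s a minimal sketch using Python’s requests library, with placeholder URLs you’d swap for your own, that checks whether the old URL returns a 301 and points to the page you expect:

```python
# A minimal sketch (URLs are placeholders) to confirm that a deleted post now
# returns a 301 and points where you expect, using the requests library.
import requests

OLD_URL = "https://www.example.com/deleted-post/"
EXPECTED_TARGET = "https://www.example.com/related-post/"

response = requests.get(OLD_URL, allow_redirects=False, timeout=10)
print(response.status_code, response.headers.get("Location"))

if response.status_code == 301 and response.headers.get("Location") == EXPECTED_TARGET:
    print("Redirect is set up correctly.")
else:
    print("Check your redirect configuration.")
```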

Promote Old Blog Posts 

Sometimes all your content needs is a bit of promotion to start ranking and getting traffic again. Share it on your social media, link to it from a new post – do something to get it discoverable again to your audience. This can give it the boost it needs to attract organic links too.


Which Blog Posts Should You Update?

Deciding when to update or rewrite blog posts relies on one important thing: a content audit.

Use your Google Analytics to find out which blog posts used to drive tons of traffic, but no longer have the same reach. You can also use Google Search Console to find out which of your blog posts have lost visibility in comparison to previous months. I have a guide on website analysis using Google Analytics and Google Search Console you can follow.
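
If you prefer to pull this data programmatically, the Search Console API exposes the same page-level metrics. Here’s a minimal sketch (assuming you’ve already set up OAuth credentials; the property URL and thresholds are placeholders) that compares clicks per page across two periods and flags posts that lost ground:

```python
# A minimal sketch: pull page-level clicks from the Search Console API for two
# periods and flag posts whose traffic has dropped. Assumes OAuth credentials
# are already set up; the site URL, dates, and thresholds are placeholders.
from googleapiclient.discovery import build

SITE = "https://www.example.com/"  # your verified Search Console property

def clicks_by_page(service, start_date, end_date):
    response = service.searchanalytics().query(
        siteUrl=SITE,
        body={
            "startDate": start_date,
            "endDate": end_date,
            "dimensions": ["page"],
            "rowLimit": 5000,
        },
    ).execute()
    return {row["keys"][0]: row["clicks"] for row in response.get("rows", [])}

def find_declining_posts(credentials):
    service = build("searchconsole", "v1", credentials=credentials)
    previous = clicks_by_page(service, "2024-01-01", "2024-03-31")
    current = clicks_by_page(service, "2024-07-01", "2024-09-30")
    declining = []
    for page, old_clicks in previous.items():
        new_clicks = current.get(page, 0)
        if old_clicks >= 100 and new_clicks < old_clicks * 0.5:  # lost half its clicks
            declining.append((page, old_clicks, new_clicks))
    return sorted(declining, key=lambda row: row[1] - row[2], reverse=True)
```

Swap in whatever date ranges and drop thresholds match your audit cadence; the point is simply to turn “which posts lost traffic?” into a repeatable list.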

If you use keyword tracking tools like SE Ranking, you can also use the data it provides to come up with a list of blog posts that have dropped in the rankings. 

Make data-driven decisions to identify which blog posts would benefit from these updates – i.e., which ones still have the chance to recover their keyword rankings and organic traffic. 

With Google’s helpful content update, which emphasizes better user experiences, it’s crucial to ensure your content remains relevant, valuable, and up-to-date.

How To Update Old Blog Posts for SEO

Updating articles can be an involved process. Here are some tips and tactics to help you get it right.


Author’s Note: I have a Comprehensive On-Page SEO Checklist you might also be interested in following while you’re doing your content audit.

Conduct New Keyword Research

Updating your post without any guidance won’t get you far. Always do your keyword research to understand how users are searching for your given topic. 

Proper research can also show you relevant questions and sections that can be added to the blog post you’re updating or rewriting. Make sure to take a look at the People Also Ask (PAA) section that shows up when you search for your target keyword. Check out other websites like Answer The Public, Reddit, and Quora to see what users are looking for too. 

Look for New Ranking Opportunities

When trying to revive an old blog post for SEO, keep an eye out for new SEO opportunities (e.g., AI Overview, featured snippets, and related search terms) that didn’t exist when you first wrote your blog post. Some of these features can be targeted by the new content you will add to your post, if you write with the aim to be eligible for it. 

Rewrite Headlines and Meta Tags

If you want to attract new readers, consider updating your headlines and meta tags. 

Your headlines and meta tags should fulfill these three things:

  1. Reflect the rewritten and new content you’ve added to the blog post.
  2. Be optimized for the new keywords it’s targeting (if any).
  3. Appeal to your target audience – whose tastes may have changed since the blog post was originally written. 

Remember that your meta tags in particular act like a brief advertisement for your blog post, since this is what the user first sees when your blog post is shown in the search results page. 

Take a look at your blog post’s click-through rate on Google Search Console – if it falls below 2%, it’s definitely time for new meta tags. 
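
If you’d rather check this in bulk, here’s a small sketch that reads a CSV export of the Search Console “Pages” report (the file name and column names are assumptions based on a typical export, so check your own file) and flags posts with CTR below 2%:

```python
# A quick sketch: flag pages with CTR under 2% from a Search Console export.
# "Pages.csv" and its column names are assumptions; adjust to your export.
import pandas as pd

df = pd.read_csv("Pages.csv")  # typically: "Top pages", "Clicks", "Impressions", "CTR", "Position"
df["CTR"] = df["CTR"].str.rstrip("%").astype(float)  # "1.5%" -> 1.5

needs_new_meta = df[(df["CTR"] < 2.0) & (df["Impressions"] > 500)]
print(needs_new_meta.sort_values("Impressions", ascending=False).head(20))
```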

Replace Outdated Information and Statistics

Updating blog content with current studies and statistics enhances the relevance and credibility of your post. By providing up-to-date information, you help your audience make better, well-informed decisions, while also showing that your content is trustworthy.

Tighten or Expand Ideas

Your old content might be too short to provide real value to users – or you might have rambled on and on in your post. It’s important to evaluate whether you need to make your content more concise, or if you need to elaborate more. 

Keep the following tips in mind as you refine your blog post’s ideas:

  • Evaluate Helpfulness: Measure how well your content addresses your readers’ pain points. Aim to follow the E-E-A-T model (Experience, Expertise, Authoritativeness, Trustworthiness).
  • Identify Missing Context: Consider whether your content needs more detail or clarification. View it from your audience’s perspective and ask if the information is complete, or if more information is needed.
  • Interview Experts: Speak with industry experts or thought leaders to get fresh insights. This will help support your writing, and provide unique points that enhance the value of your content.
  • Use Better Examples: Examples help simplify complex concepts. Add new examples or improve existing ones to strengthen your points.
  • Add New Sections if Needed: If your content lacks depth or misses a key point, add new sections to cover these areas more thoroughly.
  • Remove Fluff: Every sentence should contribute to the overall narrative. Eliminate unnecessary content to make your post more concise.
  • Revise Listicles: Update listicle items based on SEO recommendations and content quality. Add or remove headings to stay competitive with higher-ranking posts.

Improve Visuals and Other Media

No doubt there are tons of old graphics and photos in your blog posts that could be improved with the tools we have today. Make sure all of the visuals used in your content are appealing and high quality. 

Update Internal and External Links

Are your internal and external links up to date? They need to be for your SEO and user experience. Outdated links can lead to broken pages or irrelevant content, frustrating readers and hurting your site’s performance.

You need to check for any broken links on your old blog posts, and update them ASAP. Updating your old blog posts can also lead to new opportunities to link internally to other blog posts and pages, which may not have been available when the post was originally published.
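
Here’s a minimal sketch, with placeholder URLs, that requests each link from an old post and reports anything broken:

```python
# A minimal sketch for checking links pulled from an old post: request each URL
# and report anything that errors or returns a 4xx/5xx status. URLs are placeholders.
import requests

links_to_check = [
    "https://www.example.com/old-internal-post/",
    "https://external-site.example/resource",
]

for url in links_to_check:
    try:
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            print(f"BROKEN ({response.status_code}): {url}")
    except requests.RequestException as exc:
        print(f"ERROR: {url} -> {exc}")
```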

Optimize for Conversions

When updating content, the ultimate goal is often to increase conversions. However, your conversion goals may have changed over the years. 


So here’s what you need to check in your updated blog post. First, does the call-to-action (CTA) still link to the products or services you want to promote? If not, update it to direct readers to the current solution or offer.

Second, consider where you can use different conversion strategies. Don’t just add a CTA at the end of the post. 

Last, make sure the blog post leverages product-led content. This helps you mention your products and services in a way that feels natural, without being too pushy. Being subtle can be a high-ROI tactic for updated posts.

Key Takeaway

Reviving old blog articles for SEO is a powerful strategy that can breathe new life into your content and boost your website’s visibility. Instead of solely focusing on creating new posts, taking the time to refresh existing content can yield impressive results, both in terms of traffic and conversions. 

By implementing these strategies, you can transform old blog posts into valuable resources that attract new readers and retain existing ones. So, roll up your sleeves, dive into your archives, and start updating your content today—your audience and search rankings will thank you!



How Compression Can Be Used To Detect Low Quality Pages



The concept of Compressibility as a quality signal is not widely known, but SEOs should be aware of it. Search engines can use web page compressibility to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords, making it useful knowledge for SEO.

Although the following research paper demonstrates a successful use of on-page features for detecting spam, the deliberate lack of transparency by search engines makes it difficult to say with certainty if search engines are applying this or similar techniques.

What Is Compressibility?

In computing, compressibility refers to how much a file (data) can be reduced in size while retaining essential information, typically to maximize storage space or to allow more data to be transmitted over the Internet.

TL;DR Of Compression

Compression replaces repeated words and phrases with shorter references, reducing the file size by significant margins. Search engines typically compress indexed web pages to maximize storage space, reduce bandwidth, and improve retrieval speed, among other reasons.

This is a simplified explanation of how compression works:

  • Identify Patterns:
    A compression algorithm scans the text to find repeated words, patterns, and phrases.
  • Replace Repeats With Shorter References:
    The “code” that stands in for the replaced words and phrases uses less data than the originals.
  • Shorter Codes Take Up Less Space:
    The codes and symbols use less storage space than the original words and phrases, which results in a smaller file size.
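
To see the effect for yourself, here’s a toy example using Python’s built-in zlib (the same DEFLATE family of compression that GZIP uses); a highly repetitive string compresses far more than varied text because the repeats become short references:

```python
# A toy illustration: repetitive text shrinks far more than varied text when
# compressed, because repeated phrases become short references.
import zlib

repetitive = ("best cheap hotels in Springfield " * 200).encode("utf-8")
varied = ("Compression replaces repeated words and phrases with shorter "
          "references, which is why redundant pages shrink so dramatically. "
          "Each sentence here is written once, so there is less to reuse. ").encode("utf-8")

for label, text in [("repetitive", repetitive), ("varied", varied)]:
    compressed = zlib.compress(text)
    print(f"{label}: {len(text)} bytes -> {len(compressed)} bytes "
          f"(ratio {len(text) / len(compressed):.1f})")
```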

A bonus effect of using compression is that it can also be used to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords.

Research Paper About Detecting Spam

This research paper is significant because it was authored by distinguished computer scientists known for breakthroughs in AI, distributed computing, information retrieval, and other fields.


Marc Najork

One of the co-authors of the research paper is Marc Najork, a prominent research scientist who currently holds the title of Distinguished Research Scientist at Google DeepMind. He’s a co-author of the papers for TW-BERT, has contributed research for increasing the accuracy of using implicit user feedback like clicks, and worked on creating improved AI-based information retrieval (DSI++: Updating Transformer Memory with New Documents), among many other major breakthroughs in information retrieval.

Dennis Fetterly

Another of the co-authors is Dennis Fetterly, currently a software engineer at Google. He is listed as a co-inventor in a patent for a ranking algorithm that uses links, and is known for his research in distributed computing and information retrieval.

Those are just two of the distinguished researchers listed as co-authors of the 2006 Microsoft research paper about identifying spam through on-page content features. Among the several on-page content features the research paper analyzes is compressibility, which they discovered can be used as a classifier for indicating that a web page is spammy.

Detecting Spam Web Pages Through Content Analysis

Although the research paper was authored in 2006, its findings remain relevant today.

Then, as now, people attempted to rank hundreds or thousands of location-based web pages that were essentially duplicate content aside from city, region, or state names. Then, as now, SEOs often created web pages for search engines by excessively repeating keywords within titles, meta descriptions, headings, internal anchor text, and within the content to improve rankings.

Section 4.6 of the research paper explains:


“Some search engines give higher weight to pages containing the query keywords several times. For example, for a given query term, a page that contains it ten times may be higher ranked than a page that contains it only once. To take advantage of such engines, some spam pages replicate their content several times in an attempt to rank higher.”

The research paper explains that search engines compress web pages and use the compressed version to reference the original web page. The authors note that excessive amounts of redundant words result in a higher level of compressibility. So they set about testing whether there’s a correlation between a high level of compressibility and spam.

They write:

“Our approach in this section to locating redundant content within a page is to compress the page; to save space and disk time, search engines often compress web pages after indexing them, but before adding them to a page cache.

…We measure the redundancy of web pages by the compression ratio, the size of the uncompressed page divided by the size of the compressed page. We used GZIP …to compress pages, a fast and effective compression algorithm.”
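
For illustration, here’s a minimal sketch of that measurement: compress a page’s HTML with GZIP and divide the uncompressed size by the compressed size. The URL is a placeholder, and the 4.0 threshold reflects the paper’s findings discussed below, not anything search engines have documented:

```python
# A minimal sketch of the measurement described in the paper:
# compression ratio = uncompressed size / compressed size, using GZIP.
import gzip
import requests

def compression_ratio(url: str) -> float:
    html = requests.get(url, timeout=10).content
    compressed = gzip.compress(html)
    return len(html) / len(compressed)

ratio = compression_ratio("https://www.example.com/some-page/")  # placeholder URL
print(f"Compression ratio: {ratio:.2f}")
if ratio >= 4.0:
    print("Highly redundant content - the kind the paper found was usually spam.")
```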

High Compressibility Correlates To Spam

The results of the research showed that web pages with a compression ratio of at least 4.0 tended to be low-quality web pages: spam. However, the correlation became less consistent at the highest compression ratios because there were fewer data points, making those results harder to interpret.

Figure 9: Prevalence of spam relative to compressibility of page.

The researchers concluded:


“70% of all sampled pages with a compression ratio of at least 4.0 were judged to be spam.”

But they also discovered that using the compression ratio by itself still resulted in false positives, where non-spam pages were incorrectly identified as spam:

“The compression ratio heuristic described in Section 4.6 fared best, correctly identifying 660 (27.9%) of the spam pages in our collection, while misidentifying 2,068 (12.0%) of all judged pages.

Using all of the aforementioned features, the classification accuracy after the ten-fold cross validation process is encouraging:

95.4% of our judged pages were classified correctly, while 4.6% were classified incorrectly.

More specifically, for the spam class, 1,940 out of the 2,364 pages were classified correctly. For the non-spam class, 14,440 out of the 14,804 pages were classified correctly. Consequently, 788 pages were classified incorrectly.”

The next section describes an interesting discovery about how to increase the accuracy of using on-page signals for identifying spam.

Insight Into Quality Rankings

The research paper examined multiple on-page signals, including compressibility. The researchers discovered that each individual signal (classifier) was able to find some spam, but that relying on any one signal on its own resulted in flagging non-spam pages as spam, commonly referred to as false positives.


The researchers made an important discovery that everyone interested in SEO should know, which is that using multiple classifiers increased the accuracy of detecting spam and decreased the likelihood of false positives. Just as important, the compressibility signal only identifies one kind of spam but not the full range of spam.

The takeaway is that compressibility is a good way to identify one kind of spam, but there are other kinds of spam that aren’t caught with this one signal.

This is the part that every SEO and publisher should be aware of:

“In the previous section, we presented a number of heuristics for assaying spam web pages. That is, we measured several characteristics of web pages, and found ranges of those characteristics which correlated with a page being spam. Nevertheless, when used individually, no technique uncovers most of the spam in our data set without flagging many non-spam pages as spam.

For example, considering the compression ratio heuristic described in Section 4.6, one of our most promising methods, the average probability of spam for ratios of 4.2 and higher is 72%. But only about 1.5% of all pages fall in this range. This number is far below the 13.8% of spam pages that we identified in our data set.”

So, even though compressibility was one of the better signals for identifying spam, it still was unable to uncover the full range of spam within the dataset the researchers used to test the signals.

Combining Multiple Signals

The above results indicated that individual signals of low quality are less accurate when used alone. So the researchers tested using multiple signals. What they discovered was that combining multiple on-page signals for detecting spam resulted in a better accuracy rate, with fewer pages misclassified as spam.


The researchers explained that they tested the use of multiple signals:

“One way of combining our heuristic methods is to view the spam detection problem as a classification problem. In this case, we want to create a classification model (or classifier) which, given a web page, will use the page’s features jointly in order to (correctly, we hope) classify it in one of two classes: spam and non-spam.”

These are their conclusions about using multiple signals:

“We have studied various aspects of content-based spam on the web using a real-world data set from the MSNSearch crawler. We have presented a number of heuristic methods for detecting content based spam. Some of our spam detection methods are more effective than others, however when used in isolation our methods may not identify all of the spam pages. For this reason, we combined our spam-detection methods to create a highly accurate C4.5 classifier. Our classifier can correctly identify 86.2% of all spam pages, while flagging very few legitimate pages as spam.”

Key Insight:

Misidentifying “very few legitimate pages as spam” was a significant breakthrough. The important insight that everyone involved with SEO should take away from this is that one signal by itself can result in false positives. Using multiple signals increases the accuracy.

What this means is that SEO tests of isolated ranking or quality signals will not yield reliable results that can be trusted for making strategy or business decisions.
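
To make the combined-signal idea concrete, here’s a hedged sketch of the general approach rather than the paper’s actual C4.5 implementation: scikit-learn’s DecisionTreeClassifier stands in for C4.5, and the feature values and labels are made-up placeholders purely for illustration.

```python
# A hedged sketch of multi-signal spam classification (not the paper's C4.5
# model): train a decision tree on several on-page features at once.
# Feature values and labels below are toy placeholders.
from sklearn.tree import DecisionTreeClassifier

# Each row: [compression_ratio, fraction_of_most_common_word, title_word_count]
X = [
    [1.8, 0.04, 8],   # typical page
    [2.1, 0.05, 10],  # typical page
    [4.5, 0.22, 25],  # redundant, keyword-stuffed page
    [5.2, 0.30, 30],  # redundant, keyword-stuffed page
]
y = [0, 0, 1, 1]  # 0 = non-spam, 1 = spam (toy labels)

classifier = DecisionTreeClassifier(max_depth=3, random_state=0)
classifier.fit(X, y)

print(classifier.predict([[4.8, 0.25, 28]]))  # the combined features drive the call
```

A real system would, of course, be trained on thousands of labeled pages and many more features; the point is that no single feature carries the decision on its own.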

Takeaways

We don’t know for certain whether search engines use compressibility, but it’s an easy-to-use signal that, combined with others, could catch simple kinds of spam, like thousands of city-name doorway pages with similar content. Even if search engines don’t use this signal, it shows how easy it is to catch that kind of search engine manipulation, and that it’s something search engines are well able to handle today.

Here are the key points of this article to keep in mind:

  • Doorway pages with duplicate content are easy to catch because they compress at a higher ratio than normal web pages.
  • Groups of web pages with a compression ratio above 4.0 were predominantly spam.
  • Negative quality signals used by themselves to catch spam can lead to false positives.
  • In this particular test, the researchers discovered that on-page negative quality signals only catch specific types of spam.
  • When used alone, the compressibility signal only catches redundancy-type spam, fails to detect other forms of spam, and leads to false positives.
  • Combining quality signals improves spam detection accuracy and reduces false positives.
  • Search engines today detect spam with higher accuracy through the use of AI like SpamBrain.

Read the research paper, which is linked from the Google Scholar page of Marc Najork:

Detecting spam web pages through content analysis



New Google Trends SEO Documentation



Google Search Central published new documentation on Google Trends, explaining how to use it for search marketing. The guide serves as an easy-to-understand introduction for newcomers and a helpful refresher for experienced search marketers and publishers.

The new guide has six sections:

  1. About Google Trends
  2. Tutorial on monitoring trends
  3. How to do keyword research with the tool
  4. How to prioritize content with Trends data
  5. How to use Google Trends for competitor research
  6. How to use Google Trends for analyzing brand awareness and sentiment

The section about monitoring trends advises that there are two kinds of rising trends – general and specific – which can be useful for developing content to publish on a site.

Using the Explore tool, you can leave the search box empty and view the current rising trends worldwide, or use a drop-down menu to focus on trends in a specific country. Users can further filter rising trends by time period, category, and type of search. The results show rising trends by topic and by keyword.

To search for specific trends, users just need to enter their queries and then filter them by country, time, category, and type of search.
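
For anyone who wants to pull similar data programmatically, here’s a minimal sketch using the unofficial pytrends library (a third-party package, not a Google product and not covered by the new documentation), with a placeholder keyword:

```python
# A minimal sketch using the unofficial, third-party pytrends library to pull
# interest-over-time data for a placeholder keyword, similar to the Explore tool.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["content marketing"], timeframe="today 12-m", geo="US")

interest = pytrends.interest_over_time()
print(interest.tail())  # weekly interest scores, 0-100
```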

The section called Content Calendar describes how to use Google Trends to understand which content topics to prioritize.


Google explains:

“Google Trends can be helpful not only to get ideas on what to write, but also to prioritize when to publish it. To help you better prioritize which topics to focus on, try to find seasonal trends in the data. With that information, you can plan ahead to have high quality content available on your site a little before people are searching for it, so that when they do, your content is ready for them.”

Read the new Google Trends documentation:

Get started with Google Trends
