14 Top Reasons Why Google Isn’t Indexing Your Site
Google won’t index your site? You’re not alone. There are many potential issues that may prevent Google from indexing web pages, and this article covers 14 of them.
Whether you want to know what to do if your site is not mobile-friendly or you’re facing complex indexing issues, we’ve got the information that you need.
Learn how to fix these common problems so that Google can start indexing your pages again.
1. You Don’t Have A Domain Name
The first reason why Google won’t index your site is that you don’t have a domain name. This could be because you’re using the wrong URL for the content, or it’s not set up correctly on WordPress.
If this is happening to you, there are some easy fixes.
Check whether your web address starts with something like “https://XXX.XXX…” – an IP address rather than a domain name – which means visitors might be typing in the IP address instead of a domain name and getting redirected to your website.
Also, your IP address redirection may not be configured correctly.
One way to fix this issue is to add 301 redirects from the WWW versions of pages back to their respective domains. If people get directed here when they search for something like [yoursitehere], we want them to land on your actual domain name.
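If you want a quick way to verify this, a short Python sketch like the one below (using the requests library; example.com is a placeholder for your own domain) can check whether each variation 301-redirects to your canonical domain:

```
import requests

# Placeholder canonical domain - replace with your own.
CANONICAL = "https://example.com/"
VARIANTS = [
    "http://example.com/",
    "http://www.example.com/",
    "https://www.example.com/",
]

for url in VARIANTS:
    # Don't follow redirects so we can inspect the first hop directly.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    target = resp.headers.get("Location", "(no redirect)")
    ok = resp.status_code == 301 and target.rstrip("/") == CANONICAL.rstrip("/")
    print(f"{url} -> {resp.status_code} {target} {'OK' if ok else 'CHECK THIS'}")
```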
It’s important to ensure that you have a domain name. This is non-negotiable if you want to rank and be competitive on Google.
2. Your Site Is Not Mobile-Friendly
A mobile-friendly website is critical to getting your site indexed, since Google introduced mobile-first indexing.
No matter how great the content on your website is, if it’s not optimized for viewing on a smartphone or tablet, you’re going to lose rankings and traffic.
Mobile optimization doesn’t have to be difficult – simply adding responsive design principles like fluid grids and CSS Media Queries can go a long way towards making sure that users will find what they need without experiencing any navigation problems.
The first thing I recommend doing with this issue is running your site through Google’s Mobile-Friendly Testing Tool.
If you don’t get a “Passed” result, you have some work to do to make your site mobile-friendly.
3. You’re Using A Coding Language In A Way That’s Too Complex for Google
Google won’t index your site if you’re using a coding language in a complex way. It doesn’t matter what the language is – it could be old or even current, like JavaScript – if it’s implemented in a way that causes crawling and indexing issues.
If this is a problem for you, I recommend running your site through Google’s Mobile-Friendly Testing Tool to see how mobile-friendly it really is (and make any fixes that might be needed).
If your website doesn’t pass their standards yet, they offer plenty of resources with guidelines about all manner of design quirks that can come up while designing a responsive webpage.
4. Your Site Loads Slowly
Google is less likely to feature slow-loading sites near the top of its results. If your site takes a long time to load, it may be due to many different factors.
It could even be that you have too much content on the page for a user’s browser to handle, or that you’re using an old-fashioned server with limited resources.
Solutions:
- Use Google Page Speed Insights – This is one of my favorite tools I’ve found in recent years and helps me identify what sections of the website need urgent attention when improving its speed. The tool analyzes your webpage against five performance best practices (that are crucial for having faster loading sites), such as minimizing connections, reducing payload size, leveraging browser caching, etc., and will give you suggestions about how you can improve each aspect of your site.
- Use a tool like webpagetest.org – This tool will let you know if your website is loading at a fast enough pace. It will also allow you to see, in detail, the specific elements on your site that are causing you issues. Their waterfall can help you identify significant page speed issues before they cause serious problems.
- Use Google’s Page Speed insights again – See where you can make improvements to load times on the site. For example, it might be worth exploring a new hosting plan with more resources (pure dedicated servers are far better than shared ones) or using a CDN service that will serve static content from its cache in multiple locations around the world.
Ideally, make sure your page speed score hits 70 or more – as close to 100 as possible.
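If you’d rather check this programmatically, here’s a rough sketch that queries the public PageSpeed Insights API (v5) for the Lighthouse performance score; the URL below is a placeholder, and heavy use may require an API key:

```
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def performance_score(url: str, strategy: str = "mobile") -> float:
    """Return the Lighthouse performance score (0-100) for a URL."""
    resp = requests.get(
        PSI_ENDPOINT, params={"url": url, "strategy": strategy}, timeout=60
    )
    resp.raise_for_status()
    data = resp.json()
    # Lighthouse reports the score as a 0-1 float; scale it to 0-100.
    return data["lighthouseResult"]["categories"]["performance"]["score"] * 100

score = performance_score("https://example.com/")  # placeholder URL
print(f"Performance score: {score:.0f} (aim for 70+, ideally closer to 100)")
```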
If you have any questions whatsoever regarding page speed, you may want to check out SEJ’s ebook on Core Web Vitals.
5. Your Site Has Minimal Well-Written Content
Well-written content is critical for succeeding on Google. If you have minimal content that doesn’t at least meet your competition’s levels, then you may have significant issues even breaking the top 50.
In our experience, content that’s less than 1,000 words does not do as well as content that is more than 1,000 words.
Are we a content writing company? No, we are not. Is word count a ranking factor? Also no.
But, when you’re judging what to do in the context of the competition, making sure your content is well-written is key to success.
The content on your site needs to be good and informative. It needs to answer questions, provide information, or have a point of view that’s different enough from other sites in the same niche as yours.
If it doesn’t meet those standards, Google will likely find another site with better quality content that does.
If you’re wondering why your website isn’t ranking highly in Google search results for some keywords despite following SEO best practices, like adding relevant keywords throughout the text, one culprit may be thin pages that contain little more than 100 words each (hint: it’s your content).
Thin pages can cause indexing issues because they don’t contain much unique content and don’t meet minimum quality levels compared to your competition.
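Word count isn’t a ranking factor, but it’s a quick proxy for spotting thin pages during an audit. Here’s a minimal sketch using requests and BeautifulSoup (the URLs and the 1,000-word threshold are illustrative, not rules):

```
import requests
from bs4 import BeautifulSoup

# Placeholder list of URLs to audit - swap in your own pages.
URLS = [
    "https://example.com/page-one/",
    "https://example.com/page-two/",
]

THIN_THRESHOLD = 1000  # a rough benchmark, not a ranking factor

for url in URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    # Strip script/style tags so only visible text is counted.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    words = len(soup.get_text(separator=" ").split())
    flag = "THIN?" if words < THIN_THRESHOLD else "ok"
    print(f"{words:>6} words  {flag:5}  {url}")
```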
6. Your Site Isn’t User-friendly And Engaging To Visitors
Having a user-friendly and engaging site is crucial to good SEO. Google will rank your site higher in search results if it’s easy for visitors to find what they’re looking for and navigate around the website without feeling frustrated or aggravated.
Google doesn’t want users spending too much time on a page that either takes forever to load, has confusing navigation, or is just plain hard to use because there are too many distractions (like ads above the fold).
If you only have one product listed per category instead of several, then this could be why your content isn’t ranking well with Google! It’s important not only to target keywords within each post but also to make sure that all related posts link back to other relevant articles/pages on the topic.
Do people like sharing your blog? Are readers being wowed by your content? If not, then this could be why Google has stopped indexing your site.
If someone links directly to one specific product page instead of using relative keywords like “buy,” “purchase” etc., then there might be something wrong with the way other pages link back to that particular product.
Make sure all products listed on category pages also exist within each respective sub-category so users can easily make purchases without having to navigate complex linking hierarchies.
7. You Have A Redirect Loop
Redirect loops are another common problem that prevents indexing. These are typically caused by a common typo and can be fixed with the following steps:
Find the page that is causing the redirect loop. If you are using WordPress, look at the HTML source of the affected page, or check your .htaccess file, and search for “Redirect 301” to see which page the traffic is being redirected from. It’s also worth repairing any 302 redirects and making sure they are set to 301.
Use “find” in Windows Explorer (or Command + F on a Mac) to search through all files containing “redirect” until you locate where the problem lies.
Fix any typos so there isn’t a duplicate URL address pointing back at itself, then correct the redirect so it points to the right destination.
Status codes such as 404s don’t always show up in Google Search Console. Using an external crawler like Screaming Frog, you can find the status codes for 404s and other errors.
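You can also trace a redirect chain yourself before resubmitting anything. This rough sketch (the URL is a placeholder) follows each hop with the requests library and flags loops or overly long chains:

```
import requests
from urllib.parse import urljoin

def trace_redirects(url: str, max_hops: int = 10) -> None:
    """Follow a redirect chain hop by hop and flag loops or long chains."""
    seen = {url}
    for hop in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        print(f"{hop}: {resp.status_code} {url}")
        if resp.status_code not in (301, 302, 303, 307, 308):
            return  # final destination reached
        url = urljoin(url, resp.headers["Location"])
        if url in seen:
            print(f"Redirect loop detected at {url}")
            return
        seen.add(url)
    print("Too many hops - possible redirect chain problem")

trace_redirects("https://example.com/old-page/")  # placeholder URL
```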
If all looks good, use Google Search Console’s URL Inspection tool to request a recrawl and resubmit the site for indexing. Wait a week or so before checking back in Google Search Console to see whether any new warnings have popped up that need attention.
Google doesn’t have time to update its index every day, but it does try every few hours, which means sometimes your content may not show up right away even though you know it’s been updated. Be patient! It should be indexed soon enough.
8. You’re Using Plugins That Block Googlebot from Crawling Your Site
One example of such a plugin is a robots.txt plugin. If you set your robots.txt file through this plugin to noindex your site, Googlebot will not be able to crawl it.
Set up a robots.txt file and do the following:
When you create this, set it as public so that crawlers can access it without restrictions.
Make sure your robots.txt file does not have the following lines:
User-agent: *
Disallow: /
The forward slash means that the robots.txt file is blocking all pages from the root folder of the site. You want to make sure that your robots.txt file looks more like this:
User-agent: *
Disallow:
With the disallow line being blank, this is telling crawlers that they can crawl and index every page on your site without restriction (assuming you don’t have specific pages marked as noindexed).
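A quick way to sanity-check your robots.txt is Python’s built-in robotparser module. This sketch (example.com is a placeholder) verifies that Googlebot is allowed to fetch a few key URLs:

```
from urllib.robotparser import RobotFileParser

# Placeholder domain - point this at your own robots.txt.
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

for url in [
    "https://example.com/",
    "https://example.com/blog/",
    "https://example.com/products/",
]:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{'ALLOWED' if allowed else 'BLOCKED'}  {url}")
```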
9. Your Site Uses JavaScript To Render Content
Using JavaScript by itself is not always a complex issue that causes indexing problems. There isn’t one single rule that says JS is the only thing that causes problems. You have to look at the individual site and diagnose issues to determine if this is a problem.
Where JS comes into play as an issue is when the JS prevents crawling by doing shady things – techniques that may be akin to cloaking.
If you have rendered HTML vs. raw HTML, and there’s a link in the raw HTML that isn’t in the rendered HTML, Google may not crawl or index that link. Diagnosing the differences between your rendered HTML and raw HTML is crucial because of these types of mistakes.
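One way to spot these gaps is to compare the links in the raw HTML with those in the rendered HTML. The sketch below assumes requests, BeautifulSoup, and Playwright are installed (the URL is a placeholder) and simply diffs the two sets of links:

```
import requests
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

URL = "https://example.com/"  # placeholder

def extract_links(html: str) -> set:
    soup = BeautifulSoup(html, "html.parser")
    return {a.get("href") for a in soup.find_all("a") if a.get("href")}

# Raw HTML: what a non-rendering fetch sees.
raw_links = extract_links(requests.get(URL, timeout=10).text)

# Rendered HTML: what the page looks like after JavaScript runs.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")
    rendered_links = extract_links(page.content())
    browser.close()

print("Links only in rendered HTML (JS-injected):", rendered_links - raw_links)
print("Links only in raw HTML (dropped after rendering):", raw_links - rendered_links)
```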
If you’re into hiding your JS and CSS files, don’t do it. Google has mentioned that they want to see all of your JS and CSS files when they crawl.
Google wants you to keep all JS and CSS crawlable. If you have any of those files blocked, you may want to unblock them and allow for full crawling to give Google the view of your site that they need.
10. You Did Not Add All Domain Properties To Google Search Console
If you have more than one variation of your domain, especially in a situation where you have migrated from http:// to https://, you must have all of your domain variations added and verified in Google Search Console.
It’s important to make sure that you’re not missing any of your domain variations when adding them to GSC.
Add them to GSC, and make sure that you verify your ownership of all domain properties to ensure that you are tracking the right ones.
For new sites that are just starting out, this is likely to not be an issue.
11. Your Meta Tags Are Set To Noindex, Nofollow
Sometimes, through sheer bad luck, meta tags are set to noindex, nofollow. For example, your site may have a link or page that was indexed by Google’s crawler and then deleted before the change to noindex, nofollow was set up correctly in your website’s backend.
As a result, that page may not have been re-indexed and if you’re using a plugin to block Google from crawling your site then that page may never be indexed again.
The solution is simple: change any meta tags with the words noindex,nofollow on them so they read index,follow instead.
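To find the offending pages in the first place, a small script can scan a list of URLs for robots meta tags. Here’s a minimal sketch using requests and BeautifulSoup (the URLs are placeholders; in practice you’d feed it your sitemap or a crawl export):

```
import requests
from bs4 import BeautifulSoup

# Placeholder URLs - in practice, feed this from your sitemap or crawler.
URLS = ["https://example.com/", "https://example.com/some-page/"]

for url in URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    robots_meta = soup.find("meta", attrs={"name": "robots"})
    content = robots_meta.get("content", "") if robots_meta else "(no robots meta tag)"
    flag = "FIX ME" if "noindex" in content.lower() else "ok"
    print(f"{flag:7} {url}  ->  {content}")
```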
If you have thousands of pages like this, however, you may have an uphill battle ahead of you. This is one of those times where you must grit your teeth and move forward with the grind.
In the end, your site’s performance will thank you.
12. You’re Not Using A Sitemap
You need to use a sitemap!
A sitemap is a list of all the pages on your site, and it’s also one way for Google to find out what content you have. Submitting one through Google Search Console will help ensure that every page gets crawled and indexed by Google.
If you don’t have a sitemap, Google is flying blind unless all of your pages are currently indexed and receiving traffic.
It’s important to note, however, that HTML sitemaps aren’t supported in Google Search Console. The preferred format for sitemaps nowadays is the XML sitemap.
You want to use your sitemap to tell Google what the important pages of your site are, and you want to submit it regularly for crawling and indexing.
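Most CMSs and SEO plugins will generate an XML sitemap for you, but for a hand-rolled site, a short script is enough. This sketch (the URLs are placeholders) writes a minimal sitemap.xml:

```
from datetime import date
from xml.sax.saxutils import escape

# Placeholder list of important URLs - generate this from your CMS or a crawl.
URLS = [
    "https://example.com/",
    "https://example.com/about/",
    "https://example.com/blog/my-best-post/",
]

entries = "\n".join(
    "  <url>\n"
    f"    <loc>{escape(url)}</loc>\n"
    f"    <lastmod>{date.today().isoformat()}</lastmod>\n"
    "  </url>"
    for url in URLS
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```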
13. You’ve Been Penalized By Google In The Past And Haven’t Cleaned Up Your Act Yet
Google has repeatedly stated that penalties can follow you.
If you’ve had a penalty before and have not cleaned up your act, then Google won’t index your site.
The answer here is pretty straightforward: if your site is penalized by Google, it may not be indexed again until the penalty is resolved, because penalties follow you around like an uninvited friend who drags their feet on the carpet as they walk through each room of your house.
You might be wondering why you should bother cleaning up the offending content since you’re already in trouble with search engines.
The thing is that even though there are ways out of being penalized, many people don’t know how or can no longer make those changes for whatever reason (maybe they sold their company). Some also think that just removing pages and slapping the old content onto a new site will work just as well (it doesn’t).
If you are penalized, the safest route is cleaning up your act from before entirely. You must have all-new content, and re-build the domain from the ground up, or do a complete content overhaul. Google explains that they expect you to take just as long getting out of a penalty as it did for you to get into one.
14. Your Technical SEO Is Terrible
Make no mistake: purchasing technical SEO from Fiverr.com is like purchasing a Lamborghini from a dollar store: you’re likely to get a counterfeit item rather than the real thing.
Doing technical SEO correctly is worth it: Google and your users will love you.
Let’s take a look at some common problems and solutions, and where technical SEO can help you.
Problem: Your site is not hitting Core Web Vitals numbers
Solution: Technical SEO will help you identify the issues with your Core Web Vitals and provide you with a path to correcting these issues. Don’t just put your faith in a strategic audit – this won’t always help you in these areas. You need a full technical SEO audit to unearth some of these issues, because they can range from the downright simple to the incredibly complex.
Problem: Your site has crawling and indexing issues
Solution: Crawling and indexing issues can be incredibly complex and require a seasoned technical SEO professional to uncover and repair them. You must identify them if you’re finding that you’re getting zero traction or no performance from your site.
Also, make sure that you haven’t accidentally ticked the “discourage search engines from indexing your website” box in WordPress.
Problem: Your site’s robots.txt file is somehow inadvertently blocking crawlers from critical files
Solution: Again, Technical SEO is here to rescue you from the abyss. Some sites are in so deep that you may not see a way out other than deleting the site and starting over. The nuclear option is not always the best option. This is where an experienced technical SEO professional is worth their weight in gold.
Identifying Website Indexing Issues Is A Challenge, But Well Worth Solving
Content, technical SEO, and links are all important to maintaining your site’s performance trajectory. But if your site has indexing issues, the other SEO elements will only get you so far.
Be sure to tick off all the boxes and make sure you really are getting your site out there in the most correct manner.
And don’t forget to optimize every page of your website for relevant keywords! Making sure your technical SEO is up to par is worth it as well because the better Google can crawl, index, and rank your site, the better your results will be.
Google (and your website’s traffic) will thank you.
Featured image: Shutterstock/Sammby
How to Revive an Old Blog Article for SEO
Quick question: What do you typically do with your old blog posts? Most likely, the answer is: Not much.
If that’s the case, you’re not alone. Many of us in SEO and content marketing tend to focus on continuously creating new content, rather than leveraging our existing blog posts.
However, here’s the reality—Google is becoming increasingly sophisticated in evaluating content quality, and we need to adapt accordingly. Just as it’s easier to encourage existing customers to make repeat purchases, updating old content on your website is a more efficient and sustainable strategy in the long run.
Ways to Optimize Older Content
Some of your old content might not be optimized for SEO very well, rank for irrelevant keywords, or drive no traffic at all. If the quality is still decent, however, you should be able to optimize it properly with little effort.
Refresh Content
If your blog post contains a specific year or mentions current events, it may become outdated over time. If the rest of the content is still relevant (like if it’s targeting an evergreen topic), simply updating the date might be all you need to do.
Rewrite Old Blog Posts
When the content quality is low (you might have greatly improved your writing skills since you wrote the post) but the potential is still there, there’s not much you can do apart from rewriting the old blog post completely.
This is not a waste—you’re saving time on brainstorming since the basic structure is already in place. Now, focus on improving the quality.
Delete Old Blog Posts
You might find a blog post that just seems unusable. Should you delete your old content? It depends. If it’s completely outdated, of low quality, and irrelevant to any valuable keywords for your website, it’s better to remove it.
Once you decide to delete the post, don’t forget to set up a 301 redirect to a related post or page, or to your homepage.
Promote Old Blog Posts
Sometimes all your content needs is a bit of promotion to start ranking and getting traffic again. Share it on your social media, link to it from a new post – do something to get it discoverable again to your audience. This can give it the boost it needs to attract organic links too.
Which Blog Posts Should You Update?
Deciding when to update or rewrite blog posts is a decision that relies on one important thing: a content audit.
Use your Google Analytics to find out which blog posts used to drive tons of traffic, but no longer have the same reach. You can also use Google Search Console to find out which of your blog posts have lost visibility in comparison to previous months. I have a guide on website analysis using Google Analytics and Google Search Console you can follow.
If you use keyword tracking tools like SE Ranking, you can also use the data it provides to come up with a list of blog posts that have dropped in the rankings.
Make data-driven decisions to identify which blog posts would benefit from these updates – i.e., which ones still have the chance to recover their keyword rankings and organic traffic.
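If you want to pull this data programmatically rather than through the UI, here’s a rough sketch using the Search Console API via the google-api-python-client library. It assumes you’ve already completed the OAuth setup; the property URL, date ranges, and token.json path are placeholders:

```
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SITE = "https://example.com/"  # your verified Search Console property
creds = Credentials.from_authorized_user_file("token.json")  # placeholder path

def clicks_by_page(service, start_date, end_date):
    body = {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["page"],
        "rowLimit": 1000,
    }
    rows = service.searchanalytics().query(siteUrl=SITE, body=body).execute().get("rows", [])
    return {row["keys"][0]: row["clicks"] for row in rows}

service = build("searchconsole", "v1", credentials=creds)
previous = clicks_by_page(service, "2023-01-01", "2023-03-31")  # placeholder dates
current = clicks_by_page(service, "2024-01-01", "2024-03-31")

# Pages that lost the most clicks between the two periods are update candidates.
declines = sorted(((previous[p] - current.get(p, 0), p) for p in previous), reverse=True)
for lost, page in declines[:20]:
    print(f"-{lost:>6.0f} clicks  {page}")
```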
With Google’s helpful content update, which emphasizes better user experiences, it’s crucial to ensure your content remains relevant, valuable, and up-to-date.
How To Update Old Blog Posts for SEO
Updating articles can be an involved process. Here are some tips and tactics to help you get it right.
Author’s Note: I have a Comprehensive On-Page SEO Checklist you might also be interested in following while you’re doing your content audit.
Conduct New Keyword Research
Updating your post without any guide won’t get you far. Always do your keyword research to understand how users are searching for your given topic.
Proper research can also show you relevant questions and sections that can be added to the blog post you’re updating or rewriting. Make sure to take a look at the People Also Ask (PAA) section that shows up when you search for your target keyword. Check out other websites like Answer The Public, Reddit, and Quora to see what users are looking for too.
Look for New Ranking Opportunities
When trying to revive an old blog post for SEO, keep an eye out for new SEO opportunities (e.g., AI Overview, featured snippets, and related search terms) that didn’t exist when you first wrote your blog post. Some of these features can be targeted by the new content you will add to your post, if you write with the aim to be eligible for it.
Rewrite Headlines and Meta Tags
If you want to attract new readers, consider updating your headlines and meta tags.
Your headlines and meta tags should fulfill these three things:
- Reflect the rewritten and new content you’ve added to the blog post.
- Be optimized for the new keywords it’s targeting (if any).
- Appeal to your target audience – who may have changed tastes from when the blog post was originally made.
Remember that your meta tags in particular act like a brief advertisement for your blog post, since this is what the user first sees when your blog post is shown in the search results page.
Take a look at your blog post’s click-through rate on Google Search Console – if it falls below 2%, it’s definitely time for new meta tags.
Replace Outdated Information and Statistics
Updating blog content with current studies and statistics enhances the relevance and credibility of your post. By providing up-to-date information, you help your audience make better, well-informed decisions, while also showing that your content is trustworthy.
Tighten or Expand Ideas
Your old content might be too short to provide real value to users – or you might have rambled on and on in your post. It’s important to evaluate whether you need to make your content more concise, or if you need to elaborate more.
Keep the following tips in mind as you refine your blog post’s ideas:
- Evaluate Helpfulness: Measure how well your content addresses your readers’ pain points. Aim to follow the E-E-A-T model (Experience, Expertise, Authoritativeness, Trustworthiness).
- Identify Missing Context: Consider whether your content needs more detail or clarification. View it from your audience’s perspective and ask if the information is complete, or if more information is needed.
- Interview Experts: Speak with industry experts or thought leaders to get fresh insights. This will help support your writing, and provide unique points that enhance the value of your content.
- Use Better Examples: Examples help simplify complex concepts. Add new examples or improve existing ones to strengthen your points.
- Add New Sections if Needed: If your content lacks depth or misses a key point, add new sections to cover these areas more thoroughly.
- Remove Fluff: Every sentence should contribute to the overall narrative. Eliminate unnecessary content to make your post more concise.
- Revise Listicles: Update listicle items based on SEO recommendations and content quality. Add or remove headings to stay competitive with higher-ranking posts.
Improve Visuals and Other Media
No doubt that there are tons of old graphics and photos in your blog posts that can be improved with the tools we have today. Make sure all of the visuals used in your content are appealing and high quality.
Update Internal and External Links
Are your internal and external links up to date? They need to be for your SEO and user experience. Outdated links can lead to broken pages or irrelevant content, frustrating readers and hurting your site’s performance.
You need to check for any broken links on your old blog posts, and update them ASAP. Updating your old blog posts can also lead to new opportunities to link internally to other blog posts and pages, which may not have been available when the post was originally published.
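Here’s a minimal sketch of a broken-link check for a single post, using requests and BeautifulSoup (the post URL is a placeholder); it reports any link that errors out or returns a 4xx/5xx status:

```
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

POST_URL = "https://example.com/old-blog-post/"  # placeholder

soup = BeautifulSoup(requests.get(POST_URL, timeout=10).text, "html.parser")

for a in soup.select("a[href]"):
    link = urljoin(POST_URL, a["href"])
    if not link.startswith("http"):
        continue  # skip mailto:, tel:, in-page anchors, etc.
    try:
        # HEAD is faster; switch to GET if a server rejects it with a 405.
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status is None or status >= 400:
        print(f"BROKEN ({status}): {link}")
```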
Optimize for Conversions
When updating content, the ultimate goal is often to increase conversions. However, your conversion goals may have changed over the years.
So here’s what you need to check in your updated blog post. First, does the call-to-action (CTA) still link to the products or services you want to promote? If not, update it to direct readers to the current solution or offer.
Second, consider where you can use different conversion strategies. Don’t just add a CTA at the end of the post.
Last, make sure that the blog post leverages product-led content. It’s going to help you mention your products and services in a way that feels natural, without being too pushy. Being subtle can be a high ROI tactic for updated posts.
Key Takeaway
Reviving old blog articles for SEO is a powerful strategy that can breathe new life into your content and boost your website’s visibility. Instead of solely focusing on creating new posts, taking the time to refresh existing content can yield impressive results, both in terms of traffic and conversions.
By implementing these strategies, you can transform old blog posts into valuable resources that attract new readers and retain existing ones. So, roll up your sleeves, dive into your archives, and start updating your content today—your audience and search rankings will thank you!
How Compression Can Be Used To Detect Low Quality Pages
The concept of Compressibility as a quality signal is not widely known, but SEOs should be aware of it. Search engines can use web page compressibility to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords, making it useful knowledge for SEO.
Although the following research paper demonstrates a successful use of on-page features for detecting spam, the deliberate lack of transparency by search engines makes it difficult to say with certainty if search engines are applying this or similar techniques.
What Is Compressibility?
In computing, compressibility refers to how much a file (data) can be reduced in size while retaining essential information, typically to maximize storage space or to allow more data to be transmitted over the Internet.
TL/DR Of Compression
Compression replaces repeated words and phrases with shorter references, reducing the file size by significant margins. Search engines typically compress indexed web pages to maximize storage space, reduce bandwidth, and improve retrieval speed, among other reasons.
This is a simplified explanation of how compression works:
- Identify Patterns: A compression algorithm scans the text to find repeated words, patterns, and phrases.
- Shorter Codes Take Up Less Space: The codes and symbols use less storage space than the original words and phrases, which results in a smaller file size.
- Shorter References Use Fewer Bits: The “code” that essentially symbolizes the replaced words and phrases uses less data than the originals.
A bonus effect of using compression is that it can also be used to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords.
Research Paper About Detecting Spam
This research paper is significant because it was authored by distinguished computer scientists known for breakthroughs in AI, distributed computing, information retrieval, and other fields.
Marc Najork
One of the co-authors of the research paper is Marc Najork, a prominent research scientist who currently holds the title of Distinguished Research Scientist at Google DeepMind. He’s a co-author of the papers for TW-BERT, has contributed research for increasing the accuracy of using implicit user feedback like clicks, and worked on creating improved AI-based information retrieval (DSI++: Updating Transformer Memory with New Documents), among many other major breakthroughs in information retrieval.
Dennis Fetterly
Another of the co-authors is Dennis Fetterly, currently a software engineer at Google. He is listed as a co-inventor in a patent for a ranking algorithm that uses links, and is known for his research in distributed computing and information retrieval.
Those are just two of the distinguished researchers listed as co-authors of the 2006 Microsoft research paper about identifying spam through on-page content features. Among the several on-page content features the research paper analyzes is compressibility, which they discovered can be used as a classifier for indicating that a web page is spammy.
Detecting Spam Web Pages Through Content Analysis
Although the research paper was authored in 2006, its findings remain relevant today.
Then, as now, people attempted to rank hundreds or thousands of location-based web pages that were essentially duplicate content aside from city, region, or state names. Then, as now, SEOs often created web pages for search engines by excessively repeating keywords within titles, meta descriptions, headings, internal anchor text, and within the content to improve rankings.
Section 4.6 of the research paper explains:
“Some search engines give higher weight to pages containing the query keywords several times. For example, for a given query term, a page that contains it ten times may be higher ranked than a page that contains it only once. To take advantage of such engines, some spam pages replicate their content several times in an attempt to rank higher.”
The research paper explains that search engines compress web pages and use the compressed version to reference the original web page. They note that excessive amounts of redundant words result in a higher level of compressibility. So they set about testing whether there’s a correlation between a high level of compressibility and spam.
They write:
“Our approach in this section to locating redundant content within a page is to compress the page; to save space and disk time, search engines often compress web pages after indexing them, but before adding them to a page cache.
…We measure the redundancy of web pages by the compression ratio, the size of the uncompressed page divided by the size of the compressed page. We used GZIP …to compress pages, a fast and effective compression algorithm.”
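As an illustration of that ratio, here’s a small sketch that fetches a page, gzips it, and divides the uncompressed size by the compressed size (the URLs and the 4.0 threshold from the paper are for demonstration only; this won’t exactly reproduce the researchers’ pipeline):

```
import gzip
import requests

def compression_ratio(url: str) -> float:
    """Uncompressed page size divided by its gzip-compressed size."""
    html = requests.get(url, timeout=10).content
    return len(html) / len(gzip.compress(html))

# Placeholder URLs - compare a normal page against a repetitive doorway-style page.
for url in ["https://example.com/normal-page/", "https://example.com/doorway-page/"]:
    ratio = compression_ratio(url)
    note = "suspiciously redundant" if ratio >= 4.0 else "typical"
    print(f"{ratio:.2f}  {note}  {url}")
```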
High Compressibility Correlates To Spam
The results of the research showed that web pages with at least a compression ratio of 4.0 tended to be low quality web pages, spam. However, the highest rates of compressibility became less consistent because there were fewer data points, making it harder to interpret.
Figure 9: Prevalence of spam relative to compressibility of page.
The researchers concluded:
“70% of all sampled pages with a compression ratio of at least 4.0 were judged to be spam.”
But they also discovered that using the compression ratio by itself still resulted in false positives, where non-spam pages were incorrectly identified as spam:
“The compression ratio heuristic described in Section 4.6 fared best, correctly identifying 660 (27.9%) of the spam pages in our collection, while misidentifying 2,068 (12.0%) of all judged pages.
Using all of the aforementioned features, the classification accuracy after the ten-fold cross validation process is encouraging:
95.4% of our judged pages were classified correctly, while 4.6% were classified incorrectly.
More specifically, for the spam class 1,940 out of the 2,364 pages, were classified correctly. For the non-spam class, 14,440 out of the 14,804 pages were classified correctly. Consequently, 788 pages were classified incorrectly.”
The next section describes an interesting discovery about how to increase the accuracy of using on-page signals for identifying spam.
Insight Into Quality Rankings
The research paper examined multiple on-page signals, including compressibility. They discovered that each individual signal (classifier) was able to find some spam, but that relying on any one signal on its own resulted in flagging non-spam pages as spam, which are commonly referred to as false positives.
The researchers made an important discovery that everyone interested in SEO should know, which is that using multiple classifiers increased the accuracy of detecting spam and decreased the likelihood of false positives. Just as important, the compressibility signal only identifies one kind of spam but not the full range of spam.
The takeaway is that compressibility is a good way to identify one kind of spam, but other kinds of spam aren’t caught by this one signal alone.
This is the part that every SEO and publisher should be aware of:
“In the previous section, we presented a number of heuristics for assaying spam web pages. That is, we measured several characteristics of web pages, and found ranges of those characteristics which correlated with a page being spam. Nevertheless, when used individually, no technique uncovers most of the spam in our data set without flagging many non-spam pages as spam.
For example, considering the compression ratio heuristic described in Section 4.6, one of our most promising methods, the average probability of spam for ratios of 4.2 and higher is 72%. But only about 1.5% of all pages fall in this range. This number is far below the 13.8% of spam pages that we identified in our data set.”
So, even though compressibility was one of the better signals for identifying spam, it still was unable to uncover the full range of spam within the dataset the researchers used to test the signals.
Combining Multiple Signals
The above results indicated that individual signals of low quality are less accurate. So they tested using multiple signals. What they discovered was that combining multiple on-page signals for detecting spam resulted in a better accuracy rate, with fewer pages misclassified as spam.
The researchers explained that they tested the use of multiple signals:
“One way of combining our heuristic methods is to view the spam detection problem as a classification problem. In this case, we want to create a classification model (or classifier) which, given a web page, will use the page’s features jointly in order to (correctly, we hope) classify it in one of two classes: spam and non-spam.”
These are their conclusions about using multiple signals:
“We have studied various aspects of content-based spam on the web using a real-world data set from the MSNSearch crawler. We have presented a number of heuristic methods for detecting content based spam. Some of our spam detection methods are more effective than others, however when used in isolation our methods may not identify all of the spam pages. For this reason, we combined our spam-detection methods to create a highly accurate C4.5 classifier. Our classifier can correctly identify 86.2% of all spam pages, while flagging very few legitimate pages as spam.”
Key Insight:
Misidentifying “very few legitimate pages as spam” was a significant breakthrough. The important insight that everyone involved with SEO should take away from this is that one signal by itself can result in false positives. Using multiple signals increases the accuracy.
What this means is that SEO tests of isolated ranking or quality signals will not yield reliable results that can be trusted for making strategy or business decisions.
Takeaways
We don’t know for certain whether compressibility is used by search engines, but it’s an easy-to-use signal that, combined with others, could be used to catch simple kinds of spam, like thousands of city-name doorway pages with similar content. Yet even if the search engines don’t use this signal, it does show how easy it is to catch that kind of search engine manipulation, and that it’s something search engines are well able to handle today.
Here are the key points of this article to keep in mind:
- Doorway pages with duplicate content are easy to catch because they compress at a higher ratio than normal web pages.
- Groups of web pages with a compression ratio above 4.0 were predominantly spam.
- Negative quality signals used by themselves to catch spam can lead to false positives.
- In this particular test, they discovered that on-page negative quality signals only catch specific types of spam.
- When used alone, the compressibility signal only catches redundancy-type spam, fails to detect other forms of spam, and leads to false positives.
- Combining quality signals improves spam detection accuracy and reduces false positives.
- Search engines today have a higher accuracy of spam detection with the use of AI like Spam Brain.
Read the research paper, which is linked from the Google Scholar page of Marc Najork:
Detecting spam web pages through content analysis
Featured Image by Shutterstock/pathdoc
New Google Trends SEO Documentation
Google Search Central published new documentation on Google Trends, explaining how to use it for search marketing. This guide serves as an easy to understand introduction for newcomers and a helpful refresher for experienced search marketers and publishers.
The new guide has six sections:
- About Google Trends
- Tutorial on monitoring trends
- How to do keyword research with the tool
- How to prioritize content with Trends data
- How to use Google Trends for competitor research
- How to use Google Trends for analyzing brand awareness and sentiment
The section about monitoring trends advises there are two kinds of rising trends, general and specific trends, which can be useful for developing content to publish on a site.
Using the Explore tool, you can leave the search box empty and view the current rising trends worldwide or use a drop down menu to focus on trends in a specific country. Users can further filter rising trends by time periods, categories and the type of search. The results show rising trends by topic and by keywords.
To search for specific trends, users just need to enter their queries and then filter them by country, time, categories, and type of search.
The section called Content Calendar describes how to use Google Trends to understand which content topics to prioritize.
Google explains:
“Google Trends can be helpful not only to get ideas on what to write, but also to prioritize when to publish it. To help you better prioritize which topics to focus on, try to find seasonal trends in the data. With that information, you can plan ahead to have high quality content available on your site a little before people are searching for it, so that when they do, your content is ready for them.”
Read the new Google Trends documentation:
Get started with Google Trends
Featured Image by Shutterstock/Luis Molinero