SEO
You Can’t Compare Backlink Counts in SEO Tools: Here’s Why
Google knows about 300T pages on the web. It’s doubtful they crawl all of those, and according to documents from their antitrust trial, we learned they indexed only around 400B. That’s about 0.133% of the pages they know about, roughly 1 out of every 752 pages.
For Ahrefs, we chose to store about 340B pages in our index as of December 2023.
At a certain point, the quality of the web drops off. There are lots of spam and junk pages that just add noise to the data without adding any value to the index.
Large parts of the web are also duplicate content, ~60% according to Google’s Gary Illyes. Most of this is technical duplication caused by different systems. If you don’t account for this duplication, it wastes resources and adds noise to the data.
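To illustrate the idea, here’s a minimal Python sketch of exact-duplicate detection via content fingerprinting. The helper names are mine, and real canonicalization systems also use canonical tags, URL normalization, and near-duplicate hashing such as SimHash; this only catches pages whose text is identical after trivial normalization.

```python
import hashlib

def content_fingerprint(html_text):
    """Fingerprint for exact-duplicate detection: collapse case and
    whitespace, then hash. (Hypothetical helper for illustration.)"""
    normalized = " ".join(html_text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def dedupe(pages):
    """Keep one URL per unique fingerprint. pages: {url: text}."""
    seen = {}
    for url, text in pages.items():
        seen.setdefault(content_fingerprint(text), url)
    return set(seen.values())
```

An index that skips this step stores and counts every copy, which is one reason raw page and link totals differ so much between tools.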
When building an index of the web, companies have to make many choices around crawling, parsing, and indexing data. While there’s going to be a lot of overlap between indexes, there’s also going to be some differences depending on each company’s decisions.
Comparing link indexes is hard because of all the different choices the various tools have made. I try my best to make some comparisons more fair, but even for a few sites I’m telling you that I don’t want to put in all of the work needed to make an accurate comparison, much less do it for an entire study. You’ll see why I say this later when you read what it would take to compare the data accurately.
However, I did run some tests on a sample of sites and I’ll show you how to check the data yourself. I also pulled some fairly large 3rd party data samples for some additional validation.
Let’s dive in.
If you just looked at the dashboard numbers for links and referring domains (RDs) in different tools, you might see completely different things.
For example, here’s what we count in Ahrefs:
- Live links
- Live RDs
- 6 months of data
In Semrush, here’s what they count:
- Live + dead links
- Live + dead RDs
- 6 months of data + a bit more*
*By a bit more, what I mean is that their data goes back 6 months and to the start of the previous month. So, for instance, if it’s the 15th of the month, they would actually have about 6.5 months of data instead of 6 months of data. If it’s the last week of the month, they may have close to 7 months of data instead of 6.
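To make the window concrete, here’s a small sketch based on my reading of that behavior. The exact boundary is an assumption on my part, since Semrush doesn’t document it; the point is just that the window start snaps to the 1st of a month, so the window length drifts with the day of the month.

```python
from datetime import date

def window_start(today):
    """Approximate start of Semrush's lookback window: the 1st of
    the month six months back. This is my reading of the observed
    behavior, not anything Semrush documents."""
    month, year = today.month - 6, today.year
    if month < 1:
        month, year = month + 12, year - 1
    return date(year, month, 1)

def window_months(today):
    """Window length in (approximate) months."""
    return (today - window_start(today)).days / 30.44

# Mid-month: ~6.5 months of data instead of 6.
print(round(window_months(date(2024, 3, 15)), 1))  # → 6.5
```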
This may not seem like much, but it can inflate the numbers significantly, especially when dead links and dead RDs are still being counted.
I don’t think SEOs want to see a number that includes dead links. I don’t see a good reason to count them, either, other than to have bigger and potentially misleading numbers.
I say this because I’ve called Semrush out on this type of biased comparison on Twitter before, but I stopped arguing when I realized they didn’t really want the comparison to be fair; they just wanted to win it.
There are some ways you can compare the data to get somewhat similar time periods and only look at active links.
If you filter the Semrush backlinks report for “Active” links, you’ll have a somewhat more accurate number to compare against the Ahrefs dashboard number.
Alternatively, if you use the “Show history: Last 6 months” option in the Ahrefs backlink report, this would include lost links and be a fairer comparison to Semrush’s dashboard number.
Here’s an example of how to get more similar data:
- Semrush Dashboard: 5.1K vs. Ahrefs (6-month date comparison): 5.6K
- Semrush All Links: 5.1K vs. Ahrefs (6-month date comparison): 5.6K
- Semrush Active Links: 2.9K vs. Ahrefs Dashboard: 3.5K vs. Ahrefs (no date comparison): 3.5K
What you should not compare is Semrush Dashboard and Ahrefs Dashboard numbers. The number in Semrush (5.1K) includes dead links. The number in Ahrefs (3.5K) doesn’t; it’s only live links!
Note that the time periods may not be exactly the same, as mentioned before, because of the extra days in the Semrush data. You could look at what day their data stops and select that exact day in the Ahrefs data to get an even more accurate, though still imperfect, comparison.
I don’t think the comparison works at all with larger domains because of an issue in Semrush. Here’s what I saw for semrush.com:
- Semrush Dashboard: 48.7M vs. Ahrefs (6-month date comparison): 24.7M
- Semrush All Links: 48.7M vs. Ahrefs (6-month date comparison): 24.7M
- Semrush Active Links: 1.8M vs. Ahrefs Dashboard: 15.9M vs. Ahrefs (no date comparison): 15.9M
So that’s 1.8M active links in Semrush vs. 15.9M active links in Ahrefs. But as I said, I don’t think this is a fair comparison; Semrush seems to have an issue with larger sites. A warning in Semrush says, “Due to the size of the analyzed domain, only the most relevant links will be shown.” It’s possible they’re not showing all the links, but that’s suspicious because they do show the larger total for all links, and I can filter those links in other ways.
I can also sort normally by the oldest last-seen date and see all the links, but when I filter by last seen + active, I see only 608K links. I can’t export more than 50K rows from their system to investigate further, but something is fishy here.
More link differences
The above comparison wouldn’t be enough to make an accurate comparison. There are still a number of differences and problems that make any sort of comparison troublesome.
This tweet is as relevant as the day I wrote it:
It’s almost impossible to do a fair link comparison
Here’s how we count links, but it’s worth mentioning that each tool counts links in different ways.
To recap some of the main points, here are some things we do:
- We store some links inserted with JavaScript; no one else does this. We render ~250M pages a day.
- We have a canonicalization system in place that others may not, which means we shouldn’t count as many duplicates as others do.
- Our crawler tries to be intelligent about what to prioritize for crawling to avoid spam and things like infinite crawl paths.
- We count one link per page; others may count multiple links per page.
These differences make a fair link comparison nearly impossible to do.
How to see where the biggest link differences are
The easiest way to see the biggest discrepancies in link totals is to go to the Referring Domains reports in the tools and sort by the number of links. You can use the dropdowns to see what kinds of issues each index may have with overcounting some links. In many cases, you’re likely to see millions of links from the same site for some of the reasons mentioned above.
For example, when I looked in Semrush I found blogspot links that they claimed to have recently checked, but these are showing 404 when I visit them. Semrush still counts them for some reason. I saw this issue on multiple domains I checked. This is one of those pages:
Lots of links counted as live are actually dead
Seeing the dead link above counted in the total made me want to check how many dead links were in each index. I ran crawls on the list of the most recent live links in each tool to see how many were actually still live.
For Semrush, 49.6% of the links they said were live were actually dead. Some churn is expected as the web changes, but half the links dying within 6 months suggests either that many of these are on the spammier, less stable part of the web, or that they’re not re-crawling links often. For context, the same check for Ahrefs came back 17.2% dead.
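If you want to reproduce this kind of check, a simplified sketch looks like the following. The helper names are mine, and a real crawl should also respect robots.txt, rate-limit requests, follow redirect chains, and detect soft 404s; here anything that errors out or returns a 4xx/5xx status counts as dead.

```python
import urllib.error
import urllib.request

def is_dead(status, error=None):
    """Classify a fetch result: network errors and 4xx/5xx statuses
    count as dead. (Simplified; soft 404s are not detected.)"""
    if error is not None:
        return True
    return status >= 400

def check_url(url, timeout=10):
    """Fetch one URL from an exported 'live links' list and report
    whether it's actually dead."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return url, is_dead(resp.status)
    except urllib.error.HTTPError as e:
        return url, is_dead(e.code)
    except Exception as e:
        return url, is_dead(None, error=e)
```

Run `check_url` over each tool’s export of its most recent “live” links and compare the dead percentages.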
It’s going to get more complicated to compare these numbers
Ahrefs recently added a filter for “Best links” which you can configure to filter out noise. For instance, if you want to remove all blogspot.com blogs from the report, you can add a filter for it.
This means you’ll only see links you consider important in the reports. This can also be applied to the main dashboard numbers and charts now. If the filter is active, people will see different numbers depending on their settings.
You would think this is straightforward, but it’s not.
Solving for all the issues is a lot of work
There are a lot of different things you’d have to solve for here:
- The extra days in Semrush’s data that you’ll have to remove or add to the Ahrefs number.
- Remember that Semrush also includes dead RDs in their dashboard numbers. So you need to filter their RD report to just “Active” to get the live ones.
- Remember that half the links in the test of Semrush live data were actually dead, so I would suspect that a number of the RDs are actually lost as well. You could possibly look for domains with low link counts and just crawl the listed links from those to remove most of the dead ones.
- After all that, you’re still going to need to strip the domains down to the root domain only to account for the differences in what each tool may be counting as a domain.
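For that last step, here’s a rough sketch of stripping a URL down to its root domain. The multi-part-suffix set below is a tiny illustrative sample; a proper implementation should use the full Public Suffix List (for example, via the tldextract package).

```python
from urllib.parse import urlsplit

# Tiny sample of multi-part public suffixes, for illustration only.
# Use the full Public Suffix List in real comparisons.
MULTI_PART_SUFFIXES = {"co.uk", "com.au", "co.jp", "com.br"}

def root_domain(url):
    """Reduce a URL or bare hostname to its registrable root domain."""
    host = urlsplit(url if "//" in url else "//" + url).hostname or ""
    parts = host.lower().lstrip(".").split(".")
    if len(parts) >= 3 and ".".join(parts[-2:]) in MULTI_PART_SUFFIXES:
        return ".".join(parts[-3:])
    return ".".join(parts[-2:]) if len(parts) >= 2 else host
```

Normalizing both exports this way means a tool that counts `blog.example.com` and `www.example.com` separately no longer gets credit for two “domains.”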
What is a domain?
Ahrefs currently shows 206.3M RDs in our database and Semrush shows 1.6B. Domains are being counted in extremely different ways between the tools.
According to the major sources who look at these kinds of things, the number of domains on the internet seems to be between 269M–359M and the number of websites between 1.1B–1.5B, with 191M–200M of them being active.
Semrush’s number of RDs is higher than the number of domains that exist.
I believe Semrush may be confusing different terms. Their numbers match fairly closely with the number of websites on the internet, but that’s not the same as the number of domains. Plus, many of those websites aren’t even live.
Part of our process is dropping spam domains, and we also treat some subdomains as different domains. We come up close to the numbers from other 3rd party studies for the number of active websites and domains, whereas Semrush seems to come in closer to the total number of websites (including inactive ones).
We’re going to simplify our methodology soon so that one domain is actually just one domain. This is going to make our RD numbers go down, but be more accurate to what people actually consider a domain. It’s also going to make for an even bigger disparity in the numbers between the tools.
I ran some quality checks for both the first-seen and last-seen link data. On every site I checked, Ahrefs picked up more links first and on most Ahrefs updated the links more recently than Semrush. Don’t just believe me, though; check for yourself.
Comparing this is biased no matter how you look at it, because our data is more granular and includes hours and minutes instead of just the day. Keeping the hours and minutes biases the comparison one way; stripping them biases it the other. You’ll have to match the URLs, check which date comes first (or whether it’s a tie), and then count the totals. Each dataset will contain some links the other doesn’t, so you’ll need to do the lookups on both sets of data.
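Assuming you’ve exported `{url: first_seen}` mappings from each tool’s backlinks report (CSV parsing omitted), the matching-and-counting step might look like this sketch, which truncates to the day so the extra granularity doesn’t bias either side:

```python
from datetime import datetime

def compare_first_seen(ahrefs, semrush):
    """Compare first-seen dates for URLs present in both exports.
    Timestamps are truncated to the day so that extra hour/minute
    granularity in one dataset doesn't bias the result.
    Inputs: {url: ISO-8601 timestamp} from each tool's export."""
    wins = {"ahrefs": 0, "semrush": 0, "tie": 0}
    for url in ahrefs.keys() & semrush.keys():
        a = datetime.fromisoformat(ahrefs[url]).date()
        s = datetime.fromisoformat(semrush[url]).date()
        if a < s:
            wins["ahrefs"] += 1
        elif s < a:
            wins["semrush"] += 1
        else:
            wins["tie"] += 1
    return wins
```

Only URLs present in both exports are compared; links unique to one index need to be looked up in the other tool before they can be counted.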
Semrush claims, “We update the backlinks data in the interface every 15 minutes.”
Ahrefs claims, “The world’s largest index of live backlinks, updated with fresh data every 15–30 minutes.”
I pulled data at the same time from both tools to see when the latest links for some popular websites were found. Here’s a summary table:
Domain | Ahrefs Latest | Semrush Latest |
---|---|---|
semrush.com | 3 minutes ago | 7 days ago |
ahrefs.com | 2 minutes ago | 5 days ago |
hubspot.com | 0 minutes ago | 9 days ago |
foxnews.com | 1 minute ago | 12 days ago |
cnn.com | 0 minutes ago | 13 days ago |
amazon.com | 0 minutes ago | 6 days ago |
That doesn’t seem fresh at all. Their 15-minute update claim seems pretty dubious to me when so many websites haven’t had updates in days.
In fairness, for some smaller sites it was more mixed on who showed fresher data. I think they may have some issues with the processing of larger sites.
Don’t just trust me, though; I encourage you to check some websites yourself. Go into the backlinks reports in both tools and sort by last seen. Be sure to share your results on social media.
Ahrefs now receives data from IndexNow
This will make our data even fresher. That’s ~2.5B URLs / day in March 2024. The websites tell us about new pages, deleted pages, or any changes they make so that we can go crawl them and update the data. Read more here.
Ahrefs crawls 7B+ pages every day. Semrush claims they crawl 25B pages per day. This would be ~3.5x what Ahrefs crawls per day. The problem is that I can’t find any evidence that they crawl that fast.
We saw that around half the links that Semrush had marked as active were actually dead compared to about 17% in Ahrefs, which indicated to me that they may not re-crawl links as often. That and the freshness test both pointed to them crawling slower. I decided to look into it.
Logs of my sites
I checked the logs of some of my sites and sites I have access to, and I didn’t see anything to support the claim that Semrush crawls faster. If you have access to logs of your own site, you should be able to check which bots are crawling the fastest.
80,000 months of log data
I was curious and wanted to look at bigger samples. I used Web Explorer and a few different footprints (patterns) to find log file summaries produced by AWStats and Webalizer. These are often published on the web.
I scraped and parsed ~80,000 log file summaries that contained 1 month of data each and were generated in the last couple of years. This sample contained over 9k websites in total.
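The AWStats and Webalizer summaries are HTML tables that need their own parsing, but the core per-bot counting is the same as you’d do against raw combined-format access logs. Roughly, and assuming the user-agent string appears somewhere in each line:

```python
import re
from collections import Counter

# User-agent substrings for the crawlers of interest.
BOTS = ["AhrefsBot", "SemrushBot", "Googlebot", "bingbot"]

def count_bot_hits(log_lines):
    """Count hits per bot in access-log lines by matching
    user-agent substrings (case-insensitive)."""
    counts = Counter()
    for line in log_lines:
        for bot in BOTS:
            if re.search(re.escape(bot), line, re.IGNORECASE):
                counts[bot] += 1
                break
    return counts
```

Aggregate these counts per site per month and you have the same crawl-rate comparison, minus the scraping step.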
I did not see evidence of Semrush crawling many times faster than Ahrefs for these sites, as they claim they do. The only bot that was crawling much faster than Ahrefsbot in this dataset was Googlebot. Even other search engines were behind our crawl rate.
That’s just data from a small-ish number of sites compared to the scale of the web. What about for a larger chunk of the web?
Data from 20%+ of web traffic
At the time of writing, Cloudflare Radar has Ahrefsbot as the #7 most active bot on the web and Semrushbot at #40.
While this isn’t a complete picture of the web, it’s a fairly large chunk. In 2021, Cloudflare was said to manage ~20% of the web’s traffic, up from ~10% in 2018. It’s likely much higher now with that kind of growth. I couldn’t find the numbers from 2021, but in early 2022 they were handling 32 million HTTP requests / second on average and in early 2023 they had already grown to handling 45 million HTTP requests / second on average, over 40% more in one year!
Additionally, ~80% of websites that use a CDN use Cloudflare. They handle many of the larger sites on the web; BuiltWith shows that Cloudflare is used by ~32% of the Top 1M websites. That’s a significant sample size and likely the largest sample that exists.
How much do SEO tools crawl?
Some of the SEO tools share the number of pages they crawl on their websites. The only one in the chart below that doesn’t have a publicly published crawl rate is AhrefsSiteAudit bot, but I asked our team to pull the info for this. Let me put the rankings in perspective with actual and claimed crawl rates.
Ranking | Bot | Crawl Rate |
---|---|---|
7 | Ahrefsbot | 7B+ / day |
27 | DataForSEO Bot | 2B / day |
29 | AhrefsSiteAudit | 600M – 700M / day |
35 | Botify | 143.3M / day |
40 | Semrushbot | 25B / day* claimed |
The math isn’t mathing. How can Semrush claim to crawl multiple times faster than these others while ranking lower? Cloudflare doesn’t cover the entire web, but it covers a large chunk of it and is a more than representative sample.
When they originally made the 25B claim, I believe they were closer to 90th on Cloudflare Radar, near the bottom of the list at the time. Semrush hasn’t updated the number since, and I recall a period when they were in the 60s-70s on Cloudflare Radar as well. They do seem to be getting faster, but their claimed numbers still don’t add up.
I don’t hear SEOs raving about Moz or Sistrix having the best link data, but they are 21st and 36th on the list respectively. Both are higher than Semrush.
Possible explanations of differences
Semrush may be conflating the term “pages” with “links,” which is actually how some of their documentation reads. I don’t want to link to it, but you can find it with this quote: “Daily, our bot crawls over 25 billion links”. But links are not the same thing as pages; a single page can contain hundreds of links.
It’s also possible they’re crawling a portion of the web that’s just more spammy and isn’t reflected in the data from either of the sources I looked at. Some of the numbers indicate this may be the case.
Y’all shouldn’t trust studies done by a specific vendor when it compares them to others, even this one. I try to be as fair as I can be and follow the data, but since I work at Ahrefs you can hardly consider me unbiased. Go look at the data yourselves and run your own tests.
There are some folks in the SEO community who try to do these tests every once in a while. The last major 3rd party study was run by Matthew Woodward, who initially declared Semrush the winner, but the conclusion was changed and Ahrefs was ultimately declared to be the rightful winner. What happened?
The methodology chosen for the study heavily favored Semrush and was investigated by a friend of mine, Russ Jones, may he rest in peace. Here’s what Russ had to say about it:
While services like Majestic and Ahrefs likely store a single canonical IP address per domain, SEMRush seems to store per link, which accounts for why there would be more IPs than referring domains in some cases. I do not think SEMRush is intentionally inflating their numbers, I think they are storing the data in a different way than competitors which results in a number that is higher and potentially misleading, but not due to ill intent.
The response from Matthew indicated that Semrush might have misled him in their favor. Here’s that comment:
In the end, Ahrefs won.
Check our current stats on our big data page.
While Semrush doesn’t provide current hardware stats, they did provide some in the past when they made changes to their link index.
In June 2019, they made an announcement that claimed they had the biggest index. The test from Matthew Woodward that I talked about happened after this announcement, and as you saw, Ahrefs won that.
In June 2021, they made another announcement about their link index that claimed they were the biggest, fastest, and best.
These are some stats they released at the time:
- 500 servers
- 16,128 cpu cores
- 245 TB of memory
- 13.9 PB of storage
- 25B+ pages / day
- 43.8T links
The release said they increased storage, but their previous release said they had 4,000 PB of storage. They said storage was 4x’d, so I’m guessing the earlier number was supposed to be 4,000 TB rather than 4,000 PB and they just mixed up the terminology.
I checked our numbers at the time, and this is how we matched up:
- 2400 servers (~5x greater)
- 200,000 cpu cores (~12.5x greater)
- 900 TB of memory (~4x greater)
- 120 PB of storage (~9x greater)
- 7B pages / day (~3.5x less???)
- 2.8T live links (I’m not sure of our total index size at the time, but to this day it’s not as big as the number they claimed)
They were claiming more links and faster crawling with much less storage and hardware. Granted, we don’t know the details of the hardware, but we don’t run on dated tech.
They claimed to store more links than we have even now and in less space than we add to our system each month. It really doesn’t make sense.
Final thoughts
Don’t blindly trust the numbers on the dashboards or the general numbers because they may represent completely different things. While there’s no perfect way to compare the data between different tools, you can run many of the checks I showed to try to compare similar things and clean up the data. If something looks off, ask the tool vendors for an explanation.
If there ever comes a time when we stop winning on things like tech and crawl speed, go ahead and switch to another tool and stop paying us. But until that time, I’d be highly skeptical of any claims by other tools.
If you have questions, message me on X.
How Compression Can Be Used To Detect Low Quality Pages
The concept of Compressibility as a quality signal is not widely known, but SEOs should be aware of it. Search engines can use web page compressibility to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords, making it useful knowledge for SEO.
Although the following research paper demonstrates a successful use of on-page features for detecting spam, the deliberate lack of transparency by search engines makes it difficult to say with certainty if search engines are applying this or similar techniques.
What Is Compressibility?
In computing, compressibility refers to how much a file (data) can be reduced in size while retaining essential information, typically to maximize storage space or to allow more data to be transmitted over the Internet.
TL/DR Of Compression
Compression replaces repeated words and phrases with shorter references, reducing the file size by significant margins. Search engines typically compress indexed web pages to maximize storage space, reduce bandwidth, and improve retrieval speed, among other reasons.
This is a simplified explanation of how compression works:
- Identify Patterns: A compression algorithm scans the text to find repeated words, patterns, and phrases.
- Shorter Codes Take Up Less Space: The codes and symbols use less storage space than the original words and phrases, which results in a smaller file size.
- Shorter References Use Fewer Bits: The “code” that essentially symbolizes the replaced words and phrases uses less data than the originals.
A bonus effect of using compression is that it can also be used to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords.
Research Paper About Detecting Spam
This research paper is significant because it was authored by distinguished computer scientists known for breakthroughs in AI, distributed computing, information retrieval, and other fields.
Marc Najork
One of the co-authors of the research paper is Marc Najork, a prominent research scientist who currently holds the title of Distinguished Research Scientist at Google DeepMind. He’s a co-author of the papers for TW-BERT, has contributed research for increasing the accuracy of using implicit user feedback like clicks, and worked on creating improved AI-based information retrieval (DSI++: Updating Transformer Memory with New Documents), among many other major breakthroughs in information retrieval.
Dennis Fetterly
Another of the co-authors is Dennis Fetterly, currently a software engineer at Google. He is listed as a co-inventor in a patent for a ranking algorithm that uses links, and is known for his research in distributed computing and information retrieval.
Those are just two of the distinguished researchers listed as co-authors of the 2006 Microsoft research paper about identifying spam through on-page content features. Among the several on-page content features the research paper analyzes is compressibility, which they discovered can be used as a classifier for indicating that a web page is spammy.
Detecting Spam Web Pages Through Content Analysis
Although the research paper was authored in 2006, its findings remain relevant today.
Then, as now, people attempted to rank hundreds or thousands of location-based web pages that were essentially duplicate content aside from city, region, or state names. Then, as now, SEOs often created web pages for search engines by excessively repeating keywords within titles, meta descriptions, headings, internal anchor text, and within the content to improve rankings.
Section 4.6 of the research paper explains:
“Some search engines give higher weight to pages containing the query keywords several times. For example, for a given query term, a page that contains it ten times may be higher ranked than a page that contains it only once. To take advantage of such engines, some spam pages replicate their content several times in an attempt to rank higher.”
The research paper explains that search engines compress web pages and use the compressed version to reference the original web page. They note that excessive amounts of redundant words result in a higher level of compressibility. So they set about testing whether there’s a correlation between a high level of compressibility and spam.
They write:
“Our approach in this section to locating redundant content within a page is to compress the page; to save space and disk time, search engines often compress web pages after indexing them, but before adding them to a page cache.
…We measure the redundancy of web pages by the compression ratio, the size of the uncompressed page divided by the size of the compressed page. We used GZIP …to compress pages, a fast and effective compression algorithm.”
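You can reproduce the paper’s metric in a few lines. This sketch uses Python’s gzip module and made-up sample text; the exact ratios will vary with content length and compressor settings.

```python
import gzip

def compression_ratio(page_text):
    """Uncompressed size divided by gzip-compressed size, as the
    paper defines the redundancy measure."""
    raw = page_text.encode("utf-8")
    return len(raw) / len(gzip.compress(raw))

# Made-up sample text for illustration.
normal = ("Each sentence on a normal page tends to say something "
          "different about compression, indexing, and storage.")
spammy = "best cheap widgets buy now " * 200  # keyword-stuffed page

print(compression_ratio(normal))   # close to 1 for short, varied text
print(compression_ratio(spammy))   # far above the paper's 4.0 threshold
```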
High Compressibility Correlates To Spam
The results of the research showed that web pages with a compression ratio of at least 4.0 tended to be low-quality spam pages. However, the results at the highest compression ratios became less consistent because there were fewer data points, making them harder to interpret.
Figure 9: Prevalence of spam relative to compressibility of page.
The researchers concluded:
“70% of all sampled pages with a compression ratio of at least 4.0 were judged to be spam.”
But they also discovered that using the compression ratio by itself still resulted in false positives, where non-spam pages were incorrectly identified as spam:
“The compression ratio heuristic described in Section 4.6 fared best, correctly identifying 660 (27.9%) of the spam pages in our collection, while misidentifying 2,068 (12.0%) of all judged pages.
Using all of the aforementioned features, the classification accuracy after the ten-fold cross validation process is encouraging:
95.4% of our judged pages were classified correctly, while 4.6% were classified incorrectly.
More specifically, for the spam class 1,940 out of the 2,364 pages, were classified correctly. For the non-spam class, 14,440 out of the 14,804 pages were classified correctly. Consequently, 788 pages were classified incorrectly.”
The next section describes an interesting discovery about how to increase the accuracy of using on-page signals for identifying spam.
Insight Into Quality Rankings
The research paper examined multiple on-page signals, including compressibility. They discovered that each individual signal (classifier) was able to find some spam, but that relying on any one signal on its own resulted in flagging non-spam pages as spam, commonly referred to as false positives.
The researchers made an important discovery that everyone interested in SEO should know, which is that using multiple classifiers increased the accuracy of detecting spam and decreased the likelihood of false positives. Just as important, the compressibility signal only identifies one kind of spam but not the full range of spam.
The takeaway is that compressibility is a good way to identify one kind of spam, but other kinds of spam aren’t caught by this one signal.
This is the part that every SEO and publisher should be aware of:
“In the previous section, we presented a number of heuristics for assaying spam web pages. That is, we measured several characteristics of web pages, and found ranges of those characteristics which correlated with a page being spam. Nevertheless, when used individually, no technique uncovers most of the spam in our data set without flagging many non-spam pages as spam.
For example, considering the compression ratio heuristic described in Section 4.6, one of our most promising methods, the average probability of spam for ratios of 4.2 and higher is 72%. But only about 1.5% of all pages fall in this range. This number is far below the 13.8% of spam pages that we identified in our data set.”
So, even though compressibility was one of the better signals for identifying spam, it still was unable to uncover the full range of spam within the dataset the researchers used to test the signals.
Combining Multiple Signals
The above results indicated that individual signals of low quality are less accurate, so they tested using multiple signals. They discovered that combining multiple on-page signals for detecting spam resulted in better accuracy, with fewer pages misclassified as spam.
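The researchers built a C4.5 decision tree over many features. As a toy illustration of why combining signals cuts false positives, here is a two-signal voting sketch: the 4.0 compression threshold comes from the paper, while the title-repetition signal and its threshold are hypothetical stand-ins for the paper’s other on-page features.

```python
import gzip

def compression_ratio(text):
    """Uncompressed size divided by gzip-compressed size."""
    raw = text.encode("utf-8")
    return len(raw) / len(gzip.compress(raw))

def title_repetition(title):
    """Fraction of repeated words in the title (hypothetical signal)."""
    words = title.lower().split()
    return 1 - len(set(words)) / len(words) if words else 0.0

def spam_score(body, title):
    """Toy combination: each signal votes, and a page is flagged
    only when both agree, cutting single-signal false positives."""
    votes = 0
    if compression_ratio(body) >= 4.0:  # threshold from the paper
        votes += 1
    if title_repetition(title) > 0.5:   # hypothetical threshold
        votes += 1
    return votes >= 2
```

A keyword-stuffed title alone, or a compressible body alone, no longer flags the page; both signals must fire, which is the effect the paper reports at much larger scale.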
The researchers explained that they tested the use of multiple signals:
“One way of combining our heuristic methods is to view the spam detection problem as a classification problem. In this case, we want to create a classification model (or classifier) which, given a web page, will use the page’s features jointly in order to (correctly, we hope) classify it in one of two classes: spam and non-spam.”
These are their conclusions about using multiple signals:
“We have studied various aspects of content-based spam on the web using a real-world data set from the MSNSearch crawler. We have presented a number of heuristic methods for detecting content based spam. Some of our spam detection methods are more effective than others, however when used in isolation our methods may not identify all of the spam pages. For this reason, we combined our spam-detection methods to create a highly accurate C4.5 classifier. Our classifier can correctly identify 86.2% of all spam pages, while flagging very few legitimate pages as spam.”
Key Insight:
Misidentifying “very few legitimate pages as spam” was a significant breakthrough. The important insight that everyone involved with SEO should take away from this is that one signal by itself can result in false positives. Using multiple signals increases the accuracy.
What this means is that SEO tests of isolated ranking or quality signals will not yield reliable results that can be trusted for making strategy or business decisions.
Takeaways
We don’t know for certain whether search engines use compressibility, but it’s an easy-to-use signal that, combined with others, could catch simple kinds of spam like thousands of city-name doorway pages with similar content. Even if search engines don’t use this signal, it shows how easy it is to catch that kind of search engine manipulation, and that it’s something search engines are well able to handle today.
Here are the key points of this article to keep in mind:
- Doorway pages with duplicate content are easy to catch because they compress at a higher ratio than normal web pages.
- Groups of web pages with a compression ratio above 4.0 were predominantly spam.
- Negative quality signals used by themselves to catch spam can lead to false positives.
- In this particular test, they discovered that on-page negative quality signals only catch specific types of spam.
- When used alone, the compressibility signal only catches redundancy-type spam, fails to detect other forms of spam, and leads to false positives.
- Combining quality signals improves spam detection accuracy and reduces false positives.
- Search engines today detect spam more accurately with the help of AI systems like SpamBrain.
Read the research paper, which is linked from the Google Scholar page of Marc Najork:
Detecting spam web pages through content analysis
Featured Image by Shutterstock/pathdoc
New Google Trends SEO Documentation
Google Search Central published new documentation on Google Trends, explaining how to use it for search marketing. This guide serves as an easy to understand introduction for newcomers and a helpful refresher for experienced search marketers and publishers.
The new guide has six sections:
- About Google Trends
- Tutorial on monitoring trends
- How to do keyword research with the tool
- How to prioritize content with Trends data
- How to use Google Trends for competitor research
- How to use Google Trends for analyzing brand awareness and sentiment
The section about monitoring trends explains that there are two kinds of rising trends, general and specific, and that both can be useful for developing content to publish on a site.
In the Explore tool, you can leave the search box empty to view the current rising trends worldwide, or use a drop-down menu to focus on trends in a specific country. Users can further filter rising trends by time period, category, and type of search. The results show rising trends by topic and by keyword.
To search for specific trends, users just need to enter their queries and then filter them by country, time, category, and type of search.
The section called Content Calendar describes how to use Google Trends to understand which content topics to prioritize.
Google explains:
“Google Trends can be helpful not only to get ideas on what to write, but also to prioritize when to publish it. To help you better prioritize which topics to focus on, try to find seasonal trends in the data. With that information, you can plan ahead to have high quality content available on your site a little before people are searching for it, so that when they do, your content is ready for them.”
Read the new Google Trends documentation:
Get started with Google Trends
Featured Image by Shutterstock/Luis Molinero
SEO
All the best things about Ahrefs Evolve 2024
Hey all, I’m Rebekah and I am your Chosen One to “do a blog post for Ahrefs Evolve 2024”.
What does that entail exactly? I don’t know. In fact, Sam Oh asked me yesterday what the title of this post would be. “Is it like…Ahrefs Evolve 2024: Recap of day 1 and day 2…?”
Even as I nodded, I couldn’t get over how absolutely boring that sounded. So I’m going to do THIS instead: a curation of all the best things YOU loved about Ahrefs’ first conference, lifted directly from X.
Let’s go!
OUR HUGE SCREEN
The largest presentation screen I’ve ever seen! #ahrefsevolve pic.twitter.com/oboiMFW1TN
— Patrick Stox (@patrickstox) October 24, 2024
This is the biggest presentation screen I ever seen in my life. It’s like iMax for SEO presentations. #ahrefsevolve pic.twitter.com/sAfZ1rtePx
— Suganthan Mohanadasan (@Suganthanmn) October 24, 2024
CONFERENCE VENUE ITSELF
It was recently named the best new skyscraper in the world, by the way.
The Ahrefs conference venue feels like being in inception. #AhrefsEvolve pic.twitter.com/18Yjai1Cej
— Suganthan Mohanadasan (@Suganthanmn) October 24, 2024
I’m in Singapore for @ahrefs Evolve this week. Keen to connect with people doing interesting work on the future of search / AI #ahrefsevolve pic.twitter.com/s00UkIbxpf
— Alex Denning (@AlexDenning) October 23, 2024
OUR AMAZING SPEAKER LINEUP – SUPER INFORMATIVE, USEFUL TALKS!
A super insightful explanation of how Google Search Ranking works #ahrefsevolve pic.twitter.com/Cd1VSET2Aj
— Amanda Walls (@amandajwalls) October 24, 2024
“would I even do this if Google didn’t exist?” – what a great question to assess if you actually have the right focus when creating content amazing presentation from @amandaecking at #AhrefsEvolve pic.twitter.com/a6OKbKxwiS
— Aleyda Solis ️ (@aleyda) October 24, 2024
Attending @CyrusShepard ‘s talk on WTF is Helpful Content in Google’s algorithm at #AhrefsEvolve
“Focus on people first content”
Super relevant for content creators who want to stay ahead of the ever evolving Google search curve! #SEOTalk #SEO pic.twitter.com/KRTL13SB0g
— Parth Suba (@parthsuba77) October 24, 2024
This is the first time I am listening to @aleyda and it is really amazing. Lot of insights and actionable information.
Thank you #aleyda for power packed presentation.#AhrefsEvolve @ahrefs #seo pic.twitter.com/Xe3A9MGfrr
— Jignesh Gohel (@jigneshgohel) October 25, 2024
@thinking_slows thoughts on AI content – “it’s very good if you want to be average”.
We can do a lot better and Ryan explains how. Love it @ahrefs #AhrefsEvolve pic.twitter.com/qFqWs6QBH5
— Andy Chadwick (@digitalquokka) October 24, 2024
GREAT MUSIC
First time I’ve ever Shazam’d a track during SEO conference ambience…. and the track wasn’t even Shazamable! #AhrefsEvolve @ahrefs pic.twitter.com/ZDzJOZMILt
— Lily Ray (@lilyraynyc) October 24, 2024
AMAZING GOODIES
Ahrefs Evolveきました!@ahrefs @AhrefsJP #AhrefsEvolve pic.twitter.com/33EiejQPdX
— さくらぎ (@sakuragi_ksy) October 24, 2024
Aside from the very interesting topics, what makes this conference even cooler are the ton of awesome freebies
Kudos for making all of these happen for #AhrefsEvolve @ahrefs team pic.twitter.com/DGzk5FSTN8
— Krista Melgarejo (@kimelgarejo) October 24, 2024
Content Goblin and SEO alligator party stickers are definitely going on my laptop. @ahrefs #ahrefsevolve pic.twitter.com/QBsBuY5Yix
— Patrick Stox (@patrickstox) October 24, 2024
This is one of the best swag bags I’ve received at any conference!
Either @ahrefs actually cares or the other conference swag bags aren’t up to par w Ahrefs!#AhrefsEvolve pic.twitter.com/Yc9e6wZPHn
— Moses Sanchez (@SanchezMoses) October 25, 2024
SELFIE BATTLE
Some background: Tim and Sam have a challenge going on to see who can take the most selfies with all of you. Last I heard, Sam was winning – but there is room for a comeback yet!
Got the rare selfie with both @timsoulo and @samsgoh #AhrefsEvolve
— Bernard Huang (@bernardjhuang) October 24, 2024
THAT BELL
Everybody’s just waiting for this one.
@timsoulo @ahrefs #AhrefsEvolve pic.twitter.com/6ypWaTGDDP
— Jinbo Liang (@JinboLiang) October 24, 2024
STICKER WALL
Viva la vida, viva Seo!
Awante Argentina loco!#AhrefsEvolve pic.twitter.com/sfhbI2kWSH
— Gaston Riera. (@GastonRiera) October 24, 2024
AND, OF COURSE…ALL OF YOU!
#AhrefsEvolve let’s goooooooooooo!!! pic.twitter.com/THtdvdtUyB
— Tim Soulo (@timsoulo) October 24, 2024
–
There’s a TON more content on LinkedIn – click here – but I have limited time to get this post up and can’t quite figure out how to embed LinkedIn posts so…let’s stop here for now. I’ll keep updating as we go along!