SEO
10 Websites That Tried to Fool Google (and Failed)

SEOs have been trying to outsmart Google for as long as it has existed. Sometimes, they win; other times, their attempts fail.
Here are 10 extraordinary examples of websites that failed to fool Google.

Key stats
- 2,794,551 keywords ranking, August 2022
- 255,527 pages, August 2022
- Estimated traffic: 3.6M, August 2022
Conch House is a review site that ranked for over 250,000 “best + [product]” keywords and acquired 14,000 #1 rankings.
This huge rise in organic traffic meant the website became notorious in certain SEO circles.


We can see there is some serious money at play here, and this didn’t go unnoticed on SEO Twitter.
How did it fool Google?
It used an authoritative expired domain and created content targeting “best +[product]” keywords.
It also used dynamically created pricing tables on its landing pages to encourage visitors to click through to Amazon and maximize its income. If a user bought a product, it would earn an affiliate commission.
Shockingly, for a few months, this setup was good enough to fool Google.
Let’s take a closer look at what happened.
Content
There was speculation that the site used AI to create its content, but it might have been much simpler than that.
It might have used concatenation to stitch sentences together and insert product titles to slightly differentiate each post.
Take a look at some of its posts:




From the website, you can see several stock phrases it used to build content at scale quickly.
Examples:
- "Are you looking for the most reliable and efficient [product title]?”
- "If so, congratulations!”
- "What is the importance of [product title] to you?”
- "The service we provide will save you time from reading thousands of reviews.”
It also used dynamic pricing tables like the one below to supplement the content.




If we jump into the code, we can see this is a popular table plugin called AAWP.




This plugin allowed the site owners to:
- Create best-selling product lists at scale for all of their posts dynamically.
- Geo-target multiple locations.
- Get automatic updates of product information via Amazon’s Product Advertising API.
Links
I mentioned earlier that this site used an expired domain—let’s look at it.
Let’s go to the Top pages report in Site Explorer:




- Click on the downward-facing caret ▼ next to the homepage URL
- Click View on Archive.org
We can see the site’s history:




Surprisingly, the domain has a continuous history back to 1996, one of the things the new owners might have been interested in. With history also came links and a degree of trust from Google.




Sidenote.
Having a continuous history this far back in the Wayback Machine is not that common. Most domain names and websites have changed digital hands many times. (1996 was two years before Google was founded—making this website older than Google.)
How did it fail?
The site got serious traction for about three months. But then the traffic suddenly plummeted, and the domain was auctioned off.
The site has had three significant dips in organic traffic in its lifespan:
In a further twist, Glenn Gabe, who was following the site, tweeted that he believed it was not impacted by the helpful content update (HCU).
What do you think happened?
Did it really fail to fool Google?
In terms of organic traffic, it’s much lower than it was previously, with no hope of returning to those previous levels.
However, you could argue that the site owners successfully exploited a gap in Google’s algorithm at the right time to make money. If that was their only objective, then they achieved it.




Key stats
- 2,794,551 keywords ranking, August 2022
- 255,527 pages, August 2022
- Estimated organic traffic: 3.6M, August 2022
Just going by its homepage, this is a website that, in all honesty, doesn’t look that bad.




But this website may remind you not to judge a book by its cover.
How did it fool Google?
This site scraped content word for word from Amazon product listings and targeted “best +[product]” queries. It also likely used some dodgy-looking high Domain Rating (DR) links to try and fool Google.
Let’s take a closer look at what happened.
Content
When it came to great content, this site didn’t disappoint. Somehow, Google didn’t seem to pick up on this site, but it didn’t go unnoticed by SEOs on Twitter.
Let’s look at the site.
Looking at one of the highest traffic pages from the Top pages report in August, we see the formatting of the page looks like an Amazon product page.




Let’s see how it compares.
If we copy and paste part of the product description into Google, it immediately brings up the equivalent Amazon product page for the same product in the search results.




If I paste the content from the Amazon product listing and the equivalent from this website into Diffchecker, we can see that the two files are identical.
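Diffchecker is a web tool, but the same check is easy to script. Here is a minimal sketch using Python’s `difflib`; the sample strings are placeholders standing in for the Amazon listing and the site’s copy, not the actual texts:

```python
import difflib

# Placeholder texts: one for the Amazon product listing, one for the site's page.
amazon_text = "This blender features a 1000W motor and six speed settings."
site_text = "This blender features a 1000W motor and six speed settings."

# SequenceMatcher.ratio() returns 1.0 for identical texts;
# anything very close to 1.0 suggests word-for-word scraping.
similarity = difflib.SequenceMatcher(None, amazon_text, site_text).ratio()
print(f"Similarity: {similarity:.2f}")
```

A ratio of 1.00, as here, means the two texts are identical, which is exactly what the Diffchecker comparison showed.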




Using Ahrefs, we can get an idea of how many of these types of pages exist.
Let’s go into Site Explorer’s Top pages report and add a URL filter containing “best-” to filter for any “best-” keywords.




We can see that this site was targeting ~77,809 “best” keywords at its peak.
If we add a “Position” filter to show only #1 rankings, we can see that this site had ~4,583 #1 ranking pages at its peak.
This is what it looks like in Ahrefs’ Top pages report.




Using this level of low-quality scraped content, it was able to reach huge levels of traffic, even in 2022—albeit briefly.
Links
Let’s plug this site into the Wayback Machine to see its history.




As it’s a relatively new domain from 2020, we can probably assume that, to reach these traffic levels, it needed some high DR links.
We can look at the domains by going to the Referring domains report in Ahrefs’ Site Explorer. We can see that this site has some dubious-looking high DR URLs pointing to it.




Here’s an example of an unconventional-looking DR 81 site.




We can see the link if we open the inspector and search for the domain name.




How did it fail?
The site had two sharp drops—one in mid-June and the other steeper one in September.
The organic traffic drop rebounds slightly in early 2023, potentially suggesting that this is not a manual penalty.




It’s impossible to confirm exactly what happened here. But based on the timing of the second drop, we can speculate that this site was hit by at least the HCU.
What do you think happened?
Did it really fail to fool Google?
For the most part, yes. But judging by the most recent organic traffic graph, the site appears to be trying to resuscitate itself.




Key stats
- 604,926 keywords, March 2022
- 82,611 pages, March 2022
- Total traffic: 1.3M, March 2022
This next site is a basic-looking blog loaded with lots of display ads.
Can you spot the content?




The site had quite a good run compared to other sites mentioned here but experienced a slow death for almost a year.
How did it fool Google?
One of the problems this site may have had (apart from its low-quality content) is the sharp increase in unnatural-looking links.
Let’s take a closer look at what happened.
Content
The website describes itself as: “The place to answer many questions in life, study, and work.” If we head to the About page, we don’t get much more information—just four lines of content surrounded by ads.
Not a good start.




The content appears to be a mix of People Also Ask (PAA) spam, forum comments, lists, and reviews. Most pages are no longer live. But if we look at the Wayback Machine, we can see how it used to look.




If we look at one of the articles, we see it isn’t helpful or high-quality content.




This content could have been scraped from a forum or somewhere like Quora.
And here’s another example:




These examples indicate that the content standard, on the whole, is very low.
Links
Looking at the Referring Domains report, we can see some potentially dubious-looking links.




The same DR 81 site we saw linking to our second site also links to this website.




At least two other sites in this list also seem suspicious. We will come back to these later.
How did it fail?
If we go back to Overview 2.0 and switch to the “Referring domains” graph, we can see a significant increase in referring domains in December. This looks suspicious to me.




The huge spike is probably a red flag for Google’s algorithm.
As soon as the sharp increase in the graph starts to fall, we see a much more dramatic plunge in organic traffic.




These updates appear to have killed the site completely, and now the organic traffic is flat-lining.
Did it really fail to fool Google?
Yes. Undoubtedly.




Key stats
- 4,047,559 keywords, April 2022
- 523,455 pages, April 2022
- Total traffic: 1.3M, April 2022
When you come across a website called Bike Hike, you’ll probably assume that it has something to do with bikes, cycling, or even hiking.
However, in this case—you’d be mistaken.
At its organic traffic peak, this site’s top keyword was “slurp fish.” Make of that what you will.
And here’s a Twitter reaction to the site:




How did it fool Google?
This site used an expired domain and created People Also Ask (PAA) spam content. It also 301 redirected several expired domains with high DR to its domain.
Let’s take a closer look at what happened.
Content
Let’s start by going to the Organic keywords report in Ahrefs’ Site Explorer and see what terms this site was ranking for at its peak.




Looking at the URL column, we can see that this is probably a People Also Ask (PAA) spam site—as all ranking keywords appear to target these low Keyword Difficulty (KD) and unrelated PAA questions.
Let’s go further and filter the keywords to include only those in position #1.




We can see that many PAA-style keywords held the top spot in Google in April 2022, near this site’s peak.
It’s safe to say that none of these landing pages are on a consistent topic. And it seems they were designed to exploit Google’s algorithm.
Looking at one of the pages, we can see that the content is never longer than four lines and is followed by a heading.




Once we put a few lines of this content into Google search, we can see this content is likely scraped from multiple websites.




Looking through some of the other top-performing pages, we can also find they follow the same content format. I believe this site scraped PAA questions and answers from multiple websites for content and turned them into articles.
Links
When it comes to links, one of the tactics used was to 301 redirect expired domains with bbc.co.uk (DR 93) links to its website.
We can see this in Ahrefs using the Backlinks report and adding a “Referring page domain” filter: bbc.co.uk.




Looking at the Anchor and target URL column, we can see that the ultimate destination for all these links is Bike Hike. This looks like an unnatural and unlikely pattern.
Adding to this, the site was built on an expired domain. We can see this using the Wayback Machine.




The site doesn’t have a continuous history, and the most recent owners likely bought the site in 2021.
Let’s go back in time and see what it used to look like.




Bike Hike 2018 looks like it had something to do with cycling. From what I can see, it used to be a legitimate site. So it seems that the recent owners of the domain revived it to exploit this site’s past authority and links.
How did it fail?
The sharp fall in this site’s organic traffic graph looks like that of a site hit with a manual action.
Sidenote.
The only way to confirm if there is a manual action is to check the site’s Google Search Console. As we don’t have access to this, we can only speculate that this happened based on the sharp drop in the organic traffic graph.
What do you think happened?
Did it really fail to fool Google?
Yes. (Once a site receives a manual action, it’s usually game over.)




Key stats
- 1,397 keywords, March 2022
- 957 pages, March 2022
- Total traffic: 664.6K, March 2022
From the domain name alone, it feels like this site could be a legitimate business. But as you’ll see, when we look at the content and the links on the site, it becomes clear that this website has changed from its original intended purpose.
How did it fool Google?
There are examples of paid link placements on the blog, and it looks like the site is trying to rank for betting login pages.
When it comes to links, there are also a few suspicious ones. And the content doesn’t appear to be the best either.
So what happened?
Content
Let’s start by looking at the history of this domain through the Wayback Machine.




The history of this website looks continuous back to 2008–2009, but the domain expired around August 17, 2019.




This is how the site used to look in 2009. Even though the design is old-fashioned, it still looks much more legitimate than the present site.




We can see the difference if we compare that with the current site version.




Even though the images aren’t rendered properly, we can see how the text content has changed for the worse.
With the focus of the content changing to topics like “casino,” “gambling,” and “loans,” this immediately looks like a red flag.
Let’s take a closer look at some of the articles on the site to see what we can find.




Selling links is against Google’s terms of service. But the above is an example of a link that was probably paid for, with no attempt to hide it.
Here’s another example—this time for a different type of service.




You can see the pattern that is emerging.
It turns out that almost every article in the blog has a single exact-match keyword anchor text link within a body of text, which leads me to believe that links were likely being sold here.
The final nail in the coffin is the website categories—when you see dating, loan, and casino in the same category as SEO, you know that it’s likely links were sold here.




If we turn our attention to the Top pages report in Site Explorer, we can see that one of the primary purposes of the site was to rank for login pages of betting and cockfighting sites.




This shows the website’s intent and doesn’t look like a typical website’s top pages.
Links
As we have seen above, the site is an expired domain that has been revived, so it has gained a decent amount of links and authority in that period.
If we look at the Referring domains report, we can see the type of high DR links that this site once had.




Here’s an example of a directory link (DR 72) from a website called the Link Centre.




Here’s another example of a DR 72 website with a non-related link to our website on the profile page.




With two university links in the top Referring domains report, you wonder whether these were compromised sites as well.
How did it fail?
Taking a look at Overview 2.0, we can see that there were two critical Google updates in this period.




- The first is the Product Reviews Update, which occurred on March 23, 2022. The second is the May 2022 Core Update. Although there isn’t an exact correlation here, these updates could have impacted the site.
- As I suspect they were selling paid links, this could have been another factor. But the gradual fall of the organic traffic chart doesn’t align with the typical look of a manual penalty.
What do you think happened?
Did it really fail to fool Google?
The website didn’t reach the heights of some of our other sites mentioned here, but it did manage to fool Google for around a month.
The site hasn’t been resurrected since, so it’s fair to say that it didn’t fool Google.




Key stats
This next website was created by someone who hated vowels in their domain names.
The main idea behind the site seems to have been to create a database-driven site targeting “people named [insert name]” searches.
If we add an “Include” filter in Keywords Explorer, we can see a lot of low-competition searches for this type of query, almost 22,000. This is probably why they decided to create a site around this particular topic.




How did it fool Google?
One of the problems with this site is that most of the content was scraped, and it had several dodgy high DR links pointing to it that didn’t look natural.
Let’s take a closer look at what happened.
Content
If we go to the Top pages report on August 19, we can see that the top page is “People named Dick.”




Let’s look at what this page looks like on the site.




You can see there isn’t that much information here, and most of the content is ads.
Even when we scroll down the page, we can see that it’s a list of people with some key facts about them and nothing more.




I’ve decided to click on the profile of the #1 Dick—Dick Cheney.
Pasting the text of his detailed profile in Google brings up a match for Wikipedia.




Checking a few other profiles on the site using this method also brings up Wikipedia, so it seems reasonable to conclude that this site was scraping at least some of its content directly from Wikipedia.
Returning to the homepage, you can see the general strategy of the site is quite basic. This is what the site owners did for country selectors:




And when it came to birth years, they did something similar.




Overall, the content standard is low, and we see a lot of scraped content on this site.
Links
Looking at the Wayback Machine, we can see this is a relatively new site that was started around 2021.




We can assume that it needed some powerful links to get off the ground.
There are several suspicious high DR links linking to this site.
To look at them, we can jump into the Referring domains report. I’ve chosen to take a quick look at the DR 82 link, but there are many examples of high DR links with spammy content here.




Here’s an example of a DR 82 website linking to this website, and this is what one of the links looks like.




We have just scratched the surface here, but it’s likely that this is not an isolated case. It’s probably fair to say that this looks like a website designed to exploit search engines rather than inform people.
How did it fail?
The website appears to have been hit by two Google updates. With neither the content nor the links being that great, I can’t say I am surprised.
I believe that this site was hit by the following:
Sidenote.
Even after the October Spam Update appears to have crushed the site, it seems there was an attempt to build more links to the site.




What do you think happened?
Did it really fail to fool Google?
Yes. I imagine this site’s traffic will be flat-lining shortly.




Key stats
- 22,852 keywords, October 2021
- 195 pages, October 2021
- Total traffic: 8.3M, October 2021
This website allows users to download YouTube videos by simply entering a URL and then hitting the download button—you can see why this type of website would be popular.
How did it fool Google?
The website used an interesting tactic to stay at the top of the search results and try to fool Google repeatedly.
Let’s take a closer look at what happened.
Content
The murky world of YouTube download sites is often filled with ads, surveys, and random floating buttons.
YT5S is, by contrast, a fairly minimal site, which may be how it became popular early on.




When it comes to content for a YouTube download site, there usually isn’t much on the page—this site doesn’t have a blog, for example.
So its main focus is on content to assist people in downloading YouTube videos. Ahrefs’ SEO Toolbar shows us it has just 818 words on the page. Not a lot.




If we look at its Site structure report, we can see that there are many subfolders with numbers at the end.




Clicking on the numbered variations often results in pages that are 301 redirected to the latest version.
As this is such a competitive industry, certain subfolders will get penalized for various reasons. When a subfolder loses traffic, it gets redirected to a new numbered subfolder.
We see a pattern emerging if we overlap the organic traffic charts for each subfolder.




The site’s owners appear to be 301 redirecting the entire website to a new subfolder once the old subfolder has lost traffic.
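We can’t see the site’s server configuration, but a subfolder rotation like this can be done with a single rewrite rule. Here is a hypothetical Apache .htaccess sketch; the subfolder names are made up for illustration:

```apache
# Hypothetical rule: permanently redirect everything under the burned-out
# /en123/ subfolder to the fresh /en124/ subfolder.
RewriteEngine On
RewriteRule ^en123/(.*)$ /en124/$1 [R=301,L]
```

Each time a numbered subfolder loses traffic, the owners would only need to update this one rule to point at the next number.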
This site is not alone in this tactic. Looking at the Position history in Ahrefs’ Keywords Explorer, we can see that some of the other websites in this category have cottoned on to this strategy and are doing something similar, judging by some of the names of their subfolders.




Links
With a site on this topic, it will be shared widely and linked to naturally. Most of the links appear to be relatively natural.
How did it fail?
The cut-throat nature of this SERP means that it’s highly competitive. The site owners thought they had found a way to get around the loss in traffic by redirecting the website into a new subfolder. It worked for a while. But over time, the impact diminished.
Did it really fail to fool Google?
Yes, for now. But this method will continue being used to try and fool Google in the short term.




Key stats
- 6,347,025 keywords
- 698,643 pages
- Total traffic: 3.8M
In case you were wondering, this site didn’t have much to do with board game tips.
Despite this, it reached 3.9 million organic traffic during its peak through low-quality content and some dubious-looking high DR links.
How did it fool Google?
It seems to have taken PAA scraping to the extreme, sometimes veering way off topic in the process on some of the posts.
From my research, it also had some suspicious-looking links from some unlikely sources.
Let’s take a closer look at what happened.
Content
Using the Top pages report, we can look at some of the articles.
Let’s highlight a section of the “Organic traffic” graph to select the top pages between two dates.




Let’s click through to the Wayback Machine again and take a look at one of the URLs. (I’ve picked a random URL with Wayback history.)




Even from the table of contents, we can see that this is PAA spam. The subheadings wildly switch from deploying parachutes to shotguns, tasers, and guns.
Scrolling down the page further, we get the familiar three or four lines of text followed by a heading.




If we paste the content into Google, we can understand whether it’s scraped.




It looks to me like this answer came from this site.
Turning to Twitter, we can see that we are not the only people to have discovered this.
We can conclude then that the content standard for this site is again incredibly low. But it was enough to fool Google at one point.
Links
There is a horror show of links from this site.
Our old friend, Grid Server (DR 82), is linking to the site again.




Here’s another reference to a broken link to this DR 81 domain, which you may remember from earlier on.




Finally, here’s a dubious-looking link from an unlikely source—Swansea University (UK) (DR 74). It’s one of the domains we saw also linking to our third website.




Should these other academic sites be concerned?




How did it fail?
This site has it all—bad links and scraped content. As the organic traffic drop doesn’t tie in with any specific Google algorithm updates, the website may have received a manual penalty.
What do you think happened?
Did it really fail to fool Google?
Yes. The site owners tried to 301 redirect their old domain to a new domain. Looking at the new domain in Overview 2.0, we see that the new website worked for a while and likely got hit again by a few other updates.








Key stats
- 4,046,226 keywords, April 2022
- 911,725 pages, April 2022
- Total traffic: 1.9M, April 2022
This next website only lasted for a few months, but it did fairly well considering the standard of content and the type of links it had.
How did it fool Google?
As the name suggests, this was a low-quality PAA spam site. It also had some dubious links pointing to it. When the site tanked, the site owners tried to 301 redirect it to a new name, but that site also had issues.
Let’s take a closer look at what happened.
Content
You can probably guess what type of content this will be with a name like this. Let’s look at one of the pages to see how bad it is.




We’ve seen quite a few of these examples. But if I paste the first two lines of content into Google, we can see that this is likely scraped from Quora.




The flow of the content on the page follows the familiar pattern of two or three lines of text followed by a subheading. Scrolling down the page, we see the topics become less related.




Checking a few other pages, I see the content has also been lifted from other sites. It’s relatively safe to assume that this is another PAA spam site.
Links
When we look at links, we see a few of the same culprits again.
Swansea University’s website is back, along with 38 other academic domains.




I didn’t check them all, but it’s likely they’re spam. Some more familiar faces in the links department include our favorite DR 81 website.




As well as Grid Server’s DR 82 links.




It seems reasonably clear that this activity is not normal and may be designed to manipulate rankings.
How did it fail?
Looking at Overview 2.0’s “Performance” graph, we see that the drop in organic traffic occurred before the May 2022 Core Update.




This update may have contributed to the drop, but it doesn’t appear to be the main reason.
It’s clear from this quick analysis that both bad links and poor content are present. It’s possible that this site received a manual action for its links, although it’s impossible to confirm.
What do you think happened?
Did it really fail to fool Google?
Yes. I can’t see any way back for this site.




Key stats
- 79,543 keywords, April 2022
- 1,204 pages, April 2022
- Total traffic: 5.2M, April 2022
This website describes itself as the “world’s best news site”—a bold claim.
Sadly though, it doesn’t appear to have the content or the links to back up this claim.
How did it fool Google?
It mainly focused on providing information to the Indian market about popular movie torrent sites.
When it came to fooling Google, it used suspicious-looking links combined with fairly low-quality content that was just about good enough to fool Google for a short period.
Let’s take a closer look at what happened.
Content
Going to the Top pages report in Ahrefs’ Site Explorer, we can see that something’s not right here.
Rather than providing news content, this site seems to contain content about movie torrent sites for the Indian market.
Let’s look at the top-performing landing page, which got an estimated 4,276,681 organic visits on April 25.




The top-performing page appears to be an instruction manual for downloading movies from this site.
Running the text through an AI detection tool—GLTR—it seems from my tests that this content may be partially AI-generated or just badly translated.




Either way, you’ll probably agree it’s not the type of content you would expect from the “world’s best news site.”
This is low-quality content by most people’s definition.
Links
When it comes to links, I noticed the website has many Blogspot links.
If we go to the Anchors report and add the following settings, we can filter the Blogspot domain anchors. As they are all very specific, it seems likely that this was part of a low-quality link building campaign.




If we click on the “Links to target” number for the second row and filter by “Dofollow,” we can see around 50 links with exactly the same tediously long anchor text.




Again, this type of link building doesn’t look great.
If we head back to the Referring domains report, we can see Grid Server is mentioned again.




Finally, to top it off, it has a DR 67 link from the site below, whose domain was 301 redirected into its domain.




And here’s what this site looks like.




How did it fail?
Looking at Overview 2.0, this site was probably hit by the May 2022 Core Update.




What do you think happened?
Did it really fail to fool Google?
Yes. Looking at the organic traffic, it’s nowhere near the levels it used to be. The site isn’t completely dead, but it’s fair to say that it won’t fool anyone in the future that it’s the “world’s best news site.”
How did you find these sites?
Here’s how we did it: Internally, we call this the “Content Explorer Hack.”




- Go to Ahrefs’ Content Explorer and start an empty search
- Go to the Websites tab
- Filter for high traffic and low DR
And that’s it!
Final thoughts
As we have seen, what Google giveth, it can also taketh away.
The methods used by these sites to rank in Google are obviously not replicable for businesses, and I don’t suggest you try any of the methods above.
Many SEOs like to think that the days of paid links, trashy content, and shortcuts to ranking are long gone. But clearly, from these examples, it’s still possible to get significant organic traffic by breaking the rules—but only if you are willing to risk everything.
Got more questions? Ping me on Twitter. 🙂
SEO
What Are SEO Benchmarks, & Which Ones Actually Matter?


To set goals and track and measure your performance in any campaign, you will need key performance indicators (KPIs) and benchmarks.
But with so many KPIs, knowing exactly which ones you should be benchmarking can be challenging. In this article, we will look at which SEO benchmarks matter and why.
Many people talk about key performance indicators (KPIs) and benchmarks interchangeably, which can be confusing, especially if you’re new to SEO. Although they work together, they are not the same.
KPIs are industry statistics you can use to measure performance over time and give insights as to how effective your SEO campaign is.
Benchmarks, however, are KPIs you set as your reference point when building your SEO strategy.
For example, organic traffic is a KPI. But you can use last month’s organic traffic as a benchmark.
SEO benchmarks allow us to have a before and after picture for any particular KPI. This helps us to see how our SEO campaign is progressing and can help us to adjust our strategy if needed.
Benchmarks also allow us to communicate the value of our work to clients.
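The “before and after” comparison is simple arithmetic: the change in a KPI relative to its benchmark. A small sketch, with made-up traffic numbers purely for illustration:

```python
def change_vs_benchmark(current: float, benchmark: float) -> float:
    """Percentage change of the current KPI value against its benchmark."""
    return (current - benchmark) / benchmark * 100

# Hypothetical numbers: last month's organic traffic is the benchmark.
benchmark = 12_500  # organic visits last month
current = 14_000    # organic visits this month
print(f"{change_vs_benchmark(current, benchmark):+.1f}%")  # +12.0%
```

A positive number means you’ve outperformed the benchmark; a negative one tells you the strategy may need adjusting.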
There are many different KPIs you can measure. And like most things in SEO, which ones you should track will depend on the type of site you’re working on and their individual goals.
However, there are several KPIs that are important for tracking the performance of all websites.
Let’s take a look at which KPIs everyone should be benchmarking and why they are important.
Traffic and user experience benchmarks
Driving users to your site is only part of the work.
If site users have a bad experience, they are likely to leave the site and never return. This is why we want to set not only traffic-related benchmarks but user experience benchmarks too.
Organic search traffic
This metric shows how many users visit your site from unpaid listings on search engines like Google and Bing. You should be tracking traffic on a monthly basis.
When setting benchmarks, generally speaking, it is advisable to use the last full month’s data and not set it any further back than this, as the goal should always be to outperform your closest benchmark.
However, if seasonality is a factor in your business, it’s advisable to use your best month in the peak season as your ongoing benchmark.
For accuracy, when it comes to organic traffic from Google, it is advisable to check Google Search Console (GSC).
There are a number of discrepancies between GSC and Google Analytics due to how they collect data. But when focusing on organic traffic from Google itself, GSC is considered more accurate.
Head over to Google Search Console and go to Performance > Search results.


In the “Performance” report, you will see four metrics. The first metric, “clicks,” is the number of people who clicked through from the Google search results to your website. This is the number we are interested in.




Below this, you can also see the number of clicks at page level.




If you want to split organic traffic by search engine, you can do this with GA4. Go to Acquisition > Traffic acquisition.
Then you can go to “All Users” and choose “First user source / medium” from the “Audience name” drop-down menu.




Then you can select the organic search channels you want to include from the “Dimension values” drop-down menu. This can be all organic traffic from multiple search engines, or you can set individual benchmarks for each search engine, like Bing or Yahoo.




With these filters applied, you will see your website’s organic traffic for the past month. If you would like to see it broken down at the page level, you can simply go to Engagement > Pages and screens.
Engaged sessions
In GA4, “Bounce rate” has essentially been replaced by “Engaged sessions.” In order for a session to be engaged, it must last longer than 10 seconds, have multiple screen or page views, or result in a conversion.
You can see the number of engaged sessions per user in Engagement > Overview.
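GA4's engaged-session definition is simple enough to express directly. The sketch below (session fields and figures are invented for illustration) applies the three conditions described above to classify sessions and compute an engagement rate:

```python
from dataclasses import dataclass

@dataclass
class Session:
    duration_seconds: float
    page_views: int
    had_conversion: bool

def is_engaged(s: Session) -> bool:
    """GA4 counts a session as engaged if it lasts longer than 10 seconds,
    has multiple page or screen views, or includes a conversion."""
    return s.duration_seconds > 10 or s.page_views >= 2 or s.had_conversion

sessions = [
    Session(4, 1, False),    # quick bounce -> not engaged
    Session(35, 1, False),   # stayed > 10s -> engaged
    Session(6, 3, False),    # multiple pages -> engaged
    Session(3, 1, True),     # converted -> engaged
]
engaged_rate = sum(is_engaged(s) for s in sessions) / len(sessions)
print(f"Engagement rate: {engaged_rate:.0%}")  # Engagement rate: 75%
```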




Average engagement time
Average engagement time in GA4 is important because, generally speaking, we want users to stay on the site for a longer period of time.
Low engagement time isn’t always a bad thing. It can simply mean the visitor got what they needed fast. If you’re working with a site that monetizes content like an affiliate site, you will want your visitor to click that affiliate link as soon as possible. So take this one with a grain of salt.
However, it can sometimes be an indicator of:
- Low-quality content
- Poor user experience
Overall average engagement time is listed on the “Report snapshot” in GA4.




But you can get a detailed breakdown in Engagement > Pages and screens.




Backlink profile benchmarks
Backlinks are links from another website to a page on your website. They help Google and other search engines understand your content and how authoritative your website is.
Backlink quality, quantity, relevance, authority, and anchor text are among Google's many ranking factors.
Number of backlinks
You want the number of (quality) links to be growing at a consistent rate. You need backlinks both to rank and maintain your rankings. Benchmarking the number of backlinks your website has will help you to monitor growth as you go forward.
With the Backlinks report in Ahrefs' Site Explorer, you can see the total number of links to your website.




You can also see the number of individual referring domains and how they are growing month over month (and compare that against competitors on the same graph).




This is an important thing to benchmark, as there is a strong positive correlation between the number of referring domains and increased organic traffic.




Domain Rating
Ahrefs' Domain Rating (DR) is a measure of the strength of a website’s backlink profile. It shows how your website’s backlink profile compares to the others in the Ahrefs database on a 100-point scale.
The idea would be for your website’s DR to increase over time as an indication that the strength of your backlink profile is improving.
Benchmarking DR is a pretty common practice, especially among those working with clients who may not fully comprehend SEO and, in particular, link building. It's easier to explain that a rising DR indicates improvement.




URL Rating
Although DR correlates with Google rankings pretty well, it doesn’t do this as well as Ahrefs’ URL Rating (UR). UR is a measure of an individual page’s backlink profile on a 100-point scale.
UR considers both internal and external links and “nofollow” attributes when calculating the UR score, following the same principles as Google’s PageRank. Therefore, benchmarking UR can help you understand how well an individual page can rank on the search engine results pages (SERPs).
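Since UR follows the same principles as PageRank, a toy PageRank over an internal-link graph shows the mechanics: pages accumulate "link equity" from the pages linking to them, damped at each hop. This is a simplified sketch of the classic algorithm, not Ahrefs' actual computation (the graph and damping factor are illustrative):

```python
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Simplified PageRank over a link graph.

    links maps page -> list of pages it links to.
    Returns a rank per page; ranks sum to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                # Dangling page: distribute its rank evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
        rank = new
    return rank

graph = {"home": ["blog", "about"], "blog": ["home"], "about": ["home"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "home" accumulates the most link equity
```

The intuition carries over: a page linked from many strong pages ends up with a higher score, which is why UR correlates with rankings better than a sitewide metric like DR.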




Keyword benchmarks
Keywords are the bread and butter of your SEO campaign. After all, you need to understand what relevant queries your potential audience is searching for in order to optimize your pages.
Individual keyword positions
Your website could naturally rank for thousands of keywords on the SERPs. However, there should be some keywords you care about more than others—likely those that are most relevant to your products or services.
Benchmarking individual keyword positions (where they rank in the search results) will allow you to track and set goals for important keywords. For example, if your website currently ranks in position #6 for “seo consultant,” you can use that as your benchmark to improve upon.
While you can monitor keywords in Google Search Console, using a rank tracking tool like Ahrefs’ Rank Tracker will allow you to track the keywords you care about most and see how you stack up against competitors. You can even get email alerts about the progress of your tracked keywords.




Keyword profile value
Although benchmarking the keyword profile value may not be relevant for everyone, I find that for anyone working with clients, it can help them to relay the value of the work they’re doing. Keyword profile value can be seen in Ahrefs’ Site Explorer as “Traffic value.”
Organic traffic value is the equivalent monthly cost of traffic from all keywords that the target website/URL ranks for if paid via PPC instead of ranking organically.
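That definition boils down to a sum over your ranking keywords: traffic from each keyword multiplied by what that click would cost in PPC. A minimal sketch (keyword names, traffic, and CPC figures are made up; this mirrors the idea, not Ahrefs' exact model):

```python
def traffic_value(keywords: list) -> float:
    """Estimated monthly traffic value: organic traffic per keyword
    times the CPC you would otherwise pay for that click."""
    return sum(kw["monthly_traffic"] * kw["cpc_usd"] for kw in keywords)

keywords = [
    {"keyword": "seo consultant", "monthly_traffic": 1200, "cpc_usd": 9.50},
    {"keyword": "link building",  "monthly_traffic": 800,  "cpc_usd": 6.00},
    {"keyword": "what is seo",    "monthly_traffic": 5000, "cpc_usd": 1.20},
]
print(f"Traffic value: ${traffic_value(keywords):,.2f}/month")
# Traffic value: $22,200.00/month
```

Framing organic results in ad-spend dollars is exactly why this metric works well in client reporting.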




Keyword Difficulty
Ahrefs’ Keyword Difficulty (KD) is a metric that can help you determine how hard it would be to rank in the top 10 for a given keyword in a given country.
It is calculated by taking a trimmed mean of the number of linking domains to the current top 10 ranking pages and then plotting the result on a logarithmic scale from 0 to 100.
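The two ingredients named above, a trimmed mean and a logarithmic 0-100 scale, can be sketched as follows. Ahrefs' real formula is proprietary, so the trim fraction and log scaling here are guesses purely for illustration:

```python
import math

def trimmed_mean(values: list, trim: float = 0.1) -> float:
    """Mean after dropping the lowest and highest `trim` fraction."""
    vals = sorted(values)
    k = int(len(vals) * trim)
    trimmed = vals[k:len(vals) - k] if k else vals
    return sum(trimmed) / len(trimmed)

def keyword_difficulty(linking_domains_top10: list,
                       max_domains: int = 1000) -> int:
    """Illustrative KD: trimmed mean of referring domains for the
    current top 10, mapped onto a 0-100 logarithmic scale.
    (The scaling constants are assumptions, not Ahrefs' formula.)"""
    mean = trimmed_mean(linking_domains_top10)
    score = 100 * math.log1p(mean) / math.log1p(max_domains)
    return round(min(score, 100))

# Referring-domain counts for a hypothetical top 10; the trim drops
# the outliers (3 and 300) before averaging.
print(keyword_difficulty([3, 5, 8, 12, 15, 20, 25, 40, 60, 300]))  # 46
```

The trimmed mean keeps one freak result (a page ranking with almost no links, or a DR 90 giant) from skewing the score, and the log scale reflects that going from 10 to 100 referring domains is a much bigger jump than 910 to 1,000.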




KD only takes into account linking domains, but there are many other variables you will need to rank highly, like great content. However, it is a good indicator.
KD can be used as a benchmark for choosing keywords. For example, you may find that, currently, you can only rank for keywords that are considered “easy” or “medium” in terms of KD. Whereas your most important keywords may be considered “hard.”
However, the level of KD you can achieve should improve over time. That’s why KD can be an important metric to benchmark and improve upon.




Share of voice
Share of voice (SOV) takes rank tracking to another level. You can see SOV in the Overview report in Rank Tracker.




The SOV metric shows you the percentage of all possible organic clicks (from the SERPs) for the tracked keywords landing on your website. It basically shows you how visible your brand is on the SERPs.
There is a strong positive correlation between SOV and market share. So it is an important KPI to benchmark.
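As a calculation, SOV is just your clicks divided by all possible clicks for the tracked keyword set. A small sketch (the sites and click counts are hypothetical):

```python
def share_of_voice(your_clicks: int, total_serp_clicks: int) -> float:
    """SOV = your organic clicks on tracked keywords divided by all
    possible organic clicks those SERPs generate."""
    return your_clicks / total_serp_clicks if total_serp_clicks else 0.0

# Hypothetical monthly organic clicks across a tracked keyword set.
serps = {"yoursite.com": 4200, "competitor-a.com": 6100,
         "competitor-b.com": 2700, "others": 8000}
total = sum(serps.values())
for site, clicks in serps.items():
    print(f"{site:>20}: {share_of_voice(clicks, total):.1%}")
```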




By heading to the “Competitors” tab in Rank Tracker and entering the websites you consider your competition, you can compare your SOV to those sites.




Final thoughts
Benchmarking important KPIs is one of the best ways not only to see where your website currently stands but also to gather data you can improve upon. It allows you to set strategic goals and measure ongoing performance.
Got questions? Ping me on Twitter.
SEO
YouTube Changes Election Misinformation Policy


In a significant policy shift, YouTube announced it wouldn’t remove content suggesting that fraud, errors, or glitches occurred in the 2020 US Presidential and other US elections.
The company confirmed this reversal of its election integrity policy on Friday.
In this article, we’re diving deep into YouTube’s decision. What led to this point?
It’s not just YouTube, though. We’re seeing this delicate dance all around the tech world. Platforms are trying to figure out how to let people express themselves without letting misinformation run wild.
Look at this balancing act and how it’s playing out.
A Shift Towards Free Speech?
YouTube first implemented its policy against election misinformation in December 2020, once several states certified the 2020 election results.
The policy aimed to prevent the spread of misinformation that could incite violence or cause real-world harm.
However, the company is concerned that maintaining this policy may have the unintended effect of stifling political speech.
Reflecting on the impact of the policy over the past two years, which led to tens of thousands of video removals, YouTube states:
“Two years, tens of thousands of video removals, and one election cycle later, we recognized it was time to reevaluate the effects of this policy in today’s changed landscape. With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”
In the coming months, YouTube promises more details about its approach to the 2024 election.
Other Misinformation Policies Unchanged
While this change shifts YouTube’s approach to election-related content, it doesn’t impact other misinformation policies.
YouTube clarifies:
“The rest of our election misinformation policies remain in place, including those that disallow content aiming to mislead voters about the time, place, means, or eligibility requirements for voting; false claims that could materially discourage voting, including those disputing the validity of voting by mail; and content that encourages others to interfere with democratic processes.”
The Greater Context: Balancing Free Speech and Misinformation
This decision occurs in a broader context where media companies and tech platforms are wrestling with the balance between curbing misinformation and upholding freedom of speech.
With that in mind, there are several implications for advertisers and content creators.
Implications For Advertisers
- Brand Safety Concerns: Advertisers may be concerned about their ads appearing alongside content that spreads election misinformation.
- Increased Scrutiny: With this change, advertisers may have to scrutinize more closely where their ads are being placed.
- Potential for Boycotts: If certain brands’ advertisements are repeatedly seen on videos spreading election misinformation, it could lead to consumer boycotts.
Implications For Content Creators
- Monetization Opportunities: This could open up new monetization opportunities for content creators who focus on political content, particularly those previously penalized under the old policy.
- Increased Viewership: If their content is no longer being removed, specific creators might see an increase in viewership, leading to higher ad revenue and more engagement.
- Potential Backlash: On the flip side, content creators could face backlash from viewers who disagree with the misinformation or those who feel the platform should be taking a stronger stand against such content.
It’s important to note these are potential implications and may not be realized universally across the platform.
The impact will likely vary based on specific content, audience demographics, advertiser preferences, and other factors.
In Summary
YouTube’s decision showcases the ongoing struggle to balance freedom of speech and prevent misinformation.
If you’re an advertiser on the platform, remember to be vigilant about where your ads are placed.
For content creators, this change could be a double-edged sword. While it may bring more ad revenue, there's a risk of viewers associating their content, and the ads on it, with misinformation.
As participants in the digital world, we should all strive for critical thinking and fact-checking when consuming content. The responsibility to curb misinformation doesn’t rest solely with tech platforms – it’s a collective task we all share.
Source: YouTube
Featured image generated by the author with Midjourney.
SEO
New Ecommerce Exploit Affects WooCommerce, Shopify, Magento


A serious hacking attack has been exploiting ecommerce websites to steal credit card information from users and to spread the attack to other websites.
These attacks are Magecart-style skimmers, and they are spreading worldwide across multiple ecommerce platforms.
Attackers are targeting a variety of ecommerce platforms:
- Magento
- Shopify
- WooCommerce
- WordPress
What Does the Attack Do?
The attackers have two goals when infecting a website:
1. Use the site to spread itself to other sites
2. Steal personal information like credit card data from customers of the infected website.
Identifying an infection is difficult because the code dropped on a website is encoded and sometimes disguised as a Google Tag or a Facebook Pixel snippet.


What the code does, however, is target input forms for credit card information.
It also serves as an intermediary to carry out attacks on behalf of the attacker, thus covering up the true source of the attacks.
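Because the injected loader is encoded and disguised as analytics code, one crude way to triage a page is to scan its inline scripts for obfuscation tells. This is a deliberately simple illustration (the patterns and sample HTML are invented; real skimmer detection is far more involved and this will flag some legitimate code):

```python
import re

# Heuristics for obfuscated loaders: base64 decoding, char-code
# assembly, or long base64-like blobs embedded in a script.
SUSPICIOUS = [
    re.compile(r"atob\s*\("),                  # base64 decode in JS
    re.compile(r"String\.fromCharCode"),       # char-code obfuscation
    re.compile(r"[A-Za-z0-9+/]{120,}={0,2}"),  # long base64-like blob
]

def suspicious_scripts(html: str) -> list:
    """Return the bodies of inline <script> tags matching any heuristic."""
    scripts = re.findall(r"<script[^>]*>(.*?)</script>", html,
                         re.DOTALL | re.IGNORECASE)
    return [s for s in scripts if any(p.search(s) for p in SUSPICIOUS)]

page = '<script>var a=1;</script><script>eval(atob("ZG9TdGVhbCgp"));</script>'
print(len(suspicious_scripts(page)))  # 1
```

A scan like this is only a starting point; the report's recommendations below (patching, a WAF, and client-side script monitoring) are the actual defense.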
Magecart Style Skimmer
A Magecart attack is an attack that enters through an existing vulnerability on the ecommerce platform itself.
On WordPress and WooCommerce it could be a vulnerability in a theme or plugin.
On Shopify, it could be an existing vulnerability in that platform.
In all cases, the attackers are taking advantage of vulnerabilities that are present in the platform the ecommerce sites are using.
This is not a case where there is one single vulnerability that can be conveniently fixed. It’s a wide range of them.
The report by Akamai states:
“Before the campaign can start in earnest, the attackers will seek vulnerable websites to act as “hosts” for the malicious code that is used later on to create the web skimming attack.
…Although it is unclear how these sites are being breached, based on our recent research from similar, previous campaigns, the attackers will usually look for vulnerabilities in the targeted websites’ digital commerce platform (such as Magento, WooCommerce, WordPress, Shopify, etc.) or in vulnerable third-party services used by the website.”
Recommended Action
Akamai recommends that all ecommerce site owners secure their websites. That means making sure all third-party apps and plugins are updated and that the platform is running its very latest version.
They also recommend using a Web Application Firewall (WAF), which detects and blocks intrusion attempts when hackers probe a site for vulnerabilities.
Users of platforms like WordPress have multiple security solutions, with popular and trusted ones being Sucuri Security (website hardening) and WordFence (WAF).
Akamai recommends:
“…the complexity, deployment, agility, and distribution of current web application environments — and the various methods attackers can use to install web skimmers — require more dedicated security solutions, which can provide visibility into the behavior of scripts running within the browser and offer defense against client-side attacks.
An appropriate solution must move closer to where the actual attack on the clients occurs. It should be able to successfully identify the attempted reads from sensitive input fields and the exfiltration of data (in our testing we employed Akamai Page Integrity Manager).
We recommend that these events are properly collected in order to facilitate fast and effective mitigation.”
Read the original report for more details:
New Magecart-Style Campaign Abusing Legitimate Websites to Attack Others