
Why we’re hardwired to believe SEO myths (and how to spot them!)


Give someone a fish and they’ll EAT for one day. Teach someone to fish and they’ll EAT for a lifetime. Yes, that’s an SEO pun. It’s also the goal of this article.

If you pop into either of the fantastic SEO communities on Twitter or LinkedIn, you’ll inevitably encounter some common SEO myths:

  • “Longer dwell time means a good user experience, so it must be a ranking factor”
  • “A high bounce rate indicates a bad user experience, so it must be bad for SEO”

Social media posts like these get tons of engagement. As a result, they amplify the myths we try to squash through repetition, false evidence, and faulty logic. The problem isn’t limited to social media, either. There are plenty of high-profile websites that package hypotheses as facts because readers eat them up.

These myths are a huge problem because they’re red herrings. They cause marketers to prioritize projects that won’t improve the content, user experience, or Google search performance.

So how can the SEO community rally around the truth? We can start by doing two things:

  1. SEOs must admit our personalities and professions hardwire us to believe myths. We have a deep desire for answers, control, and predictability, as well as a fierce distrust of Google.
  2. We need to recognize the psychological and environmental factors that influence our ability to sort fact from fiction.

So rather than busting individual myths, let’s ask ourselves “why?” instead. In other words, let’s learn to fish.

Internal reasons we believe SEO myths

Let’s dig into some internal factors, such as our thoughts and feelings, that influence our beliefs.

1. SEOs need structure and control

SEO is a fascinating branch of marketing because our performance is driven by a constantly evolving algorithm that we don’t control. In fact, there were more than 5,000 Google algorithm updates in 2021 alone.

In other words, SEOs live in a world of crippling dependency. Even the top-ranking signals that we know about can fluctuate based on the industry, query, or available content within Google’s index. For example, if you manage websites in the finance or health space, E-A-T is critical. If you publish news content, then recency is very important.

To gain a sense of structure and control, we look for more ways to influence outcomes. But there are two problems with that approach:

  • We overestimate the impact of individual ranking factors
  • We falsely believe something is a Google ranking factor that is not

Our need to amplify our own sense of control is supported by psychology. A 2016 study revealed that an individual’s need for structure made them more likely to believe in conspiracy theories.

“The human tendency to recognize patterns even when none exist is shown to have applications in consumer behavior. The current research demonstrates that as one’s personal need for structure (PNS) increases (that is, requiring predictability and disfavoring uncertainty), false consumer pattern perceptions emerge.”

If you find yourself waffling between fact and fiction, don’t let your desire for control dictate your final decision.

2. The primal need to recognize patterns

The human brain is excellent at recognizing patterns. Throughout history, we’ve relied on that ability to make better decisions and ensure the survival of our species. Unfortunately, we’re so good at spotting patterns that we also fabricate them.

False pattern recognition has several drawbacks:

  • It might influence SEO decisions that could have a sitewide impact
  • If you overstate the connection publicly, others might misinterpret it as fact

An excellent example surfaced on Twitter recently. Google’s John Mueller was asked if adding too many links to your site’s main navigation could impact Google Discover traffic. The individual who asked the question ran several tests and saw positive results, but Mueller said it was merely an interesting correlation.

“I’d still go with ’unrelated’. As mentioned in our docs: Given the serendipitous nature of Discover, traffic from Discover is less predictable or dependable when compared to Search, and is considered supplemental to your Search traffic.”

Fortunately, this individual went straight to the source for an answer instead of publishing a case study that could have had serious implications for website navigation decisions.

3. Confirmation bias

It’s well-documented that people accept information that supports their beliefs and reject information that doesn’t. It’s a primordial trait that evolved when we began to form social groups. Early humans surrounded themselves with others who thought and acted the same way to ensure their survival.

One of the most famous confirmation bias studies comes from Stanford. For the study, researchers segmented students into two opposing groups based on their beliefs about capital punishment.

One group supported capital punishment and believed it reduced crime. The other opposed it and believed it had no impact on crime.

Each group was asked to react to two studies, one which supported their views, and one which contradicted them. Both groups found the study that aligned with their beliefs much more credible, and each became more entrenched in their original beliefs.

SEO practitioners are particularly prone to confirmation bias because we’re terrified of being wrong. We hypothesize, test, build, optimize, and iterate. If we’re wrong too often, we’ll waste time and money, and we could risk our reputation and our jobs.

We need to be right so badly that we may accept myths that confirm our beliefs rather than admit failure.

4. Lack of trust in Google

It’s safe to say most SEOs don’t trust Google. That has led to some of the longest-running SEO myths I could find. For example, even after seven years of repeated rejections from Google, many SEO experts still believe engagement is a ranking signal.

Here’s John Mueller shooting down the engagement myth in 2015:

“I don’t think we even see what people are doing on your website. If they are filling out forms or not, if they are converting and actually buying something… So if we can’t see that, then that is something we cannot take into account. So from my point of view, that is not something I’d really treat as a ranking factor.”

Nearly seven years later, in March 2022, John was asked the same question again, and his response was pretty much the same:

“So I don’t think we would use engagement as a factor.”

And yet, the SEOs piled on in the comments. I encourage you to read them if you want a sense of the intense level of mistrust. Essentially, SEOs overanalyzed Mueller’s words, questioned his honesty, and claimed he was misinformed because they had contradictory insider information.

5. Impostor syndrome

Even the most seasoned SEO professionals admit they’ve felt the pain of impostor syndrome. You can easily find discussions on Reddit, Twitter, and LinkedIn about how we question our own level of knowledge. That’s especially true in public settings when we’re surrounded by our peers.

Not long ago, Azeem Ahmad and Izzie Smith chatted about impostor syndrome. Here’s what Izzie said:

“It’s really hard to put yourself out there and share your learnings. We’re all really afraid. I think most of us have this impostor syndrome that’s telling us we’re not good enough.”

This contributes to SEO myths in several ways. First, it erodes self-confidence, which makes individuals more prone to believe myths. Second, it prevents folks who might want to challenge inaccurate information from speaking out publicly because they’re afraid they’ll be attacked.

Needless to say, that enables myths to spread throughout the broader community.

The best way to combat impostor syndrome is to ensure SEO communities are safe and supportive of new members and new ideas. Be respectful, open-minded, and accepting. If more folks speak out when something doesn’t feel accurate, then we can keep some troublesome myths in check.

External reasons we believe SEO myths

Now let’s explore the external forces, like peers and publishers, that cause us to believe SEO myths.

1. Peer pressure

Peer pressure is closely related to impostor syndrome, except it comes from the outside. It’s a feeling of coercion from peers, whether that’s a large group of SEOs, a widely known expert, or a close mentor or colleague.

Because humans are social creatures, our urge to fit in often overpowers our desire to be right. When something doesn’t feel right, we go with the flow anyway for fear of being ostracized. In fact, social proof can be more persuasive than purely evidence-based proof.

I asked the Twitter SEO community if anyone ever felt compelled to accept an SEO ranking factor as fact based on popular opinion. Several folks replied, and there was an interesting theme around website code.

“Back in 2014, a web developer told me he truly believed text-to-code ratio was a ranking factor. For a while, I believed him because he made convincing arguments and he was the first developer I met who had an opinion about SEO.”

—  Alice Roussel

“Years and years ago I wanted code quality to be a ranking factor. Many thought it was because it made sense to reward well-written code. But it never was. Browsers had to be very forgiving because most sites were so badly built.”

—  Simon Cox

Similar to combating impostor syndrome, if we develop a more tolerant SEO community that’s willing to respectfully debate issues, we’ll all benefit from more reliable information.

2. Outdated information

If you publish content about SEO, then you’ll be guilty of spreading SEO myths at some point. Google updates its algorithms thousands of times each year, which means assumptions are disproven and once-good advice becomes outdated.

Trusted publishers have a duty to refresh or remove inaccurate content to prevent SEO misconceptions from spreading.

For example, in 2019 Google changed how it handles outbound links. It introduced two new link attributes alongside nofollow, rel=”ugc” and rel=”sponsored”, and began treating all three as hints rather than ignoring nofollowed links outright.

So if you wrote about link attributes prior to September 2019, your advice is probably out of date.

Unfortunately, most SEOs update content because it’s underperforming, not because it’s wrong. So perhaps publishers should put integrity above performance to strengthen our community.

3. Jumping on trends

Sometimes SEO myths explode because the facts can’t keep up with the virality of the myth. One of my favorite examples is the LSI keyword trend. This one pops up on Twitter from time to time, and thankfully Bill Slawski is quick to quash it.

Trend-based myths go viral because they tap into the fear of missing out (FOMO), and SEOs hate to miss out on the opportunity to gain a competitive advantage. They also resonate with SEOs because they appear to offer a secret glimpse into Google’s black box.

Although trends eventually fade, they will remain a thorn in our side as long as the original sources remain unchanged.

4. Correlation vs causation

The most difficult myths to bust are those backed by data. No matter how many times Google debunks them, they won’t die if folks come armed with case studies.

Take exact match domains (EMD) for example. This article lists several reasons why EMDs are good for SEO, using Hotels.com as a case study. But it’s a classic chicken and egg argument. Does the site rank number one for “hotels” because it’s an EMD? Or is it because the owner clearly understood SEO strategy and prioritized keyword research, link building, internal links, page speed, and high-quality content marketing for the last 27 years?

We also can’t discount the fact that the domain has 42 million backlinks.

But if you want to hear it directly from the horse’s mouth, Google’s John Mueller says EMDs provide no SEO bonus. Here’s what he said on Reddit:

“There’s no secret SEO bonus for having your keywords in the domain name. And for those coming with “but there are keyword domains ranking well” — of course, you can also rank well with a domain that has keywords in it. But you can rank well with other domain names too, and a domain won’t rank well just because it has keywords in it.”

This is obviously correlation, not causation.

To be clear, I fully support running SEO tests to learn more about Google’s algorithm. But it’s incredibly difficult to create a signal vacuum that prevents outside influences from skewing your results. And even if you manage to isolate one ranking factor, you have no way of knowing how strong the signal is in relation to other signals. In a total vacuum, one signal may win. But in the wilderness of Google, it may be so weak that it’s virtually nonexistent.

Furthermore, the signal may only apply to certain types of content. We’ve seen signal fluctuations before regarding product reviews and E-A-T in YMYL spaces. So even if data suggests something might improve organic rankings, how reliable is the information, and how important is the signal?

All this is to say that we should be very careful when proclaiming new ranking factors, especially if they contradict Google’s statements or stray too far from universally measuring user experience.

5. It’s plausible, but not measurable

This group of myths is rooted in logic, which makes them particularly dangerous and sticky. Usually, they follow a simple formula: if A = B, and B = C, then A = C.

Here’s an example:

  • Google wants to rank content that provides a good user experience
  • If a webpage has a high bounce rate, it must provide a bad user experience
  • Therefore, a high bounce rate is bad for SEO

This seems to make sense, right? Yet, Google has said many times they can’t see what users do on your website, and they don’t look at bounce rate.

I’ve seen the same argument applied to dwell time, time on page, SERP click-through rates (CTR), and so on. To be clear, Google says CTR does not drive organic search engine rankings because that would cause results to be overrun with spammy, low-quality content.

Most often these myths stem from competing views about what a good user experience looks like and how to measure it. What constitutes a good experience for one type of search query might be a terrible experience for another. This lack of consistency makes it virtually impossible to identify metrics that can be deployed universally across all websites.

In other words, if potential user experience signals depend on too many factors, Google can’t use them. That’s why Google launched the page experience update in 2021, which quantifies user experience with specific, universal metrics.

Here’s your fishing pole

In many cases, SEO myths fall into more than one of the above categories, which makes them even more difficult to dispel. That’s why we keep seeing social media posts falsely identifying ranking factors like keyword density, domain authority, conversions, and meta keywords.

If you understand a few basic concepts about ranking factors, you’ll be better equipped to sort fact from fiction and prioritize SEO initiatives that drive more organic traffic.

Ask yourself these five questions when you smell the stench of a myth:

  • Is it quantifiable and measurable?
  • Is it scalable?
  • Is it broadly or universally true, or does it depend on the user?
  • Does it support Google’s goals of delivering a better user experience?
  • Has Google confirmed or denied it publicly?

If you can check each of those boxes, then you may have a valid ranking factor on your hands. But don’t take my word for it. Run some tests, ask some friends, use logic, and confirm your theory. And if all else fails, just ask John Mueller.


Jonas Sickler is a published author and digital marketer. He writes about SEO, brand reputation, customer attention, and marketing. His advice has appeared in hundreds of publications, including Forbes, CNBC, CMI, and Search Engine Watch. He can be found on Twitter @JonasSickler.



Google Updates Search Console Video Indexing Report

Google’s updated Search Console Video indexing report now includes daily video impressions and a sitemap filter feature.

  • Google has updated the Search Console Video indexing report to provide more comprehensive insights into video performance in search results.
  • The updated report includes daily video impressions, which are grouped by page, and a new sitemap filter feature to focus on the most important video pages.
  • These updates are part of Google’s ongoing efforts to help website owners and content creators understand and improve the visibility of their videos in search results.




Bing Revamps Crawl System To Enhance Efficiency

According to a recent study by Bing, most websites have XML sitemaps, with the “lastmod” tag being the most critical component of these sitemaps.

The “lastmod” tag indicates the last time the webpages linked by the sitemap were modified and is used by search engines to determine how often to crawl a site and which pages to index.

However, the study also revealed that a significant number of “lastmod” values in XML sitemaps were set incorrectly, with the most prevalent issue being identical dates on all sitemaps.

Upon consulting with web admins, Microsoft discovered that the dates were set to the date of sitemap generation rather than content modification.

To address this issue, Bing is revamping its crawl scheduling stack to better utilize the information provided by the “lastmod” tag in sitemaps.

This will improve crawl efficiency by reducing unnecessary crawling of unchanged content and prioritizing recently updated content.

The improvements have already begun on a limited scale and are expected to fully roll out by June.

Additionally, Microsoft has updated sitemaps.org for improved clarity by adding the following line:

“Note that the date must be set to the date the linked page was last modified, not when the sitemap is generated.”

How To Use The Lastmod Tag Correctly

To correctly set the “lastmod” tag in a sitemap, you should include it in the <url> tag for each page in the sitemap.

The date should be in W3C Datetime format, with the most commonly used formats being YYYY-MM-DD or YYYY-MM-DDThh:mm:ssTZD.

The date should reflect the last time the page was modified and should be updated regularly to ensure that search engines understand the relevance and frequency of updates.

Here’s an example code snippet:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <loc>http://www.example.com/</loc>
      <lastmod>2023-01-23</lastmod>
   </url>
</urlset>
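
If you generate sitemaps programmatically, a reliable way to avoid the mistake Bing flagged is to derive the “lastmod” value from each page’s actual content modification time instead of the sitemap’s build time. Here’s a minimal sketch in Python; the URL-to-file mapping and paths are illustrative assumptions, not part of Bing’s or Google’s guidance.

# Minimal sketch: set <lastmod> from each page's file modification time,
# not the sitemap generation time. The URL-to-file mapping is hypothetical.
import os
from datetime import datetime, timezone

pages = {
    "http://www.example.com/": "site/index.html",
    "http://www.example.com/about": "site/about.html",
}

entries = []
for url, path in pages.items():
    # Use the file's mtime (content modification), never datetime.now().
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    lastmod = mtime.strftime("%Y-%m-%d")  # W3C Datetime, date-only form
    entries.append(
        f"   <url>\n      <loc>{url}</loc>\n      <lastmod>{lastmod}</lastmod>\n   </url>"
    )

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + "\n".join(entries)
    + "\n</urlset>"
)
print(sitemap)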

Google’s Advice: Use Lastmod Tag After Significant Changes Only

Google’s crawlers also use the “lastmod” tag, and both major search engines offer similar guidance on how to use it.

Google Search Advocate John Mueller recently discussed the lastmod tag in the January edition of Google’s office-hours Q&A sessions.

It’s worth noting that Google recommends only using the “lastmod” tag for substantial modifications, which was not mentioned in Microsoft’s blog post.

Changing the date in the lastmod tag after minor edits can be viewed as an attempt to manipulate search snippets.

In Summary

Microsoft’s recent study and efforts to improve the utilization of the “lastmod” tag in sitemaps will result in more efficient and effective webpage crawling.

Publishers are encouraged to regularly update their sitemaps and lastmod tags to ensure that their pages are correctly indexed and easily accessible by search engines.



Source: Microsoft




Marketing Attribution: Everything You Need To Know

Now more than ever, marketing and sales leaders are taking a critical look at where to allocate their resources and how to staff their teams.

Attribution modeling is one of the best tools for providing clear guidance on what’s working, and what isn’t.

What Is Marketing Attribution?

Marketing attribution is the approach to understanding how various marketing and sales touchpoints influence a prospect’s move from visitor to lead to customer.

By implementing attribution in your organization, you’ll have a better idea of:

  • Which channels are most influential during different phases of the sales cycle.
  • Which content formats are more or less impactful in your marketing or sales enablement efforts.
  • Which campaigns drove the most revenue and return on investment (ROI).
  • The most common sequence of online or offline events that prospects interact with before becoming a customer.

Why Is Attribution Important In Marketing?

Analyzing attribution data provides you with an understanding of which marketing, sales, and customer success efforts are contributing most effectively and efficiently toward revenue generation.

Attribution modeling helps you identify opportunities for growth and improvement, while also informing budget allocation decisions.

With accurate attribution models, marketers can make more informed decisions about their campaigns, increasing ROI and reducing budget wasted on ineffective strategies.

What Are The Challenges Of Marketing Attribution?

Developing a perfect attribution model that guides all of your decisions is a pipe dream for most marketers.

Here are five challenges that result in inconclusive data models or total project abandonment:

Cross-Channel Management

This is a common challenge for enterprise marketers who have web assets across multiple websites, channels, and teams.

Without proper analytics tagging and system settings configuration, your web activities may not be tracked accurately as a visitor goes from one campaign micro-site to the main domain.

Or, the prospect may not be tracked as they go from your website to get directions, then visit your physical storefront to transact.

Making Decisions Based On Small Sample Sizes

For smaller trafficked websites, marketers using attribution data may not have statistically significant data sets to draw accurate correlations for future campaigns.

This results in faulty assumptions and the inability to repeat prior success.

Lack Of Tracking Compliance

If your attribution models rely on offline activities, then you may require manual imports of data or proper logging of sales activities.

From my experience in overseeing hundreds of CRM implementations, there is always some level of non-compliance in logging activities (like calls, meetings, or emails). This leads to skewed attribution models.

Mo’ Models, Mo’ Problems

Each analytics platform has a set of five or more attribution models you can optimize your campaigns around.

Without a clear understanding of the pros and cons of each model, the person building the attribution reporting may not be structuring or configuring them to align with your organizational goals.

Data Privacy

Since GDPR, CCPA, and other privacy laws were enacted, analytics data continues to get murkier each year.

For organizations that rely on web visitors to opt-in to tracking, attribution modeling suffers due to the inability to pull in tracking for every touchpoint.

How Do You Measure Marketing Attribution?

Measuring attribution is all about giving credit where it is due. There are dozens of attribution tools out there that assign credit to digital or offline touchpoints.

Attribution measurement starts with choosing the data model that aligns with your business goals.

Certain attribution models favor interactions earlier in the customer journey, whereas others give the most credit to interactions closer to a transaction.

Here is a scenario of how to measure marketing attribution in a first-touch attribution model (we’ll get to the different models next):

A prospect comes to the website through a paid search ad and reads the blog.

Two days later, she comes back to the site and views a couple of product pages.

Three days later, she comes back through an organic listing from Google and then converts on the site by signing up for a discount coupon.

With a first-touch attribution model, the paid search ad will get 100% of the credit for that conversion.
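
To make the mechanics concrete, here’s a minimal sketch of first-touch credit assignment for that journey, assuming a simple ordered list of recorded touchpoints (the channel labels are illustrative):

# Minimal sketch of first-touch attribution: the first recorded
# touchpoint receives 100% of the conversion credit.
journey = ["paid search", "direct", "organic search"]
conversion_value = 100.0

credit = {channel: 0.0 for channel in journey}
credit[journey[0]] = conversion_value  # the first touch gets everything

print(credit)  # {'paid search': 100.0, 'direct': 0.0, 'organic search': 0.0}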

As you can see, choosing the “right” model can be a contentious issue, as each model gives a percentage of credit to a specific interaction or placement along the path toward becoming a customer.

If your business relies on paid search, SEO, offline, and other channels, then likely one of the individuals working on one of those channels is going to look like the superhero, whereas the other marketers will look like they aren’t pulling their weight.

Ideally, when you are choosing an attribution tool, you’ll be able to build reports that allow you to compare various attribution models, so you have a better understanding of which channels and interactions are most influential during certain time periods leading up to conversion or purchase.

What Are Different Marketing Attribution Models?

Marketers can use various marketing attribution models to examine the effectiveness of their campaigns.

Each attribution tool will have a handful of models you can optimize campaigns and build reports around. Here is a description of each model:

First-Click Attribution

This model gives credit to the first channel that the customer interacted with.

This model is popular when optimizing for brand awareness and top-of-funnel conversions/engagement.

Last-Click Attribution

This model gives all of the credit to the last channel that the customer interacts with.

This model is useful when looking to understand which channels/interactions were most influential immediately before converting/purchasing.

Last-click attribution is the default attribution model for Google Analytics.

Multi-Touch/Channel Attribution

This model gives credit to all of the channels or touchpoints that the customer interacted with throughout their journey.

This model is used when you want to weight all interactions evenly or emphasize specific ones.

There are variations of the multi-touch model including time-decay, linear, U-shaped, W-shaped, and J-shaped.

Customized

This model allows you to manually set the weight for individual channels or placements within the customer journey.

This model is best for organizations that have experience in using attribution modeling, and have clear goals for what touchpoints are most impactful in the buyers’ journey.
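
To illustrate how the weighting differs, here’s a rough sketch that splits credit for the same three-touch journey under first-click, last-click, linear multi-touch, and a custom model. The weights are assumptions chosen for illustration, not how any particular tool computes credit.

# Illustrative sketch: splitting 100% of the conversion credit across one
# journey under different attribution models. All weights are assumptions.
journey = ["paid search", "direct", "organic search"]
n = len(journey)

def assign(weights):
    return {channel: round(w * 100, 1) for channel, w in zip(journey, weights)}

models = {
    "first-click": assign([1.0] + [0.0] * (n - 1)),
    "last-click": assign([0.0] * (n - 1) + [1.0]),
    "linear": assign([1 / n] * n),
    "custom": assign([0.5, 0.2, 0.3]),  # hand-tuned example weights
}

for name, credit in models.items():
    print(f"{name}: {credit}")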

Marketing Attribution Tools

There are several different tools available to help marketers measure and analyze marketing attribution. Some attribution tools are features within marketing automation platforms or CRM systems like Active Campaign or HubSpot.

Others are stand-alone attribution tools that rely on APIs or integrations to pull in and analyze data, like Triple Whale or Dreamdata.

As you are evaluating tools, consider how much offline or sales data needs to be included within your attribution models.

For systems like HubSpot, you can include sales activities (like phone calls and 1:1 sales emails) and offline list import data (from tradeshows).

Other tools, like Google Analytics, are not natively built to pull in that kind of data and would require advanced development work to include these activities as part of your model.

(Full disclosure: I work with HubSpot’s highest-rated partner agency, SmartBug Media.)

Additionally, if you need to be able to see the very specific touchpoints (like a specific email sent or an ad clicked), then you need a full-funnel attribution system that shows this level of granularity.

Attribution modeling is a powerful tool that marketers can use to measure the success of their campaigns, optimize online/offline channels, and improve customer interactions.

It is important, though, to understand attribution’s limitations, the pros and cons of each model, and the challenges with extracting conclusive data before investing large budgets towards attribution technology.



