
SEO

The ultimate 2022 Google updates round up


30-second summary:

  • 2022 saw nine confirmed updates (including two core updates), five unconfirmed instances where volatility was observed in page rankings, and one data outage that caused chaos for 48 hours
  • Video and commerce sites were the biggest winners in the May core update, while reference and news sites lost out most, especially outlets without industry specificity
  • This theme largely continued and saw ripple effects from the helpful content update
  • What were these ebbs and flows, who won, who lost? Let’s find out!
  • Joe Dawson takes us through another round-up post that gives you the complete picture of Google’s moves

Only three things are certain in this life – death, taxes, and an industry-wide hubbub whenever Google launches an algorithm update. Like any year, 2022 has seen substantial changes in how the world’s largest search engine manages traffic and page rankings, with some businesses winning and others losing out.

Arguably the most significant change of 2022 has been the rise of AI content creation, which became a hot topic in the world of marketing software. The “helpful content” updates were intended to bolster content written by human beings, penned with consumer needs in mind, over auto-generated articles designed to game the SEO system.

Has this been successful, or is the world of online marketing set for a rise of the machines in 2023 and beyond? As with last year’s column, let’s review the Google algorithm updates issued in 2022. I hope this helps you decide for yourself and build your business model around the latest developments in page ranking.

Complete list of 2022 Google updates

2022 has seen nine confirmed updates to Google’s algorithms, while an additional five instances of volatility were noticed and discussed by influential content marketing strategists across the year. We also saw one major data outage that caused a short-term panic! Let’s take a look at each of these updates in turn.

1) Unconfirmed, suspected update (January)

The core update of November 2021 was famously volatile, and just as web admins were coming to terms with a new status quo, further fluctuations were noted in early January 2022. Google remained tight-lipped about whether adjustments had been made to the algorithm, but sharp movements in SERPs were observed across various industries.

2) Unconfirmed, suspected update (February)

Again, webmasters noticed a sudden temperature shift in page rankings in early February, just as things settled down after the January changes. While again unconfirmed by Google, these adjustments may have been laying the groundwork for the page experience update scheduled for later in the same month.

3) Page experience update (February)

Back in 2021, Google rolled out a page experience update designed to improve the mobile browsing experience. In February 2022, the same update was extended to encompass desktop browsing.

The consequences were not earth-shattering, but a handful of sites that previously enjoyed SERPs at the top of page one saw their rankings drop. As with the mobile update, the driving force behind the page experience update was performance measured against Google’s Core Web Vitals.

4) Unconfirmed, suspected update (March)

Fluctuations in page rankings and traffic were detected in mid-March, with enough chatter around the industry that Danny Sullivan, Public Liaison for Search at Google, felt compelled to confirm that neither he nor his colleagues were aware of any deliberate updates.

5) Product reviews update (March)

March saw the first of three product review updates that would unfold throughout the year. As we’ll discuss shortly, ecommerce sites experienced a real shot in the arm throughout 2022 after the core updates, so this would prove to be a significant adjustment.

The fundamental aim of this product review update was to boost sites that offer more than just a template review of consumer goods – especially when linking to affiliates to encourage purchase. Best practice in product reviews following this update includes:

  • Detailed specifications beyond those found in a manufacturer description, including pros and cons and comparisons to previous generations of the same item.
  • Evidence of personal experience with a product to bolster the authenticity of the review, ideally in the form of a video or audio recording.
  • Multiple links to a range of merchants to enhance consumer choice, rather than the popular model of linking to Amazon.
  • Comparisons to rival products, explaining how the reviewed product stacks up against the competition – for good or ill.

The product review update did not punish or penalize sites that failed to abide by these policies and continued to list a selection of items with brief (and arguably thin) copy to discuss their merits. However, sites that offered more detail in their assessments quickly found themselves rising in the rankings.

6) Core update (May)

The first core update of the year is always a nerve-wracking event in the industry, and as always, there were winners and losers in May’s adjustments.

The most striking outcome of this update was just how many major names benefitted, especially in the realm of ecommerce, much to the delight of ecommerce agencies around the world. Sites like Amazon, eBay, and Etsy saw considerable increases in traffic and prominence following the update, perhaps due to the product review update that unfolded two months prior.

Video sites also saw a spike in viewers and positioning following the May update. YouTube videos began outranking text articles while streaming services such as Disney Plus and Hulu rose to the top of many searches. Health sites began to see a slow and steady recovery after the May core update, for the first time since the rollout of 2018’s Medic update.

News and reference sites were the biggest losers in the May core update. News and media outlets suffered the most, especially those with a generic focus, such as the online arm of newspapers. Big hitters like Wikipedia and Dictionary.com were also pushed down the pecking order. Specialist sites that dedicate their reporting to a single area of interest fared a little better, but still took a hit in traffic and visibility.

7) Unconfirmed, suspected update (June)

Minor nips and tucks frequently follow when a major core update concludes. In late June, many webmasters started comparing notes on sharp changes in traffic and page ranking. Google failed to confirm any updates. These may have just been delayed aftershocks in the aftermath of May’s core update, but the industries that saw the biggest adjustments were:

  • Property and real estate
  • Hobbies and leisure
  • Pets and animal care

8) Unconfirmed, suspected update (July)

More websites saw a sharp drop in traffic in late July, especially blogs that lacked a prominent social media presence. SERPs for smaller sites were among the biggest losers in this unconfirmed update.

9) Product reviews update (July)

A minor tweak to March’s product review update was announced and rolled out in July, but caused little impact – while some review sites saw traffic drop, most were untouched, especially in comparison to changes at the start of the year.

10) Data center outage (August)

Not an update, but a notable event in the 2022 SEO calendar. In early August, Google Search experienced an overnight outage. This was revealed to be caused by a fire in a data center in Iowa, in which three technicians were injured (thankfully, there were no fatalities).

This outage caused 48 hours of panic and chaos among web admins, with page rankings undergoing huge, unexpected fluctuations, a failure of newly-uploaded pages to be indexed, and evergreen content disappearing from Google Search.

Normal service was resumed within 48 hours, and these sudden changes were reversed. All the same, it led to a great deal of short-term confusion within the industry.

11) Helpful content update (August)

The first helpful content update of 2022 saw significant changes to the SEO landscape – and may change how many websites operate in the future.

As the name suggests, this update is engineered to ensure that the most helpful, consumer-focused content rises to the top of Google’s search rankings. Some of the elements targeted and penalized during this update were as follows:

  • AI content – An increasing number of sites have been relying on AI to create content, amalgamating and repurposing existing articles from elsewhere on the web with SEO in mind. On paper, the helpful content update pushed human-generated content above these computerized texts.
  • Subject focus – As with the core update in May, websites that cover a broad range of subjects were likeliest to be hit by the helpful content update. Google has been taking steps to file every indexed website under a niche industry, so it’s easier for a target audience to find.
  • Expertise – The E-A-T principle has been the driving force behind page rankings for a while now, and the helpful content update has doubled down on this. Pages that offer first-hand experience of their chosen subject matter will typically outrank those based on external research.
  • User behavior – As part of the helpful content update, Google is paying increasing attention to user behavior – most notably the time spent on a site. High bounce rates will see even harsher penalties in a post-helpful content update world.
  • Bait-and-switch titles – If your content does not match your title or H2 headings, your site’s ranking will suffer. Avoid speculation, too. Attempts to gain traffic by asking questions that cannot be answered (for example, a headline asking when a new show will drop on Netflix, followed by an answer of, “Netflix has not confirmed when >TV show name< will drop”) suffered in this update.
  • Word stuffing – Google has long denied that word count influences page ranking and has advised against elongating articles for the sake of keyword stuffing. The helpful content update has made this increasingly important: 1,000 relevant words that answer a question quickly will outrank a meandering missive of 3,000 words packed with thin content (a rough self-check is sketched below).
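As a rough way to sanity-check a draft for word stuffing, you could measure how much of the text a target phrase accounts for. Below is a minimal Python sketch; the file name and the 3% threshold are illustrative assumptions, not Google guidance.

```python
import re

def keyword_density(text: str, phrase: str) -> float:
    """Share of all words that belong to occurrences of the target phrase."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    if not words or n == 0:
        return 0.0
    hits = sum(
        1
        for i in range(len(words) - n + 1)
        if words[i:i + n] == phrase_words
    )
    return hits * n / len(words)

# "draft.txt" and the target phrase are placeholders for your own draft.
draft = open("draft.txt", encoding="utf-8").read()
density = keyword_density(draft, "google algorithm update")
print(f"Keyword density: {density:.1%}")
if density > 0.03:  # illustrative threshold, not an official figure
    print("This reads like keyword stuffing - consider trimming repetitions.")
```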

12) Core update (September)

The second core update of 2022 unfolded in September, hot on the heels of the helpful content update.

This update repaired some of the damage for reputable reference sites that suffered in May, while those impacted by the unconfirmed update in June continued to see fluctuations in visibility – some enjoyed sharp upticks, while others continued to hemorrhage traffic.

The biggest ecommerce brands continued to enjoy success following this update, while news and media outlets continued to plummet in visibility. Household names like CNN and the New York Post, for example, were hit very hard.

The fortunes of medical sites also continued to improve, especially those with government domains. Interestingly, the trend for promoting videos over prose was reversed in September – YouTube was the biggest loser overall.

13) Product reviews update (September)

A final tweak was made to the product reviews update in September as part of the core update, and it proved to be unpopular with many smaller sites, which saw a substantial drop in traffic and conversions. As discussed, it seems that 2022’s core updates have benefitted the biggest hitters in the market.

14) Spam update (October)

In October, Google rolled out a 48-hour spam update. This was an extension of the helpful content updates, designed to filter out irrelevant and inexpert search results, in addition to sites loaded with malware or phishing schemes.

Sites identified as potential spam during the update were severely penalized in terms of page ranking and, in some cases, removed from Google Search altogether. The most prominent targets of the update were:

  • Thin copy irrelevant to the search term, especially if auto-generated
  • Hacked websites with malicious or irrelevant redirects and sites that failed to adopt appropriate security protocols
  • Hidden links or excessive, unrelated affiliate links and pages
  • Artificial, machine-generated traffic

15) Helpful content update (December)

Early in December, Google began rolling out an update to August’s helpful content update. At the time of writing, it’s too early to announce what the impact of this has been. However, it promises to be an interesting time.

The August update faced criticism for being too sedate and failing to crack down hard enough on offending sites, especially those that utilize AI content and black-hat SEO tactics.

Many site owners will be crossing their fingers and toes that this update boosts genuine, human-generated copy created by and for a website’s target audience. The impact will become evident early in 2023.

This concludes the summary of 2022’s Google algorithm updates. It’s been an interesting – and frequently tumultuous – twelve months, and one that may set the tone for the years to come.

Google will always tweak and finesse its policies, and attempting to second-guess what Alphabet will do next is frequently a fool’s errand. All the same, it’s always helpful to check in with Google’s priorities and see which way the wind is blowing.


Joe Dawson is Director of strategic growth agency Creative.onl, based in the UK. He can be found on Twitter @jdwn.


SEO

11 Disadvantages Of ChatGPT Content


ChatGPT produces content that is comprehensive and plausibly accurate.

But researchers, artists, and professors warn of shortcomings that degrade the quality of the content.

In this article, we’ll look at 11 disadvantages of ChatGPT content. Let’s dive in.

1. Phrase Usage Makes It Detectable As Non-Human

Researchers studying how to detect machine-generated content have discovered patterns that make it sound unnatural.

One of these quirks is how AI struggles with idioms.

An idiom is a phrase or saying with a figurative meaning attached to it, for example, “every cloud has a silver lining.” 

A lack of idioms within a piece of content can be a signal that the content is machine-generated – and this can be part of a detection algorithm.

This is what the 2022 research paper Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers says about this quirk in machine-generated content:

“Complex phrasal features are based on the frequency of specific words and phrases within the analyzed text that occur more frequently in human text.

…Of these complex phrasal features, idiom features retain the most predictive power in detection of current generative models.”

This inability to use idioms contributes to making ChatGPT output sound and read unnaturally.
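Out of curiosity, you can approximate this kind of idiom signal yourself by counting how often phrases from a known idiom list appear per 1,000 words. The tiny idiom list and the normalization below are illustrative assumptions, not the paper’s feature set.

```python
# Toy idiom-frequency feature, loosely inspired by the research cited above.
IDIOMS = [
    "every cloud has a silver lining",
    "bite the bullet",
    "back to the drawing board",
    "the ball is in your court",
    "once in a blue moon",
]

def idiom_rate(text: str) -> float:
    """Idiom occurrences per 1,000 words - a crude 'human-ness' signal."""
    lowered = " ".join(text.lower().split())  # normalize whitespace
    word_count = max(len(lowered.split()), 1)
    hits = sum(lowered.count(idiom) for idiom in IDIOMS)
    return hits / word_count * 1000

# "article.txt" is a placeholder for the text you want to inspect.
sample = open("article.txt", encoding="utf-8").read()
print(f"Idioms per 1,000 words: {idiom_rate(sample):.2f}")
```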

2. ChatGPT Lacks Ability For Expression

An artist commented on how the output of ChatGPT mimics what art is, but lacks the actual qualities of artistic expression.

Expression is the act of communicating thoughts or feelings.

ChatGPT output doesn’t contain expressions, only words.

It cannot produce content that touches people emotionally on the same level as a human can – because it has no actual thoughts or feelings.

Musical artist Nick Cave, in an article posted to his Red Hand Files newsletter, commented on a ChatGPT lyric that was sent to him, which was created in the style of Nick Cave.

He wrote:

“What makes a great song great is not its close resemblance to a recognizable work.

…it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery; it is the redemptive artistic act that stirs the heart of the listener, where the listener recognizes in the inner workings of the song their own blood, their own struggle, their own suffering.”

Cave called the ChatGPT lyrics a mockery.

This is the ChatGPT lyric that resembles a Nick Cave lyric:

“I’ve got the blood of angels, on my hands
I’ve got the fire of hell, in my eyes
I’m the king of the abyss, I’m the ruler of the dark
I’m the one that they fear, in the shadows they hark”

And this is an actual Nick Cave lyric (Brother, My Cup Is Empty):

“Well I’ve been sliding down on rainbows
I’ve been swinging from the stars
Now this wretch in beggar’s clothing
Bangs his cup across the bars
Look, this cup of mine is empty!
Seems I’ve misplaced my desires
Seems I’m sweeping up the ashes
Of all my former fires”

It’s easy to see that the machine-generated lyric resembles the artist’s lyric, but it doesn’t really communicate anything.

Nick Cave’s lyrics tell a story that resonates with the pathos, desire, shame, and willful deception of the person speaking in the song. It expresses thoughts and feelings.

It’s easy to see why Nick Cave calls it a mockery.

3. ChatGPT Does Not Produce Insights

An article published in The Insider quoted an academic who noted that academic essays generated by ChatGPT lack insights about the topic.

ChatGPT summarizes the topic but does not offer a unique insight into the topic.

Humans create through knowledge, but also through their personal experience and subjective perceptions.

Professor Christopher Bartel of Appalachian State University is quoted by The Insider as saying that, while a ChatGPT essay may exhibit polished grammar and sophisticated ideas, it still lacks insight.

Bartel said:

“They are really fluffy. There’s no context, there’s no depth or insight.”

Insight is the hallmark of a well-done essay and it’s something that ChatGPT is not particularly good at.

This lack of insight is something to keep in mind when evaluating machine-generated content.

4. ChatGPT Is Too Wordy

A research paper published in January 2023 identified patterns in ChatGPT content that make it less suitable for critical applications.

The paper is titled, How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection.

The research showed that humans preferred ChatGPT’s answers to more than 50% of the questions related to finance and psychology.

But ChatGPT failed at answering medical questions because humans preferred direct answers – something the AI didn’t provide.

The researchers wrote:

“…ChatGPT performs poorly in terms of helpfulness for the medical domain in both English and Chinese.

The ChatGPT often gives lengthy answers to medical consulting in our collected dataset, while human experts may directly give straightforward answers or suggestions, which may partly explain why volunteers consider human answers to be more helpful in the medical domain.”

ChatGPT tends to cover a topic from different angles, which makes it inappropriate when the best answer is a direct one.

Marketers using ChatGPT must take note of this because site visitors requiring a direct answer will not be satisfied with a verbose webpage.

And good luck ranking an overly wordy page in Google’s featured snippets, where a succinct and clearly expressed answer that can work well in Google Voice may have a better chance to rank than a long-winded answer.

OpenAI, the makers of ChatGPT, acknowledges that giving verbose answers is a known limitation.

The announcement article by OpenAI states:

“The model is often excessively verbose…”

The ChatGPT bias toward providing long-winded answers is something to be mindful of when using ChatGPT output, as you may encounter situations where shorter and more direct answers are better.

5. ChatGPT Content Is Highly Organized With Clear Logic

ChatGPT has a writing style that is not only verbose but also tends to follow a template that gives the content a unique style that isn’t human.

This inhuman quality is revealed in the differences between how humans and machines answer questions.

The movie Blade Runner has a scene featuring a series of questions designed to reveal whether the subject answering the questions is a human or an android.

These questions were part of a fictional test called the “Voight-Kampff test.”

One of the questions is:

“You’re watching television. Suddenly you realize there’s a wasp crawling on your arm. What do you do?”

A normal human response would be to say something like they would scream, walk outside and swat it, and so on.

But when I posed this question to ChatGPT, it offered a meticulously organized answer that summarized the question and then offered logical multiple possible outcomes – failing to answer the actual question.

Screenshot Of ChatGPT Answering A Voight-Kampff Test Question

Screenshot from ChatGPT, January 2023

The answer is highly organized and logical, giving it a highly unnatural feel, which is undesirable.

6. ChatGPT Is Overly Detailed And Comprehensive

ChatGPT was trained in a way that rewarded the machine when humans were happy with the answer.

The human raters tended to prefer answers that had more details.

But sometimes, such as in a medical context, a direct answer is better than a comprehensive one.

What that means is that the machine needs to be prompted to be less comprehensive and more direct when those qualities are important.

From OpenAI:

“These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.”

7. ChatGPT Lies (Hallucinates Facts)

The above-cited research paper, How Close is ChatGPT to Human Experts?, noted that ChatGPT has a tendency to lie.

It reports:

“When answering a question that requires professional knowledge from a particular field, ChatGPT may fabricate facts in order to give an answer…

For example, in legal questions, ChatGPT may invent some non-existent legal provisions to answer the question.

…Additionally, when a user poses a question that has no existing answer, ChatGPT may also fabricate facts in order to provide a response.”

The Futurism website documented instances where machine-generated content published on CNET was wrong and full of “dumb errors.”

CNET should have had an idea this could happen, because OpenAI published a warning about incorrect output:

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

CNET claims to have submitted the machine-generated articles to human review prior to publication.

A problem with human review is that ChatGPT content is designed to sound persuasively correct, which may fool a reviewer who is not a topic expert.

8. ChatGPT Is Unnatural Because It’s Not Divergent

The research paper, How Close is ChatGPT to Human Experts? also noted that human communication can have indirect meaning, which requires a shift in topic to understand it.

ChatGPT is too literal, which causes the answers to sometimes miss the mark because the AI overlooks the actual topic.

The researchers wrote:

“ChatGPT’s responses are generally strictly focused on the given question, whereas humans’ are divergent and easily shift to other topics.

In terms of the richness of content, humans are more divergent in different aspects, while ChatGPT prefers focusing on the question itself.

Humans can answer the hidden meaning under the question based on their own common sense and knowledge, but the ChatGPT relies on the literal words of the question at hand…”

Humans are better able to diverge from the literal question, which is important for answering “what about” type questions.

For example, if I ask:

“Horses are too big to be a house pet. What about raccoons?”

The above question is not asking if a raccoon is an appropriate pet. The question is about the size of the animal.

ChatGPT focuses on the appropriateness of the raccoon as a pet instead of focusing on the size.

Screenshot of an Overly Literal ChatGPT Answer

Screenshot from ChatGPT, January 2023

9. ChatGPT Contains A Bias Towards Being Neutral

The output of ChatGPT is generally neutral and informative. That neutrality is itself a bias in the output – one that can appear helpful but isn’t always.

The research paper we just discussed noted that neutrality is an unwanted quality when it comes to legal, medical, and technical questions.

Humans tend to pick a side when offering these kinds of opinions.

10. ChatGPT Is Biased To Be Formal

ChatGPT output has a bias that prevents it from loosening up and answering with ordinary expressions. Instead, its answers tend to be formal.

Humans, on the other hand, tend to answer questions with a more colloquial style, using everyday language and slang – the opposite of formal.

ChatGPT doesn’t use abbreviations like GOAT or TL;DR.

The answers also lack instances of irony, metaphors, and humor, which can make ChatGPT content overly formal for some content types.

The researchers write:

“…ChatGPT likes to use conjunctions and adverbs to convey a logical flow of thought, such as “In general”, “on the other hand”, “Firstly,…, Secondly,…, Finally” and so on.”

11. ChatGPT Is Still In Training

ChatGPT is currently still in the process of training and improving.

OpenAI recommends that all content generated by ChatGPT should be reviewed by a human, listing this as a best practice.

OpenAI suggests keeping humans in the loop:

“Wherever possible, we recommend having a human review outputs before they are used in practice.

This is especially critical in high-stakes domains, and for code generation.

Humans should be aware of the limitations of the system, and have access to any information needed to verify the outputs (for example, if the application summarizes notes, a human should have easy access to the original notes to refer back).”

Unwanted Qualities Of ChatGPT

It’s clear that there are many issues with ChatGPT that make it unfit for unsupervised content generation. It contains biases and fails to create content that feels natural or contains genuine insights.

Further, its inability to feel or author original thoughts makes it a poor choice for generating artistic expressions.

Users should apply detailed prompts in order to generate content that is better than the default content it tends to output.

Lastly, human review of machine-generated content is not always enough, because ChatGPT content is designed to appear correct, even when it’s not.

That means it’s important that human reviewers are subject-matter experts who can discern between correct and incorrect content on a specific topic.


SEO

9 Common Technical SEO Issues That Actually Matter


In this article, we’ll see how to find and fix technical SEO issues, but only those that can seriously affect your rankings.

If you’d like to follow along, get Ahrefs Webmaster Tools and Google Search Console (both are free) and check for the following issues.

1. Indexability issues

Indexability is a webpage’s ability to be indexed by search engines. Pages that are not indexable can’t be displayed on the search engine results pages and can’t bring in any search traffic.

Three requirements must be met for a page to be indexable:

  1. The page must be crawlable. If you haven’t blocked Googlebot from crawling the page in robots.txt, or you have a website with fewer than 1,000 pages, you probably don’t have an issue there.
  2. The page must not have a noindex tag (more on that in a bit).
  3. The page must be canonical (i.e., the main version). 

Solution

In Ahrefs Webmaster Tools (AWT):  

  1. Open Site Audit
  2. Go to the Indexability report 
  3. Click on issues related to canonicalization and “noindex” to see affected pages
Indexability issues in Site Audit

For canonicalization issues in this report, you will need to replace bad URLs in the link rel="canonical" tag with valid ones (i.e., returning an “HTTP 200 OK”). 

As for pages marked by “noindex” issues, these are the pages with the “noindex” meta tag placed inside their code. Chances are most of the pages found in the report there should stay as is. But if you see any pages that shouldn’t be there, simply remove the tag. Do make sure those pages aren’t blocked by robots.txt first. 
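If you only need to spot-check a handful of URLs outside of Site Audit, a quick script can report each page’s noindex and canonical status. Here’s a minimal sketch assuming the requests and beautifulsoup4 packages; the URLs are placeholders.

```python
import requests
from bs4 import BeautifulSoup

URLS = [
    "https://example.com/",            # placeholder URLs
    "https://example.com/blog/post/",
]

for url in URLS:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")

    # "noindex" can arrive via the X-Robots-Tag header or the robots meta tag.
    robots_header = resp.headers.get("X-Robots-Tag", "").lower()
    robots_meta = soup.find("meta", attrs={"name": "robots"})
    meta_content = robots_meta.get("content", "").lower() if robots_meta else ""
    noindex = "noindex" in robots_header or "noindex" in meta_content

    canonical = soup.find("link", rel="canonical")
    canonical_href = canonical.get("href", "(empty)") if canonical else "(none)"

    print(f"{url} -> HTTP {resp.status_code}, noindex={noindex}, canonical={canonical_href}")
```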

Recommendation

Click on the question mark on the right to see instructions on how to fix each issue. For more detailed instructions, click on the “Learn more” link. 

Instruction on how to fix an SEO issue in Site Audit

2. Sitemap issues

A sitemap should contain only pages that you want search engines to index.

When a sitemap isn’t regularly updated, or an unreliable generator has been used to create it, it may start to include broken pages, pages that became “noindexed,” pages that were de-canonicalized, or pages blocked in robots.txt.

Solution 

In AWT:

  1. Open Site Audit 
  2. Go to the All issues report
  3. Click on issues containing the word “sitemap” to find affected pages 
Sitemap issues shown in Site Audit

Depending on the issue, you will have to:

  • Delete the pages from the sitemap.
  • Remove the noindex tag on the pages (if you want to keep them in the sitemap). 
  • Provide a valid URL for the reported page. 
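If you’d like to cross-check a sitemap by hand, you can fetch it and test every URL it lists. A minimal Python sketch, assuming the requests package and a plain <urlset> sitemap (a sitemap index file would need one more level of parsing):

```python
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
urls = [loc.text.strip() for loc in root.findall("sm:url/sm:loc", NS)]

for url in urls:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    flags = []
    if resp.status_code != 200:
        flags.append(f"status {resp.status_code}")          # broken or redirected
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        flags.append("noindex header")                      # shouldn't be in the sitemap
    if flags:
        print(f"{url}: {', '.join(flags)}")

print(f"Checked {len(urls)} sitemap URLs.")
```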

3. HTTPS issues

Google uses HTTPS encryption as a small ranking signal. This means you can experience lower rankings if you don’t have an SSL or TLS certificate securing your website.

But even if you do, some pages and/or resources on your pages may still use the HTTP protocol. 

Solution 

Assuming you already have an SSL/TLS certificate for all subdomains (if not, do get one), open AWT and do these: 

  1. Open Site Audit
  2. Go to the Internal pages report 
  3. Look at the protocol distribution graph and click on HTTP to see affected pages
  4. Inside the report showing pages, add a column for Final redirect URL 
  5. Make sure all HTTP pages are permanently redirected (301 or 308 redirects) to their HTTPS counterparts 
Protocol distribution graph
Internal pages issues report with added column
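You can verify the same thing outside of Site Audit by requesting the HTTP version of a few URLs and confirming each one permanently redirects to HTTPS. A minimal sketch, again assuming the requests package and placeholder URLs:

```python
import requests

HTTP_URLS = [
    "http://example.com/",          # placeholder URLs
    "http://example.com/pricing/",
]

for url in HTTP_URLS:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    location = resp.headers.get("Location", "")
    if resp.status_code in (301, 308) and location.startswith("https://"):
        print(f"OK   {url} -> {location}")
    else:
        print(f"FIX  {url} -> HTTP {resp.status_code} {location or '(no redirect)'}")
```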

Finally, let’s check if any resources on the site still use HTTP: 

  1. Inside the Internal pages report, click on Issues
  2. Click on HTTPS/HTTP mixed content to view affected resources 
Site Audit reporting six HTTPS/HTTP mixed content issues

You can fix this issue by one of these methods:

  • Link to the HTTPS version of the resource (check this option first) 
  • Include the resource from a different host, if available 
  • Download and host the content on your site directly if you are legally allowed to do so
  • Exclude the resource from your site altogether

Learn more: What Is HTTPS? Everything You Need to Know 

4. Duplicate content issues

Duplicate content happens when exact or near-duplicate content appears on the web in more than one place.

It’s bad for SEO mainly for two reasons: it can cause undesirable URLs to show in search results, and it can dilute link equity.

Content duplication is not necessarily a case of intentional or unintentional creation of similar pages. There are other, less obvious causes such as faceted navigation, tracking parameters in URLs, or using trailing and non-trailing slashes.

Solution 

First, check if your website is available under only one URL. Because if your site is accessible as:

  • http://domain.com
  • http://www.domain.com
  • https://domain.com
  • https://www.domain.com

Then Google will see all of those URLs as different websites. 

The easiest way to check if users can browse only one version of your website: type in all four variations in the browser, one by one, hit enter, and see if they get redirected to the master version (ideally, the one with HTTPS). 
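The same check can be scripted: request all four variants and see where each one ends up. A minimal sketch assuming the requests package (domain.com stands in for your own domain):

```python
import requests

VARIANTS = [
    "http://domain.com",
    "http://www.domain.com",
    "https://domain.com",
    "https://www.domain.com",
]

final_urls = set()
for url in VARIANTS:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    final_urls.add(resp.url)  # URL after following all redirects
    print(f"{url} -> {resp.url} (HTTP {resp.status_code})")

if len(final_urls) == 1:
    print("All variants resolve to one master version.")
else:
    print("Multiple final URLs - consolidate them with permanent redirects.")
```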

You can also go straight into Site Audit’s Duplicates report. If you see 100% bad duplicates, that is likely the reason.

Duplicates report showing 100% bad duplicates
Simulation (other types of duplicates turned off).

In this case, choose one version that will serve as canonical (likely the one with HTTPS) and permanently redirect other versions to it. 

Then run a New crawl in Site Audit to see if there are any other bad duplicates left. 

Running a new crawl in Site Audit

There are a few ways you can handle bad duplicates depending on the case. Learn how to solve them in our guide.

Learn more: Duplicate Content: Why It Happens and How to Fix It 

5. Broken pages

Pages that can’t be found (4XX errors) and pages returning server errors (5XX errors) won’t be indexed by Google, so they won’t bring you any traffic.

Furthermore, if broken pages have backlinks pointing to them, all of that link equity goes to waste. 

Broken pages are also a waste of crawl budget—something to watch out for on bigger websites. 

Solution

In AWT, you should: 

  1. Open Site Audit.
  2. Go to the Internal pages report.
  3. See if there are any broken pages. If so, the Broken section will show a number higher than 0. Click on the number to show affected pages.
Broken pages report in Site Audit

In the report showing pages with issues, it’s a good idea to add a column for the number of referring domains. This will help you make the decision on how to fix the issue. 

Internal pages report with no. of referring domains column added

Now, fixing broken pages (4XX error codes) is quite simple, but there is more than one possibility. Here’s a short graph explaining the process:

How to deal with broken pages

Dealing with server errors (the ones reporting a 5XX) can be a tougher one, as there are different possible reasons for a server to be unresponsive. Read this short guide for troubleshooting.
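To triage a URL list yourself, a small script can separate client errors from server errors. A minimal sketch assuming the requests package and placeholder URLs:

```python
import requests

URLS = [
    "https://example.com/old-page/",  # placeholder URLs
    "https://example.com/blog/",
]

client_errors, server_errors = [], []
for url in URLS:
    try:
        status = requests.get(url, timeout=10).status_code
    except requests.RequestException as exc:
        server_errors.append((url, type(exc).__name__))
        continue
    if 400 <= status < 500:
        client_errors.append((url, status))   # 4XX: redirect, reinstate, or remove links
    elif status >= 500:
        server_errors.append((url, status))   # 5XX: investigate the server

print("4XX pages:", client_errors)
print("5XX pages / unreachable:", server_errors)
```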

Recommendation

With AWT, you can also see 404s that were caused by incorrect links to your website. While this is not a technical issue per se, reclaiming those links may give you an additional SEO boost.

  1. Go to Site Explorer
  2. Enter your domain 
  3. Go to the Best by links report
  4. Add a “404 not found” filter
  5. Then sort the report by referring domains from high to low
How to find broken backlinks in Site Explorer
In this example, someone linked to us, leaving a comma inside the URL.

6. Link issues

If you’ve already dealt with broken pages, chances are you’ve fixed most of the broken links issues.

Other critical issues related to links are: 

  • Orphan pages – These are the pages without any internal links. Web crawlers have limited ability to access those pages (only from sitemap or backlinks), and there is no link equity flowing to them from other pages on your site. Last but not least, users won’t be able to access this page from the site navigation. 
  • HTTPS pages linking to internal HTTP pages – If an internal link on your website brings users to an HTTP URL, web browsers will likely show a warning about a non-secure page. This can damage your overall website authority and user experience.

Solution

In AWT, you can:

  1. Go to Site Audit.
  2. Open the Links report.
  3. Open the Issues tab. 
  4. Look for the following issues in the Indexable category. Click to see affected pages. 
Important SEO issues related to links

Fix HTTPS pages linking to internal HTTP pages by changing those links to HTTPS, or simply delete them if they’re no longer needed.

As for orphan pages, each one needs to be either linked to from some other page on your website or deleted if it holds no value to you.

Sidenote.

Ahrefs’ Site Audit can find orphan pages as long as they have backlinks or are included in the sitemap. For a more thorough search for this issue, you will need to analyze server logs to find orphan pages with hits. Find out how in this guide.
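As a rough starting point for that log analysis, you can pull the paths that received successful hits out of an access log and subtract the paths your crawl already knows about. A minimal Python sketch; the log path, crawl export, and combined-log regex are illustrative assumptions:

```python
import re

ACCESS_LOG = "access.log"           # placeholder: combined-format access log
CRAWLED_URLS = "crawled_paths.txt"  # placeholder: one path per line from a crawl export

# In the combined log format the request looks like: "GET /path HTTP/1.1" 200
request_re = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[^"]*" (\d{3})')

hit_paths = set()
with open(ACCESS_LOG, encoding="utf-8", errors="ignore") as log:
    for line in log:
        match = request_re.search(line)
        if match and match.group(2) == "200":
            hit_paths.add(match.group(1).split("?")[0])  # drop query strings

with open(CRAWLED_URLS, encoding="utf-8") as crawl:
    known_paths = {line.strip() for line in crawl if line.strip()}

orphan_candidates = sorted(hit_paths - known_paths)
print(f"{len(orphan_candidates)} paths got hits but were missing from the crawl:")
for path in orphan_candidates[:50]:
    print(path)
```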

7. Mobile experience issues

Having a mobile-friendly website is a must for SEO. Two reasons: 

  1. Google uses mobile-first indexing – It’s mostly using the content of mobile pages for indexing and ranking.
  2. Mobile experience is part of the Page Experience signals – While Google will allegedly always “promote” the page with the best content, page experience can be a tiebreaker for pages offering content of similar quality. 

Solution

In GSC: 

  1. Go to the Mobile Usability report in the Experience section
  2. View affected pages by clicking on issues in the Why pages aren’t usable on mobile section 
Mobile Usability report in Google Search Console

You can read Google’s guide for fixing mobile issues here.  

8. Performance and stability issues 

Performance and visual stability are other aspects of Page Experience signals used by Google to rank pages. 

Google has developed a special set of metrics to measure user experience called Core Web Vitals (CWV). Site owners and SEOs can use those metrics to see how Google perceives their website in terms of UX. 

Google's search signals for page experience

While page experience can be a ranking tiebreaker, CWV is not a race. You don’t need to have the fastest website on the internet. You just need to score “good” ideally in all three categories: loading, interactivity, and visual stability. 

Three categories of Core Web Vitals

Solution 

In GSC: 

  1. First, click on Core Web Vitals in the Experience section of the reports.
  2. Then click Open report in each section to see how your website scores. 
  3. For pages that aren’t considered good, you’ll see a special section at the bottom of the report. Use it to see pages that need your attention.
How to find Core Web Vitals in Google Search Console
CWV issue report in Google Search Console

Optimizing for CWV may take some time. This may include things like moving to a faster (or closer) server, compressing images, optimizing CSS, etc. We explain how to do this in the third part of this guide to CWV. 
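If you prefer to pull the same field data programmatically, Google’s PageSpeed Insights API exposes real-user CrUX metrics per URL. A minimal Python sketch assuming the requests package and an API key; the exact metric keys in the response may differ, so check them against the API documentation:

```python
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {
    "url": "https://example.com/",  # placeholder
    "strategy": "mobile",
    "key": "YOUR_API_KEY",          # placeholder
}

data = requests.get(API, params=params, timeout=60).json()

# Field data (real-user metrics), when Google has enough traffic for the URL.
metrics = data.get("loadingExperience", {}).get("metrics", {})
for name, metric in metrics.items():
    print(f"{name}: p75={metric.get('percentile')} ({metric.get('category')})")
```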

9. Site structure issues

Bad website structure in the context of technical SEO is mainly about having important organic pages too deep in the website structure.

Pages that are nested too deep (i.e., users need more than six clicks from the homepage to get to them) will receive less link equity from your homepage (likely the page with the most backlinks), which may affect their rankings. This is because link value diminishes with every link “hop.”
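You can also estimate click depth yourself with a breadth-first search over your internal link graph. A minimal Python sketch; the adjacency map below is an illustrative stand-in for links exported from your own crawl:

```python
from collections import deque

# Illustrative internal-link graph: page -> pages it links to.
links = {
    "/": ["/blog/", "/pricing/"],
    "/blog/": ["/blog/post-a/", "/blog/post-b/"],
    "/blog/post-a/": ["/glossary/term/"],
    "/blog/post-b/": [],
    "/pricing/": [],
    "/glossary/term/": [],
}

def click_depths(graph, home="/"):
    """Minimum number of clicks needed to reach each page from the homepage."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

for page, depth in sorted(click_depths(links).items(), key=lambda kv: kv[1]):
    flag = "  <- more than 6 clicks deep" if depth > 6 else ""
    print(f"{depth}  {page}{flag}")
```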

Sidenote.

Website structure is important for other reasons too such as the overall user experience, crawl efficiency, and helping Google understand the context of your pages. Here, we’ll only focus on the technical aspect, but you can read more about the topic in our full guide: Website Structure: How to Build Your SEO Foundation.

Solution 

In AWT

  1. Open Site Audit
  2. Go to Structure explorer, switch to the Depth tab, and set the data type to Data table
  3. Configure the Segment to only valid HTML pages and click Apply
  4. Use the graph to investigate pages that are more than six clicks away from the homepage
How to find site structure issues in Site Audit
Adding a new segment in Site Audit

The way to fix the issue is to link to these deeper nested pages from pages closer to the homepage. More important pages could find their place in the site navigation, while less important ones can simply be linked to from pages a few clicks closer to the homepage.

It’s a good idea to weigh user experience and the business role of your website when deciding what goes into sitewide navigation.

For example, we could probably give our SEO glossary a slightly higher chance to get ahead of organic competitors by including it in the main site navigation. Yet we decided not to because it isn’t such an important page for users who are not particularly searching for this type of information. 

We’ve moved the glossary only up a notch by including a link inside the beginner’s guide to SEO (which itself is just one click away from the homepage). 

Structure explorer showing glossary page is two clicks away from the homepage
One page from the glossary folder is two clicks away from the homepage.
Link that moved SEO glossary a click closer to the homepage
Just one link, even at the bottom of a page, can move a page higher in the overall structure.

Final thoughts 

When you’re done fixing the more pressing issues, dig a little deeper to keep your site in perfect SEO health. Open Site Audit and go to the All issues report to see other issues regarding on-page SEO, image optimization, redirects, localization, and more. In each case, you will find instructions on how to deal with the issue. 

All issues report in Site Audit

You can also customize this report by turning issues on/off or changing their priority. 

Issue report in Site Audit is customizable

Did I miss any important technical issues? Let me know on Twitter or Mastodon.




SEO

New Google Ads Feature: Account-Level Negative Keywords


Google Ads Liaison Ginny Marvin has announced that account-level negative keywords are now available to Google Ads advertisers worldwide.

The feature, which was first announced last year and has been in testing for several months, allows advertisers to add keywords to exclude traffic from all search and shopping campaigns, as well as the search and shopping portion of Performance Max, for greater brand safety and suitability.

Advertisers can access this feature from the account settings page to ensure their campaigns align with their brand values and target audience.

This is especially important for brands that want to avoid appearing in contexts that may be inappropriate or damaging to their reputation.

In addition to the brand safety benefits, the addition of account-level negative keywords makes the campaign management process more efficient for advertisers.

Instead of adding negative keywords to individual campaigns, advertisers can manage them at the account level, saving time and reducing the chances of human error.

You no longer have to worry about duplicating negative keywords across multiple campaigns or missing any that are vital to your brand safety.

Additionally, account-level negative keywords can improve the accuracy of ad targeting by excluding irrelevant or low-performing keywords that may adversely impact campaign performance. This can result in higher-quality traffic and a better return on investment.

Google Ads offers a range of existing brand suitability controls, including inventory types, digital content labels, placement exclusions, and negative keywords at the campaign level.

Marvin added that Google Ads is expanding account-level negative keywords to address various use cases and will have more to share soon.

This rollout is essential in giving brands more control over their advertising and ensuring their campaigns target the appropriate audience.





