
Top 10 SEO Priorities For Your First Week As A New Marketing Manager


As a new marketing manager, the first week can feel like a whirlwind of attempting to understand the people, processes, technologies, and campaigns under development.

When you couple that with “owning the SEO” side of the department, you may find yourself asking: Where do I even start?

Given that SEO isn’t a one-and-done initiative, you’re on the lookout for the highest impact actions you can take to set the foundation for your longer-term SEO success.

The recommendations here are from a marketer’s perspective at a mid-sized, multi-location business.

The most important objectives in the first week are to understand your organizational, departmental, and team goals.


These north stars keep you aligned with your teammates and the organizational mission before you start executing.

Along with getting to know your team and the resources available, here are the systems to prioritize and to check that they are providing accurate data.

1. Install Website And Conversion Analytics

It will take you more than a week to audit your analytics systems and ensure that your session and conversion data are 100% accurate. However, having any level of analytics tracking is better than nothing.

At a baseline, make sure Google Analytics tracking is firing on your website, landing pages, and blog.

If your website is hosted on one CMS and your blog on another, you’ll need to check both places to ensure tracking is configured properly.

GTM/GA Debugger is my favorite free browser-based tool for quickly debugging erroneous or duplicative GA and GTM tracking code on site.


Run the debugger on your site to ensure you aren’t seeing multiple pageviews firing on every page. Here are a couple of examples that show that the GA or GA4 tag is only firing a single time on the page.

Screenshot from GTM/GA Debugger, April 2022
GA4 debug: Screenshot from GTM/GA Debugger, April 2022

If you see multiple pageviews firing on each page, you’ll know you have analytics issues to address down the road.
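If you want a scripted sanity check to complement the browser extension, a rough sketch like the one below can help. It assumes your pages load the standard gtag.js snippet in the initial HTML (tags injected only through GTM won't show up this way), and the URL is a placeholder.

```python
import re
from collections import Counter

import requests

def count_gtag_loaders(url: str) -> Counter:
    """Count how many gtag.js loader scripts each measurement ID has in the raw HTML.

    A normal install loads one script per measurement ID; repeats of the same ID
    suggest duplicate snippets, which can double-count pageviews.
    """
    html = requests.get(url, timeout=10).text
    loaders = re.findall(
        r"googletagmanager\.com/gtag/js\?id=(G-[A-Z0-9]+|UA-[\d-]+)", html
    )
    return Counter(loaders)

if __name__ == "__main__":
    # Placeholder URL - point this at your own homepage, landing pages, and blog.
    for measurement_id, n in count_gtag_loaders("https://www.example.com/").items():
        flag = "possible duplicate snippet" if n > 1 else "ok"
        print(f"{measurement_id}: {n} loader(s) - {flag}")
```

Treat this as a first pass only; the browser debugger remains the more reliable way to see what actually fires at runtime.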

2. Set Up Google Analytics Alerts

After configuring your baseline analytics, it’s time to set up custom alerts in GA. Alerts are a simple way to get notified if your site sees a sudden dip in traffic or conversions.

Feel free to use this alert configuration for your own site; you can set up custom alerts under the admin settings.

Custom GA alerts: Screenshot from Google Analytics, April 2022

3. Implement Rank Tracking

You’ll likely spend the first few weeks on the job learning about your buyer, products, competitors, marketing channels, and much more.

One of the easiest-to-understand metrics for helping your team track your SEO performance is overall growth in first-page, non-branded Google rankings.

Theoretically, as you create content, optimize your site, and grow your backlink portfolio, you should be seeing an increase in first page rankings for non-branded keywords.

During your first week, you can benchmark this value and start to understand what topics/keywords are on the cusp of ranking on the first page of Google.


Consider these keywords your “low-hanging fruit.” If you are looking for a quick win, focus on improving the content on the pages that are about to rank on page one.

Here is an example of a Semrush report tracking these metrics to provide this baseline to your team quickly:

First-page Google rankings: Screenshot from Semrush, April 2022

It will likely take you longer than a week to determine the topics you need to build your content and SEO strategy around, but this will at least give you a starting point.
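If your rank tracking tool can export keyword positions to CSV, a small sketch like this can produce that benchmark and surface the “page two” keywords worth prioritizing. The file name, column names, and brand terms are hypothetical; adjust them to whatever your export actually contains.

```python
import csv

BRAND_TERMS = {"acme", "acme corp"}  # hypothetical brand terms to exclude

def is_branded(keyword: str) -> bool:
    kw = keyword.lower()
    return any(term in kw for term in BRAND_TERMS)

first_page, page_two = [], []

# Assumes an export with "Keyword" and "Position" columns.
with open("rank_tracking_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        keyword, position = row["Keyword"], int(row["Position"])
        if is_branded(keyword):
            continue
        if position <= 10:
            first_page.append(keyword)
        elif position <= 20:
            page_two.append((position, keyword))

print(f"Benchmark: {len(first_page)} non-branded keywords on page one")
print("Low-hanging fruit (positions 11-20):")
for position, keyword in sorted(page_two):
    print(f"  {position:>2}  {keyword}")
```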

4. Set Up Google Search Console

At a basic level, GSC tracks your ability to get crawled and indexed in Google and highlights potential issues that may prevent Google’s crawlers from accessing your site.

In your first week, you’ll want to check:

  • Your sitemaps are submitted, and the volume of pages listed in your sitemaps matches the volume of pages being indexed in Google (as noted in the coverage report). They will likely never match exactly, but if you see a discrepancy of 50% or more (pages in the sitemap vs. valid pages in the coverage report), there could be content quality or technical issues causing Google not to index your site. A quick way to quantify this gap is sketched just after this list.
  • You don’t have any manual actions or security issues. If you’re unsure what your predecessors did from a marketing or CMS security perspective, check these areas to ensure you’re not being impacted.
  • Any spikes in impressions or clicks data, as listed in the “search results” report. Pull the last 16 months of data and note the specific timeframes when your site saw these impacts within search.
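For the sitemap check above, here is a rough sketch that counts the URLs in a standard (non-index) sitemap and compares that to the indexed-page count you note manually from the coverage report. The sitemap URL and the indexed count are placeholders.

```python
import xml.etree.ElementTree as ET

import requests

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def count_sitemap_urls(sitemap_url: str) -> int:
    """Count <loc> entries in a standard (non-index) XML sitemap."""
    xml = requests.get(sitemap_url, timeout=10).content
    root = ET.fromstring(xml)
    return len(root.findall(f"{SITEMAP_NS}url/{SITEMAP_NS}loc"))

sitemap_count = count_sitemap_urls("https://www.example.com/sitemap.xml")  # placeholder
indexed_count = 1250  # hypothetical figure copied from the GSC coverage report

gap = abs(sitemap_count - indexed_count) / max(sitemap_count, 1)
print(f"Sitemap URLs: {sitemap_count}, indexed (per GSC): {indexed_count}")
if gap >= 0.5:
    print("Gap of 50%+ - investigate content quality or technical issues.")
```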

Screenshot from Google Search Console, April 2022

5. Set Up Brand Mentions Listening

The easiest way to generate backlinks to your site is by ensuring that any other site that mentions your brand also links to your site.

If you don’t yet have an SEO tool, Feedly can track industry publications and brand mentions.

However, my favorite SEO-specific tool is Semrush’s brand monitoring tool, which allows you to track unlinked brand mentions.

Brand monitoring in Semrush: Screenshot from Semrush, April 2022
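If you already have a list of pages that mention your brand (from Feedly, a news alert, or a tool export), a quick sketch like this can separate linked from unlinked mentions. The domain and URLs here are placeholders.

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

YOUR_DOMAIN = "example.com"  # placeholder for your own domain

mention_urls = [  # hypothetical pages known to mention your brand
    "https://blog.example.org/industry-roundup",
    "https://news.example.net/partner-announcement",
]

for url in mention_urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # A mention is "linked" if any anchor on the page points at your domain.
    links_to_you = any(
        YOUR_DOMAIN in urlparse(a.get("href", "")).netloc
        for a in soup.find_all("a", href=True)
    )
    status = "linked" if links_to_you else "UNLINKED - outreach opportunity"
    print(f"{url}: {status}")
```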

6. Verify Google Business Profile Listings

The complexity in your marketing department increases when you are also responsible for the local digital presence of individual branches, franchises or sales offices.

In your first week, make sure each location has a Google Business Profile page with accurate name, address, and phone number information.

As part of this process, start claiming and verifying your listings. Verification can take up to a couple of weeks, so you’ll want to get started early.

7. Set Up Annotations

If you’re fortunate, your predecessor left records of the most important dates in your company’s marketing history, including website launches, CMS migrations, campaign start/end dates, etc.

Some of these records may be stored in Google Analytics Annotations, which allow you to leave detailed notes about any events that may impact your traffic, conversion, or revenue data.

In your first week, if nothing else, review the annotations from the last few years and add the date you started at the company, so you can show the progress you’ve made once you reach the 90-, 180-, and 365-day marks at the organization.

8. Install Google Tag Manager

The best configuration for most organizations to manage tracking scripts is through Google Tag Manager.


Proper implementation of GTM allows you to see all of the scripts running on your site and the pages that those scripts are firing on.

If you’re coming into a new role without clear tech stack documentation, Google Tag Manager can help you identify what systems are used on-site for tracking, advertising, and much more.
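As a rough complement to reviewing the GTM container itself, a sketch like this scans a page’s script tags for well-known tracking and advertising hosts. The vendor list is illustrative rather than exhaustive, and the URL is a placeholder.

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

KNOWN_VENDORS = {  # illustrative mapping of script hosts to tools
    "googletagmanager.com": "Google Tag Manager / gtag.js",
    "google-analytics.com": "Google Analytics",
    "connect.facebook.net": "Meta Pixel",
    "static.hotjar.com": "Hotjar",
    "js.hs-scripts.com": "HubSpot",
}

def detect_vendors(url: str) -> set:
    """Return the names of known tracking vendors whose scripts load on the page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = set()
    for script in soup.find_all("script", src=True):
        host = urlparse(script["src"]).netloc
        for vendor_host, name in KNOWN_VENDORS.items():
            if host.endswith(vendor_host):
                found.add(name)
    return found

print(detect_vendors("https://www.example.com/"))  # placeholder URL
```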

9. Run A Crawl To Establish Benchmarks

Ideally, by the time you start your new role, you already have a general idea of the web presence of your new organization.

In your first week, run a crawl using Screaming Frog or another crawling tool to identify the volume of SEO issues to address and get a better sense of your information architecture.

Whatever crawl tool you use, make sure it can crawl all of the subdomains connected to your site, so you can gather a complete picture of all of the web properties you may be working with.

Here is an example of the kind of visuals to help you understand your information architecture.

Screaming Frog report: Screenshot from Screaming Frog, April 2022
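If you want a scripted starting point alongside a dedicated crawler, a bare-bones sketch like this walks internal links, stays within the registered domain so subdomains are included, and records status codes and crawl depth. The start URL is a placeholder, and the page cap keeps it polite.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.com/"   # placeholder
REGISTERED_DOMAIN = "example.com"        # subdomains of this stay in scope
MAX_PAGES = 200

def in_scope(url: str) -> bool:
    host = urlparse(url).netloc
    return host == REGISTERED_DOMAIN or host.endswith("." + REGISTERED_DOMAIN)

seen, queue = {START_URL}, deque([(START_URL, 0)])
results = []  # (url, status code, depth)

while queue and len(results) < MAX_PAGES:
    url, depth = queue.popleft()
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    results.append((url, resp.status_code, depth))
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if in_scope(link) and link not in seen:
            seen.add(link)
            queue.append((link, depth + 1))

for url, status, depth in results:
    print(f"{status}  depth={depth}  {url}")
```

This is only a sketch; a purpose-built crawler will also surface redirects, canonicals, duplicate titles, and the other issues you will want in your benchmark.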

10. Inventory Your MarTech Stack

As spending on SaaS applications continues to rise, your new organization may use anywhere from 20 to 150 different applications.

You don’t need to know the ins and outs of all of them.

If you can document all marketing and sales tools and their respective uses, you’ll better understand what you can start using right away (vs. going through the purchase process of other tools).

Your first week will fly by. If you can tackle this list, you will have the foundation to track, optimize and launch your upcoming campaigns.

Featured Image: wear it out/Shutterstock





How Compression Can Be Used To Detect Low Quality Pages


Compression can be used by search engines to detect low-quality pages. Although not widely known, it's useful foundational knowledge for SEO.

The concept of Compressibility as a quality signal is not widely known, but SEOs should be aware of it. Search engines can use web page compressibility to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords, making it useful knowledge for SEO.

Although the following research paper demonstrates a successful use of on-page features for detecting spam, the deliberate lack of transparency by search engines makes it difficult to say with certainty if search engines are applying this or similar techniques.

What Is Compressibility?

In computing, compressibility refers to how much a file (data) can be reduced in size while retaining essential information, typically to maximize storage space or to allow more data to be transmitted over the Internet.

TL;DR Of Compression

Compression replaces repeated words and phrases with shorter references, reducing the file size by significant margins. Search engines typically compress indexed web pages to maximize storage space, reduce bandwidth, and improve retrieval speed, among other reasons.

This is a simplified explanation of how compression works:

  • Identify Patterns:
    A compression algorithm scans the text to find repeated words, patterns, and phrases.
  • Shorter Codes Take Up Less Space:
    The codes and symbols use less storage space than the original words and phrases, which results in a smaller file size.
  • Shorter References Use Fewer Bits:
    The “code” that stands in for the replaced words and phrases uses less data than the originals.

A bonus effect of using compression is that it can also be used to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords.
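A minimal sketch with Python’s built-in zlib illustrates the point: text that repeats the same phrase compresses to far fewer bytes than varied text of similar length, because the repeats are replaced with short back-references. The sample strings are made up.

```python
import zlib

repetitive = ("best plumber in springfield " * 10).encode("utf-8")
varied = (
    "Our licensed team handles burst pipes, water heater swaps, sump pump "
    "installs, gas line inspections, drain cleaning, and emergency weekend "
    "callouts across the metro area, always with upfront quotes, tidy "
    "workmanship, and a written guarantee on every job we complete."
).encode("utf-8")

for label, text in [("repetitive", repetitive), ("varied", varied)]:
    compressed = zlib.compress(text)
    ratio = len(text) / len(compressed)
    print(f"{label}: {len(text)} bytes -> {len(compressed)} bytes (ratio {ratio:.1f})")
```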

Research Paper About Detecting Spam

This research paper is significant because it was authored by distinguished computer scientists known for breakthroughs in AI, distributed computing, information retrieval, and other fields.


Marc Najork

One of the co-authors of the research paper is Marc Najork, a prominent research scientist who currently holds the title of Distinguished Research Scientist at Google DeepMind. He’s a co-author of the papers for TW-BERT, has contributed research for increasing the accuracy of using implicit user feedback like clicks, and worked on creating improved AI-based information retrieval (DSI++: Updating Transformer Memory with New Documents), among many other major breakthroughs in information retrieval.

Dennis Fetterly

Another of the co-authors is Dennis Fetterly, currently a software engineer at Google. He is listed as a co-inventor in a patent for a ranking algorithm that uses links, and is known for his research in distributed computing and information retrieval.

Those are just two of the distinguished researchers listed as co-authors of the 2006 Microsoft research paper about identifying spam through on-page content features. Among the several on-page content features the research paper analyzes is compressibility, which they discovered can be used as a classifier for indicating that a web page is spammy.

Detecting Spam Web Pages Through Content Analysis

Although the research paper was authored in 2006, its findings remain relevant today.

Then, as now, people attempted to rank hundreds or thousands of location-based web pages that were essentially duplicate content aside from city, region, or state names. Then, as now, SEOs often created web pages for search engines by excessively repeating keywords within titles, meta descriptions, headings, internal anchor text, and within the content to improve rankings.

Section 4.6 of the research paper explains:


“Some search engines give higher weight to pages containing the query keywords several times. For example, for a given query term, a page that contains it ten times may be higher ranked than a page that contains it only once. To take advantage of such engines, some spam pages replicate their content several times in an attempt to rank higher.”

The research paper explains that search engines compress web pages and use the compressed version to reference the original web page. They note that excessive amounts of redundant words result in a higher level of compressibility. So they set about testing whether there’s a correlation between a high level of compressibility and spam.

They write:

“Our approach in this section to locating redundant content within a page is to compress the page; to save space and disk time, search engines often compress web pages after indexing them, but before adding them to a page cache.

…We measure the redundancy of web pages by the compression ratio, the size of the uncompressed page divided by the size of the compressed page. We used GZIP …to compress pages, a fast and effective compression algorithm.”
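Here is a rough sketch of the paper’s metric using Python’s gzip module. Compressing the raw HTML (rather than extracted text) is an assumption on my part, since the excerpt doesn’t specify the preprocessing, and the URL is a placeholder; the 4.0 check refers to the threshold discussed in the next section.

```python
import gzip

import requests

def compression_ratio(url: str) -> float:
    """Uncompressed size divided by gzip-compressed size, per the paper's definition."""
    raw = requests.get(url, timeout=10).content  # raw HTML bytes (an assumption)
    return len(raw) / len(gzip.compress(raw))

ratio = compression_ratio("https://www.example.com/some-page")  # placeholder URL
print(f"compression ratio: {ratio:.2f}")
if ratio >= 4.0:
    print("At or above the 4.0 ratio the paper found to correlate heavily with spam.")
```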

High Compressibility Correlates To Spam

The results of the research showed that web pages with a compression ratio of at least 4.0 tended to be low-quality web pages (spam). However, the results at the highest compression ratios became less consistent because there were fewer data points, making them harder to interpret.

Figure 9: Prevalence of spam relative to compressibility of page.

The researchers concluded:


“70% of all sampled pages with a compression ratio of at least 4.0 were judged to be spam.”

But they also discovered that using the compression ratio by itself still resulted in false positives, where non-spam pages were incorrectly identified as spam:

“The compression ratio heuristic described in Section 4.6 fared best, correctly identifying 660 (27.9%) of the spam pages in our collection, while misidentifying 2,068 (12.0%) of all judged pages.

Using all of the aforementioned features, the classification accuracy after the ten-fold cross validation process is encouraging:

95.4% of our judged pages were classified correctly, while 4.6% were classified incorrectly.

More specifically, for the spam class, 1,940 out of the 2,364 pages were classified correctly. For the non-spam class, 14,440 out of the 14,804 pages were classified correctly. Consequently, 788 pages were classified incorrectly.”

The next section describes an interesting discovery about how to increase the accuracy of using on-page signals for identifying spam.

Insight Into Quality Rankings

The research paper examined multiple on-page signals, including compressibility. They discovered that each individual signal (classifier) was able to find some spam, but that relying on any one signal on its own resulted in flagging non-spam pages as spam, which is commonly referred to as a false positive.


The researchers made an important discovery that everyone interested in SEO should know, which is that using multiple classifiers increased the accuracy of detecting spam and decreased the likelihood of false positives. Just as important, the compressibility signal only identifies one kind of spam but not the full range of spam.

The takeaway is that compressibility is a good way to identify one kind of spam, but other kinds of spam are not caught by this one signal.

This is the part that every SEO and publisher should be aware of:

“In the previous section, we presented a number of heuristics for assaying spam web pages. That is, we measured several characteristics of web pages, and found ranges of those characteristics which correlated with a page being spam. Nevertheless, when used individually, no technique uncovers most of the spam in our data set without flagging many non-spam pages as spam.

For example, considering the compression ratio heuristic described in Section 4.6, one of our most promising methods, the average probability of spam for ratios of 4.2 and higher is 72%. But only about 1.5% of all pages fall in this range. This number is far below the 13.8% of spam pages that we identified in our data set.”

So, even though compressibility was one of the better signals for identifying spam, it still was unable to uncover the full range of spam within the dataset the researchers used to test the signals.

Combining Multiple Signals

The above results indicated that individual signals of low quality are less accurate, so they tested using multiple signals. What they discovered was that combining multiple on-page signals for detecting spam resulted in better accuracy, with fewer pages misclassified as spam.


The researchers explained that they tested the use of multiple signals:

“One way of combining our heuristic methods is to view the spam detection problem as a classification problem. In this case, we want to create a classification model (or classifier) which, given a web page, will use the page’s features jointly in order to (correctly, we hope) classify it in one of two classes: spam and non-spam.”

These are their conclusions about using multiple signals:

“We have studied various aspects of content-based spam on the web using a real-world data set from the MSNSearch crawler. We have presented a number of heuristic methods for detecting content based spam. Some of our spam detection methods are more effective than others, however when used in isolation our methods may not identify all of the spam pages. For this reason, we combined our spam-detection methods to create a highly accurate C4.5 classifier. Our classifier can correctly identify 86.2% of all spam pages, while flagging very few legitimate pages as spam.”
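The paper built a C4.5 decision tree; scikit-learn’s DecisionTreeClassifier is CART-based rather than C4.5, so the following is only a loose sketch of the “combine several weak on-page signals into one classifier” idea, with made-up feature values and labels.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-page features: [compression ratio, title word count,
# fraction of words drawn from the most popular words]
X = [
    [2.1, 8, 0.35],   # typical page
    [4.6, 24, 0.80],  # keyword-stuffed doorway page
    [1.9, 6, 0.40],
    [5.2, 30, 0.85],
    [2.4, 10, 0.38],
    [4.1, 22, 0.75],
]
y = [0, 1, 0, 1, 0, 1]  # 0 = non-spam, 1 = spam (toy labels)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[4.8, 26, 0.82]]))  # likely [1], i.e. flagged as spam
```

The point of the sketch is the structure, not the numbers: several weak signals combined in one model flag spam more accurately than any single threshold.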

Key Insight:

Misidentifying “very few legitimate pages as spam” was a significant breakthrough. The important insight that everyone involved with SEO should take away from this is that one signal by itself can result in false positives. Using multiple signals increases the accuracy.

What this means is that SEO tests of isolated ranking or quality signals will not yield reliable results that can be trusted for making strategy or business decisions.

Takeaways

We don’t know for certain whether compressibility is used by search engines, but it’s an easy-to-use signal that, combined with others, could be used to catch simple kinds of spam, like thousands of city-name doorway pages with similar content. Yet even if search engines don’t use this signal, it shows how easy it is to catch that kind of search engine manipulation, and that it’s something search engines are well able to handle today.

Here are the key points of this article to keep in mind:

  • Doorway pages with duplicate content are easy to catch because they compress at a higher ratio than normal web pages.
  • Groups of web pages with a compression ratio above 4.0 were predominantly spam.
  • Negative quality signals used by themselves to catch spam can lead to false positives.
  • In this particular test, they discovered that on-page negative quality signals only catch specific types of spam.
  • When used alone, the compressibility signal only catches redundancy-type spam, fails to detect other forms of spam, and leads to false positives.
  • Combining quality signals improves spam detection accuracy and reduces false positives.
  • Search engines today detect spam more accurately with the help of AI systems like SpamBrain.

Read the research paper, which is linked from the Google Scholar page of Marc Najork:

Detecting spam web pages through content analysis

Featured Image by Shutterstock/pathdoc



New Google Trends SEO Documentation


Google publishes new documentation for how to use Google Trends for search marketing

Google Search Central published new documentation on Google Trends, explaining how to use it for search marketing. This guide serves as an easy-to-understand introduction for newcomers and a helpful refresher for experienced search marketers and publishers.

The new guide has six sections:

  1. About Google Trends
  2. Tutorial on monitoring trends
  3. How to do keyword research with the tool
  4. How to prioritize content with Trends data
  5. How to use Google Trends for competitor research
  6. How to use Google Trends for analyzing brand awareness and sentiment

The section about monitoring trends explains that there are two kinds of rising trends, general and specific, both of which can be useful for developing content to publish on a site.

Using the Explore tool, you can leave the search box empty and view the current rising trends worldwide, or use a drop-down menu to focus on trends in a specific country. Users can further filter rising trends by time period, category, and type of search. The results show rising trends by topic and by keyword.

To search for specific trends, users just need to enter their queries and then filter them by country, time, category, and type of search.

The section called Content Calendar describes how to use Google Trends to understand which content topics to prioritize.


Google explains:

“Google Trends can be helpful not only to get ideas on what to write, but also to prioritize when to publish it. To help you better prioritize which topics to focus on, try to find seasonal trends in the data. With that information, you can plan ahead to have high quality content available on your site a little before people are searching for it, so that when they do, your content is ready for them.”
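If you want to pull this kind of seasonality data programmatically, the unofficial pytrends library is one option. This is a rough sketch (pytrends is a third-party wrapper, not a Google API, and the keyword and timeframe are just examples) that averages interest by calendar month to show when demand typically peaks, so you can publish ahead of it.

```python
from pytrends.request import TrendReq

keyword = "winter coats"  # example topic

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload([keyword], timeframe="today 5-y", geo="US")
interest = pytrends.interest_over_time()  # DataFrame indexed by date

# Average interest per calendar month to expose the seasonal pattern.
monthly = interest.groupby(interest.index.month)[keyword].mean().round(1)
print(monthly.sort_values(ascending=False))
```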

Read the new Google Trends documentation:

Get started with Google Trends

Featured Image by Shutterstock/Luis Molinero



All the best things about Ahrefs Evolve 2024


Hey all, I’m Rebekah and I am your Chosen One to “do a blog post for Ahrefs Evolve 2024”.

What does that entail exactly? I don’t know. In fact, Sam Oh asked me yesterday what the title of this post would be. “Is it like…Ahrefs Evolve 2024: Recap of day 1 and day 2…?” 

Even as I nodded, I couldn’t get over how absolutely boring that sounded. So I’m going to do THIS instead: a curation of all the best things YOU loved about Ahrefs’ first conference, lifted directly from X.

Let’s go!

OUR HUGE SCREEN

CONFERENCE VENUE ITSELF

It was recently named the best new skyscraper in the world, by the way.

 

OUR AMAZING SPEAKER LINEUP – SUPER INFORMATIVE, USEFUL TALKS!

 

Advertisement

GREAT MUSIC

 

AMAZING GOODIES

 

SELFIE BATTLE

Some background: Tim and Sam have a challenge going on to see who can take the most selfies with all of you. Last I heard, Sam was winning – but there is room for a comeback yet!

 

THAT BELL

Everybody’s just waiting for this one.

 

STICKER WALL

AND, OF COURSE…ALL OF YOU!

 

Advertisement

There’s a TON more content on LinkedIn – click here – but I have limited time to get this post up and can’t quite figure out how to embed LinkedIn posts so…let’s stop here for now. I’ll keep updating as we go along!


