How Stacker.com Earned 1M+ Organic Monthly Visits Through Content Syndication [Case Study]

The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.

Note: Amanda Milligan collaborated with Stacker’s SEO specialist, Sam Kaye, to create this case study.

When a marketer is asked about the value of content syndication, they’ll typically list two main benefits:

  1. Increased brand awareness, as you’re reaching a wider audience.

  2. Improved engagement, as people can share and comment across multiple versions of the story.

But one benefit of content syndication that marketers frequently overlook is the potential to improve a site’s SEO performance.

While paid syndication (like press release distribution) can’t carry SEO value, developing strong content that’s appealing to publishers and their readers can generate massive amounts of link authority back to a publishing domain, and drive significant organic growth.

But it’s difficult to test and implement a comprehensive syndication strategy, so there aren’t many resources about its SEO impact.

In this case study, we:

  • Outline the processes used by Stacker to syndicate content.

  • Look into organic results on Stacker.com as a result of content syndication efforts.

  • Discuss how content syndication can be used as part of a long-term organic growth strategy.

The content creation and distribution methods used for Stacker.com are the same as those used for Stacker Studio brand partners, making Stacker.com’s organic success an excellent case study for the long-term effectiveness of content syndication strategies.

The evidence of syndication’s impact

Before digging into how syndication works for SEO, let’s begin by proving that content syndication works.

Stacker.com has no proactive digital PR or backlinking strategies. Our growth strategy has been to use content syndication to reach new audiences and build domain authority. As a result, Stacker has accumulated 20K “dofollowed” referring domains and over one million unique backlinks over the last four years.

Organic traffic growth

Organic traffic: Google Search Console

Over a period of 16 months, Stacker.com saw a significant acceleration in organic growth, increasing by approximately 500% — from fewer than 10K organic entries per day to more than 50K entries per day. (Our site used to be TheStacker.com, and you can see the exponential growth on that domain as well before migrating to Stacker.com.)

Backlinks

Backlinks: Google Search Console

Backlinks that appear on pages that include rel=canonical tags are processed and valued by search engines, as evidenced by the 8M+ links created by this method and identified in Search Console. The majority of these links are in-text dofollows from syndicated article pickups with rel=canonical tags. This is an excellent indicator that Google is crawling and valuing these links.

GSC top external links overview for Stacker.com

Backlinks: Moz Pro (domain-wide)

Backlinks created via content syndication are also being picked up by Moz Pro and other third-party reporting tools.

Moz Pro reports a steady growth in the number of referring domains that correlates well with GSC link reporting metrics:

Moz Pro: referring domains to Stacker.com over time

Moz: individual links

In addition to tracking account-wide backlinking growth, Moz also picks up individual instances of links created via content syndication, such as these syndicated SFGate pickups.

Moz Pro: individual syndicated link pickups on SFGate

Domain Authority: Moz Pro

This accumulation of link authority over time has allowed Stacker to increase our Moz Pro Domain Authority score from 56 to 59 over the past year:

Moz Pro: Domain Authority trend for Stacker.com

Organic performance: Summary

In 2021 alone, Stacker.com saw a 500% increase in referring domains, a 380% increase in organic traffic, and an improvement in domain authority from 56 to 59 due in large part to our content syndication efforts.

These long-term trends of organic growth, paired with the fact that syndicated links are being picked up by both Google Search Console and Moz Pro, are a clear indication that content syndication is an effective way to drive organic traffic.

How content syndication improves SEO authority

Stacker’s syndication approach provides link authority in two ways: in-text dofollow backlinks and rel=canonical tags.

An in-text backlink acts as a signal of source attribution, telling search engines that a particular piece of data or content has been taken from another source. A canonical tag does the same thing, except that it attributes the entire article, not just a piece of it, back to the original publisher. Both are signals of source attribution, and both indicate that a publisher trusts your content enough to feature and share the article on their website.

When a piece of Stacker content is syndicated (re-published in its original form on another publisher’s site), the syndicated version includes a rel=canonical tag back to the original publisher’s hosted version, as well as an in-text dofollow backlink in the content intro:

Example rel=canonical tag from a syndicated piece
Example of an in-text, dofollow backlink attributing authorship in a syndicated piece
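The screenshots referenced above don’t carry over here, but as a rough, hypothetical illustration (the URL and wording below are invented, not taken from a real pickup), the two attribution signals look something like this in a syndicated page’s HTML:

<!-- In the <head> of the syndicated copy: canonical tag pointing to the original -->
<link rel="canonical" href="https://stacker.com/example-original-article" />

<!-- In the article intro: in-text dofollow backlink attributing the original publisher -->
<p>This story was originally published on
  <a href="https://stacker.com/example-original-article">Stacker</a>.</p>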

When a Stacker article is rewritten instead of syndicated (e.g., a publisher creates a locally-focused variant using Stacker source data), we request a backlink citing us as the original provider of the study.

Owned syndication vs. earned syndication

In the same way the industry talks about owned and earned media, you can think of two types of syndication as “owned syndication” and “earned syndication.”

Owned syndication involves reposting an article yourself on multiple platforms. An example of this would be publishing an article on your blog and then republishing it on Medium, LinkedIn, and other accounts you run. While this might increase the number of people who see your article, the likelihood of reliably driving organic traffic from these strategies at scale is virtually nil.


Earned syndication requires another publisher to decide that your content is valuable to their audience, so this type of syndication is harder to achieve. However, in addition to reaching a wider audience than with owned syndication, you get the authority signal of having your content hosted on another publisher’s domain. (Someone decided your content was worth republishing in full, and what’s a greater sign of trust than that?)

Why isn’t everyone doing this?

Because it’s not easy. For the first few years of our existence, Stacker did nothing but build publisher relationships and master the art of newsworthy content. Getting content pickups at scale requires building trust with large news publishers, as well as a large volume of content news publishers find uniquely interesting and relevant. Content syndication is built upon a foundation of content quality, publisher trust, and the technical capability to share content at scale, and these three components can take years to develop.

Stacker journalists are committed to understanding the coverage needs of local news organizations and investing in stories that can drive meaningful value for their audiences. After five years of working with publishing partners, we’ve studied the data on pickups and audience reach to uncover insights into what stories can be most useful.

We landed on some key earned syndication tenets:

Contextualization is key

Any type of publisher you come across will have their core editorial calendar established, with key topics they know their audience cares about. They’re not looking for outsiders to contribute to the heart of their publication, so don’t approach it that way. Instead, explore topics they typically cover, and perhaps even particular stories they’ve run, and ask yourself: What other perspective can I add to contextualize this story? Perhaps a historical angle or another point of comparison.

Data always helps

Some publishers don’t have access to data analysts, or if they do, they’re working on a ton of other projects and it’s hard to scale data-focused content. If you’re able to provide stories based on data that’s been distilled and presented with clear insights, many publishers would appreciate that. Additionally, just knowing your content is backed by data rather than opinion makes it easier to vet (and trust).


Help publishers reach their goals

Our direct line of communication with multiple publishers, both local and national, has led to fascinating conversations around their goals. To sum it up, every publisher has unique focus areas when it comes to audience acquisition and engagement. Some are focused on converting users to subscriptions, while others are focused on pageviews or time on site. Explore their site, see how they monetize, and consider how your content can help them meet these goals.

Let’s look at an example story Stacker created.

Feature image for Stacker MLB piece.

This piece uses Major League Baseball data to determine the most successful postseason teams. With data being the basis for the ranking, publishers don’t have to worry about the validity of the order, which is a major advantage in vetting.

This story offers original analysis in a way that can complement the local coverage of news organizations. While a sports beat writer might focus on the area team’s history, current team performance, or other local and newsy aspects of the story, this story offers contextual data analysis that can work for a variety of news organizations to augment their boots-on-the-ground reporting.

All in all, the article earned more than 300 publisher pickups and more than 100,000 story impressions. That’s an incredible amount of payoff for one piece of content, and earned syndication is the vehicle that made it possible.

The syndication takeaway

Like so many other SEO tactics, not all syndication is created equal. Potential clients have often asked me how Stacker is different from services like press release distribution platforms, with which they didn’t see SEO results.

Well, when you have sponsored or nofollow links, it’s never going to be the same as earned syndication. Getting white hat content pickups with consistency is difficult — it requires both top-tier content and the attention of journalists.

So my advice? Consider whether there are high-authority publications in your niche. Study what they publish and ask yourself:

  • Do you already publish content that they’d love?

  • Can you make some tweaks to already existing content to better fit their editorial style?

  • Can you create original research/reports that would interest their audience?

  • Would getting brand awareness with their audience help improve your brand reach?

If the answer is yes to at least two of these questions, consider content syndication as a strategy.

The Moz Links API: An Introduction

What exactly IS an API? They’re those things that you copy and paste long strange codes into Screaming Frog for links data on a Site Crawl, right?

I’m here to tell you there’s so much more to them than that – if you’re willing to take just a few little steps. But first, some basics.

What’s an API?

API stands for “application programming interface”, and it’s just the way of… using a thing. Everything has an API. The web is a giant API that takes URLs as input and returns pages.

But special data services like the Moz Links API have their own set of rules. These rules vary from service to service and can be a major stumbling block for people taking the next step.

When Screaming Frog gives you the extra links columns in a crawl, it’s using the Moz Links API, but you can have this capability anywhere. For example, all that tedious manual stuff you do in spreadsheet environments can be automated from data-pull to formatting and emailing a report.

If you take this next step, you can be more efficient than your competitors, designing and delivering your own SEO services instead of relying upon, paying for, and being limited by the next proprietary product integration.

GET vs. POST

Most APIs you’ll encounter use the same data transport mechanism as the web. That means there’s a URL involved just like a website. Don’t get scared! It’s easier than you think. In many ways, using an API is just like using a website.

As with loading web pages, the data of the request may be in one of two places: the URL itself, or in the body of the request. The URL is called the “endpoint” and the often invisibly submitted extra part of the request is called the “payload” or “data”. When the data is in the URL, it’s called a “query string” and indicates the “GET” method is used. You see this all the time when you search:

https://www.google.com/search?q=moz+links+api <-- GET method 

When the data of the request is hidden, it’s called a “POST” request. You see this when you submit a form on the web and the submitted data does not show on the URL. When you hit the back button after such a POST, browsers usually warn you against double-submits. The reason the POST method is often used is that you can fit a lot more in the request using the POST method than the GET method. URLs would get very long otherwise. The Moz Links API uses the POST method.
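To make the difference concrete, here’s a minimal sketch using Python’s requests library (introduced in the next section). The endpoints are only illustrative; httpbin.org is a public echo service that’s handy for this kind of experiment:

import requests

# GET: the data rides in the URL itself as a query string
get_response = requests.get("https://www.google.com/search",
                            params={"q": "moz links api"})
print(get_response.url)   # ...?q=moz+links+api is visible right in the URL

# POST: the data rides in the body of the request, not the URL
post_response = requests.post("https://httpbin.org/post",
                              data={"q": "moz links api"})
print(post_response.url)  # no query string; the payload traveled invisibly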

Making requests

A web browser is what traditionally makes requests of websites for web pages. The browser is a type of software known as a client. Clients are what make requests of services. More than just browsers can make requests. The ability to make client web requests is often built into programming languages like Python, or can be broken out as a standalone tool. The most popular tools for making requests outside a browser are curl and wget.

We are discussing Python here. Python has a built-in library called urllib, but it’s designed to handle so many different types of requests that it’s a bit of a pain to use. There are other libraries that are more specialized for making requests of APIs. The most popular for Python is called requests. It’s so popular that it’s used for almost every Python API tutorial you’ll find on the web. So I will use it too. This is what “hitting” the Moz Links API looks like:

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

Given that everything was set up correctly (more on that soon), this will produce the following output:

{'next_token': 'JYkQVg4s9ak8iRBWDiz1qTyguYswnj035nqrQ1oIbW96IGJsb2dZgGzDeAM7Rw==',
 'results': [{'anchor_text': 'moz',
              'external_pages': 7162,
              'external_root_domains': 2026}]}

This is JSON data. It’s contained within the response object that was returned from the API. It’s not on the drive or in a file. It’s in memory. So long as it’s in memory, you can do stuff with it (often just saving it to a file).
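For example, here’s a minimal sketch of doing exactly that, assuming response is the object returned by the requests.post() call above (the filename is just a placeholder):

import json

data = response.json()               # parse the JSON held in memory into a Python dict

with open("moz_anchor_text.json", "w") as f:
    json.dump(data, f, indent=2)     # persist it to disk for later use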


If you wanted to grab a piece of data within such a response, you could refer to it like this:

response.json()['results'][0]['external_pages']

This says: “Parse the response body as JSON, give me the first item in the results list, and then give me the external_pages value from that item.” The result would be 7162.

NOTE: If you’re actually following along executing code, the above line won’t work alone. There’s a certain amount of setup we’ll do shortly, including installing the requests library and setting up a few variables. But this is the basic idea.

JSON

JSON stands for JavaScript Object Notation. It’s a way of representing data in a way that’s easy for humans to read and write. It’s also easy for computers to read and write. It’s a very common data format for APIs that has somewhat taken over the world since the older ways were too difficult for most people to use. Some people might call this part of the “restful” API movement, but the much more difficult XML format is also considered “restful” and everyone seems to have their own interpretation. Consequently, I find it best to just focus on JSON and how it gets in and out of Python.

Python dictionaries

I lied to you. I said that the data structure you were looking at above was JSON. Technically it’s really a Python dictionary, or dict datatype object. It’s a special kind of object in Python that’s designed to hold key/value pairs. The keys are strings and the values can be any type of object. The keys are like the column names in a spreadsheet. The values are like the cells in the spreadsheet. In this way, you can think of a Python dict as a JSON object. For example, here’s how to create a dict in Python:

my_dict = {
    "name": "Mike",
    "age": 52,
    "city": "New York"
}

And here is the equivalent in JavaScript:

var my_json = {
    "name": "Mike",
    "age": 52,
    "city": "New York"
}

Pretty much the same thing, right? Look closely. Key-names and string values get double-quotes. Numbers don’t. These rules apply consistently between JSON and Python dicts. So as you might imagine, it’s easy for JSON data to flow in and out of Python. This is a great gift that has made modern API-work highly accessible to beginners through Jupyter Notebooks, a tool that has revolutionized the field of data science and is making inroads into marketing.

Flattening data

But beware! As data flows between systems, it’s not uncommon for the data to subtly change. For example, the JSON data above might be converted to a string. Strings might look exactly like JSON, but they’re not. They’re just a bunch of characters. Sometimes you’ll hear it called “serializing”, or “flattening”. It’s a subtle point, but worth understanding as it will help with one of the largest stumbling blocks with the Moz Links (and most JSON) APIs.

Objects have APIs

Actual JSON or dict objects have their own little APIs for accessing the data inside of them. The ability to use these JSON and dict APIs goes away when the data is flattened into a string, but it will travel between systems more easily, and when it arrives at the other end, it will be “deserialized” and the API will come back on the other system.

Data flowing between systems

This is the concept of portable, interoperable data. Back when it was called Electronic Data Interchange (or EDI), it was a very big deal. Then along came the web and then XML and then JSON and now it’s just a normal part of doing business.


If you’re in Python and you want to convert a dict to a flattened JSON string, you do the following:

import json

my_dict = {
    "name": "Mike",
    "age": 52,
    "city": "New York"
}

json_string = json.dumps(my_dict)

…which would produce the following output:

'{"name": "Mike", "age": 52, "city": "New York"}'

This looks almost the same as the original dict, but if you look closely you can see that single-quotes are used around the entire thing. Another obvious difference is that you can line-wrap real structured data for readability without any ill effect. You can’t do it so easily with strings. That’s why it’s presented all on one line in the above snippet.

Such stringifying processes are done when passing data between different systems because they are not always compatible. Normal text strings, on the other hand, are compatible with almost everything and can be passed on web-requests with ease. Such flattened strings of JSON data are frequently referred to as the request payload.

Anatomy of a request

Again, here’s the example request we made above:

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

Now that you understand what the variable name json_string is telling you about its contents, you shouldn’t be surprised to see this is how we populate that variable:

data_dict = {
    "target": "moz.com/blog",
    "scope": "page",
    "limit": 1
}

json_string = json.dumps(data_dict)

…and the contents of json_string looks like this:

'{"target": "moz.com/blog", "scope": "page", "limit": 1}'

This is one of my key discoveries in learning the Moz Links API. It’s something it has in common with countless other APIs out there, but it trips me up every time because it’s so much more convenient to work with structured dicts than flattened strings. However, most APIs expect the data to be a string for portability between systems, so we have to convert it at the last moment before the actual API-call occurs.

Pythonic loads and dumps

Now you may be wondering, in that above example, what a dump is doing in the middle of the code. The json.dumps() function is called a “dumper” because it takes a Python object and dumps it into a string. The json.loads() function is called a “loader” because it takes a string and loads it into a Python object.

What appear to be singular and plural options are actually string and file options. If your data is in a file (or file-like object), you use json.load() and json.dump(). If your data is a string, you use json.loads() and json.dumps(). The s stands for string; leaving the s off means you’re working with a file object.
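Here’s a quick sketch of all four side by side; example.json is just a scratch file for illustration:

import json

my_dict = {"name": "Mike", "age": 52, "city": "New York"}

json_string = json.dumps(my_dict)    # dumps: dict -> flattened JSON string
same_dict = json.loads(json_string)  # loads: JSON string -> dict again

with open("example.json", "w") as f:
    json.dump(my_dict, f)            # dump: write JSON to a file object

with open("example.json") as f:
    loaded = json.load(f)            # load: read JSON back from a file object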

Don’t let anybody tell you Python is perfect. It’s just that its rough edges are not excessively objectionable.

Assignment vs. equality

For those of you completely new to Python or programming in general, what we’re doing when we hit the API is called an assignment. The result of requests.post() is being assigned to the variable named response.

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

We are using the = sign to assign the value of the right side of the equation to the variable on the left side of the equation. The variable response is now a reference to the object that was returned from the API. Assignment is different from equality. The == sign is used for equality.

# This is assignment:
a = 1  # a is now equal to 1

# This is equality:
a == 1  # True, provided the assignment above has already been executed

The POST method

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

The requests library has a function called post(). As we’re calling it here, it takes 3 arguments. The first argument is the URL of the endpoint. The second argument is the data to send to the endpoint. The third argument is the authentication information to send to the endpoint.


Keyword parameters and their arguments

You may notice that some of the arguments to the post() function have names. Names are set equal to values using the = sign. Here’s how Python functions get defined. The first argument is positional both because it comes first and also because there’s no keyword. Keyworded arguments come after position-dependent arguments. Trust me, it all makes sense after a while. We all start to think like Guido van Rossum.

def arbitrary_function(argument1, argument2=None):
    # do stuff

The parameter name to the left of the = sign in the definition (argument2 above) is the “keyword”, and the values that come in on those locations are called “arguments”. Arguments are assigned to variable names right in the function definition, so you can refer to either argument1 or argument2 anywhere inside this function. If you’d like to learn more about the rules of Python functions, you can read about them here.
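To make that concrete, here’s a toy function (nothing to do with the Moz API) showing positional versus keyword arguments, mirroring how requests.post() takes the endpoint positionally and data= and auth= by keyword:

def greet(name, greeting="Hello"):
    return f"{greeting}, {name}!"

print(greet("Mike"))                    # positional only -> "Hello, Mike!"
print(greet("Mike", greeting="Howdy"))  # keyword argument -> "Howdy, Mike!"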

Setting up the request

Okay, so let’s set you up with everything necessary for that success-assured moment. We’ve been showing the basic request:

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

…but we haven’t shown everything that goes into it. Let’s do that now. If you’re following along and don’t have the requests library installed, you can do so with the following command from the same terminal environment from which you run Python:

pip install requests

Oftentimes Jupyter will have the requests library installed already, but in case it doesn’t, you can install it with the following command from inside a Notebook cell:

!pip install requests

And now we can put it all together. There are only a few things here that are new. The most important is how we’re taking 2 different variables and combining them into a single variable called AUTH_TUPLE. You will have to get your own ACCESSID and SECRETKEY from the Moz.com website.

The API expects these two values to be passed as a Python data structure called a tuple. A tuple is a list of values that don’t change. I find it interesting that requests.post() expects flattened strings for the data parameter, but expects a tuple for the auth parameter. I suppose it makes sense, but these are the subtle things to understand when working with APIs.
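If tuples are new to you, here’s a tiny sketch of the idea (the values are placeholders):

auth_tuple = ("your-access-id", "your-secret-key")  # a 2-item tuple: (username, password)
print(auth_tuple[0])      # you can read items by position...
# auth_tuple[0] = "new"   # ...but reassigning one raises a TypeError: tuples are immutable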

Here’s the full code:

import json
from pprint import pprint
import requests

# Set Constants
ACCESSID = "mozscape-1234567890"  # Replace with your access ID
SECRETKEY = "1234567890abcdef1234567890abcdef"  # Replace with your secret key
AUTH_TUPLE = (ACCESSID, SECRETKEY)

# Set Variables
endpoint = "https://lsapi.seomoz.com/v2/anchor_text"
data_dict = {"target": "moz.com/blog", "scope": "page", "limit": 1}
json_string = json.dumps(data_dict)

# Make the Request
response = requests.post(endpoint, data=json_string, auth=AUTH_TUPLE)

# Print the Response
pprint(response.json())

…which outputs:

{'next_token': 'JYkQVg4s9ak8iRBWDiz1qTyguYswnj035nqrQ1oIbW96IGJsb2dZgGzDeAM7Rw==',
 'results': [{'anchor_text': 'moz',
              'external_pages': 7162,
              'external_root_domains': 2026}]}

Using all upper case for the AUTH_TUPLE variable is a convention many use in Python to indicate that the variable is a constant. It’s not a requirement, but it’s a good idea to follow conventions when you can.

You may notice that I didn’t use all uppercase for the endpoint variable. That’s because the anchor_text endpoint is not a constant. There are a number of different endpoints that can take its place depending on what sort of lookup we want to do. The choices are:

  1. anchor_text

  2. final_redirect

  3. global_top_pages

  4. global_top_root_domains

  5. index_metadata

  6. link_intersect

  7. link_status

  8. linking_root_domains

  9. links

  10. top_pages

  11. url_metrics

  12. usage_data

And that leads into the Jupyter Notebook that I prepared on this topic located here on Github. With this Notebook you can extend the example I gave here to any of the 12 available endpoints to create a variety of useful deliverables, which will be the subject of articles to follow.
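As a parting sketch (my own wrapper, not something from the Notebook or the official docs), here’s one way to parameterize the sub-endpoint so the same call works for any of the 12 names above; the credentials are placeholders:

import json
import requests

BASE_URL = "https://lsapi.seomoz.com/v2/"            # same base as the anchor_text example
AUTH_TUPLE = ("your-access-id", "your-secret-key")   # replace with your own credentials

def moz_api(sub_endpoint, payload):
    """POST a payload dict to one of the Moz Links API sub-endpoints and return parsed JSON."""
    response = requests.post(BASE_URL + sub_endpoint,
                             data=json.dumps(payload),  # flatten the dict to a JSON string
                             auth=AUTH_TUPLE)
    response.raise_for_status()                         # surface HTTP errors loudly
    return response.json()

# Example: the same anchor_text lookup as above, via the helper
results = moz_api("anchor_text", {"target": "moz.com/blog", "scope": "page", "limit": 1})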

What Businesses Get Wrong About Content Marketing in 2023 [Expert Tips]

The promise of inbound marketing is a lure that attracts businesses of all kinds, but few understand the efforts it takes to be successful. After a few blog posts, they flame out and grumble “We tried content marketing, but it didn’t really work for us.” I hear this from prospective clients all the time.

Oracle subtracts social sharing tool AddThis

Oracle has removed social sharing and insights tool AddThis from its marketing cloud services. Customers who used AddThis widgets on their sites, enabling visitors to share content on social platforms, have seen the tools disappear with little warning.

A company notice provided by Oracle said that it had planned to terminate all AddThis services, effective May 31. The termination was “part of a periodic product portfolio review,” the statement read.

Oracle acquired AddThis in 2016.

Why we care. AddThis was a popular tool for upwards of 15 million publishers. Not only did it allow web visitors to easily share content on social, it also provided analytics to publishers via dashboard and weekly reports.

What’s next. Oracle provided the following steps for AddThis users in their notice:

  • The user must immediately cease its use of AddThis services, and promptly remove all AddThis related code and technology from its websites;
  • AddThis buttons may disappear from the user’s websites;
  • The AddThis dashboard associated with the user’s registration for AddThis, and all support for AddThis services, will no longer be available;
  • All features of AddThis configured to interoperate with user’s websites, any other Oracle services, or any third-party tools and plug-ins will no longer function.

Dig deeper: Marketers need a unified platform, not more standalone tools




About the author

Chris Wood

Chris Wood draws on over 15 years of reporting experience as a B2B editor and journalist. At DMN, he served as associate editor, offering original analysis on the evolving marketing tech landscape. He has interviewed leaders in tech and policy, from Canva CEO Melanie Perkins, to former Cisco CEO John Chambers, and Vivek Kundra, appointed by Barack Obama as the country’s first federal CIO. He is especially interested in how new technologies, including voice and blockchain, are disrupting the marketing world as we know it. In 2019, he moderated a panel on “innovation theater” at Fintech Inn, in Vilnius. In addition to his marketing-focused reporting in industry trades like Robotics Trends, Modern Brewery Age and AdNation News, Wood has also written for KIRKUS, and contributes fiction, criticism and poetry to several leading book blogs. He studied English at Fairfield University, and was born in Springfield, Massachusetts. He lives in New York.
