15 Steps for the Perfect Split Test

When marketers like us create landing pages, write email copy, or design call-to-action buttons, it can be tempting to use our intuition to predict what will make people click and connect.

However, you’re much better off conducting A/B testing than basing marketing decisions on a “feeling,” which can be detrimental to your results.

Keep reading to learn how to conduct the entire A/B testing process before, during, and after data collection so you can make the best decisions from your results.

A/B testing can be valuable because different audiences behave, well, differently. Something that works for one company may not necessarily work for another. In fact, conversion rate optimization (CRO) experts hate the term “best practices” because it may not actually be the best practice for you. But, this kind of testing can be complex if you’re not careful.

Let’s go over how A/B testing works to ensure that you don’t make incorrect assumptions about what your audience likes.

How Does A/B Testing Work?

To run an A/B test, you need to create two different versions of one piece of content, with changes to a single variable. Then, you’ll show these two versions to two similarly sized audiences and analyze which one performed better over a specific period of time (long enough to make accurate conclusions about your results).


A/B testing helps marketers observe how one version of a piece of marketing content performs alongside another. Here are two types of A/B tests you might conduct in an effort to increase your website’s conversion rate:

Example 1: User Experience Test

Perhaps you want to see if moving a certain call-to-action (CTA) button to the top of your homepage instead of keeping it in the sidebar will improve its click-through rate.

To A/B test this theory, you’d create another, alternative web page that uses the new CTA placement. The existing design with the sidebar CTA — or the “control” — is Version A. Version B with the CTA at the top is the “challenger.” Then, you’d test these two versions by showing each of them to a predetermined percentage of site visitors. Ideally, the percentage of visitors seeing either version is the same.

Learn how to easily A/B test a component of your website with HubSpot’s Marketing Hub.

Example 2: Design Test

Perhaps you want to find out if changing the color of your call-to-action (CTA) button can increase its click-through rate.

To A/B test this theory, you’d design an alternative CTA button with a different button color that leads to the same landing page as the control. If you usually use a red call-to-action button in your marketing content, and the green variation receives more clicks after your A/B test, this could merit changing the default color of your call-to-action buttons to green from now on.

To learn more about A/B testing, download our free introductory guide here.

A/B Testing in Marketing

A/B testing has a multitude of benefits to a marketing team, depending on what it is you decide to test. Above all, though, these tests are valuable to a business because they’re low in cost but high in reward.

Let’s say you employ a content creator with a salary of $50,000/year. This content creator publishes five articles per week for the company blog, totaling 260 articles per year. If the average post on the company’s blog generates 10 leads, you could say it costs just over $192 to generate 10 leads for the business ($50,000 salary ÷ 260 articles = $192 per article). That’s a solid chunk of change.

Now, if you ask this content creator to spend two days developing an A/B test on one article, instead of writing two articles in that time period, you might burn $192 because you’re publishing one fewer article. But if that A/B test finds you can increase each article’s conversion rate from 10 to 20 leads, you just spent $192 to potentially double the number of customers your business gets from your blog.

If the test fails, of course, you lost $192 — but now you can make your next A/B test even more educated. If that second test succeeds in doubling your blog’s conversion rate, you ultimately spent $384 to potentially double your company’s revenue. No matter how many times your A/B test fails, its eventual success will almost always outweigh the cost to conduct it.
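
For anyone who wants to sanity-check the math, here is the same back-of-the-envelope calculation as a short Python sketch (the salary, publishing pace, and lead counts are just the hypothetical figures from the example above):

```python
salary = 50_000           # hypothetical content creator salary per year
articles_per_year = 260   # five posts per week
leads_per_article = 10    # average leads generated by one post

cost_per_article = salary / articles_per_year               # ≈ $192.31
cost_per_lead = cost_per_article / leads_per_article        # ≈ $19.23

# Spend one article's worth of time on an A/B test that doubles conversions:
test_cost = cost_per_article                                # ≈ $192 of "lost" publishing time
cost_per_lead_after = cost_per_article / (leads_per_article * 2)  # ≈ $9.62

print(f"${cost_per_article:.2f} per article, "
      f"${cost_per_lead:.2f} per lead before the test, ${cost_per_lead_after:.2f} after")
```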

There are many types of split tests you can run to make the experiment worth it in the end. Common goals marketers have for their business when A/B testing include increasing website traffic, improving conversion rates, and lowering bounce rates.

Now, let’s walk through the checklist for setting up, running, and measuring an A/B test.

How to Conduct A/B Testing


Follow along with our free A/B testing kit, which includes everything you need to run A/B tests: a test tracking template, a how-to guide for instruction and inspiration, and a statistical significance calculator to see whether your tests were wins, losses, or inconclusive.

Before the A/B Test

Let’s cover the steps to take before you start your A/B test.

1. Pick one variable to test.

As you optimize your web pages and emails, you might find there are a number of variables you want to test. But to evaluate how effective a change is, you’ll want to isolate one “independent variable” and measure its performance. Otherwise, you can’t be sure which variable was responsible for changes in performance.

You can test more than one variable for a single web page or email — just be sure you’re testing them one at a time.

To determine your variable, look at the elements in your marketing resources and their possible alternatives for design, wording, and layout. Other things you might test include email subject lines, sender names, and different ways to personalize your emails.

Keep in mind that even simple changes, like changing the image in your email or the words on your call-to-action button, can drive big improvements. In fact, these sorts of changes are usually easier to measure than the bigger ones.

Note: There are some times when it makes more sense to test multiple variables rather than a single variable. This is a process called multivariate testing. If you’re wondering whether you should run an A/B test versus a multivariate test, here’s a helpful article from Optimizely that compares the two processes.

2. Identify your goal.

Although you’ll measure several metrics during any one test, choose a primary metric to focus on before you run the test. In fact, do it before you even set up the second variation. This is your “dependent variable,” which changes based on how you manipulate the independent variable.

Think about where you want this dependent variable to be at the end of the split test. You might even state an official hypothesis and examine your results based on this prediction.

If you wait until afterward to think about which metrics are important to you, what your goals are, and how the changes you’re proposing might affect user behavior, then you might not set up the test in the most effective way.

3. Create a ‘control’ and a ‘challenger.’

You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up the unaltered version of whatever you’re testing as your control scenario. If you’re testing a web page, this is the unaltered page as it exists already. If you’re testing a landing page, this would be the landing page design and copy you would normally use.

From there, build a challenger — the altered website, landing page, or email that you’ll test against your control. For example, if you’re wondering whether adding a testimonial to a landing page would make a difference in conversions, set up your control page with no testimonials. Then, create your challenger with a testimonial.

4. Split your sample groups equally and randomly.

For tests where you have more control over the audience — like with emails — you need to test with two or more audiences that are equal in size in order to get conclusive results.

How you do this will vary depending on the A/B testing tool you use. If you’re a HubSpot Enterprise customer conducting an A/B test on an email, for example, HubSpot will automatically split traffic to your variations so that each variation gets a random sampling of visitors.
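
Under the hood, many testing tools do this with deterministic, hash-based bucketing so that a given visitor always sees the same variation. This is only a minimal sketch of that general idea, not HubSpot’s actual implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-placement-test") -> str:
    """Deterministically bucket a user into variant A or B for a 50/50 split."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always lands in the same bucket, and buckets come out roughly 50/50.
print(assign_variant("visitor-42"), assign_variant("visitor-42"), assign_variant("visitor-43"))
```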

5. Determine your sample size (if applicable).

How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you’re running.

If you’re A/B testing an email, you’ll probably want to send an A/B test to a subset of your list that is large enough to achieve statistically significant results. Eventually, you’ll pick a winner and send the winning variation on to the rest of the list. (See “The Science of Split Testing” ebook at the end of this article for more on calculating your sample size.)

If you’re a HubSpot Enterprise customer, you’ll have some help determining the size of your sample group using a slider. It’ll let you do a 50/50 A/B test of any sample size — although all other sample splits require a list of at least 1,000 recipients.


If you’re testing something that doesn’t have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size. You’ll need to let your test run long enough to obtain a substantial number of views. Otherwise, it will be hard to tell whether there was a statistically significant difference between variations.
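
If you want a rough estimate before opening a tool, the standard two-proportion sample-size formula is straightforward to compute. A minimal sketch, assuming a two-sided test at 95% confidence and 80% power (the conversion rates below are made-up examples):

```python
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.80):
    """Visitors needed in each variant to detect the given lift (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # ≈ 0.84 for 80% power
    p_bar = (p_baseline + p_expected) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p_baseline * (1 - p_baseline)
                              + p_expected * (1 - p_expected)) ** 0.5) ** 2
    return round(numerator / (p_expected - p_baseline) ** 2)

# Detecting a lift from a 10% to a 12% conversion rate needs roughly 3,800 visitors per variant.
print(sample_size_per_variant(0.10, 0.12))
```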

6. Decide how significant your results need to be.

Once you’ve picked your goal metric, think about how significant your results need to be to justify choosing one variation over another. Statistical significance is a super important part of the A/B testing process that’s often misunderstood. If you need a refresher, I recommend reading this blog post on statistical significance from a marketing standpoint.

The higher the percentage of your confidence level, the more sure you can be about your results. In most cases, you’ll want a confidence level of 95% minimum — preferably even 98% — especially if it was a time-intensive experiment to set up. However, sometimes it makes sense to use a lower confidence rate if you don’t need the test to be as stringent.

Matt Rheault, a senior software engineer at HubSpot, likes to think of statistical significance like placing a bet. What odds are you comfortable placing a bet on? Saying “I’m 80% sure this is the right design and I’m willing to bet everything on it” is similar to running an A/B test to 80% significance and then declaring a winner.

Rheault also says you’ll likely want a higher confidence threshold when testing for something that only slightly improves conversion rate. Why? Because random variance is more likely to play a bigger role.

“An example where we could feel safer lowering our confidence threshold is an experiment that will likely improve conversion rate by 10% or more, such as a redesigned hero section,” he explained.

“The takeaway here is that the more radical the change, the less scientific we need to be process-wise. The more specific the change (button color, microcopy, etc.), the more scientific we should be because the change is less likely to have a large and noticeable impact on conversion rate.”

7. Make sure you’re only running one test at a time on any campaign.

Testing more than one thing for a single campaign — even if it’s not on the same exact asset — can complicate results. For example, if you A/B test an email campaign that directs to a landing page at the same time that you’re A/B testing that landing page, how can you know which change caused the increase in leads?

During the A/B Test

Let’s cover the steps to take during your A/B test.

8. Use an A/B testing tool.

To do an A/B test on your website or in an email, you’ll need to use an A/B testing tool. If you’re a HubSpot Enterprise customer, the HubSpot software has features that let you A/B test emails (learn how here), calls-to-action (learn how here), and landing pages (learn how here).

For non-HubSpot Enterprise customers, other options include Google Analytics, which lets you A/B test up to 10 full versions of a single web page and compare their performance using a random sample of users.

9. Test both variations simultaneously.

Timing plays a significant role in your marketing campaign’s results, whether it’s time of day, day of the week, or month of the year. If you were to run Version A during one month and Version B a month later, how would you know whether the performance change was caused by the different design or the different month?

When you run A/B tests, you’ll need to run the two variations at the same time, otherwise you may be left second-guessing your results.

The only exception here is if you’re testing timing itself, like finding the optimal times for sending out emails. This is a great thing to test because depending on what your business offers and who your subscribers are, the optimal time for subscriber engagement can vary significantly by industry and target market.

10. Give the A/B test enough time to produce useful data.

Again, you’ll want to make sure that you let your test run long enough to obtain a substantial sample size. Otherwise, it’ll be hard to tell whether there was a statistically significant difference between the two variations.

How long is long enough? Depending on your company and how you execute the A/B test, getting statistically significant results could happen in hours … or days … or weeks. A big part of how long it takes to get statistically significant results is how much traffic you get — so if your business doesn’t get a lot of traffic to your website, it’ll take much longer for you to run an A/B test.

Read this blog post to learn more about sample size and timing.

11. Ask for feedback from real users.

A/B testing has a lot to do with quantitative data … but that won’t necessarily help you understand why people take certain actions over others. While you’re running your A/B test, why not collect qualitative feedback from real users?

One of the best ways to ask people for their opinions is through a survey or poll. You might add an exit survey on your site that asks visitors why they didn’t click on a certain CTA, or one on your thank-you pages that asks visitors why they clicked a button or filled out a form.

You might find, for example, that a lot of people clicked on a call-to-action leading them to an ebook, but once they saw the price, they didn’t convert. That kind of information will give you a lot of insight into why your users are behaving in certain ways.

After the A/B Test

Finally, let’s cover the steps to take after your A/B test.

12. Focus on your goal metric.

Again, although you’ll be measuring multiple metrics, keep your focus on that primary goal metric when you do your analysis.

For example, if you tested two variations of an email and chose leads as your primary metric, don’t get caught up in open rate or click-through rate. You might see a high click-through rate but poor conversion rates, in which case you might end up choosing the variation with the lower click-through rate in the end.

13. Measure the significance of your results using our A/B testing calculator.

Now that you’ve determined which variation performs the best, it’s time to determine whether your results are statistically significant. In other words, are they enough to justify a change?

To find out, you’ll need to conduct a test of statistical significance. You could do that manually … or you could just plug in the results from your experiment to our free A/B testing calculator.

For each variation you tested, you’ll be prompted to input the total number of tries, like emails sent or impressions seen. Then, enter the number of goals it completed — generally you’ll look at clicks, but this could also be other types of conversions.


The calculator will spit out the confidence level your data produces for the winning variation. Then, compare that number against the confidence level you chose to determine statistical significance.
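
If you’d rather run the numbers yourself, the calculation behind most A/B calculators is a two-proportion z-test. Here is a minimal sketch; the email counts in the example are hypothetical:

```python
from statistics import NormalDist

def ab_confidence(conversions_a, tries_a, conversions_b, tries_b):
    """Two-sided two-proportion z-test; returns each rate and the confidence level."""
    rate_a, rate_b = conversions_a / tries_a, conversions_b / tries_b
    pooled = (conversions_a + conversions_b) / (tries_a + tries_b)
    std_err = (pooled * (1 - pooled) * (1 / tries_a + 1 / tries_b)) ** 0.5
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_a, rate_b, 1 - p_value

# Control: 100 clicks from 1,000 sends. Challenger: 125 clicks from 1,000 sends.
print(ab_confidence(100, 1000, 125, 1000))  # ≈ (0.10, 0.125, 0.92)
```

In this made-up example the challenger looks better, but the roughly 92% confidence falls short of a 95% threshold, so you would keep collecting data or call the test inconclusive.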

14. Take action based on your results.

If one variation is statistically better than the other, you have a winner. Complete your test by disabling the losing variation in your A/B testing tool.

If neither variation is statistically better, you’ve just learned that the variable you tested didn’t impact results, and you’ll have to mark the test as inconclusive. In this case, stick with the original variation or run another test. You can use the data from the inconclusive test to figure out a new iteration for your next one.

While A/B tests help you impact results on a case-by-case basis, you can also take the lessons you learn from each test and apply them to future efforts.

For example, if you’ve conducted A/B tests in your email marketing and have repeatedly found that using numbers in email subject lines generates better clickthrough rates, you might want to consider using that tactic in more of your emails.

15. Plan your next A/B test.

The A/B test you just finished may have helped you discover a new way to make your marketing content more effective — but don’t stop there. There’s always room for more optimization.

You can even try conducting an A/B test on another feature of the same web page or email you just did a test on. For example, if you just tested a headline on a landing page, why not do a new test on body copy? Or a color scheme? Or images? Always keep an eye out for opportunities to increase conversion rates and leads.

You can use HubSpot’s A/B Test Tracking Kit to plan and organize your experiments.


How to Read A/B Testing Results

As a marketer, you know the value of automation. Given this, you likely use software that handles the A/B test calculations for you — a huge help. But, after the calculations are done, you need to know how to read your results. Let’s go over how.

1. Check your goal metric.

The first step in reading your A/B test results is looking at your goal metric, which is usually conversion rate. After you’ve plugged your results into your A/B testing calculator, you’ll get a conversion rate for each version you’re testing, along with whether the difference between them is statistically significant.

2. Compare your conversion rates.

By looking at your results, you’ll likely be able to tell if one of your variations performed better than the other. However, the true test of success is whether the results are statistically significant, meaning one variation performed better because of the change you made (say, more compelling CTA text) rather than random chance.

Say, for example, Variation A had a 16.04% conversion rate and Variation B had a 16.02% conversion rate, and your chosen confidence level is 95%. Variation A has a higher conversion rate, but the difference is not statistically significant, so you can’t conclude that Variation A will meaningfully improve your overall conversion rate.

3. Segment your audiences for further insights.

Regardless of significance, it’s valuable to break down your results by audience segment to understand how each key area responded to your variations; a short sketch of this kind of per-segment analysis follows the list. Common variables for segmenting audiences are:

  • Visitor type, or which version performed best for new visitors versus repeat visitors.
  • Device type, or which version performed best on mobile versus desktop.
  • Traffic source, or which version performed best based on where traffic to your two variations originated.
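
If your testing tool can export one row per visitor, a quick pandas group-by produces these per-segment breakdowns. The column names and numbers below are invented for illustration:

```python
import pandas as pd

# Hypothetical export: one row per visitor from your A/B testing tool.
results = pd.DataFrame({
    "variant":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "device":    ["mobile", "desktop", "mobile", "desktop",
                  "mobile", "mobile", "desktop", "desktop"],
    "converted": [0, 1, 1, 1, 0, 1, 0, 0],
})

# Conversion rate and visitor count for each variant within each device segment.
by_segment = (results.groupby(["device", "variant"])["converted"]
                     .agg(visitors="count", conversion_rate="mean"))
print(by_segment)
```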

Let’s go over some examples of A/B experiments you could run for your business.

A/B Testing Examples

We’ve discussed how A/B tests are used in marketing and how to conduct one — but how do they actually look in practice?

As you might guess, we run many A/B tests to increase engagement and drive conversions across our platform. Here are five examples of A/B tests to inspire your own experiments.

1. Site Search

Site search bars help users quickly find what they’re after on a particular website. HubSpot found from previous analysis that visitors who interacted with its site search bar were more likely to convert on a blog post. So, we ran an A/B test in an attempt to increase engagement with the search bar.

In this test, the search bar functionality was the independent variable, and views on the content offer thank-you page were the dependent variable. We used one control condition and three challenger conditions in the experiment.

In the control condition (variant A), the search bar remained unchanged.


In variant B, the search bar was made larger and more visually prominent, and the placeholder text was set to “search by topic.”


Variant C appeared identical to variant B, but only searched the HubSpot Blog rather than the entire website.

In variant D, the search bar was made larger, but the placeholder text was set to “search the blog.” This variant also searched only the HubSpot Blog.


We found variant D to be the most effective: It increased conversions by 3.4% over the control and increased the percentage of users who used the search bar by 6.5%.

2. Mobile CTAs

HubSpot uses several CTAs for content offers in our blog posts, including ones in the body of posts as well as at the bottom of the page. We test these CTAs extensively to optimize their performance.

For our mobile users, we ran an A/B test to see which type of bottom-of-page CTA converted best. For our independent variable, we altered the design of the CTA bar. Specifically, we used one control and three challengers in our test. For our dependent variables, we used pageviews on the CTA thank you page and CTA clicks.

The control condition included our normal placement of CTAs at the bottom of posts. In variant B, the CTA had no close or minimize option.

In variant C, mobile readers could close the CTA by tapping an X icon. Once it was closed out, it wouldn’t reappear.


In variant D, we included an option to minimize the CTA with an up/down caret.


Our tests found all variants to be successful. Variant D was the most successful, with a 14.6% increase in conversions over the control. This was followed by variant C with an 11.4% increase and variant B with a 7.9% increase.

3. Author CTAs

In another CTA experiment, HubSpot tested whether adding the word “free” and other descriptive language to author CTAs at the top of blog posts would increase content leads. Past research suggested that using “free” in CTA text would drive more conversions and that text specifying the type of content offered would be helpful for SEO and accessibility.

In the test, the independent variable was CTA text and the main dependent variable was conversion rate on the content offer form.

In the control condition, author CTA text was unchanged.


In variant B, the word “free” was added to the CTA text.


In variant C, descriptive wording was added to the CTA text in addition to “free.”


Interestingly, variant B saw a loss in form submissions, down by 14% compared to the control. This was unexpected, since including “free” in content offer text is widely considered a best practice.

Meanwhile, form submissions in variant C outperformed the control by 4%. It was concluded that adding descriptive text to the author CTA helped users understand the offer and thus made them more likely to download.

4. Blog Table of Contents

To help users better navigate the blog, HubSpot tested a new Table of Contents (TOC) module. The goal was to improve user experience by presenting readers with their desired content more quickly. We also tested whether adding a CTA to this TOC module would increase conversions.

The independent variable of this A/B test was the inclusion and type of TOC module in blog posts, and the dependent variables were conversion rate on content offer form submissions and clicks on the CTA inside the TOC module.

The control condition did not include the new TOC module; control posts either had no table of contents or a simple bulleted list of anchor links near the top of the article.


In variant B, the new TOC module was added to blog posts. This module was sticky, meaning it remained onscreen as users scrolled down the page. Variant B also included a content offer CTA at the bottom of the module.


Variant C included an identical module to variant B but with the CTA removed.


Neither variant B nor variant C increased the conversion rate on blog posts. The control condition outperformed variant B by 7% and performed equally to variant C. Also, few users interacted with the new TOC module or the CTA inside the module.

5. Review Notifications

To determine the best way of gathering customer reviews, we ran a split test of email notifications versus in-app notifications. Here, the independent variable was the type of notification and the dependent variable was the percentage of those who left a review out of all those who opened the notification.

In the control, HubSpot sent a plain text email notification asking users to leave a review. In variant B, HubSpot sent an email with a certificate image including the user’s name.


For variant C, HubSpot sent users an in-app notification.


Ultimately, both emails performed similarly and outperformed the in-app notifications. About 25% of users who opened an email left a review, versus 10.3% of users who opened an in-app notification. Emails were also opened more often.

Start A/B Testing Today

A/B testing allows you to get to the truth of what content and marketing your audience wants to see. Learn how to best carry out some of the steps above using the free e-book below.

Editor’s note: This post was originally published in May 2016 and has been updated for comprehensiveness.


The Ultimate A/B Testing Kit




How the LinkedIn Algorithm Works in 2023 [Updated]

LinkedIn bills itself as “the world’s largest professional network” — and they have the numbers to prove it. With over 875 million members in more than 200 countries and regions, LinkedIn is immensely popular and well-used. On top of the sheer size of the platform, nearly 25% of users are senior-level influencers; about 10 million are categorized as C-level executives, and LinkedIn classifies 63 million as “decision makers.”

If you’re a B2B marketer or brand, you probably already know this social media platform offers you an excellent opportunity to reach your target demographic. However, seizing that opportunity is easier said than done since LinkedIn uses a unique algorithm to serve content to users.

In this article, we will walk through how the LinkedIn algorithm works in 2023, best practices for beating the algorithm with organic content, and how brands can elevate their presence on the platform.
 

What is the LinkedIn Algorithm?

 
The LinkedIn algorithm is a formula that determines which content gets seen by certain users on the platform. It’s designed to make each user’s newsfeed as relevant and interesting to them as possible to increase engagement and time spent on the platform. In this way, the LinkedIn algorithm is similar to the Facebook or TikTok algorithm, though LinkedIn’s is slightly more transparent (which is good news!). 

In fact, LinkedIn itself is a good source for demystifying the algorithm and understanding what content is prioritized for members. But the general function of the LinkedIn algorithm is to review and assess billions of posts every day and position those that are most authentic, substantive and relevant to each user at the top of their feeds.  

How the algorithm achieves that function is a little more complex.
 

How the LinkedIn Algorithm Works in 2023

 
 
LinkedIn users’ feeds don’t show posts in chronological order. Instead, the LinkedIn algorithm determines which posts show up at the top of users’ feeds, meaning that sometimes users see older or more popular posts before they see more recent ones.

Several factors influence the LinkedIn algorithm, and the factors change relatively often. Let’s take a closer look.
 

1. Assess and Filter Content by Quality

 
When someone posts on LinkedIn, the algorithm determines whether it’s spam, low-quality, or high-quality content. High-quality content is cleared, low-quality content undergoes additional screening, and spam content is eliminated. The specific signals below also double as a handy pre-publish checklist (see the sketch after the list).

 

  • Spam – Content flagged as spam can have poor grammar, contain multiple links within the post, tag more than five people, use more than ten hashtags (or use expressly prescriptive hashtags like #follow, #like, and #comment) or be one of multiple postings from the same user within three hours. 
  • Low-quality – Content categorized as low quality isn’t spam but is judged as not particularly relevant to the audience. These posts can be hard to read, tag people who are unlikely to respond or interact, or deal with topics too broad to be interesting to users.  
  • High-quality – “Clear” content is easy to read, encourages engagement, incorporates strong keywords, uses three or fewer hashtags, and reserves outbound links to the comments. In other words, it’s something your audience will want to read or see and react to in a substantive way.
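
LinkedIn doesn’t expose these checks through any official API, but the thresholds above translate into a simple pre-publish checklist you can script yourself. A rough sketch based on our reading of the list (the rules and patterns are our own, not LinkedIn’s):

```python
import re

ENGAGEMENT_BAIT = {"#follow", "#like", "#comment"}

def prepublish_warnings(post_text: str, tagged_people: int) -> list:
    """Flag the spam/low-quality signals described above (illustrative only)."""
    warnings = []
    hashtags = re.findall(r"#\w+", post_text.lower())
    links = re.findall(r"https?://\S+", post_text)
    if len(links) > 1:
        warnings.append("multiple links in the post body (move extras to the comments)")
    if tagged_people > 5:
        warnings.append("more than five people tagged")
    if len(hashtags) > 10:
        warnings.append("more than ten hashtags")
    elif len(hashtags) > 3:
        warnings.append("more than three hashtags (high-quality posts use three or fewer)")
    if ENGAGEMENT_BAIT & set(hashtags):
        warnings.append("engagement-bait hashtags such as #follow, #like, #comment")
    return warnings

print(prepublish_warnings("Big launch! https://example.com #follow #like #comment #SaaS",
                          tagged_people=2))
```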

 

2. Test Post Engagement with a Small Follower Group

 
Once a post has made it through the spam filter, the algorithm distributes it to a small subset of your followers for a short time (about an hour) to test its ability to generate engagement. If this group of followers likes, comments or shares the post within this “golden hour,” the LinkedIn algorithm will push it to more people. 

If, on the other hand, the post is ignored, or your followers choose to hide it from their feeds (or, worst of all, mark it as spam), the algorithm will not share it further.  
 

3. Expand the Audience Based on Ranking Signals

 
If the algorithm decides your post is worthy of being sent to a broader audience, it will use a series of three ranking signals to determine exactly who sees it: personal connection, interest relevance and engagement probability. 

These signals boil down to the level of connection between you and the user who potentially sees the post, that user’s interest in the content’s topic and the likelihood of that user interacting with the content. We’ll break down exactly what these ranking signals are further in the post.
 

4. Additional Spam Checks and Continued Engagement Monitoring

 
Even after a post is pushed to a broader audience, the LinkedIn algorithm continues monitoring how users perceive it in terms of quality. If your content is marked as spam or entirely ignored by the new audience group, LinkedIn will stop showing it to those audiences. On the other hand, if your post resonates with new audiences, LinkedIn will keep the post in rotation. So long as the post gets a steady stream of engagement, posts can stay in circulation for months.
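
LinkedIn hasn’t published its ranking code, so any implementation detail is guesswork. Still, the four steps above can be pictured as a toy scoring pipeline; every threshold, weight, and field name in this sketch is invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    quality: str                   # "spam", "low", or "high" from the initial classifier
    golden_hour_engagement: float  # reactions/comments/shares per follower in the first hour
    connection: float              # personal-connection signal for a given viewer, 0-1
    relevance: float               # interest-relevance signal, 0-1
    engagement_prob: float         # predicted chance this viewer interacts, 0-1

def feed_score(post: Post) -> float:
    """Toy version of the flow above: quality filter -> golden-hour test -> ranking signals."""
    if post.quality == "spam":
        return 0.0                 # filtered out at step 1
    if post.golden_hour_engagement < 0.01:
        return 0.0                 # ignored by the test group at step 2; not distributed further
    # Step 3: combine the three ranking signals (weights are made up for illustration).
    return 0.4 * post.connection + 0.3 * post.relevance + 0.3 * post.engagement_prob

print(feed_score(Post("high", 0.05, 0.7, 0.8, 0.6)))  # 0.70 for this hypothetical viewer
```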
 

8 Best Practices to Make the LinkedIn Algorithm Work for You

 
 Understanding how the LinkedIn algorithm works is the first step to reaching more people on LinkedIn and ensuring your content is well-received and engaging. The next step is optimizing your content based on the factors the algorithm prioritizes to maximize its effect. This is where mastering the ranking signals comes into play.

Here are eight tips for crafting high-performing LinkedIn content:
 

1. Know What’s Relevant to Your Audience

 
Relevance is what the algorithm prizes above all other content qualities. For LinkedIn, relevance translates to engagement, which leads to more time spent on the platform, which results in more ad revenue and continued growth. Following this tip will win you points in the “interest relevance” and “engagement probability” ranking categories. 

The entire LinkedIn ecosystem is set up to prioritize highly relevant content. To ensure your posts are relevant, create content focused on your niche and your audience’s specific needs and interests. As LinkedIn’s then-Director of Product Management Linda Leung explained in 2022, “we are continuously investing in the teams, tools, and technology to ensure that the content that you see on your feed adds value to your professional journey.” 

Use customer research and analytics from other social media platforms to learn more about what your audience wants to know. Focus on creating high-quality, valuable content that helps professionals succeed in formats they prefer (for example, videos, which get three times the average engagement of text-only posts). But above all, posting content that is personal and has industry relevance is vital. 
 

2. Post at the Right Time

 
As with most things, timing is crucial for successful LinkedIn posts. It’s even more critical when considering the “golden hour” testing process integral to the algorithm’s rankings. Remember, how much interaction a post gets within the first hour after it’s published determines whether it gets pushed to a broader audience. That means posting at the optimal time when your followers are online and primed to respond is a central factor to success.

You are the best judge of when your top LinkedIn followers and people in your network are most likely to be on the platform and engaging with content. But for the general public, data suggests the best time to post is at 9:00 a.m. EST on Tuesdays and Wednesdays. Cross-reference these times with your own analytics and knowledge about your audience — like a common time zone, for example — to find the best time for your posts.
 

3. Encourage Engagement

 
Your post format can play a significant role in user engagement. The LinkedIn algorithm doesn’t explicitly prioritize videos over photo and text posts, but LinkedIn’s internal research has found video ads are five times more likely to start conversations compared to other types of promoted content. 

Asking a question is another great way to encourage interaction with your post. If you’re sharing industry insights, open the conversation to commenters by asking them to share their opinions or experiences on the topic. 

Additionally, tagging someone in your LinkedIn post can expand its reach, but only tag relevant users and people likely to engage with the post. You don’t automatically get in front of a celebrity’s entire following just because you tagged them. In fact, the algorithm’s spam filter can penalize your post for that. But when you tag someone relevant, the tagged person’s connections and followers will also see your post in their feeds. 
 

4. … But Don’t Beg Users to Engage

 
The LinkedIn algorithm penalizes posts and hashtags that expressly ask for an engagement action like a follow or a comment. In an official blog post from May 2022, LinkedIn said that it “won’t be promoting” posts that “ask or encourage the community to engage with content via likes or reactions posted with the exclusive intent of boosting reach on the platform.” Essentially, content that begs for engagement is now considered low-quality and should be avoided.
 

5. Promote New Posts on Non-LinkedIn Channels

 
LinkedIn doesn’t exist in a vacuum, and neither do its users. Content that gains traction in other channels can help boost LinkedIn posts and vice versa. Sharing posts on your website, other social media platforms, or with coworkers can spark the initial engagement required for a viral LinkedIn post. Promoting content on other channels can also encourage inactive LinkedIn users to re-engage with the platform, and that interaction will be interpreted as net new engagement for your post.
 

6. Keep Your Posts Professional

 
As the “professional social networking site,” LinkedIn has a well-honed identity that extends to the type of content it favors: business-related content that users will find relevant and helpful to their careers or industry.

This might seem common sense, but it can be tempting to think that content that earns lots of clicks or likes on other social media platforms will perform similarly when cross-posted on LinkedIn. Unfortunately (or fortunately), hilarious memes, TikTok dance clips and personal videos don’t resonate with the LinkedIn algorithm. 
 

7. Avoid Outbound Links
 
 

The urge to include an outbound link in a LinkedIn post is real, especially for B2B marketers using LinkedIn to generate leads and traffic to their websites. But this is universally regarded as a tactic to avoid. LinkedIn wants to keep users on the platform and engaging; link-outs defeat that purpose. Therefore, the algorithm tends to downgrade content that includes an outbound link. 

Posts without outbound links enjoyed six times more reach than posts containing links. Does that mean there’s no room for a link to your brand’s website or blog with additional resources? No. But the best practice is creating content that encourages a conversation and letting the audience request an outbound link. If you feel compelled to link to something off-platform, include that link in the comments. 
 

8. Keep an Eye on SSI

 
LinkedIn has a proprietary metric called the Social Selling Index, which measures “how effective you are at establishing your professional brand, finding the right people, engaging with insights, and building relationships.” Per LinkedIn, social selling leaders create 45% more opportunities than those users with lower SSI scores.

A higher SSI boosts users’ posts closer to the top of their audience’s feeds. While this impacts post visibility for individual posters rather than brands and companies, it remains a significant influence on LinkedIn’s algorithm and is worth noting. 

Source: Business 2 Community
 

An Overview of Ranking Signals on LinkedIn’s Algorithm

 
 
As mentioned earlier, there are three ranking signals the LinkedIn algorithm uses to rank posts in a user’s feed:
 

  1. Personal connections
  2. Interest relevance
  3. Engagement probability

 
And here’s how each signal impacts a post’s ranking:
 

Personal Connections

 
In 2019, LinkedIn began deprioritizing content from mega influencers (think Oprah and Richard Branson) and instead began highlighting content from users’ personal connections. To determine a user’s connections, LinkedIn considers these two things:
 

  1. Who a user works with or has previously worked with
  2. Who a user has interacted with before on the platform

 
At the top of the feed, users now see posts by people they engage with often and by anyone who posts consistently. Users also see more posts from connections with whom they share interests and skills (according to their LinkedIn profiles). 

That said, as of 2022, LinkedIn is also “creating more ways to follow people throughout the feed experience,” including thought leaders, industry experts, and creators that may be outside of a user’s network. So it’s important to remember that personal connection is just one factor influencing post ranking.
 

Interest Relevance

 
Relevance is another of the three ranking signals – and in many ways, the most important one. LinkedIn explains on its engineering blog: “We already have a strong set of explicit and implicit signals that provide context on what content a member may find interesting based on their social connections and the Knowledge Graph (e.g., a company that they follow, or news widely shared within their company).”

LinkedIn also uses what they call an “interest graph” that represents the relationships between users and a variety of topics. This lets the LinkedIn algorithm measure the following:
 

  • How interested users are in certain topics
  • How related different topics are to one another
  • Which connections share a user’s interests

 
The algorithm also considers the companies, people, hashtags, and topics mentioned in a post to predict interest. To maximize the interest relevance ranking, you have to understand your target audience and craft content that they’ll find relevant.
 

Engagement Probability

 
Interaction plays a significant role in a post’s ranking on LinkedIn. The platform uses machine learning to rank interaction in two ways:
 

  1. How likely a user is to comment on, share, or react to a post based on the content and people they have interacted with
  2. How quickly a post starts receiving engagement after it’s published. The faster users interact with a post, the more likely it will appear at the top of others’ feeds

 
Users who regularly interact with others’ posts in their LinkedIn feed are more likely to see interactions on their content, which in turn means that they’ll be more likely to show up on other people’s feeds.
 

Elevate Your Brand’s LinkedIn Presence

 
The LinkedIn algorithm can seem intimidating, but it really isn’t. It relies on a series of rules and ranking measures that can be understood and mastered to present users with content they find helpful in their professional lives.

Knowing that the algorithm prioritizes engagement, relevance and connection will help get your posts in front of more LinkedIn users and improve your overall performance on the platform. And by following the eight best practices outlined in this article, you’ll be able to keep your audience’s interest and create plenty of opportunities for them to engage with your content. 

Tinuiti helps brands strengthen relationships with new and current customers through expert social media strategy and brilliant creative. Reach out to our Paid Social services team to learn how to start advancing your LinkedIn strategy today.

Editor’s Note: This post was originally published in September 2021 and has been regularly updated for freshness, accuracy, and comprehensiveness.

A Digital Practitioner’s Guide to Starting the New Year Right



It’s that time of year again – the holiday excitement has faded as we fall back into the workweek. With a year’s worth of work stretched in front of us, there can be a sense of both opportunity and overwhelm.

Because transitioning back into the swing of things can be daunting, we’ve gathered key takeaways from the previous year and our global Opticon Tour, along with how to apply those learnings successfully in 2023.

1. “Work about work” is holding teams back. Take this chance to declutter.  

Consider the reality of what most digital teams are up against. When it comes to managing the content lifecycle, draft documents that are stored in separate places and disparate tools that don’t work together are the norm for many. With no centralized point of communication and cumbersome workflows, it can take forever for teams to create and approve content, and work is often duplicated or unused.  

After work is completed, it can be easy to dismiss the headaches caused by inefficient, siloed workflows and processes. But the long-term effects of inefficient and bulky collaboration can be detrimental to a brand’s digital experience – and bottom line. (Those who joined us in San Diego at Opticon might recall seeing this concept played out.)

Digital teams with unwieldy content lifecycles can take back control of the process, saving countless hours and frustration over the year.

2. Change is constant. Set your team up to be adaptive. 

We all know how difficult it is to create amazing customer experiences these days. The world is moving faster than ever, and change is constant and chaotic with uncertainty on nearly every level: economic upheaval, rapid cultural change, ever-escalating customer expectations (thanks, Amazon), and a tight talent market.  

To not only stay the course but to also grow in this unpredictable environment, it’s important that teams constantly stay on the lookout for new ways to drive more sales and increase loyalty. In other words, consistently deliver modern, relevant, and personalized commerce experiences.  

But keeping pace doesn’t necessarily mean working harder. With Optimizely’s Monetize solutions, teams can drive sales and loyalty with less cost and effort.

3. Data fuels a great customer experience. Test and optimize every touchpoint. 

As practitioners, we all know that the best customer experience wins.  

When teams don’t clearly understand what’s happening and when, they miss the mark. With little patience and high expectations, today’s customers will simply switch to a competitor that better understands them and provides a more personalized experience.  

But when teams work together to inject data across silos, they have the insight needed to make the right decisions and create with confidence.  

For instance, take the marketing team: with access to a slew of customer touchpoints and experimentation data, marketers should be a critical resource for understanding customers’ wants and needs. Developers, product teams, and beyond should utilize this data to remove the guesswork and inform strategies, priorities, roadmaps, and decisions.  

With customer-centricity at the heart of any great digital experience, the best experiences are fueled by data uncovered by high-velocity experimentation. Consider the power that Optimizely’s Experimentation products can have on your entire team’s ability to unlock personalized insights and better connect with customers.  

Hopefully, your new year is off to a great start – but if you’re feeling a little off track, contact Optimizely today to learn more about how our DXP can impact your business and set you up for a successful and productive year.

A special thanks to our sponsors at Opticon London – Microsoft, Google Cloud, Valtech, and Siteimprove – and Opticon Stockholm – Microsoft, Google Cloud, Valtech, and Contentsquare. 


Top 6 SEO Tips for Bloggers that Will Skyrocket Google Rankings

The majority of blogs rely heavily on search engines to drive traffic. At the same time, there is a misunderstanding that creating “SEO-optimized content” entails stuffing keywords into paragraphs and headers, which leads to barely readable blog articles.

But that’s not what SEO is all about. In this article, you’ll discover the top six SEO strategies and how crucial they are for improving your blog posts’ rank in Google search results.

How Important Are Google Rankings For Your Blog?

Search engine traffic is essential if you’re blogging in hopes of growing your business. After all, what’s the point in writing content if no one is going to see it? The higher your blog post ranks in Google search results, the more likely people will find and read it.

And the more people who read your blog post, the more likely someone will take the desired action, whether signing up for your email list, buying one of your products, or hiring you as a coach or consultant. So, it is essential to have an SEO-optimized blog.

How To Incorporate SEO Into Your Blogs?

Start putting these six pieces of constructive SEO advice for bloggers into practice immediately:

1. Write For Your Readers

The standard of blog writing declined significantly when “SEO content” became a buzzword. Instead of writing for people, many bloggers began to write mainly for search engine robots. Unfortunately, some bloggers still write this way today.

But luckily, things have greatly improved, especially since the Hummingbird update and the rise of voice searches. The Hummingbird update was developed to assist Google in comprehending the purpose of searches.  

For instance, Google would understand that you are seeking nearby restaurants if you Googled “places to buy burgers.” It influences SEO because search engines are now more geared toward providing answers to queries and supporting semantic search rather than merely focusing on keywords.

You typically utilize Google, Bing, YouTube, or even Siri to find answers to questions. Take that idea and use it to improve your blog. Your writing should address the concerns of your intended audience in detail.

Your blog shouldn’t exist solely to help you rank for a particular keyword. Instead of concentrating on keywords, shift your attention to creating content that addresses the issues of your target audience.

2. Link to High-Authority Websites

Don’t be scared to use external links when you construct your blog content. In addition to giving blog visitors more resources to read and learn from, linking to reputable websites demonstrates to search engines that you have done your research.

Research-based statistics from reputable websites are the best way to support blog content. Using compelling statistics will help you create a stronger, more specific argument that will help you win your readers’ trust.

3. Design a Link-Building Strategy

Your search ranking is significantly impacted by link building. Why? Consider search results a contest where the people who receive the most votes win.

Google considers every website that links back to you as a vote for your website, elevating your content’s credibility. You will move up in ranking as a result. Here are some starter ideas for your link-building:

  • Reach out to other bloggers in your niche and offer to guest post on their websites. Include a link back to your blog in your guest post.
  • Participate in online and offline community events related to your niche. For example, if you blog about fitness, you could attend a trade show related to fitness or health.
  • Create helpful resources that other bloggers in your niche find valuable, such as an eBook, cheat sheet or template. Include a link back to your blog on these resources.
  • Leverage social media to get your content in front of as many people as possible.

4. Learn About Google Webmaster Tools

Do you remember getting a warning from your teacher when you did anything incorrectly in elementary school? Your opportunity to clean up your act and get back on track to avoid punishment was given to you with that warning. In a way, Google Webmaster Tools serves that purpose for your blog.

Google Webmaster Tools will warn you when something suspicious is happening with your blog by giving you diagnostics, tools, and data to keep your site in good condition.

What you can observe in the Webmaster Tools Search Console is:

  • The percentage of your pages that Google has indexed
  • If your website is having issues with Google’s bots indexing it
  • If your website was hacked
  • How search engine bots see your website
  • Links to your site
  • If Google penalized your website manually

The great thing about Webmaster Tools is that it informs you what’s wrong with your website and how to fix it. To resolve any difficulties Google discovers with your blog, you can utilize a vast knowledge base of articles and a forum.

5. Include Keywords in Your Meta Description

Does your post include a meta description? If not, you’re probably not giving your content the best chance of being seen. Google also analyzes meta descriptions to determine search results. The meta description is the one- to three-sentence summary that appears beneath a result’s title.

Use meta descriptions to briefly summarize the subject of your post (a quick length-and-keyword check is sketched after the list below), and keep in mind to:

  1. Make it brief.
  2. Use one or two keywords.
  3. Make it stand out, since other posts will likely cover the same topic.
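
As a quick sanity check before you publish, you can script these rules. A small sketch; the 160-character cutoff is a rough rule of thumb rather than an official Google limit:

```python
def check_meta_description(description: str, keywords: list) -> list:
    """Rough checks for a meta description: length and keyword usage (illustrative only)."""
    issues = []
    if len(description) > 160:      # results are typically truncated around 155-160 characters
        issues.append(f"too long ({len(description)} characters); aim for 160 or fewer")
    found = [kw for kw in keywords if kw.lower() in description.lower()]
    if not found:
        issues.append("no target keyword found")
    elif len(found) > 2:
        issues.append("more than two keywords; this can read as keyword stuffing")
    return issues

print(check_meta_description(
    "Discover six practical SEO tips bloggers can use to climb the Google rankings.",
    ["seo tips", "google rankings"],
))  # -> [] (passes both checks)
```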

6. Establish Linkable Assets

A linkable asset is a unique, instrumental piece of content that’s so valuable people can’t resist linking to it. It’s similar to dining at a fantastic restaurant and a merely adequate one. You’ll go out of your way to tell everyone about the excellent restaurant, but if someone asks if you’ve been there, you’ll probably only mention the merely adequate one.

The ProBlogger job board is an excellent example of a linkable asset. For independent bloggers looking for compensated writing opportunities, it’s a terrific resource. The page is constantly linked in blog posts about monetizing your blog or websites that pay you to write. Why? Because it is a rare resource that would be costly to recreate.

You can build the following linkable assets for your blog:

  • Free software or apps
  • Ultimate guide posts
  • Huge lists
  • Infographics
  • Online guide
  • Influencer tally reports
  • Quizzes
  • Case studies
  • Industry studies or surveys

Final Thoughts

By following these six SEO tips for bloggers, you’ll be well on your journey to improving your blog’s Google ranking. Remember that SEO is an ongoing process, so don’t get discouraged if you don’t see results immediately. The key is to be patient and consistent in your efforts, and soon you’ll start reaping the rewards of your hard work!
