
15 Steps for the Perfect Split Test


When marketers like us create landing pages, write email copy, or design call-to-action buttons, it can be tempting to use our intuition to predict what will make people click and connect.

However, you’re much better off conducting A/B tests than basing marketing decisions on a “feeling,” which can be detrimental to your results.

Keep reading to learn how to conduct the entire A/B testing process before, during, and after data collection so you can make the best decisions from your results.

A/B testing can be valuable because different audiences behave, well, differently. Something that works for one company may not necessarily work for another. In fact, conversion rate optimization (CRO) experts hate the term “best practices” because it may not actually be the best practice for you. But, this kind of testing can be complex if you’re not careful.

Let’s go over how A/B testing works to ensure that you don’t make incorrect assumptions about what your audience likes.

How Does A/B Testing Work?

To run an A/B test, you need to create two different versions of one piece of content, with changes to a single variable. Then, you’ll show these two versions to two similarly sized audiences and analyze which one performed better over a specific period of time (long enough to make accurate conclusions about your results).


A/B testing helps marketers observe how one version of a piece of marketing content performs alongside another. Here are two types of A/B tests you might conduct in an effort to increase your website’s conversion rate:

Example 1: User Experience Test

Perhaps you want to see if moving a certain call-to-action (CTA) button to the top of your homepage instead of keeping it in the sidebar will improve its click-through rate.

To A/B test this theory, you’d create another, alternative web page that uses the new CTA placement. The existing design with the sidebar CTA — or the “control” — is Version A. Version B with the CTA at the top is the “challenger.” Then, you’d test these two versions by showing each of them to a predetermined percentage of site visitors. Ideally, the percentage of visitors seeing either version is the same.

Learn how to easily A/B test a component of your website with HubSpot’s Marketing Hub.

Example 2: Design Test

Perhaps you want to find out if changing the color of your call-to-action (CTA) button can increase its click-through rate.

To A/B test this theory, you’d design an alternative CTA button with a different button color that leads to the same landing page as the control. If you usually use a red call-to-action button in your marketing content, and the green variation receives more clicks after your A/B test, this could merit changing the default color of your call-to-action buttons to green from now on.

To learn more about A/B testing, download our free introductory guide here.

A/B Testing in Marketing

A/B testing has a multitude of benefits to a marketing team, depending on what it is you decide to test. Above all, though, these tests are valuable to a business because they’re low in cost but high in reward.

Let’s say you employ a content creator with a salary of $50,000/year. This content creator publishes five articles per week for the company blog, totaling 260 articles per year. If the average post on the company’s blog generates 10 leads, you could say it costs just over $192 to generate 10 leads for the business ($50,000 salary ÷ 260 articles = $192 per article). That’s a solid chunk of change.

Now, if you ask this content creator to spend two days developing an A/B test on one article, instead of writing two articles in that time period, you might burn $192 because you’re publishing one fewer article. But if that A/B test finds you can increase each article’s conversion rate from 10 to 20 leads, you just spent $192 to potentially double the number of customers your business gets from your blog.

If the test fails, of course, you lost $192 — but now you can make your next A/B test even more educated. If that second test succeeds in doubling your blog’s conversion rate, you ultimately spent $384 to potentially double your company’s revenue. No matter how many times your A/B test fails, its eventual success will almost always outweigh the cost to conduct it.
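To make that math concrete, here’s the back-of-the-envelope calculation as a quick sketch. All figures are the hypothetical ones from the scenario above, not benchmarks:

```python
# Back-of-the-envelope blog ROI math from the scenario above.
# All figures are the article's hypothetical assumptions.

salary = 50_000          # content creator's annual salary ($)
articles_per_year = 260  # 5 posts/week x 52 weeks
leads_per_article = 10

cost_per_article = salary / articles_per_year
cost_per_lead = cost_per_article / leads_per_article

print(f"Cost per article: ${cost_per_article:.2f}")  # $192.31
print(f"Cost per lead:    ${cost_per_lead:.2f}")     # $19.23

# If a $192 A/B test doubles leads per article (10 -> 20),
# the cost per lead on every future article is halved.
print(f"After a winning test: ${cost_per_article / 20:.2f} per lead")  # $9.62
```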

There are many types of split tests you can run to make the experiment worth it in the end. Common goals marketers have when A/B testing include increasing website traffic, raising conversion rates, and lowering bounce rates.

Now, let’s walk through the checklist for setting up, running, and measuring an A/B test.

How to Conduct A/B Testing


Follow along with our free A/B testing kit, which includes everything you need to run an A/B test: a test tracking template, a how-to guide for instruction and inspiration, and a statistical significance calculator to tell you whether your tests were wins, losses, or inconclusive.

Before the A/B Test

Let’s cover the steps to take before you start your A/B test.

1. Pick one variable to test.

As you optimize your web pages and emails, you might find there are a number of variables you want to test. But to evaluate how effective a change is, you’ll want to isolate one “independent variable” and measure its performance. Otherwise, you can’t be sure which variable was responsible for changes in performance.

You can test more than one variable for a single web page or email — just be sure you’re testing them one at a time.

To determine your variable, look at the elements in your marketing resources and their possible alternatives for design, wording, and layout. Other things you might test include email subject lines, sender names, and different ways to personalize your emails.

Keep in mind that even simple changes, like changing the image in your email or the words on your call-to-action button, can drive big improvements. In fact, these sorts of changes are usually easier to measure than the bigger ones.

Note: There are some times when it makes more sense to test multiple variables rather than a single variable. This is a process called multivariate testing. If you’re wondering whether you should run an A/B test versus a multivariate test, here’s a helpful article from Optimizely that compares the two processes.

2. Identify your goal.

Although you’ll measure several metrics during any one test, choose a primary metric to focus on before you run the test. In fact, do it before you even set up the second variation. This is your “dependent variable,” which changes based on how you manipulate the independent variable.

Think about where you want this dependent variable to be at the end of the split test. You might even state an official hypothesis and examine your results based on this prediction.

If you wait until afterward to think about which metrics are important to you, what your goals are, and how the changes you’re proposing might affect user behavior, then you might not set up the test in the most effective way.

3. Create a ‘control’ and a ‘challenger.’

You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up the unaltered version of whatever you’re testing as your control scenario. If you’re testing a web page, this is the unaltered page as it exists already. If you’re testing a landing page, this would be the landing page design and copy you would normally use.

From there, build a challenger — the altered website, landing page, or email that you’ll test against your control. For example, if you’re wondering whether adding a testimonial to a landing page would make a difference in conversions, set up your control page with no testimonials. Then, create your challenger with a testimonial.

4. Split your sample groups equally and randomly.

For tests where you have more control over the audience — like with emails — you need to test with two or more audiences that are equal in order to have conclusive results.

How you do this will vary depending on the A/B testing tool you use. If you’re a HubSpot Enterprise customer conducting an A/B test on an email, for example, HubSpot will automatically split traffic to your variations so that each variation gets a random sampling of visitors.
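Under the hood, many testing tools use deterministic, hash-based bucketing so each visitor gets a random but stable assignment. Here’s a minimal sketch of the idea; the experiment name and 50/50 split are illustrative, not any particular tool’s implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing (experiment + user_id) yields a stable, pseudo-random
    assignment: repeat visitors always see the same version, and
    traffic splits evenly across the variants.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor always lands in the same group for a given test.
print(assign_variant("visitor-42", "homepage-cta-position"))
```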

5. Determine your sample size (if applicable).

How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you’re running.

If you’re A/B testing an email, you’ll probably want to send an A/B test to a subset of your list that is large enough to achieve statistically significant results. Eventually, you’ll pick a winner and send the winning variation on to the rest of the list. (See “The Science of Split Testing” ebook at the end of this article for more on calculating your sample size.)

If you’re a HubSpot Enterprise customer, you’ll have some help determining the size of your sample group using a slider. It’ll let you do a 50/50 A/B test of any sample size — although all other sample splits require a list of at least 1,000 recipients.


If you’re testing something that doesn’t have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size. You’ll need to let your test run long enough to obtain a substantial number of views. Otherwise, it will be hard to tell whether there was a statistically significant difference between variations.
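If your tool doesn’t size the test for you, a standard two-proportion power calculation gives a rough floor for how many visitors each variation needs. A minimal sketch, with illustrative conversion rates:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_control: float, p_expected: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Rough sample size per variation for a two-proportion test.

    p_control:  baseline conversion rate (e.g., 0.10)
    p_expected: rate you hope the challenger achieves (e.g., 0.12)
    alpha:      significance level (0.05 ~ a 95% confidence bar)
    power:      probability of detecting the lift if it's real
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_expected) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p_control * (1 - p_control)
                          + p_expected * (1 - p_expected))) ** 2
         / (p_control - p_expected) ** 2)
    return ceil(n)

# Detecting a lift from 10% to 12% at 95% confidence and 80% power:
print(sample_size_per_variant(0.10, 0.12))  # ~3,841 per variation
```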

6. Decide how significant your results need to be.

Once you’ve picked your goal metric, think about how significant your results need to be to justify choosing one variation over another. Statistical significance is a super important part of the A/B testing process that’s often misunderstood. If you need a refresher, I recommend reading this blog post on statistical significance from a marketing standpoint.

The higher the percentage of your confidence level, the more sure you can be about your results. In most cases, you’ll want a confidence level of 95% minimum — preferably even 98% — especially if it was a time-intensive experiment to set up. However, sometimes it makes sense to use a lower confidence rate if you don’t need the test to be as stringent.

Matt Rheault, a senior software engineer at HubSpot, likes to think of statistical significance like placing a bet. What odds are you comfortable placing a bet on? Saying “I’m 80% sure this is the right design and I’m willing to bet everything on it” is similar to running an A/B test to 80% significance and then declaring a winner.

Rheault also says you’ll likely want a higher confidence threshold when testing for something that only slightly improves conversion rate. Why? Because random variance is more likely to play a bigger role.

“An example where we could feel safer lowering our confidence threshold is an experiment that will likely improve conversion rate by 10% or more, such as a redesigned hero section,” he explained.

“The takeaway here is that the more radical the change, the less scientific we need to be process-wise. The more specific the change (button color, microcopy, etc.), the more scientific we should be because the change is less likely to have a large and noticeable impact on conversion rate.”

7. Make sure you’re only running one test at a time on any campaign.

Testing more than one thing for a single campaign — even if it’s not on the same exact asset — can complicate results. For example, if you A/B test an email campaign that directs to a landing page at the same time that you’re A/B testing that landing page, how can you know which change caused the increase in leads?

During the A/B Test

Let’s cover the steps to take during your A/B test.

8. Use an A/B testing tool.

To do an A/B test on your website or in an email, you’ll need to use an A/B testing tool. If you’re a HubSpot Enterprise customer, the HubSpot software has features that let you A/B test emails (learn how here), calls-to-action (learn how here), and landing pages (learn how here).

For non-HubSpot Enterprise customers, other options include Google Analytics, which lets you A/B test up to 10 full versions of a single web page and compare their performance using a random sample of users.

9. Test both variations simultaneously.

Timing plays a significant role in your marketing campaign’s results, whether it’s time of day, day of the week, or month of the year. If you were to run Version A during one month and Version B a month later, how would you know whether the performance change was caused by the different design or the different month?

When you run A/B tests, you’ll need to run the two variations at the same time, otherwise you may be left second-guessing your results.

The only exception here is if you’re testing timing itself, like finding the optimal times for sending out emails. This is a great thing to test because depending on what your business offers and who your subscribers are, the optimal time for subscriber engagement can vary significantly by industry and target market.

10. Give the A/B test enough time to produce useful data.

Again, you’ll want to make sure that you let your test run long enough to obtain a substantial sample size. Otherwise, it’ll be hard to tell whether there was a statistically significant difference between the two variations.

How long is long enough? Depending on your company and how you execute the A/B test, getting statistically significant results could happen in hours … or days … or weeks. A big part of how long it takes to get statistically significant results is how much traffic you get — so if your business doesn’t get a lot of traffic to your website, it’ll take much longer for you to run an A/B test.
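A rough way to estimate duration is to divide the total sample you need by the traffic entering the test. The figures below are illustrative:

```python
# Rough duration estimate: required sample / daily traffic.
# Both inputs are illustrative assumptions.
needed_per_variant = 3_841      # e.g., from a power calculation
variants = 2
daily_visitors = 1_000          # visitors entering the experiment per day

days = needed_per_variant * variants / daily_visitors
print(f"Plan to run for roughly {days:.0f} days")  # ~8 days
```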

Read this blog post to learn more about sample size and timing.

11. Ask for feedback from real users.

A/B testing has a lot to do with quantitative data … but that won’t necessarily help you understand why people take certain actions over others. While you’re running your A/B test, why not collect qualitative feedback from real users?

One of the best ways to ask people for their opinions is through a survey or poll. You might add an exit survey on your site that asks visitors why they didn’t click on a certain CTA, or one on your thank-you pages that asks visitors why they clicked a button or filled out a form.

You might find, for example, that a lot of people clicked on a call-to-action leading them to an ebook, but once they saw the price, they didn’t convert. That kind of information will give you a lot of insight into why your users are behaving in certain ways.

After the A/B Test

Finally, let’s cover the steps to take after your A/B test.

12. Focus on your goal metric.

Again, although you’ll be measuring multiple metrics, keep your focus on that primary goal metric when you do your analysis.

For example, if you tested two variations of an email and chose leads as your primary metric, don’t get caught up on open rate or click-through rate. You might see a high click-through rate and poor conversion rates, in which case you might end up choosing the variation that had a lower click-through rate in the end.

13. Measure the significance of your results using our A/B testing calculator.

Now that you’ve determined which variation performs the best, it’s time to determine whether your results are statistically significant. In other words, are they enough to justify a change?

To find out, you’ll need to conduct a test of statistical significance. You could do that manually … or you could just plug in the results from your experiment to our free A/B testing calculator.

For each variation you tested, you’ll be prompted to input the total number of tries, like emails sent or impressions seen. Then, enter the number of goals completed — generally you’ll look at clicks, but this could also be other types of conversions.


The calculator will spit out the confidence level your data produces for the winning variation. Then, measure that number against the value you chose to determine statistical significance.
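If you’d rather see the math than trust a black box, the standard two-proportion z-test behind this kind of calculator looks roughly like the sketch below. This is a generic illustration, not HubSpot’s implementation, and the email numbers are invented:

```python
from math import sqrt
from statistics import NormalDist

def ab_confidence(tries_a: int, goals_a: int,
                  tries_b: int, goals_b: int) -> float:
    """Confidence (as a %) that two variations truly differ,
    via a two-sided two-proportion z-test, using the same inputs
    a typical calculator asks for: tries and goals per variation."""
    p_a, p_b = goals_a / tries_a, goals_b / tries_b
    p_pool = (goals_a + goals_b) / (tries_a + tries_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / tries_a + 1 / tries_b))
    z = abs(p_a - p_b) / se
    return 100 * (2 * NormalDist().cdf(z) - 1)

# 10,000 emails per variation; A got 200 clicks, B got 260:
print(f"{ab_confidence(10_000, 200, 10_000, 260):.1f}% confidence")  # ~99.5%
```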

14. Take action based on your results.

If one variation is statistically better than the other, you have a winner. Complete your test by disabling the losing variation in your A/B testing tool.

If neither variation is statistically better, you’ve just learned that the variable you tested didn’t impact results, and you’ll have to mark the test as inconclusive. In this case, stick with the original variation or run another test. You can use the data from the inconclusive test to inform your next iteration.

While A/B tests help you impact results on a case-by-case basis, you can also take the lessons you learn from each test and apply them to future efforts.

For example, if you’ve conducted A/B tests in your email marketing and have repeatedly found that using numbers in email subject lines generates better clickthrough rates, you might want to consider using that tactic in more of your emails.

15. Plan your next A/B test.

The A/B test you just finished may have helped you discover a new way to make your marketing content more effective — but don’t stop there. There’s always room for more optimization.

You can even try conducting an A/B test on another feature of the same web page or email you just did a test on. For example, if you just tested a headline on a landing page, why not do a new test on body copy? Or a color scheme? Or images? Always keep an eye out for opportunities to increase conversion rates and leads.

You can use HubSpot’s A/B Test Tracking Kit to plan and organize your experiments.


How to Read A/B Testing Results

As a marketer, you know the value of automation. Given this, you likely use software that handles the A/B test calculations for you — a huge help. But, after the calculations are done, you need to know how to read your results. Let’s go over how.

1. Check your goal metric.

The first step in reading your A/B test results is looking at your goal metric, which is usually conversion rate. After you’ve plugged your results into your A/B testing calculator, you’ll get a result for each version you’re testing, along with an indication of whether the difference between them is statistically significant.

2. Compare your conversion rates.

By looking at your results, you’ll likely be able to tell if one of your variations performed better than the other. However, the true test of success is whether the results you have are statistically significant. This means that one variation performed better than the other at a significant level because, say, the CTA text was more compelling.

Say, for example, Variation A had a 16.04% conversion rate and Variation B had a 16.02% conversion rate, and your confidence level for statistical significance is 95%. Variation A has the higher conversion rate, but the difference is not statistically significant, meaning choosing Variation A won’t significantly improve your overall conversion rate.
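Running that example through the same two-proportion z-test shows why; the 10,000-visitor sample size here is a hypothetical assumption:

```python
from math import sqrt
from statistics import NormalDist

# 16.04% vs. 16.02% conversion, assuming (hypothetically)
# 10,000 visitors per variation.
tries = 10_000
goals_a, goals_b = 1_604, 1_602

p_pool = (goals_a + goals_b) / (2 * tries)
se = sqrt(p_pool * (1 - p_pool) * (2 / tries))
z = abs(goals_a - goals_b) / tries / se
confidence = 100 * (2 * NormalDist().cdf(z) - 1)
print(f"{confidence:.1f}% confidence")  # ~3%, nowhere near the 95% bar
```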

3. Segment your audiences for further insights.

Regardless of significance, it’s valuable to break down your results by audience segment to understand how each key area responded to your variations. Common variables for segmenting audiences are:

  • Visitor type, or which version performed best for new visitors versus repeat visitors.
  • Device type, or which version performed best on mobile versus desktop.
  • Traffic source, or which version performed best based on where traffic to your two variations originated.
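Here’s a minimal sketch of what that breakdown can look like in practice; the visitor rows are invented:

```python
import pandas as pd

# Hypothetical per-visitor results: variant seen, segment, converted?
df = pd.DataFrame({
    "variant":   ["A", "B", "A", "B", "A", "B", "A", "B"],
    "device":    ["mobile", "mobile", "desktop", "desktop",
                  "mobile", "desktop", "desktop", "mobile"],
    "converted": [0, 1, 1, 1, 0, 0, 1, 1],
})

# Conversion rate per variant, broken down by device type.
print(df.groupby(["device", "variant"])["converted"].mean().unstack())
```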

Let’s go over some examples of A/B experiments you could run for your business.

A/B Testing Examples

We’ve discussed how A/B tests are used in marketing and how to conduct one — but how do they actually look in practice?

As you might guess, we run many A/B tests to increase engagement and drive conversions across our platform. Here are five examples of A/B tests to inspire your own experiments.

1. Site Search

Site search bars help users quickly find what they’re after on a particular website. HubSpot found from previous analysis that visitors who interacted with its site search bar were more likely to convert on a blog post. So, we ran an A/B test in an attempt to increase engagement with the search bar.

In this test, search bar functionality was the independent variable and views on the content offer thank you page was the dependent variable. We used one control condition and three challenger conditions in the experiment.

In the control condition (variant A), the search bar remained unchanged.


In variant B, the search bar was made larger and more visually prominent, and the placeholder text was set to “search by topic.”


Variant C appeared identical to variant B, but only searched the HubSpot Blog rather than the entire website.

In variant D, the search bar was made larger, but the placeholder text was set to “search the blog.” This variant also searched only the HubSpot Blog.


We found variant D to be the most effective: It increased conversions by 3.4% over the control and increased the percentage of users who used the search bar by 6.5%.

2. Mobile CTAs

HubSpot uses several CTAs for content offers in our blog posts, including ones in the body of posts as well as at the bottom of the page. We test these CTAs extensively to optimize their performance.

For our mobile users, we ran an A/B test to see which type of bottom-of-page CTA converted best. For our independent variable, we altered the design of the CTA bar. Specifically, we used one control and three challengers in our test. For our dependent variables, we used pageviews on the CTA thank you page and CTA clicks.

The control condition included our normal placement of CTAs at the bottom of posts. In variant B, the CTA had no close or minimize option.

In variant C, mobile readers could close the CTA by tapping an X icon. Once it was closed out, it wouldn’t reappear.


In variant D, we included an option to minimize the CTA with an up/down caret.


Our tests found all variants to be successful. Variant D was the most successful, with a 14.6% increase in conversions over the control. This was followed by variant C with an 11.4% increase and variant B with a 7.9% increase.

3. Author CTAs

In another CTA experiment, HubSpot tested whether adding the word “free” and other descriptive language to author CTAs at the top of blog posts would increase content leads. Past research suggested that using “free” in CTA text would drive more conversions and that text specifying the type of content offered would be helpful for SEO and accessibility.

In the test, the independent variable was CTA text and the main dependent variable was conversion rate on the content offer form.

In the control condition, author CTA text was unchanged (see the orange button in the image below).


In variant B, the word “free” was added to the CTA text.


In variant C, descriptive wording was added to the CTA text in addition to “free.”


Interestingly, variant B saw a loss in form submissions, down by 14% compared to the control. This was unexpected, since including “free” in content offer text is widely considered a best practice.

Meanwhile, form submissions in variant C outperformed the control by 4%. It was concluded that adding descriptive text to the author CTA helped users understand the offer and thus made them more likely to download.

4. Blog Table of Contents

To help users better navigate the blog, HubSpot tested a new Table of Contents (TOC) module. The goal was to improve user experience by presenting readers with their desired content more quickly. We also tested whether adding a CTA to this TOC module would increase conversions.

The independent variable of this A/B test was the inclusion and type of TOC module in blog posts, and the dependent variables were conversion rate on content offer form submissions and clicks on the CTA inside the TOC module.

The control condition did not include the new TOC module — control posts either had no table of contents or a simple bulleted list of anchor links near the top of the article.


In variant B, the new TOC module was added to blog posts. This module was sticky, meaning it remained onscreen as users scrolled down the page. Variant B also included a content offer CTA at the bottom of the module.


Variant C included an identical module to variant B but with the CTA removed.


Neither variant B nor variant C increased the conversion rate on blog posts. The control condition outperformed variant B by 7% and performed equally with variant C. Few users interacted with the new TOC module or the CTA inside it.

5. Review Notifications

To determine the best way of gathering customer reviews, we ran a split test of email notifications versus in-app notifications. Here, the independent variable was the type of notification and the dependent variable was the percentage of those who left a review out of all those who opened the notification.

In the control, HubSpot sent a plain text email notification asking users to leave a review. In variant B, HubSpot sent an email with a certificate image including the user’s name.


For variant C, HubSpot sent users an in-app notification.


Ultimately, both emails performed similarly and outperformed the in-app notifications. About 25% of users who opened an email left a review, versus 10.3% of those who opened an in-app notification. Emails were also opened more often.

Start A/B Testing Today

A/B testing allows you to get to the truth of what content and marketing your audience wants to see. Learn how to best carry out some of the steps above using the free e-book below.

Editor’s note: This post was originally published in May 2016 and has been updated for comprehensiveness.






A deeper dive into data, personalization and Copilots


Salesforce launched a collection of new, generative AI-related products at Connections in Chicago this week. They included new Einstein Copilots for marketers and merchants and Einstein Personalization.

To better understand not only the potential impact of the new products but also the evolving Salesforce architecture, we sat down with Bobby Jania, CMO, Marketing Cloud.

Dig deeper: Salesforce piles on the Einstein Copilots

Salesforce’s evolving architecture

It’s hard to deny that Salesforce likes coming up with new names for platforms and products (what happened to Customer 360?) and this can sometimes make the observer wonder if something is brand new, or old but with a brand new name. In particular, what exactly is Einstein 1 and how is it related to Salesforce Data Cloud?

“Data Cloud is built on the Einstein 1 platform,” Jania explained. “The Einstein 1 platform is our entire Salesforce platform and that includes products like Sales Cloud, Service Cloud — that it includes the original idea of Salesforce not just being in the cloud, but being multi-tenancy.”

Data Cloud — not an acquisition, of course — was built natively on that platform. It was the first product built on Hyperforce, Salesforce’s new cloud infrastructure architecture. “Since Data Cloud was on what we now call the Einstein 1 platform from Day One, it has always natively connected to, and been able to read anything in Sales Cloud, Service Cloud [and so on]. On top of that, we can now bring in, not only structured but unstructured data.”

That’s a significant progression from the position, several years ago, when Salesforce had stitched together a platform around various acquisitions (ExactTarget, for example) that didn’t necessarily talk to each other.

“At times, what we would do is have a kind of behind-the-scenes flow where data from one product could be moved into another product,” said Jania, “but in many of those cases the data would then be in both, whereas now the data is in Data Cloud. Tableau will run natively off Data Cloud; Commerce Cloud, Service Cloud, Marketing Cloud — they’re all going to the same operational customer profile.” They’re not copying the data from Data Cloud, Jania confirmed.

Another thing to know is that it’s possible for Salesforce customers to import their own datasets into Data Cloud. “We wanted to create a federated data model,” said Jania. “If you’re using Snowflake, for example, we more or less virtually sit on your data lake. The value we add is that we will look at all your data and help you form these operational customer profiles.”

Let’s learn more about Einstein Copilot

“Copilot means that I have an assistant with me in the tool where I need to be working that contextually knows what I am trying to do and helps me at every step of the process,” Jania said.

For marketers, this might begin with a campaign brief developed with Copilot’s assistance, the identification of an audience based on the brief, and then the development of email or other content. “What’s really cool is the idea of Einstein Studio where our customers will create actions [for Copilot] that we hadn’t even thought about.”

Here’s a key insight (back to nomenclature). We reported on Copilot for marketers, Copilot for merchants, Copilot for shoppers. It turns out, however, that there is just one Copilot, Einstein Copilot, and these are use cases. “There’s just one Copilot, we just add these for a little clarity; we’re going to talk about marketing use cases, about shoppers’ use cases. These are actions for the marketing use cases we built out of the box; you can build your own.”

It’s surely going to take a little time for marketers to learn to work easily with Copilot. “There’s always time for adoption,” Jania agreed. “What is directly connected with this is, this is my ninth Connections and this one has the most hands-on training that I’ve seen since 2014 — and a lot of that is getting people using Data Cloud, using these tools rather than just being given a demo.”

What’s new about Einstein Personalization

Salesforce Einstein has been around since 2016 and many of the use cases seem to have involved personalization in various forms. What’s new?

“Einstein Personalization is a real-time decision engine and it’s going to choose next-best-action, next-best-offer. What is new is that it’s a service now that runs natively on top of Data Cloud.” A lot of real-time decision engines need their own set of data, which might actually be just a subset of your data. “Einstein Personalization is going to look holistically at a customer and recommend a next-best-action that could be natively surfaced in Service Cloud, Sales Cloud or Marketing Cloud.”

Finally, trust

One feature of the presentations at Connections was the reassurance that, although public LLMs like ChatGPT could be selected for application to customer data, none of that data would be retained by the LLMs. Is this just a matter of written agreements? No, not just that, said Jania.

“In the Einstein Trust Layer, all of the data, when it connects to an LLM, runs through our gateway. If there was a prompt that had personally identifiable information — a credit card number, an email address — at a minimum, all that is stripped out. The LLMs do not store the output; we store the output for auditing back in Salesforce. Any output that comes back through our gateway is logged in our system; it runs through a toxicity model; and only at the end do we put PII data back into the answer. There are real pieces beyond a handshake that this data is safe.”
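As a conceptual illustration of that mask-and-restore flow (not Salesforce’s implementation; the patterns and placeholders below are invented), a sketch might look like this:

```python
import re

# Conceptual sketch of PII masking around an LLM call, as described
# above. Not Salesforce's implementation; patterns are illustrative.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt: str):
    """Replace PII with placeholders; keep originals for later."""
    restore = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            prompt = prompt.replace(match, token, 1)
            restore[token] = match
    return prompt, restore

def unmask(text: str, restore: dict) -> str:
    """Re-insert the original PII into the returned answer."""
    for token, value in restore.items():
        text = text.replace(token, value)
    return text

masked, restore = mask_pii(
    "Email jane@example.com about card 4111 1111 1111 1111")
print(masked)  # Email <EMAIL_0> about card <CARD_0>
# ...masked prompt goes to the LLM; its answer comes back...
print(unmask("Confirmation sent to <EMAIL_0>.", restore))
```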



Why The Sales Team Hates Your Leads (And How To Fix It)


You ask the head of marketing how the team is doing and get a giant thumbs up. 👍

“Our MQLs are up!”

“Website conversion rates are at an all-time high!”

“Email click rates have never been this good!”

But when you ask the head of sales the same question, you get the response that echoes across sales desks worldwide — the leads from marketing suck. 

If you’re in this boat, you’re not alone. The issue of “leads from marketing suck” is a common situation in most organizations. In a HubSpot survey, only 9.1% of salespeople said leads they received from marketing were of very high quality.

Why do sales teams hate marketing-generated leads? And how can marketers help their sales peers fall in love with their leads? 

Let’s dive into the answers to these questions. Then, I’ll give you my secret lead gen kung-fu to ensure your sales team loves their marketing leads. 

Marketers Must Take Ownership

“I’ve hit the lead goal. If sales can’t close them, it’s their problem.”

How many times have you heard one of your marketers say something like this? When your teams are heavily siloed, it’s not hard to see how they get to this mindset — after all, if your marketing metrics look strong, they’ve done their part, right?

Not necessarily. 

The job of a marketer is not to drive traffic or even leads. The job of the marketer is to create messaging and offers that lead to revenue. Marketing is not a 100-meter sprint — it’s a relay race. The marketing team runs the first leg and hands the baton to sales to sprint to the finish.



To make leads valuable beyond the vanity metric of watching your MQLs tick up, you need to segment and nurture them. Screen the leads to see if they meet the parameters of your ideal customer profile. If yes, nurture them to find out how close their intent is to a sale. Only then should you pass the leads to sales. 

Lead Quality Control is a Bitter Pill that Works

Tighter quality control might reduce your overall MQLs. Still, it will ensure only the relevant leads go to sales, which is a win for your team and your organization.

This will require a mindset shift for your marketing team: instead of living and dying by the sheer number of MQLs, you need to create a collaborative culture between sales and marketing. Reinforce that “strong” marketing metrics that send poor leads to sales aren’t really strong at all.

When you foster this culture of collaboration and accountability, it will be easier for the marketing team to receive feedback from sales about lead quality without getting defensive. 

Remember, the sales team is only holding marketing accountable so the entire organization can achieve the right results. It’s not sales vs marketing — it’s sales and marketing working together to get a great result. Nothing more, nothing less. 

We’ve identified the problem and where we need to go. So, how do you get there?

Fix #1: Focus On High ROI Marketing Activities First

What is more valuable to you:

  • One more blog post for a few more views? 
  • One great review that prospective buyers strongly relate to?

Hopefully, you’ll choose the latter. After all, talking to customers and getting a solid testimonial can help your sales team close leads today.  Current customers talking about their previous issues, the other solutions they tried, why they chose you, and the results you helped them achieve is marketing gold.

On the other hand, even the best blog content will take months to gain enough traction to impact your revenue.

Still, many marketers who say they want to prioritize customer reviews focus all their efforts on blog content and other “top of the funnel” (Awareness, Acquisition, and Activation) efforts. 

The bottom half of the growth marketing funnel (Retention, Reputation, and Revenue) often gets ignored, even though it’s where you’ll find some of the highest ROI activities.


Most marketers know retaining a customer is easier than acquiring a new one. But knowing this and working with sales on retention and account expansion are two different things. 

When you start focusing on retention, upselling, and expansion, your entire organization will feel it, from sales to customer success. These happier customers will increase your average account value and drive awareness through strong word of mouth, giving you one heck of a win/win.

Winning the Retention, Reputation, and Referral game also helps feed your Awareness, Acquisition, and Activation activities:

  • Increasing customer retention means more dollars stay within your organization to help achieve revenue goals and fund lead gen initiatives.
  • A fully functioning referral system lowers your customer acquisition cost (CAC) because these leads are already warm coming in the door.
  • Case studies and reviews are powerful marketing assets for lead gen and nurture activities as they demonstrate how you’ve solved identical issues for other companies.

Remember that the bottom half of your marketing and sales funnel is just as important as the top half. After all, there’s no point pouring leads into a leaky funnel. Instead, you want to build a frictionless, powerful growth engine that brings in the right leads, nurtures them into customers, and then delights those customers to the point that they can’t help but rave about you.

So, build a strong foundation and start from the bottom up. You’ll find a better return on your investment. 

Fix #2: Join Sales Calls to Better Understand Your Target Audience

You can’t market well what you don’t know how to sell.

Your sales team speaks directly to customers, understands their pain points, and knows the language they use to talk about those pains. Your marketing team needs this information to craft the perfect marketing messaging your target audience will identify with.

When marketers join sales calls or speak to existing customers, they get firsthand introductions to these pain points. Often, marketers realize that customers’ pain points and reservations are very different from those they address in their messaging. 

Once you understand your ideal customers’ objections, anxieties, and pressing questions, you can create content and messaging to remove some of these reservations before the sales call. This effort removes a barrier for your sales team, resulting in more SQLs.

Fix #3: Create Collateral That Closes Deals

One-pagers, landing pages, PDFs, decks — sales collateral could be anything that helps increase the chance of closing a deal. Let me share an example from Lean Labs. 

Our webinar page has a CTA form that allows visitors to talk to our team. Instead of a simple “get in touch” form, we created a drop-down segmentation based on the user’s challenge and need. This step helps the reader feel seen, gives them hope that they’ll receive real value from the interaction, and provides unique content to users based on their selection.


So, if they select “I need help with crushing it on HubSpot,” they’ll get a landing page with HubSpot-specific content (including a video) and a meeting scheduler.

Speaking directly to your audience’s needs and pain points through these steps dramatically increases the chances of them booking a call. Why? Because instead of trusting that a generic “expert” will be able to help them with their highly specific problem, they can see through our content and our form design that Lean Labs can solve their most pressing pain point. 

Fix #4: Focus On Reviews and Create an Impact Loop

A lot of people think good marketing is expensive. You know what’s even more expensive? Bad marketing.

To get the best ROI on your marketing efforts, you need to create a marketing machine that pays for itself. When you create this machine, you need to think about two loops: the growth loop and the impact loop.

  • Growth loop — Awareness ➡ Acquisition ➡ Activation ➡ Revenue ➡ Awareness: This is where most marketers start. 
  • Impact loop — Results ➡ Reviews ➡ Retention ➡ Referrals ➡ Results: This is where great marketers start. 

Most marketers start with their growth loop and then hope that traction feeds into their impact loop. However, the reality is that starting with your impact loop is far more likely to set your marketing engine up for success.

Let me share a client story to show you what this looks like in real life.

Client Story: 4X Website Leads In A Single Quarter

We partnered with a health tech startup looking to grow their website leads. One way to grow website leads is to boost organic traffic, of course, but any organic play is going to take time. If you’re playing the SEO game alone, quadrupling conversions can take up to a year or longer.

But we did it in a single quarter. Here’s how.

We realized that the startup’s demos were converting lower than industry standards. A little more digging showed us why: our client was new enough to the market that the average person didn’t trust them enough yet to want to invest in checking out a demo. So, what did we do?

We prioritized the last part of the funnel: reputation.

We ran a 5-star reputation campaign to collect reviews. Once we had the reviews we needed, we showcased them at critical parts of the website and then made sure those same reviews were posted and shown on other third-party review platforms. 

Remember that reputation plays are vital, and they’re one of the plays startups often neglect at best and ignore at worst. What others say about your business is ten times more important than what you say about yourself.

By providing customer validation at critical points in the buyer journey, we were able to 4X the website leads in a single quarter!


So, when you talk to customers, always look for opportunities to drive review/referral conversations and use them in marketing collateral throughout the buyer journey. 

Fix #5: Launch Phantom Offers for Higher Quality Leads 

You may be reading this post thinking, okay, my lead magnets and offers might be way off the mark, but how will I get the budget to create a new one that might not even work?

It’s an age-old issue: marketing teams invest way too much time and resources into creating lead magnets that fail to generate quality leads.

One way to improve your chances of success, remain nimble, and stay aligned with your audience without breaking the bank is to create phantom offers, i.e., gauge audience interest in a lead magnet before you create it.

For example, if you want to create a “World Security Report” for Chief Security Officers, don’t do all the research and complete the report as Step One. Instead, tease the offer to your audience before you spend time making it. Put an offer on your site asking visitors to join the waitlist for this report. Then wait and see how that phantom offer converts. 

This is precisely what we did for a report by Allied Universal that ended up generating 80 conversions before its release.


The best thing about a phantom offer is that it’s a win/win scenario: 

  • Best case: You get conversions even before you create your lead magnet.
  • Worst case: You save resources by not creating a lead magnet no one wants.  

Remember, You’re On The Same Team 

We’ve talked a lot about the reasons your marketing leads might suck. However, remember that it’s not all on marketers, either. At the end of the day, marketing and sales professionals are on the same team. They are not in competition with each other. They are allies working together toward a common goal. 

Smaller companies — or anyone under $10M in net new revenue — shouldn’t even separate sales and marketing into different departments. These teams need to be so in sync with one another that your best bet is to align them into a single growth team, one cohesive front with a single goal: profitable customer acquisition.

Interested in learning more about the growth marketing mindset? Check out the Lean Labs Growth Playbook that’s helped 25+ B2B SaaS marketing teams plan, budget, and accelerate growth.


