
7 common problems that derail A/B/n email testing success


Whenever I begin working with new clients who face major problems with their email marketing, one of the first things I review is how they conduct their email testing.

A/B/n testing is the best way I know to structure effective campaigns and to measure whether a brand’s email strategies and tactics are succeeding or failing. But all too often, teams struggle to set up tests correctly and measure results accurately. That usually leads to ineffective email experiments and poor results.

If your testing program is unreliable, you won’t know whether your chosen strategies and tactics are working or failing. Don’t blame the email channel itself if your email efforts don’t deliver the results you need. Instead, look at how you test and measure results.




7 common testing problems and how to fix them

These crop up most often in my work with clients. Solutions to some of these challenges will require a total mindset change. For others, just learning the proper way to set up tests can resolve many of your current issues.

That’s the good part about testing. For every problem, there’s a way to correct it. Every time you solve a problem via testing, you take another step toward putting your email program on the right path.

1. Testing without a hypothesis

Many email marketers pick up the rudiments of testing by using the tools their ESPs give them, mainly for setting up basic A/B split tests on simple features such as subject lines. 

However, this ad hoc, one-off approach is like learning to drive a car without knowing how to read a map. You can turn the car on just fine. But you need map skills to plan out a journey that will get you where you want to go with the fewest traffic jams and detours.

Yes, you could let Google Maps do the planning work for you. But all the data – what you provide and what it pulls from other sources – must line up right. If you type in the wrong destination or drive into a dead zone, you could end up miles from where you want to be.

That’s what happens to your email program when you either don’t test or test incorrectly. Your hypothesis is your road map for testing. It lays out what you think might happen and guides your choices for variables, testing segments, success metrics and even how to use the results.

2. Using the wrong conversion calculation

This relates to the customer's journey and the test's objective.

When you do a standard A/B split test on a website landing page, you often use "transactions/web sessions" as your conversion calculation to see how well the page is converting. This makes sense because you don't know the path your customers took to get to the page, so you focus on this particular part of the journey and ignore everything that happened before it.

In email, we do know the path our customers took to get from the email to the landing page. We put them on it, and we want to optimize it. We want to understand how well our email converted, so we need to use “transactions/emails delivered” to calculate our conversion. This takes the whole email journey into account and doesn’t just look at how well the landing page converted.
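To make the difference concrete, here's a minimal sketch in Python with purely hypothetical numbers. Note how the landing-page calculation flatters the result compared with the email-journey calculation:

```python
# Hypothetical campaign numbers, for illustration only
emails_delivered = 50_000
web_sessions = 4_200   # sessions the email drove to the landing page
transactions = 315

# How well the landing page converted (ignores the email journey)
landing_page_rate = transactions / web_sessions

# How well the email converted (the whole email journey)
email_conversion_rate = transactions / emails_delivered

print(f"transactions/web sessions:     {landing_page_rate:.2%}")      # 7.50%
print(f"transactions/emails delivered: {email_conversion_rate:.2%}")  # 0.63%
```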

As you can see in these two client examples, the conversions followed through on what the opens and clicks signified. Marketers often use the "transactions/web sessions" calculation for vanity because it yields a higher percentage. However, it means you could be optimizing for the wrong result.

[Figure: Testing segments via business-as-usual campaigns]

[Figure: Testing automated programs]

3. Measuring success with the wrong metrics

A workable testing plan needs relevant metrics to measure success accurately. The wrong metrics can inflate or deflate your results. This, in turn, can mislead you into optimizing for the losing variant instead of the winner.  

The open rate, for example, has been a popular success metric ever since we learned how to use it back in the early days of HTML email. But it's a flawed and unreliable metric, especially now that Apple's Mail Privacy Protection feature masks a campaign's true open rate. Even if opens were accurate every time, the open rate is still not necessarily the right metric.

Clicks, for example, are a more accurate engagement measure, but they don’t reveal how much money your campaign generated. If your goal is only to get clicks, go ahead and use the click rate. But if you’re rewarded on campaign revenue, you need to use a revenue metric such as number of purchases or basket value.

4. Testing without statistical significance

If your testing results are statistically significant, it means that the differences between testing groups (the control group, which was unchanged, and the group that received a variable, such as a different call to action or subject line) didn’t happen because of chance, error or uncounted events.

Having a small number of results can throw off significance testing, either because you could test only a fraction of your population or because the test didn’t run long enough to generate enough results. That’s why tests should run as long as possible (for automations) and reach a statistically significant sample size (for campaigns).

Most testing uses a 5% significance level. This means that when your test reports a significant result, there is no more than a 5% probability that the observed difference between variants arose by chance.

Results that aren't statistically significant can lead you to draw the wrong conclusions and misinterpret both the test results and your campaign's outcomes. Achieving 95% statistical significance still leaves a 5% risk of concluding that a difference exists when there is no actual difference.
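If your email platform doesn't report significance, you can check it yourself. Here's a minimal sketch using the statsmodels library's two-proportion z-test; the conversion counts are invented for illustration:

```python
# A minimal significance check for an A/B email split test.
# Conversion counts are hypothetical, for illustration only.
from statsmodels.stats.proportion import proportions_ztest

conversions = [310, 265]        # variant A, control B
delivered = [25_000, 25_000]    # emails delivered per group

z_stat, p_value = proportions_ztest(conversions, delivered)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:   # 5% significance level
    print("Significant at 95%: the variant likely made a real difference.")
else:
    print("Not significant: run the test longer or use a larger sample.")
```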




5. Stopping with one test

The philosopher Heraclitus said, “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”

The same is true for your email campaigns. Your subscriber base is always gaining new subscribers and losing old ones, and customers don’t react the same way every time to every campaign. A campaign that worked well one time might fall flat the next.

If you run only one test and then apply the results to all future campaigns, you’ll miss these subtle but important changes. That’s why you must bake testing into every campaign, testing everything more than once to exclude anomalies.

This will give you trends you can consult to learn general truths about your audience and indicate important shifts in attitudes and behavior. Use these to fine-tune or overhaul your campaigns’ approaches.

6. Testing only one element in a campaign

Subject-line testing is ubiquitous, mainly because many email platforms build A/B subject-line split testing into their tools. That's a great start, but it gives you only part of the picture and is often misleading. A winning subject line that's measured on the open rate doesn't always predict a goal-achieving campaign.

That’s one reason why I developed the practice called Holistic Testing, which moves beyond single-channel, one-off, single-variable testing.

Here's an example of a motivation-based hypothesis you could use as part of holistic testing. It names the appropriate metric (conversions) and incorporates copy-related factors such as subject lines, headings, copy blocks, calls to action and even landing pages:

“Loss aversion copy will drive more conversions than benefit-led copy because numerous studies have shown that people hate losing out more than they enjoy benefiting.”

As long as all the variable changes support the hypothesis, using multiple variables makes the test more robust. The difference between this and a multivariate test is that all the variables support the hypothesis, so when the winner is announced, we can apply what we've learned.
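As an illustration of how one hypothesis can drive several variables at once, here's a minimal sketch. The copy, names and the 50/50 hash split are all assumptions for the example, not a prescribed setup:

```python
# A minimal sketch of an A/B split where every variable supports one hypothesis
# ("loss-aversion copy beats benefit-led copy"). All names are illustrative.
import hashlib

VARIANTS = {
    "A_loss_aversion": {
        "subject": "Don't miss out: your 20% discount expires tonight",
        "heading": "Last chance before your savings disappear",
        "cta":     "Claim it before it's gone",
    },
    "B_benefit_led": {
        "subject": "Enjoy 20% off your next order",
        "heading": "Treat yourself to extra savings",
        "cta":     "Get my discount",
    },
}

def assign_variant(subscriber_id: str) -> str:
    """Deterministically split subscribers 50/50 so reruns are reproducible."""
    digest = int(hashlib.sha256(subscriber_id.encode()).hexdigest(), 16)
    return "A_loss_aversion" if digest % 2 == 0 else "B_benefit_led"

print(assign_variant("subscriber-42"))
```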

7. Not using what you learned to make email better

We don't test to see what happens in a single campaign or to satisfy curiosity. We test to find out how our programs are working and what will improve them – now and over the long term. We test to determine whether we are spending money on things that help us achieve our goals.

We test to discover trends and shifts in our audience that we can apply across other marketing channels – because our email audience is our customer population in a microcosm. Don’t let your test results languish in your email platform or in a team notebook.

An action plan for testing to refine an email campaign would look like this (a minimal code sketch of the plan follows the list):

1. Develop a hypothesis that states what you expect to see, why you expect it and how you will measure success.

2. Report results accurately following the established testing plan.

3. Choose relevant metrics that measure outcomes (conversions, revenue, downloads, registrations, completed processes and the like).

4. Set a time length for the test (if an automation) or the number of tests to be performed (if a campaign) to generate enough results to pass significance testing.

5. Analyze results, write the conclusion and recommend future campaigns.

6. Put results into action – both within your email marketing program and other channels where appropriate.

7. Refine and repeat the testing process to improve and continue the cycle of testing, analysis and implementation.
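If you track tests in code or a shared repository, the plan above might be captured as a structured record like this sketch; every field name here is illustrative:

```python
# A minimal, illustrative template for recording a test plan and its outcome.
from dataclasses import dataclass, field

@dataclass
class EmailTestPlan:
    hypothesis: str             # what you expect to see, why, and how you'll measure it
    variables: list[str]        # elements changed to support the hypothesis
    success_metric: str         # an outcome metric, not a vanity metric
    min_sample_per_group: int   # size needed to pass significance testing
    results: dict[str, float] = field(default_factory=dict)
    conclusion: str = ""        # analysis and recommendation for future campaigns

plan = EmailTestPlan(
    hypothesis="Loss-aversion copy will drive more conversions than benefit-led copy",
    variables=["subject line", "heading", "copy blocks", "call to action", "landing page"],
    success_metric="transactions/emails delivered",
    min_sample_per_group=25_000,
)
```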


Testing is more important than ever. Are you ready?

The COVID-19 pandemic upended email marketers' knowledge of our customers. In 2020, we needed testing to detect what customers wanted, what had changed and what had stayed the same in their responses to our campaigns.

The pandemic is receding in many areas but threatening to rise again in others. Testing will help us stay ahead of new changes and put those insights to work right away. That keeps our email programs relevant and valuable to customers and raises email's profile as a reliable tool to help our companies achieve success.

I mentioned earlier that your email database is a microcosm of your customer base. Accurate testing results can uncover shifts in customer thinking and motivation that you can use to test and update your social media, your website, SMS marketing and even offline in direct marketing.

I can’t think of any other tool in the marketing kit that’s more versatile, cost-effective and adaptable than email. Accurate and up-to-date testing keeps this old reliable tool shiny and new.


Opinions expressed in this article are those of the guest author and not necessarily MarTech.


About The Author

Kath Pay is CEO at Holistic Email Marketing and the author of the award-winning Amazon #1 best-seller “Holistic Email Marketing: A practical philosophy to revolutionise your business and delight your customers.”




YouTube Ad Specs, Sizes, and Examples [2024 Update]

Introduction

With billions of users each month, YouTube is the world’s second largest search engine and top website for video content. This makes it a great place for advertising. To succeed, advertisers need to follow the correct YouTube ad specifications. These rules help your ad reach more viewers, increasing the chance of gaining new customers and boosting brand awareness.

Types of YouTube Ads

Video Ads

  • Description: These play before, during, or after a YouTube video on computers or mobile devices.
  • Types:
    • In-stream ads: Can be skippable or non-skippable.
    • Bumper ads: Non-skippable, short ads that play before, during, or after a video.

Display Ads

  • Description: These appear in different spots on YouTube and usually use text or static images.
  • Note: YouTube does not support display image ads directly on its app, but these can be targeted to YouTube.com through Google Display Network (GDN).

Companion Banners

  • Description: Appears to the right of the YouTube player on desktop.
  • Requirement: Must be purchased alongside In-stream ads, Bumper ads, or In-feed ads.

In-feed Ads

  • Description: Resemble videos with images, headlines, and text. They link to a public or unlisted YouTube video.

Outstream Ads

  • Description: Mobile-only video ads that play outside of YouTube, on websites and apps within the Google video partner network.

Masthead Ads

  • Description: Premium, high-visibility banner ads displayed at the top of the YouTube homepage for both desktop and mobile users.

YouTube Ad Specs by Type

Skippable In-stream Video Ads

  • Placement: Before, during, or after a YouTube video.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
    • Action: 15-20 seconds

Non-skippable In-stream Video Ads

  • Description: Must be watched completely before the main video.
  • Length: 15 seconds (or 20 seconds in certain markets).
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1

Bumper Ads

  • Length: Maximum 6 seconds.
  • File Format: MP4, Quicktime, AVI, ASF, Windows Media, or MPEG.
  • Resolution:
    • Horizontal (16:9): 640 x 360px
    • Standard (4:3): 480 x 360px

In-feed Ads

  • Description: Show alongside YouTube content, like search results or the Home feed.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
  • Headline/Description:
    • Headline: Up to 2 lines, 40 characters per line
    • Description: Up to 2 lines, 35 characters per line

Display Ads

  • Description: Static images or animated media that appear on YouTube next to video suggestions, in search results, or on the homepage.
  • Image Size: 300×60 pixels.
  • File Type: GIF, JPG, PNG.
  • File Size: Max 150KB.
  • Max Animation Length: 30 seconds.

Outstream Ads

  • Description: Mobile-only video ads that appear on websites and apps within the Google video partner network, not on YouTube itself.
  • Logo Specs:
    • Square: 1:1 (200 x 200px).
    • File Type: JPG, GIF, PNG.
    • Max Size: 200KB.

Masthead Ads

  • Description: High-visibility ads at the top of the YouTube homepage.
  • Resolution: 1920 x 1080 or higher.
  • File Type: JPG or PNG (without transparency).
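To reduce rejected creatives, you can encode specs like these in a pre-flight check. Here's a minimal sketch for the display-ad spec above; the function and its inputs are illustrative, not part of any YouTube or Google API:

```python
# A minimal pre-flight check against the display-ad spec listed above.
# validate_display_ad and its inputs are illustrative, not a Google API.
import os

DISPLAY_AD_SPEC = {
    "size_px": (300, 60),
    "file_types": {".gif", ".jpg", ".png"},
    "max_bytes": 150 * 1024,   # 150KB
}

def validate_display_ad(path: str, width: int, height: int) -> list[str]:
    """Return a list of spec violations (an empty list means the creative passes)."""
    errors = []
    if (width, height) != DISPLAY_AD_SPEC["size_px"]:
        errors.append(f"Size must be 300x60px, got {width}x{height}")
    if os.path.splitext(path)[1].lower() not in DISPLAY_AD_SPEC["file_types"]:
        errors.append("File type must be GIF, JPG, or PNG")
    if os.path.getsize(path) > DISPLAY_AD_SPEC["max_bytes"]:
        errors.append("File exceeds the 150KB limit")
    return errors
```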

Conclusion

YouTube offers a variety of ad formats to reach audiences effectively in 2024. Whether you want to build brand awareness, drive conversions, or target specific demographics, YouTube provides a dynamic platform for your advertising needs. Always follow Google’s advertising policies and the technical ad specs to ensure your ads perform their best. Ready to start using YouTube ads? Contact us today to get started!



A deeper dive into data, personalization and Copilots

Salesforce launched a collection of new, generative AI-related products at Connections in Chicago this week. They included new Einstein Copilots for marketers and merchants and Einstein Personalization.

To better understand not only the potential impact of the new products but also the evolving Salesforce architecture, we sat down with Bobby Jania, CMO of Marketing Cloud.

Dig deeper: Salesforce piles on the Einstein Copilots

Salesforce’s evolving architecture

It’s hard to deny that Salesforce likes coming up with new names for platforms and products (what happened to Customer 360?) and this can sometimes make the observer wonder if something is brand new, or old but with a brand new name. In particular, what exactly is Einstein 1 and how is it related to Salesforce Data Cloud?

“Data Cloud is built on the Einstein 1 platform,” Jania explained. “The Einstein 1 platform is our entire Salesforce platform and that includes products like Sales Cloud, Service Cloud — that it includes the original idea of Salesforce not just being in the cloud, but being multi-tenancy.”

Data Cloud — not an acquisition, of course — was built natively on that platform. It was the first product built on Hyperforce, Salesforce’s new cloud infrastructure architecture. “Since Data Cloud was on what we now call the Einstein 1 platform from Day One, it has always natively connected to, and been able to read anything in Sales Cloud, Service Cloud [and so on]. On top of that, we can now bring in, not only structured but unstructured data.”

That’s a significant progression from the position, several years ago, when Salesforce had stitched together a platform around various acquisitions (ExactTarget, for example) that didn’t necessarily talk to each other.

“At times, what we would do is have a kind of behind-the-scenes flow where data from one product could be moved into another product,” said Jania, “but in many of those cases the data would then be in both, whereas now the data is in Data Cloud. Tableau will run natively off Data Cloud; Commerce Cloud, Service Cloud, Marketing Cloud — they’re all going to the same operational customer profile.” They’re not copying the data from Data Cloud, Jania confirmed.

Another thing to know is that it's possible for Salesforce customers to import their own datasets into Data Cloud. "We wanted to create a federated data model," said Jania. "If you're using Snowflake, for example, we more or less virtually sit on your data lake. The value we add is that we will look at all your data and help you form these operational customer profiles."

Let’s learn more about Einstein Copilot

“Copilot means that I have an assistant with me in the tool where I need to be working that contextually knows what I am trying to do and helps me at every step of the process,” Jania said.

For marketers, this might begin with a campaign brief developed with Copilot’s assistance, the identification of an audience based on the brief, and then the development of email or other content. “What’s really cool is the idea of Einstein Studio where our customers will create actions [for Copilot] that we hadn’t even thought about.”

Here's a key insight (back to nomenclature). We reported on Copilot for marketers, Copilot for merchants, Copilot for shoppers. It turns out, however, that there is just one Copilot, Einstein Copilot, and these are use cases. "There's just one Copilot, we just add these for a little clarity; we're going to talk about marketing use cases, about shoppers' use cases. These are actions for the marketing use cases we built out of the box; you can build your own."

It’s surely going to take a little time for marketers to learn to work easily with Copilot. “There’s always time for adoption,” Jania agreed. “What is directly connected with this is, this is my ninth Connections and this one has the most hands-on training that I’ve seen since 2014 — and a lot of that is getting people using Data Cloud, using these tools rather than just being given a demo.”

What’s new about Einstein Personalization

Salesforce Einstein has been around since 2016 and many of the use cases seem to have involved personalization in various forms. What’s new?

“Einstein Personalization is a real-time decision engine and it’s going to choose next-best-action, next-best-offer. What is new is that it’s a service now that runs natively on top of Data Cloud.” A lot of real-time decision engines need their own set of data that might actually be a subset of data. “Einstein Personalization is going to look holistically at a customer and recommend a next-best-action that could be natively surfaced in Service Cloud, Sales Cloud or Marketing Cloud.”

Finally, trust

One feature of the presentations at Connections was the reassurance that, although public LLMs like ChatGPT could be selected for application to customer data, none of that data would be retained by the LLMs. Is this just a matter of written agreements? No, not just that, said Jania.

"In the Einstein Trust Layer, all of the data, when it connects to an LLM, runs through our gateway. If there was a prompt that had personally identifiable information — a credit card number, an email address — at a minimum, all that is stripped out. The LLMs do not store the output; we store the output for auditing back in Salesforce. Any output that comes back through our gateway is logged in our system; it runs through a toxicity model; and only at the end do we put PII data back into the answer. There are real pieces beyond a handshake that this data is safe."
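Salesforce hasn't published the Trust Layer's internals, but the pattern Jania describes (strip PII before the LLM call, log the output, restore PII at the end) is straightforward to illustrate. Here's a toy sketch of that gateway pattern; it is an assumption-laden illustration, not Salesforce's implementation:

```python
# Purely illustrative: a toy version of the gateway pattern described above.
# This is NOT Salesforce's implementation.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace PII with placeholders; remember the originals for reinsertion."""
    found: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            found[token] = match
            prompt = prompt.replace(match, token)
    return prompt, found

def restore(response: str, found: dict[str, str]) -> str:
    """Put the original PII back into the model's answer at the very end."""
    for token, original in found.items():
        response = response.replace(token, original)
    return response

safe_prompt, mapping = redact("Email jane@example.com about card 4111 1111 1111 1111")
print(safe_prompt)   # placeholders instead of PII
```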

