
Metrics: How to supercharge your experimentation program


Experimentation can be pretty…well, testing at times. Downsized teams, restricted budgets, lack of buy-in, and a misguided focus on win rates (which, by the way, are always low – read on to see why that shouldn’t concern you) can all hold a program back.

So, how do you actually go about scaling an experimentation program? The process starts with choosing the right experimentation metrics.  

Metrics are everywhere, but do they really impact business outcomes?  

Yes – because metrics are all about measuring success. As a practitioner, you need to decide what really matters, what you’re trying to achieve, and how well you’re doing. 


Establishing your key metrics also sets a baseline for comparisons, which you can use to chart progress over time. They also make your results easier to share across the organization by setting the benchmark for what ‘good’ looks like. As such, metrics play a key role in communicating the value of experimentation, achieving buy-in, and ultimately attracting the resources you need to scale up.   

Image: Metrics by impact share

Measuring the impact of programs is a common challenge – and key metrics are a great way to address it. Eric pointed out that setting the right metrics is perhaps the most important thing to get right, not just when launching an experimentation program but also when moving it to the next level.

But… only 12% of experiments actually win?

One of the most eye-catching findings was the fact that only 12% of experiments won. Yet you’d think win rate must be the most important factor in experimentation, right?   

Wrong.


Our team discovered that win rates are more a vanity metric than a key indicator of performance. Eric highlighted that impact is a far more reliable indicator of success: it covers not only your win rate but also the uplift delivered – because at the end of the day, the monetary impact is what businesses really care about.

For example, would you rather have tests that win 10% of the time but deliver a million-dollar uplift? Or tests that win 50% of the time but only deliver $100 in additional revenues? (You don’t really have to answer that one.)  

Look at the win rate alone and, sure, it can be psychologically challenging to see just 12%. It’s not hard to imagine underwhelmed management asking: why bother in the first place? Well, first of all, that’s where your metrics come into play, so you can evaluate what success really looks like.

At the same time, you need to flip things around and see yourself as learning 100% of the time. For example, the figures showed that just 12% of experiments lose outright, which means you get to eliminate features that would have had a negative effect. And the 76% that prove inconclusive mean you can stop investing time and resources in irrelevant areas.

So sure, win rate is important – especially if you’re trying to get buy-in at the beginning of your program. But we saw that you need to move past it, framing the value of experimentation in terms of uplift and translating win rates into expected impact per test.
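
To make that concrete: expected impact per test is simply the win rate multiplied by the average uplift a winning test delivers. Here’s a minimal Python sketch using the hypothetical figures from the example above (reading “uplift” as uplift per winning test):

```python
# Expected impact per test = win rate x average uplift per winning test.
# The figures below are the hypothetical ones from the example above.

def expected_impact_per_test(win_rate: float, avg_uplift_per_win: float) -> float:
    """Expected monetary uplift contributed by a single experiment."""
    return win_rate * avg_uplift_per_win

# Program A: wins 10% of the time, but a winner is worth $1,000,000.
program_a = expected_impact_per_test(0.10, 1_000_000)  # -> $100,000 per test

# Program B: wins 50% of the time, but a winner is worth only $100.
program_b = expected_impact_per_test(0.50, 100)        # -> $50 per test

print(f"Program A: ${program_a:,.0f} per test")  # Program A: $100,000 per test
print(f"Program B: ${program_b:,.0f} per test")  # Program B: $50 per test
```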


So… what other metrics should you be looking at to get up to speed?

The report also identified the most common metrics used to gauge the overall success of experimentation programs. The team found that velocity was far and away the most widely used: the number of tests performed matters more than anything else, with the proviso that they are underpinned by quality – by which we mean experiments with some thought behind them.

Image: The median company runs 34 experiments a year.

Impact is another key metric. It may be a lagging indicator but, as we’ve already seen, uplift is what the top dog, the big cheese, the head honcho really cares about.

A third significant metric that came up is the percentage of an organization contributing to the program. This is critical when you’re on the road to maturity, creating momentum, and measuring how well you’re doing. Gaining buy-in from across the organization also plays a big part in extending the pipeline of testing ideas that will allow you to keep growing and improving.    
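
If you track these programmatically, all three metrics – velocity, impact, and participation – reduce to simple aggregations over an experiment log. A minimal sketch; the record fields and figures here are hypothetical:

```python
# Minimal sketch: deriving the three program metrics discussed above from an
# experiment log. All field names and figures here are hypothetical.
from dataclasses import dataclass

@dataclass
class Experiment:
    owner_team: str    # team that proposed and ran the test
    uplift_usd: float  # measured monetary impact (0 if flat or inconclusive)
    won: bool

def program_metrics(log: list[Experiment], teams_in_org: int) -> dict[str, float]:
    return {
        "velocity": len(log),                                # tests run
        "total_impact_usd": sum(e.uplift_usd for e in log),  # uplift delivered
        # share of the organization contributing test ideas
        "participation": len({e.owner_team for e in log}) / teams_in_org,
    }

log = [
    Experiment("growth", 120_000, True),
    Experiment("checkout", 0, False),
    Experiment("search", 45_000, True),
]
print(program_metrics(log, teams_in_org=10))
# {'velocity': 3, 'total_impact_usd': 165000, 'participation': 0.3}
```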

More tests = more value? Even the data says it’s not that simple.

Eric and I also discussed the importance of testing velocity: is it really as simple as more tests = more value? As a rule, yes: the evidence shows that more tests = more wins = more value. After all, however much research you do and whatever preparations you make, you never know for sure what’s going to work. It stands to reason that the more experiments you run, the greater your chance of hitting a winner.
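
The arithmetic backs this up. If each experiment wins independently with probability p, the chance of landing at least one winner across n tests is 1 - (1 - p)^n. A quick sketch using the 12% win rate and the median of 34 tests a year cited above (independence between tests is an assumption):

```python
# Chance of at least one winning experiment across n independent tests,
# each winning with probability p: 1 - (1 - p)**n.
# Assumes tests are independent, which real programs only approximate.

def chance_of_a_winner(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.12  # the 12% win rate reported above
for n in (1, 5, 10, 34):  # 34 = the median company's tests per year
    print(f"{n:>2} tests -> {chance_of_a_winner(p, n):.0%} chance of a winner")
# 1 test -> 12%, 5 tests -> 47%, 10 tests -> 72%, 34 tests -> 99%
```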

When you’re getting your program up and running – say, the first 12-18 months – yes, run as many tests as possible. That’ll help you build a bank of success stories with the aim of winning more resources and establishing a culture of experimentation.

But we also saw that moving to the next level is not necessarily about increasing velocity. It’s about focusing on complexity and moving beyond cosmetic changes. Minute tweaks tend to result in minute uplifts. Our research showed that the highest-uplift experiments have two things in common:

  1. They make larger code changes with more effect on the user experience.   
  2. They test a higher number of variations simultaneously.   

More complex experiments that make major changes to the user experience (e.g. pricing, discounts, checkout flow, data collection) are more likely to generate higher uplifts.


Solely focusing on revenue can be a roadblock

Revenue is another key metric that teams report on when highlighting the value of their experimentation program. Not surprising since ultimately revenue keeps a business… in business. Most companies are focused on making money, so that’s where you’re going to want to focus to gain support from the execs. Having said that, Mark and Eric discussed the flip side of the coin.   

Uplift in revenue can be hard to track – say, if you don’t have an ecommerce website, or if your business involves complex buying cycles that last two or three years. If you can’t directly track revenue, focus your efforts as far down the funnel as you can.

Or say your site or app doesn’t actually aim to generate revenue. They gave the example of a company that makes big sales directly to a small group of customers and uses its site to educate the public.

The key here is to understand the ultimate purpose of the channel and build your strategy around that. Rather than boosting conversions, you may be more interested in click-throughs to certain pages, maximizing time on site, or encouraging sign-ups for an event or downloads of a paper.

It’s also worth considering a few other metrics that we found to be undervalued. Take search rate, for example. As the most undervalued experiment goal, it is only tested 1% of the time, yet it has the highest expected impact at 2.3%. Customers who actively search for a product or service are likely to convert at two to three times the rate of all other users.  

Image: Metrics’ impact on website visitors

Remember: no journey means no conversions.

Three at-a-glance takeaways   

To wrap up, here are three takeaways to help you build and scale a successful experimentation program. 

  1. Think ABCD instead of simply AB. Experiments that test multiple treatments are three times more successful than standard A/B tests.     
  2. Conduct complex experiments. Tests that make major changes to the user experience (pricing, discounts, checkout flow, data collection, etc.) are more likely to win and to deliver higher uplifts.
  3. Choose metrics that match the overall objectives of your site or app – and don’t get too hung up on win rates!

All this is just a taster. 

Why not see the full report? 

Our Evolution of Experimentation report is packed with data from 127,000 experiments, revealing insights, techniques, and examples for scaling up a successful experimentation program. Read the report.





YouTube Ad Specs, Sizes, and Examples [2024 Update]


Introduction

With billions of users each month, YouTube is the world’s second largest search engine and top website for video content. This makes it a great place for advertising. To succeed, advertisers need to follow the correct YouTube ad specifications. These rules help your ad reach more viewers, increasing the chance of gaining new customers and boosting brand awareness.

Types of YouTube Ads

Video Ads

  • Description: These play before, during, or after a YouTube video on computers or mobile devices.
  • Types:
    • In-stream ads: Can be skippable or non-skippable.
    • Bumper ads: Non-skippable, short ads that play before, during, or after a video.

Display Ads

  • Description: These appear in different spots on YouTube and usually use text or static images.
  • Note: YouTube does not support display image ads directly on its app, but these can be targeted to YouTube.com through Google Display Network (GDN).

Companion Banners

  • Description: Appears to the right of the YouTube player on desktop.
  • Requirement: Must be purchased alongside In-stream ads, Bumper ads, or In-feed ads.

In-feed Ads

  • Description: Resemble videos with images, headlines, and text. They link to a public or unlisted YouTube video.

Outstream Ads

  • Description: Mobile-only video ads that play outside of YouTube, on websites and apps within the Google video partner network.

Masthead Ads

  • Description: Premium, high-visibility banner ads displayed at the top of the YouTube homepage for both desktop and mobile users.

YouTube Ad Specs by Type

Skippable In-stream Video Ads

  • Placement: Before, during, or after a YouTube video.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
    • Action: 15-20 seconds

Non-skippable In-stream Video Ads

  • Description: Must be watched completely before the main video.
  • Length: 15 seconds (or 20 seconds in certain markets).
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1

Bumper Ads

  • Length: Maximum 6 seconds.
  • File Format: MP4, QuickTime, AVI, ASF, Windows Media, or MPEG.
  • Resolution:
    • Horizontal (16:9): 640 x 360px
    • Standard (4:3): 480 x 360px

In-feed Ads

  • Description: Show alongside YouTube content, like search results or the Home feed.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
  • Headline/Description:
    • Headline: Up to 2 lines, 40 characters per line
    • Description: Up to 2 lines, 35 characters per line

Display Ads

  • Description: Static images or animated media that appear on YouTube next to video suggestions, in search results, or on the homepage.
  • Image Size: 300×60 pixels.
  • File Type: GIF, JPG, PNG.
  • File Size: Max 150KB.
  • Max Animation Length: 30 seconds.

Outstream Ads

  • Description: Mobile-only video ads that appear on websites and apps within the Google video partner network, not on YouTube itself.
  • Logo Specs:
    • Square: 1:1 (200 x 200px).
    • File Type: JPG, GIF, PNG.
    • Max Size: 200KB.

Masthead Ads

  • Description: High-visibility ads at the top of the YouTube homepage.
  • Resolution: 1920 x 1080 or higher.
  • File Type: JPG or PNG (without transparency).
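
If you check creatives programmatically before upload, the tables above translate naturally into a lookup table plus a validator. Below is a minimal sketch for the display ad spec; the limits come from the table above, but the validator itself is illustrative, not a Google or YouTube API:

```python
# Illustrative pre-upload check against the display ad spec listed above.
# This is a local sanity check, not a Google/YouTube API.
import os

DISPLAY_AD_SPEC = {
    "size_px": (300, 60),                       # required image size
    "allowed_types": {".gif", ".jpg", ".png"},  # permitted file types
    "max_file_size_kb": 150,                    # maximum file size
}

def check_display_ad(path: str, width_px: int, height_px: int) -> list[str]:
    """Return a list of spec violations; an empty list means the ad passes."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in DISPLAY_AD_SPEC["allowed_types"]:
        problems.append(f"file type {ext!r} not one of GIF/JPG/PNG")
    if os.path.getsize(path) > DISPLAY_AD_SPEC["max_file_size_kb"] * 1024:
        problems.append("file exceeds the 150KB limit")
    if (width_px, height_px) != DISPLAY_AD_SPEC["size_px"]:
        problems.append(f"expected 300x60px, got {width_px}x{height_px}px")
    return problems

# Example usage (hypothetical file):
# print(check_display_ad("banner.png", 300, 60))  # [] if compliant
```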

Conclusion

YouTube offers a variety of ad formats to reach audiences effectively in 2024. Whether you want to build brand awareness, drive conversions, or target specific demographics, YouTube provides a dynamic platform for your advertising needs. Always follow Google’s advertising policies and the technical ad specs to ensure your ads perform their best. Ready to start using YouTube ads? Contact us today to get started!




A deeper dive into data, personalization and Copilots


Salesforce launched a collection of new, generative AI-related products at Connections in Chicago this week. They included new Einstein Copilots for marketers and merchants and Einstein Personalization.

To better understand not only the potential impact of the new products but also the evolving Salesforce architecture, we sat down with Bobby Jania, CMO of Marketing Cloud.

Dig deeper: Salesforce piles on the Einstein Copilots

Salesforce’s evolving architecture

It’s hard to deny that Salesforce likes coming up with new names for platforms and products (what happened to Customer 360?) and this can sometimes make the observer wonder if something is brand new, or old but with a brand new name. In particular, what exactly is Einstein 1 and how is it related to Salesforce Data Cloud?

“Data Cloud is built on the Einstein 1 platform,” Jania explained. “The Einstein 1 platform is our entire Salesforce platform and that includes products like Sales Cloud, Service Cloud — that it includes the original idea of Salesforce not just being in the cloud, but being multi-tenancy.”

Data Cloud — not an acquisition, of course — was built natively on that platform. It was the first product built on Hyperforce, Salesforce’s new cloud infrastructure architecture. “Since Data Cloud was on what we now call the Einstein 1 platform from Day One, it has always natively connected to, and been able to read anything in Sales Cloud, Service Cloud [and so on]. On top of that, we can now bring in, not only structured but unstructured data.”

That’s a significant progression from the position, several years ago, when Salesforce had stitched together a platform around various acquisitions (ExactTarget, for example) that didn’t necessarily talk to each other.

“At times, what we would do is have a kind of behind-the-scenes flow where data from one product could be moved into another product,” said Jania, “but in many of those cases the data would then be in both, whereas now the data is in Data Cloud. Tableau will run natively off Data Cloud; Commerce Cloud, Service Cloud, Marketing Cloud — they’re all going to the same operational customer profile.” They’re not copying the data from Data Cloud, Jania confirmed.

Another thing to know: it’s possible for Salesforce customers to import their own datasets into Data Cloud. “We wanted to create a federated data model,” said Jania. “If you’re using Snowflake, for example, we more or less virtually sit on your data lake. The value we add is that we will look at all your data and help you form these operational customer profiles.”

Let’s learn more about Einstein Copilot

“Copilot means that I have an assistant with me in the tool where I need to be working that contextually knows what I am trying to do and helps me at every step of the process,” Jania said.

For marketers, this might begin with a campaign brief developed with Copilot’s assistance, the identification of an audience based on the brief, and then the development of email or other content. “What’s really cool is the idea of Einstein Studio where our customers will create actions [for Copilot] that we hadn’t even thought about.”

Here’s a key insight (back to nomenclature). We reported on Copilot for marketers, Copilot for merchants, Copilot for shoppers. It turns out, however, that there is just one Copilot, Einstein Copilot, and these are use cases. “There’s just one Copilot; we just add these for a little clarity. We’re going to talk about marketing use cases, about shoppers’ use cases. These are actions for the marketing use cases we built out of the box; you can build your own.”

It’s surely going to take a little time for marketers to learn to work easily with Copilot. “There’s always time for adoption,” Jania agreed. “What is directly connected with this is, this is my ninth Connections and this one has the most hands-on training that I’ve seen since 2014 — and a lot of that is getting people using Data Cloud, using these tools rather than just being given a demo.”

What’s new about Einstein Personalization

Salesforce Einstein has been around since 2016 and many of the use cases seem to have involved personalization in various forms. What’s new?

“Einstein Personalization is a real-time decision engine and it’s going to choose next-best-action, next-best-offer. What is new is that it’s a service now that runs natively on top of Data Cloud.” A lot of real-time decision engines need their own set of data, which might actually be a subset of your data. “Einstein Personalization is going to look holistically at a customer and recommend a next-best-action that could be natively surfaced in Service Cloud, Sales Cloud or Marketing Cloud.”

Finally, trust

One feature of the presentations at Connections was the reassurance that, although public LLMs like ChatGPT could be selected for application to customer data, none of that data would be retained by the LLMs. Is this just a matter of written agreements? No, not just that, said Jania.

“In the Einstein Trust Layer, all of the data, when it connects to an LLM, runs through our gateway. If there was a prompt that had personally identifiable information — a credit card number, an email address — at a minimum, all that is stripped out. The LLMs do not store the output; we store the output for auditing back in Salesforce. Any output that comes back through our gateway is logged in our system; it runs through a toxicity model; and only at the end do we put PII data back into the answer. There are real pieces beyond a handshake that this data is safe.”
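
As a rough mental model – emphatically not Salesforce’s actual implementation – the gateway flow Jania describes can be sketched in a few steps: redact PII from the prompt, call the LLM, log and screen the output on your side, and only then restore the PII:

```python
# Illustrative sketch of the gateway flow described above. This is NOT the
# real Einstein Trust Layer; every name and pattern here is hypothetical.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude card-number match
}

audit_log: list[tuple[str, str]] = []  # stored on our side, never by the LLM

def is_toxic(text: str) -> bool:
    return False  # stand-in for a real toxicity model

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace PII with placeholder tokens, remembering the originals."""
    found: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            found[token] = match
            prompt = prompt.replace(match, token)
    return prompt, found

def gateway(prompt: str, call_llm) -> str:
    safe_prompt, pii = redact(prompt)        # 1. strip PII before the LLM sees it
    output = call_llm(safe_prompt)           # 2. LLM only sees redacted text
    audit_log.append((safe_prompt, output))  # 3. keep the output for auditing
    if is_toxic(output):                     # 4. run a toxicity screen
        raise ValueError("toxic output blocked")
    for token, value in pii.items():         # 5. put PII back only at the end
        output = output.replace(token, value)
    return output
```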

