

Here’s Optimizely’s Automatic Sample Ratio Mismatch Detection


Optimizely Experiment’s automatic sample ratio mismatch (SRM) detection delivers peace of mind to experimenters. It reduces a user’s exposure time to bad experiences by rapidly detecting any experiment deterioration.

This deterioration is caused by unexpected imbalances of visitors to a variation in an experiment. Most importantly, this auto SRM detection empowers product managers, marketers, engineers, and experimentation teams to confidently launch more experiments. 

How Optimizely Experiment’s stats engine and automatic sample ratio mismatch detection work together

Sample ratio mismatch detection acts like the bouncer at the door with a mechanical counter, checking guests’ (users’) tickets and telling them which room they get to party in.

Stats engine is like the party host who is always checking the vibes (behavior) of the guests as people come into the room.


If SRM does its job right, then stats engine can confidently tell which party room is better and direct more traffic to the winning variation (the better party) sooner.

Why would I want Optimizely Experiment’s SRM detection?

It’s important to ensure Optimizely Experiment users not only know their experiment results are trustworthy but also have the tools to understand what an imbalance can mean for their results and how to prevent it.

Uniquely, Optimizely Experiment goes further by combining the power of automatic visitor imbalance detection with an insightful experiment health indicator. This experiment health indicator does double duty by letting our customers know when all is well and no imbalance is present.

Then, when insight is needed to protect your business decisions, Optimizely delivers just-in-time alerts that help our customers recognize the severity of an error, diagnose it, and recover from it.

Why should I care about sample ratio mismatch (SRM)?

Just like a fever is a symptom of many illnesses, an SRM is a symptom of a variety of data quality issues. Ignoring an SRM without knowing the root cause may result in a bad feature appearing to be good and being shipped out to users, or vice versa. Finding an experiment with an unknown source of traffic imbalance lets you turn it off quickly and reduce the blast radius.

Then what is the connection between a “mismatch” and “sample ratio”?

When we get ready to launch an experiment, we assign a traffic split of users for Optimizely Experiment to distribute to each variation. We expect the assigned traffic split to reasonably match up with the actual traffic split in a live experiment. An experiment is exposed to an SRM imbalance when there is a statistically significant difference between the expected and the actual assigned traffic splits of visitors to an experiment’s variations.
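To make “statistically significant difference” concrete, here is a minimal fixed-horizon check with hypothetical counts. This is the textbook chi-square-style approach, not Optimizely’s sequential algorithm discussed below, and the 0.001 threshold is an illustrative convention:

```python
import math

def srm_pvalue(observed, expected_ratios):
    """Fixed-horizon chi-square check for a two-variation traffic split.

    observed        -- visitor counts per variation, e.g. [50310, 49690]
    expected_ratios -- the configured split, e.g. [0.5, 0.5]
    """
    total = sum(observed)
    expected = [total * r for r in expected_ratios]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # With two variations there is 1 degree of freedom, and a 1-dof
    # chi-square variable is a squared standard normal, so:
    return math.erfc(math.sqrt(stat / 2))

# Hypothetical counts from a 50/50 experiment with 100,000 visitors:
p = srm_pvalue([50_310, 49_690], [0.5, 0.5])
# SRM checks typically use a stringent threshold such as 0.001, so a
# p-value near 0.05 is suspicious but would not be flagged here.
srm_detected = p < 0.001
```

A 310-visitor gap on 100,000 visitors looks tiny, yet its p-value already hovers near 0.05; this is why a deliberately stringent threshold matters.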

1. A mismatch doesn’t mean an imperfect match

Remember: A bona fide imbalance requires a statistically significant difference in visitor counts. Don’t expect a picture-perfect, identical, exact match of the launch-day traffic split to your in-production traffic split. There will always be some ever-so-slight deviation.

Not every traffic disparity automatically signifies that an experiment is useless. Because Optimizely deeply values our customers’ time and energy, we developed a new statistical test that continuously monitors experiment results and detects harmful SRMs as early as possible, all while controlling for crying wolf over false positives (i.e., concluding there is a surprising difference between a test variation and the baseline when there is no real difference).

2. Going under the hood of Optimizely Experiment’s SRM detection algorithm

Optimizely Experiment’s automatic SRM detection feature employs a sequential Bayesian multinomial test (say that 5 times fast!), named sequential sample ratio mismatch. Optimizely statisticians Michael Lindon and Alen Malek pioneered this method, and it is a new contribution to the field of Sequential Statistics. Optimizely Experiment’s sample ratio mismatch detection harmonizes sequential and Bayesian methodologies by continuously checking traffic counts and testing for any significant imbalance in a variation’s visitor counts. The algorithm’s construction is Bayesian inspired to account for an experiment’s optional stopping and continuation while delivering sequential guarantees of Type-I error probabilities.
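The published method is more sophisticated, but the general construction can be sketched for a two-variation split: compute a Bayes factor comparing a Beta-prior marginal likelihood against the configured point-null split, and flag when it exceeds 1/α. Under the null this Bayes factor is a nonnegative martingale, so Ville’s inequality bounds the false-alarm probability by α at any monitoring time. A toy sketch, not Optimizely’s production code (the prior, α, and peeking schedule are illustrative):

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def sequential_srm(cumulative_counts, null_ratio=0.5, alpha=0.001, prior=(1.0, 1.0)):
    """Sequential Bayes-factor SRM test for a two-variation split.

    cumulative_counts yields running totals (n_a, n_b). Under the null
    split the Bayes factor is a nonnegative martingale, so flagging when
    it exceeds 1/alpha keeps the false-alarm rate at or below alpha no
    matter how often you peek (Ville's inequality) -- the guarantee a
    fixed-horizon chi-square test cannot give.
    """
    a0, b0 = prior
    threshold = math.log(1.0 / alpha)
    for n_a, n_b in cumulative_counts:
        # Marginal likelihood of the counts under a Beta(a0, b0) prior...
        log_marginal = log_beta(a0 + n_a, b0 + n_b) - log_beta(a0, b0)
        # ...versus the likelihood under the configured point-null split.
        log_null = n_a * math.log(null_ratio) + n_b * math.log(1.0 - null_ratio)
        if log_marginal - log_null >= threshold:
            return True, (n_a, n_b)  # imbalance flagged at this peek
    return False, (n_a, n_b)

# A 50/50 experiment actually delivering ~60/40, checked every 100 visitors:
drifting = [(int(0.6 * n), n - int(0.6 * n)) for n in range(100, 2001, 100)]
flagged, counts_at_flag = sequential_srm(drifting)
```

Run the same test on a perfectly balanced stream and it never fires, which is the point: continuous monitoring without inflating false alarms.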

3. Beware of chi-eap alternatives!

The most popular freely available SRM calculators employ the chi-square test. We highly recommend a careful review of the mechanics of chi-square testing. The main issue with the chi-squared method is that problems are discovered only after collecting all the data. This is arguably far too late and goes against why most clients want SRM detection in the first place. In our blog post “A better way to test for sample ratio mismatches (or why I don’t use a chi-squared test)”, we go deeper into chi-square mechanics and how what we built accounts for the gaps left behind by the alternatives.

Common causes of an SRM  

1. Redirects & Delays

An SRM usually results from some visitors closing out and leaving the page before the redirect finishes executing. Because we only send the decision events once visitors arrive on the page and Optimizely Experiment loads, we can’t count these visitors in our results page unless they return at some point and send an event to Optimizely Experiment.

An SRM can emerge from anything that would cause Optimizely Experiment’s event calls to delay or not fire, such as variation code changes. It also occurs when redirect experiments shuttle visitors to a different domain, and it is exacerbated by slow connection times.
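The mechanics are easy to see in a tiny simulation: assign visitors 50/50, but drop a small fraction of the redirect arm’s decision events (a hypothetical 4% loss rate) to mimic visitors who leave before the event fires:

```python
import random

random.seed(7)
counted = {"control": 0, "redirect": 0}
for _ in range(100_000):
    variation = random.choice(["control", "redirect"])  # true 50/50 split
    # Hypothetical: 4% of redirected visitors close the tab before the
    # decision event fires, so Optimizely Experiment never counts them.
    if variation == "redirect" and random.random() < 0.04:
        continue
    counted[variation] += 1

# The counted split is now roughly 50,000 vs 48,000: a real imbalance in
# the recorded data even though the underlying assignment was fair.
```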

2. Force-bucketing

If a user first gets bucketed in the experiment and then that decision is used to force-bucket them in a subsequent experiment, then the results of that subsequent experiment will become imbalanced.

Here’s an example:

Variation A provides a wildly different user experience than Variation B.

Visitors bucketed into Variation A have a great experience, and many of them continue to log in and land into the subsequent experiment where they’re force-bucketed into Variation A.

But, visitors who were bucketed into Variation B aren’t having a good experience. Only a few users log in and land into a subsequent experiment where they will be force-bucketed into Variation B.

Well, now you have many more visitors in Variation A than in Variation B.
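This conditioning effect can be reproduced in a short sketch (the login rates are hypothetical):

```python
import random

random.seed(42)
second_experiment = {"A": 0, "B": 0}
for _ in range(10_000):
    first = random.choice(["A", "B"])  # fair 50/50 in the first experiment
    # Hypothetical engagement gap: Variation A's better experience drives
    # far more logins, and logged-in visitors are force-bucketed into the
    # same variation of the follow-up experiment.
    login_rate = 0.60 if first == "A" else 0.30
    if random.random() < login_rate:
        second_experiment[first] += 1

# second_experiment now holds roughly a 2:1 ratio of A to B visitors,
# despite the perfectly fair upstream split.
```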

3. Site has its own redirects

Some sites have their own redirects (for example, 301s) that, combined with our redirects, can result in a visitor landing on a page without the snippet. This causes pending decision events to get stuck in localStorage, so Optimizely Experiment never receives or counts them.

4. Hold/send events API calls are housed outside of the snippet

Some users include hold/send events in project JS. However, others include it in other scripts on the page, such as in vendor bundles or analytics tracking scripts. This represents another script that must be properly loaded for the decisions to fire appropriately. Implementation or loading rates may differ across variations, particularly in the case of redirects.

Interested?  

If you’re already an Optimizely Experiment customer and you’d like to learn more about how automatic SRM detection benefits your A/B tests, check out our knowledge base documentation.

For further details, you can always reach out to your customer success manager, but do take a moment to review our documentation first!

If you’re not a customer, get started with us here! 

And if you’d like to dig deeper into the engine that powers Optimizely experimentation, you can check out our page on faster decisions you can trust for digital experimentation.




YouTube Ad Specs, Sizes, and Examples [2024 Update]


Introduction

With billions of users each month, YouTube is the world’s second largest search engine and top website for video content. This makes it a great place for advertising. To succeed, advertisers need to follow the correct YouTube ad specifications. These rules help your ad reach more viewers, increasing the chance of gaining new customers and boosting brand awareness.

Types of YouTube Ads

Video Ads

  • Description: These play before, during, or after a YouTube video on computers or mobile devices.
  • Types:
    • In-stream ads: Can be skippable or non-skippable.
    • Bumper ads: Non-skippable, short ads that play before, during, or after a video.

Display Ads

  • Description: These appear in different spots on YouTube and usually use text or static images.
  • Note: YouTube does not support display image ads directly on its app, but these can be targeted to YouTube.com through Google Display Network (GDN).

Companion Banners

  • Description: Appears to the right of the YouTube player on desktop.
  • Requirement: Must be purchased alongside In-stream ads, Bumper ads, or In-feed ads.

In-feed Ads

  • Description: Resemble videos with images, headlines, and text. They link to a public or unlisted YouTube video.

Outstream Ads

  • Description: Mobile-only video ads that play outside of YouTube, on websites and apps within the Google video partner network.

Masthead Ads

  • Description: Premium, high-visibility banner ads displayed at the top of the YouTube homepage for both desktop and mobile users.

YouTube Ad Specs by Type

Skippable In-stream Video Ads

  • Placement: Before, during, or after a YouTube video.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
    • Action: 15-20 seconds

Non-skippable In-stream Video Ads

  • Description: Must be watched completely before the main video.
  • Length: 15 seconds (or 20 seconds in certain markets).
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1

Bumper Ads

  • Length: Maximum 6 seconds.
  • File Format: MP4, Quicktime, AVI, ASF, Windows Media, or MPEG.
  • Resolution:
    • 16:9: 640 x 360px
    • 4:3: 480 x 360px

In-feed Ads

  • Description: Show alongside YouTube content, like search results or the Home feed.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
  • Headline/Description:
    • Headline: Up to 2 lines, 40 characters per line
    • Description: Up to 2 lines, 35 characters per line

Display Ads

  • Description: Static images or animated media that appear on YouTube next to video suggestions, in search results, or on the homepage.
  • Image Size: 300 x 60 pixels.
  • File Type: GIF, JPG, PNG.
  • File Size: Max 150KB.
  • Max Animation Length: 30 seconds.

Outstream Ads

  • Description: Mobile-only video ads that appear on websites and apps within the Google video partner network, not on YouTube itself.
  • Logo Specs:
    • Square: 1:1 (200 x 200px).
    • File Type: JPG, GIF, PNG.
    • Max Size: 200KB.

Masthead Ads

  • Description: High-visibility ads at the top of the YouTube homepage.
  • Resolution: 1920 x 1080 or higher.
  • File Type: JPG or PNG (without transparency).
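Teams that automate creative QA sometimes encode specs like these in a pre-flight check. Below is a minimal sketch using two of the limits above; the `SPECS` table and function name are our own invention, and the numbers should be re-verified against Google’s current documentation:

```python
# Hypothetical pre-flight checker built from the spec tables above.
SPECS = {
    "display": {"max_bytes": 150 * 1024, "types": {"gif", "jpg", "png"}},
    "outstream_logo": {"max_bytes": 200 * 1024, "types": {"jpg", "gif", "png"}},
}

def check_asset(kind, file_type, size_bytes):
    """Return a list of spec violations for an ad asset (empty list = OK)."""
    spec = SPECS[kind]
    problems = []
    if file_type.lower() not in spec["types"]:
        problems.append(f"{file_type} not allowed for {kind}")
    if size_bytes > spec["max_bytes"]:
        problems.append(f"{size_bytes} bytes exceeds {spec['max_bytes']}")
    return problems

# A 200 KB PNG display ad exceeds the 150 KB limit, so one issue is reported:
issues = check_asset("display", "PNG", 200_000)
```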

Conclusion

YouTube offers a variety of ad formats to reach audiences effectively in 2024. Whether you want to build brand awareness, drive conversions, or target specific demographics, YouTube provides a dynamic platform for your advertising needs. Always follow Google’s advertising policies and the technical ad specs to ensure your ads perform their best. Ready to start using YouTube ads? Contact us today to get started!



Why We Are Always ‘Clicking to Buy’, According to Psychologists


Amazon pillows.




A deeper dive into data, personalization and Copilots


Salesforce launched a collection of new, generative AI-related products at Connections in Chicago this week. They included new Einstein Copilots for marketers and merchants and Einstein Personalization.

To better understand not only the potential impact of the new products but also the evolving Salesforce architecture, we sat down with Bobby Jania, CMO of Marketing Cloud.

Dig deeper: Salesforce piles on the Einstein Copilots

Salesforce’s evolving architecture

It’s hard to deny that Salesforce likes coming up with new names for platforms and products (what happened to Customer 360?) and this can sometimes make the observer wonder if something is brand new, or old but with a brand new name. In particular, what exactly is Einstein 1 and how is it related to Salesforce Data Cloud?

“Data Cloud is built on the Einstein 1 platform,” Jania explained. “The Einstein 1 platform is our entire Salesforce platform and that includes products like Sales Cloud, Service Cloud — that it includes the original idea of Salesforce not just being in the cloud, but being multi-tenancy.”

Data Cloud — not an acquisition, of course — was built natively on that platform. It was the first product built on Hyperforce, Salesforce’s new cloud infrastructure architecture. “Since Data Cloud was on what we now call the Einstein 1 platform from Day One, it has always natively connected to, and been able to read anything in Sales Cloud, Service Cloud [and so on]. On top of that, we can now bring in, not only structured but unstructured data.”

That’s a significant progression from the position, several years ago, when Salesforce had stitched together a platform around various acquisitions (ExactTarget, for example) that didn’t necessarily talk to each other.

“At times, what we would do is have a kind of behind-the-scenes flow where data from one product could be moved into another product,” said Jania, “but in many of those cases the data would then be in both, whereas now the data is in Data Cloud. Tableau will run natively off Data Cloud; Commerce Cloud, Service Cloud, Marketing Cloud — they’re all going to the same operational customer profile.” They’re not copying the data from Data Cloud, Jania confirmed.

Another thing to know is that it’s possible for Salesforce customers to import their own datasets into Data Cloud. “We wanted to create a federated data model,” said Jania. “If you’re using Snowflake, for example, we more or less virtually sit on your data lake. The value we add is that we will look at all your data and help you form these operational customer profiles.”

Let’s learn more about Einstein Copilot

“Copilot means that I have an assistant with me in the tool where I need to be working that contextually knows what I am trying to do and helps me at every step of the process,” Jania said.

For marketers, this might begin with a campaign brief developed with Copilot’s assistance, the identification of an audience based on the brief, and then the development of email or other content. “What’s really cool is the idea of Einstein Studio where our customers will create actions [for Copilot] that we hadn’t even thought about.”

Here’s a key insight (back to nomenclature). We reported on Copilot for marketers, Copilot for merchants, Copilot for shoppers. It turns out, however, that there is just one Copilot, Einstein Copilot, and these are use cases. “There’s just one Copilot, we just add these for a little clarity; we’re going to talk about marketing use cases, about shoppers’ use cases. These are actions for the marketing use cases we built out of the box; you can build your own.”

It’s surely going to take a little time for marketers to learn to work easily with Copilot. “There’s always time for adoption,” Jania agreed. “What is directly connected with this is, this is my ninth Connections and this one has the most hands-on training that I’ve seen since 2014 — and a lot of that is getting people using Data Cloud, using these tools rather than just being given a demo.”

What’s new about Einstein Personalization

Salesforce Einstein has been around since 2016 and many of the use cases seem to have involved personalization in various forms. What’s new?

“Einstein Personalization is a real-time decision engine and it’s going to choose next-best-action, next-best-offer. What is new is that it’s a service now that runs natively on top of Data Cloud.” A lot of real-time decision engines need their own set of data that might actually be a subset of data. “Einstein Personalization is going to look holistically at a customer and recommend a next-best-action that could be natively surfaced in Service Cloud, Sales Cloud or Marketing Cloud.”

Finally, trust

One feature of the presentations at Connections was the reassurance that, although public LLMs like ChatGPT could be selected for application to customer data, none of that data would be retained by the LLMs. Is this just a matter of written agreements? No, not just that, said Jania.

“In the Einstein Trust Layer, all of the data, when it connects to an LLM, runs through our gateway. If there was a prompt that had personally identifiable information — a credit card number, an email address — at a minimum, all that is stripped out. The LLMs do not store the output; we store the output for auditing back in Salesforce. Any output that comes back through our gateway is logged in our system; it runs through a toxicity model; and only at the end do we put PII data back into the answer. There are real pieces beyond a handshake that this data is safe.”
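The flow Jania describes (strip PII before the prompt reaches the LLM, log the exchange, then restore PII in the answer) can be sketched generically. This is a toy illustration, not Salesforce’s implementation; the regex patterns and placeholder format are our own:

```python
import re

# Illustrative patterns only; a production gateway would use far more
# robust PII detection than these two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt):
    """Swap PII for placeholder tokens; return masked text and a lookup."""
    lookup = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            lookup[token] = match
            prompt = prompt.replace(match, token, 1)
    return prompt, lookup

def unmask(text, lookup):
    """Restore the original PII in the model's answer."""
    for token, original in lookup.items():
        text = text.replace(token, original)
    return text

masked, lookup = mask_pii("Refund jane@example.com on card 4111 1111 1111 1111")
# The LLM only ever sees the masked text; the lookup stays in the gateway.
```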


