
Mobile A/B Testing: 7 Big Errors and Misconceptions You Need to Avoid

It’s no secret that marketing rests largely on data, and mobile marketing and user acquisition are no exception. In this domain, choosing the right product page elements for the App Store and Google Play can make a crucial difference to the success of an app or mobile game. Mobile A/B testing is a tool that helps you make that choice based on data.

However, we often hear that A/B testing doesn’t bring the desired results, or that teams aren’t sure they are running mobile experiments correctly. This usually comes down to a handful of common mistakes and misinterpretations of the data. In this post, I will cover the biggest mistakes and misleading conclusions in mobile app A/B testing so you can avoid them.

1. Finishing an experiment before you get the right amount of traffic

This is one of the most common mistakes in mobile A/B testing. If you are running classic A/B tests, finishing an experiment before you reach the necessary amount of traffic – the sample size – risks giving you statistically unreliable results.

To get reliable evidence, you need to wait until the required amount of traffic is reached for both A and B variations.

If you are looking for an alternative to the classic approach, consider sequential A/B testing. Start by specifying the baseline conversion rate (the conversion rate of your current variation), the statistical power (80% by default), the significance level and the Minimum Detectable Effect (MDE) – together these determine the sample size.

The significance level is 5% by default, which means the chance of a false positive will not exceed 5%. You can customize this value along with the MDE – the minimum conversion rate uplift you would like to detect. Note: don’t change the significance level, MDE or statistical power after you’ve started an experiment.
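To make these inputs concrete, here is a minimal sketch of how a sample size per variation can be estimated from the baseline conversion rate, MDE, significance level and power, using the standard two-proportion formula. The function name and example numbers are illustrative, not tied to any particular testing platform:

```python
# Illustrative sketch: estimating the sample size per variation for a
# classic (fixed sample size) A/B test. Defaults mirror the text above
# (5% significance, 80% power); this is the textbook normal-approximation
# formula, not any platform's exact implementation.
from statistics import NormalDist

def sample_size_per_variation(baseline_cr, mde, alpha=0.05, power=0.80):
    """Visitors needed in EACH variation to detect an absolute uplift of `mde`."""
    p1 = baseline_cr              # current (control) conversion rate
    p2 = baseline_cr + mde        # conversion rate you hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1

# Example: 3% baseline conversion rate, hoping to detect a 1-point uplift.
print(sample_size_per_variation(0.03, 0.01))  # about 5,300 visitors per variation
```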

With sequential A/B testing, the algorithm keeps checking your variations for statistical significance and shows how much traffic is left until the experiment completes. That’s how it works on our SplitMetrics A/B testing platform.
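SplitMetrics’ exact algorithm isn’t described here, but the general idea of a sequential check can be illustrated with a classic Wald sequential probability ratio test, which re-evaluates the evidence as traffic accumulates and stops as soon as it is conclusive. The sketch below is purely a textbook stand-in; the thresholds and example numbers are assumptions:

```python
# Illustrative sketch of a sequential check (Wald's SPRT for conversion rates).
# This is NOT SplitMetrics' proprietary algorithm, just the general idea of
# re-checking the evidence as traffic comes in instead of waiting for a
# fixed sample size.
import math

def sprt_decision(conversions, visitors, p0, p1, alpha=0.05, beta=0.20):
    """Return a stopping decision for variation B after `visitors` impressions."""
    # Log-likelihood ratio of H1 (rate p1) vs. H0 (rate p0) for the data so far.
    llr = (conversions * math.log(p1 / p0)
           + (visitors - conversions) * math.log((1 - p1) / (1 - p0)))
    upper = math.log((1 - beta) / alpha)   # crossing this -> uplift detected
    lower = math.log(beta / (1 - alpha))   # crossing this -> no detectable effect
    if llr >= upper:
        return "B wins (uplift detected)"
    if llr <= lower:
        return "no detectable effect"
    return "keep collecting traffic"

# Example: 480 conversions out of 12,000 visitors on variation B,
# testing a 3.0% baseline against a hoped-for 4.0% conversion rate.
print(sprt_decision(480, 12_000, p0=0.03, p1=0.04))
```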

Lesson learned: if you run classic A/B tests, don’t finish an experiment until the required amount of traffic is reached. Alternatively, try sequential A/B testing, which lets you check the results at any time.

2. Finishing an experiment before 7 days have passed

Why do you have to wait at least seven days? Different apps and mobile games see peaks in activity on different days of the week. For example, business apps see bursts of activity on Mondays, while games are most popular with users on weekends.

To get reliable results from mobile A/B testing experiments, the experiment should capture your app’s peak days. Otherwise, you risk jumping to conclusions.

For example, say you run tests for a task management app, starting the experiment on Wednesday and finishing it on Saturday. If most of your target audience uses the app on Mondays, you will simply miss the point, because the surge of activity falls outside the experiment period. Or vice versa: you run A/B tests for your racing game from Friday to Sunday, covering only the game’s peak days. In that case, the results will be skewed as well.

So, even if you’ve avoided mistake number one and have already reached the required amount of traffic on the very first day, don’t stop the experiment until seven days have passed.

Lesson learned: because weekly peaks in activity vary for each mobile game or app, do not finish an experiment before the complete (seven-day) cycle has passed.

3. Testing design changes that are too small

One more common mistake in mobile A/B testing is comparing variations that look almost the same due to minor differences in design.

If the only difference between the app icons you are testing is a blue background instead of a light blue one, or you have added a tiny detail to one screenshot variation, you are in trouble: users simply do not notice such small changes.

In this case, both variations will show the same result, and that’s perfectly normal. So if you ever tried running app store A/B tests but gave up because the variations performed equally, it’s worth reflecting on what went wrong – maybe your variations looked pretty much the same.

To make sure you are A/B testing a significant change, show both versions to friends or colleagues. Let them look at each variation for 3-5 seconds; if they can’t tell the difference, you’d better redesign your visual assets.

Lesson learned: if you test variations whose design differences are too small, expect them to show the same result. Such changes are insignificant to users, so test app icons and screenshots that differ markedly from each other.

4. Your banner ads have the same design as one of the app store visual assets

If you use a third-party mobile A/B testing tool such as SplitMetrics, you buy traffic and place banners on ad networks. The point is that such a banner shouldn’t look like one of the visual assets you are testing, whether that’s a screenshot or an element shared with an icon.

For example, you run experiments for your educational app and design a banner that shares an element with the icon in variation A, while variation B uses a completely different icon. Variation A will show a higher conversion rate simply because it matches the banner that users initially saw and clicked on.

Studies of the mere-exposure effect show that when people see something repeatedly, their brains process it faster, which creates a sense of liking. As a result, users tend to tap unconsciously on images they already recognize.

Lesson learned: when working on banner ads, make the design as neutral as possible. The banner design should not coincide with the design of your app icon or screenshot variations.

5. Testing several hypotheses at once

It makes no sense to bundle multiple changes into the same experiment. Some mobile marketers draw the wrong conclusions after running a test because they made several changes at once and therefore cannot know what exactly affected the result.

If you have decided to change the background color of your app store screenshots, create one or a few variations with a different background color and run a test. Don’t change the color, order and text on the screenshots at the same time. Otherwise, you will see a winning variation (say, variation B), but you will have no clue whether it was the color change that actually worked.

Lesson learned: if you test multiple hypotheses at once, you will not be able to tell which change actually drove the result.

6. Misinterpreting the situation when two variations are the same but you get a winner

When running A/A tests, you might be confused when an A/B testing tool shows a winning variation between two identical assets. This is particularly common with Google Play’s built-in experiments tool.

On the SplitMetrics platform, at the 5% significance level, such a result will be reported as statistically insignificant.

Small differences between two identical variations are pure coincidence: different users simply reacted a little differently. It’s like flipping a coin – there is a 50-50 chance you’ll get heads or tails, and likewise a roughly 50-50 chance that either variation will come out ahead of the other.

To know the true difference for certain in a situation like this, you would need results from absolutely every user, which is impossible.
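To see how easily chance alone produces a “winner”, here is a minimal A/A simulation sketch. The numbers (a 3% true conversion rate, 2,000 visitors per variation) are purely illustrative assumptions:

```python
# Illustrative A/A simulation: both variations share the same true conversion
# rate, yet random noise alone puts one of them "ahead" about half the time.
import random

def simulate_aa_tests(true_cr=0.03, visitors=2_000, runs=1_000, seed=42):
    random.seed(seed)
    a_ahead = 0
    for _ in range(runs):
        conv_a = sum(random.random() < true_cr for _ in range(visitors))
        conv_b = sum(random.random() < true_cr for _ in range(visitors))
        if conv_a > conv_b:
            a_ahead += 1
    return a_ahead / runs

# Prints roughly 0.5: like a coin flip, each identical variation "wins"
# about half of the simulated experiments.
print(simulate_aa_tests())
```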

Lesson learned: if you get a winning variation when testing identical assets, there is nothing wrong with your A/B testing tool – it’s just a coincidence. With sequential A/B testing, you’ll see that the result is insignificant.

7. Getting upset when a new variation loses to the current one 

Some mobile marketers and user acquisition managers get disappointed when an experiment unexpectedly shows that the current variation wins, and start wasting budget on more paid traffic in the hope that the new variation will eventually pull ahead.

There is no reason to feel bad if your hypothesis has not been confirmed. If you had changed something on your app store product page without testing, you would have lost a part of your potential customers and, consequently, money. At the same time, having spent money on this experiment, you have paid for the knowledge. Now you know what works for your app and what doesn’t.

Lesson learned: everything happens for a reason, and you should not be sorry if your A/B test hasn’t confirmed your hypothesis. You now have a clear vision of what assets perform best for your game or app.

YouTube Ad Specs, Sizes, and Examples [2024 Update]

Introduction

With billions of users each month, YouTube is the world’s second largest search engine and top website for video content. This makes it a great place for advertising. To succeed, advertisers need to follow the correct YouTube ad specifications. These rules help your ad reach more viewers, increasing the chance of gaining new customers and boosting brand awareness.

Types of YouTube Ads

Video Ads

  • Description: These play before, during, or after a YouTube video on computers or mobile devices.
  • Types:
    • In-stream ads: Can be skippable or non-skippable.
    • Bumper ads: Non-skippable, short ads that play before, during, or after a video.

Display Ads

  • Description: These appear in different spots on YouTube and usually use text or static images.
  • Note: YouTube does not support display image ads directly on its app, but these can be targeted to YouTube.com through Google Display Network (GDN).

Companion Banners

  • Description: Appears to the right of the YouTube player on desktop.
  • Requirement: Must be purchased alongside In-stream ads, Bumper ads, or In-feed ads.

In-feed Ads

  • Description: Resemble videos with images, headlines, and text. They link to a public or unlisted YouTube video.

Outstream Ads

  • Description: Mobile-only video ads that play outside of YouTube, on websites and apps within the Google video partner network.

Masthead Ads

  • Description: Premium, high-visibility banner ads displayed at the top of the YouTube homepage for both desktop and mobile users.

YouTube Ad Specs by Type

Skippable In-stream Video Ads

  • Placement: Before, during, or after a YouTube video.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
    • Action: 15-20 seconds

Non-skippable In-stream Video Ads

  • Description: Must be watched completely before the main video.
  • Length: 15 seconds (or 20 seconds in certain markets).
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Vertical: 9:16
    • Square: 1:1

Bumper Ads

  • Length: Maximum 6 seconds.
  • File Format: MP4, Quicktime, AVI, ASF, Windows Media, or MPEG.
  • Resolution:
    • Horizontal: 640 x 360px
    • Vertical: 480 x 360px

In-feed Ads

  • Description: Show alongside YouTube content, like search results or the Home feed.
  • Resolution:
    • Horizontal: 1920 x 1080px
    • Vertical: 1080 x 1920px
    • Square: 1080 x 1080px
  • Aspect Ratio:
    • Horizontal: 16:9
    • Square: 1:1
  • Length:
    • Awareness: 15-20 seconds
    • Consideration: 2-3 minutes
  • Headline/Description:
    • Headline: Up to 2 lines, 40 characters per line
    • Description: Up to 2 lines, 35 characters per line

Display Ads

  • Description: Static images or animated media that appear on YouTube next to video suggestions, in search results, or on the homepage.
  • Image Size: 300×60 pixels.
  • File Type: GIF, JPG, PNG.
  • File Size: Max 150KB.
  • Max Animation Length: 30 seconds.

Outstream Ads

  • Description: Mobile-only video ads that appear on websites and apps within the Google video partner network, not on YouTube itself.
  • Logo Specs:
    • Square: 1:1 (200 x 200px).
    • File Type: JPG, GIF, PNG.
    • Max Size: 200KB.

Masthead Ads

  • Description: High-visibility ads at the top of the YouTube homepage.
  • Resolution: 1920 x 1080 or higher.
  • File Type: JPG or PNG (without transparency).
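If you manage many creatives, the specs above can be captured in a small lookup table and checked before upload. The snippet below is an illustrative helper built only from the in-stream numbers listed in this article; it is not an official Google or YouTube validation tool, so always confirm against current documentation:

```python
# Illustrative helper: checking a creative against the in-stream video
# resolutions listed in this article.
INSTREAM_SPECS = {
    "horizontal": {"resolution": (1920, 1080), "aspect_ratio": "16:9"},
    "vertical":   {"resolution": (1080, 1920), "aspect_ratio": "9:16"},
    "square":     {"resolution": (1080, 1080), "aspect_ratio": "1:1"},
}

def check_instream_creative(width: int, height: int) -> str:
    """Return the matching orientation, or a warning if nothing matches."""
    for orientation, spec in INSTREAM_SPECS.items():
        if (width, height) == spec["resolution"]:
            return f"OK: {orientation} ({spec['aspect_ratio']})"
    return "No match: resize to 1920x1080, 1080x1920 or 1080x1080"

print(check_instream_creative(1920, 1080))  # OK: horizontal (16:9)
print(check_instream_creative(1280, 720))   # No match: resize ...
```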

Conclusion

YouTube offers a variety of ad formats to reach audiences effectively in 2024. Whether you want to build brand awareness, drive conversions, or target specific demographics, YouTube provides a dynamic platform for your advertising needs. Always follow Google’s advertising policies and the technical ad specs to ensure your ads perform their best. Ready to start using YouTube ads? Contact us today to get started!

A deeper dive into data, personalization and Copilots

Salesforce launched a collection of new, generative AI-related products at Connections in Chicago this week. They included new Einstein Copilots for marketers and merchants and Einstein Personalization.

To better understand not only the potential impact of the new products but also the evolving Salesforce architecture, we sat down with Bobby Jania, CMO of Marketing Cloud.

Dig deeper: Salesforce piles on the Einstein Copilots

Salesforce’s evolving architecture

It’s hard to deny that Salesforce likes coming up with new names for platforms and products (what happened to Customer 360?), and this can sometimes leave observers wondering whether something is brand new or just old with a brand new name. In particular, what exactly is Einstein 1 and how is it related to Salesforce Data Cloud?

“Data Cloud is built on the Einstein 1 platform,” Jania explained. “The Einstein 1 platform is our entire Salesforce platform and that includes products like Sales Cloud, Service Cloud — that it includes the original idea of Salesforce not just being in the cloud, but being multi-tenancy.”

Data Cloud — not an acquisition, of course — was built natively on that platform. It was the first product built on Hyperforce, Salesforce’s new cloud infrastructure architecture. “Since Data Cloud was on what we now call the Einstein 1 platform from Day One, it has always natively connected to, and been able to read anything in Sales Cloud, Service Cloud [and so on]. On top of that, we can now bring in, not only structured but unstructured data.”

That’s a significant progression from the position, several years ago, when Salesforce had stitched together a platform around various acquisitions (ExactTarget, for example) that didn’t necessarily talk to each other.

“At times, what we would do is have a kind of behind-the-scenes flow where data from one product could be moved into another product,” said Jania, “but in many of those cases the data would then be in both, whereas now the data is in Data Cloud. Tableau will run natively off Data Cloud; Commerce Cloud, Service Cloud, Marketing Cloud — they’re all going to the same operational customer profile.” They’re not copying the data from Data Cloud, Jania confirmed.

Another thing to know is that it’s possible for Salesforce customers to import their own datasets into Data Cloud. “We wanted to create a federated data model,” said Jania. “If you’re using Snowflake, for example, we more or less virtually sit on your data lake. The value we add is that we will look at all your data and help you form these operational customer profiles.”

Let’s learn more about Einstein Copilot

“Copilot means that I have an assistant with me in the tool where I need to be working that contextually knows what I am trying to do and helps me at every step of the process,” Jania said.

For marketers, this might begin with a campaign brief developed with Copilot’s assistance, the identification of an audience based on the brief, and then the development of email or other content. “What’s really cool is the idea of Einstein Studio where our customers will create actions [for Copilot] that we hadn’t even thought about.”

Here’s a key insight (back to nomenclature). We reported on Copilot for marketers, Copilot for merchants, Copilot for shoppers. It turns out, however, that there is just one Copilot, Einstein Copilot, and these are use cases. “There’s just one Copilot, we just add these for a little clarity; we’re going to talk about marketing use cases, about shoppers’ use cases. These are actions for the marketing use cases we built out of the box; you can build your own.”

It’s surely going to take a little time for marketers to learn to work easily with Copilot. “There’s always time for adoption,” Jania agreed. “What is directly connected with this is, this is my ninth Connections and this one has the most hands-on training that I’ve seen since 2014 — and a lot of that is getting people using Data Cloud, using these tools rather than just being given a demo.”

What’s new about Einstein Personalization

Salesforce Einstein has been around since 2016 and many of the use cases seem to have involved personalization in various forms. What’s new?

“Einstein Personalization is a real-time decision engine and it’s going to choose next-best-action, next-best-offer. What is new is that it’s a service now that runs natively on top of Data Cloud.” A lot of real-time decision engines need their own set of data that might actually be a subset of data. “Einstein Personalization is going to look holistically at a customer and recommend a next-best-action that could be natively surfaced in Service Cloud, Sales Cloud or Marketing Cloud.”

Finally, trust

One feature of the presentations at Connections was the reassurance that, although public LLMs like ChatGPT could be selected for application to customer data, none of that data would be retained by the LLMs. Is this just a matter of written agreements? No, not just that, said Jania.

“In the Einstein Trust Layer, all of the data, when it connects to an LLM, runs through our gateway. If there was a prompt that had personally identifiable information — a credit card number, an email address — at a minimum, all that is stripped out. The LLMs do not store the output; we store the output for auditing back in Salesforce. Any output that comes back through our gateway is logged in our system; it runs through a toxicity model; and only at the end do we put PII data back into the answer. There are real pieces beyond a handshake that this data is safe.”
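The Trust Layer’s actual implementation is not public, but the flow Jania describes (strip PII before the prompt reaches the LLM, log and screen the output, then restore the data at the end) can be pictured roughly as follows. This sketch is purely hypothetical: the regexes, function names and stub LLM are illustrative stand-ins, not Salesforce code or APIs.

```python
# Purely illustrative sketch of the kind of gateway flow described above.
# None of this is Salesforce code; the LLM call and toxicity check are stubs.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_pii(prompt: str):
    """Replace emails and card numbers with placeholder tokens; remember originals."""
    found = {}
    for kind, pattern in (("EMAIL", EMAIL), ("CARD", CARD)):
        for i, match in enumerate(pattern.findall(prompt), start=1):
            token = f"<{kind}_{i}>"
            found[token] = match
            prompt = prompt.replace(match, token, 1)
    return prompt, found

def gateway(prompt, llm, toxicity_ok, audit_log):
    masked, pii = mask_pii(prompt)        # PII stripped before the LLM sees the prompt
    answer = llm(masked)                  # external model never receives the raw values
    audit_log.append(answer)              # output kept internally for auditing
    if not toxicity_ok(answer):
        return "Response blocked by safety check."
    for token, original in pii.items():   # PII restored only at the very end
        answer = answer.replace(token, original)
    return answer

# Toy usage with stub components (no real LLM or toxicity model involved).
log = []
print(gateway(
    "Email jane@example.com about card 4111 1111 1111 1111",
    llm=lambda p: f"Draft reply for: {p}",
    toxicity_ok=lambda text: True,
    audit_log=log,
))
```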
