Mobile A/B Testing: 7 Big Errors and Misconceptions You Need to Avoid

It’s no secret that marketing as a whole rests largely on data, and the same applies to mobile marketing and user acquisition. In this domain, choosing the right product page elements for the App Store and Google Play can make a crucial difference to the success of an app or mobile game. Mobile A/B testing is a tool that helps you make that choice based on data.

However, how many times have we heard that A/B testing doesn’t bring the desired results, or that someone isn’t sure they are running mobile experiments right? This usually comes down to a handful of common mistakes and misinterpretations of data. In this post, I will cover the biggest mistakes and misleading conclusions in mobile app A/B testing so you can recognize and avoid them.

1. Finishing an experiment before you get the right amount of traffic

This is one of the most common mistakes in mobile A/B testing. If you run classic (fixed-horizon) A/B tests, finishing an experiment before you reach the necessary amount of traffic – the sample size – risks giving you statistically unreliable results.

To get reliable evidence, you need to wait until the required amount of traffic is reached for both A and B variations.

If you are looking for an alternative to the classic option, try sequential A/B testing. You will need to start by specifying the baseline conversion rate (the conversion rate of your current variation), the statistical power (80% by default), the significance level, and the Minimum Detectable Effect (MDE) – these inputs determine the sample size.

The significance level is 5% by default, which means the chance of a false positive – declaring a winner when there is no real difference – will not exceed 5%. You can customize this value along with the MDE – the minimum conversion rate increase you would like to be able to detect. Note: don’t change the significance level, MDE or statistical power after you’ve started an experiment.
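To make these inputs concrete, here is a minimal sketch of how they translate into a required sample size per variation, using the standard two-proportion formula. The function name and the numbers are hypothetical, and this is not SplitMetrics’ exact implementation – the platform does this calculation for you.

```python
# Illustrative sketch: estimating the sample size per variation from the
# baseline conversion rate, MDE, significance level and statistical power.
from scipy.stats import norm

def sample_size_per_variation(baseline_cr, mde, alpha=0.05, power=0.80):
    """Visitors needed in each variation to detect a relative lift of `mde`."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + mde)        # conversion rate you hope the new variation reaches
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_power = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                 z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Hypothetical example: 30% baseline conversion rate, 10% relative MDE
print(sample_size_per_variation(0.30, 0.10))  # roughly 3,800 visitors per variation
```

Notice how a smaller MDE or a stricter significance level drives the required sample size up, which is exactly why stopping early undermines the test.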

With sequential A/B testing, the algorithm repeatedly checks your variations for statistical significance and tracks how much traffic is still needed before the experiment completes. That’s how it works on our SplitMetrics A/B testing platform.
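As a rough illustration of the idea, here is a heavily simplified interim check, assuming a group-sequential design with five planned looks and a Pocock-style constant boundary. This is a sketch under those assumptions, not SplitMetrics’ actual algorithm.

```python
# Simplified group-sequential check: compare variations at a planned interim look
# against a stricter-than-usual boundary so that repeated peeking stays honest.
import math

POCOCK_Z = 2.413  # critical z-value for 5 equally spaced looks at overall alpha = 0.05

def check_at_look(conv_a, n_a, conv_b, n_b):
    """Return a decision based on the cumulative data seen so far."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return "stop: significant difference" if abs(z) >= POCOCK_Z else "continue collecting traffic"

# Hypothetical interim look after 800 visitors per variation:
print(check_at_look(conv_a=240, n_a=800, conv_b=276, n_b=800))  # -> continue collecting traffic
```

The practical benefit is the one described above: you can look at the results at every planned checkpoint without inflating the false-positive rate.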

Lesson learned: if you run classic A/B tests, don’t finish an experiment until the required amount of traffic is reached. Alternatively, use sequential A/B testing, which lets you check the results at any time.

2. Finishing an experiment before 7 days have passed

Why do you have to wait for at least seven days? Different apps and mobile games experience peaks in activity on different days of the week. For example, business apps see bursts of activity on Mondays, while games are most popular on weekends.

To get reliable results from mobile A/B testing experiments, you should capture your app’s peak days during an experiment. Otherwise, you run the risk of jumping to conclusions.

For example, say you run tests for a task management app. You start an experiment on Wednesday and finish it on Saturday. But most of your target audience uses the app on Mondays, so you will miss the point: the surge of activity falls outside the experiment period. Or vice versa: you run A/B tests for your racing game from Friday to Sunday only – precisely the game’s peak days. In this case, the results will also be skewed.

So, even if you’ve avoided the number one mistake and have already got the required amount of traffic on the very first day, don’t stop the experiment until seven days have passed.

Lesson learned: because weekly peaks in activity vary for each mobile game or app, do not finish an experiment before the complete (seven-day) cycle has passed.

3. Testing design changes that are too small

One more common mistake in mobile A/B testing is comparing variations that look almost the same due to minor differences in design.

If the only difference between the app icons you are testing is a blue background instead of a light blue one, or you have added one tiny detail to a screenshot variation, you are definitely in trouble. Users simply don’t notice such small changes.

In this case, both variations will show the same result, and that’s perfectly normal. So if you ever tried app store A/B tests but gave up on them because the variations performed equally, it’s time to reflect on what went wrong. Maybe your variations looked pretty much the same.

To make sure you are A/B testing a meaningful change, show both versions to colleagues or friends. Let them look at each variation for 3-5 seconds. If they can’t tell the difference, you’d better redesign your visual assets.

Lesson learned: if you test variations with overly small design changes, expect them to show the same result. Such changes are insignificant to users, so it’s better to test app icons and screenshots that differ markedly from each other.

4. Your banner ads have the same design as one of the app store visual assets

If you use a third-party mobile A/B testing tool such as SplitMetrics, you buy traffic and place banners on ad networks. The point is that such a banner shouldn’t look like one of the visual assets you are testing, whether that’s a screenshot or elements of an icon.

For example, you run experiments for your educational app. You design a banner that shares an element with the icon from variation A, while variation B uses a completely different icon. Variation A will show a higher conversion rate simply because it matches the banner users initially saw and clicked on.

Studies show that when people see something repeatedly, their brain processes it faster, which creates a sense of liking. As a result, users tend to unconsciously tap images that already feel familiar.

Lesson learned: when working on banner ads, keep the design as neutral as possible. The banner design should not coincide with the design of your app icon or screenshot variations.

5. Testing several hypotheses at once

It makes no sense to make multiple changes and test them within the same experiment. Some mobile marketers draw the wrong conclusions after running a test, since they have made several changes and, in fact, cannot know what exactly affected the result.

If you have decided to change the color of your app store product page screenshots, create one or a few variations with a different background color and run a test. Don’t change the color, order, and text on the screenshots at the same time. Otherwise, you will get a winning variation (say, variation B) and have no clue whether it was the color change that actually worked.

Lesson learned: if you test multiple hypotheses at once, you will not be able to tell which change actually drove the result.

6. Misinterpreting the situation when two variations are the same but you get a winner

When running A/A tests, you might be confused when an A/B testing tool shows a winning variation between two identical assets. This is particularly common with the Google Play Store’s built-in experiments tool.

On the SplitMetrics platform, at the 5% significance level, such a case is simply reported as an insignificant result.

Small differences between two identical variations are pure coincidence – different users simply reacted a little differently. It’s just like flipping a coin: there is a 50-50 chance you’ll get heads or tails, and likewise a 50-50 chance that one of the variations will show a slightly better result.

For such a tiny difference to become statistically significant, you would need results from virtually every user, which is impossible in practice.
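A quick simulation makes the coin-flip analogy tangible. The conversion rate and sample sizes below are made up purely for illustration.

```python
# Illustrative A/A simulation: two identical variations with the same true conversion
# rate still produce a "significant winner" about 5% of the time when judged naively
# at the 5% significance level.
import math, random

def z_test_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return abs(conv_b / n_b - conv_a / n_a) / se >= z_crit

random.seed(42)
true_cr, n, runs = 0.30, 2000, 2000  # hypothetical numbers
false_winners = sum(
    z_test_significant(sum(random.random() < true_cr for _ in range(n)), n,
                       sum(random.random() < true_cr for _ in range(n)), n)
    for _ in range(runs)
)
print(f"'Winner' declared in {false_winners / runs:.1%} of identical A/A runs")
# Expect a figure close to 5% – pure chance, not a broken tool.
```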

Lesson learned: if you get a winning variation when testing identical assets, there is nothing wrong with your A/B testing tool – it’s just a coincidence. With sequential A/B testing, you’ll simply see that the result is insignificant.

7. Getting upset when a new variation loses to the current one 

Some mobile marketers and user acquisition managers get disappointed when an experiment unexpectedly shows that the current variation wins, and start wasting budget on more paid traffic in the hope that the new variation will eventually pull ahead.

There is no reason to feel bad if your hypothesis has not been confirmed. If you had changed something on your app store product page without testing, you would have lost a part of your potential customers and, consequently, money. At the same time, having spent money on this experiment, you have paid for the knowledge. Now you know what works for your app and what doesn’t.

Lesson learned: everything happens for a reason, and you should not be sorry if your A/B test hasn’t confirmed your hypothesis. You now have a clear vision of what assets perform best for your game or app.

Source: PPChero.com

Trends in Content Localization – Moz

Multinational fast food chains are one of the best-known examples of recognizing that product menus may sometimes have to change significantly to serve distinct audiences. The above video is just a short run-through of the same business selling smokehouse burgers, kofta, paneer, and rice bowls in an effort to appeal to people in a variety of places. I can’t personally judge the validity of these representations, but what I can see is that, in such cases, you don’t merely localize your content but the products on which your content is founded.

Sometimes, even the branding of businesses is different around the world; what we call Burger King in America is Hungry Jack’s in Australia, Lays potato chips here are Sabritas in Mexico, and DiGiorno frozen pizza is familiar in the US, but Canada knows it as Delissio.

Tales of product tailoring failures often become famous, likely because some of them may seem humorous from a distance, but cultural sensitivity should always be taken seriously. If a brand you are marketing is on its way to becoming a large global seller, the best insurance against reputation damage and revenue loss as a result of cultural insensitivity is to employ regional and cultural experts whose first-hand and lived experiences can steward the organization in acting with awareness and respect.

How AI Is Redefining Startup GTM Strategy

AI and startups? It just makes sense.

More promotions and more layoffs

For martech professionals, salaries are good and promotions are coming faster; unfortunately, layoffs are coming faster, too. That’s according to the just-released 2024 Martech Salary and Career Survey. Another very unfortunate finding: the median salary of women below the C-suite level is 35% less than what men earn.

The last year saw many different economic trends, some at odds with each other. Although unemployment remained very low overall and the economy grew, some businesses – especially those in technology and media – cut both jobs and spending. Reasons cited for the cuts include overhiring during the early years of the pandemic, higher interest rates and corporate greed.

Dig deeper: How to overcome marketing budget cuts and hiring freezes

Be that as it may, for the employed it remains a good time to be a martech professional. Salaries remain lucrative compared to many other professions, with an overall median salary of $128,643. 

Here are the median salaries by role:

  • Senior management $199,653
  • Director $157,776
  • Manager $99,510
  • Staff $89,126

Senior managers make more than twice what staff make. Directors and up had a $163,395 median salary compared to manager/staff roles, where the median was $94,818.

One-third of those surveyed said they were promoted in the last 12 months, a finding that was nearly equal among director+ (32%) and managers and staff (30%). 

Extend the time frame to two years, and nearly three-quarters of director+ respondents say they received a promotion, while the same can be said for two-thirds of manager and staff respondents.

Dig deeper: Skills-based hiring for modern marketing teams

Employee turnover 

In 2023, we asked survey respondents if they noticed an increase in employee churn and whether they would classify that churn as a “moderate” or “significant” increase. For 2024, given the attention on cost reductions and layoffs, we asked if the churn they witnessed was “voluntary” (e.g., people leaving for another role) or “involuntary” (e.g., a layoff or dismissal). More than half of the marketing technology professionals said churn increased in the last year. Nearly one-third classified most of the churn as “involuntary.”

Men and Women

This year, instead of using average salary figures, we used the median figures to lessen the impact of outliers in the salary data. As a result, the gap between salaries for men and women is even more glaring than it was previously.
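As a quick illustration of why the median is more robust (the salaries below are invented, not survey data):

```python
# Invented salaries showing how a single outlier drags the mean far more than the median.
from statistics import mean, median

salaries = [85_000, 92_000, 98_000, 105_000, 110_000, 1_200_000]  # one extreme outlier
print(f"mean:   ${mean(salaries):,.0f}")    # ~$281,667 – pulled up by the outlier
print(f"median: ${median(salaries):,.0f}")  # $101,500 – barely affected
```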

In last year’s report, men earned an average of 24% more than women. This year, the median salary of men is 35% more than the median salary of women. That is, until you reach the upper echelons: women at director level and above earned 5% more than men.

Methodology

The 2024 MarTech Salary and Career Survey is a joint project of MarTech.org and chiefmartec.com. We surveyed 305 marketers between December 2023 and February 2024; 297 of those provided salary information. Nearly 63% (191) of respondents live in North America; 16% (50) live in Western Europe. The conclusions in this report are limited to responses from those individuals only. Other regions were excluded due to the limited number of respondents. 

Download your copy of the 2024 MarTech Salary and Career Survey here. No registration is required.
