A/B testing is one of the most widely used methods in user experience research, and most companies rely on it.
For product managers and data scientists, it's often the primary way to experiment with product changes and decide which ones to move forward on.
If you have upcoming product manager or data science interviews, you should expect to be asked several questions on A/B testing and be prepared to answer them fully.
Analytical questions ask you to reason about metrics and use quantitative analysis to improve and ship excellent products. In this article, we'll break down the major types of analytical questions and show you detailed examples of how to ace the analytical PM interview, straight from the PM interviewers who ask the questions.
Quite frankly, if you do your homework, there's no reason you can't ace the A/B testing questions in your next PM interview.
We even have a lesson dedicated to A/B testing in our PM interview course; check it out here!
But that's what this article is also going to help you with. Here's a guide on what A/B testing is and how you can ace your A/B interview questions.
Nobody can tell you precisely what A/B testing interview questions will be thrown your way during your upcoming PM or data science interviews.
However, here are some common ones we've compiled below, drawn from questions previously asked at Big Tech companies like Google, Microsoft, Amazon, and others.
So long as you get a good handle on the fundamentals of A/B testing (which we’ll discuss in greater detail later in this article), you should have the tools to answer any unforeseen or off-the-cuff A/B questions.
Unsurprisingly, the best way to ace the A/B portion of your PM interview is to understand what A/B testing is.
You don't need to memorize a list of every possible question, either. A comprehensive understanding of the methodology and its uses is the best way to prepare for anything you may have to answer on the fly.
So what is A/B testing anyway?
Well, quite simply, A/B testing (also called split testing or bucket testing) is the process of testing two versions, A and B, of a web page, product design, layout, etc., and comparing their performance on a chosen metric.
This is done by randomly showing each user one of the two versions and measuring which performs better on the selected metric. Of course, the metric can vary: page views, conversions, bounce rate, and so on.
Whether it's the language on a landing page, the style of a button, or the color of some element, A/B testing can help PMs make data-driven decisions resulting in objective improvements.
For example, imagine you have a call-to-action at the bottom of a landing page inviting visitors to sign up for an email list.
If a PM wanted to choose between two phrases that would best push visitors to subscribe to that list, they could run an A/B test.
They would formulate two variants, say "Sign up today!" and "Join our newsletter!", run both simultaneously, and then measure which produces more sign-ups.
That's A/B testing in a nutshell—the measuring and comparing of two variants to see which performs better under a specific metric.
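To make this concrete, here's a minimal Python sketch of how an experiment like the newsletter test above might be simulated. The variant copy, the 50/50 random split, and the placeholder conversion rates are illustrative assumptions; in a real test, conversion behavior comes from live visitors, not a random number generator.

```python
import random

# Hypothetical simulation of the newsletter CTA test described above.
# Each visitor is randomly shown variant A or B, and we tally sign-ups.
VARIANTS = {"A": "Sign up today!", "B": "Join our newsletter!"}

def run_experiment(n_visitors: int) -> dict:
    visitors = {"A": 0, "B": 0}
    signups = {"A": 0, "B": 0}
    for _ in range(n_visitors):
        variant = random.choice(["A", "B"])  # 50/50 random split
        visitors[variant] += 1
        # Placeholder behavior: assume A converts ~12% and B ~10% of visitors.
        # In a real test, this is the user's actual decision, not a simulation.
        if random.random() < (0.12 if variant == "A" else 0.10):
            signups[variant] += 1
    return {v: signups[v] / visitors[v] for v in VARIANTS}

print(run_experiment(10_000))  # e.g. {'A': 0.121, 'B': 0.098}
```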
While that's all well and good, why are A/B tests necessary, really?
Ultimately, this form of experimental design is one of the many ways to objectively compare and contrast aspects of product choices that may otherwise be difficult to quantify.
Many elements of a product may traditionally be thought to be subjective. For instance, whether a button is red or blue could be chalked up to a preferential choice.
Thus, differentiating which would be better from a conversion perspective may be difficult without a data-driven method like an A/B test.
In this way, A/B testing is essential because it provides objective and quantifiable data regarding elements of a product that may otherwise be qualitative.
As such, the goal of A/B testing is to objectively measure which version of a product performs better.
The purpose of A/B testing is to provide actionable data so that the most prudent decisions can be made.
For example, should this button have rounded or square corners?
Should it be flat or have a drop shadow?
Should it be this color or that?
A/B testing can be used to help objectively answer such questions.
While everything we've covered is relatively straightforward, aspiring PMs should be aware of some more complexity behind the scenes.
Here are some fundamental things every PM should know about A/B testing to prepare for their upcoming PM interview.
First and foremost, a product manager or data scientist should understand the difference between the related testing methodologies to determine if an A/B test is the best course of action.
There are at least two other UX test designs that PMs should know: the A/B/N test and the multivariate test.
The first of these is the A/B/N test.
In this kind of test (usually done for web pages), more than two versions are tested, whereas an A/B test only tests two versions against each other.
The N in A/B/N stands for "number," meaning the number of tested versions.
A/B/N tests are similar to multivariate tests, except multivariate tests involve testing all possible combinations of the different variables at once, whereas A/B/N does not.
Instead, A/B/N is used to try several different versions against each other.
These tests are best used for major layout or design decisions rather than testing the individual differences between specific elements.
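As a quick illustration of the "N" in A/B/N, here's a small Python sketch of one way traffic could be split deterministically across more than two versions. The variant labels and experiment name are hypothetical:

```python
import hashlib

# Hypothetical A/B/N test with N = 4 versions of a landing page.
VARIANTS = ["A", "B", "C", "D"]

def assign_variant(user_id: str, experiment: str = "landing-page") -> str:
    """Deterministically bucket a user into one of N variants.

    Hashing the user ID keeps each user's assignment stable across visits
    while spreading traffic roughly evenly across all versions.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user_42"))  # same user always sees the same version
```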
Each experimental method serves a different purpose, and more often than not, your interviewer will ask when you'd use each one and why.
A/B tests, as the name suggests, compare only two variants. If too many variables are changed at once, it becomes difficult to discern why one version outperformed the other.
When multiple variables need to be tested, multivariate testing is the way to go.
Multivariate testing is when all possible combinations of versions and all their variables are tested simultaneously.
This test design is best used when several product changes are to be decided.
Rather than running dozens of A/B tests on every design change, a multivariate test can be performed in which every possible combination is tested against each other.
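To see how quickly the combinations pile up, here's a short Python sketch that enumerates every combination of a few hypothetical design variables (the variable names and options are made up for illustration):

```python
from itertools import product

# Hypothetical design variables for a multivariate test.
variables = {
    "headline": ["Sign up today!", "Join our newsletter!"],
    "button_color": ["red", "blue"],
    "button_shape": ["rounded", "square"],
}

# A multivariate test considers every combination of every variable,
# whereas a single A/B test would compare just two variants.
combinations = list(product(*variables.values()))
for combo in combinations:
    print(dict(zip(variables.keys(), combo)))

print(f"{len(combinations)} variants to test")  # 2 x 2 x 2 = 8
```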
Aspiring PMs must also know how to successfully conduct an A/B test.
Not all A/B tests are created equal.
Make sure you understand how to choose the right metrics, how to interpret your results, and what the p-value and null hypothesis tell you.
Before starting your A/B test, you must choose valuable and relevant metrics.
Well, what are the usual suspects in the metric department?
Generally speaking, A/B tests measure one of the following: conversion rate, clickthrough rate, bounce rate, time on page, or impressions.
Ask yourself beforehand which of these conveys the most valuable and relevant information for the product and engineering teams.
Of course, it'll depend on the thing that's being tested.
If you're testing a landing page with valuable information on your product, time spent on the page and bounce rate may be the wisest choices.
If you're testing a CTA, clickthrough rate is probably the way to go.
The impression count will be the most valuable if you compare two versions of a social media ad.
In the worst-case scenario, you spend all this time and effort conducting an A/B test, and your chosen metrics aren't that insightful. As such, the results would ultimately be useless.
By and large, the software you'll use for A/B testing will come with many tools and features to help you understand and measure the results of your tests.
For example, after the conclusion of your tests, the software will show you the difference in conversion rates, or whatever other metrics you've chosen to measure, between the two versions, along with a margin of error.
More likely than not, the A/B testing software will make the results relatively easy to interpret.
You'll see the total number of test users, the measure of your chosen metric, possibly the device type of the tested users, and the particular uplift of the versions.
Deducing the winning version is as easy as comparing the conversion rates: whichever is highest is the better-performing version.
Because A/B testing falls into the category of statistical analysis, PMs must understand the importance of the p-value and the null hypothesis.
The p-value is a number between 0 and 1 that indicates whether the test results are statistically significant rather than just a product of randomness.
This is very important, as a successful A/B test must actually demonstrate which version performs better, not which randomly performs better during the tests.
The number itself represents the probability of seeing results at least as extreme as yours purely by chance, assuming there is no real difference between the versions.
So, for instance, if version A converts more website visitors than version B, you may think that it's because version A had a more straightforward layout, for example.
However, the p-value tells you how likely it is that a difference that large would show up by chance alone, rather than confirming your hypothesis outright.
In other words, a low p-value is evidence against the null hypothesis (the assumption that there's no real difference between the versions). By convention, a p-value of .05 or lower is treated as statistically significant; anything higher means you can't rule out randomness.
Read more about p-values and how to calculate them in A/B testing here.
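If you'd like to compute a p-value yourself rather than rely on your testing tool, one common approach for comparing conversion rates is a two-proportion z-test. The sketch below uses statsmodels with hypothetical conversion counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and visitor counts for versions A and B.
conversions = [180, 100]
visitors = [1000, 1000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

# If p < 0.05, the observed difference in conversion rates is unlikely
# to be random noise, so we reject the null hypothesis of "no difference."
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print("Statistically significant" if p_value < 0.05 else "Not significant")
```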
Each of these aspects can determine the success and effectiveness of an A/B test. Therefore, some of the A/B testing interview questions you encounter will likely test your understanding of them.
Not only that, but UX research and experimental design can be time-consuming and expensive.
Time and money are a business's most valuable assets, neither of which can be wasted.
Therefore, your interviewer's questions will most likely be focused on whether you know how to successfully and efficiently conduct A/B tests.
Now that we've covered how to properly run an A/B test, let's look at an example from start to finish.
Let's imagine we have a banner for a software product featuring a call-to-action. Now, let's say we have two versions we'd like to test: one with very few words (version A) and one with a little more information (version B).
First of all, we must design our A/B test for these banners.
Given that the element in question features a CTA, we can figure that the most insightful metric in this case is clickthrough rate.
The version that pushes more people to click is the clear winner.
Next, we must decide on an adequate sample size for our experiment.
The sample size of your tests has a lot to do with the p-value in your results.
Ultimately, you need a large enough number of users to demonstrate statistical significance after your experiment. Otherwise, the results may not be that trustworthy or accurate.
Remember that the p-value is the measure of this significance. If the sample size is too small, chances are your p-value will be high, which means you can't rule out that your results are simply a byproduct of randomness.
However, if the difference between the variants is large, you can be confident in your results with a smaller sample size.
In other words, if our CTA version A converts 60% more users than version B after testing a sample of 1,000 users, we can be confident that version A performs better.
For a difference that large, a sample size of 1,000 users is adequate.
However, suppose version A only had a 10% difference. In that case, we'd probably need to increase the sample size substantially to be confident in the significance of our results.
Again, the sample size necessary has everything to do with the p-value of your results.
The size must be large enough to demonstrate that your results are statistically significant, not simply random.
As we mentioned earlier, your p-value should be .05 or less, indicating a 5% or smaller chance that the outcome is simply a product of randomness.
Ultimately, you should increase the sample size and keep the test running until your results reach this 95% confidence level.
Determining the size of your sample is the most statistically heavy part of the A/B testing process. We understand it can be a little confusing, especially if you don't have much experience in statistics. This article can be beneficial in determining what sample size you'd need for your A/B experiments.
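For a rough sense of how that calculation works, here's a sketch of a standard power analysis for two proportions using statsmodels. The baseline rate, the minimum lift worth detecting, and the conventional 95% confidence / 80% power settings are assumptions you'd adjust for your own product:

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Assumed baseline conversion rate and the smallest lift worth detecting.
baseline_rate = 0.10            # e.g., version B converts 10% of visitors
minimum_detectable_rate = 0.12  # we only care about a lift to 12% or more

effect_size = proportion_effectsize(minimum_detectable_rate, baseline_rate)

# Solve for the sample size per variant at 95% confidence (alpha = 0.05)
# and 80% power, the conventional defaults for A/B tests.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"~{round(n_per_variant)} users needed per variant")
```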
After we've run our test on these CTA banners, we must interpret our results.
Generally speaking, the A/B software of your choice will show you the difference in measured metrics and a margin of error.
For example, let's imagine that version A converted 18% of its visitors and version B 10% with a margin of error of 2.3%.
Therefore, version A shows an 8-percentage-point "lift," or difference, over version B.
It should be no surprise that version A is the winner in this A/B test.
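Here's how you could reproduce that kind of readout by hand. The example above doesn't state the sample sizes, so the sketch below assumes roughly 1,725 users per variant, which happens to give a margin of error close to the quoted 2.3% under a normal approximation:

```python
import math

# Measured conversion rates from the banner test, plus an assumed
# sample size per variant (the example doesn't state one).
p_a, p_b = 0.18, 0.10
n_a = n_b = 1725

lift = p_a - p_b  # absolute difference in conversion rate

# 95% margin of error for the difference, using a normal approximation.
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
margin_of_error = 1.96 * se

print(f"Lift: {lift:.1%} ± {margin_of_error:.1%}")  # -> Lift: 8.0% ± 2.3%
# If the interval excludes zero, version A's win is statistically credible.
```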
Nobody can tell you precisely what A/B testing interview questions will be thrown your way during your upcoming PM interview.
However, there are some common ones that we've compiled below from real product manager and data science interviews. Not only that, but if you get a good handle on everything we've mentioned so far, you should have the tools to answer any unforeseen or off-the-cuff A/B questions.
Remember that A/B tests, as the name suggests, compare only two variants. If too many variables are included in an A/B test, it becomes difficult to discern why one version outperformed the other.
When multiple variables need to be tested at once, multivariate testing is the way to go.
The primary issue is not that these tests couldn't tell you which version would outperform the other.
Instead, it becomes impossible to extrapolate the relationship between versions and metrics if too many factors are involved.
For an A/B test to be successful and insightful, it needs to be as focused as possible: one winning metric with one variable.
Obviously, you could still measure multiple metrics simultaneously, but how would you then choose which version wins?
Which metric should you look at to determine the winner?
This is the primary issue with several different metrics during an A/B test.
A successful A/B testing hypothesis follows a simple yet essential formula with three crucial components: a variable, a result, and a rationale.
Typically speaking, you can structure your A/B testing hypotheses in this way:
If <variable>, then <result> because of <rationale>.
So, how would this look in reality? Let's take our previous example with the CTA banners.
One version, A, has a CTA with fewer words, whereas the other, B, has more detailed information. Before your experiments, you can postulate the following hypothesis:
If the CTA contains fewer words (variable), then more visitors will be converted (result) because the CTA will be easier to digest and require less reading on the part of the customer (rationale).
Using this formula, you can reliably create solid and successful hypotheses for your A/B tests.
A detailed video answer can be found in our PM interview course, here.
Our product management interview course teaches you the essential skills you need to ace your PM interview, with hours of example questions, videos, and interview tips.
An A/B test can fail for many reasons.
Some common ones are choosing the wrong metrics, misinterpreting a statistically insignificant p-value, not having a large enough sample of users, or forming a misguided hypothesis.
Ultimately, A/B tests provide the most value when businesses use them to learn how to improve their operations, products, websites, and, most importantly, their bottom lines.
A/B testing provides actionable insights for companies so that they may reduce risks, improve customer engagement, convert more customers, and increase sales.
In addition, A/B tests show businesses where to focus their resources most efficiently, which, in turn, improves their ROI.
Given that the most successful companies are the ones that maximize their profits, A/B testing is a crucial piece of any business's overall strategy to do just that.
Here at Exponent, we know better than anyone that it may be exhilarating and nerve-wracking when you have a PM or data science interview.
So, to help you boost your chances, we've designed several Interview Prep Courses for Product Management, Software Engineering, Data Science, Product Marketing Management, Technical Program Management, and Product Design.
Not only that, but we also offer industry-leading interview coaching to help you seal the deal. Book a session with an Exponent coach.
We've partnered with dozens of industry insiders and career experts in product management, program management, product design, software engineering, and data science fields who can help you ace your interviews and nail your dream job.
Check out our list here and book a session today!
💬 Review more commonly asked sample PM interview questions.
📖 Read through our company-specific Product Manager interview guides
👯‍♂️ Practice your behavioral and leadership skills with our mock interview practice tool.
👨🎓 Take our complete Product Management interview course.
Exponent is the fastest-growing tech interview prep platform. Get free interview guides, insider tips, and courses.