Mozilla’s Experimentation Strategy

Mozilla’s experimentation team has been the gold standard for experimentation in the industry for many years. Here are the tips that make up Mozilla’s Experimentation Strategy:

Video: Experiment new features with Firefox Test Pilot (YouTube)
Key Takeaways
1. Experimentation Culture: Mozilla’s success is driven by a strong culture of experimentation, where continuous testing and learning are prioritized.
2. User-Centric Approach: Mozilla focuses on meeting user needs through data-driven experiments, enhancing user experiences and product offerings.
3. Hypothesis Testing: The organization formulates clear hypotheses before conducting experiments, enabling precise evaluation of outcomes.
4. Iterative Improvement: Mozilla’s strategy involves iterative refinement based on experiment results, leading to constant product enhancement.
5. Embracing Failure: Failure is seen as an opportunity to learn and iterate, fostering a growth mindset within Mozilla’s experimentation culture.

Understand That Not All Experiments Are Created Equal

We’ve found that some experiments are more complex than others. Some are riskier, some are more expensive, and some are easier to run. In general, the complexity of an experiment depends on how many people will be affected by it.

The riskiness of an experiment depends on how hard it is to predict its outcome and whether there may be negative effects from running it (for example, if someone gets hurt).

The expense of an experiment also varies depending on factors like the amount of time employees need to spend working on it or the cost of purchasing equipment for use in the experiment (such as cameras or 3D printers).

Finally, and this is perhaps most important, there’s also a difference between “easy-to-run” experiments and “hard-to-run” ones.

An easy-to-run experiment might only take a few hours per week over three months while introducing minimal change into your day-to-day routine; conversely, a hard one may require weeks or months before yielding any results at all.

In the ever-evolving landscape of marketing, embracing a new paradigm can be the key to staying ahead. Learn more about the shift in our article on The New Marketing Research Paradigm and how it’s reshaping strategies.

Focus On The Business Value Of Your Experiment And Make Sure You Know What You’re Trying To Measure

While it’s important to understand the business value of your experiment, it’s equally important to know what you’re trying to measure. The business value of an experiment is different from the business value of a product or feature.

You shouldn’t measure the success of an experiment by how many users have tried and abandoned it; that doesn’t give you any information about whether people are using your product in a way that helps them achieve their goals.

Instead, try measuring how long people spent using the feature and whether it made their lives better in some meaningful way. It’s also important not to confuse “user engagement” with customer satisfaction.

If users aren’t feeling satisfied with your product after trying out a new feature, that could be due to poor design rather than lack of engagement (or vice versa).

For example, if someone installs Firefox but doesn’t use any extensions at all because they don’t like our extension marketplace experience (so they don’t get any value from adding extensions), then they would still count in this metric even though they’re probably not very happy with our product overall (we haven’t given them what they want).

Focus On Learning And Adapting, Not Proving Something Worked

Your experimentation strategy should be driven by a focus on learning, not proving something worked. The goal is to learn from each experiment and make the appropriate changes. Don’t get bogged down in proving that a specific approach will work or not work.

Instead of focusing on proving something works, you should focus on adapting what is working into more experiments with higher impact.

For example, if an experiment is performing well but improving user experience is the ultimate goal (and not just moving metrics), then the next step would be to replicate the best parts of this experiment while also making adjustments for a better user experience.

Before implementing a new idea, it’s crucial to ensure its viability. We’ve compiled 14 effective steps that can help you confidently verify your concepts. Dive into our guide: 14 Steps to Effectively Verify Your Ideas for a systematic approach.

Don’t Do Multiple Things At Once; Keep Your Experiment Simple, Focused, And Measurable

Next, we need to make sure you’re not doing too much at once. You can’t manage more than one thing at a time, so don’t try! Instead, focus on your experiment and make it as successful as possible. Keep it simple by testing one thing at a time and measuring that one thing as best you can. If something goes wrong with your experiment, know what you will do next (for example: throw away this idea and move on to something else). If something goes right with your experiment, know what you’ll do next (for example: run another test).

Make Sure You’re Ready To Learn From The Results Of Your Experiment

Make sure you’re ready to learn from the results of your experiment by setting yourself up to work with a data scientist or analyst who can analyze the results and help you understand what they mean for your business.

The first step in this process is understanding that experiments are not just about testing ideas; they’re also an opportunity to collect data and determine how well something works. The second step is knowing how you want to use that data.

Your goal should be to make decisions, based on information gleaned from your experiments (and from your website’s analytics), that will improve your product or service, improve the experience of using it, or improve its marketing.

You may also want to consider using the information gathered through experimentation as a basis for making strategic decisions about pricing models and marketing channels.

Don’t Just Run An Analysis; Make Sure You’ve Got A Plan In Place To Act On It

There are many reasons why people run experiments. Some want to figure out if they should push a new feature or product, while others want to test their hypotheses about how customers behave and what they value.

The results of these experiments can be insightful, but only if you know how best to use them. For example, Moz wanted to know whether the community would benefit from having “expert content writers” write blog posts for them on topics like SEO, social media marketing, and website optimization (SMO).

They did so by running an experiment using A/B testing software, which allows two versions of any given page (the original versus a version with slight changes) to be compared against each other; in this case, the variant used copy written by the experts.

This test showed that both versions performed equally well and confirmed their hypothesis that having experts write blog posts helped build brand loyalty in the online marketing space and increase traffic back onto their site.

This gave Moz confidence about continuing this strategy moving forward and led them down a path toward becoming one of our most successful partners in 2017.

Success in marketing research is often a result of learning from those who have mastered the craft. Discover the strategies employed by successful marketers in our article: How Successful Marketers Do Marketing Research to enhance your own practices.

Test Big Ideas, Not Minor Tweaks

We’ve learned that testing small improvements to your product rarely results in breakthroughs and often gets in the way of the real work of making progress on bigger problems that need solving. That doesn’t mean there’s no room for incremental change within a product, but when you’re trying something new, it’s better to put your whole heart into it than to half-heartedly test a few things at once.

When we take the time to flesh out an idea with real data and thoughtful design thinking (see points #3 and #5), we end up learning so much more about our users’ needs than if we’d just been tinkering around the edges all along.

Prioritize The Questions With The Highest Uncertainty And The Biggest Impact

This is a principle we learned from [the Lean Startup](https://theleanstartup.com/) method, which states that it’s important to focus on customer validation early to avoid unnecessary time spent on building products and features that customers won’t use or don’t want.

At Mozilla, we’ve taken this approach and applied it across our organization: prioritizing experiments based on how much they can help us learn about our target users’ needs and build the right product for them while minimizing risk.

For example, suppose you’re working on an online banking feature for mobile phones but aren’t sure whether people would use it.

In that case, it may make sense for your team to run an experiment, such as A/B testing different user interfaces, before putting resources into development, because doing so will give you data about whether people prefer that specific design.

However, if there aren’t any alternatives beyond “yes” or “no”, for instance, if someone simply wants to know whether their idea would work in practice and all other factors have already been accounted for, then running an experiment isn’t necessary (there’s no need for more information).

When You Are Testing Multiple Things, Prioritize That Learning Over Finding A Winner Among All Of Them

  • When you are testing multiple things, prioritize that learning over finding a winner among all of them.
  • You should always make sure your experiments have a specific hypothesis or set of hypotheses about what you’re trying to learn from the experiment. If it’s just “we want to see if this will work,” then you don’t have anything specific to learn from the experiment beyond whether or not it worked.
  • Make sure that you have the right tools in place to measure the results of your experiment(s), and that someone with access to those tools can analyze the results once they come in, so the findings can inform future decisions.

Include A Control Group To Determine Causality

A control group is a group that is not exposed to the experimental treatment but is otherwise similar to the experimental group. It is used to determine whether a change in the experimental group is due to the experimental treatment or to other factors.

For instance, if you want to test whether watching your favorite TV show improves your memory, you should include an “unaltered” version of yourself, someone who doesn’t watch the show, as a subject (the “control”).

If you don’t do this and only compare how well you remember things after watching TV against your performance without watching TV at all (what we would call the “baseline”), it can be difficult or impossible to tell whether any improvement over baseline in the before-and-after results was actually caused by the show rather than by other factors.
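
To make this concrete, here is a minimal sketch in Python (using SciPy) of comparing a treatment group against a control group. The memory-test scores are made up purely for illustration; nothing here comes from an actual Mozilla experiment.

```python
# Minimal sketch: comparing a treatment group against a control group.
# The scores below are made-up illustrative numbers, not real data.
from scipy import stats

# Memory-test scores for people who watched the show (treatment)
# and otherwise-similar people who did not (control).
treatment = [78, 82, 75, 90, 85, 88, 79, 84]
control   = [74, 80, 72, 83, 77, 81, 76, 79]

# A two-sample t-test asks whether the difference in means is larger
# than what we'd expect from chance alone.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"mean(treatment) = {sum(treatment) / len(treatment):.1f}")
print(f"mean(control)   = {sum(control) / len(control):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value suggests the difference is unlikely to be chance alone;
# because the control group is otherwise similar, the difference can be
# attributed to the treatment rather than to other factors.
```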

Map Out Your Measurement Plan In Advance To Avoid Bias In Analysis

When you’re planning how to measure the success of your experiment, it’s important to think about whether there is a natural comparison point. For example, if you’re trying to increase the number of people who sign up for an email newsletter, it would be useful to compare that metric against another newsletter signup campaign from last year, or even another company’s email campaign.

It’s also helpful if you can identify what impact these data points should have on other business goals (e.g., increasing sales).

In addition, it’s important to set clear goals before starting any experiments so that you can evaluate their success objectively and adjust them as needed at each step along the way.

You should also keep in mind that people often have different motivations when they start experimenting with new things; some want more money, while others want more flexibility. Make sure that whatever data points you decide upon are aligned with what matters most to your team members and customers alike.

Online surveys are powerful tools, but they can fall short without proper execution. Explore our insights into why online surveys may fall short and how to overcome these challenges in The 13 Reasons Why Your Online Surveys Don’t Get the Results You Want.

Measure What Matters To Users, Not Just Activity Metrics Or Engagement Metrics

You should measure what matters to users, not just activity metrics or engagement metrics. Activity metrics tell you how much your site was used, but they don’t tell you whether it was a good experience for users. Engagement metrics can be even more misleading.

They usually don’t measure how satisfied users are with the product they’re using. Instead of measuring these things, consider measuring user satisfaction by asking people directly: How likely are you to recommend this product? Would you pay for it? Do you like it better than other products in its category?
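
One common way to summarize answers to the “How likely are you to recommend this product?” question is a Net Promoter-style score. Below is a minimal sketch, assuming responses on a 0–10 scale; the survey responses are hypothetical.

```python
# Minimal sketch: turning "how likely are you to recommend?" answers
# (0-10 scale) into a Net Promoter-style score. Responses are hypothetical.

def nps(responses: list[int]) -> float:
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

survey = [10, 9, 7, 8, 6, 10, 3, 9, 8, 5]
print(f"NPS = {nps(survey):.0f}")  # positive means more promoters than detractors
```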

Keep An Eye On Both The Individual Experiment Level (Power) And The Overall Project Level (Alpha)

The alpha value is the probability of detecting a difference between treatment and control when there is in fact no difference; this is your false-positive rate. Together with power, it helps you determine what size of effect is worth investigating with your experiment (see point #10).

You’ll want to keep an eye on both the individual experiment level (power) and the overall project level (alpha). If you have high power, you’ll be able to detect even small differences in results, but achieving that power usually requires larger samples or longer-running experiments.

Alternatively, if your power is low, any observed results could be due largely or entirely to chance, and a real effect may go undetected; in either case, conclusions based on those results need further evidence before being trusted.
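
A quick way to build intuition for how alpha and power behave is to simulate many experiments. The sketch below estimates the false-positive rate when there is no real difference and the power when there is a modest one; the effect size, sample size, and significance threshold are illustrative assumptions, not Mozilla’s actual settings.

```python
# Minimal sketch: estimating alpha (false-positive rate) and power by
# simulating many A/B tests. Effect size and sample size are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_sims, alpha = 200, 2000, 0.05

def significant_fraction(true_lift: float) -> float:
    """Fraction of simulated experiments that reach p < alpha."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_lift, 1.0, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:
            hits += 1
    return hits / n_sims

print(f"false-positive rate (no real effect): {significant_fraction(0.0):.3f}")  # ~ alpha
print(f"power (true effect of 0.3 std devs):  {significant_fraction(0.3):.3f}")
```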

Get Creative About Experiments That Can Be Run Without Access To User Data

A/B testing can be done without access to user data. It’s one of the best ways to determine causality and measure impact, since random assignment balances out confounding variables that might otherwise have affected your experiment.

You also don’t need a large sample size (in fact, we’ve seen some successful experiments with as few as 1% of users). For example, we ran an A/B test on our homepage that showed increasing the number of words per section increased CTR on social media links by 10%.
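
As a sketch of how such a result might be checked for statistical significance, here is a two-proportion z-test using statsmodels. The click and impression counts are hypothetical stand-ins, not the numbers behind the homepage experiment mentioned above.

```python
# Minimal sketch: testing whether the variant's click-through rate differs
# from the control's. The counts below are hypothetical, not real data.
from statsmodels.stats.proportion import proportions_ztest

clicks      = [550, 500]         # variant, control
impressions = [10_000, 10_000]   # impressions per version

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)

ctr_variant = clicks[0] / impressions[0]
ctr_control = clicks[1] / impressions[1]
print(f"CTR variant = {ctr_variant:.2%}, CTR control = {ctr_control:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A small p-value indicates the observed lift is unlikely to be chance alone.
```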

Be Prepared To Continue Your Learning Journey By Running Follow-Up Experiments Based On What You Learned From Previous Ones

Once you have completed a few experiments, you may be tempted to believe that your learning journey is complete. In reality, the learning journey is continuous and often has unexpected twists and turns. For example, after we ran an experiment on how we could improve our donation process (workflow), we learned that users who saw a progress bar were more likely to donate than those who didn’t. We used this insight in subsequent experiments around signup flows and fundraising pages to encourage more people to donate.

Be prepared to continue your learning journey by running follow-up experiments based on what you learned from previous ones.

Ensure Experiments Are Powered For Statistically Significant Results Within A Reasonable Timeframe

To ensure that your experiment is powered for statistically significant results within a reasonable timeframe, it’s important to understand the concept of power. Power refers to the probability that an experiment will detect a statistically significant difference between groups if one exists.

There are many ways to calculate statistical power; we’ll look at two here: Excel and R. You can also find statistical power calculators online; here’s one from SAS. We’ll use this example scenario.

You’re running an ad campaign for a new product and want to test whether your target audience responds differently when exposed to different ad copy. The null hypothesis is that there is no difference between ad copy A or B (i.e., both sets of ads perform equally well).

Your alternative hypothesis is that Ad Copy B performs better than Ad Copy A (this would be considered “statistically significant”). Let’s say you have 1,000 impressions total across both sets of ads; 500 people click through on an impression from either set.

300 people follow through on their click and buy your product once they’ve reached its landing page; 200 people come back within 14 days after purchasing it; 100 people come back within 30 days; and finally, 50 come back within 90 days (these are called “retention metrics”).
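
The article points to Excel, R, and online calculators for the power calculation itself; as one more option, here is a minimal Python sketch using statsmodels that estimates how many impressions per variant an ad-copy test would need. The baseline and target click-through rates are illustrative assumptions rather than figures from the scenario above.

```python
# Minimal sketch: sample size needed to compare two ad variants' CTRs
# with 80% power at alpha = 0.05. The CTRs below are illustrative.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_ctr = 0.050   # assumed CTR for Ad Copy A
target_ctr   = 0.060   # lift we want to be able to detect for Ad Copy B

effect_size = proportion_effectsize(target_ctr, baseline_ctr)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,          # false-positive rate
    power=0.80,          # chance of detecting the lift if it is real
    alternative="two-sided",
)
print(f"impressions needed per variant: {n_per_variant:.0f}")
```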

Always Be Thinking About How To Improve Your Experiments And Make Them As Accurate As Possible

Next, always keep an eye on the individual experiment level (power) and the overall project level (alpha). The former is about making sure your experiments are powered for statistically significant results within a reasonable timeframe.

The latter is about ensuring that you’re appropriately resourced given the amount of effort required to run an experiment and do everything else involved in UX research.

Keywords lay the foundation for effective market research, shaping your approach and insights. Dive into our resource on The Best Keywords to Use for Market Research to learn how to optimize your research process for better results.

Final Thoughts

These tips will help you get started with a strategy for experimentation and running A/B tests. While this is not a complete guide to experimentation, it will help you think through the process of getting started and running your first A/B test.

Further Reading

Cross-Browser Testing Strategies: Discover effective strategies for testing web applications across different browsers, ensuring consistent performance and user experience.

Mozilla Foundation Innovation Report: Gain insights into Mozilla’s innovative initiatives and projects that are shaping the future of the web.

Testing Strategies for React and Redux: Explore recommended testing approaches for React and Redux applications, enhancing the reliability and functionality of your codebase.

People Also Ask

What Is Mozilla’s Experimentation Strategy?

Mozilla’s Experimentation Strategy aims to make it easy for people to experiment with new ideas, products, and ways of working.

To achieve this goal, Mozilla has created an experimentation platform called “Experiment Hub” which allows employees to contribute their ideas, experiment with them, and create a culture that supports innovation.

How Does Experiment Hub Work?

The Experiment Hub is a web application that allows employees to share their ideas, run experiments and track their progress. Employees can also discuss their ideas with other team members or even the whole company. This helps them improve upon their idea as they receive feedback from different perspectives.

Who Can Use The Experiment Hub?

Anyone who works at Mozilla can use the Experiment Hub to share their ideas, run experiments and track their progress.

What Is An Experiment?

An experiment is a test that we run to try out a new idea or approach that we haven’t used before. We run experiments to learn more about the impact of different variables on our users’ experience with our products and services.

What Happens If An Experiment Doesn’t Work Out As Planned?

Sometimes experiments don’t turn out the way we had hoped.

In those cases, it’s important for us not to make decisions based solely on what happened during an experiment; we need to look at all of the data from the experiment so that we can determine whether there was something wrong with the experiment itself or with how we implemented it.

What’s The Difference Between A Hypothesis And An Experiment?

Hypotheses and experiments are two different ways we can try to test our theories about how the world works. In a hypothesis, we try to explain a phenomenon as a cause-and-effect relationship. For example: “If I do X, then Y will happen.”

An experiment is when we test that hypothesis by actually changing X and measuring the results, for example, by changing X for one group of users, leaving it unchanged for another, and comparing whether Y happens.

How Does Mozilla’s Strategy Contribute To Its Success?

Mozilla’s strategy is to be an open and inclusive organization that supports freedom of expression and privacy. This is done by focusing on the user experience and building products that people love.

What Are The Main Goals Of Mozilla’s Strategy?

Mozilla wants to be a catalyst for positive change in the world by promoting openness, innovation, and opportunity for all. They want to do this by supporting their users’ rights and freedoms online, as well as their right to privacy.

What Are Some Key Elements Of Mozilla’s Strategy?

The key elements of Mozilla’s strategy include: being open source; providing tools that give users more control over their online experiences; making sure their products are accessible; and promoting diversity within the company itself.