# Mathematical model for maximizing testing velocity (Ecommerce CRO)

To maximize your Average Order Value (AOV) and drive greater profitability, you need to understand how to optimize it deliberately. AOV is essential for retailers to track and measure because it helps offset customer acquisition costs, shortens your payback time, and improves ROI.

However, simply tracking AOV isn't enough to ensure success. To maximize it, you need to understand global maxima versus local maxima, testing velocity, and hypothesis-based testing. Let's look at each of these concepts in more detail:

When it comes to global maxima versus local maxima, it's important to understand the basics. The global maximum is the absolute best result that is possible, while a local maximum is the current ceiling of your design.

Essentially, if you want to move past this limit, you must break some boundaries and come up with an entirely new solution.

First, you need to determine the boundaries of your design. This can involve understanding the conditions and constraints of your problem and deciding how far you can push the boundaries of the design.

Once you have these boundaries in place, you can start to understand what counts as a feasible solution and where the global maximum might lie. From there, you can also identify the local maximum of your design: the current limit of what you have created.

By understanding these principles, you can see the difference between polishing toward a local maximum and pursuing the global one. Pushing past the boundaries of the current design lets you create solutions that beat its ceiling, and doing this consistently produces results far better than what is currently available.
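The distinction can be sketched numerically. Below is a toy "performance landscape" (an invented function, not real store data) with a smaller local peak and a larger global peak: small incremental tweaks climb only the nearest hill, while a wider search of the space finds the true best.

```python
import math

# Toy performance landscape (illustrative only): a smaller local peak
# near x = 1 and a larger global peak near x = 4.
def performance(x):
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-((x - 4) ** 2) / 2)

def hill_climb(x, step=0.01):
    # Incremental tweaks (like small iterative design changes) only climb
    # the nearest hill, so the search stalls at the local maximum.
    while performance(x + step) > performance(x):
        x += step
    return x

local_peak = hill_climb(0.0)            # stalls near x ~ 1 (local maximum)
xs = [i * 0.01 for i in range(601)]     # a wider search of the design space
global_peak = max(xs, key=performance)  # finds the peak near x ~ 4
```

Starting from scratch with a "wider search" is the code analogue of breaking the boundaries of the current design rather than tweaking it.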

Testing velocity is an important concept in the CRO process. It is the measure of how quickly an experiment can move from research and conception to launch.

The faster you can test, the faster you can learn, and the more successful your CRO program will be. With a higher velocity, each insight arrives sooner, so your return on investment (ROI) compounds faster.

When trying to increase the speed of testing, the goal should always be to ship experiments faster while keeping experiment quality (aka the "value") high. Testing cadence can range anywhere from once per month, to once per week, to once per day. The faster you can test, the more valuable insights you can gather.
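A back-of-envelope comparison shows why cadence matters. The 20% win rate below is an assumption for illustration, not a benchmark from the source:

```python
# Rough comparison of testing cadences. WIN_RATE is an assumed figure
# for illustration; real win rates vary by program and store.
CADENCES = {"monthly": 12, "weekly": 52, "daily": 365}  # experiments per year
WIN_RATE = 0.2

expected_wins = {name: n * WIN_RATE for name, n in CADENCES.items()}
for name, wins in expected_wins.items():
    print(f"{name}: {CADENCES[name]} tests/year -> ~{wins:.1f} expected wins")
```

Even at the same win rate, a weekly cadence surfaces several times more winning changes per year than a monthly one, which is the mechanism behind the velocity-to-ROI link above.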

The third concept is hypothesis-based testing. This involves forming an educated assumption based on data and research: we expect that by taking action X, we will be able to achieve result Y.

This hypothesis can then be tested using a variety of methods and sources to give a clear picture of areas of improvement on your website.

This process eliminates bias and the highest-paid person's opinion, and ensures that A/B testing is grounded in the user experience. By relying on data-driven research rather than guesswork, you can ensure that you are making the most of your A/B testing efforts.
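One way to keep hypotheses disciplined is to record them in a fixed structure. This is a minimal sketch; the field names and the sample hypothesis are hypothetical, chosen only to illustrate the "action X, result Y" pattern:

```python
from dataclasses import dataclass

# Hypothetical structure for recording a test hypothesis; field names
# and the example values are illustrative, not from the source.
@dataclass
class Hypothesis:
    action: str           # X: the change we will make
    expected_result: str  # Y: the outcome we predict
    evidence: str         # the data or research behind the assumption
    metric: str           # how success will be measured

h = Hypothesis(
    action="show the free-shipping threshold in the cart",
    expected_result="shoppers add items to qualify, raising order value",
    evidence="analytics show many carts sit just below the threshold",
    metric="average order value (AOV)",
)
statement = f"We expect that by '{h.action}', we will achieve '{h.expected_result}'."
```

Forcing every test through a template like this makes the evidence field explicit, which is what keeps opinions and guesswork out of the queue.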

# Learn how you can also increase your store's eCommerce conversion rate today


Now that you're aware of these basic concepts, let's bring them together in the context of store optimization. You have your current store, your hypothesis—how it could be improved—and a goal of increasing your conversion rates and testing velocity.

A three-month testing window is too long. So, apply the concept of global maxima versus local maxima and accept some imperfections in the variant to speed up the testing process. This means testing only to prove or disprove the hypothesis, rather than to perfect the variant. Furthermore, remember that most store tests fail; that's the name of the game.

By synthesizing these concepts, you can speed up your store tests and increase your chances of success. Use this knowledge to iterate toward a winning variant and reach your goal of higher conversion rates and faster testing velocity.

For example, if you know that the design will be 20% worse and the copy 10% worse than their optimal implementations, you can use a simple mathematical model to estimate the cost of these shortcuts. Multiply 0.8 by 0.9 to get 0.72, meaning you accept a 28% handicap in the variant's effectiveness relative to its fully polished form.

Run a test to measure the actual effect. If the handicapped variant loses to the control by only 10%, it is beating its expected 28% handicap by roughly 18 percentage points. This suggests that if you make the variant as polished as possible, it could be a winner.

To confirm your findings, run the test again with the adjusted variant. By conducting these quick tests, you can determine the potential of the variant and decide whether it's worth investing in further refinement.

Understanding how these factors interconnect is key: testing velocity, variant quality, and conversion rate all influence one another. Improving one often has a knock-on effect on the others, so consider how each can be improved to achieve the best overall outcome. By taking a holistic approach, you can ensure that you are getting the most out of your efforts.