Optimization

This is a work in progress, and not quite ready for primetime

What is A/B Testing?

A/B testing is about taking an unlimited number of confounding variables and narrowing them down to two:

  1. the change you made (A vs B)

  2. "luck" (of the draw)


Common issues using A/B tests:

  1. Not understanding everything you're testing

  2. Not fully understanding your entire audience

  3. Not considering the impact of luck

  4. Overestimating the scope of the impact

  5. Not understanding the rules of the game

Great Online Tools Links

Great, full-featured calculators that are generally easy to use for data exploration.

Noteworthy Tools

These require a little more understanding, but are great for specific use cases.

How A/B testing works

A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal. (https://www.optimizely.com/optimization-glossary/ab-testing/)
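The "shown to users at random" step is often done with deterministic hashing, so a given user always lands in the same variant. The sketch below is one minimal way to do it, not any particular platform's implementation; the experiment name and user IDs are made up.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user: the same inputs always give the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always sees the same variant of a given experiment.
print(assign_variant("user-123", "checkout-button-color"))
```

Hashing on both the experiment name and the user ID keeps assignments stable within one test while still randomizing users across different tests.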

A/B testing is about taking an unlimited number of confounding variables and narrowing them down to two:

  1. the change you made (A vs B)

  2. luck (of the draw)

In addition, the various statistical frameworks provide methods to help you estimate how likely it is that the impact you are seeing is due to your change rather than to luck.
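As one illustration of such a method, a frequentist framework might use a two-proportion z-test on conversion counts: it returns a p-value, the probability of seeing a gap at least this large if only luck were at work. A sketch with made-up numbers:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the gap
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: 2.0% vs 2.6% conversion on 10,000 users each.
z, p = two_proportion_z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value says luck alone rarely produces a gap this big; it does not, by itself, say the change caused it or that the effect will generalize.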

Common issues in using A/B test results

Not understanding everything you're testing

Your change may carry more than you intended. For example, variant B might also introduce a technical error or a slower page load, so you end up measuring those side effects along with the change itself.

Not fully understanding your entire audience

Results are often driven by a small slice of the audience (the 80/20 rule, e.g., Platinum members at Marriott), and in digital channels bot traffic can make up part of your sample.

Not considering the impact of luck

Random variation alone can produce apparent differences between variants; you need to account for it before declaring a winner.
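One way to see luck's impact is to simulate an A/A test, where both groups get the identical experience: any gap between them is pure chance. A rough sketch (the conversion rate, sample size, and gap threshold are arbitrary):

```python
import random

def simulate_aa_test(rate=0.05, n=2_000, trials=200, threshold=0.01, seed=0):
    """Share of A/A trials where two IDENTICAL variants differ by more than `threshold`."""
    rng = random.Random(seed)
    false_wins = 0
    for _ in range(trials):
        a = sum(rng.random() < rate for _ in range(n)) / n
        b = sum(rng.random() < rate for _ in range(n)) / n
        if abs(a - b) > threshold:
            false_wins += 1
    return false_wins / trials

print(f"{simulate_aa_test():.1%} of simulated A/A tests show a gap over 1 point")
```

With these settings, a meaningful fraction of "experiments" show a sizable gap even though nothing changed, which is exactly why the statistical tests above exist.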

Overestimating the scope of the impact

For example, assuming that because a change worked in one context it will work in another.

Not understanding the rules of the game

And accidentally cheating as a result, e.g., through erroneous data collection or the garden of forking paths (making analysis choices after looking at the data).
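The garden of forking paths can be illustrated with segment-hunting: slice an experiment with zero true effect into enough segments and some slice will usually look like a winner by chance. A sketch with made-up segment names:

```python
import random

# Segments we might be tempted to drill into after the fact (all hypothetical).
SEGMENTS = ["mobile", "desktop", "new users", "returning", "US", "EU", "weekend", "weekday"]

def segment_scan(seed, n=500, rate=0.10, gap=0.025):
    """Return segments where two IDENTICAL variants differ by more than `gap` (pure luck)."""
    rng = random.Random(seed)
    flagged = []
    for seg in SEGMENTS:
        a = sum(rng.random() < rate for _ in range(n)) / n
        b = sum(rng.random() < rate for _ in range(n)) / n
        if abs(a - b) > gap:
            flagged.append(seg)
    return flagged

# Even with zero true effect, some slice often "wins" if you look hard enough.
print(segment_scan(seed=1))
```

The more post-hoc slices you examine, the more likely at least one crosses any fixed threshold by luck, so segment-level "wins" found after the fact need their own pre-registered test.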