3 posts tagged with "tracking"

Rob Kingston · 4 min read

Whilst improving Mojito's PRNG and devising an ITP 2.x workaround last year, we introduced a modular splitting tool in Mojito that lets users split traffic with hash functions. We're amazed by the features that hash functions enable in split testing, such as:

  • Deterministic results: Users are always bucketed the same way for a given input, regardless of when or where the decision is made (e.g. client-side, server-side, web, or app); see the sketch after this list
  • Sufficiently random: Outputs are uniformly distributed, even across near-identical inputs
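
To make the deterministic property concrete, here's a minimal sketch of hash-based bucketing. It is not Mojito's actual implementation: the FNV-1a hash, the function names, and the ID formats are all illustrative assumptions (production tools typically use stronger hashes such as MurmurHash3).

```typescript
// Illustrative sketch only -- not Mojito's actual code.
// FNV-1a (32-bit) is used here purely for brevity.
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    // Multiply by the 32-bit FNV prime, kept in uint32 range
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// Deterministically map a (userId, experimentId) pair to [0, 1).
// The same inputs always yield the same decimal, so the bucketing
// decision is reproducible client-side, server-side, or in an app.
function decimalFor(userId: string, experimentId: string): number {
  return fnv1a32(`${experimentId}:${userId}`) / 0x100000000;
}

// Assign a user to one of several equally weighted variants.
function assignVariant(
  userId: string,
  experimentId: string,
  variants: string[]
): string {
  const d = decimalFor(userId, experimentId);
  return variants[Math.floor(d * variants.length)];
}

// Example: the same user always lands in the same bucket.
console.log(assignVariant("user-123", "exp-checkout", ["control", "treatment"]));
```

Because the bucket depends only on the hash of the inputs, any platform that computes the same hash reaches the same assignment, with no shared state required for the decision itself.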

But hiding in plain sight was a novel ramping process that Lukas Vermeer, Booking.com's Director of Experimentation, pointed out to us. We hadn't encountered it before, but now that we were using hash functions, it was possible...

Rob Kingston · 5 min read

You've probably audited your Google Analytics setup and validated that its data roughly matches the data in your CRM and other systems (bonus points if you perform this QA process regularly).

How often do you audit tracking for Optimizely, VWO, Convert.com, or other SaaS testing tools? Once a year? Just at implementation? Never?! It's no wonder we find the data in these tools' trackers rather wonky.

Rob Kingston · 6 min read

*Gasp* A JavaScript error appears

Remember the good old days of JS errors?

Building large, complex experiments introduces new logic, new code, and sometimes new bugs. But most A/B testing tools don't perform error tracking or handling for you (a rough DIY sketch follows below). So when you launch your experiment and it tanks...

...did your awesome new idea just not work? Or did bugs torpedo your idea?
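
One way to tell the difference is to wrap variant code in your own error reporting. Here's a rough, hypothetical sketch: `runVariant` and `reportError` are made-up helpers, not part of any testing tool's API.

```typescript
// Hypothetical DIY error tracking around experiment code.
// Most A/B testing tools won't do this for you.
function runVariant(experimentId: string, variantCode: () => void): void {
  try {
    variantCode();
  } catch (err) {
    // Report the failure so a buggy variant can be separated
    // from a genuinely losing idea when results come in.
    const message = err instanceof Error ? err.message : String(err);
    reportError(experimentId, message);
  }
}

// Stand-in reporter: in practice this might be an analytics event
// or a call to an error-tracking service such as Sentry.
function reportError(experimentId: string, message: string): void {
  console.error(`[experiment ${experimentId}] variant error: ${message}`);
}

runVariant("exp-checkout", () => {
  // ... your treatment code here ...
});
```

With errors attributed to an experiment ID, you can check whether a tanking variant is also throwing errors before concluding the idea itself failed.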