
· 3 min read
Rob Kingston

Setting conversion rate goals with precision can be hard to manage.

Image credit: Field & Stream

A common exercise product teams do at the end of each year is goal setting and revision. We often see conversion rate goals or objectives set like this:

Increase the conversion rate from 6.7% to 7.4%

When goals measure absolute conversion rates across date ranges, like the above, teams may end up working against each other.

· 4 min read
Rob Kingston

Whilst improving Mojito's PRNG & devising an ITP 2.x workaround last year, we introduced a modular splitting tool in Mojito that lets users split traffic with hash functions. We're amazed by the features that hash functions enable in split testing (sketched in code below the list), such as:

  • Deterministic results: Users will always be bucketed the same way for a given input - regardless of when or where the decision is made (e.g. client-side, server-side, web or app)
  • Sufficiently random: Outputs are uniformly distributed, even for near-identical inputs
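
To make those properties concrete, here's a minimal sketch of hash-based bucketing - our own illustration, not Mojito's actual implementation - using an FNV-1a hash to map a user ID and experiment ID onto [0, 1):

```js
// A minimal sketch of hash-based bucketing (not Mojito's actual code).
// FNV-1a hashes the user ID + experiment ID to an unsigned 32-bit integer,
// which we scale onto [0, 1) to make the bucketing decision.
function fnv1a(str) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // FNV prime
  }
  return hash >>> 0; // force unsigned 32-bit
}

function assignBucket(userId, experimentId, trafficSplit) {
  // A pure function of its inputs: the same (userId, experimentId) pair
  // yields the same decision client-side, server-side, on web or in-app.
  const unit = fnv1a(userId + ':' + experimentId) / 0x100000000; // [0, 1)
  return unit < trafficSplit ? 'treatment' : 'control';
}

assignBucket('user-123', 'exp-42', 0.5); // always the same answer for this pair
```

Because assignment is a pure function of its inputs, client and server code hashing the same IDs will agree without sharing any state.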

But hiding in plain sight was a novel ramping process that Lukas Vermeer, Booking.com's Director of Experimentation, pointed out to us. We hadn't encountered it before, but now that we were using hash functions, it was possible...
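
The gist, reusing the hypothetical assignBucket sketch above: a user's hash value never changes, so raising the traffic split only widens the treatment window and admits new users - nobody already exposed gets reshuffled.

```js
// Ramping with deterministic hashing: the split threshold moves,
// but each user's hash unit stays fixed.
assignBucket('user-123', 'exp-42', 0.10); // week 1: 10% of traffic
assignBucket('user-123', 'exp-42', 0.50); // week 2: 50% of traffic
// Anyone in treatment at 10% (hash unit < 0.10) is still in treatment
// at 50%, because their fixed hash unit sits below both thresholds.
```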

· 5 min read
Rob Kingston

Running A/B tests or experiments on the web requires injecting lots of JS and CSS into your web app to change the look and feel of the page. Reckless deployments of this code can (and sometimes do) break web applications. And when they do, the result is failed experiments and terrible user experiences.


An example of a split testing container JS file with KBs of minified, experimental code.

But could we make experiment deployment safer and more reliable through a CI workflow? We think so, thanks to how we use Bitbucket Pipelines.
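
As a sketch of what that can look like - a hypothetical bitbucket-pipelines.yml, not our exact pipeline - each push can be linted, tested and built before the experiment container is published:

```yaml
# Hypothetical bitbucket-pipelines.yml: gate experiment code behind
# lint, test and build steps before the container goes live.
image: node:18

pipelines:
  branches:
    master:
      - step:
          name: Lint & test experiment code
          script:
            - npm ci
            - npm run lint
            - npm test
      - step:
          name: Build & publish the minified container
          script:
            - npm run build
            - ./scripts/publish.sh # hypothetical publish script
```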

· One min read
Rob Kingston

Web Analytics Wednesday Melbourne chapter.

Earlier this month, I gave a talk to the local Web Analytics Wednesday group in Melbourne on running A/B tests and trustworthy experimentation. It covers some of our biggest mistakes in split testing and the simple methods we use to avoid them.

See the slides

How to avoid 5 A/B testing pitfalls & run trustworthy experiments

If you have any thoughts or questions, reach out to me on Twitter.

· One min read
Rob Kingston


The Mojito split testing framework's docs are built upon Facebook's Docusaurus.

It's been a couple of months since we announced that we'd open-sourced Mojito... and at long last, you can now find all the documentation for our split testing framework in one central resource!

Issues, help & contributing

The website code is open-source, too. So if you get stuck or want to help us improve the documentation, drop an issue on the repo and hopefully we can help you out. The site is built using Facebook's Docusaurus, which lets you add articles through simple markdown - it's an easy way to build comprehensive documentation.

Anyway, we hope you find this resource useful in getting started with Mojito.

Visit Mojito.mx.

· 4 min read
Rob Kingston

There's a reason tag managers are now the de facto standard for tag deployment.

Before tag managers, you'd embed tags directly into your application. It could take weeks or months to deploy them inside large, monolithic apps... Meanwhile, you'd be shifting precious developer time off high-value projects. And the practice of tagging the app just added further bloat and technical debt to your already heavy codebase.

...and then tag managers became popular.

Tag Managers comparison

Image credit: Blastam Analytics

Now, independent of the web application code, tags could be set up, QA'd and deployed before your coffee went cold. This led to an explosion in data collection and marketing efficiency.

This efficiency is critical in the fast-paced world of experimentation...

· 8 min read
Sam Chen
Rob Kingston

Update: We have just launched our documentation site for Mojito here.

We're excited to open source Mojito - the experimentation stack we've used to run well over 500 experiments for Mint Metrics' clients.

Logo for the Mojito stack.

It's a fully source-controlled experimentation stack for building, launching and analysing experiments from your favourite IDEs.

A better way to run experiments...

· 5 min read
Rob Kingston

You've probably audited your Google Analytics setup and validated that its data roughly matches the data in your CRM, etc. (bonus points if you perform this QA process regularly).

How often do you audit tracking for Optimizely, VWO, Convert.com or other SaaS testing tools? Once a year? Just at implementation? Never?! It's no wonder we find the data from these tools' trackers to be rather wonky.

· 4 min read
Rob Kingston

We typically find that relying just on Optimizely, VWO or Convert.com's A/B test tracking has hidden costs:

  • Restrictive analytics capabilities
  • Worse site performance
  • Increased compliance obligations & compromised data sovereignty

In our experience, analytics tools like GA and Snowplow are more trustworthy and full-featured. And, at Mint Metrics, all experiments get tracked into both GA & Snowplow for clients. We no longer use or trust SaaS testing tools' built-in trackers.
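
For example, recording an experiment exposure in GA can be a single event. Here's a sketch using gtag.js, where the event and parameter names are our own convention rather than an official schema:

```js
// Sketch: log an experiment exposure as a GA event via gtag.js.
// 'experiment_exposure', 'experiment_id' and 'variant' are hypothetical
// names for illustration, not an official GA schema.
gtag('event', 'experiment_exposure', {
  experiment_id: 'exp-42',
  variant: 'treatment'
});
```

Once the exposure lands in your analytics tool, any metric you already track can be segmented by variant, rather than being limited to the goals your testing tool supports.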

Here's how purpose-built analytics tools lift your split testing game...