
9 posts tagged with "experimentation"


Using YoY/MoM conversion rate goals as targets can backfire

· 3 min read
Rob Kingston
Director / Co-Founder

Setting precise conversion rate goals can be hard to manage.

Image credit: Field & Stream

A common exercise product teams do at the end of each year is goal setting and revision. We often see conversion rate goals / objectives being set like:

Increase the conversion rate from 6.7% to 7.4%

When goals measure absolute conversion rates across date ranges, like the above, teams may end up working against each other.
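As a rough illustration (the numbers below are hypothetical, not from the post), an absolute goal like this implies a sizeable relative uplift, yet a shift in traffic mix alone can move a blended conversion rate by a similar amount:

```js
// Illustrative only: the relative uplift implied by an absolute conversion rate goal.
const baselineRate = 0.067; // e.g. last year's conversion rate (6.7%)
const targetRate = 0.074;   // e.g. this year's goal (7.4%)

const requiredRelativeUplift = targetRate / baselineRate - 1;
console.log(`Implied relative uplift: ${(requiredRelativeUplift * 100).toFixed(1)}%`); // ~10.4%

// A change in traffic mix can move the blended rate without any experiment
// winning anything (hypothetical segment rates and shares):
const segments = [
  { name: 'returning', rate: 0.12,  share: 0.3 },
  { name: 'new',       rate: 0.047, share: 0.7 },
];
const blendedRate = segments.reduce((sum, s) => sum + s.rate * s.share, 0);
console.log(`Blended rate from mix alone: ${(blendedRate * 100).toFixed(1)}%`); // ~6.9%
```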

Presentation: How to avoid 5 testing pitfalls & run trustworthy experiments (Web Analytics Wednesday Melbourne, 2019-11-06)

· One min read
Rob Kingston
Director / Co-Founder

Web Analytics Wednesday Melbourne chapter.

Earlier this month, I gave a talk to the local Web Analytics Wednesday group in Melbourne on running A/B tests and trustworthy experimentation. It features some of our biggest mistakes in split testing and the simple methods we use to avoid them.

See the slides

How to avoid 5 A/B testing pitfalls & run trustworthy experiments

If you have any thoughts or questions, reach out to me on Twitter

Why an A/B testing tool should form an experiments layer over your site

· 4 min read
Rob Kingston
Director / Co-Founder

There's a reason tag managers are now the de facto standard for tag deployment.

Before tag managers, you'd embed tags directly into your application. It could take weeks or months to deploy them inside large, monolithic apps... Meanwhile, you'd be pulling precious developer time away from high-value projects. And tagging the app directly just added further bloat and technical debt to an already heavy codebase.

...and then tag managers became popular.

Tag Managers comparison

Image credit: Blastam Analytics

Now, independent of the web application code, tags could be set up, QA'd and deployed before your coffee went cold. This led to an explosion in data collection and marketing efficiency.

This efficiency is critical in the fast-paced world of experimentation...
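To sketch the idea (this is a hypothetical illustration, not any particular tool's API — the `defineExperiment` helper below is made up): an experiments layer sits over the site the way a tag manager does, so variant code can be written, QA'd and shipped without touching the application itself.

```js
// Hypothetical sketch of an "experiments layer": variant code lives outside the
// application, in a container that can be updated independently of app releases.
function defineExperiment({ id, sampleRate, activate, variants }) {
  if (Math.random() > sampleRate || !activate()) return; // not sampled / not eligible

  const names = Object.keys(variants);
  const chosen = names[Math.floor(Math.random() * names.length)]; // uniform assignment
  variants[chosen]();                                             // apply variant changes

  // Exposure tracking would go here (e.g. an analytics event carrying id + chosen).
}

defineExperiment({
  id: 'ex1-hero-copy',
  sampleRate: 1,
  activate: () => !!document.querySelector('.hero h1'),
  variants: {
    control: () => {},
    treatment: () => {
      document.querySelector('.hero h1').textContent = 'New headline under test';
    },
  },
});
```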

Introducing Mojito: Mint Metrics' open-source split testing tool

· 8 min read
Sam Chen
Director / Co-Founder
Rob Kingston
Director / Co-Founder

Update: We have just launched our documentation site for Mojito here.

We're excited to open source Mojito - the experimentation stack we've used to run well over 500 experiments for Mint Metrics' clients.

Logo for the Mojito stack.

It's a fully source-controlled experimentation stack for building, launching and analysing experiments from your favourite IDEs.

A better way to run experiments...
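To give a flavour of what "source-controlled experiments" means (an illustrative sketch only, not Mojito's actual configuration syntax — see the documentation site for the real format), each experiment can live as a file in a repository and be reviewed, built and launched like any other code:

```js
// Hypothetical experiment definition kept in version control (illustrative only;
// refer to the Mojito docs for the real configuration format).
module.exports = {
  id: 'w12-checkout-cta',
  name: 'Checkout CTA wording',
  sampleRate: 0.5, // share of traffic entering the test
  trigger: (activate) => {
    if (document.querySelector('#checkout')) activate();
  },
  recipes: {
    control: {},
    treatment: {
      js: () => {
        document.querySelector('#checkout .cta').textContent = 'Complete my order';
      },
    },
  },
};
```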

Track Optimizely, VWO & Mojito tests into Google Optimize

· 5 min read
Rob Kingston
Director / Co-Founder

You've probably audited your Google Analytics setup and validated that the data roughly matches the data in your CRM, etc. (bonus points if you perform this QA process regularly).

How often do you audit tracking for Optimizely, VWO, Convert.com or other SaaS testing tools? Once a year? Just at implementation? Never?! It's no wonder we find the data in these tools' trackers to be rather wonky.

Why purpose-built analytics tools beat Optimizely / VWO's A/B test tracking

· 4 min read
Rob Kingston
Director / Co-Founder

We typically find that relying just on Optimizely, VWO or Convert.com's A/B test tracking has hidden costs:

  • Restrictive analytics capabilities
  • Worse site performance
  • Increased compliance obligations & compromised data sovereignty

In our experience, analytics tools like GA and Snowplow are more trustworthy and full-featured. And, at Mint Metrics, all experiments get tracked into both GA & Snowplow for clients. We no longer use or trust SaaS testing tools' built-in trackers.

Here's how purpose-built analytics tools lift your split testing game...
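As a rough sketch of what tracking experiments into your own analytics tools can look like (the event fields and schema URI below are assumptions, not Mint Metrics' actual implementation), an exposure event might be sent to both trackers when a visitor is bucketed:

```js
// Illustrative only: send one experiment-exposure event to both GA and Snowplow.
// Assumes the classic analytics.js (`ga`) and the v2-style Snowplow JavaScript
// (`snowplow`) trackers are already loaded; the schema URI is a placeholder.
function trackExposure(experimentId, variantName) {
  if (typeof ga === 'function') {
    ga('send', 'event', 'experiments', experimentId, variantName, { nonInteraction: true });
  }
  if (typeof snowplow === 'function') {
    snowplow('trackSelfDescribingEvent', {
      schema: 'iglu:com.example/ab_test_exposure/jsonschema/1-0-0', // hypothetical schema
      data: { experimentId, variantName },
    });
  }
}

trackExposure('w12-checkout-cta', 'treatment');
```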

Why you need error tracking & handling in your split tests

· 6 min read
Rob Kingston
Director / Co-Founder

Gasp! A JavaScript error appears.

Remember the good old days of JS errors? (Image credit)

Building large, complex experiments introduces new logic, new code and sometimes new bugs. But most A/B testing tools don't perform error tracking or handling for you. So when you launch your experiment and it tanks...

...did your awesome new idea just not work? Or did bugs torpedo your idea?
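A minimal sketch of the kind of safety net the post argues for (the helper names and reporting endpoint below are hypothetical): wrap variant code in a guard so a thrown error gets reported, and the visitor falls back to the default experience rather than a broken page.

```js
// Illustrative only: run variant code inside a guard so errors are reported
// rather than silently breaking the experience. `reportError` and the `/errors`
// endpoint are placeholders.
function reportError(experimentId, error) {
  // Could equally be a GA event, a Snowplow event, or an error-tracking service.
  navigator.sendBeacon('/errors', JSON.stringify({
    experimentId,
    message: error.message,
    stack: error.stack,
  }));
}

function runVariantSafely(experimentId, applyVariant, revertVariant) {
  try {
    applyVariant();
  } catch (error) {
    reportError(experimentId, error);
    try { revertVariant(); } catch (_) { /* fall back to the default experience */ }
  }
}

runVariantSafely(
  'w12-checkout-cta',
  () => { document.querySelector('#checkout .cta').textContent = 'Complete my order'; },
  () => { /* undo any partial changes here */ },
);
```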

How to reduce your A/B testing tool's page speed impact

· 4 min read
Rob Kingston
Director / Co-Founder

Client-side A/B testing tools get criticised for loading huge chunks of JS synchronously in the head (rightfully so). Despite the speed impact, these tools deliver far more value through the experiments they enable. And luckily, we can help manage the problem in a few ways.

Comparing split testing tools' speed impact
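One commonly used mitigation, shown as a generic sketch (the container URL and timeout are placeholders, and this isn't necessarily one of the approaches covered in the full post): load the testing container asynchronously and hide affected content only briefly, with a timeout so a slow container can't hold the page hostage.

```js
// Illustrative anti-flicker pattern: hide test areas, load the container async,
// and reveal the page after a short timeout even if the container is slow.
(function loadTestingContainer() {
  const HIDE_TIMEOUT_MS = 1000;
  document.documentElement.classList.add('ab-hide'); // CSS: .ab-hide .hero { opacity: 0; }

  const reveal = () => document.documentElement.classList.remove('ab-hide');
  const timer = setTimeout(reveal, HIDE_TIMEOUT_MS); // fail open if the container is slow

  const script = document.createElement('script');
  script.src = 'https://cdn.example.com/testing-container.js'; // placeholder URL
  script.async = true;
  script.onload = script.onerror = () => { clearTimeout(timer); reveal(); };
  document.head.appendChild(script);
})();
```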

Here are the ways we manage container weight at Mint Metrics when running a client-side A/B testing tool for our clients: