alviomer
Posted on December 7, 2022
With customer retention and engagement as the goal, businesses try different techniques to attain maximum conversion and retention on any digital platform, ranging from complex decisions about the overall user experience to minor things like changing the colour and positioning of text, images, and buttons. This gave rise to the concept of split testing, commonly referred to as A/B testing, in the world of product development.
Recently I explored Rails' ability to perform an A/B test within the framework. Surprisingly, it turned out to be super easy and flexible to spin up an A/B test experiment in Rails using a gem called Split.
This blog covers everything I did to enable a Rails application to perform an A/B test in any scenario.
All right, first things first: the key ingredients to make this recipe work are below:
- Ruby on Rails
- Redis as your cache engine
- Split
Once you have all the ingredients, all we need to do is start the prep for this easy recipe.
This blog assumes that you already have Ruby on Rails up and running, along with Redis as its cache store.
Below is the outline of this tutorial:
- First we need to install the split gem.
- Make some configurations to spin up our A/B Test.
- Instantiate the test.
- Finish the test.
- Explore the results.
Step I: Installing Gem
The crux of empowering your Rails application to perform A/B tests is installing this gem, which will do wonders for you. Pretty straightforward, nothing too complex, I reckon.
gem install split
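Alternatively, and more commonly in a Rails project, the gem can be declared in your Gemfile so the dependency is tracked in Gemfile.lock:

```ruby
# Gemfile
gem 'split'
```

Then run bundle install.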
Hurray! We are done with enabling our rails to perform any sort of A/B Test.
Step II: Essential Configurations
Next up is doing all the required (or any additional) configuration needed to achieve the desired outcome. I have divided the configuration into two parts.
Experiment Configurations
This set of configurations involves everything we need to get our split test up and running, from defining experiments to deciding which algorithm you wish to use. For this, we first create a split.rb file in our Rails initializers folder. I've used the settings below as part of my configuration; you can get a full list of configuration options from the Split homepage. I'll try to explain the ones I used for my work.
rails_root = Rails.root || "#{File.dirname(__FILE__)}/../.."
rails_env = Rails.env || 'development'
split_config = YAML.safe_load(ERB.new(File.read("#{rails_root}/config/split.yml")).result)
url = "redis://#{split_config[rails_env]}"

Split.configure do |config|
  config.redis = url # this setting configures the Redis URL
  config.db_failover = true # important to define a Redis failover mechanism
  config.allow_multiple_experiments = true
  config.enabled = true
  config.persistence = :cookie # cookie persistence recommended for logged-out users
  config.persistence_cookie_length = 86400 # 1 day of cookie expiry time, in seconds
  config.include_rails_helper = true
  config.store_override = true
  config.experiments = {
    first_experiment: {
      algorithm: 'Split::Algorithms::Whiplash',
      alternatives: [
        { name: 'alternate_one', percent: 80 },
        { name: 'alternate_two', percent: 20 }
      ]
    }
  }
end
You might find this a bit overwhelming, but I will explain everything I used here.
The first four lines of the file just fetch the Redis URL for each environment.
PS: I admit I could have made it much simpler, but here I'm being a lazy developer.
Each setting wrapped in Split.configure do |config| means:
- redis - The URL of the Redis server, normally something like redis://redis_host:redis_port.
- db_failover - It is important to define what will happen in case of a Redis failure, so the application can switch over gracefully.
- allow_multiple_experiments - This allows Rails to run multiple experiments at the same time.
- persistence - This property defines the persistence mechanism you want to use for your experiments. Generally, cookies are used for logged-out users, while session adapters are used for logged-in users. However, you can create your own customised adapters as well; more on this can be found on the Split homepage.
- persistence_cookie_length - Defines the lifetime of the persistence cookie before it expires; the unit is seconds.
- store_override - Set this to false if you don't want the statistics to be affected while you are forcefully testing an alternative.
- experiments - This defines your experiments. Here we've declared one experiment named first_experiment, which has two alternatives, alternate_one and alternate_two, and uses Whiplash as the algorithm to select the alternative to serve, with weights of 80 and 20 percent respectively.
The experiments in this configuration file can also be defined in a separate YAML file, which helps when there are multiple experiments to configure.
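As a sketch, loading experiment definitions from a separate YAML file can look like this (the file name and path here are assumptions; adjust them to your project):

```ruby
# config/initializers/split.rb -- load experiment definitions from a
# separate YAML file instead of declaring them inline.
Split.configure do |config|
  config.experiments = YAML.load_file(Rails.root.join('config', 'experiments.yml'))
end
```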
Dashboard Configurations
Next we need to spin up an instance of the standalone dashboard that comes with this gem, where we can monitor all the tests we have instantiated and how each of them is performing. From the dashboard we can stop an experiment, after which it will default back to a single alternative, and we can even declare a winner, which changes everything at runtime. The Split dashboard is mounted on your Rails routes; here is the configuration I used to mount it.
match "/split" => Split::Dashboard, anchor: false, via: [:get, :post, :delete], constraints: -> (request) do
  request.env['warden'].authenticate!  # force a login if not authenticated
  request.env['warden'].user.admin?    # allow only admin users through
end
The above addition to routes.rb
allows you to access the Split dashboard, where you can monitor all the cool stuff happening on the go. However, it is important to put this dashboard behind a security mechanism so that it cannot be accessed from outside. For this we have multiple options, and we are the best judges of how secure we want it to be. Some of the options are:
- Using basic rack authentication mechanism (very basic and simple)
- Using Devise based authentication mechanism
- Using Warden based authentication mechanism
In my configuration, after defining the split URL, I've given access to authenticated admin accounts only.
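For comparison, the simplest of the three options, plain Rack basic auth, can be sketched like this, following the pattern shown in the Split README (the environment variable names are placeholders):

```ruby
# config/initializers/split.rb -- protect the dashboard with HTTP basic
# auth. SPLIT_USERNAME / SPLIT_PASSWORD are placeholder env var names.
Split::Dashboard.use Rack::Auth::Basic do |username, password|
  username == ENV['SPLIT_USERNAME'] && password == ENV['SPLIT_PASSWORD']
end
```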
Now you can access the Split dashboard at localhost:3000/split.
Step III: Instantiating a Test
Yay!! We have done the hard yards; now it's time to instantiate our first test based on the configurations we did in the above sections. All we need is to call the method ab_test,
which takes the name of the experiment (defined in the configuration) as the first argument; the variations can be passed as successive arguments.
ab_test("name_of_experiment", "alternate_1", "alternate_2")
You can get the A/B test running either in your views or your controllers, depending on the requirement:
Views
Let's say you want to change the text on your link, here's how you do it:
<a href="/url"><%= ab_test "link_experiment", "View", "Please View" %></a>
This can be modified to change anything on the go, like the colour of your button, your partial views, or any of your HTML content.
Controllers
If you want to perform a test in your controller code, you can use the same method to take a different course of action within your controllers as well. Here is a code snippet for that:
@ab_test = ab_test(:first_experiment, 'alternate_one', 'alternate_two')
The function will return one of the alternatives, and from there we can define the two different courses of action we intend to take for each outcome, as shown in the code below.
@ab_test = ab_test(:first_experiment, 'alternate_one', 'alternate_two')
if @ab_test == 'alternate_one'
# do something for alternate one
else
# do something else for alternate two
end
Step IV: Finishing a Test
Yay!! Now that we have instantiated the test, we eventually need to learn which alternative had a better chance of conversion. For this, upon successful conversion we need to finish the test we started, so that we can track the number of conversions for each alternative. To end the test for a session, we can use the snippet below.
ab_finished(:first_experiment)
This function takes a single argument: the name of the experiment (defined in the configuration).
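For example, in a checkout flow you might record the conversion in the controller action that represents success. The controller and action below are hypothetical, only the ab_finished call comes from Split:

```ruby
# app/controllers/orders_controller.rb (hypothetical controller)
class OrdersController < ApplicationController
  def create
    # ... order creation logic ...
    # mark the experiment as converted for this visitor
    ab_finished(:first_experiment)
    redirect_to root_path
  end
end
```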
Explore Conversions
By now our split test is up and running, serving different variations to users based on the configuration we have chosen. We can try it out by running through our scenarios.
Question: Wait! What if I have to test a particular scenario? Do I have to try my luck every time by refreshing the browser?
Answer: Absolutely not! All we need to do is pass the desired alternative in the URL, something like this: http://localhost:3000?ab_test[first_experiment]=alternate_two. This will serve the second alternative we defined in our configuration.
The standalone dashboard comes in quite handy for exploring the results and quickly performing some actions. Here is the view of the dashboard that we mounted on the Rails routes.