Faster Releases and Experimentation – How We Do It
Overview
We at Enova are always striving for bug-free production releases while adapting to fast-changing market needs. Through time and experience, we have evolved a process that helps support this endeavor. Here I would like to share how we at NetCredit pursue feature launches and rapid experimentation while maintaining system stability and quality.
Vikas’s earlier blog on Feature Flags sets the stage for what I have to say next. Feature toggling is a way to release code earlier, to implement new features safely and to get feedback faster. Toggling techniques like Feature Flags and A/B Testing are also good ways to mitigate risk – they avoid complex merge management and other complications arising from long-lived feature branches.
On the Enova NetCredit software development team, we use two key approaches:
- Feature Flags in config (.yml) files
- A/B Testing using an Enova internal Ruby gem
What is the Right Strategy?
The best approach depends on multiple factors. To choose the right strategy, we consider the following questions:
- Is a new feature being launched?
- Do we want to release work-in-progress without impacting production?
- Do we want to make feature testing easier while still dark releasing?
- Does the feature launch possibly need to be delayed?
- Do we want to release the feature in different environments (like staging/production) on different timelines?
If any of the above considerations apply, Feature Flags can provide the mechanism to enable/disable functionality. Perhaps the situation is different though, for example:
- Is an existing feature being refactored?
- Do multiple versions of the feature need to coexist?
- Do we need to collect and compare data points across different feature versions?
- Do we want to check customer reaction?
- Do we want to release incrementally?
- Do we want granular or segmented control over the feature?
In these cases, A/B Testing may just be what we need!
The How-To’s
Feature flags in config (.yml) files
In our Ruby environment, we add configuration to the project’s feature_flag.yml. Note that environment-specific flag settings are usually maintained by overlaying per-environment configuration on top of the base file.
password_reset:
  security_questions: true
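The per-environment overlay can then change the value for a specific environment. For example, a hypothetical production overlay (assuming per-environment YAML files are merged over the base file above) could keep the feature dark in production while it is live in staging:

# Hypothetical production overlay, merged over the base feature_flag.yml
password_reset:
  security_questions: false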
The flag can now be used in the code flow as part of an if/else block, a utility or a dependency inversion. For example:
if/else block: Watch out for repetition, and look for opportunities to move the config read to a common location.
if AppConfig.feature_flags.password_reset.security_questions
  show_security_questions
else
  send_reset_email
end
Utility: Create a utility to read and process config information. This might get difficult to read with multiple flags to process.
class FeaturesUtil
  class << self
    def config(flag_type: nil)
      return if flag_type.blank?

      AppConfig.feature_flags[flag_type]
    end

    def security_questions_pwd_reset?
      pwd_reset_flags = config(flag_type: :password_reset)
      pwd_reset_flags[:security_questions]
    end
  end
end
Dependency Inversion: Create an abstraction layer to separate the configuration from its use. The separation of concerns helps with testing. This also enables making changes to the flag structure or implementation without interrupting the consumer’s flow.
module LoginHelper
  def security_questions_for_pwd_reset?
    AppConfig.feature_flags.password_reset.security_questions
  end
end

class LoginController < ApplicationController
  include LoginHelper

  def password_reset
    redirect_to security_questions_path and return if security_questions_for_pwd_reset?

    send_reset_email
    render :password_reset_success
  end
end
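The testing benefit is that a consumer spec can stub the helper instead of manipulating the YAML. Here is a minimal sketch (assuming rspec-rails controller specs and the routes implied above; everything beyond the names from the example is illustrative):

require 'rails_helper'

RSpec.describe LoginController, type: :controller do
  it 'redirects to security questions when the flag is enabled' do
    # Stub the abstraction rather than the config read itself.
    allow(controller).to receive(:security_questions_for_pwd_reset?).and_return(true)

    post :password_reset

    expect(response).to redirect_to(security_questions_path)
  end
end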
A/B Testing – Using an Internal Gem
Create Trial: We set up a trial in the database that includes trial details, cohorts and cohort weights. When ready, the trial is activated and a test is run by adding participants at the branching point. The gem takes care of assigning participants to cohorts based on weights and some randomization logic.
trial = Trial.create!(
  trial_name: 'password_reset_process',
  description: 'Test different ways to reset user passwords',
  trial_type: TrialType[:customer],
  is_active: true
)

trial.cohorts.create!(
  cohort_name: 'email_reset',
  description: 'The control group sends a reset email to the user',
  is_active: true,
  weight: 0.8,
)

trial.cohorts.create!(
  cohort_name: 'security_questions_reset',
  description: 'The test group will prompt for security questions to verify user identity',
  is_active: true,
  weight: 0.2,
)
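Under the hood, weighted assignment commonly boils down to a cumulative draw. The sketch below is illustrative only (not the internal gem’s actual code) and assumes each cohort exposes a weight and that active weights sum to 1.0:

def pick_cohort(cohorts)
  draw = rand # uniform float in [0, 1)
  cumulative = 0.0
  cohorts.each do |cohort|
    cumulative += cohort.weight
    return cohort if draw < cumulative
  end
  cohorts.last # guard against floating-point drift in the weights
end

With the weights above, roughly 80% of participants land in email_reset and 20% in security_questions_reset.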
Create Helpers: An ABTestHelper module is created in the app code to read the trial values and determine user participation.
module ABTestHelper
  def security_questions?(customer)
    trial = Trial.active.find_by(trial_name: 'password_reset_process')
    participant = trial && customer ? trial.add_participant(customer) : nil
    participant && participant.cohort_name == 'security_questions_reset'
  end
end
Use: Invoke the helper at the branching point in the code, for example in the controller when deciding what to render.
class LoginController < ApplicationController
  include ABTestHelper

  def password_reset
    redirect_to security_questions_path and return if security_questions?(customer)

    send_reset_email
    render :password_reset_success
  end
end
Calibrate & Repeat: The test and control cohort weights are calibrated per the needs of the trial. The A/B test is run until enough data is collected and a decision can be made. Then comes deactivation and code clean-up!
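For example, shifting traffic toward the test cohort mid-trial and deactivating afterwards might look like this (a sketch, assuming the gem’s Trial and Cohort records behave like ordinary ActiveRecord models):

trial = Trial.find_by(trial_name: 'password_reset_process')

# Calibrate: move from an 80/20 split to 50/50.
trial.cohorts.find_by(cohort_name: 'email_reset').update!(weight: 0.5)
trial.cohorts.find_by(cohort_name: 'security_questions_reset').update!(weight: 0.5)

# Once a decision is made, deactivate the trial and clean up the code.
trial.update!(is_active: false)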
Some Gotchas
It is necessary to follow a disciplined approach to using and then removing Feature Flags and A/B Testing. Otherwise, they can quickly accumulate technical debt in the form of unused, confusing code in the repository.
It is also important to realize that Feature Flags increase code complexity by adding conditional logic. Long-lasting flags or A/B tests can result in multiple code paths to maintain, so they should be used in a time-bound, judicious way.
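As a sketch of what clean-up looks like: if the security-questions cohort were to win the trial above, the branch point collapses to the surviving path, and the helper, trial records and flag entries are deleted:

class LoginController < ApplicationController
  def password_reset
    # The experiment is over: only the winning path remains.
    redirect_to security_questions_path
  end
end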
Summary
With speed to market and experimentation being focus points for Enova, understanding and using feature toggling is key to efficient delivery and better decisioning. NetCredit has been using feature toggling effectively with several of its projects.
We are continuously looking to improve – we encourage and consider ideas for better tools and processes along the way.