In the online lending space, fraud happens fast, and the way it happens changes just as fast. This swiftly changing landscape, and the strategies needed to keep up with it, make planning for fraud prevention difficult: the ROI of a fraud prevention feature may not be apparent to product teams until an attack has occurred and the brand's losses have already been realized.
During a fraud attack, it is important to learn and act quickly. Data scientists need to be agile and experiment rapidly, often using machine learning to identify incoming threats. To address these challenges, our Fraud Analytics team has adopted a DevOps mentality to fight fraud. Our approach has expanded the responsibilities and skill sets of our team to include much more software and platform engineering than would normally be practiced by a typical analyst at a financial institution.
Building Experimental Applications in the Background
The development and release of our fraud models and processes is very different from our approach to credit models. Credit decisions are held under strict scrutiny by both our business leaders and regulators. In some cases, both the input data and the modeling techniques are not at the discretion of the analyst building the model, but are instead dictated by regulatory and business requirements. Additionally, since credit models respond to slow-moving trends, changes to them can proceed at a measured pace that would be far too slow for fraud.
While the credit decision at most FinTechs is almost entirely made by a model, fraud decisions are made by a combination of analytics and human review. Our Fraud Operations (FraudOps) team works to identify and prevent fraud in real time. FraudOps is housed in our analytics department alongside the Fraud Analytics group, and Fraud Analytics has benefited from a close partnership with FraudOps since both teams launched at Enova. FraudOps analysts are the subject matter experts on the types of fraud that Enova encounters. They manually review customer accounts that are identified as high risk through their own automated analysis, as well as accounts that are escalated through Enova's contact center. Meanwhile, Fraud Analytics specializes in building the predictive models used during underwriting to prevent fraud, and works with the tech teams to incorporate fraud prevention throughout our customer journey.
Partnering with FraudOps allows Fraud Analytics to test new models and methodologies rapidly and iteratively. While a model released to Enova's core systems makes immediate automated decisions on the fraud risk of applicants based on established patterns, other analysis runs in the background to flag suspicious activity and alert FraudOps that an application needs manual review. Once flagged, the actual action — declining or approving a loan application — is taken by a human agent. The systems that flag suspicious activity for manual review are the background applications built and maintained independently by the Fraud Analytics team.
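The shape of such a background flagging job can be sketched in a few lines of Python. The rules, field names, and thresholds below are hypothetical, invented purely for illustration; the point is that the output is a review queue for FraudOps, not an automated decline.

```python
# Minimal sketch of a background flagging job. The rules, field names,
# and thresholds are hypothetical; flagged applications are routed to
# FraudOps for manual review rather than being declined automatically.
from collections import Counter

def flag_for_review(applications):
    """Return applications worth a manual look, with the reasons why."""
    device_counts = Counter(app["device_id"] for app in applications)
    flagged = []
    for app in applications:
        reasons = []
        # Hypothetical rule: one device appearing on many applications.
        if device_counts[app["device_id"]] >= 3:
            reasons.append("device shared by 3+ applications")
        # Hypothetical rule: known disposable email domain.
        if app["email"].split("@")[-1] in {"example-disposable.test"}:
            reasons.append("disposable email domain")
        if reasons:
            flagged.append({"id": app["id"], "reasons": reasons})
    return flagged
```

A job like this can run on a schedule against recent applications, with the flagged list pushed into FraudOps' review queue.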
The freedom of being able to release models into production has allowed Fraud Analytics to experiment with numerous machine learning and modeling techniques. Our two areas of focus over the last year have been upgrading to the newest machine learning classification models and beginning to utilize graph databases. Our first machine learning models were launched within our own environment without the need for any engineering support. Our data scientists used R & Python to automate the training and scoring of several types of models, including XGBoost and Random Forests. Our team has also begun including graph analysis in our models and processes by launching our own instance of Neo4j in the cloud.
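The intuition behind the graph work can be shown without a database. Applications that share an identifier (an email, a device) are linked, and the connected components of that graph become candidate fraud rings; this is the kind of cluster a Neo4j query surfaces at scale. The sketch below uses a plain union-find in Python, with invented field names:

```python
# Illustrative sketch of the graph idea behind fraud-ring detection.
# Applications sharing an identifier are linked; connected components
# of size > 1 are candidate rings. In production this would be a
# Neo4j query; the field names here are hypothetical.

def find_rings(applications):
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for app in applications:
        app_node = ("app", app["id"])
        parent.setdefault(app_node, app_node)
        # Link each application to its identifier nodes.
        for key in ("email", "device_id"):
            union(app_node, (key, app[key]))

    rings = {}
    for app in applications:
        root = find(("app", app["id"]))
        rings.setdefault(root, set()).add(app["id"])
    return [ids for ids in rings.values() if len(ids) > 1]
```

In Neo4j the same question becomes a short Cypher traversal, which is why a graph database pays off once the identifier graph gets large.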
Full Stack Development
The focus of product development at Enova is often building out functionality within individual brands and addressing each brand's operational concerns. One of the gaps the Fraud Analytics team identified early on was the need to consider the fraud seen across all of our brands when making a decision on a new customer. In pursuit of a full-featured cross-brand fraud portal, our team decided to first build out and experiment with tools of our own.
Fraud Analytics used Python to develop the first version of WALNUT, a tool that allowed the FraudOps team to look across all Enova brands to identify shared fraud trends. Building on the WALNUT framework, the Fraud Analytics team has since added new functionality to quickly fill operational gaps. Examples include a better UI for looking at third-party data and a way to download potentially fraudulent attachments for a list of specific customer IDs.
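WALNUT's internals aren't described here, but its core idea can be sketched: a single lookup fanned out across per-brand data sources, so FraudOps sees every brand's history at once. The brand names and record layout below are invented for illustration:

```python
# Hypothetical sketch of a cross-brand lookup in the spirit of WALNUT:
# one query fanned out over per-brand data sources. Brand names and
# the record layout are invented; real sources would be databases.

def cross_brand_lookup(email, brand_sources):
    """brand_sources maps a brand name to that brand's customer records."""
    hits = []
    for brand, records in brand_sources.items():
        for record in records:
            if record["email"].lower() == email.lower():
                hits.append({"brand": brand, **record})
    return hits
```

A match at one brand with no history at another is exactly the signal a single-brand tool would miss.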
In addition to our front-end application, our team has built a vast array of other infrastructure. This includes database views and tables that store and serve up information, alert systems built around emails and Slack messages, and business intelligence dashboards. In each case, Fraud Analytics and FraudOps have worked together to find tools that meet the business's needs quickly.
Our fraud detection and fraud monitoring applications rely not only on development of the ecosystem, but also on continuing support of the infrastructure. In order to develop, deliver, and maintain our software, Fraud Analytics takes full responsibility for the end-to-end lifecycle. To us, DevOps means monitoring our own failures and fixing the problems ourselves. Our system of scripts and applications hooks into our corporate Slack account and posts job failures. We maintain our own Linux environments to execute our applications, so it's important for our analysts to have a working knowledge of platform engineering in addition to our other skills.
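The Slack hook can be as small as a standard incoming webhook. The sketch below is an assumption about how such an alert might look, not our actual implementation; the webhook URL is a placeholder, while the `{"text": ...}` payload is Slack's documented incoming-webhook format.

```python
# Sketch of a job-failure alert posted to Slack via an incoming webhook.
# The job names and webhook URL are placeholders; the {"text": ...}
# payload follows Slack's incoming-webhook format.
import json
from urllib import request

def build_failure_payload(job_name, error):
    """Format a failure message as an incoming-webhook JSON payload."""
    return json.dumps(
        {"text": f":rotating_light: Job `{job_name}` failed: {error}"}
    )

def post_to_slack(webhook_url, payload):
    """POST the payload to the webhook (makes a real network call)."""
    req = request.Request(
        webhook_url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    payload = build_failure_payload("nightly_model_scoring", "database timeout")
    # post_to_slack("<your webhook URL>", payload)
```

Wrapping each scheduled job so that any uncaught exception calls a helper like this is what turns "monitoring our own failures" into a habit rather than a chore.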
At times, the hours spent learning these skills can seem to take away from our analytics work, but overall it benefits both the team and the business: it develops well-rounded data scientists, and it lets us fix our own problems quickly without relying on outside help. These skills also remain valuable to our analysts as their careers progress.
Eyes Toward Production
Although our team has had great success building our own applications, we often run up against the limits of the scalability of our homegrown solutions. Once we have proven out the business case for a solution, our goal becomes working with engineering teams to collaborate on a more scalable version. The work we have done in refining our approach makes it easier to clarify the potential ROI and allows us to better understand the benefits and pitfalls of different implementations.