Financial fraud is certainly not a new idea. One of the earliest recorded examples dates to 300 B.C., when a Greek merchant sank his own ship to collect on an insurance policy (spoiler alert: it didn't turn out too well for him), but I'd imagine fraud has been around since the dawn of mankind. With the advent of modern technology and online commerce, new fraud tactics arise every day. For example, fraudsters with long lists of stolen identities take out loans in the stolen names and never pay them back. The speed of technological innovation, combined with the well-documented psychological finding that people are more willing to commit crimes behind the anonymity of a computer (for instance, people are much more willing to pirate a piece of software than to shoplift it from a retailer), means online lending is a ripe target for fraud.
Fraud Analytics (FA) is a critical role at Enova that helps prevent the company from funding fraudsters' bank accounts. Using analytic technologies, the FA team tries to detect the smallest anomalies that could indicate fraudulent behavior. To accomplish this, Enova's FA team stores large amounts of data on each customer, ranging from how a consumer behaves on our website to basic credit-worthiness characteristics, anything that might reveal something anomalous. This data is then analyzed both at an individual loan level and at a portfolio level. On a loan level, our algorithms ask questions such as: is this person traversing our website in a logical way? Further up, at the portfolio level, our algorithms try to discover clusters of customers that may have come from a single data breach.
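The portfolio-level idea can be sketched very simply: group applications by attributes they should rarely share, and flag unusually large groups. This is a minimal illustration only; the field names, the grouping keys, and the threshold are all assumptions, not Enova's actual signals.

```python
from collections import defaultdict

# Hypothetical loan applications; fields and values are illustrative only.
applications = [
    {"id": 1, "email_domain": "gmail.com", "ssn_prefix": "123"},
    {"id": 2, "email_domain": "mailhost.example", "ssn_prefix": "987"},
    {"id": 3, "email_domain": "mailhost.example", "ssn_prefix": "987"},
    {"id": 4, "email_domain": "mailhost.example", "ssn_prefix": "987"},
    {"id": 5, "email_domain": "yahoo.com", "ssn_prefix": "555"},
]

def suspicious_clusters(apps, keys, min_size=3):
    """Group applications by a tuple of shared attributes and return
    clusters at least min_size large -- many unrelated applicants
    sharing these fields could point to one breached data source."""
    groups = defaultdict(list)
    for app in apps:
        groups[tuple(app[k] for k in keys)].append(app["id"])
    return {k: ids for k, ids in groups.items() if len(ids) >= min_size}

flagged = suspicious_clusters(applications, ("email_domain", "ssn_prefix"))
# Applications 2, 3, and 4 form one cluster worth a closer look.
```

In practice the interesting part is choosing which attribute combinations are worth clustering on, which is exactly where the human insight described below comes in.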
Now, the more analytical readers may be asking themselves: well, why don't you just feed all this data through a machine learning algorithm and let it solve the problem itself? In practice, this is nearly impossible. In typical supervised training, one can reach conclusions such as: people similar to this will default at an n% rate. It is a much harder problem when the method of attack is unknown and constantly changing; further, the target variable isn't always the same. To make matters worse, the population FA tries to find is less than 1% of the whole, and that 1% is intentionally trying to blend into the general population to avoid detection. Common industry methods all involve some sort of oversampling or unsupervised learning; however, our experience has shown that these methods typically yield overfitted or just plain inaccurate results.
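A tiny simulation makes the base-rate problem concrete. With fraud under 1% of the population, a "model" that simply predicts non-fraud for everyone scores near-perfect accuracy while catching nobody, which is why naively trained classifiers (and naive accuracy metrics) mislead here. The numbers below are simulated, not real portfolio data.

```python
import random

random.seed(0)
# Simulated labels: roughly 1% fraud (1), 99% legitimate (0).
labels = [1 if random.random() < 0.01 else 0 for _ in range(100_000)]

# A degenerate "model" that always predicts non-fraud...
predictions = [0] * len(labels)

# ...looks excellent by accuracy,
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# ...yet its recall on the fraud class is exactly zero.
frauds = sum(labels)
caught = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = caught / frauds
```

The same trap awaits a learned model that drifts toward the majority class, which is what oversampling tries (and, in our experience, often fails) to correct.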
At Enova, the FA team believes that a little common sense and industry knowledge will almost always outperform blind modelling. This is why we have our operations group work alongside the analytics group as our "eyes and ears on the ground." Our Fraud Operations group is responsible for reviewing certain populations of loans suggested by the FA group. The operations group provides invaluable insight into how fraudsters behave on a loan scale, so that the analytics team can build and verify models to be deployed at a portfolio level. For instance, without listening to an abundance of phone calls, one would be hard pressed to identify a male fraudster who commonly steals female identities and pretends to be a woman on the phone. Likewise, we would never have discovered that people who weren't firefighters were using firemen.net as their email domain.
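Insights like the firemen.net one often become simple, explainable rules rather than model features. A minimal sketch of that kind of rule follows; the mapping, field names, and helper are hypothetical, illustrating the idea rather than any rule we actually run.

```python
# Hypothetical mapping from email domain to the occupation it implies.
OCCUPATION_DOMAINS = {"firemen.net": "firefighter"}

def domain_occupation_mismatch(applicant):
    """Flag applicants whose email domain implies an occupation
    different from the one they stated on the application."""
    domain = applicant["email"].split("@")[-1].lower()
    implied = OCCUPATION_DOMAINS.get(domain)
    return implied is not None and applicant["occupation"] != implied

# An accountant applying from a firefighter-themed domain gets flagged.
result = domain_occupation_mismatch(
    {"email": "jdoe@firemen.net", "occupation": "accountant"}
)
```

The appeal of such rules is that a human can audit them directly, which keeps the feedback loop between operations and analytics tight.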
Every day brings a new problem, and every day a new fraudster will think they are clever enough to outsmart our defenses. This cycle of identifying idiosyncratic behavior and building models that prevent fraudulent activity while minimizing impact on legitimate customers is what Fraud Analytics does.