
Dormant Fraud and Onboarding Friction: How to Battle Both with Behavioral Analytics

Published: December 5, 2024 by Devon Smith

Dormant fraud, sleeper fraud, trojan horse fraud... whatever you call it, it’s an especially insidious form of account takeover (ATO) fraud that fraud teams often can’t detect until it’s too late. Fraudsters create accounts with stolen credentials or gain access to existing ones, onboard under the fake identity, then lie low, waiting for an opportunity to attack.

It takes a strategic approach to defeat the enemy from within, and fraudsters assume you won’t have the tools in place to even know where to start.

Dormant fraud uncovered: A case study

NeuroID, a part of Experian, has seen the dangers of dormant fraud play out in real time.

One of our new customers, a payment processor, asked us to backtest its user base for potential signs of fraud. When we analyzed the onboarding behavioral data of its customer base, we discovered that more than 100,000 accounts were likely dormant fraud. The payment processor hadn’t considered these accounts suspicious and saw no risk in letting them remain active, even though none of them had completed a transaction since onboarding.

Why did we flag these as risky?

  • Low familiarity: Our testing revealed behavioral red flags, such as copying and pasting into fields or constant tab switching. These are strong indicators that an applicant is using personally identifiable information (PII) that isn’t their own.
  • Fraud clusters: Many of these accounts used the same web browser, device, and IP address during sign-up, suggesting that one fraudster was opening multiple accounts. We found hundreds of clusters like these within our customer’s user base, many with 50 or more accounts tied to the same device and IP address.
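
As a minimal sketch of the second signal, flagging fraud clusters can start as a simple group-by on shared sign-up fingerprints. The field names (`account_id`, `device_id`, `ip`) and the threshold below are illustrative assumptions for this sketch, not NeuroID’s actual schema or logic:

```python
from collections import defaultdict

def find_fraud_clusters(signups, min_cluster_size=50):
    """Group sign-ups by (device, IP) fingerprint and flag large clusters.

    `signups` is a list of dicts with hypothetical keys 'account_id',
    'device_id', and 'ip'. Real systems would fold in more fingerprint
    dimensions (browser, OS, screen size) and fuzzier matching.
    """
    clusters = defaultdict(list)
    for s in signups:
        clusters[(s["device_id"], s["ip"])].append(s["account_id"])
    # Keep only fingerprints shared by suspiciously many accounts.
    return {
        fingerprint: accounts
        for fingerprint, accounts in clusters.items()
        if len(accounts) >= min_cluster_size
    }
```

In the case study above, the interesting clusters were those with 50 or more accounts on one device/IP pair; the threshold is a tunable trade-off between catching smaller fraud rings and falsely flagging shared networks such as offices.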

It was clear that this payment processor’s fraud stack had gaps that left them vulnerable. These dormant accounts could have caused significant damage once mobilized: receiving or transferring stolen funds, misrepresenting their financial position, or building toward a bust-out.

Dormant fraud thrives in the shadows beyond onboarding. These fraudsters keep accounts “dormant” until they’re long past onboarding detection measures. And once they’re in, they can often easily transition to a higher-risk account — after all, they’ve already confirmed they’re trustworthy. This type of attack can involve fraudulent accounts remaining inactive for months, allowing them to bypass standard fraud detection methods that focus on immediate indicators.

Dormant fraud gets even more dangerous when a hijacked account has built trust just by existing. For example, some banks provide a higher credit line just for current customers, no matter their activities to date. The more accounts an identity has in good standing, the greater the chance that they’ll be mistaken for a good customer and given even more opportunities to commit higher-level fraud.

This is why we often talk to our customers about the idea of progressive onboarding as a way to overcome both dormant fraud risks and the onboarding friction caused by asking for too much information, too soon.

Progressive onboarding, dormant fraud, and the friction balance

Progressive onboarding moves away from the one-size-fits-all model by gathering only truly essential information initially and asking for more as customers deepen their engagement. This directly counterbalances the approach that turns customers off by asking for too much too soon, adding too much friction at initial onboarding. It also builds in the ongoing checks that fight dormant fraud. We’ve seen this approach (already popular in payment processing) prove useful across every type of financial business. Here’s how it works:

  1. A prospect visits your site to explore options. They may just want to understand fees and get a feel for your offerings. At this stage, you might ask for minimal information, just a name and email, without requiring a full fraud check or credit score. It’s a low-commitment ask that keeps things simple for casual prospects who are just browsing, while keeping your costs low so you don’t spend a full fraud check on an uncommitted visitor.
  2. As the prospect becomes a true customer and begins making small transactions, say a $50 transfer, you request additional details such as date of birth, physical address, or phone number. This minor step-up allows for a basic behavioral analytics fraud check while keeping the time and PII requested proportionate to a low-risk activity.
  3. With each new level of engagement and transaction value, the information requested increases accordingly. If the customer wants to transfer larger amounts, say $5,000, they’ll understand the need to provide more details. This aligns with the idea of a privacy trade-off: a customer’s willingness to share information grows as their trust and need for services increase. Meanwhile, your business allocates resources to those who are fully engaged rather than to one-time visitors or casual sign-ups, and keeps an eye on dormant fraudsters who might have expected no barrier to additional transactions.
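
The escalation above can be sketched as a tier table keyed by transaction value. The thresholds, field names, and check names here are illustrative assumptions for the sketch, not a real product configuration:

```python
# Hypothetical progressive onboarding tiers: each tier adds fields and
# fraud checks on top of the tiers below it. Thresholds and names are
# illustrative, not a real policy.
TIERS = [
    # (max transaction amount, fields to collect, checks to run)
    (0, ["name", "email"], []),
    (500, ["dob", "address", "phone"], ["behavioral_check"]),
    (float("inf"), ["government_id"], ["behavioral_check", "document_verification"]),
]

def requirements_for(amount):
    """Return the cumulative fields and checks required for a transaction."""
    fields, checks = [], []
    for threshold, tier_fields, tier_checks in TIERS:
        fields += tier_fields
        checks += [c for c in tier_checks if c not in checks]
        if amount <= threshold:
            break
    return fields, checks
```

A browsing prospect (amount 0) only triggers the name-and-email tier; a $50 transfer steps up to basic PII plus a behavioral check; a $5,000 transfer requires everything. The key design point is that each tier is cumulative, so a dormant account that suddenly attempts a large transfer still hits the full set of controls it skipped at sign-up.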

Progressive onboarding is an effective approach not just for dormant fraud and onboarding friction, but also for fighting fraudsters who sneak in through unseen gaps. In another case, we worked with a consumer finance platform to identify gaps in its fraud stack. In one attack, fraudsters probed until they found the product with the lowest barrier to entry; once inside, they immediately launched a full-force bot attack targeting higher-value returns. The attack wasn’t based on dormancy, but on complacency. The fraudsters assumed this consumer finance platform wouldn’t realize that low-controls onboarding for one solution would give them easy access to much more. And they were right.

After closing that vulnerability, we helped this customer build progressive onboarding with behavior-based fraud controls for every single user: those with existing accounts who had built that assumed trust, as well as those entering through low-risk entry points. This weeded out dormant fraudsters who had already onboarded and were trying to exploit that trust, since every account now had to pass behavioral analytics and other controls calibrated to the risk level of the product.

Behavioral analytics gives you confidence that every customer is trustworthy, from the moment they enter the front door to even after they’ve kicked off their shoes to stay a while.

Behavioral analytics shines a light on shadowy corners

Behavioral analytics is proven beyond onboarding: at any point in a user interaction, our signals detect low familiarity, high-risk behavior, and likely fraud clusters. In our experience, building a progressive onboarding approach on just two of these signals, low familiarity and fraud clustering, yields significant results and helps stop sophisticated fraudsters from perpetrating dormant fraud, including large-scale bust-outs.

Want to find out how progressive onboarding might work for you? Contact us for a free demo and deep dive into how behavioral analytics can help throughout your user journey.
