Blog, Events & News
Scott Helme: Are we doing enough to tackle bad bots?
By Netacea / 21st Aug 2019
We recently collaborated with cybersecurity researcher Scott Helme to take on the topic of bad bots in our live webinar.
Alongside our own Head of Threat Research James Maude, Scott discussed the growing bad bot threat facing organisations on a global scale, highlighting why bot attacks are carried out in the first place.
In Scott’s blog Thinking More About Bots and Whether We Do Enough, he explores the bad bot threat with his site owner hat on, revealing how bots are affecting his traffic and his customers.
Bot protection goes beyond CAPTCHA
“One of the first things that many people, including myself, might think about when talking about bots, or more specifically stopping bots, is some kind of captcha. More specifically you’re probably familiar with the Google version of this, reCAPTCHA.
“The reason that we have these seems quite obvious: we don’t want someone/something spamming our registration form and filling our database with junk. That’s quite easy to understand, and as a result, we have reCAPTCHA on our registration endpoint. Job done! No? One of the conversations I had with James went into far more detail around why someone might not want a registration form to be abused in this manner, and there are quite a few good reasons.
"What if you use the number of signups as some form of indicator around the success of marketing campaigns or promotional activities? It could give you quite a misleading result if you think something is doing a lot better than it actually is.”
Scott provides a use case from his own site, Report URI, which ingests automated reports sent by browsers. He can’t put reCAPTCHA or anything similar in front of that reporting endpoint, so he needs a different solution to the bot problem. So, what about rate limiting? Well, as Scott aptly states, rate limiting isn’t bad; it’s just not advanced enough to do the job all by itself.
We need to know what bots want
Scott said: “What we need to do is look at applying bot countermeasures only when they are appropriate, when we have some reasonable suspicion that the current activity is bot behaviour.”
That’s where our technology comes into play. We focus on understanding what bots are doing on your site and why they’re doing it; what’s their end game?
Because it really could be anything, as Scott goes on to explain.
“Now, I can’t see the particular attraction of going after something like a Spotify account. There’s no way to extract money, probably not an awful lot of useful/personal data in there and all you can really do is listen to music. But, it turns out, that’s enough for someone to sell and make money off! These attackers are finding Spotify accounts that have spare family slots left and selling access to them.
“And another heavily targeted category is loyalty points. You know when you go to the supermarket or buy petrol and you collect reward points? Yep, they’re a target too because they’re valuable and can be traded for money or used to buy valuable items if someone gets into your account.”
Taking a smarter approach to bot management
There’s more to bot management than CAPTCHA and IP rate limiting, and mitigation strategies differ according to an organisation’s unique threat model and appetite for risk.
At Netacea, we understand bot behaviour better than anyone else, thanks to a pioneering approach to detection and mitigation. Our Intent Analytics engine focuses on what the bots are doing (not how they’re doing it), so malicious bots are rooted out and genuine users are always prioritised.
Powered by machine learning, our approach ensures frictionless access for genuine users in real time, while preventing non-human traffic from compromising your business. You’ll have the actionable intelligence you need, when you need it.
To start making smarter decisions about your traffic, talk to our team of data scientists today.