The evolution of bots: generations 1, 2 & 3
By Netacea / 22nd Oct 2020
Bots are evolving rapidly, becoming more sophisticated and launching ever more complex, targeted attacks at increasing rates. This makes detecting bots more important than ever, but also more difficult than ever. Bots of the more recent generations are hard to identify without expert bot detection tooling, putting businesses at risk of exposure to threats such as scraping, carding, and credential stuffing.
In June, we held a live webinar, in partnership with Forrester Research, discussing what lies beneath your website traffic, including the various generations of bots. Bot attacks of varying sophistication are becoming more frequent and growing in scale, so it is vital that we have the knowledge and tools available to defend against them.
The automated bot threat is evolving, and we can categorise bots in this changing attack landscape into three distinct generations:
First generation bots
First generation bots were simple crawlers, generally used to carry out simple, repetitive activities.
First generation bots were usually executed from automation programs rather than from browsers, and from a limited, fixed set of locations, often servers rather than desktop machines.
This limited spread of host locations, combined with the limited sophistication of these bots, makes them relatively easy to spot and defend against. The first generation of bot defences typically monitors network activity and takes actions such as blocklisting known bad actor IP addresses or non-residential sources (such as data centres), and applies rate limits that stop more than a pre-determined number of requests from a specific location in a certain amount of time.
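As a rough illustration of this style of defence (a minimal sketch, not any vendor's implementation), the following Python combines a static IP blocklist with a per-IP sliding-window rate limit. The IP addresses, thresholds, and window size are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

# Hypothetical first-generation defence: a static blocklist plus a
# per-IP sliding-window rate limiter. All values are illustrative.
BLOCKLISTED_IPS = {"203.0.113.10"}   # e.g. a known data-centre source (example address)
MAX_REQUESTS = 100                   # allowed requests per window
WINDOW_SECONDS = 60

_request_log = defaultdict(deque)    # ip -> timestamps of recent requests

def allow_request(ip, now=None):
    """Return True if the request from `ip` should be served."""
    now = time.time() if now is None else now
    if ip in BLOCKLISTED_IPS:
        return False
    log = _request_log[ip]
    # Discard timestamps that have fallen out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= MAX_REQUESTS:
        return False
    log.append(now)
    return True
```

A first generation bot, sending bursts of requests from one fixed address, trips this kind of check quickly; as the rest of this post explains, later generations do not.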
These defences were often provided as extensions to existing tools like WAFs and CDNs. During our webinar, Netacea CTO and co-founder Andy Still said:
“Most bots can now bypass these types of solutions.”
The graph below illustrates this, showing how a bot will keep trying different request rates to determine whether rate limiting is in place.
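The probing behaviour can be sketched in a few lines of Python. This is a hypothetical illustration only: `send_request` is a stand-in for real HTTP traffic, and the probed rates and the server's 50-requests-per-minute limit are invented for the example.

```python
# Hypothetical sketch of rate-limit probing: ramp up the request rate
# and note the point at which responses start being rejected.

def send_request(server, rate):
    # Placeholder for a real request: this toy "server" rejects any
    # traffic above its configured requests-per-minute limit.
    return rate <= server["limit"]

def find_rate_limit(server, rates=(10, 25, 50, 100, 200)):
    """Return the highest probed rate (req/min) the server still accepts."""
    highest_accepted = 0
    for rate in rates:
        if send_request(server, rate):
            highest_accepted = rate
        else:
            break  # limit found; the bot then stays just under it
    return highest_accepted
```

Once the limit is discovered, the bot simply throttles itself to stay beneath it, which is why rate limiting alone is a weak defence.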
Second generation bots
Second generation bots made use of modern browser automation and innovations such as headless browsers, which made distributing large-scale execution via real browsers practical. This increased both the complexity and the prevalence of attacks.
These bots have evolved to make the process of launching and distributing bot attacks across many locations easier and the attacks more sophisticated. Crucially, they’re able to bypass first generation bot defences.
Whilst discussing second generation bots during our webinar, Andy Still said:
Third generation bots
Third generation bots look like browsers (and are indeed often executed from consumer browsers, or even browsers modified to bypass client-side protections) and can carry out much more sophisticated attacks.
They can simulate basic human-like interactions, such as simple mouse movements and keystrokes.
To detect third generation bots effectively, you can no longer depend on client-side defences, trusting that the protection in place on the device is reliable and accurate. Instead, you need to look at the part of the interaction you control, what is happening on the server, and analyse the system functionality each user is executing to determine whether it is legitimate. A bot management solution with a server-side approach is the most effective defence against third generation bots.
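As a minimal sketch of the server-side idea (not Netacea's detection logic), the following Python scores a session purely from its server-side request log: a sustained high request rate, a heavy skew towards sensitive endpoints, or hammering a single endpoint all raise suspicion. The endpoint names and thresholds are assumptions made up for the example.

```python
# Hypothetical server-side behavioural check: score a session by the
# functionality it exercises, with no reliance on client-side signals.
SENSITIVE_ENDPOINTS = {"/login", "/gift-card/balance", "/checkout"}

def score_session(requests, window_seconds):
    """Score a session from its request paths (higher = more bot-like)."""
    score = 0.0
    rate = len(requests) / max(window_seconds, 1)
    if rate > 1.0:  # sustained rate above one request per second
        score += 1.0
    hits = sum(1 for path in requests if path in SENSITIVE_ENDPOINTS)
    if hits / max(len(requests), 1) > 0.8:  # almost exclusively sensitive endpoints
        score += 1.0
    if len(set(requests)) == 1:  # hammering one endpoint, e.g. credential stuffing
        score += 1.0
    return score

def is_suspicious(requests, window_seconds, threshold=2.0):
    return score_session(requests, window_seconds) >= threshold
```

A real system would weigh far richer signals, but the principle is the same: judge each session by what it actually does on the server, not by what the client claims about itself.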
To find out more about the changing bot landscape and evolution of the bot threat, watch our live webinar on-demand, recorded with Forrester Principal Analyst in Bot Management, Sandy Carielli:
Watch the webinar on-demand: What Lies Beneath Your Website Traffic?