Adaptive Threat Architecture
10th Jul 2018
Your Threat Score
At Netacea we’re all about delivering insight you can actually use. Every web estate is different, and many larger websites face custom bots written specifically to target them. We don’t believe you can simply drop in the same ‘black box’ bot mitigation layer and expect it to work for you.
Machine Learning Based Labelling
Instead we offer a simple, visually intuitive interface that lets you set thresholds and key critical paths for the types of behaviour that may or may not be acceptable on your site. Scraper bots may be very useful for generating affiliate marketing revenue on one site, and a disaster on a content site whose IP is constantly scraped and reproduced without a paywall elsewhere on the internet.
We give you the tools to teach the machine learning what is acceptable, and which paths are critical and need higher levels of protection in your web estate.
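As a rough sketch of the idea, per-path risk criteria can be thought of as a small configuration keyed by path prefix. The path names, rate limits, and the lookup helper below are illustrative assumptions, not Netacea’s actual interface:

```python
# Hypothetical per-path risk configuration: sensitive paths get stricter
# limits. Paths, limits, and helper names are illustrative only.
CRITICAL_PATHS = {
    "/login":    {"risk": "high", "requests_per_min": 30},
    "/checkout": {"risk": "high", "requests_per_min": 20},
    "/product":  {"risk": "low",  "requests_per_min": 300},
}

DEFAULT_POLICY = {"risk": "low", "requests_per_min": 600}

def policy_for(path):
    """Return the policy for the longest matching path prefix."""
    best, best_len = DEFAULT_POLICY, -1
    for prefix, policy in CRITICAL_PATHS.items():
        if path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = policy, len(prefix)
    return best
```

Unlisted paths fall through to a permissive default, so tightening protection on a new critical path is a one-line configuration change.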
Low Latency Performance
Our adaptive architecture automatically pre-empts potential bad traffic, and kicks in-line only when critical conversion paths are under threat or abnormal behavioural activity is detected.
Our machine learning learns from your environment and automatically enables our adaptive threat architecture, which pre-empts attacks and offers inline, enhanced protection from bad actors only when required.
Under normal conditions, the machine learning runs in passive mode and builds up a complete behavioural picture of your visitors. Passive mode ensures there is no speed reduction or latency whatsoever in our architecture. This unique approach means we only operate as a reverse proxy, and only add latency, for visitors we already suspect may have an adverse effect on a critical path in your infrastructure.
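The passive-to-inline switch described above can be sketched as a simple state machine: observe out of band by default, and only go inline when abnormal behaviour hits a critical path. The class, method names, and threshold are assumptions for illustration:

```python
# Illustrative sketch of the passive/inline mode switch; not a real API.
class TrafficMonitor:
    PASSIVE = "passive"  # observe out of band, zero added latency
    INLINE = "inline"    # act as a reverse proxy for suspect traffic

    def __init__(self, anomaly_threshold=3.0):
        self.mode = self.PASSIVE
        self.anomaly_threshold = anomaly_threshold

    def observe(self, anomaly_score, on_critical_path):
        # Only go inline when a critical path shows abnormal behaviour;
        # everything else stays in zero-latency passive mode.
        if on_critical_path and anomaly_score >= self.anomaly_threshold:
            self.mode = self.INLINE
        return self.mode
```

The key design point is the default: the monitor never touches the request path until both conditions, criticality and abnormality, are met.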
Machine learning takes large amounts of processing power, and examining the complex interactions of web visitors, together with IP history, digital fingerprints and the full range of digital provenance data, takes time. By the time a threat is assessed and passed to a WAF or other bot mitigation service, the total elapsed time can easily be several minutes - far too late to prevent a breach.
Once the machine learning understands your web estate’s critical paths and your own risk criteria, which can be set using a simple visual tool, it can follow visitor flow in the background without sitting in-line as a reverse proxy examining all the data. Your threat appetite also changes according to the path visitors take through your website. For example, a large, sudden increase in inbound visitors to the shopping cart or login pages will naturally cause more concern than visits to a product page.
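One toy way to express "the same spike worries us more on some paths" is to weight the relative traffic increase by a per-path sensitivity. The weights and function below are invented for illustration:

```python
# Toy path-weighted spike scoring: an identical relative increase in
# traffic scores higher on sensitive paths. Weights are invented.
PATH_WEIGHT = {"/login": 5.0, "/cart": 4.0, "/product": 1.0}

def spike_score(path, current_rate, baseline_rate):
    """Weight the relative traffic increase by path sensitivity."""
    if baseline_rate <= 0:
        return float("inf")  # no baseline yet: treat as maximally suspect
    increase = max(current_rate / baseline_rate - 1.0, 0.0)
    return increase * PATH_WEIGHT.get(path, 1.0)
```

A doubling of login traffic would then score five times higher than the same doubling on a product page, matching the intuition in the paragraph above.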
All the data is analysed historically by the machine learning engine, which is then able to build a rich, detailed profile separating authentic visitors from fake actors, browser emulators and obvious bad actors. The heavy processing needed to establish standard deviations of ‘normal’ versus abnormal behaviour is all done out of line, without affecting your site’s visitors in any way.
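A minimal sketch of that out-of-line baselining, using nothing more than the standard deviation the paragraph mentions: historical per-visitor metrics are summarised offline, and live values are flagged only when they sit far from the learned normal. The metric, threshold, and function names are assumptions:

```python
# Minimal out-of-line baselining sketch using standard deviations.
from statistics import mean, stdev

def build_baseline(samples):
    """Summarise historical per-visitor metrics (e.g. requests/min)."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def is_abnormal(value, baseline, sigma=3.0):
    """Flag values more than `sigma` standard deviations from normal."""
    if baseline["stdev"] == 0:
        return value != baseline["mean"]
    z = abs(value - baseline["mean"]) / baseline["stdev"]
    return z > sigma
```

Because `build_baseline` runs over historical data, none of this work sits in the request path; only the cheap `is_abnormal` comparison needs to happen near real time.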
Adaptive Threat Mitigation - Find The Slow & Low
When we do pick up a potential threat, Netacea can dynamically reconfigure itself and use a wait state as a mitigation method on just the suspicious traffic. Normal, cookied visitors carry on as usual and are not affected. The potentially suspicious traffic is placed on an alternate path, and the reverse proxy is used as a gigantic buffer zone to protect your estate from unwanted visitors while we identify the source of the suspicious traffic. The wait state in these circumstances can be a very powerful way of dealing with both high-threshold bot attacks and the ‘slow and low’ constant attacks deliberately tuned to evade normal threshold-based WAF rule sets.
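The wait-state idea above can be sketched as routing: suspect requests pass through a delaying buffer while trusted visitors go straight through. The handler, delay value, and injectable `clock` parameter are illustrative assumptions, not Netacea’s implementation:

```python
# Hedged sketch of wait-state mitigation: delay only suspect traffic.
import time

def handle_request(is_suspicious, wait_seconds=2.0, clock=time.sleep):
    """Trusted visitors see zero added latency; suspects are held."""
    if is_suspicious:
        clock(wait_seconds)  # hold the request in the buffer zone
        return "delayed"
    return "passthrough"
```

Even a short delay is disproportionately expensive for a bot making thousands of requests, which is why the same mechanism also degrades ‘slow and low’ campaigns that individually stay under rate thresholds.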