What are the top misconceptions about machine learning?
By Alex McConnell / 11th May 2021
Many businesses are now talking about artificial intelligence (AI), and specifically machine learning, as a way to solve data problems more effectively.
In theory, this sounds easy. What could be better than using AI to get a computer to learn how to solve a problem over time, without manual intervention?
The reality is very different, however. Depending on the nature of the data being analyzed, there are many caveats and challenges to using machine learning as an effective tool – and various approaches to machine learning that must be carefully considered.
Lessons in machine learning from a data scientist
At Netacea, we use machine learning algorithms to power our AI-based advanced bot management solution, which earned us the position of Forrester market leader for bot management in 2020. So we know a thing or two about the finer details of using machine learning appropriately.
During our recent webinar, “Netacea’s approach to machine learning in advanced bot management”, we asked our Head of Data Science, Matt Jackson, for his thoughts on the most common misconceptions around AI and machine learning.
“Machine learning isn’t this silver bullet that’s going to solve all your problems,” Matt explained. “There’s lots of nuance and lots of types of machine learning.”
“The misconception I see most is that it’s easy. Just hire some data scientists and off you go. But that isn’t the case.”
In fact, Matt went into detail on a variety of machine learning approaches later in the webinar, with the right approach depending heavily on the problem the data scientist is trying to solve. Selecting the best model for the job requires domain knowledge, and a combination of approaches may be needed.
Choosing the right machine learning methods
Using our specialism in bot management as an example, let's look at a few differing approaches to machine learning and its algorithms, and see how each can be more or less appropriate depending on the data being analyzed.
It’s important to note that, in many cases, it’s useful to apply opposing approaches together to get the full value.
Real-time vs. batch
A real-time approach to machine learning aims to react instantly to data as it arrives. This is powerful when we at Netacea are mitigating high-risk bot attacks, such as credential stuffing or carding attacks, but it is one of the most challenging approaches because of the customization and technical infrastructure required.
A batch approach to processing data with machine learning algorithms is slower but can aggregate information over time, providing more context to act on. This is a useful strategy against "low and slow" bot attacks, such as scrapers that deliberately keep their activity under the radar to avoid detection.
In reality, we often combine these approaches to get the benefits of both.
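As a purely illustrative sketch (not Netacea's actual detection logic), the contrast can be shown with two toy checks over request logs: a real-time check that reacts to a single event the moment it arrives, and a batch check that aggregates per-IP totals over time to catch "low and slow" activity. The thresholds and field names here are invented for the example.

```python
from collections import defaultdict

# Hypothetical thresholds, chosen only for illustration.
REALTIME_BURST_THRESHOLD = 100   # requests in a single event window
BATCH_TOTAL_THRESHOLD = 500      # requests aggregated across all windows

def realtime_check(event):
    """React instantly to one event: catches sudden high-volume bursts."""
    return event["requests"] >= REALTIME_BURST_THRESHOLD

def batch_check(events):
    """Aggregate per IP over time: catches slow activity no single event reveals."""
    totals = defaultdict(int)
    for e in events:
        totals[e["ip"]] += e["requests"]
    return {ip for ip, total in totals.items() if total >= BATCH_TOTAL_THRESHOLD}

# A noisy attacker trips the real-time check immediately; a slow scraper
# stays below the burst threshold and is only visible in aggregate.
events = (
    [{"ip": "10.0.0.1", "requests": 120}]        # single burst
    + [{"ip": "10.0.0.2", "requests": 60}] * 10  # low and slow
)
flagged_now = [e["ip"] for e in events if realtime_check(e)]
flagged_batch = batch_check(events)
```

Each check misses what the other catches, which is why combining the two approaches, as described above, gives broader coverage than either alone.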
Supervised machine learning vs. unsupervised machine learning
Because supervised learning uses labelled data, it is a scientifically rigorous approach that is easy to evaluate. However, those labels bake many assumptions into the model, so supervised approaches aren't good at capturing new or under-represented bot behaviors.
Conversely, an unsupervised approach does not need labelled data, allowing it to be more adaptive and able to make use of more data. The trade-off is that its output is harder to evaluate and monitor. Once again, we often blend these approaches to get the benefits of both.
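The difference can be sketched on a single toy feature, requests per minute. This is an invented example, not Netacea's model: the supervised side is a 1-nearest-neighbour classifier that needs labelled examples and can be scored against them, while the unsupervised side is a simple two-cluster k-means that separates the same data without any labels at all.

```python
# Toy feature: requests per minute per client. Labels are hand-made for the demo.
labelled = [(5, "human"), (8, "human"), (300, "bot"), (450, "bot")]
unlabelled = [6, 7, 9, 280, 500, 10]

def supervised_predict(x):
    """Supervised (1-nearest-neighbour): relies on labels, easy to evaluate."""
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

def two_means(xs, iters=10):
    """Unsupervised (1-D k-means, k=2): no labels needed, but the two
    clusters it finds carry no names and must be interpreted by a human.
    Assumes xs contains at least two distinct values."""
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        a = [x for x in xs if abs(x - lo) <= abs(x - hi)]
        b = [x for x in xs if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return a, b

predictions = [supervised_predict(x) for x in unlabelled]
low_cluster, high_cluster = two_means(unlabelled)
```

The supervised model's output can be checked directly against held-out labels; the clusters from `two_means` separate the same traffic but deciding which cluster is "bot" is an extra, human judgment, which is the monitoring trade-off described above.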
General vs. specific
General approaches to bot mitigation allow us to provide value very quickly to new domains based on what we have seen before. General models can adapt to new scenarios and are easy to maintain. Being a general solution, though, means sacrificing the ability to catch very specific business logic attacks.
For this reason, we mix general models with more specific ones, designed to capture particular bot attack types (or patterns of data). These are incredibly effective, but it is infeasible to maintain a specific model for every stage within every attack type. Combining specific models with a general approach provides the best coverage across the bot threat landscape.
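One simple way to picture this combination (again a hypothetical sketch, with invented rules and field names rather than Netacea's actual detectors) is an ensemble that flags a request if either a broad general model or any specific attack-type detector fires:

```python
def general_model(request):
    """Hypothetical general model: a broad rule that works on any new domain
    but knows nothing about specific business logic."""
    return request["requests_per_minute"] > 200

def carding_detector(request):
    """Hypothetical specific model for one attack type (carding):
    very precise, but only catches this one pattern."""
    return request["path"] == "/checkout" and request["card_failures"] > 3

SPECIFIC_MODELS = [carding_detector]

def is_bot(request):
    """Combine both: general coverage plus targeted detectors."""
    return general_model(request) or any(m(request) for m in SPECIFIC_MODELS)

# A slow carding bot evades the general model but trips the specific one;
# a fast scraper does the opposite; a normal user trips neither.
slow_carder = {"requests_per_minute": 20, "path": "/checkout", "card_failures": 10}
scraper = {"requests_per_minute": 500, "path": "/", "card_failures": 0}
human = {"requests_per_minute": 5, "path": "/", "card_failures": 0}
```

Neither model alone catches both attackers, which is the coverage argument made above for mixing general and specific approaches.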
Machine learning requires specialist skills, knowledge and adaptability
Machine learning is not a silver bullet to solve all data science problems. Applying machine learning algorithms effectively requires domain knowledge, skill and careful consideration of the context of your data.
At Netacea, AI and machine learning are foundations of our Intent Analytics™ engine, which assesses each request made to a website, mobile application or API to detect sophisticated bot activity. Hear more about Netacea's approach to machine learning in advanced bot management in our recent on-demand webinar, hosted by Head of Data Science Matt Jackson.
If you would like to assess the malicious bots threatening your website, mobile application or API using advanced artificial intelligence, click here to book a free demo of Netacea’s leading bot mitigation solution.