Cybersecurity Sessions recap: Artificial Engagement and Ad Fraud

Ad fraud, such as fake clicks and impressions generated by bots on ad networks, is costing marketeers huge chunks of their digital advertising budgets.
In a recent episode of the Cybersecurity Sessions podcast, Netacea CTO Andy Still quizzed Beacon CTO Stewart Boutcher about ‘artificial engagement’, a term Stewart has championed as a member of the Data and Marketing Association North Council. Artificial engagement refers to fake clicks and impressions generated by bots on ad networks, costing marketeers huge chunks of their digital advertising budgets.
Andy and Stewart found plenty to talk about; both tackle bot problems, but from different perspectives. At Netacea, Andy focuses on business logic attacks carried out by bots, such as credential stuffing, account takeover, carding and scalping, whereas at Beacon, Stewart concentrates on cracking down on bot-induced ad fraud.
You can listen to the 20-minute episode in full at any time, but here are just a few of the topics that came up.
Who are the winners and losers in ad fraud?
The losers in ad fraud are easy to define. In simple terms, anyone running digital advertising online can be a victim of ad fraud. Stewart defines ad fraud as a term that “encompasses a lot of different areas: fake clicks, fake impressions, programmatic ad fraud. But broadly speaking, it’s any attempt to defraud digital advertising networks for financial gain.”
The ‘winners’ are harder to pin down, but the most obvious are the perpetrators: usually those who build and operate the bots and botnets that carry out ad fraud. These bots engage with adverts in huge volumes to build up realistic personas of people who interact with advertising, then click on adverts placed on a fake website. The publisher of this fake website nets a profit from bots clicking on real adverts, paid for by advertisers who believe the site to be legitimate based on the volume of realistic and seemingly valuable traffic it receives.
This poses the question of where ad networks sit. Since they get paid per click or impression, regardless of whether it is performed by a human or a bot, do they care about ad fraud? Or do they need to act against ad fraud to keep their customers happy?
Stewart and Andy agreed that, whilst ad networks probably don’t want bots to click on their customers’ adverts, detecting such activity is hard. These sophisticated bots are constantly evolving to evade detection, looking ever-more human to ad networks and advertisers. It takes special focus to develop technology that can stand up to the challenge of detecting bots.
Building up fake consumer profiles
Part of the problem is that ad networks place higher value on clicks with stronger intent signals. Advertisers can target specific audiences, not just based on demographic but also based on behavior – for example, are they a returning visitor? How much money have they spent on the site previously? What products are they looking at?
Much of this information is collected using third party tracking cookies, which have become controversial with consumers and created privacy concerns due to the amount of information they collect. Cookies can easily be manipulated by bots to create a valuable “persona” that ad networks will charge a premium to advertise to.
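As a purely illustrative sketch (not Netacea's or Beacon's actual detection method), one simple heuristic for spotting bot-driven clicks is timing analysis: automated clickers often fire at near-constant intervals, while human click timing is far more erratic. All function names, data shapes and thresholds below are hypothetical, and real detection systems combine many stronger signals than this:

```python
from statistics import mean, stdev

def flag_suspicious_sessions(sessions, min_clicks=5, max_jitter=0.05):
    """Flag sessions whose inter-click intervals are suspiciously uniform.

    sessions: dict mapping session_id -> list of click timestamps (seconds).
    Bots often click at metronome-like intervals; humans show far more
    variation. Thresholds here are illustrative, not tuned values.
    """
    suspicious = []
    for session_id, timestamps in sessions.items():
        if len(timestamps) < min_clicks:
            continue  # too few clicks to judge either way
        ts = sorted(timestamps)
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        avg = mean(gaps)
        if avg == 0:
            suspicious.append(session_id)  # simultaneous clicks: clearly automated
            continue
        # Coefficient of variation: a very low value means near-identical gaps
        if stdev(gaps) / avg < max_jitter:
            suspicious.append(session_id)
    return suspicious

clicks = {
    "session-a": [0, 2.0, 4.01, 6.0, 8.02, 10.0],   # evenly spaced: bot-like
    "session-b": [0, 3.4, 11.9, 12.6, 40.2, 55.0],  # irregular: human-like
}
print(flag_suspicious_sessions(clicks))  # → ['session-a']
```

In practice, as Stewart notes, sophisticated bots deliberately randomize exactly these kinds of signals to look human, which is why single-heuristic approaches fall short.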
Andy asked about the alternatives to third party cookies, which are largely being phased out by mainstream browsers. Whilst discussing first party tracking, where companies like Google can track our behavior when logged in via our browsers, Stewart warned that “they have the same problems built in… which is that they’re gameable by a sophisticated bit of software that behaves in certain kinds of ways making it look human. So, it doesn’t solve that problem really.”
Another issue with artificial engagement Stewart pointed out was that, while some businesses (for example, within the job services industry) buy traffic to inflate their visitor numbers and make themselves more attractive to advertisers, there is no way to validate whether this traffic is ‘real’ or bot. This could be down to a naïve belief that the traffic is legitimate, or “cynically, you might argue that some of these sites aren’t that bothered about whether it’s human, they’re merely looking at the traffic levels,” Stewart suggested.
Stewart also reminisced about the era when it was almost standard practice, when setting up a new Twitter account, to buy 500 or 1,000 followers to create the illusion of popularity with real users, aiding organic growth.
Why is ad fraud so rife across industries?
Aside from being highly profitable, operating ad fraud bots is now easier than ever. In the podcast, Stewart mentioned that the advent of cloud computing removed many of the technical barriers to creating highly distributed botnets: “You don’t have to worry about compromising someone’s computer, you don’t have to worry about downloading something and breaking it.”
What can be done to stop ad fraud bots?
Ad fraud is undoubtedly a huge problem for any business using online advertising, with the cost amounting to 20-40% of overall ad budgets. In addition to the cost of serving the adverts, Andy pointed out that ad fraud bots are also taking the place of real customers and their potential sales.
Whilst acknowledging the difficulty of blocking 100% of bots from clicking adverts, Stewart asserted that for a business seeing 45% of its ad clicks come from bots, reducing this to 7% or 8% made a huge difference.
Listen to the podcast in full below:
Netacea's clean, scalable architecture delivers fast insight and accurate bot filtering that protects your revenue and customers on auto-pilot.