How bots are ruining online gaming for players and publishers
By Alex McConnell / 5th Aug 2021
The old saying goes “cheaters never prosper”, but sadly that is not always the case in online gaming. In dark corners of the internet, new ways of cheating at online games – and getting away with it – are being developed on an alarming scale.
Both purchasable and “free to play” (F2P) games now offer rewards either in exchange for real world currency or through “grinding” in game, which takes time and effort. These rewards can be purely cosmetic, provide status in game, or can offer competitive advantages over other players.
In 2020, microtransactions in free-to-play PC games generated $22.7bn in revenue. The value of these rewards makes them an attractive prospect to bad actors, who can turn a profit by obtaining them illegitimately using online gaming bots and selling the in-game assets on to unscrupulous players at a reduced price.
How do online gaming cheaters gain their advantage?
Here are some of the most common methods bad actors will use to gain access to in-game currency or assets for resale:
Account takeover (ATO) attacks
The adversary gains access to existing accounts that already hold a points or currency balance, and sells these on forums or the dark web. Credentials are usually stolen via data dumps, with login details verified or cracked using credential stuffing bots and tools such as OpenBullet.
Mass account creation
New accounts are mass-created using automated bots. These are then leveled up, or assets are earned through grinding, either manually or using specialized in-game automation tools. Once a certain balance of in-game currency or assets is obtained, the accounts are sold on. The more assets, or the higher the level of the account, the higher the selling price.
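One common countermeasure to mass account creation is velocity checking: flagging sources that sign up accounts faster than any plausible human. The sketch below is illustrative only; the `SignupMonitor` class, window, and threshold are assumptions for the example, not any specific vendor's implementation.

```python
from collections import defaultdict, deque

# Illustrative thresholds -- real systems tune these per traffic profile.
WINDOW_SECONDS = 3600
MAX_SIGNUPS_PER_IP = 5

class SignupMonitor:
    """Flags IPs that create accounts faster than a plausible human rate."""
    def __init__(self, window=WINDOW_SECONDS, limit=MAX_SIGNUPS_PER_IP):
        self.window = window
        self.limit = limit
        self.signups = defaultdict(deque)  # ip -> timestamps of recent signups

    def record_signup(self, ip, timestamp):
        """Record a signup; return True if the IP now looks automated."""
        q = self.signups[ip]
        q.append(timestamp)
        # Drop timestamps that have fallen outside the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit

monitor = SignupMonitor()
# Six signups from one IP inside an hour: the sixth trips the flag.
flags = [monitor.record_signup("203.0.113.7", t) for t in range(0, 60, 10)]
print(flags)  # first five are False, the sixth is True
```

In practice, IP address alone is a weak signal (bots rotate proxies), so real deployments combine it with device fingerprinting and behavioral signals, but the sliding-window pattern is the same.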
Theft of in-game assets
Adversaries develop tools that remove the cost of items in game, allowing users to bypass microtransactions and steal assets. These tools are sold directly to players for a low price.
Ban evasion
If the game detects cheating, the publisher may ban the offending accounts or even devices. Adversaries sell the means to bypass such bans, either by creating fresh accounts or by spoofing other devices or connections, allowing cheaters to use stolen or illegitimately obtained in-game assets without reprisal.
Competitive cheat bots
While the previously mentioned tools are designed to extract currency for financial gain, others are designed to give players an unfair competitive advantage. The most notorious examples of online gaming bots in this category are aimbots and trigger bots, which rely on incoming data about other players to automatically aim at and shoot enemies as soon as they appear in the bot user’s field of view.
World-hacking is another well-known method used by cheaters. Dishonest players modify their game to receive location information about opponents, while making walls and textures in their own game translucent, allowing them to seek out other players and gain an unjust advantage. This is commonly referred to as a “wallhack”.
Why is cheating such a problem for players and game developers?
Other players gaining a competitive advantage through cheating is obviously frustrating for those who play fairly. It makes the game less fun, and players can quickly tell when they are playing against a bot or someone using a modified game. This goes against the appeal of peer-to-peer online gaming.
However, unprincipled players stealing in-game currency and assets is also unfair on honest gamers, who must spend real money or significant time and effort earning such rewards.
As annoying as cheating is for legitimate players, it’s even worse for game developers and publishers.
In-game virtual economy crashes
As with any other economy, the virtual economy of an online game with its own currency has sources (ways currency enters circulation) and sinks (ways it leaves). These must be balanced carefully so players can earn and then spend their currency at a steady rate; otherwise the virtual economy becomes unstable.
When currency can be easily farmed or obtained through automation rather than through the means the game’s developers intended (usually steady earning by human players, not bots), inflation weakens the buying power of that currency, crashing the game’s virtual economy. Grinding, or even buying virtual currency with real-world money, becomes less worthwhile; fewer people play the game, and it generates less profit.
Although the assets being stolen from online games are virtual and theoretically infinite, widespread and accessible means of theft results in fewer microtransactions for the game’s publisher.
Additional overheads and resources to stop the cheaters
Game developers spend significant time and money trying to detect and block exploits used by bad actors, which creates operational overheads. When a cheating method is blocked, the adversary responsible often finds a way around the block very quickly, creating an arms race that drains resources for the game’s developers. This can impede other work, such as the release of new features that might be commercially advantageous.
Game lifecycles are reduced
All these factors contribute to the lifecycle of the game becoming shorter, meaning gamers will stop playing it sooner in favor of other titles. This means the publisher can no longer generate revenue from the game and must spend significant money to develop new titles; it is more economical to keep an already released title popular than it is to develop, produce and market a brand-new title.
How can game publishers stop the bots and the cheaters prospering?
To truly curb the damage caused by such attacks, game publishers must make it financially unviable for adversaries to operate on their online games.
Netacea have spearheaded the creation of the BLADE (Business Logic Attack Definition) framework, which breaks automated attacks down into stages based on the techniques being used. By looking at the different stages of the attack, businesses can determine the best place to break the chain for minimum intervention and cost, and maximum effectiveness.
For example, one of the earliest interventions would be to prevent account takeover (ATO) attacks so that adversaries can’t steal accounts or mass create new ones and go on to execute further attacks.
With server-side Bot Management from Netacea, powered by AI, we can identify the intent of each request made to any web-based system. Our bot detection algorithms examine behavioral characteristics to pick out credential stuffing activity and block these requests in real time, using our Intent Analytics™ engine and our ever-evolving database of known offenders. As a result, these attacks are stopped at the first hurdle, cutting off a valuable stream of income for bad actors.
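To make the behavioral idea concrete, here is a minimal sketch of one signal commonly used to spot credential stuffing: a single source attempting logins against many distinct usernames with a high failure rate. This is an illustrative heuristic with assumed thresholds, not Netacea's actual detection logic.

```python
from collections import defaultdict

# Assumed thresholds for the example: stuffing runs fail on most attempts,
# and bots cycle through many accounts where humans retry just one.
FAILURE_RATE_THRESHOLD = 0.9
MIN_DISTINCT_USERNAMES = 20

def suspicious_sources(login_events):
    """login_events: iterable of (source_ip, username, succeeded) tuples.
    Returns the set of source IPs whose pattern looks like stuffing."""
    attempts = defaultdict(int)
    failures = defaultdict(int)
    usernames = defaultdict(set)
    for ip, user, succeeded in login_events:
        attempts[ip] += 1
        usernames[ip].add(user)
        if not succeeded:
            failures[ip] += 1
    return {
        ip for ip in attempts
        if len(usernames[ip]) >= MIN_DISTINCT_USERNAMES
        and failures[ip] / attempts[ip] >= FAILURE_RATE_THRESHOLD
    }

# A stuffing run: one IP cycling through leaked credentials, all failing.
events = [("198.51.100.9", f"user{i}", False) for i in range(50)]
# A legitimate user mistyping a password twice before getting in.
events += [("203.0.113.4", "alice", False)] * 2 + [("203.0.113.4", "alice", True)]

print(suspicious_sources(events))  # only the stuffing IP is flagged
```

Real-world detection layers many such signals (request timing, header fingerprints, proxy reputation) and must handle attackers who distribute attempts across thousands of IPs, but the distinct-username/failure-rate combination illustrates the behavioral approach.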