AI vs AI. The evolution of cheating in video games, Part 2.
- Andrew Hogan
- 15 hours ago
- 8 min read

Brief recap
In Part 1 of this briefing we looked at how AI is changing both the development of cheats and the anti-cheat systems designed to detect and stop them. We showed how AI is helping cheat devs develop aimbots and wallhacks that appear more human and are thus harder to detect; at the same time, Game Security teams are leveraging machine learning trained on ever-increasing datasets of player behavior to radically improve heuristic detections.
We also explored how cheat devs are increasingly adopting adversarial techniques optimised by AI, and in turn how security teams make use of LLMs to handle growing volumes of threat intelligence.
A battle of AI vs AI is already underway. What about the future?
In Part 2 we will describe the key challenges and limitations security teams face, the risks and ethical concerns, and finally what the future holds for both the cheat devs and those protecting our favourite games.
The same old challenges
False positives are not only an issue for AI-enhanced anti-cheat systems - they’ve been one of the key challenges faced by Game Security teams since day one. However, relying increasingly on AI to have the final say on whether someone is using a cheat will likely make false positives a growing concern and a source of complaint in player communities.
Although AI can process huge datasets rapidly and identify cheaters far faster than traditional manual systems, false positives remain an issue.
Highly skilled players or edge-case behavior can be misclassified as cheating if models are not tuned carefully, with overly aggressive models banning legitimate pro players. Hence the need to always include manual review for high-impact cases.
Human play is extremely variable: skill, style, hardware, latency, and even physical factors (like high DPI or controller sensitivity) can produce behavior that looks “non-human” to a model. If the AI hasn’t seen enough legitimate variation during training, it may flag legitimate players who simply have unusually high accuracy or reaction speed.
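To make that concrete, here’s a minimal sketch of how a review pipeline might route model output. The thresholds, the Verdict structure and the notion of a “high-impact” account are all illustrative assumptions, not any particular vendor’s implementation; the point is simply that borderline and high-profile cases go to a human rather than an automatic ban.

```python
from dataclasses import dataclass

# Illustrative thresholds only -- a real system would tune these against
# labelled data and an agreed false-positive budget.
AUTO_ACTION_THRESHOLD = 0.98   # act automatically above this score
REVIEW_THRESHOLD = 0.80        # queue for human review above this score

@dataclass
class Verdict:
    action: str    # "auto_ban", "human_review" or "no_action"
    score: float
    reason: str

def triage(cheat_score: float, is_high_impact: bool) -> Verdict:
    """Route a behavioral model's cheat-probability score to an action.

    High-impact accounts (pro players, tournament participants, creators)
    are never banned automatically; they always get a human reviewer.
    """
    if cheat_score >= AUTO_ACTION_THRESHOLD and not is_high_impact:
        return Verdict("auto_ban", cheat_score, "score above auto-action threshold")
    if cheat_score >= REVIEW_THRESHOLD:
        return Verdict("human_review", cheat_score, "borderline or high-impact case")
    return Verdict("no_action", cheat_score, "score below review threshold")

# A pro player with a 0.99 score still lands in the review queue.
print(triage(0.99, is_high_impact=True))
```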
The next challenge is also nothing new - player privacy.
There have been concerns about anti-cheat infringing privacy for a number of years - particularly since the introduction of kernel-level systems such as Riot Games’ Vanguard back in 2020 (1) - and the debate will rage on about the balance between security and privacy.
This will certainly intensify as AI-enhanced anti-cheat systems rely on the continuous collection of detailed gameplay, as well as deeper monitoring including key logging and memory scanning. This will raise privacy concerns further, and calls for greater transparency and clear consent from players will increase.

The question will remain: how much surveillance is justified to maintain fair play? What is the right balance?
It has to be acknowledged that while preventing cheating protects the community, which the majority of players want, excessive monitoring risks crossing into invasion of privacy if data collection becomes opaque, disproportionate, or persistent. To maintain the support of the player base, over-collection, persistent tracking, and secondary use (e.g. for unrelated analytics) must be avoided. Transparency and explanation are key.
Let’s face it, most players have little understanding of what anti-cheat tools do under the hood - even less so for those that lean heavily on AI. Meaningful disclosure of what data is collected, why, and how long it’s kept is essential. And consent must not be buried in EULAs.
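One practical way to express “privacy by design” is to make collection, purpose and retention explicit in the anti-cheat’s own configuration. The sketch below is hypothetical - the signal names, purposes and retention periods are made up for illustration - but it shows the shape of a policy where nothing is collected without a stated reason or kept indefinitely.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CollectionPolicy:
    signal: str          # what is collected
    purpose: str         # why it is collected
    retention_days: int  # how long it is kept before deletion

# Hypothetical policy table: every signal has a purpose and a finite
# retention window, and nothing is reused for unrelated analytics.
POLICY = {
    "input_timing":     CollectionPolicy("input_timing",     "behavioral cheat detection", 30),
    "aim_trajectories": CollectionPolicy("aim_trajectories", "behavioral cheat detection", 30),
    "match_reports":    CollectionPolicy("match_reports",    "ban appeals and review",     90),
}

def purge_expired(records, now):
    """Drop any stored record older than its signal's retention window."""
    return [
        r for r in records
        if (now - r["collected_at"]).days <= POLICY[r["signal"]].retention_days
    ]
```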
As André Pimenta, Co-Founder and CEO of Anybrain, says:
Player privacy is non-negotiable. Our systems must follow “privacy by design” principles — collecting only what is necessary, only within the context of the game, and only for as long as needed. AI should be a tool of the game, not a surveillance system beyond it….if you are banned, you should know why and how that decision was reached. Trust depends on clarity.
Then there's the perennial question of money, which, with many developers and publishers struggling at the moment, is particularly pertinent.
As we all know, cheat behavior always evolves, so detection models must retrain and adapt continuously to be successful. Naturally this costs money, and as with other uses of AI, the true financial cost can be tricky to forecast.
This is likely to make it harder for smaller developers to keep up with the rising costs involved - at least in the short run.
Whilst there are obvious benefits to using AI to enhance anti-cheat and protect a game, unlike with other use cases for AI there is unlikely to be a reduction in the resources required (e.g. headcount) to offset the costs against. After all, even at the larger publishers, game security is generally looked after by relatively small teams.
Looking ahead
So where are we heading? Where is AI taking us when it comes to game security? Fair play bliss? Cheaters paradise? Never ending false positives?
Starting with the bad guys, one thing is certain: AI will continue to help cheat devs make their cheats harder to detect by appearing more and more human, in a kind of twisted version of the Turing test.
Aimbots will continue evolving, becoming even more adaptive with AI models that learn the player’s unique style and mimic it better to stay under detection thresholds. Perfect headshots will be replaced with “statistically plausible” accuracy curves. Cheats will run reinforcement learning loops that simulate legitimate human error, randomization, and reaction delay, to defeat behavioral anomaly models.
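To see why “statistically plausible” matters, consider a simple distributional check a detector might run on reaction times. This is a toy sketch with made-up numbers: a naive aimbot with near-constant reactions fails it immediately, while a cheat that samples delays from a human-like distribution typically passes, which is exactly why single-signal statistical tests are no longer enough on their own.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Illustrative only: human reaction times modelled as roughly log-normal
# around ~250 ms.
human_baseline = rng.lognormal(mean=np.log(0.25), sigma=0.2, size=5000)

# A naive aimbot: near-constant ~50 ms reactions, trivially separable.
naive_cheat = rng.normal(loc=0.05, scale=0.005, size=500)

# A "humanized" cheat that samples its delays from a distribution fitted
# to real players, staying inside plausible human variation.
humanized_cheat = rng.lognormal(mean=np.log(0.25), sigma=0.2, size=500)

for label, sample in [("naive", naive_cheat), ("humanized", humanized_cheat)]:
    stat, p = ks_2samp(human_baseline, sample)
    print(f"{label:10s} KS statistic={stat:.3f}  p-value={p:.3g}")

# The naive cheat is rejected outright; the humanized one usually looks
# statistically identical to the baseline.
```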
The other sure thing is that AI will continue helping cheat devs develop and update their products quicker - whether that’s through AI assisted reverse engineering to speed up development after every patch, or powering reinforcement learning frameworks that can test hundreds or even thousands of cheat variants across games every day.
It will also do what it’s doing across all sorts of industries, helping less technical engineers do far more than they previously could, and could thus attract even more wannabe cheat devs to have a go.
Looking further ahead, as AI models get smaller and devices are built to run them more efficiently, cheat developers could start using tiny AI models on separate hardware, like a phone or a small plug-in device, to power advanced cheats.
Unlike large AI systems that are too slow or need powerful GPUs, these compact models could handle quick, specialized tasks that give players an unfair advantage without slowing the game down. We’ve already seen cheats that use a second computer to run scripts, but this new approach could take things further. By running AI on a second device that works alongside the main console or PC, the cheat could process game data and make its own decisions without showing any obvious signs on the main system that anti-cheat tools usually monitor.
These small AI systems wouldn’t need much memory but could still analyze what’s happening in the game in real time, without the lag you’d get from trying to use server-based LLMs. They might handle automatic aiming, predicting enemy movement, or even chaining together complex actions. In effect, they’d act like a smarter, far more capable version of devices such as the Cronus Zen.
We've already seen how smart developers have leveraged signals such as minimaps and sound to build AI-generated ESPs, but enough training data on a specialized device could go far beyond that. Over time, these cheats could even learn to adapt, changing how they behave based on what’s happening in the match, or mimicking small human errors to stay under the radar.
Perhaps more impactful is how AI will also help cheat vendors further industrialise their marketing, sales and customer service - just as it’s doing for legit businesses and brands around the world.

Automated customer management will help them provide better service for more of their customers with timely announcements, prompt user support, channel moderation, and customer marketing all seeing the benefit.
As the vendors optimize their offering and how they get it to market, one side effect will be an even bigger focus on monetization throughout the community. Cheating-as-a-service markets in particular will flourish, as more bad actors seek to take advantage through boosting and account trading.
If you think cheat vendors already act like professional businesses, then just wait for what AI can do for them.
How about the good guys?
Fortunately, Game Security teams and their anti-cheat providers also stand to benefit over the coming years.
Just as AI will continue making cheats appear more human and in turn harder to detect, so it will also make anti-cheat systems harder to fool and better at detecting cheats.
For example, adversarial testing will become even more thorough as anti-cheat systems are trained on ever-increasing amounts of player data to detect new anomalies more quickly. Going further still, AI will generate synthetic “cheat” behaviors to stress-test its own detectors, in effect red-teaming itself.
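Here is a minimal sketch of that self-red-teaming idea, under heavy assumptions: the detector is a stand-in threshold rule and the “cheat” sessions are synthetic flick-speed samples, but the loop shows the principle of sweeping ever subtler synthetic cheats to find where detection recall starts to collapse.

```python
import numpy as np

rng = np.random.default_rng(1)

def detector(session: np.ndarray) -> bool:
    """Stand-in anomaly detector: flags sessions whose mean flick speed sits
    far outside the human baseline (1.0). Real detectors are learned models."""
    return abs(session.mean() - 1.0) > 0.5   # illustrative threshold

def synth_cheat_session(aggressiveness: float) -> np.ndarray:
    """Generate a synthetic 'cheat' session: flick speeds inflated by a
    controllable amount, mimicking an aimbot tuned to stay subtle."""
    return rng.normal(loc=1.0 + aggressiveness, scale=0.1, size=200)

# Red-teaming loop: sweep increasingly subtle synthetic cheats and record
# where the detector starts missing them -- that gap is what gets fixed next.
for aggressiveness in (2.0, 1.0, 0.6, 0.4, 0.2):
    sessions = [synth_cheat_session(aggressiveness) for _ in range(100)]
    recall = sum(detector(s) for s in sessions) / len(sessions)
    print(f"aggressiveness={aggressiveness:.1f}  detector recall={recall:.2f}")
```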
In turn, real-time adaptation, possibly using federated insight and data drawn from millions of players across different games, will enable detection patterns to be updated continuously.
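Federated learning is one plausible mechanism for that cross-game insight: each studio trains on its own player data and shares only model updates, never raw telemetry. The sketch below is the basic federated-averaging rule with entirely hypothetical numbers.

```python
import numpy as np

def federated_average(local_updates, sample_counts):
    """Weight each participant's model update by how much data it trained on
    (the basic FedAvg rule); raw player data never leaves the participant."""
    total = sum(sample_counts)
    return sum(update * (n / total) for update, n in zip(local_updates, sample_counts))

# Hypothetical example: three studios share weight deltas for a small shared
# detection layer, plus the number of sessions each delta was trained on.
studio_updates = [np.array([0.2, -0.1]), np.array([0.4, 0.0]), np.array([0.1, -0.3])]
studio_sessions = [50_000, 10_000, 40_000]

global_update = federated_average(studio_updates, studio_sessions)
print(global_update)   # aggregated update pushed back to every participant
```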
However, the biggest opportunity for security and anti-cheat teams will likely come in the shape of meta-AI systems (no, not that one!) that in effect operate as the AI for other AI systems. Unlike ‘regular’ AI, a meta-AI system is self-referential and can reason about, develop and optimise AI systems.
This can be applied to anti-cheat and security in several ways and could revolutionize video game anti-cheat by operating at a strategic level above traditional detection methods.
For example, instead of just detecting specific cheats, a meta-AI could analyze how cheat behaviors evolve across different games and automatically develop new detection strategies. It could also orchestrate different detection methods - client-side, server-side, behavioral analysis - and determine the optimal combination for each scenario based on the current threat landscape.
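One simple way to picture that orchestration layer is as a bandit-style selector over detection pipelines. Everything below is hypothetical - the pipeline names, the epsilon value and the reward signal are placeholders - but it illustrates a system that mostly exploits whichever combination is currently working while still probing alternatives as the threat landscape shifts.

```python
import random

# Hypothetical detection "arms"; in a real system each would be a full
# pipeline (client-side, server-side, behavioral, or a combination).
ARMS = ["client_only", "server_only", "behavioral_only", "client+behavioral", "all_three"]

EPSILON = 0.1                        # how often to explore an alternative
counts = {arm: 0 for arm in ARMS}    # times each combination has been tried
values = {arm: 0.0 for arm in ARMS}  # running average reward per combination

def choose_arm() -> str:
    """Epsilon-greedy: usually pick the best-performing combination so far,
    occasionally explore another in case the threat landscape has shifted."""
    if random.random() < EPSILON or all(c == 0 for c in counts.values()):
        return random.choice(ARMS)
    return max(values, key=values.get)

def record_outcome(arm: str, reward: float) -> None:
    """Reward could blend confirmed detections, appeal outcomes and cost."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

# Toy simulation: pretend "client+behavioral" works best in the current meta;
# the selector should converge on it.
for _ in range(1000):
    arm = choose_arm()
    record_outcome(arm, 1.0 if arm == "client+behavioral" else random.uniform(0.0, 0.8))
print(max(values, key=values.get))
```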
Predicting new exploits may also be possible. By analyzing the meta-patterns of how cheats evolve (e.g. moving from aim-bots to more subtle assistance), the system could anticipate next-generation cheating methods before they become widespread.
As André Pimenta puts it,
The future of this struggle will be about predictive security. AI won’t just react; it will forecast new forms of cheating before they appear.
This would be particularly powerful because cheat developers would essentially be competing against a system that is constantly learning the most effective ways to learn about cheating.
Critically, as a meta-AI seeks to improve itself it will study its mistakes - understanding why some cheats slipped through and, just as importantly, why some innocent players were falsely flagged - and adjust itself all the time.
We’re not talking about general AI here - or anything else clearly far off - but on both sides it will likely be a natural evolution of the techniques and models we have already seen implemented, crucially orchestrated in an agentic framework that promotes learning and picking the right tool for the job.
There’s no doubt some of this is a little scary - AIs learning to learn on their own, and so on - and it raises important questions about safety and the potential for game security teams to lose control. So again, there will need to be a degree of human-in-the-loop, and getting that balance right will be incredibly important.

In conclusion - a battle for trust
Over the next few years, AI will transform both cheating and anti-cheat into an automated, adversarial ecosystem.
Cheat developers are already adopting machine learning to create adaptive, human-like tools that evade detection by imitating legitimate player behavior, dynamically adjusting to updates, and testing thousands of evasion variants using reinforcement learning. These systems will industrialize cheat production, allowing near-instant adaptation and creating sophisticated social and behavioral deception within multiplayer environments.
For players, it means that detecting cheats will become harder and more nuanced, accusations of cheating will fly around even more, and communities will become ever more toxic. Trust could become non-existent, both between players and towards game publishers.
In response, anti-cheat will evolve into a self-learning defense platform powered by meta-AI orchestration, continuous retraining, and cross-game AI enhanced threat intelligence. AI will enable predictive and adaptive detection, fusing behavioral, client, and community signals to identify coordinated abuse before it escalates.
However, this shift will heighten privacy, transparency, and ethical challenges as monitoring becomes more intrusive. The outcome of the next decade will depend on how effectively publishers balance automation and fairness and develop AI that not only detects deception but earns player trust through accountability and explainability.
References
(1) The Controversy over Riot's Vanguard anti-cheat software


