AI vs AI: The evolution of cheating in video games
- Andrew Hogan
- Nov 10
- 8 min read

Introduction
Video games have always had their share of cheating — from simple exploits, speed hacks, and boundary clipping to more complex tools like aimbots and wallhacks. But as artificial intelligence grows more powerful and more accessible, cheat developers are adopting AI techniques to make cheats smarter, subtler, and critically, harder to detect.
At the same time, game developers and anti-cheat systems are employing AI to keep up. What we’re witnessing has been described as an arms race - though less man vs machine and more machine vs machine - with AI the weapon of choice for every combatant, at every stage:
Cheat developers use AI to make behavior more human-like.
Anti-cheat teams respond with more sophisticated AI models that detect more cheats.
Cheat developers use AI to adopt adversarial strategies to evade detection.
Anti-cheat teams use AI to scale and optimize red-teaming, capture-the-flag exercises, and other testing.
And so it goes. It’s critical for video game publishers, and for the players of their games, that they stay in this race, countering the cheat developers’ advances with advances of their own and staying at least one step ahead rather than falling behind.
This is one of the reasons we’ve produced this two-part briefing for those responsible for protecting games, particularly those working in smaller teams without the resources of a AAA studio.
In Part 1 we’ll explore:
How traditional cheats work.
How AI is changing cheat development.
Countermeasures and how security teams and anti-cheat developers use AI.
Before going on in Part 2 to look at:
Challenges and ethical concerns.
What the future of this struggle looks like.
A quick recap
Before AI became a serious factor, cheating in games traditionally relied on:
Memory hacking / injection: modifying game memory or injecting code to manipulate values (e.g. health, ammo).
Speed hacks / teleport / movement hacks: altering the game’s physics or movement constraints to gain advantage.
Overlay / ESP hacks: “seeing through walls” by reading object positions from game state and drawing them on top of the screen.
Aimbots / aim assistance: locking on to enemies faster and more precisely than a human could.
Industrialized exploits: turning bugs and glitches into repeatable, scalable advantages.
Packet modification / network tampering: intercepting and altering packets between client and server to trick the game logic or disrupt other players’ games.
These cheats often exhibit behaviors that are unnatural or extreme - snapping instantly between targets, perfect tracking - and are quite often simply too good, making them detectable by heuristics or anomaly detectors. Anti-cheat systems have therefore long monitored for statistical patterns (e.g. improbably precise aim, too little deviation) and run integrity checks (e.g. verifying memory signatures).
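A basic version of that pattern-matching can be sketched in a few lines. This is a hypothetical, deliberately simplified heuristic, not a real anti-cheat rule; the 40-degrees-per-tick threshold and the sample traces are invented for illustration:

```python
# Simplified, hypothetical heuristic: flag aim "snaps" that cover a large
# angle in a single tick - far beyond what a human wrist produces.
# Threshold and sample data are illustrative only.

def flag_snaps(yaw_samples, max_deg_per_tick=40.0):
    """Return the indices of ticks where yaw jumped implausibly fast."""
    flags = []
    for i in range(1, len(yaw_samples)):
        delta = abs(yaw_samples[i] - yaw_samples[i - 1])
        if delta > max_deg_per_tick:
            flags.append(i)
    return flags

human = [0.0, 3.1, 7.4, 12.0, 15.2]   # gradual, human-like tracking
bot   = [0.0, 0.5, 88.0, 88.1, 88.0]  # instant 87.5-degree snap

print(flag_snaps(human))  # [] - no flags
print(flag_snaps(bot))    # [2] - the snap tick is flagged
```

Real detectors combine many such signals and tune thresholds statistically; a single per-tick rule like this is exactly the kind of rigid heuristic that AI-humanized cheats are built to slip past.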
So, with the help of AI, cheat developers have started to “humanize” their tools and obfuscate them further, making them harder to detect.
Smarter, stealthier cheating
They’re doing this in several ways which we’ll explore here.
Object detection and computer vision
When it comes to developing cheats, the best-known application of AI is the use of computer-vision object-detection tools like YOLO. As covered in one of our previous briefings, ‘The New Ai-mbots’ (1), these tools scan the game screen as a player would, rather than reading memory, and then aim at the enemy by ‘moving’ the mouse. This gives the cheat a degree of plausible deniability (it behaves more like a human observing the screen) and, ideally, evades memory-scanning anti-cheat.

If you want to see the lengths developers will go to in creating new ways of cheating, look no further than this one, where the AI aimbot controls the actual mousepad rather than the mouse, making it even less detectable. It’s reported to match the scores of pro players, but fortunately it seems to be at the concept stage rather than available to purchase, and it only works in Valorant’s shooting-range challenge. (2)

GAN-Aimbots
One study from 2022, ‘GAN-Aimbots: Using Machine Learning for Cheating in First Person Shooters’ (3), saw the authors develop GAN-Aimbots (built on Generative Adversarial Networks) to generate aim trajectories that mimic human movement and avoid detection. The idea is to train a model on legitimate human input data, then use it to produce “legitimate-looking” aim behavior that is harder for anti-cheat systems to flag as robotic.
In the study, the GAN-Aimbot the authors developed was not only more effective than a standard YOLO-based AI aimbot but, significantly, when viewed by experienced judges and regular FPS players, was much harder to tell apart from a genuine human player using no cheats.

This is where AI currently offers the most benefit to cheat devs: smoothing aim paths, introducing micro-errors, and mimicking human delay and variability, so that cheats are better at flying under the radar.
Adaptive and evasive behavior
So, just like the infamous Tyrell Corporation in Blade Runner, for the cheat developers using AI it’s all about being ‘more human than human’ - and harder to detect.

To further disguise themselves as human, cheat systems can use Reinforcement Learning (RL) to decide when to trigger actions such as shooting or hiding. Over time the cheat learns the optimal conditions under which to act, balancing risk (avoiding detection) against reward (kills and winning). Such decision-making can be dynamic and adaptive.
More advanced cheat systems may also attempt to detect when anti-cheat scanning is active, or when conditions are risky, and then throttle or disable their features temporarily (stealth mode). They may also randomize internal behavior, vary reaction times, and simulate human imperfections to avoid patterns.
Hardware and embedded AI
There’s some evidence (in forums, videos, and marketing) that cheat developers are embedding AI into hardware (e.g. specialized cheat devices or microcontrollers) to assist with low-latency decision-making. Because hardware is harder to inspect or isolate, this could further complicate detection. What’s even crazier is that it’s not just the cheat devs doing this, but also legit tech manufacturers like MSI, who released a monitor at CES 2024 that effectively has a built-in League of Legends cheat: an AI module that detects enemies and signals to the player where they’re coming from. (4)

The case for the defence: anti-cheat meets AI
Fortunately, game developers and third-party anti-cheat providers aren’t standing still. Whilst the cheat devs use AI to appear as human as possible and circumvent detection, game security teams are incorporating AI and statistical techniques to fine-tune their defences: better anomaly detection, real-time adaptation, and the ability to fight back.
Machine learning and behavioral pattern analysis
Instead of rigid heuristics, anti-cheat systems can use ML models trained on large datasets of player behavior: movement paths, reaction times, decision patterns, aim deviations. These models are far better at flagging outliers and suspicious patterns that deviate from human norms, and they continuously collect new data to retrain their anomaly detectors. In this way publishers are combining AI with behavioral analytics to shift detection toward adaptive models.
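As a toy illustration of the statistical side of this, here is a z-score check over a single feature (mean reaction time). A real system would use learned models over many features; all of the numbers below are made up for the sketch:

```python
# Minimal sketch of statistical outlier detection over one player feature.
# A z-score beyond roughly +/-3 means the value is wildly atypical of the
# legitimate-player population. Sample data is entirely invented.
import statistics

def zscore(value, population):
    mu = statistics.mean(population)
    sigma = statistics.stdev(population)
    return (value - mu) / sigma

# Mean reaction times (ms) from a (fictional) sample of legitimate players.
population_ms = [245, 260, 231, 255, 270, 248, 239, 262, 251, 244]

suspect_ms = 110  # far faster than any human in the sample

print(zscore(suspect_ms, population_ms))  # a strongly negative outlier
print(zscore(250, population_ms))         # near zero: unremarkable
```

Production systems would replace the single z-score with multivariate anomaly models retrained as new telemetry arrives, but the underlying question is the same: how far does this player sit from the human norm?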
André Pimenta, Co-Founder and CEO of Anybrain, puts it this way:
“Machine learning allows us to analyze gameplay patterns - from reaction times to aiming precision and movement flow - to detect when something is statistically inconsistent with normal human performance. This behavioral analysis, combined with anomaly detection and data-driven review, gives us the ability to identify cheating not by chasing known hacks, but by recognizing the telltale traces of inauthentic play.”
Input and timing analysis
AI can also be used to analyze input streams (mouse, keyboard) at fine granularity, looking for micro-patterns: consistency that’s too perfect, a lack of jitter, or too much vertical movement. So, to counter AI-powered cheats that adopt human imperfections and behaviours in an attempt to trick the anti-cheat, defenders are using AI to make their analysis harder to fool, since AI can distinguish human imperfection from machine precision.
The other benefit AI brings, of course, is speed: real-time anomaly detection means suspicious behaviour can be flagged mid-match.
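A crude version of the jitter check described above is easy to sketch. The 5% coefficient-of-variation threshold and the timestamp samples are assumptions for illustration, not values from any real anti-cheat:

```python
# Illustrative jitter check on input timestamps: human clicks show natural
# variance in the gaps between events, while scripted input is often too
# regular. Threshold and sample timestamps are invented for this sketch.
import statistics

def interval_cv(timestamps_ms):
    """Coefficient of variation (stdev / mean) of inter-event gaps."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

human_clicks = [0, 131, 268, 395, 540, 662]  # uneven, human-like gaps
bot_clicks   = [0, 100, 200, 301, 400, 500]  # near-perfect metronome

print(interval_cv(human_clicks) > 0.05)  # True: natural jitter present
print(interval_cv(bot_clicks) < 0.05)    # True: suspiciously regular
```

Of course, AI-humanized cheats inject fake jitter precisely to pass checks like this, which is why defenders layer many such features into learned models rather than relying on any single statistic.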
Adversarial detection and red-teaming
Red teaming is one of the aspects of protecting games where AI can make a real difference.
Red teams can generate mock cheats and train detection models on “rogue” data and simulated attacks - and they can do this faster, as AI makes it easier to surface bypass techniques through much quicker ingestion of cheat forums and of data such as telemetry from test servers.
Then, by synthesizing the signals cheat developers leave (e.g. naming patterns or obfuscation styles), red teams can create test targets that mirror real adversaries and exercise detection logic against plausible, up-to-date threats.
This automation and scale is where AI delivers some of its biggest efficiency wins: using ML to synthesize large volumes of benign and adversarial telemetry for training and stress-testing detectors, fuzzing game network protocols to reveal fragile parsers, and auto-generating obfuscated test artefacts to validate signature-resistant detection.
Then, post-attack, LLMs accelerate triage, reporting, and remediation: they can cluster findings, link detections to cheat taxonomies, and produce a prioritized plan of action.
Making the most of your intelligence
Speaking as a business that specializes in threat intelligence for video game publishers, we can say that the way AI is transforming the gathering, extraction, and dissemination of threat intel is game-changing.
Even with the focused and relevant data a specialist threat intel platform like Intorqa provides, there is still a lot of information for a Game Security analyst to keep on top of day in, day out.
But by leveraging an LLM trained on relevant threat data sets, intelligence teams can scale their ability to process, understand and act upon threat related intel. This is because once intelligence is gathered, LLMs can enhance its structure and usability through contextual enrichment and correlation. By cross-referencing raw findings with known indicators, vulnerabilities, and TTPs (Tactics, Techniques, and Procedures), an LLM can build connections between seemingly isolated data points. For example, it might link a newly promoted or shared exploit to a previously documented threat actor based on linguistic patterns or shared infrastructure.
This contextualization makes the resulting intelligence more actionable, helping analysts move from data collection to insight faster, while reducing the noise and false positives common in large datasets.
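The correlation step can be illustrated with a toy example. The indicators, domain, and actor names below are entirely fictional, and a real pipeline would use an LLM or similarity model to make fuzzy links rather than the exact-match lookups shown here:

```python
# Toy illustration of contextual enrichment: cross-reference raw findings
# against a small indicator database so isolated sightings link back to a
# known threat actor. All names, domains, and actors are fictional.

KNOWN_INDICATORS = {
    "cdn.example-cheats.net": "ActorA",   # shared infrastructure
    "loader-v2 obfuscation":  "ActorA",   # obfuscation style
    "pastebin drop style":    "ActorB",   # distribution pattern
}

def enrich(findings):
    """Attach a suspected actor to each finding that matches an indicator."""
    enriched = []
    for f in findings:
        actor = KNOWN_INDICATORS.get(f["indicator"], "unknown")
        enriched.append({**f, "suspected_actor": actor})
    return enriched

raw = [
    {"source": "forum post",     "indicator": "cdn.example-cheats.net"},
    {"source": "test telemetry", "indicator": "unseen packet signature"},
]

for item in enrich(raw):
    print(item["source"], "->", item["suspected_actor"])
```

The value an LLM adds on top of this skeleton is the fuzzy part: recognizing that a new exploit's wording, packaging, or infrastructure resembles a documented actor even when nothing matches exactly.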
Security teams can also use LLMs to streamline how intelligence gathered is communicated within and between organizations, whether it’s drafting concise situation reports, generating executive summaries, or translating technical findings into non-technical language for stakeholders. This natural-language accessibility enables faster decision-making, democratizes access to intelligence across teams, and ensures that critical threat insights reach the right people at the right time.
So, what’s next?
As we’ve outlined here, the battle lines have been drawn, with cheat developers on one side and the anti-cheat teams trying to stop them on the other, both leveraging AI to gain an advantage.
So where is this all going?
In Part 2 we will explore the challenges facing game publishers as they rely more on AI, and look at what the near future holds for both sides of the struggle.
References
(1) ‘The New Ai-mbots’ - https://www.intorqa.gg/post/the-new-ai-mbots
(3) ‘GAN-Aimbots: Using Machine Learning for Cheating in First Person Shooters’ - Anssi Kanervisto, Tomi Kinnunen, and Ville Hautamäki - May 2022
(4) ‘MSI’s AI-powered gaming monitor helps you cheat at League of Legends, looks great doing it’ - Tom’s Hardware - https://www.tomshardware.com/monitors/msis-ai-powered-gaming-monitor-helps-you-cheat-at-league-of-legends-looks-great-doing-it