Recently, Streams Charts teamed up with Audiencly to publish a whitepaper on viewbotting and how it can undermine marketing efforts, distort metrics, and damage the industry as a whole, from platforms and creators to advertisers.
Available to download for free, the whitepaper has brought public attention to the viewbotting issue and sparked discussions about artificial viewers and inflated statistics. To clear up any confusion, Streams Charts answers some of the most common questions we’ve received regarding the paper.
Streams Charts × Audiencly Whitepaper: Frequently Asked Questions
Who’s the worst offender?
As an independent analytics platform, we believe exposing individual streamers would be neither ethical nor constructive. Our goal is to highlight systemic patterns in the industry and help readers recognize suspicious viewership, not to single out creators who may not even be aware of the fraudulent activity. For an industry-wide overview, we can instead look at the worst-offending categories on each platform.
In our Q2 2025 data, the Virtual Casino category recorded the largest portion of fake traffic on Twitch. Gambling is a natural hotspot for viewbotting: the prospect of lucrative partnerships makes it especially tempting for streamers to fraudulently inflate their viewership. Virtual Casino hosted over 13% of Twitch’s suspicious watch time for the quarter.
Another hotspot for viewbotting is Just Chatting, where over 10% of Twitch’s suspicious activity took place. This corner of livestreaming is especially competitive and saturated, and a boost in viewership can move streamers up the browse page enough to considerably increase their exposure.
Other categories most-affected by viewbotting on Twitch include Counter-Strike and GTA V streams. Again, these categories are saturated and extremely competitive, meaning many turn to viewbotting as a way to stand out from the crowd.
Do bots watch ads and do platforms pay out ad revenue on bot views?
Modern viewbots are more human-like than ever, and bots do register as viewers and can therefore inflate ad impressions. As bots become more sophisticated, fraudulently inflated payments to bad actors may slip through the system; however, platforms have countermeasures in place, such as:
- Platforms may readjust or pause ad revenue payments to streamers with suspicious viewer behavior, or remove ad monetization from their account entirely.
That said, ad revenue is not the main prize. Much more lucrative for fraudulent streamers is presenting fake viewership metrics to sponsors or partners to secure a better deal, or dishonestly reaching a viewership milestone for a bonus payment during a sponsored segment.
Combating viewbotting requires transparent data, careful methodology, and coordinated action across platforms, analytics companies and commercial partners. The Streams Charts × Audiencly whitepaper lays out the issue, the evidence, and how to best identify and avoid viewbotting streamers.
How can you tell if a channel is being botted?
Detecting viewbotting isn’t always straightforward; modern bots are designed to look real, and sophisticated ones may even fake engagement such as chat messages. Still, there are some red flags to look out for.
Sudden or unnatural spikes in viewership that cannot be explained by something like a Twitch Raid are suspicious, especially spikes without corresponding chat activity or follower growth. On the flip side, unnaturally steady viewership can also indicate fraudulent activity. Stream activity should follow natural, organic patterns, without thousands of viewers appearing and vanishing within minutes.
Identical chat messages from multiple users, or messages that don’t match the content at all, are also key red flags. On the statistical side, metrics such as audience retention and viewer authentication can be used to verify the legitimacy of a channel’s viewership.
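The behavioral red flags above can be approximated in code. Below is a minimal, illustrative sketch of two such checks on a channel’s per-minute time series: a sudden viewer spike with no matching chat activity, and unnaturally flat viewership. All function names and thresholds here are our own illustrative assumptions, not Streams Charts’ actual detection methodology.

```python
from statistics import mean, pstdev

def red_flags(viewers, chat_rate, spike_ratio=3.0, flat_cv=0.02):
    """Flag simple viewbotting red flags in per-minute stream data.

    viewers   -- concurrent viewer counts, one value per minute
    chat_rate -- chat messages per minute, same length as viewers
    Thresholds are illustrative placeholders, not tuned values.
    """
    flags = []
    # 1) Sudden spike: a minute-to-minute jump far beyond the previous level...
    for i in range(1, len(viewers)):
        if viewers[i] > spike_ratio * max(viewers[i - 1], 1):
            # ...without a corresponding rise in chat activity.
            if chat_rate[i] <= chat_rate[i - 1]:
                flags.append(("spike_without_chat", i))
    # 2) Unnaturally steady viewership: near-zero relative variation overall.
    cv = pstdev(viewers) / max(mean(viewers), 1)
    if cv < flat_cv:
        flags.append(("too_flat", None))
    return flags
```

Real detection systems combine far more signals (follower growth, authentication data, retention curves), but even these two checks capture the intuition: organic audiences fluctuate, and real viewers talk.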
Are there now more channels averaging >50 concurrent viewers than in 2022, and are they less suspicious?
In our whitepaper, we dive into statistics for Kick and Twitch from 2023 through Q2 2025. Both platforms showed a net increase in the number of channels averaging >50 viewers: Twitch’s count nearly doubled between the start and end of the period, and Kick’s grew dramatically, especially in 2025, with 585% more such channels now than in Q1 2023.
Alongside this overall growth, both the count and the proportional share of channels with suspicious viewer activity have increased. However, detection methods have also improved over the same period; part of this growth in recorded viewbotting may simply reflect the industry uncovering fraudulent streamers who were already there.
In Q2 2025, Twitch recorded close to 41,000 channels with suspicious behavior, over 35% of active channels with >50 Average Viewers. For the first time ever, the share of creators exhibiting clear, persistent viewbotting exceeded 10% of all suspicious channels. On Kick, the total count of suspicious channels exceeded 18,000, and the share exhibiting persistent viewbotting surpassed 16.4%.
Is it the platforms' fault for failing to stop viewbotting?
Short answer: no single party is entirely to blame. In recent months alone, major platforms have rolled out new waves of measures against viewbotting. Kick, for example, has been detecting and removing viewbotting streamers from its Kick Partnership Program, a purge that Head of KICK Studios Andrew Santamaria reported affects roughly 1.5% of the roster.
The fight against viewbotting is an arms race, with platforms investing in detection and enforcement and the viewbots themselves constantly evolving and working around platforms’ policing methods.
Modern viewbots are engineered to mimic real users, with variable behavior, the ability to change device fingerprints, and even the capacity to hold a conversation, fooling both human moderators and automated detection systems. Although each platform employs its own strategies and resources, the shared challenge of ongoing detection and prevention remains.
How about Twitter (X) and TikTok — do they face the same problems?
Yes. Although the mechanics differ, inauthentic engagement is a universal problem for social media platforms. On both, fake impressions and automated engagement are prevailing issues, and the high velocity and scale of short-form content make detection difficult.
Both TikTok and X combat fraud with automatic detection models and account flagging, a continuous arms race against the bots. Many social media platforms are experimenting with cutting-edge technology to develop their algorithms and detection systems, which scan countless short clips not only for content but also for fraudulent activity.
What’s the point of reporting viewbotting — why does this matter?
Viewbotting harms the livestreaming and wider influencer marketing scene by distorting the economics and signals of viewership and engagement. Its effects are felt not only by those on the marketing side of the industry but also by every creator seeking partnerships and deals.
For marketers, the risks range from overpaid or misallocated budgets to misleading reach and inflated engagement rates. These issues make it difficult to estimate true conversion or click-through rates, eroding companies’ trust in influencer marketing and affecting the entire streaming industry.
Not only may legitimate streamers miss out on sponsorship opportunities to bad actors, but streamers may struggle to find any willing sponsors in the future. Especially on smaller platforms, a mere handful of inflated streams can skew metrics dramatically. In some extreme cases, it can even create the illusion of trends, making certain games or content types appear more popular than they are.
Over what period of time did you collect data? Out of how many total watch hours is viewbotting present?
For our data across livestreaming platforms in the whitepaper, we processed statistics spanning from the start of 2023 through Q2 2025. We excluded channels averaging fewer than 50 concurrent viewers to focus on the creators most likely to impact the market and to reduce statistical noise from smaller channels.
However, our research focused on the total count of channels exhibiting suspicious viewership behavior rather than the portion of platform-wide watch time that is fake. This presents a more comprehensive view of the issue and avoids the problems that come with working with falsified data, i.e., the watch time of viewbotted streamers.
That being said, we still collected watch time statistics for viewbotted channels. In Q2 2025 on Twitch, we recorded roughly 41,000 channels that exhibited suspicious viewer behavior at least once; in total, more than 30,000,000 Hours Watched were generated by bots. Comparatively, this accounts for about 0.6% of Twitch’s total watch time for the quarter.
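For readers curious how the 0.6% share translates into a platform-wide total, the rounded figures above imply on the order of five billion Hours Watched on Twitch for the quarter. A quick back-of-the-envelope check (illustrative arithmetic from the rounded numbers in this answer, not an official figure):

```python
bot_hours = 30_000_000   # bot-generated Hours Watched on Twitch, Q2 2025 (rounded)
bot_share = 0.006        # ~0.6% of total platform watch time
total_hours = bot_hours / bot_share
print(f"{total_hours:,.0f}")  # on the order of 5,000,000,000 Hours Watched
```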
Where can I learn more?
The Streams Charts × Audiencly whitepaper breaks down our data and detection methodology and clearly presents trends across platforms. Download the full whitepaper to explore viewbotting’s wide effect on the livestreaming economy, complete with expert insights, including from the platforms themselves.
For enterprise solutions such as detailed reports, streaming data API access, or a custom request, contact our team to learn how we can help.