Bot traffic and fake users

Spam bots and e-commerce performance

A recent e-commerce industry report included the startling claim that during the 2024 Christmas season, automated bots accounted for 57% of e-commerce web traffic. If confirmed, this would mark the first time that machines outnumbered human shoppers online. At the same time, press reports and forum posts document growing concern about the proliferation of ‘scalping’ and other unscrupulous automated buying strategies; clearly, the e-commerce environment is facing new challenges.

Seasonal bot-buying is a simple strategy: buy a large portion of the available supply of a popular seasonal product while demand is low, then benefit from the ensuing scarcity by selling the same products at a higher price during the high season. The strategy, often known as scalping, has long been familiar in the ticketing industry. E-commerce sites that facilitate third-party sales are particularly vulnerable, since scalping can also operate as competition between rival sellers or platforms. The systematic hoarding and reselling of seasonal products has escalated rapidly over the last decade, as have outright fake purchases aimed at reducing available stock without ever taking ownership of the product.

In large part, this development is due to low-cost AI chatbots, which have simplified the creation and scaling of bots for malicious purposes. The problem affects virtually every sector of online retail. Recent estimates claim bots now make up over half of all web traffic; the exact proportion of malicious bots is difficult to measure, but evidently substantial. For e-commerce businesses already operating on thin margins, the implications are severe. Businesses like Queue·it are locked in an arms race with increasingly sophisticated spam bots; to find solutions to this problem, a thorough understanding of fake traffic is necessary.

The cost of fake traffic

Bot traffic may be benign or malicious. When it is harmful, it creates costs that extend far beyond client annoyance or downtime. The most immediate impact is on infrastructure expenses. When a significant portion of server capacity, bandwidth, and computing resources serve automated programmes rather than genuine customers, businesses essentially subsidise attacks against themselves. Every bot request consumes processing power, database queries, and network capacity that could otherwise serve paying customers.

Analytics pollution represents another insidious cost. Marketing teams make critical decisions based on traffic data, conversion rates, and user behaviour patterns, and compressed seasonal sales windows leave no room for decisions built on faulty data. Unfortunately, most analytics techniques are highly vulnerable to bot traffic: A/B tests produce skewed results, customer journey maps reflect bot behaviour rather than human patterns, and marketing spend ends up allocated on the basis of false signals.
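The distortion is easy to see with a toy calculation (the figures below are hypothetical, purely for illustration): bots generate sessions but almost never convert, so even a modest share of bot traffic drags the observed conversion rate well below the true human rate.

```python
# Illustration (hypothetical numbers): how bot sessions distort a measured
# conversion rate. Bots add sessions without conversions, so the observed
# rate understates real human performance.

def conversion_rate(sessions: int, conversions: int) -> float:
    return conversions / sessions

human_sessions, human_conversions = 10_000, 300   # true human rate: 3.0%
bot_sessions, bot_conversions = 10_000, 0         # bots inflate traffic only

observed = conversion_rate(human_sessions + bot_sessions,
                           human_conversions + bot_conversions)
actual = conversion_rate(human_sessions, human_conversions)

print(f"observed rate: {observed:.1%}")  # 1.5% -- half the true figure
print(f"actual rate:   {actual:.1%}")    # 3.0%
```

With half the traffic automated, the measured rate is halved; a marketer comparing two landing pages under uneven bot loads would draw exactly the wrong conclusion.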

Variants of scalping that involve false buy orders are particularly costly for e-commerce providers, as they incur fees to payment providers and risk slowing or shutting down the most commercially important function of the site. Harder to quantify but no less damaging, customer experience is degraded and brand value suffers. The problem is clearly getting worse – but why?

The evolution of fake users

Early bots were relatively easy to detect through simple checks such as CAPTCHA challenges or rate limiting. Modern bots employ far more advanced techniques: AI-driven botnets can discover and scrape unstructured data in inconsistent formats, then use that collected intelligence to improve their own decision-making. They rotate IP addresses using residential proxies, making their requests appear to come from legitimate consumer connections – which renders the simple solution of range-blocking IP addresses ineffective.
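A short sketch makes the weakness concrete (all addresses below are reserved documentation ranges, used here as stand-ins): a static blocklist keyed on IP prefixes never matches a bot that presents a fresh residential address on every request.

```python
# Sketch (hypothetical addresses): why blocking IP ranges fails against bots
# that rotate through residential proxies. Each request arrives from a
# different consumer-looking address, so a static prefix blocklist misses all.

BLOCKED_PREFIXES = {"203.0.113."}   # say, a known data-centre range

def is_blocked(ip: str) -> bool:
    return any(ip.startswith(prefix) for prefix in BLOCKED_PREFIXES)

# The same bot, seen across four requests via a residential proxy pool:
rotating_bot_ips = ["198.51.100.7", "192.0.2.44", "198.51.100.201", "192.0.2.9"]

blocked = [ip for ip in rotating_bot_ips if is_blocked(ip)]
print(f"requests blocked: {len(blocked)} of {len(rotating_bot_ips)}")  # 0 of 4
```

The defender would need to blocklist the proxy pool itself – but residential pools overlap with real customers' home connections, so broad blocking punishes legitimate buyers.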

Beyond blocking

Traditional security measures struggle against sophisticated bots. Attackers are targeting applications by combining bot attacks with web application vulnerability exploits, business logic attacks, and API-focused attacks. This multi-vector approach requires integrated security strategies that address threats across different attack surfaces simultaneously.

Effective bot mitigation requires layered defences that analyse multiple signals at once. Behaviour analysis examines patterns across entire sessions to identify automated activity that individual page requests might not reveal. Device fingerprinting creates unique identifiers that persist even when bots rotate IP addresses or use proxy services. Machine learning models can identify subtle patterns distinguishing human behaviour from bot activity, though these systems require constant updating as bot operators adapt their techniques in response.
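The fingerprinting idea can be sketched in a few lines (the attribute set below is an assumption for illustration; production systems use many more signals, such as TLS and canvas characteristics): hash request properties that stay stable while the IP changes into a single identifier.

```python
# Minimal device-fingerprinting sketch (assumed attribute set): combine
# request properties that persist across IP rotation -- user agent, accepted
# languages, reported screen size -- into one stable identifier.
import hashlib

def fingerprint(user_agent: str, accept_language: str, screen: str) -> str:
    raw = "|".join([user_agent, accept_language, screen])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

# Two requests from different IP addresses, but the same automated client:
fp_via_ip_a = fingerprint("HeadlessChrome/120.0", "en-US", "1920x1080")
fp_via_ip_b = fingerprint("HeadlessChrome/120.0", "en-US", "1920x1080")

print(fp_via_ip_a == fp_via_ip_b)  # True: identifier survives IP rotation
```

Because the identifier depends on client characteristics rather than network location, repeated sessions from a rotating proxy pool still collapse to one fingerprint, which behaviour-analysis systems can then score over time.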

Virtual waiting rooms for e-commerce

For e-commerce sites facing high-demand product launches or seasonal sales events, virtual waiting rooms offer a practical solution to both capacity and fairness problems. Rather than allowing unlimited traffic to overwhelm backend systems, these tools control the flow of visitors by placing them in a queue when demand exceeds capacity. Legitimate customers receive transparent information about their position and estimated wait time, whilst the site itself operates within safe performance parameters.

The primary advantage lies in applying bot detection at the queue stage, before visitors reach purchase infrastructure. This filtering approach means server capacity, payment systems, and inventory databases serve genuine customers rather than automated programmes. Virtual waiting rooms also eliminate the speed advantage that bots exploit during product launches. Since queue positions are assigned by arrival time or through randomisation rather than checkout speed, human shoppers compete on equal terms with automated systems. This addresses a fundamental fairness issue that has eroded customer trust across multiple industries.
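The assignment logic above can be sketched as follows (a simplified model, not any vendor's actual implementation): visitors who arrive before the sale opens are randomised, so flooding the site early with bots buys no guaranteed head start, while later arrivals queue first-come, first-served.

```python
# Sketch of queue-position assignment (simplified model): pre-open arrivals
# are shuffled to remove any speed advantage; post-open arrivals are ordered
# first-come, first-served behind them.
import random

def assign_positions(pre_open_visitors, post_open_visitors, seed=None):
    rng = random.Random(seed)            # seed only for reproducible demos
    pre = list(pre_open_visitors)
    rng.shuffle(pre)                     # arriving early en masse does not
    return pre + list(post_open_visitors)  # guarantee the front of the queue

queue = assign_positions(["bot-swarm-1", "alice", "bot-swarm-2", "bob"],
                         ["carol", "dave"], seed=7)
print(queue)  # 'carol' and 'dave' always follow the shuffled pre-open group
```

A bot swarm can still buy many lottery tickets by creating many queue entries, which is why the randomised draw is paired with the detection layers described above rather than relied on alone.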

Implementation of queue systems also provides breathing room for other security measures. When traffic flow is controlled, behaviour analysis and device fingerprinting systems have adequate time to evaluate each visitor properly, improving detection accuracy without degrading the experience for legitimate users. The combination of traffic management and sophisticated bot detection creates a more robust defence than either approach could achieve alone.

Conclusion

Bot traffic poses a significant and evolving threat to online businesses, particularly in sectors vulnerable to scalping and automated purchasing. The costs extend beyond infrastructure expenses to include analytics pollution, payment processing fees, and degraded customer experience. As bot operators deploy increasingly sophisticated techniques, businesses require strategic responses that combine multiple defensive layers. Virtual waiting rooms represent one effective approach, particularly for high-demand events where both capacity and fairness are critical concerns. The technology exists to address these challenges, but success requires treating bot defence as a core business priority rather than a technical afterthought.