Bots typically don’t spend much time on a website; they simply check for updates and move on. Most of the time they show up as repeat visitors, hitting the same page over and over in separate sessions. If you notice multiple visits to the same page within just a few seconds, each one creating a new session (even though they come from the same IP or device), it’s almost certainly a bot.
The only exception might be a real user who has cookies blocked or is browsing in Incognito mode. Even then, the speed and repetitive nature of these visits make human behavior highly unlikely.
In our app, we use a library called CrawlerDetect, an open-source PHP library widely used to identify web crawlers and bots based on their User-Agent strings and other HTTP headers. This tool is particularly useful for website owners, developers, and analysts who need to filter out non-human traffic to ensure accurate analytics or to implement crawler-specific rules. It uses the following list: Crawlers.txt.
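For reference, here is a minimal sketch of how CrawlerDetect is typically used in a PHP application. It assumes the jaybizzle/crawler-detect package installed via Composer; it is an illustration of the library’s public API, not our exact integration:

```php
<?php
// Illustrative sketch only; not the exact integration used in our app.
require 'vendor/autoload.php';

use Jaybizzle\CrawlerDetect\CrawlerDetect;

$crawlerDetect = new CrawlerDetect;

// Check the User-Agent of the current request.
if ($crawlerDetect->isCrawler()) {
    // Matched a known bot pattern; skip analytics tracking.
    echo 'Bot detected: ' . $crawlerDetect->getMatches();
} else {
    // Likely a human visitor; record the visit as usual.
    echo 'Human visitor';
}

// An arbitrary User-Agent string can also be checked directly.
$crawlerDetect->isCrawler('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'); // true
```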
While we already filter out thousands of bots, some still manage to slip through because their User-Agent strings change constantly. We are working to improve our filters so that a larger share of these bots is ignored.
Until then, there is a workaround: you can manually ignore some of them by their IP addresses (see the article ‘Why am I getting visits from countries that my site is not related to?’ in our Feature Functionality section), or block access for some bots by creating a robots.txt file that excludes some of the major ones, as in the example below.
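As an example, a robots.txt file placed in your site’s root could look like the sketch below. The bot names are only illustrative; replace them with the crawlers you actually want to block, and keep in mind that only well-behaved bots that respect robots.txt will honor it:

```
# robots.txt (placed at the root of your site)
# Example: disallow a couple of major crawlers (bot names are illustrative)
User-agent: AhrefsBot
Disallow: /

User-agent: SemrushBot
Disallow: /

# Allow all other bots to crawl everything
User-agent: *
Disallow:
```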