It depends on how you implemented it and how smart the detection tools are.
Take care of the user agent first. If you don't set it explicitly, it will be something like "Java/1.6". Browsers send their own distinctive user agents, so you can simulate browser behavior by sending a User-Agent from MSIE or Firefox (for example).
Second, check the other HTTP headers. Browsers send their own characteristic headers (Accept, Accept-Language, and so on). Capture a request from a real browser and mirror it, i.e. add the same headers to your requests (even if you don't strictly need them); see the sketch below covering both of these points.
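A minimal sketch using java.net.HttpURLConnection; the URL and the header values are illustrative examples (the User-Agent string is an old Firefox one), not canonical values, so capture your own browser's traffic and copy what it actually sends:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BrowserLikeRequest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/"); // example target
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // Without this, Java identifies itself as "Java/1.6.0_xx".
        conn.setRequestProperty("User-Agent",
            "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0");

        // Headers a real browser would also send.
        conn.setRequestProperty("Accept",
            "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
        conn.setRequestProperty("Accept-Language", "en-US,en;q=0.5");

        BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
```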
A person acts relatively slowly; a robot can act very quickly, i.e. fetch the page and then immediately "click", i.e. issue another HTTP GET. Put a random sleep between these operations.
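A minimal sketch of such a randomized pause; the 2-7 second range is an arbitrary example, not a recommended value:

```java
import java.util.Random;

public class HumanPause {
    private static final Random RANDOM = new Random();

    /** Sleep for a random interval between minMillis and maxMillis. */
    public static void pause(long minMillis, long maxMillis)
            throws InterruptedException {
        long delay = minMillis + (long) (RANDOM.nextDouble() * (maxMillis - minMillis));
        Thread.sleep(delay);
    }

    public static void main(String[] args) throws Exception {
        // ... fetch a page here ...
        pause(2000, 7000); // wait 2-7 seconds, like a human reading the page
        // ... then issue the next GET ...
    }
}
```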
The browser fetches more than just the main HTML: it then downloads images and other resources. If you really don't want to be detected, you need to parse the HTML and download those resources too, i.e. actually behave like a "browser".
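One way to do this is with the third-party jsoup library (https://jsoup.org); this sketch pulls the image URLs out of a page and fetches each one, the way a browser would (error handling omitted, and InputStream.readAllBytes() requires Java 9+):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.io.InputStream;
import java.net.URL;

public class FetchLikeABrowser {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("http://example.com/") // example target
                .userAgent("Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0")
                .get();

        // Download every image referenced by the page, as a browser does.
        for (Element img : doc.select("img[src]")) {
            String imgUrl = img.absUrl("src"); // resolve relative URLs
            try (InputStream in = new URL(imgUrl).openStream()) {
                in.readAllBytes(); // discarded here; a real crawler might cache these
            }
        }
    }
}
```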
One last point: it is probably not your case, but it is almost impossible to implement a robot that passes a CAPTCHA. That is another way to detect a robot.
Happy hacking!