Replies: 1 comment
-
Banning based on detecting user-agent changes per IP is very questionable, IMHO, and that for many reasons. For instance, if some IP is a "public" proxy, the users behind it will surely use different browsers and therefore different user agents, which would cause false positives for such proxies. Also, I don't understand what such bots would gain by changing the user agent. Why would they do that at all? It is pretty simple to set one user agent and use it for every request. That is why I have never understood the idea of taking user agents into account for filtering or banning.
-
Hi,
I'm currently in a situation where we have identified a "sort of" botnet-like approach being used to spam a webserver system of mine. The requests themselves look pretty normal; the interesting part is when you look at the user agents and other identifiers and realize that these clients never hit more than ~4 times a day, always with a single HTTP request that loads no content, style sheets etc. (so some sort of bot behavior), and they come from a huge number of different IP addresses.
Digging further, one sees that the IPs change rapidly, as do the URLs (it's always random URLs from the site, across different paths, products etc.), and the user agents are rotated as well. Once I started logging all requests that claimed a really old Opera version, the pattern became much more visible: the same IPs came back hours later with their single hits but a completely different user agent, language setting etc.
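For context, this is roughly how I pull the rotating user agents out of the access log; a quick, untested sketch that assumes the standard nginx/Apache "combined" log format (adjust the regex if yours differs):

```python
#!/usr/bin/env python3
# Group access-log lines by client IP and report IPs that showed up
# with more than one distinct User-Agent string.
# Assumption: default nginx/Apache "combined" log format, where the
# User-Agent is the last quoted field.
import re
import sys
from collections import defaultdict

LINE_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

agents = defaultdict(set)           # ip -> set of user agents seen
with open(sys.argv[1]) as log:
    for line in log:
        m = LINE_RE.match(line)
        if m:
            ip, ua = m.groups()
            agents[ip].add(ua)

# Print the IPs with the most distinct user agents first.
for ip, uas in sorted(agents.items(), key=lambda kv: -len(kv[1])):
    if len(uas) > 1:
        print(f"{ip}: {len(uas)} distinct user agents")
```

Running it against a day's access log (e.g. `python3 ua_report.py access.log`) is enough to surface the rotating IPs.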
So my question is about the following possibility:
Is there a way to let fail2ban "remember" a key-value pairing or DB entry of an IP together with a "tripwire" UA like "Opera 8.x / X11", and have it ban the IP when it comes back with another UA (and another, and another)?
Or is there perhaps another/better way to set this up?
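To make the idea more concrete, here is a rough, untested sketch of what I have in mind: a small watcher script that remembers the first UA seen per IP and appends a line to a separate log whenever that IP returns with a different UA; fail2ban could then ban on that log with a trivial filter and maxretry = 1. All names, paths and the log format (standard "combined") below are assumptions on my part.

```python
#!/usr/bin/env python3
# Untested sketch of the "tripwire" idea: remember the first User-Agent
# seen per IP and, whenever the same IP comes back with a different UA,
# append a line to a separate log that fail2ban can match with a trivial
# failregex, e.g. in a hypothetical filter.d/ua-rotate.conf:
#
#   [Definition]
#   failregex = UA-CHANGED <HOST>$
#
# plus a jail with maxretry = 1 and logpath = /var/log/ua-rotate.log.
# Paths and the "combined" log format are assumptions; adjust as needed.
import re
import time

ACCESS_LOG = "/var/log/nginx/access.log"   # assumption
TRIGGER_LOG = "/var/log/ua-rotate.log"     # the file fail2ban would watch

LINE_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

seen = {}  # ip -> first User-Agent observed (no expiry in this sketch)

def follow(path):
    """Yield lines appended to path, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)                       # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

for line in follow(ACCESS_LOG):
    m = LINE_RE.match(line)
    if not m:
        continue
    ip, ua = m.groups()
    if ip not in seen:
        seen[ip] = ua
    elif seen[ip] != ua:
        # Same IP, different UA -> emit a timestamped line to ban on.
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        with open(TRIGGER_LOG, "a") as out:
            out.write(f"{stamp} UA-CHANGED {ip}\n")
```

A real version would of course need to expire old entries and whitelist known shared proxies to limit false positives, but it illustrates the key-value pairing I mean.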
Cheers
\jens