Processing / Prevention of Potentially Malicious Requests (AWS, Node.js)

I have a service running on AWS: a load balancer in front of several EC2 instances running Node.js servers. The security groups are configured so that only the LB can reach the instances on the HTTP port.

While processing some log files I saw a bunch of requests (about 50 or so, with a few more arriving periodically) for /manager/html — AFAIK this looks like an attempt to probe for a vulnerability in my application or to gain access to a database manager.

My questions:

  • Am I being targeted, or are these just random scanners? The service has not launched yet, so it is certainly obscure. There was a little press about it, so someone could know our domain, but this subdomain has not been made public.

  • Are there general techniques for preventing these kinds of requests from reaching my instances? Ideally I would set up some kind of rate limiting or blacklist at the LB and never see such requests on the instances at all. I am not sure how to distinguish malicious traffic from regular traffic.

  • Should I run a local proxy on my EC2 instances to handle this kind of thing? Are there existing Node.js solutions that can simply reject such requests at the application level? Is that a bad idea?

  • Bonus: if I were to log the origin of these requests, would that information be useful? Should I go vigilante, track down the source, and do them some harm? Should I report the origin IP, if it is a single origin? (I understand this is silly, but it may inspire some fun answers.)

Currently these requests do not affect me: they get a 401 or 404, and that has practically no impact on other clients. But if this grows in scale, what are my options?

+6
3 answers

We have faced similar problems in the past and took some preventive measures to stop such attacks. While that cannot guarantee they stop completely, it did significantly reduce them.

Hope this helps.

+1

A lot of random automated requests hit any public server; even though I host a Node.js server, they still probe for CGI scripts and phpMyAdmin/WordPress configs. You can use basic rate limiting such as redis-throttle [https://npmjs.org/package/redis-throttle] for your Node.js server, plus fail2ban for SSH, to protect yourself from simple DoS attacks.
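The throttling idea can be sketched without Redis. Below is a minimal in-memory fixed-window limiter, the same principle redis-throttle applies with Redis as a shared store; the class and names are illustrative, not the redis-throttle API:

```javascript
// Minimal fixed-window rate limiter sketch (in-memory).
// redis-throttle applies the same idea with Redis as the shared store,
// so the count survives across multiple Node.js processes.
class FixedWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;       // max requests allowed per window
    this.windowMs = windowMs; // window length in milliseconds
    this.hits = new Map();    // ip -> { count, windowStart }
  }

  // Returns true if the request from `ip` should be allowed.
  allow(ip, now = Date.now()) {
    const entry = this.hits.get(ip);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New IP, or the previous window expired: start a fresh window.
      this.hits.set(ip, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// Usage: allow at most 3 requests per second per client IP; call
// limiter.allow(req.socket.remoteAddress) in a request handler and
// answer 429 when it returns false.
const limiter = new FixedWindowLimiter(3, 1000);
```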

Automated requests cannot harm you unless Node.js or your libraries have known vulnerabilities, so you should always validate input and think about security across the whole server. You do not have to worry if you code well (do not leak error details to users, sanitize input, etc.).
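The "do not leak error details" advice can be sketched as a small helper that maps internal errors to generic responses; the `expose` flag and function name here are illustrative assumptions, not a standard API:

```javascript
// Map internal errors to generic client responses so stack traces and
// database driver messages never reach a scanner. Only errors we have
// explicitly marked as safe (err.expose === true) are shown verbatim.
function toClientError(err) {
  if (err && err.expose === true) {
    return { status: err.status || 400, body: err.message };
  }
  // Everything else: log the details server-side, return a generic 500.
  console.error("internal error:", err && err.stack ? err.stack : err);
  return { status: 500, body: "Internal Server Error" };
}
```

In a request handler you would send `toClientError(err).body` with the matching status code instead of `err.message`, so a probe for /manager/html never learns what actually failed.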

You can log your 401s and 404s for a week and block the most common offenders at your LB. Hunting down IP addresses and sources will not help you unless you are a Hollywood producer or fighting terrorists; your problem is not that important, and most of these requests come from botnets anyway.
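Tallying the 401/404 offenders can be done with a short script; a sketch, assuming common-log-format access lines (the function name and log shape are assumptions):

```javascript
// Tally the most frequent client IPs among 401/404 log lines so the
// worst offenders can be blocked at the LB. Assumes common log format,
// e.g.: 1.2.3.4 - - [date] "GET /manager/html HTTP/1.1" 404 12
function topOffenders(logLines, n = 10) {
  const counts = new Map();
  for (const line of logLines) {
    // Capture the leading IP, but only for 401/404 responses.
    const m = line.match(/^(\S+) .*" (401|404) /);
    if (m) counts.set(m[1], (counts.get(m[1]) || 0) + 1);
  }
  // Sort IPs by hit count, descending, and keep the top n.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n);
}
```

Feed it a week of access-log lines and the first few entries are candidates for an LB or security-group blacklist.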

+2

Consider running a caching proxy such as Varnish in front of your application servers. Use its VCL to allow access only to the URIs you define and reject everything else: allow GET but block PUT and POST, and so on. You can also use it to filter the HTTP response headers you return, which lets you mask your node.js server as Apache, for example. There are plenty of examples online showing how to do this.
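A minimal sketch of what that VCL could look like (VCL 4.0 syntax; the backend address, port, and allowed URI prefixes are assumptions you would adapt to your own application):

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";   # the local node.js server
    .port = "3000";
}

sub vcl_recv {
    # Allow only the URI prefixes the application actually serves.
    if (req.url !~ "^/(api/|static/|$)") {
        return (synth(404, "Not Found"));
    }
    # Allow only the methods the application uses.
    if (req.method != "GET" && req.method != "HEAD") {
        return (synth(405, "Method Not Allowed"));
    }
}

sub vcl_deliver {
    # Mask the backend: hide node.js fingerprints, claim to be Apache.
    unset resp.http.X-Powered-By;
    set resp.http.Server = "Apache";
}
```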

+1
