
Simplified path of our setup:

Client ->

VM1 - instance on AWS; HAProxy terminating SSL, configured with ACLs to direct traffic, by requested domain, to the appropriate backend through a WireGuard tunnel ->

VM2 - VM at the local site; nginx reverse proxy directing traffic to the services -> services on multiple VMs
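For context, the domain-based routing on VM1 looks roughly like the following HAProxy fragment; this is only a sketch, and the domain, backend name, and tunnel address are placeholders, not our actual config:

```
# haproxy.cfg fragment -- sketch with placeholder names/addresses
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/
    # route by the requested domain
    acl host_app hdr(host) -i app.example.com
    use_backend app_backend if host_app

backend app_backend
    # VM2's address on the WireGuard tunnel (placeholder)
    server vm2 10.0.0.2:80
```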


Problem: I decided to implement fail2ban (f2b) for additional security. I installed it at the reverse proxy, but the entries fail2ban creates in iptables have no effect. In fact, any IP-based entry has zero effect, regardless of how it was added to the chain.

The nginx access logs do show the client's IP, but the ufw logs show the proxy's IP. I'm learning that this could be because iptables acts at the TCP/IP layer, while the headers that carry the client IP along are not at that layer. I assume nginx is reading headers while ufw is looking at TCP packets?

...firewalls like iptables work at the TCP layer. To look at the X-Forwarded-For HTTP header, you need to accept the connection and read at least the request headers from the client before you can evaluate the X-Forwarded-For IP.
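For reference, nginx recovers the client IP from that header via its realip module; a minimal sketch, assuming HAProxy reaches nginx from the tunnel address 10.0.0.1 (a placeholder) and sets X-Forwarded-For:

```nginx
# /etc/nginx/conf.d/realip.conf -- sketch, adjust the trusted address
# Trust X-Forwarded-For only when the connection comes from the proxy (placeholder IP).
set_real_ip_from 10.0.0.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```

This only changes what nginx logs and sees as `$remote_addr`; it does nothing for iptables/ufw, which still see the proxy's address on the wire.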

I'm thinking about these questions while preferring to limit responsibilities on the AWS side...

Is there a method to block traffic at the reverse proxy?

Is there a way to remotely send iptables instructions to the HAProxy host?

Is there a way to remotely retrieve logs from nginx?


Any pointers graciously accepted.

1 Answer


For the sake of the following discussion, it is not important what the server behind the load balancer is; it's the back-end server that does the logging and some job for us, while the load balancer is the front end which sees the real connection. In your case, the back-end server, being a reverse proxy, is itself a front end for something else, but that's not important here; we abstract away from it.

So the problem is that logs are collected on one system, the back end, but the firewall rules have to be set up on another, the front end, which is the only system that sees the real remote IPs at the network level. You can still use fail2ban, and here are two possible solutions:

  1. Write your own actions that install rules remotely, and run fail2ban on the back-end server. That is, it would use commands like ssh <lb> iptables ... instead of plain iptables ... as the default actions do. You can take the default iptables and/or ipset actions, copy them, and adjust. (I am not implying you have to use the iptables actions as a basis; I prefer ipset, and I have always wondered why it was not made the default with fail2ban.) This implies some SSH key setup and so on, but overall it should not be hard to do. The advantage is that such a setup is generally quite robust; the disadvantage is the need to establish a new SSH connection for every change. fail2ban is already notoriously slow to start or stop when there are a lot of banned addresses; this would make it crawl, as it is not able to "batch" them.
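As a sketch of the first option, a custom action file could wrap the usual iptables commands in ssh. This is not a stock fail2ban file: the SSH host alias `lb` is an assumption (passwordless key auth from the back end to the load balancer), and the real action files in /etc/fail2ban/action.d/ are more elaborate:

```ini
# /etc/fail2ban/action.d/iptables-remote.conf -- hedged sketch, not a stock file
# Assumes key-based SSH from the back end to a host alias "lb" (the HAProxy VM).
# <name> and <ip> are standard fail2ban substitution tags.
[Definition]
actionstart = ssh lb "iptables -N f2b-<name>; iptables -A f2b-<name> -j RETURN; iptables -I INPUT -p tcp -j f2b-<name>"
actionstop  = ssh lb "iptables -D INPUT -p tcp -j f2b-<name>; iptables -F f2b-<name>; iptables -X f2b-<name>"
actionban   = ssh lb "iptables -I f2b-<name> 1 -s <ip> -j DROP"
actionunban = ssh lb "iptables -D f2b-<name> -s <ip> -j DROP"
```

A jail would then reference this action instead of the default; note that every ban and unban costs one SSH round trip, which is exactly the slowness caveat above.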

  2. Send the relevant logs from the back end to the front end and run fail2ban on the front end. I did that with Postfix, the mail system; it logs via standard syslog, so it was natural to use the syslog-ng networking capability to channel logs over the network (rsyslogd can do it too; that's the standard networked syslog on port 514). It worked perfectly. If you can make your back end log to syslog, and if you can set up your syslog daemon to receive that stream, that might be the solution.
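A minimal rsyslog sketch of the second option; the tunnel address is a placeholder, and getting nginx to log to syslog in the first place (e.g. via an `access_log syslog:...` directive) is a separate step. syslog-ng syntax differs, but the idea is the same:

```
# Back end (VM2): /etc/rsyslog.d/forward.conf
# Forward everything over TCP to the front end's tunnel address (placeholder).
*.* @@10.0.0.1:514

# Front end (VM1): /etc/rsyslog.d/listen.conf
# Accept the TCP syslog stream on the standard port.
module(load="imtcp")
input(type="imtcp" port="514")
```

fail2ban on the front end then watches the received log file and bans in the local iptables, where the rules actually match the real client IPs.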
