
I have a web server hosting several applications for several users. Since I'm not sure what the applications are doing and which outbound HTTP/HTTPS traffic they produce, I want to get more control over it. My idea is to use an internal Squid that only listens on 127.0.0.1:3128. At first I just want to look at the access log; the second step would be a black/whitelist for security-relevant URLs and domains. These lists should filter the outbound traffic of Apache and all of its child processes (for example, one of the applications runs curl as a system call).
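
For that second step, a domain blacklist in Squid could look roughly like this (a rough sketch; the acl name and file path are only illustrative):

# one domain per line in the file, e.g. .example.com
acl blocked_domains dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked_domains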

I already added http_proxy to /etc/sysconfig/proxy, to /etc/environment and to the .bashrc of the apache system user. Everything works fine when I'm using the shell, but Apache doesn't use the proxy at all. I've already restarted Apache after the changes, but without success.

By the way, the web server is running openSUSE 11.

The solution (thanks to ALex_hha, sorry, I guess I was reading your answer too fast): I added the following iptables rule:

iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner apache -j DNAT --to-destination 127.0.0.1:3128 

and set Squid to transparent mode:

http_port 127.0.0.1:3128 transparent 

and now it's working fine.
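
To verify that the redirect really takes effect, something like the following can be used (the access log path assumes a default Squid install):

# check that the DNAT rule is in place and watch its packet counters increase
iptables -t nat -L OUTPUT -n -v
# make a request as the apache user and look for it in the access log
sudo -u apache curl -s http://example.com/ >/dev/null
tail -n 5 /var/log/squid/access.log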

1 Answer


You can redirect all outgoing HTTP traffic to Squid. Squid should be running in transparent mode:

# iptables -t nat -I OUTPUT -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:3128 

But then you can't identify which user the traffic came from. You could, however, run Squid on several ports, one per user:

http_port 127.0.0.1:3128 transparent
http_port 127.0.0.1:3129 transparent
http_port 127.0.0.1:3130 transparent

And then redirect each user's outgoing traffic to the corresponding port:

# iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner apache -j DNAT --to-destination 127.0.0.1:3128
# iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner joe -j DNAT --to-destination 127.0.0.1:3129
# iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner jack -j DNAT --to-destination 127.0.0.1:3130
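
If there are more users, the same rules can be generated in a small loop instead of being typed out by hand (a sketch; the usernames and ports are the illustrative ones from above):

# map each system user to its own Squid port and add one DNAT rule per user
for mapping in apache:3128 joe:3129 jack:3130; do
    user=${mapping%%:*}
    port=${mapping##*:}
    iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner "$user" \
        -j DNAT --to-destination "127.0.0.1:$port"
done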

And use %lp (the local port) in the Squid log format:

logformat uniq_user %lp %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh 
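
To actually write entries in this format, point an access_log at it (this directive also appears in the full config below):

access_log /var/log/squid/squid-users.log uniq_user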

Regarding the follow-up comment "I'm getting a lot of 403 errors in the log file. I added the IP address of the server to the acl, but that didn't work. I think the requests would end up in an endless loop anyway":

You need to bypass all requests coming from Squid itself:

# iptables -t nat -I OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT 

I assumed that your Squid instance is running as squid:squid, which is the default.

Working Squid config:

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_access allow localnet
http_access allow localhost
http_access deny all
http_port localhost:3128 transparent
http_port localhost:3129 transparent
http_port localhost:3130 transparent
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
logformat uniq_user %lp %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh
access_log /var/log/squid/squid-users.log uniq_user
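
After editing the config it can be checked and reloaded without a full restart (standard Squid options):

# check squid.conf for syntax errors
squid -k parse
# tell the running squid to re-read its configuration
squid -k reconfigure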

Squid version and system details

# squid -v | head -1
Squid Cache: Version 3.1.10
# uname -r
2.6.32-358.14.1.el6.x86_64
# cat /etc/redhat-release
CentOS release 6.4 (Final)

Make sure that the first rule in the OUTPUT chain is "-p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT".
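
You can check the rule order with line numbers and, if needed, re-insert the ACCEPT rule at the top (a sketch; the rule itself is the one from above):

# show the OUTPUT chain of the nat table with rule numbers
iptables -t nat -L OUTPUT -n --line-numbers
# if the ACCEPT rule is not number 1, insert a copy at position 1
# (the old entry can then be removed with iptables -t nat -D OUTPUT <number>)
iptables -t nat -I OUTPUT 1 -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT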

  • I added this rule to my iptables, but Squid couldn't handle the request ("The requested URL could not be retrieved"). Unfortunately, even Nagios isn't working anymore (the Nagios requests now go through Squid as well). Commented Aug 14, 2013 at 14:20
  • I guess I have to switch Squid into some kind of transparent mode before I can use the iptables rule. Commented Aug 14, 2013 at 14:39
  • What's to prevent the requests proxied by squid from being redirected back to squid? Commented Aug 14, 2013 at 15:17
  • Yes, that could be the reason why this config didn't work. Perhaps my requests ended up in an endless loop. Isn't there a way to solve this without iptables? Commented Aug 14, 2013 at 19:23
  • So now I've switched Squid into transparent mode, but it doesn't work either. I'm getting a lot of 403 errors in the log file. I added the IP address of the server to the acl, but that didn't work. I think the requests would end up in an endless loop anyway. Commented Aug 15, 2013 at 9:11
