
We have a server setup with AWS that must satisfy the following requirements:

1: Incoming requests originating from the load balancer (specifically, health checks and latency checks) must be allowed to reach the web servers.

2: Valid requests containing correct Host strings must be allowed to reach the web server.

3: All other requests must be rejected with Nginx's non-standard 444 status, which closes the connection without sending any response.

Additionally, our website has several subdomains, each running essentially the same code for different clients. We've set up Nginx to redirect all HTTP traffic to these subdomains to HTTPS. I'll call these subdomains "a.example.com", "b.example.com", and "c.example.com".

We've noticed in our logs that our Django code is returning a lot of 500 errors due to bogus 'Host' headers making it past Nginx. The host string we see in each of these requests is "*.example.com", which matches any one of our subdomain server blocks and therefore reaches the Django code. Since that host string is unrecognized, Django returns a 500 error.
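For context, Django validates the incoming Host header against its ALLOWED_HOSTS setting, where an entry beginning with a dot matches the domain and all of its subdomains. The toy helper below (our own illustration, not Django's actual implementation) shows why a plain suffix match alone doesn't stop the bogus wildcard host — it has to be filtered earlier, at Nginx:

```python
# Illustrative only -- mimics the suffix matching applied to
# ALLOWED_HOSTS-style patterns; NOT Django's actual code.
def host_allowed(host, allowed_patterns):
    """Return True if `host` matches any pattern.

    A pattern beginning with '.' matches the bare domain and any
    subdomain; any other pattern must match exactly.
    """
    for pattern in allowed_patterns:
        if pattern.startswith("."):
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern:
            return True
    return False

ALLOWED = [".example.com"]
print(host_allowed("a.example.com", ALLOWED))  # True
print(host_allowed("evil.com", ALLOWED))       # False
# A naive suffix check also passes the bogus wildcard host:
print(host_allowed("*.example.com", ALLOWED))  # True
```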

The following is a close approximation of our Nginx sites-available file:

    # Repeat this for each subdomain:
    server {
        server_name a.example.com;
        listen 80;
        return 301 https://a.example.com$request_uri;
    }

    server {
        server_name a.example.com;
        listen 443;
        location / {
            set $my_host $host;
            if ($host ~ "\d+\.\d+\.\d+\.\d+") {
                set $my_host "elb.example.com";
            }
        }
    }

We've attempted catching this bad host string with the following "black hole" server definition in Nginx:

    server {
        server_name *.example.com;
        listen 80 default_server;
        listen 443 default_server;
        return 444;
    }

However, the "*.example.com" host string matches one of the https server definitions and is forwarded to the Django code.

What am I missing?

1 Answer

In your attempted config, all you needed to change was the server_name to something that won't match any other server block but is still valid, e.g. localhost. Because you marked the block as default_server, it is already the server that requests fall back on when they don't match any other hostname; using a wildcard host name there only complicates things.

    server {
        server_name localhost;
        listen 80 default_server;
        listen 443 default_server;
        return 444;
    }
  • There is one problem with this solution: health checks don't make it through, and our servers get removed from the load balancer when that happens. How can this same solution be modified to pass health check/latency check requests? Commented Feb 17, 2015 at 22:51
  • I've settled on using an if (gasp!) to return 444 for all but health/latency checks. Commented Feb 18, 2015 at 0:28
  • Great - can you post your working config then please (generalised if you wish of course) and mark it as the correct answer for potential future visitors of this question? Commented Feb 18, 2015 at 8:50
  • You can also use server_name _;, this allows health check requests from localhost to work. Commented Feb 18, 2015 at 10:29
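Combining the suggestions in the comments, the final black-hole block might look like the following sketch. The ELB-HealthChecker user agent is what classic AWS ELB health checks send; the exact match condition is an assumption here and should be adjusted to whatever your load balancer's health checks actually look like:

```nginx
server {
    # Catch-all: requests that match no other server block land here
    # because of default_server; server_name _ never matches a real host.
    server_name _;
    listen 80 default_server;
    listen 443 default_server;

    # Silently drop everything except the load balancer's health and
    # latency checks. Classic ELB identifies itself in the User-Agent
    # header; if yours doesn't, key off the health-check path instead.
    if ($http_user_agent !~* "ELB-HealthChecker") {
        return 444;
    }
}
```

Note that an `if` with a `return` inside is one of the few forms that is safe at server level; see the well-known "if is evil" caveats before adding anything more elaborate to it.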
