
I have researched this problem, and in most cases the cause of a 502 error is an improperly configured nginx.conf or upstream service. I believe this case is different.

As the title suggests, I upgraded Ubuntu Server 14.04 to 16.04. I use nginx as my web server and am also running a Java/Tomcat server, set up in my nginx config as a proxy_pass.

Since the upgrade, every time the server starts up, nginx displays error 502: Bad Gateway when attempting to connect to the proxy_pass site. All other sites specified in my config work as expected.

Is it possible that the order in which services are started could cause a persistent 502 error?

To resolve the issue, I must run sudo systemctl restart nginx, after which the proxy_pass service works as expected until the next reboot.

From the error.log:

2018/01/24 11:33:20 [error] 1886#1886: *202 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.0.1, server: localhost, request: "GET /radio/rest2/savePlayQueue.view?u=user&p=enc:xxxxxxxx&v=2.0.0&c=DSub&id=0000&current=0000&position=0 HTTP/1.1", upstream: "http://[::1]:4040/radio/rest2/savePlayQueue.view?u=user&p=enc:xxxxxxxx&v=2.0.0&c=DSub&id=0000&current=0000&position=0", host: "www.myhostname.tld" 

At the time this error was generated by nginx, I was able to use lynx from the same server to connect to localhost:4040/radio and was served the appropriate content. Even then, the 502 error persists when connecting through nginx.

There is no upstream block defined for this; the location block is:

location ^~ /radio/ {
    proxy_pass http://localhost:4040;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    # health_check;  # nginx: [emerg] unknown directive "health_check"
}

I don't want to have to restart nginx every time I boot up. How can I resolve this problem?

  • nginx is lightweight and Tomcat is quite the contrary; could it be that Tomcat isn't ready yet when nginx checks its availability? If the health check is passive, the backend will be marked as down and stay down. An active health check might help. Commented Jan 24, 2018 at 18:40
  • @GerardH.Pille I believe this is likely the cause of the problem. I had toyed with the idea of modifying the systemd dependency options (see the sketch after these comments), but this is new to me. I am using vanilla nginx, and it does not recognize the 'health_check' directive, which seems to be a feature of NGINX Plus only. Is there an alternative module or method I can use? Commented Jan 24, 2018 at 18:50
  • Since it's Tomcat, please also share the <Connector> from your server.xml. Commented Jan 24, 2018 at 19:02
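For reference, if start order were the culprit, a systemd drop-in could order nginx after Tomcat. A minimal sketch, assuming the Tomcat unit is named tomcat8.service (substitute the actual unit name on your system):

    # /etc/systemd/system/nginx.service.d/override.conf
    # Hypothetical drop-in: order nginx after the Tomcat unit.
    # "tomcat8.service" is an assumption; check with: systemctl list-units 'tomcat*'
    [Unit]
    After=tomcat8.service
    Wants=tomcat8.service

Run sudo systemctl daemon-reload for the drop-in to take effect. Note that After= only orders unit startup; it does not wait for Tomcat to actually accept connections.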

1 Answer


upstream: "http://[::1]:4040/…

Your upstream is probably only listening on IPv4 localhost (127.0.0.1:4040) whereas nginx is trying to connect to IPv6 localhost ([::1]:4040).

lynx works because it tries both.
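
One way to confirm this split behavior (a diagnostic suggestion, not from the thread) is to force each address family separately with curl on the server:

    # Force IPv4: should succeed if the backend listens on 127.0.0.1:4040
    curl -4 http://localhost:4040/radio/

    # Force IPv6: should report "Connection refused" if nothing listens on [::1]:4040
    curl -6 http://localhost:4040/radio/

If -4 succeeds while -6 is refused, nginx's choice of [::1] fully explains the 502.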

GUESS: nginx may be failing because it tries both at startup (before Tomcat is listening), both fail, and then it sticks with IPv6 from there on.

FIX: Change the proxy_pass to use 127.0.0.1 explicitly, or change the upstream service to listen on both IPv4 and IPv6.
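
For completeness, the corrected location block would look along these lines; it is identical to the one in the question except for the explicit IPv4 loopback:

    location ^~ /radio/ {
        proxy_pass http://127.0.0.1:4040;  # explicit IPv4 loopback instead of "localhost"
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }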

  • This modification resolved my issue. The new location block specifies 127.0.0.1:4040, and after a reboot it worked as expected. Commented Jan 24, 2018 at 19:18
  • netstat -pl --numeric-ports | grep 4040 shows: tcp6 0 0 127.0.0.1:4040 :::* LISTEN 1848/java Commented Jan 24, 2018 at 19:18
  • @chrismeu Because IPv6 is the default protocol, it's preferred to listen on IPv6. So you'd set something like <Connector address='::1' ... (a sketch follows these comments). Commented Jan 24, 2018 at 19:21
  • @MichaelHampton I'll try to do that - thank you for your help as well! Commented Jan 24, 2018 at 19:23
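
A sketch of the server.xml change suggested above, assuming the stock HTTP <Connector> serving port 4040 (the protocol attribute and any others are assumptions):

    <!-- server.xml: bind the HTTP connector to the IPv6 loopback -->
    <!-- port 4040 comes from the question; protocol is an assumption -->
    <Connector address="::1" port="4040" protocol="HTTP/1.1" />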
