
I'm running a Docker swarm on a system with multiple nodes. Often, when the system restarts, some of the nodes are marked as "Down" when I run docker node ls. Some of the nodes take a little longer to start than the manager node, which may be why this happens.

I would expect docker to check periodically, and mark nodes as "Ready" when they're up and running.

I'm able to restore the "Down" nodes by manually removing them from the swarm (docker node rm), generating a join token (docker swarm join-token worker), telling each node to leave the swarm (docker swarm leave), and then having it join again (docker swarm join ...).

The leave step is necessary because if I try docker swarm join directly, it tells me the node is already part of a swarm.
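For reference, here is a sketch of the manual recovery steps above as a shell script. The node name and manager address are placeholders, not values from my setup; the rm/join-token commands run on the manager, and the leave/join commands run on each affected worker:

```shell
#!/bin/sh
# Placeholders -- substitute your own node name and manager address.
NODE="worker1"
MANAGER_ADDR="192.0.2.10:2377"

# On the manager: drop the stale "Down" entry and get a fresh worker join token.
docker node rm --force "$NODE"
JOIN_TOKEN="$(docker swarm join-token -q worker)"

# On the affected worker: leave the old swarm, then rejoin with the new token.
# (Skipping the leave step fails because the node still thinks it is in a swarm.)
docker swarm leave
docker swarm join --token "$JOIN_TOKEN" "$MANAGER_ADDR"
```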

I'd prefer not to have to manually tell Docker that its nodes are working again, but I can't find anything in the documentation about configuring it to recheck them.
