
I have containers that perform specific tasks, for example running an R-project command or generating the waveform of an audio file by running "docker (run|exec) <container> run.sh <input>". I am looking for a way to run those from inside other containers without having to do extra work for each new task.

The way I am currently thinking of solving this is to give the container access to the Docker daemon by bind-mounting the socket inside it. My host runs a Docker container which runs an application as a user, app.

The host Docker socket is mounted inside the container, and a script, /usr/local/run_other_docker.sh, is created by root.
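
Concretely, the setup looks roughly like this; the image names and the script body are illustrative sketches, not exact code. On the host:

    docker run -d \
        -v /var/run/docker.sock:/var/run/docker.sock \
        --name main-app main-app-image

And inside the container, /usr/local/run_other_docker.sh (created by root) would be something along these lines:

    #!/bin/sh
    # Run a task container against the host daemon through the mounted socket.
    # $1 is the task image; any remaining arguments go to the task's run.sh.
    image="$1"; shift
    exec docker run --rm "$image" run.sh "$@"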

Now, user app has no access rights on the mounted Docker socket, but is allowed to run /usr/local/run_other_docker.sh through passwordless sudo.
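
The sudo grant itself is just a sudoers entry along these lines (the file location is an assumption; any equivalent sudoers fragment would do):

    # /etc/sudoers.d/app -- created by root inside the container
    app ALL=(root) NOPASSWD: /usr/local/run_other_docker.sh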

How dangerous is this?

Is there a standard/safe way of starting other task containers from inside a container without binding to the host Docker socket?

The only other solution I have come across involves creating a microservice that runs in the second container for the first one to call. This is undesirable because it adds one more thing to maintain for each such use case.

  • Not the most knowledgeable, but I found your question hard to follow. Also, you want to grant access, but you're asking about security -- probably no one wants to "touch" the question since it sounds like production security. I suggest drawing a diagram clearly describing what you want. That said, this sounds less like a docker socket question and more like you want containers to know the state of other containers. Try searching for that problem. I'd probably track state in something like Redis, and have whatever job poll a variable to know when to run. Commented Mar 29, 2018 at 15:55
  • I don't really want to check the state; I have come across literature on that, but it's irrelevant here. What I want to do is run commands across containers. I have containers that perform specific tasks, for example run an R-project command or create the waveform of an audio file by running "docker (run|exec) <container> run.sh <input>", and I am looking for ways to run those from inside other containers without having to do extra work for each new task. Commented Apr 15, 2018 at 14:48
  • It sounds like your app has the ability to talk to the host's docker as root. So it can do anything: run any image, do anything to the file system, etc. In a prod environment this is a terrible idea. Now, running workflows of tasks where each task is a container is a pretty common problem. A couple of approaches: dray.it, and for Kubernetes: github.com/argoproj/argo Commented Jun 26, 2018 at 0:11

1 Answer


I was planning on rephrasing the question after the bounty was set; unfortunately, this week has been crazy and I did not have time to do it.

After looking into it more, and given that there was no answer with a solution, or even much more interest, I take it that what I wanted to do is probably bad architecture.

What I did in the end, in order to avoid using sudo inside containers, was to mount a host path as a volume and use a file watcher on the host, which runs the job-specific containers whenever the main app creates new files for processing under that path.
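
A minimal sketch of that watcher, assuming inotify-tools on the host; the watched path, image name and file layout are illustrative:

    #!/bin/sh
    # Host-side watcher: whenever the main app finishes writing a file under
    # /srv/jobs (a path bind-mounted into the app container), run the
    # job-specific container on it.
    inotifywait -m -e close_write --format '%w%f' /srv/jobs |
    while read -r file; do
        docker run --rm -v /srv/jobs:/jobs task-image run.sh "/jobs/$(basename "$file")"
    done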
