I have containers that perform specific tasks, for example running an R-project command or creating the waveform of an audio file by calling something like "docker (run|exec) run.sh". I am looking for a way to start these task containers from inside other containers without having to do extra work for each new task.
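For concreteness, the kinds of invocations I mean look roughly like this (the image and script names are made-up placeholders):

```sh
# placeholder examples of the per-task commands
docker run --rm r-task-image Rscript analysis.R
docker exec audio-worker ./run.sh input.wav
```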
The way I am currently thinking of solving this is to give the calling container access to the Docker daemon by bind-mounting the host's Docker socket inside it. Concretely: my host runs a Docker container whose application runs as an unprivileged user, app. The host Docker socket is mounted inside that container, and root creates a wrapper script, /usr/local/run_other_docker.sh. User app has no access rights on the mounted socket itself, but is allowed to execute /usr/local/run_other_docker.sh through a passwordless sudoers entry.
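To make the setup concrete, here is a minimal sketch of what I have in mind; the image name, the wrapper's exact contents, and the sudoers file path are placeholders for illustration, not a fixed design:

```sh
# On the host: start the application container with the host's Docker socket bind-mounted
docker run -d --name app-container \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-app-image   # placeholder image name

# Inside the container (as root): a wrapper that forwards a task to the host daemon
cat > /usr/local/run_other_docker.sh <<'EOF'
#!/bin/sh
# run the requested task container through the mounted socket
exec docker run --rm "$@"
EOF
chmod 755 /usr/local/run_other_docker.sh

# Inside the container (as root): allow user "app" to run only the wrapper, passwordless
echo 'app ALL=(root) NOPASSWD: /usr/local/run_other_docker.sh' > /etc/sudoers.d/app
chmod 440 /etc/sudoers.d/app
```

User app would then launch a task with something like "sudo /usr/local/run_other_docker.sh some-task-image ./run.sh".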
How dangerous is this?
Is there a standard/safe way of starting other task containers from inside a container without binding to the host docker socket?
The only other solution I have come across involves creating a microservice that runs in the second container for the first one to call. This is undesirable because it adds another component to build and maintain for each such use case.