This cannot work as written: the logging driver is configured on, and executed by, the Docker engine itself, and the host daemon does not have access to the DNS names defined in the compose file.
NOTE The docker-compose.yaml files for both solutions are available on GitHub.
However, with some slight modifications, you can achieve what you want:
```yaml
configs:
  fluent-bit-config:
    content: |
      [SERVICE]
          Flush        1
          Daemon       Off
          Log_Level    debug
      [INPUT]
          Name         forward
          Listen       0.0.0.0
          Port         24224
      [OUTPUT]
          Name         stdout
          Match        *

services:
  logger-app:
    image: alpine:latest
    command: ["sh", "-c", "while true; do echo 'Hello, World!'; sleep 5; done"]
    deploy:
      replicas: 1
    logging:
      driver: fluentd
      options:
        fluentd-address: ${FLUENTBIT_HOST_IP:?please set FLUENTBIT_HOST_IP in .env}:24224
        fluentd-async: "true"

  fluentbit:
    image: fluent/fluent-bit:latest
    configs:
      - source: fluent-bit-config
        target: /fluent-bit/etc/fluent-bit.conf
    ports:
      - "24224:24224"
```
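Since the log driver runs on the host, `FLUENTBIT_HOST_IP` must be an address the docker daemon can reach — typically the IP of the docker host itself, not a compose service name. A hypothetical `.env` might look like this (the IP is a placeholder; substitute your host's actual address):

```shell
# .env (hypothetical value — use your docker host's IP)
FLUENTBIT_HOST_IP=192.168.1.10
```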
With that being said: this is a poor man's solution.
There are at least two approaches that imho make more sense. The first is to roll out a dedicated fluent-bit instance and configure the docker daemon to use the fluentd log driver by default.
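For completeness, a minimal sketch of that daemon-wide default in `/etc/docker/daemon.json` — the address `127.0.0.1:24224` is an assumption for a fluent-bit published on the host; adjust it to wherever yours listens:

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "127.0.0.1:24224",
    "fluentd-async": "true"
  }
}
```

Note that the daemon needs a restart for this to take effect, and the default only applies to containers created afterwards.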
However, imho this is still less than ideal, because you need to change the daemon config, go back and forth between the host network and the internal network of the application, and whatnot.
So let me introduce my preferred option: Grafana Alloy. This nifty little agent comes with autodiscovery capabilities, the ability to collect not only logs but also metrics and traces, OpenTelemetry outputs (and a heck of a lot more), and on top of that it is incredibly lightweight (for what it does).
The elegant part here is that logs are not pushed to alloy, but pulled by alloy.
```yaml
configs:
  alloy-config:
    content: |
      logging {
        level  = "info"
        format = "logfmt"
      }

      # We use the auto-discovery for docker via
      # the docker socket.
      # You can also use a tcp connection here.
      discovery.docker "nix" {
        host = "unix:///var/run/docker.sock"

        # We only want to collect from containers
        # with the label "logaggregation" set to "yes".
        # See https://grafana.com/docs/alloy/latest/reference/components/discovery/discovery.docker/#filter
        filter {
          name   = "label"
          values = ["logaggregation=yes"]
        }
      }

      loki.source.docker "default" {
        # Read the logs of the discovered targets...
        targets = discovery.docker.nix.targets
        # ...via the docker socket...
        host = "unix:///var/run/docker.sock"
        # ...and send them to the echo receiver called "example".
        forward_to = [loki.echo.example.receiver]
      }

      # loki.echo is a receiver which _understands_ Loki messages
      # and writes them to stdout.
      loki.echo "example" { }

services:
  logger-app:
    image: alpine:latest
    command: ["sh", "-c", "while true; do echo 'Hello, World!'; sleep 5; done"]
    labels:
      # This label causes the logs to
      # be processed by alloy.
      logaggregation: "yes"
    deploy:
      replicas: 1

  alloy:
    image: grafana/alloy:v1.8.2
    command:
      - run
      - --cluster.enabled=false
      - --server.http.listen-addr=0.0.0.0:12345
      - --storage.path=/var/lib/alloy/data
      - /etc/alloy/config.alloy
    ports:
      # We enabled the web ui, you might want to visit it.
      - "12345:12345"
    configs:
      - source: alloy-config
        target: /etc/alloy/config.alloy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```
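To see it in action, bring the stack up and follow alloy's output; the `loki.echo` component writes every line collected from `logger-app` to stdout:

```shell
# Start the stack and follow alloy's logs; each "Hello, World!"
# from logger-app should show up as a Loki-style log entry.
docker compose up -d
docker compose logs -f alloy
```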