
Given: We have a service that currently runs on a VM with a MAC address of 4E-3C-FC-EF-C5-45. Our DHCP server assigns it the hostname "pixie1". The IP address doesn't matter because everything accesses it by name.

The Goal: We're trying to move most of our services from VMs to containers, so we've set up a new machine to host the Docker containers. All of these services need to continue operating such that they can be accessed by hostname on the existing LAN (10.41.x.x), which is managed using static leases from our DHCP server.

The Problem: There's a TON of conflicting information out there from the past 8 years saying "it can't be done", "it can be done", "use macvlan", "use ipvlan", and so on.

Every couple of years, I've looked into this and spent days diving into rabbit holes that lead nowhere.

Most of the "solutions" involve self-assigning a particular IP address, but then we end up with the exact problem that DHCP solves: no central management of IP addresses and hostnames. Or we end up with split management across multiple DHCP servers (the central corporate one, and now also a local Docker one), which then have to coordinate with each other.

There's also docker-net-dhcp, which hasn't been maintained in years.

It doesn't have to be docker. I'll use podman or anything that can run Dockerfiles - I just want to be able to make a Dockerfile or compose.yml and get a hostname and IP address from our central DHCP server's static leases (by mac address).

What I'd like to know is:

  1. Can this actually be done in docker-compose (or podman-compose or whatever), using ONLY the corporate DHCP server to provide host names and IP addresses?

  2. HOW specifically does one do it (assuming the docker hosting machine is connected to 10.41.x.x via interface eno0 and the physical host has, for example, address 10.41.0.55)? As in, make the container say "Hi, I'm a machine on this physical LAN (10.41.x.x), and I have MAC address 4E-3C-FC-EF-C5-45. Corporate DHCP server, please give me an IP address and host name!"
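For concreteness, here's the shape of what I'm after. This is only a sketch of my best understanding, not something I've verified: podman with the netavark backend can reportedly defer to an external DHCP server on macvlan networks that are created without a subnet, provided the host's dhcp-proxy service is running (the network name "lan" and image name are placeholders):

```shell
# On the docker/podman host: enable netavark's DHCP proxy
# (assumes podman 4.x with the netavark network backend).
sudo systemctl enable --now netavark-dhcp-proxy.socket

# Create a macvlan network bound to the physical NIC.
# Giving no subnet is what (reportedly) makes netavark lease
# addresses from the LAN's DHCP server instead of its own IPAM.
sudo podman network create -d macvlan -o parent=eno0 lan

# Run the container with the MAC address that has a static lease,
# so the corporate DHCP server hands back the expected IP/hostname.
sudo podman run -d --network lan --mac-address 4e:3c:fc:ef:c5:45 my-image
```

One known caveat with macvlan: by default the host itself cannot reach the container's macvlan address directly, only other machines on the LAN can.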

  • I think the real problem is, it's not exactly a matter of whether that's "(more-or-less) do-able", but a matter of "that's not how containerization is supposed to work". It's like "moving from VMs to containers, but not really". It would mean that all your containers would need to have a DHCP client running in them. Certainly if you are talking about just ONE VM, it might still make some sense. Commented Jun 30, 2024 at 8:31
  • You would typically handle IP-to-service mapping with containers by using port publishing rather than assigning each container an address on the local network. That is, if you have a web service, you would expose that with -p <ip_address>:<host_port>:<container_port>, where <ip_address> is something that you would assign statically to the host, not the container. Commented Jun 30, 2024 at 11:23
  • But why is that the case? What's so bad about running a DHCP client in a container and letting it behave as if it were a complete, standalone machine? Then there's no mucking about with juggling port numbers and using nonstandard ports for HTTPS, for example. It's just another host on the network, no fuss no muss. In fact, LXD allows you to do this, and it's a container system... And it works great! Commented Jul 1, 2024 at 6:44
  • @Karl well, let's just say docker isn't designed to work like that. I'm not familiar with lxc/lxd, but I'm aware of containers that are more "VM-ish", like systemd-nspawn, as well. While you can probably build / "stack up" a customized docker image that is more like a "general system", AFAIK the core idea of docker is to have more or less "per-app" containers that consist of, and (especially) run, as little as possible. I think one can say it's a bit like "snap/flatpak for services". Commented Jul 1, 2024 at 7:27
  • In fact, yesterday when I checked, there doesn't seem to be an actual way to disable IPAM, and at least by default, one apparently can't add or remove addresses from within a docker container. (It might be possible by leveraging ip netns on the host. I didn't bother to test that far.) Commented Jul 1, 2024 at 7:32

1 Answer


As mentioned in the comments, the best solution in the context of Docker containers is to publish the port with:

docker run -d --publish HOST_PORT:CONTAINER_PORT my-container 

You can use the short form:

docker run -d -p HOST_PORT:CONTAINER_PORT my-container 

You can also use Docker Compose with a compose.yaml file:

services:
  app:
    image: my-container
    ports:
      - 8080:80

You can then either expose the published port directly or put it behind a reverse proxy such as Apache or nginx. In the latter case, publish the Docker port only on the localhost interface with:

ports: - "127.0.0.1:8001:8001" 

Be aware that by default, ports are published to all network interfaces. This means any traffic that reaches your machine can access the published application.
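Conversely, if you want the service reachable under its own dedicated LAN address (without running DHCP inside the container), one approach suggested in the comments is to statically add a second address to the host and bind the published port to it. A sketch, where the extra address, interface name, and ports are all example placeholders:

```shell
# Give the Docker host a second address on the LAN for this service
# (the address must be reserved for the host, e.g. outside the DHCP pool
# or via a static lease for the host's own MAC).
sudo ip addr add 10.41.0.56/16 dev eno0

# Publish the container's port on that address only, so the service
# answers on 10.41.0.56 while the host itself stays on 10.41.0.55.
docker run -d -p 10.41.0.56:443:443 my-container
```

Note that this keeps address management on the host side; the container itself never talks to the DHCP server.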

In the reverse proxy you can add HTTP headers such as X-Forwarded-Host and X-Forwarded-Proto to let the application know the public URL used by the client. In some applications, this public URL can also be set at the configuration level.
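For illustration, a minimal nginx server block forwarding to a locally published container port could look like this (hostname, port, and certificate setup omitted/placeholder):

```nginx
server {
    listen 443 ssl;
    server_name pixie1.example.com;

    location / {
        # The container's port, published only on localhost.
        proxy_pass http://127.0.0.1:8001;

        # Pass along the public URL details the client actually used.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```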

  • So why is this better than simply allowing a container to get its own IP address via DHCP? It makes managing things so much easier because everything is uniform and all the tools are designed that way. No need to hack about with forwarding and nonstandard port numbers. LXC/LXD lets you do it, why not Docker? Commented Jul 1, 2024 at 6:47
  • In this way, the container's image is independent of the network layer. The same image can then be used on K8s. Commented Jul 1, 2024 at 7:21
