
On Fedora 30, I'm trying to receive multicast packets sent by another process on the same host. Using netstat, iperf, and tcpdump, I've verified that the group is joined and packets are sent to the multicast address, but the server-mode iperf never receives anything.

When I try this on another machine (CentOS 7) on a different network (one I didn't set up), I see the packets leave, and although I don't see packets coming back on the wire, the server iperf does print received packets. I'm guessing this is a kernel thing, but how do I enable it?

Here is some of the terminal session:

    jnordwick@jnkde ~ iperf -s -u -B 226.94.1.1 -i 1
    ------------------------------------------------------------
    Server listening on UDP port 5001
    Binding to local address 226.94.1.1
    Joining multicast group 226.94.1.1
    Receiving 1470 byte datagrams
    UDP buffer size:  208 KByte (default)
    ------------------------------------------------------------

and the other terminal

    jnordwick@jnkde ~ netstat -g
    IPv6/IPv4 Group Memberships
    Interface       RefCnt Group
    --------------- ------ ---------------------
    lo              1      all-systems.mcast.net
    eno1            1      226.94.1.1

Now sending packets:

    jnordwick@jnkde ~ iperf -c 226.94.1.1 -u -T 32 -t 3 -i 1
    ------------------------------------------------------------
    Client connecting to 226.94.1.1, UDP port 5001
    Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
    Setting multicast TTL to 32
    UDP buffer size:  208 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.2.155 port 47755 connected with 226.94.1.1 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0- 1.0 sec   131 KBytes  1.07 Mbits/sec
    [  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
    [  3]  0.0- 3.0 sec   385 KBytes  1.05 Mbits/sec
    [  3] Sent 268 datagrams

Nothing appears in the server-side iperf, but if I run the exact same command on the other network, I see:

    [jnordwick@network2 ~]$ iperf -s -u -B 226.94.1.1 -i 1
    ------------------------------------------------------------
    Server listening on UDP port 5001
    Binding to local address 226.94.1.1
    Joining multicast group 226.94.1.1
    Receiving 1470 byte datagrams
    UDP buffer size:  208 KByte (default)
    ------------------------------------------------------------
    [  3] local 226.94.1.1 port 5001 connected with 204.2.57.7 port 58971
    [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
    [  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec   0.000 ms    0/   90 (0%)
    [  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.001 ms    0/   89 (0%)
    [  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.001 ms    0/   89 (0%)
    [  3]  0.0- 3.0 sec   385 KBytes  1.05 Mbits/sec   0.001 ms    0/  268 (0%)

tcpdump confirms the IGMP joins and that the packets are sent. I've dumped every possible interface (including the bond and its slaves), and I can see the one interface the multicast packets leave from, but nothing comes back on any of them. I assume this is a kernel thing, since I don't think switches usually send multicast/broadcast traffic back to the sending host.
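For reference, the same-host loopback path can be exercised without iperf. Below is a minimal sketch (the group 239.255.0.1 and port 5001 are placeholders, not from the question) that joins a group on a receiving socket and sends one datagram from a second socket with IP_MULTICAST_LOOP enabled; if the datagram never arrives, the kernel setting or a firewall is dropping the looped-back copy:

```python
import socket
import struct

GROUP, PORT = "239.255.0.1", 5001  # hypothetical test group/port

# Receiver: bind and join the group (interface chosen by the kernel).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(2.0)

# Sender: IP_MULTICAST_LOOP asks the kernel to loop the datagram back
# to local members of the group. It defaults to 1 on Linux anyway; it
# is set explicitly here to make the dependency visible.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)

try:
    tx.sendto(b"ping", (GROUP, PORT))
    data, addr = rx.recvfrom(1500)
    print("received", data, "from", addr)
except socket.timeout:
    print("no loopback - check IP_MULTICAST_LOOP and the firewall")
except OSError as e:
    print("send failed:", e)  # e.g. no route for multicast
```

If this receives the datagram while iperf does not, the difference is in iperf's socket options or in firewall handling of the looped-back packet rather than in the kernel's multicast support.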

  • You seem to be using a RESERVED (225.0.0.0 to 231.255.255.255) multicast group, and this is a very bad idea because it could conflict with something at some point. The Organization-Local scope (239.0.0.0 to 239.255.255.255) is set aside for such things. Commented Nov 4, 2019 at 21:54
  • Also, you do not assign a multicast group as the address of an interface. Interfaces are assigned unicast addresses, and joining a multicast group on the interface tells the network stack for that interface to allow traffic destined to that multicast group up the network stack. If you assume that sending a multicast packet out an interface will also result in the multicast packet coming back into the same interface, then you are incorrect. IPv4 multicast does not have a way to loop back into the host, but that was added for IPv6 multicast (Interface-Local scope). Commented Nov 4, 2019 at 21:59
  • @RonMaupin I don't understand your second comment (don't worry about the address - this is just testing). What are you saying is the difference between the two setups? From what I've read (and it seems to be what you are saying), I shouldn't expect the packet back over the interface, so the kernel needs to handle it in some way. Why does the CentOS machine handle it fine, but the Fedora one doesn't? Is there some kernel parameter I need to turn on? Commented Nov 4, 2019 at 22:39
  • The thing about Linux, and the different variants is that they do not always follow the RFCs. For example, you simply do not set an interface address to a multicast group address. Multicast addresses are only used in packets as a destination address. Multicast is very different than unicast, but I have seen some Linux variants able to route it as unicast, and that violates the standards, and you should not do that because it can cause you trouble. There are proper methods for IPC inside a host, but using the network stack is pretty inefficient. Commented Nov 4, 2019 at 22:43
  • @JasonN Well, the same socat example fails when the client adds the option ip-multicast-loop=0 (and works with ip-multicast-loop=1, which is the default here): this option controls looping the multicast packet back on the same host. You could strace iperf and see if there's something similar to setsockopt(5, SOL_IP, IP_MULTICAST_LOOP, "\0", 1) = 0. There's also the firewall to check: you could be dropping those looped-back packets (for this case the best way to accept them is to mark them on output and accept the marks on input). Commented Nov 4, 2019 at 23:21
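The IP_MULTICAST_LOOP check suggested in the last comment can also be done without strace. This is a small sketch (not from the thread) showing that on a Linux IPv4 UDP socket the option defaults to 1, and what the setsockopt seen in the strace example corresponds to:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# On Linux, IP_MULTICAST_LOOP defaults to 1: the kernel loops outgoing
# multicast back to local members of the destination group.
default_loop = s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP)
print("default IP_MULTICAST_LOOP:", default_loop)

# The call from the strace suggestion (disabling loopback) is equivalent to:
s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)
disabled_loop = s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP)
print("after disabling:", disabled_loop)

s.close()
```

If iperf's strace shows it setting the option to 0, that would explain the missing loopback; if the option stays at 1, the firewall is the next suspect, as the comment notes.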

1 Answer


This appears to be a firewalld issue; the ipchains rules I tried didn't work, but this did:

    firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 \
        -m pkttype --pkt-type multicast -j ACCEPT

Also, just turning off firewalld works:

    systemctl stop firewalld
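One note on the --permanent rule above, in case it seems to have no effect: --permanent writes the rule to firewalld's configuration only, so with firewalld running it does not take effect until a reload (alternatively, repeat the rule without --permanent to add it to the runtime configuration immediately):

```shell
firewall-cmd --reload
```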

Taken from: https://www.centos.org/forums/viewtopic.php?t=60395
