
I realise many similar questions have already been asked, but so far I've yet to find a solution to my problem.

I have a virtual Linux server (running Debian Squeeze) that I use for testing website load times, so that I can measure increases and decreases in those load times. I'm attempting to limit the bandwidth and latency of this server in order to get close to real-world load times, but so far I've failed.

What I want specifically is the following:

  • To set an incoming and outgoing latency of 50 ms.
  • To set an incoming bandwidth limit of 512 kbps.
  • To set an outgoing bandwidth limit of 4096 kbps.

I've been reading up on netem and the tc command, but it's still all a bit over my head. I've managed to put together this command to control the latency, which seems to work, but I'm not even sure whether it handles only the outgoing latency or both:

tc qdisc add dev eth0 root netem delay 50ms 
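
For reference, this is roughly how I've been checking what is currently attached to the interface while experimenting (my understanding, which may well be wrong, is that a root qdisc like this only affects traffic leaving eth0):

# List the queueing disciplines currently attached to eth0
tc qdisc show dev eth0
# Show per-qdisc statistics (packets sent, dropped, overlimits)
tc -s qdisc show dev eth0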

Any network gurus around that can help me out?

Edit:

After further research I've gotten halfway to my goal. Using the command below, all outgoing traffic behaves as I want it to:

tc qdisc add dev eth0 root tbf rate 4.0mbit latency 50ms burst 50kb mtu 10000 
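
Between attempts I've been resetting the interface by deleting the root qdisc before adding a new one, which as far as I understand puts the default qdisc back:

# Remove the current root qdisc from eth0 (the interface falls back to its default qdisc)
tc qdisc del dev eth0 root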

However, I still haven't been able to throttle the incoming traffic properly. I've learnt that I'm supposed to use an "Ingress Policer filter". I've been trying to do just that with the commands below, playing around with different values, but no luck.

tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip src 0.0.0.0/0 flowid :1 police rate 1.0mbit mtu 10000 burst 10k drop

The commands do affect the bandwidth, though: with the values above, the speed starts at 2 MB/s and, as a transfer progresses, slowly drops to around 80-90 kB/s, which it reaches after about 30 seconds of transfer.
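
While playing around with the values I've been removing and re-adding the ingress qdisc between attempts; as far as I can tell this also removes the filter attached to it:

# Remove the ingress qdisc (and its attached police filter) from eth0
tc qdisc del dev eth0 ingress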

Any ideas on what I'm doing wrong?

  • netem delay 50ms does not limit the latency. It increases the latency by 50ms compared to what it would otherwise have been. Commented Jul 9, 2014 at 9:23
  • Indeed, you are right. I removed the word "limit" because an increase of 50 ms was what I was actually looking for (as it's a virtual machine on the same computer, the original latency was close enough to 0 anyway). Commented Jul 18, 2014 at 11:05

2 Answers


I finally settled for just setting the outgoing bandwidth/latency on the server, and then doing the same on the client, effectively reaching the same result.

These are the commands I ran on the server and client respectively to reach my goals:

Server: 4 Mbit 50 ms

# Root HTB qdisc on eth0; unclassified traffic falls into class 1:11
tc qdisc add dev eth0 handle 1: root htb default 11
# Parent class with a very high (effectively unlimited) rate
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000Mbps
# Child class that caps outgoing traffic at 4 Mbit
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 4Mbit
# netem attached to the child class adds the 50 ms delay
tc qdisc add dev eth0 parent 1:11 handle 10: netem delay 50ms

Client: 512 kbit 50 ms

tc qdisc add dev vmnet1 handle 1: root htb default 11
tc class add dev vmnet1 parent 1: classid 1:1 htb rate 1000Mbps
tc class add dev vmnet1 parent 1:1 classid 1:11 htb rate 512kbit
tc qdisc add dev vmnet1 parent 1:11 handle 10: netem delay 50ms
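
A quick sanity check is to ping the server from the client: with 50 ms added in each direction you should see roughly 100 ms of extra round-trip time (server-ip below is just a placeholder for your server's address):

# Expect roughly 100 ms of added round-trip time (50 ms out + 50 ms back)
ping -c 5 server-ip
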
  • I've been looking for this for months. Thanks. One question: how do you delete the rule? tc class del dev eth0 root shows RTNETLINK answers: No such file or directory Commented Jan 21, 2014 at 2:31
  • It was a few months ago, but I seem to remember it being enough to remove the qdisc: tc qdisc del dev eth0 root Commented Jan 21, 2014 at 6:51
  • By executing the commands on the server, are we saying that the incoming traffic is restricted to 4 Mbps of bandwidth and 50 ms of latency? What does each command mean? Commented May 23, 2023 at 22:35
  • Great! Can someone explain why the default is 11? Why are there 2 classes? Can I just have 1? Commented Dec 13, 2024 at 14:53

Some 80-90 kByte / s is about what to expect from

 tc filter add ... police rate 1.0mbit ... 

You ask for incoming data to be thrown away when it arrives faster than 1 Mbit/s, which is about 125 kByte/s. The remote server will then drop to considerably lower than that (maybe half, not sure). After that, all packets come through, so the remote end slowly picks up speed until 125 kByte/s is again reached. You get an average throughput considerably below 125 kByte/s, which is typical of ingress policing.
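
If you want to watch this happening, the police action keeps counters that you can inspect while a transfer is running (assuming the filter is attached to the ingress qdisc exactly as in your command):

# Show the ingress filter together with its statistics (packets/bytes passed and dropped)
tc -s filter show dev eth0 parent ffff: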

I'm a bit surprised that the speed reaches 2 MByte/s with the ingress police filter already in place. Where did you measure: at the downstream client (program) or at some upstream router? Or maybe you first started the connection and only afterwards put the ingress police filter in place?
