

Linux Traffic Control using tc ( /sbin/tc ) for OpenVZ and KVM

Ever wondered how to do Linux traffic shaping or traffic control on an IP address or interface? Ever wanted to apply traffic control or traffic shaping to individual VPSs on OpenVZ, KVM, Xen, or other platforms? The Linux traffic control command tc ( /sbin/tc ), also known as Linux QoS, will enable you to do this.

This guide is intended to help you get traffic control and traffic shaping ( QoS ) working on your computer/server in the quickest time possible. Below are the simplest commands you'll need to get Linux traffic control working on CentOS 6; the same commands work on CentOS 7, the only difference being that the NIC devices are named differently (CentOS 6 = eth0, CentOS 7 = enp2s0). In this guide we'll be setting up traffic control on a CentOS 6 OpenVZ server (eth0) and controlling traffic to individual VPSs (Virtual Private Servers). This is not the only way to get traffic control working, but it's the simplest way I could get it to work without giving myself a headache :)

Platform Used in this guide

The example commands below were performed on an OpenVZ server to control and shape traffic to individual VPSs (VEs). The same approach also applies to KVM and other virtualization platforms. The key goal is to set up two traffic pipes for each VPS, one for INCOMING traffic and one for OUTGOING traffic. The traffic control commands simply connect the NIC devices (node NIC and VPS NIC) with pipes and add a controlling "faucet" (valve) to regulate the bandwidth flow rate. Below is a listing of the VPSs (VEs) on the example OpenVZ server.

[root@melbourne1 ~]# vzlist -a
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
      1000         68 running   109.11.28.2     openvz-vps1
      2000         78 running   109.11.28.3     openvz-vps2
      3000        102 running   109.11.28.4     openvz-vps3
[root@melbourne1 ~]#
[root@melbourne1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:25:90:E0:4D:4B  
          inet addr:109.11.28.1  Bcast:109.11.28.255  Mask:255.255.255.0
          inet6 addr: fe80::255:91ff:fed0:3d4a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11505055383 errors:0 dropped:0 overruns:1603571 frame:0
          TX packets:13011399810 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4363120173777 (3.9 TiB)  TX bytes:5514049529460 (5.0 TiB)
          Memory:fba20000-fba40000 

eth1      Link encap:Ethernet  HWaddr 00:23:90:E2:2B:4B  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Memory:fba00000-fba20000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:48390 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48390 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2837245 (2.7 MiB)  TX bytes:2837245 (2.7 MiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:12367117163 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10515529411 errors:0 dropped:16351 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:5169970808545 (4.7 TiB)  TX bytes:3815344895196 (3.4 TiB)

[root@melbourne1 ~]#

As you can see there are 3 OpenVZ VPSs (CTIDs: 1000, 2000, 3000). The node's active NIC is eth0, with venet0 being the shared NIC for all VPSs. In this guide we are going to set up traffic controls for each VPS, using IP filters to identify each VPS's traffic.


STEP#1 Enable HTB queueing discipline ( qdisc ) module in Node Kernel

Firstly you need to make sure the HTB qdisc kernel module is enabled on your server (computer). HTB (Hierarchical Token Bucket) is a type of queueing discipline that determines the order and priority (scheduling) of network packets through your interface. HTB uses the concepts of tokens and buckets, along with a class-based system and filters, to allow complex and granular control over traffic. To make sure the HTB queueing discipline is available on your server/computer simply execute the commands below: "modprobe" loads the sch_htb kernel module and "lsmod | grep htb" confirms it's loaded.

[root@melbourne1 ~]# modprobe sch_htb
[root@melbourne1 ~]# lsmod | grep htb
sch_htb                22278  0 
[root@melbourne1 ~]#
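If the "tokens and buckets" idea is new to you, here is a minimal token-bucket sketch in Python to build intuition before we touch tc. The class, names and numbers are purely illustrative; this is the general concept, not how sch_htb is implemented internally:

```python
# Token-bucket sketch: tokens accrue at `rate` bytes/sec up to a cap of
# `burst` bytes; a packet may only be sent if enough tokens are available,
# which bounds the average rate while still allowing short bursts.
class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes  # start with a full bucket

    def tick(self, seconds):
        """Accrue tokens for `seconds` of elapsed time, capped at burst."""
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def try_send(self, packet_bytes):
        """Send the packet only if the bucket holds enough tokens."""
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False
```

For example, a bucket of TokenBucket(62500, 1600) roughly mirrors a 500Kbit rate with a 1600-byte burst: it can emit one 1500-byte packet immediately, but a second one must wait for tokens to refill.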



STEP#2 Setup OUTGOING HTB Queueing Discipline ( qdisc )

Your interface device (NIC card) will already have an outgoing queueing discipline ( qdisc ) set up, probably pfifo_fast (first in, first out). We need to change the queueing discipline to htb as shown below; the second command "tc qdisc show dev eth0" simply verifies this change was successful.

[root@melbourne1 ~]# tc qdisc add dev eth0 root handle 1:0 htb
[root@melbourne1 ~]# tc qdisc show dev eth0
qdisc htb 1: root refcnt 2 r2q 10 default 0 direct_packets_stat 0
[root@melbourne1 ~]#



STEP#3 Create OUTGOING Root Class

The root (parent) class is always required; it's simply the start of the traffic control pipe and sets the overall maximum flow rate. In this example we are setting the maximum bandwidth rate at 100mbit for the eth0 device (NIC). The second command "tc class show dev eth0" simply verifies the change was successful.

[root@melbourne1 ~]# tc class add dev eth0 parent 1:0 classid 1:1 htb rate 100mbit
[root@melbourne1 ~]# tc class show dev eth0
class htb 1:1 root prio 0 rate 100000Kbit ceil 100000Kbit burst 1600b cburst 1600b 
[root@melbourne1 ~]#

Note: if you simply wanted to control the outgoing traffic rate for one or more NICs, this would be enough. You'd just repeat the command for each NIC (ethernet device) and set its desired rate accordingly. Rate values can also be given in "Kbit" if you want to enter slower speeds.


STEP#4 Create OUTGOING classes for each VPS

Here we are creating 3 classes to regulate outgoing traffic for each VPS. To make it easier to follow, we are giving each class the same ID (label) as the VPS CTIDs listed at the top of this guide (CTIDs: 1000, 2000, 3000). Also note each VPS will be given a "rate" value of 500Kbit and a "ceil" (ceiling) value of 1000Kbit. The rate value is the guaranteed traffic rate, while the ceil value is the maximum allowed traffic rate (reached by borrowing spare bandwidth from the parent). The "tc class show dev eth0" command simply confirms the success of these commands, displaying the 3 classes we just created.

[root@melbourne1 ~]# tc class add dev eth0 parent 1:1 classid 1:1000 htb rate 500Kbit ceil 1000Kbit
[root@melbourne1 ~]# tc class add dev eth0 parent 1:1 classid 1:2000 htb rate 500Kbit ceil 1000Kbit
[root@melbourne1 ~]# tc class add dev eth0 parent 1:1 classid 1:3000 htb rate 500Kbit ceil 1000Kbit
[root@melbourne1 ~]# 
[root@melbourne1 ~]# tc class show dev eth0
class htb 1:1 root rate 100000Kbit ceil 100000Kbit burst 1600b cburst 1600b 
class htb 1:1000 parent 1:1 prio 0 rate 500000bit ceil 1000Kbit burst 1600b cburst 1600b 
class htb 1:2000 parent 1:1 prio 0 rate 500000bit ceil 1000Kbit burst 1600b cburst 1600b 
class htb 1:3000 parent 1:1 prio 0 rate 500000bit ceil 1000Kbit burst 1600b cburst 1600b 
[root@melbourne1 ~]#

OK, now we have 3 traffic control classes set up and waiting to be used. Each of these classes can be used to limit outgoing traffic to 1000Kbit (1Mbit).
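As a quick sanity check of how "rate" and "ceil" interact under the 100mbit root (purely illustrative arithmetic, not part of tc):

```python
# Guaranteed vs. maximum bandwidth for the three child classes above.
rate_kbit = 500        # guaranteed per VPS
ceil_kbit = 1000       # maximum per VPS, via borrowing from the parent
root_kbit = 100_000    # the 100mbit root class
n_vps = 3

guaranteed_total = n_vps * rate_kbit  # promised even when everyone is busy
max_total = n_vps * ceil_kbit         # upper bound if every VPS borrows fully

# The sum of child rates must fit inside the root, or the guarantees
# cannot actually be honoured.
assert guaranteed_total <= root_kbit
print(guaranteed_total, max_total)  # 1500 3000
```

With only 1.5Mbit of guarantees and 3Mbit of ceilings against a 100mbit root, there is plenty of headroom; it's when the sum of rates approaches the root rate that you need to be careful.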


STEP#5 Create IP Filters for OUTGOING traffic classes

At present the 3 OUTGOING classes we just created are not being used; they are just sitting there doing nothing. To use them we need to set up IP filters so we can send specific IP traffic through these classes (pipes) and so control that traffic. For class 1:1000 we will set up a filter for IP 109.11.28.2 (VPS1), for class 1:2000 a filter for IP 109.11.28.3 (VPS2), and for class 1:3000 a filter for IP 109.11.28.4 (VPS3).

In this step it's handy to know the hexadecimal form of your IP addresses, since that is how "tc filter show" displays them. You can use any IP-to-hex converter to find these.
For the VPS IPs in this guide:

109.11.28.2 = 6D0B1C02
109.11.28.3 = 6D0B1C03
109.11.28.4 = 6D0B1C04
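If you'd rather not use a web converter, a couple of lines of Python do the same conversion (the helper name ip_to_hex is just mine):

```python
import ipaddress

def ip_to_hex(ip: str) -> str:
    """Return the 8-digit uppercase hex form of a dotted-quad IPv4 address."""
    return format(int(ipaddress.IPv4Address(ip)), "08X")

for ip in ("109.11.28.2", "109.11.28.3", "109.11.28.4"):
    print(ip, "=", ip_to_hex(ip))
```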

Note in the commands below, "flowid" is what connects the filter to the classes. The "tc filter show dev eth0" command confirms the filters were set up correctly; here you need the hexadecimal versions of your IPs to confirm success.

[root@melbourne1 ~]# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip src 109.11.28.2 flowid 1:1000
[root@melbourne1 ~]# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip src 109.11.28.3 flowid 1:2000
[root@melbourne1 ~]# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip src 109.11.28.4 flowid 1:3000
[root@melbourne1 ~]# 
[root@melbourne1 ~]# tc filter show dev eth0
filter parent 1: protocol ip pref 1 u32 
filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1 
filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:1000 
  match 6d0b1c02/ffffffff at 12
filter parent 1: protocol ip pref 1 u32 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:2000 
  match 6d0b1c03/ffffffff at 12
filter parent 1: protocol ip pref 1 u32 fh 800::802 order 2050 key ht 800 bkt 0 flowid 1:3000 
  match 6d0b1c04/ffffffff at 12
[root@melbourne1 ~]#

OK, that completes the OUTGOING traffic control setup. Now we repeat similar commands to set up the INCOMING traffic control limits.


STEP#6 Repeat for INCOMING traffic

Below is a summarised list of commands to set up the INCOMING traffic controls. Note for OpenVZ we now use the venet0 interface instead of eth0, and root handle 2 rather than 1. For KVM and other virtualization platforms you would use the bridge device or the tap interfaces associated with the VPSs rather than venet0.

Create INCOMING Root qdisc and Class 2

tc qdisc add dev venet0 root handle 2:0 htb
tc class add dev venet0 parent 2:0 classid 2:1 htb rate 100mbit

Create INCOMING classes for each VPS

tc class add dev venet0 parent 2:1 classid 2:1000 htb rate 500Kbit ceil 1000Kbit
tc class add dev venet0 parent 2:1 classid 2:2000 htb rate 500Kbit ceil 1000Kbit
tc class add dev venet0 parent 2:1 classid 2:3000 htb rate 500Kbit ceil 1000Kbit

Create IP Filters for INCOMING traffic classes (note these match on "dst" rather than "src", since from the node's point of view this traffic is heading to the VPS IPs)

tc filter add dev venet0 parent 2:0 protocol ip prio 1 u32 match ip dst 109.11.28.2 flowid 2:1000
tc filter add dev venet0 parent 2:0 protocol ip prio 1 u32 match ip dst 109.11.28.3 flowid 2:2000
tc filter add dev venet0 parent 2:0 protocol ip prio 1 u32 match ip dst 109.11.28.4 flowid 2:3000
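If you have many containers, the per-VPS class and filter commands can be generated in a loop rather than typed out. This is just a sketch using the CTID:IP pairs from this guide; it only prints the commands, so you can review them and then pipe the output to "sh" (as root) to apply them:

```shell
#!/bin/sh
# Print the incoming tc class + filter command for each VPS (CTID:IP pairs
# below are the ones from this guide -- substitute your own).
gen_tc_cmds() {
    dev=venet0
    for pair in 1000:109.11.28.2 2000:109.11.28.3 3000:109.11.28.4; do
        ctid=${pair%%:*}   # text before the colon: the CTID
        ip=${pair#*:}      # text after the colon: the VPS IP
        echo "tc class add dev $dev parent 2:1 classid 2:$ctid htb rate 500Kbit ceil 1000Kbit"
        echo "tc filter add dev $dev parent 2:0 protocol ip prio 1 u32 match ip dst $ip flowid 2:$ctid"
    done
}

gen_tc_cmds
# To actually apply:  gen_tc_cmds | sh
```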


This completes the traffic control setup for each VPS. In this example we entered the same class "rate" and "ceil" numbers for each VPS, but in your own setup you can set these to whatever values you like. To make modifications you first delete the previous setting and then re-execute the tc command with the new values. For example, to change the INCOMING IP filter for VPS "1000" you would first need to execute the following command:

tc filter del dev venet0 parent 2:0 protocol ip prio 1 u32 flowid 2:1000


This deletes the existing incoming filter for 2:1000, allowing you to enter a new IP filter with a different IP.

As another example, to change the OUTGOING class "rate" and "ceil" values for VPS "1000" you would first remove the current OUTGOING IP filter and then the current OUTGOING class with the following commands:

tc filter del dev eth0 parent 1:0 protocol ip prio 1 u32 flowid 1:1000
tc class del dev eth0 classid 1:1000


You could then re-execute the commands to recreate the 1:1000 OUTGOING class with the new "rate" and "ceil" values you want, followed by re-executing the command to recreate the OUTGOING IP filter.


And there you have it! You now have full control over the incoming and outgoing traffic to your VPSs.



