Scaling in the Linux Networking Stack


Introduction
============

This document describes a set of complementary techniques in the Linux
networking stack to increase parallelism and improve performance for
multi-processor systems.

The following technologies are described:

  RSS: Receive Side Scaling
  RPS: Receive Packet Steering
  RFS: Receive Flow Steering
  Accelerated Receive Flow Steering
  XPS: Transmit Packet Steering


RSS: Receive Side Scaling
=========================

Contemporary NICs support multiple receive and transmit descriptor queues
(multi-queue). On reception, a NIC can send different packets to different
queues to distribute processing among CPUs. The NIC distributes packets by
applying a filter to each packet that assigns it to one of a small number
of logical flows. Packets for each flow are steered to a separate receive
queue, which in turn can be processed by separate CPUs. This mechanism is
generally known as “Receive-side Scaling” (RSS). The goal of RSS and
the other scaling techniques is to increase performance uniformly.
Multi-queue distribution can also be used for traffic prioritization, but
that is not the focus of these techniques.

The filter used in RSS is typically a hash function over the network
and/or transport layer headers; for example, a 4-tuple hash over the
IP addresses and TCP ports of a packet. The most common hardware
implementation of RSS uses a 128-entry indirection table where each entry
stores a queue number. The receive queue for a packet is determined by
taking the low-order seven bits of the computed hash for the packet
(usually a Toeplitz hash) as a key into the indirection table and reading
the corresponding value.
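
As a concrete illustration, the lookup is small enough to sketch in a few
lines of C. The table contents, queue count and hash value below are
hypothetical; a real NIC computes the Toeplitz hash in hardware and the
driver programs the indirection table.

  #include <stdint.h>
  #include <stdio.h>

  #define RSS_INDIR_SIZE 128              /* common indirection table size */

  /* Hypothetical indirection table holding queue numbers. */
  static uint8_t indir_table[RSS_INDIR_SIZE];

  /* Map a receive hash to a queue the way an RSS NIC typically does:
   * keep only the low-order seven bits and use them as a table index. */
  static unsigned int rss_queue_for_hash(uint32_t rx_hash)
  {
          return indir_table[rx_hash & (RSS_INDIR_SIZE - 1)];
  }

  int main(void)
  {
          unsigned int nqueues = 8;       /* assume 8 receive queues */
          int i;

          /* Default layout: spread the queues evenly across the table. */
          for (i = 0; i < RSS_INDIR_SIZE; i++)
                  indir_table[i] = i % nqueues;

          printf("hash 0x12345678 -> queue %u\n",
                 rss_queue_for_hash(0x12345678));
          return 0;
  }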

Some advanced NICs allow steering packets to queues based on
programmable filters. For example, TCP port 80 packets bound for a
webserver can be directed to their own receive queue. Such “n-tuple”
filters can be configured from ethtool (--config-ntuple).

==== RSS Configuration

The driver for a multi-queue capable NIC typically provides a kernel
module parameter for specifying the number of hardware queues to
configure. In the bnx2x driver, for instance, this parameter is called
num_queues. A typical RSS configuration would be to have one receive queue
for each CPU if the device supports enough queues, or otherwise at least
one for each memory domain, where a memory domain is a set of CPUs that
share a particular memory level (L1, L2, NUMA node, etc.).

The indirection table of an RSS device, which resolves a queue by masked
hash, is usually programmed by the driver at initialization. The
default mapping is to distribute the queues evenly in the table, but the
indirection table can be retrieved and modified at runtime using ethtool
commands (--show-rxfh-indir and --set-rxfh-indir). Modifying the
indirection table could be done to give different queues different
relative weights, as sketched below.
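
A minimal sketch of building such a weighted table in C; the weights,
queue count and output format are made up, and the resulting 128 entries
would then be applied with ethtool --set-rxfh-indir.

  #include <stdio.h>

  #define RSS_INDIR_SIZE 128

  int main(void)
  {
          /* Hypothetical relative weights: queue 0 gets half of all traffic. */
          int weights[] = { 4, 2, 1, 1 };
          int nqueues = 4, total = 8;
          int entry = 0, q, i;

          for (q = 0; q < nqueues; q++) {
                  int share = RSS_INDIR_SIZE * weights[q] / total;

                  for (i = 0; i < share && entry < RSS_INDIR_SIZE; i++)
                          printf("entry %3d -> queue %d\n", entry++, q);
          }
          /* Give any remainder from the integer division to queue 0. */
          while (entry < RSS_INDIR_SIZE)
                  printf("entry %3d -> queue %d\n", entry++, 0);
          return 0;
  }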

== RSS IRQ Configuration

Each receive queue has a separate IRQ associated with it. The NIC triggers
this to notify a CPU when new packets arrive on the given queue. The
signaling path for PCIe devices uses message signaled interrupts (MSI-X),
which can route each interrupt to a particular CPU. The active mapping
of queues to IRQs can be determined from /proc/interrupts. By default,
an IRQ may be handled on any CPU. Because a non-negligible part of packet
processing takes place in receive interrupt handling, it is advantageous
to spread receive interrupts between CPUs. To manually adjust the IRQ
affinity of each interrupt see Documentation/IRQ-affinity.txt. Some systems
will be running irqbalance, a daemon that dynamically optimizes IRQ
assignments and as a result may override any manual settings.

== Suggested Configuration

RSS should be enabled when latency is a concern or whenever receive
interrupt processing forms a bottleneck. Spreading load between CPUs
decreases queue length. For low latency networking, the optimal setting
is to allocate as many queues as there are CPUs in the system (or the
NIC maximum, if lower). The most efficient high-rate configuration
is likely the one with the smallest number of receive queues where no
receive queue overflows due to a saturated CPU, because in default
mode with interrupt coalescing enabled, the aggregate number of
interrupts (and thus work) grows with each additional queue.

Per-cpu load can be observed using the mpstat utility, but note that on
processors with hyperthreading (HT), each hyperthread is represented as
a separate CPU. For interrupt handling, HT has shown no benefit in
initial tests, so limit the number of queues to the number of CPU cores
in the system.


RPS: Receive Packet Steering
============================

Receive Packet Steering (RPS) is logically a software implementation of
RSS. Being in software, it is necessarily called later in the datapath.
Whereas RSS selects the queue and hence the CPU that will run the hardware
interrupt handler, RPS selects the CPU to perform protocol processing
above the interrupt handler. This is accomplished by placing the packet
on the desired CPU’s backlog queue and waking up the CPU for processing.

RPS has some advantages over RSS: 1) it can be used with any NIC,
2) software filters can easily be added to hash over new protocols,
3) it does not increase hardware device interrupt rate (although it does
introduce inter-processor interrupts (IPIs)).

RPS is called during the bottom half of the receive interrupt handler, when
a driver sends a packet up the network stack with netif_rx() or
netif_receive_skb(). These call the get_rps_cpu() function, which
selects the CPU that should process the packet.

The first step in determining the target CPU for RPS is to calculate a
flow hash over the packet’s addresses or ports (2-tuple or 4-tuple hash
depending on the protocol). This serves as a consistent hash of the
associated flow of the packet. The hash is either provided by hardware
or will be computed in the stack. Capable hardware can pass the hash in
the receive descriptor for the packet; this would usually be the same
hash used for RSS (e.g. the computed Toeplitz hash). The hash is saved in
skb->rx_hash and can be used elsewhere in the stack as a hash of the
packet’s flow.

Each receive hardware queue has an associated list of CPUs to which
RPS may enqueue packets for processing. For each received packet,
an index into the list is computed from the flow hash modulo the size
of the list. The indexed CPU is the target for processing the packet,
and the packet is queued to the tail of that CPU’s backlog queue. At
the end of the bottom half routine, IPIs are sent to any CPUs for which
packets have been queued to their backlog queue. The IPI wakes backlog
processing on the remote CPU, and any queued packets are then processed
up the networking stack.
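
In outline, the selection reduces to a modulo over the per-queue list of
allowed CPUs. A simplified userspace sketch (the CPU list and hash are
hypothetical; the real logic lives in get_rps_cpu()):

  #include <stdint.h>
  #include <stdio.h>

  /* Simplified model of RPS target selection: the flow hash picks an
   * index into the per-receive-queue list of allowed CPUs (rps_cpus). */
  static int rps_target_cpu(uint32_t flow_hash, const int *cpus, int ncpus)
  {
          if (ncpus == 0)
                  return -1;      /* RPS disabled: process on this CPU */
          return cpus[flow_hash % ncpus];
  }

  int main(void)
  {
          int rps_cpus[] = { 0, 2, 4, 6 };    /* hypothetical list for rx-0 */

          printf("flow 0xdeadbeef -> CPU %d\n",
                 rps_target_cpu(0xdeadbeef, rps_cpus, 4));
          return 0;
  }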

==== RPS Configuration

RPS requires a kernel compiled with the CONFIG_RPS kconfig symbol (on
by default for SMP). Even when compiled in, RPS remains disabled until
explicitly configured. The list of CPUs to which RPS may forward traffic
can be configured for each receive queue using a sysfs file entry:

 /sys/class/net/<dev>/queues/rx-<n>/rps_cpus

This file implements a bitmap of CPUs. RPS is disabled when it is zero
(the default), in which case packets are processed on the interrupting
CPU. Documentation/IRQ-affinity.txt explains how CPUs are assigned to
the bitmap.
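
For example, to let CPUs 0-3 handle protocol processing for receive queue
0 of a device, the hexadecimal mask f (binary 1111) would be written to
that queue's rps_cpus file. A minimal sketch, assuming the device name
eth0 and that CPU set:

  #include <stdio.h>

  int main(void)
  {
          /* Hypothetical example: steer rx-0 of eth0 to CPUs 0-3 (mask 0xf). */
          const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
          FILE *f = fopen(path, "w");

          if (!f) {
                  perror(path);
                  return 1;
          }
          fprintf(f, "f\n");
          fclose(f);
          return 0;
  }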

== Suggested Configuration

For a single queue device, a typical RPS configuration would be to set
rps_cpus to the CPUs in the same memory domain as the interrupting
CPU. If NUMA locality is not an issue, this could also be all CPUs in
the system. At high interrupt rate, it might be wise to exclude the
interrupting CPU from the map since that already performs much work.

For a multi-queue system, if RSS is configured so that a hardware
receive queue is mapped to each CPU, then RPS is probably redundant
and unnecessary. If there are fewer hardware queues than CPUs, then
RPS might be beneficial if the rps_cpus for each queue are the ones that
share the same memory domain as the interrupting CPU for that queue.


RFS: Receive Flow Steering
==========================

While RPS steers packets solely based on hash, and thus generally
provides good load distribution, it does not take into account
application locality. Receive Flow Steering (RFS) addresses this.
The goal of RFS is to increase the datacache hit rate by steering
kernel processing of packets to the CPU where the application thread
consuming the packet is running. RFS relies on the same RPS mechanisms
to enqueue packets onto the backlog of another CPU and to wake up that
CPU.

In RFS, packets are not forwarded directly by the value of their hash,
but the hash is used as an index into a flow lookup table. This table maps
flows to the CPUs where those flows are being processed. The flow hash
(see RPS section above) is used to calculate the index into this table.
The CPU recorded in each entry is the one which last processed the flow.
If an entry does not hold a valid CPU, then packets mapped to that entry
are steered using plain RPS. Multiple table entries may point to the
same CPU. Indeed, with many flows and few CPUs, it is very likely that
a single application thread handles flows with many different flow hashes.

rps_sock_flow_table is a global flow table that contains the *desired* CPU
for flows: the CPU that is currently processing the flow in userspace.
Each table value is a CPU index that is updated during calls to recvmsg
and sendmsg (specifically, inet_recvmsg(), inet_sendmsg(), inet_sendpage()
and tcp_splice_read()).

When the scheduler moves a thread to a new CPU while it has outstanding
receive packets on the old CPU, packets may arrive out of order. To
avoid this, RFS uses a second flow table to track outstanding packets
for each flow: rps_dev_flow_table is a table specific to each hardware
receive queue of each device. Each table value stores a CPU index and a
counter. The CPU index represents the *current* CPU onto which packets
for this flow are enqueued for further kernel processing. Ideally, kernel
and userspace processing occur on the same CPU, and hence the CPU index
in both tables is identical. This will not be the case if the scheduler has
recently migrated a userspace thread while the kernel still has packets
enqueued for kernel processing on the old CPU.

The counter in rps_dev_flow_table values records the length of the current
CPU's backlog when a packet in this flow was last enqueued. Each backlog
queue has a head counter that is incremented on dequeue. A tail counter
is computed as head counter + queue length. In other words, the counter
in rps_dev_flow[i] records the last element in flow i that has
been enqueued onto the currently designated CPU for flow i (of course,
entry i is actually selected by hash and multiple flows may hash to the
same entry i).

And now the trick for avoiding out of order packets: when selecting the
CPU for packet processing (from get_rps_cpu()) the rps_sock_flow table
and the rps_dev_flow table of the queue that the packet was received on
are compared. If the desired CPU for the flow (found in the
rps_sock_flow table) matches the current CPU (found in the rps_dev_flow
table), the packet is enqueued onto that CPU’s backlog. If they differ,
the current CPU is updated to match the desired CPU if one of the
following is true:

- The current CPU's queue head counter >= the recorded tail counter
  value in rps_dev_flow[i]
- The current CPU is unset (equal to RPS_NO_CPU)
- The current CPU is offline

After this check, the packet is sent to the (possibly updated) current
CPU. These rules aim to ensure that a flow only moves to a new CPU when
there are no packets outstanding on the old CPU, as the outstanding
packets could arrive later than those about to be processed on the new
CPU.
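
The decision can be condensed into a small amount of code. The sketch
below models the two tables with plain C structs and made-up backlog
counters; the real implementation lives in get_rps_cpu().

  #include <stdbool.h>
  #include <stdio.h>

  #define RPS_NO_CPU 0xffffu
  #define NR_CPUS 4

  /* Stand-ins for kernel state: per-CPU backlog head counters and an
   * online mask. The values are made up for the demonstration. */
  static unsigned int backlog_head[NR_CPUS] = { 100, 250, 0, 73 };
  static bool cpu_online[NR_CPUS] = { true, true, true, true };

  struct dev_flow {
          unsigned int cpu;        /* current CPU handling the flow in the kernel */
          unsigned int last_qtail; /* backlog tail counter when last enqueued */
  };

  /* Prefer the desired CPU (rps_sock_flow_table), but only move away from
   * the current CPU (rps_dev_flow_table) once its backlog head counter has
   * advanced past the tail counter recorded for this flow. */
  static unsigned int rfs_select_cpu(unsigned int desired, struct dev_flow *df)
  {
          if (df->cpu == desired)
                  return desired;

          if (df->cpu == RPS_NO_CPU ||
              !cpu_online[df->cpu] ||
              (int)(backlog_head[df->cpu] - df->last_qtail) >= 0)
                  df->cpu = desired;       /* safe: no outstanding packets */

          return df->cpu;                  /* possibly updated current CPU */
  }

  int main(void)
  {
          struct dev_flow df = { .cpu = 1, .last_qtail = 240 };

          /* The thread moved to CPU 3; head 250 >= tail 240, so the flow follows. */
          printf("packet goes to CPU %u\n", rfs_select_cpu(3, &df));
          return 0;
  }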

==== RFS Configuration

RFS is only available if the kconfig symbol CONFIG_RPS is enabled (on
by default for SMP). The functionality remains disabled until explicitly
configured. The number of entries in the global flow table is set through:

 /proc/sys/net/core/rps_sock_flow_entries

The number of entries in the per-queue flow table is set through:

 /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt

== Suggested Configuration

Both of these need to be set before RFS is enabled for a receive queue.
Values for both are rounded up to the nearest power of two. The
suggested flow count depends on the expected number of active connections
at any given time, which may be significantly less than the number of open
connections. We have found that a value of 32768 for rps_sock_flow_entries
works fairly well on a moderately loaded server.

For a single queue device, the rps_flow_cnt value for the single queue
would normally be configured to the same value as rps_sock_flow_entries.
For a multi-queue device, the rps_flow_cnt for each queue might be
configured as rps_sock_flow_entries / N, where N is the number of
queues. So for instance, if rps_sock_flow_entries is set to 32768 and there
are 16 configured receive queues, rps_flow_cnt for each queue might be
configured as 2048.
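
A minimal sketch of that arithmetic, which prints the equivalent
configuration commands rather than applying them; the device name eth0 and
the queue count are assumptions, and the kernel rounds both values up to a
power of two when they are written.

  #include <stdio.h>

  int main(void)
  {
          /* Assumed configuration: 32768 global entries, 16 receive queues. */
          unsigned int sock_flow_entries = 32768;
          unsigned int nr_rx_queues = 16;
          unsigned int per_queue = sock_flow_entries / nr_rx_queues;  /* 2048 */
          unsigned int q;

          printf("echo %u > /proc/sys/net/core/rps_sock_flow_entries\n",
                 sock_flow_entries);
          for (q = 0; q < nr_rx_queues; q++)
                  printf("echo %u > /sys/class/net/eth0/queues/rx-%u/rps_flow_cnt\n",
                         per_queue, q);
          return 0;
  }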


Accelerated RFS
===============

Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
balancing mechanism that uses soft state to steer flows based on where
the application thread consuming the packets of each flow is running.
Accelerated RFS should perform better than RFS since packets are sent
directly to a CPU local to the thread consuming the data. The target CPU
will either be the same CPU where the application runs, or at least a CPU
which is local to the application thread’s CPU in the cache hierarchy.

To enable accelerated RFS, the networking stack calls the
ndo_rx_flow_steer driver function to communicate the desired hardware
queue for packets matching a particular flow. The network stack
automatically calls this function every time a flow entry in
rps_dev_flow_table is updated. The driver in turn uses a device specific
method to program the NIC to steer the packets.

The hardware queue for a flow is derived from the CPU recorded in
rps_dev_flow_table. The stack consults a CPU-to-hardware-queue map which
is maintained by the NIC driver. This is an auto-generated reverse map of
the IRQ affinity table shown by /proc/interrupts. Drivers can use
functions in the cpu_rmap (“CPU affinity reverse map”) kernel library
to populate the map. For each CPU, the corresponding queue in the map is
set to be one whose processing CPU is closest in cache locality.
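
The reverse map itself is a small amount of bookkeeping. The sketch below
is only a userspace illustration of the idea, with hypothetical IRQ
affinities and a deliberately crude notion of "closest"; in the kernel,
the cpu_rmap library builds the map using real topology information.

  #include <stdio.h>

  #define NR_CPUS   8
  #define NR_QUEUES 4

  int main(void)
  {
          /* Hypothetical affinities: queue q's IRQ is handled on CPU irq_cpu[q]. */
          int irq_cpu[NR_QUEUES] = { 0, 2, 4, 6 };
          int cpu_to_queue[NR_CPUS];
          int cpu, q;

          /* Build the reverse map: each CPU maps to the queue whose IRQ CPU
           * is the nearest one at or below it (a stand-in for cache locality). */
          for (cpu = 0; cpu < NR_CPUS; cpu++) {
                  int best = 0;

                  for (q = 0; q < NR_QUEUES; q++)
                          if (irq_cpu[q] <= cpu)
                                  best = q;
                  cpu_to_queue[cpu] = best;
          }

          for (cpu = 0; cpu < NR_CPUS; cpu++)
                  printf("CPU %d -> queue %d\n", cpu, cpu_to_queue[cpu]);
          return 0;
  }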

==== Accelerated RFS Configuration

Accelerated RFS is only available if the kernel is compiled with
CONFIG_RFS_ACCEL and support is provided by the NIC device and driver.
It also requires that ntuple filtering is enabled via ethtool. The map
of CPUs to queues is automatically deduced from the IRQ affinities
configured for each receive queue by the driver, so no additional
configuration should be necessary.

== Suggested Configuration

This technique should be enabled whenever one wants to use RFS and the
NIC supports hardware acceleration.


XPS: Transmit Packet Steering
=============================

Transmit Packet Steering is a mechanism for intelligently selecting
which transmit queue to use when transmitting a packet on a multi-queue
device. To accomplish this, a mapping from CPU to hardware queue(s) is
recorded. The goal of this mapping is usually to assign queues
exclusively to a subset of CPUs, where the transmit completions for
these queues are processed on a CPU within this set. This choice
provides two benefits. First, contention on the device queue lock is
significantly reduced since fewer CPUs contend for the same queue
(contention can be eliminated completely if each CPU has its own
transmit queue). Secondly, cache miss rate on transmit completion is
reduced, in particular for data cache lines that hold the sk_buff
structures.

XPS is configured per transmit queue by setting a bitmap of CPUs that
may use that queue to transmit. The reverse mapping, from CPUs to
transmit queues, is computed and maintained for each network device.
When transmitting the first packet in a flow, the function
get_xps_queue() is called to select a queue. This function uses the ID
of the running CPU as a key into the CPU-to-queue lookup table. If the
ID matches a single queue, that is used for transmission. If multiple
queues match, one is selected by using the flow hash to compute an index
into the set.
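
A simplified model of that selection; the per-CPU queue set and the flow
hash below are hypothetical, and the real logic lives in get_xps_queue().

  #include <stdint.h>
  #include <stdio.h>

  #define MAX_QUEUES 4

  /* Set of transmit queues a given CPU may use (reverse of xps_cpus). */
  struct xps_map {
          int len;
          int queues[MAX_QUEUES];
  };

  /* Pick a transmit queue for a flow: a single candidate wins outright,
   * otherwise the flow hash selects one member of the set. */
  static int xps_select_queue(const struct xps_map *map, uint32_t flow_hash)
  {
          if (map->len == 0)
                  return -1;              /* no XPS mapping for this CPU */
          if (map->len == 1)
                  return map->queues[0];
          return map->queues[flow_hash % map->len];
  }

  int main(void)
  {
          struct xps_map cpu3_map = { .len = 2, .queues = { 1, 3 } };

          printf("flow 0xcafef00d -> tx queue %d\n",
                 xps_select_queue(&cpu3_map, 0xcafef00d));
          return 0;
  }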

The queue chosen for transmitting a particular flow is saved in the
corresponding socket structure for the flow (e.g. a TCP connection).
This transmit queue is used for subsequent packets sent on the flow to
prevent out of order (ooo) packets. The choice also amortizes the cost
of calling get_xps_queue() over all packets in the flow. To avoid
ooo packets, the queue for a flow can subsequently only be changed if
skb->ooo_okay is set for a packet in the flow. This flag indicates that
there are no outstanding packets in the flow, so the transmit queue can
change without the risk of generating out of order packets. The
transport layer is responsible for setting ooo_okay appropriately. TCP,
for instance, sets the flag when all data for a connection has been
acknowledged.

==== XPS Configuration

XPS is only available if the kconfig symbol CONFIG_XPS is enabled (on by
default for SMP). The functionality remains disabled until explicitly
configured. To enable XPS, the bitmap of CPUs that may use a transmit
queue is configured using the sysfs file entry:

 /sys/class/net/<dev>/queues/tx-<n>/xps_cpus

== Suggested Configuration

For a network device with a single transmission queue, XPS configuration
has no effect, since there is no choice in this case. In a multi-queue
system, XPS is preferably configured so that each CPU maps onto one queue.
If there are as many queues as there are CPUs in the system, then each
queue can also map onto one CPU, resulting in exclusive pairings that
experience no contention. If there are fewer queues than CPUs, then the
best CPUs to share a given queue are probably those that share the cache
with the CPU that processes transmit completions for that queue
(transmit interrupts).
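
For the case where queue count equals CPU count, the one-to-one pairing
amounts to writing the single-bit mask 1 << n to each tx-<n>/xps_cpus
file. A minimal sketch, assuming a device named eth0 with eight transmit
queues on an eight-CPU system:

  #include <stdio.h>

  int main(void)
  {
          /* Assumed one-to-one pairing: transmit queue q is used only by CPU q. */
          unsigned int nqueues = 8;
          unsigned int q;
          char path[128];

          for (q = 0; q < nqueues; q++) {
                  FILE *f;

                  snprintf(path, sizeof(path),
                           "/sys/class/net/eth0/queues/tx-%u/xps_cpus", q);
                  f = fopen(path, "w");
                  if (!f) {
                          perror(path);
                          return 1;
                  }
                  fprintf(f, "%x\n", 1u << q);    /* single-bit CPU mask for CPU q */
                  fclose(f);
          }
          return 0;
  }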


Further Information
===================

RPS and RFS were introduced in kernel 2.6.35. XPS was incorporated into
2.6.38. Original patches were submitted by Tom Herbert
(therbert@google.com)

Accelerated RFS was introduced in 2.6.35. Original patches were
submitted by Ben Hutchings (bhutchings@solarflare.com)

Authors:
Tom Herbert (therbert@google.com)
Willem de Bruijn (willemb@google.com)