Linux Base Driver for 10 Gigabit PCI Express Intel(R) Network Connection
=========================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2010 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support

Identifying Your Adapter
========================

The driver in this release is compatible with 82598 and 82599-based Intel
Network Connections.

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

    http://support.intel.com/support/network/sb/CS-012904.htm
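
One quick way to see which Intel network devices are present on a running
Linux system is via lspci; for example:

    lspci | grep -i ethernet

The device listing can then be matched against the 82598/82599 families in
the guide above.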

SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS

NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
optics and/or the direct attach cables listed below.

When 82599-based SFP+ devices are connected back to back, they should be set
to the same Speed setting via ethtool. Results may vary if you mix speed
settings.
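
A minimal sketch of pinning both link partners to the same speed, assuming
interfaces named ethX on each system:

    ethtool -s ethX speed 10000 autoneg off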

Supplier   Type                                   Part Numbers

SR Modules
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)      FTLX8571D3BCV-IT
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)      AFBR-703SDDZ-IN1
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)      AFBR-703SDZ-IN2

LR Modules
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)      FTLX1471D3BCV-IT
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)      AFCT-701SDDZ-IN1
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)      AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier   Type                                   Part Numbers
Finisar    SFP+ SR bailed, 10g single rate        FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate        AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate        FTLX1471D3BCL

Finisar    DUAL RATE 1G/10G SFP+ SR (No Bail)     FTLX8571D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ SR (No Bail)     AFBR-703SDZ-IN1
Finisar    DUAL RATE 1G/10G SFP+ LR (No Bail)     FTLX1471D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ LR (No Bail)     AFCT-701SDZ-IN1

Finisar    1000BASE-T SFP                         FCLF8522P2BTL
Avago      1000BASE-T SFP                         ABCU-5710RZ

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when ifconfig down
-------------------------------------------

"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig up" turns on the laser.

82598-BASED ADAPTERS

NOTES for 82598-Based Adapters:

- Intel(R) Network Adapters that support removable optical modules only
  support their original module type (i.e., the Intel(R) 10 Gigabit SR Dual
  Port Express Module only supports SR optical modules). If you plug in a
  different type of module, the driver will not load.
- Hot Swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  types are not supported. Please see your system documentation for details.

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier   Type                                   Part Numbers
Finisar    SFP+ SR bailed, 10g single rate        FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate        AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate        FTLX1471D3BCL

82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.

Flow Control
------------

Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When Tx is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold. When Rx is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow Control is enabled by default. If you want to disable a flow control
capable link partner, use ethtool:

    ethtool -A eth? autoneg off rx off tx off

NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
behavior is changed to off. Flow control in 1 gig mode on these devices can
lead to Tx hangs.
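
The pause parameters currently in effect can be verified with ethtool; for
example, assuming an interface named ethX:

    ethtool -a ethX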

Additional Configurations
=========================

Jumbo Frames
------------

The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500. The
maximum MTU setting for Jumbo Frames is 16110; this value coincides with the
maximum Jumbo Frames size of 16128. Use the ifconfig command to increase the
MTU size. For example:

    ifconfig ethx mtu 9000 up
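
On systems with the iproute2 tools installed, the equivalent MTU change can
be made with:

    ip link set dev ethx mtu 9000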

Generic Receive Offload, aka GRO
--------------------------------

The driver supports the in-kernel software implementation of GRO. GRO has
shown that by coalescing Rx traffic into larger chunks of data, CPU
utilization can be significantly reduced under large Rx load. GRO is an
evolution of the previously-used LRO interface. GRO is able to coalesce
protocols other than TCP, and it is also safe to use with configurations
that are problematic for LRO, namely bridging and iSCSI.
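
On recent kernels GRO is exposed through the ethtool offload flags, so it can
be inspected and toggled per interface; for example, assuming ethX (the exact
flag name printed by -k varies with the ethtool version):

    ethtool -k ethX | grep generic-receive-offload
    ethtool -K ethX gro on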

Data Center Bridging, aka DCB
-----------------------------

DCB is a configuration Quality of Service implementation in hardware.
It uses the VLAN priority tag (802.1p) to filter traffic. That means
that there are 8 different priorities that traffic can be filtered into.
It also enables priority flow control, which can limit or eliminate the
number of dropped packets during network stress. Bandwidth can be
allocated to each of these priorities, which is enforced at the hardware
level.

To enable DCB support in ixgbe, you must enable the DCB netlink layer to
allow the userspace tools (see below) to communicate with the driver.
This can be found in the kernel configuration here:

    -> Networking support
      -> Networking options
        -> Data Center Bridging support

Once this is selected, DCB support must be selected for ixgbe. This can
be found here:

    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
        -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
          -> Intel(R) 10GbE PCI Express adapters support
            -> Data Center Bridging (DCB) Support

After these options are selected, you must rebuild your kernel and your
modules.
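
Whether a running kernel was already built with these options can often be
checked against the distribution's config file; for example:

    grep -E "CONFIG_DCB|CONFIG_IXGBE_DCB" /boot/config-$(uname -r)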

In order to use DCB, userspace tools must be downloaded and installed.
The dcbd tools can be found at:

    http://e1000.sf.net
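
Once dcbd is installed and running, DCB can be managed from userspace with
dcbtool. A minimal sketch, assuming an interface named ethX (consult the
dcbtool man page for the authoritative syntax):

    dcbtool sc ethX dcb on    # enable DCB on the port
    dcbtool gc ethX dcb       # query the current DCB state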

Ethtool
-------

The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest
ethtool version is required for this functionality.

The latest release of ethtool can be found from

    http://ftp.kernel.org/pub/software/network/ethtool/
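
Typical invocations, assuming an interface named ethX:

    ethtool -i ethX    # driver name, version, and firmware information
    ethtool -S ethX    # driver statistics counters
    ethtool ethX       # link speed, duplex, and link state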

FCoE
----

This release of the ixgbe driver contains new code to enable users to use
Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
functionality that is supported by the 82599-based hardware. This code has
no default effect on the regular driver operation, and configuring DCB and
FCoE is outside the scope of this driver README. Refer to
http://www.open-fcoe.org/ for FCoE project information and contact
e1000-eedc@lists.sourceforge.net for DCB information.

MAC and VLAN anti-spoofing feature
----------------------------------

When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted. An interrupt is sent to the PF driver
notifying it of the spoof attempt. When a spoofed packet is detected, the PF
driver will send the following message to the system log (displayed by the
"dmesg" command):

    Spoof event(s) detected on VF (n)

where n is the VF that attempted the spoofing.
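
Past spoof attempts can therefore be found by searching the system log; for
example:

    dmesg | grep "Spoof event"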

Performance Tuning
==================

An excellent article on performance tuning can be found at:

    http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf

Known Issues
============

Enabling SR-IOV in a 32-bit Microsoft* Windows* Server 2008 Guest OS using
Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE controller under KVM
-----------------------------------------------------------------------------

KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
includes traditional PCIe devices, as well as SR-IOV-capable devices using
Intel 82576-based and 82599-based controllers.

While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
known issue with a Microsoft Windows Server 2008 VM that results in a "yellow
bang" error. The problem lies within the KVM VMM itself, not the Intel driver
or the SR-IOV logic of the VMM: KVM emulates an older CPU model for the
guests, and this older CPU model does not support MSI-X interrupts, which
are a requirement for Intel SR-IOV.

If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
with KVM and a Microsoft Windows Server 2008 guest, try the following
workaround: tell KVM to emulate a different model of CPU when using qemu
to create the KVM guest:

    -cpu qemu64,model=13

Support
=======

For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

    http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related to
the issue to e1000-devel@lists.sf.net.