menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

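# Illustrative sketch, assuming the sysfs paths referenced in
# XEN_BALLOON_MEMORY_HOTPLUG below: the balloon target can be inspected and
# adjusted from within the domain, e.g.
#   cat /sys/devices/system/xen_memory/xen_memory0/target_kb
#   echo 1048576 > /sys/devices/system/xen_memory/xen_memory0/target_kb
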
config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters. Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default but can be enabled with the
	  'selfballooning' kernel boot parameter. If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'noselfshrink' kernel boot parameter; and self-ballooning
	  is enabled by default but can be disabled with the 'noselfballooning'
	  kernel boot parameter. Note that systems without a sufficiently
	  large swap device should not enable self-ballooning.

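# Illustrative kernel command-line sketches, built only from the boot
# parameters named in the help text above (behaviour depends on whether
# FRONTSWAP is configured):
#   ... selfballooning        # enable self-ballooning without FRONTSWAP
#   ... noselfballooning      # disable self-ballooning with FRONTSWAP
#   ... noselfshrink          # disable frontswap-selfshrinking with FRONTSWAP
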
config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup. It is very useful on critical systems which require
	  a long uptime without rebooting.

	  Memory can be hotplugged in the following steps:

	    1) dom0: xl mem-max <domU> <maxmem>
	       where <maxmem> is >= requested memory size,

	    2) dom0: xl mem-set <domU> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing a proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on domU,

	    3) domU: for i in /sys/devices/system/memory/memory*/state; do \
	               [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  Memory can be onlined automatically on domU by adding the following
	  line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

	  In that case step 3 should be omitted.

config XEN_SCRUB_PAGES
	bool "Scrub pages before returning them to system"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains. This makes sure that any confidential data
	  is not accidentally visible to other domains. It is more
	  secure, but slightly less efficient.

	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.

	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	depends on XEN_DOM0
	default y
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	select XEN_PRIVCMD
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.

	  If in doubt, say yes.

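# Illustrative sketch (conventional usage, not mandated by this option): the
# filesystem is normally mounted where the userspace tools expect it, e.g.
#   mount -t xenfs xenfs /proc/xen
# or via an fstab entry such as:
#   xenfs  /proc/xen  xenfs  defaults  0  0
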
config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem. Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.

	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment. When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.

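# Illustrative sketch, assuming the usual sysfs layout created by this option
# (the exact set of files may vary between kernel versions):
#   cat /sys/hypervisor/type             # reports "xen" when running on Xen
#   cat /sys/hypervisor/version/major    # Xen major version number
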
config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config SWIOTLB_XEN
	def_bool y
	depends on PCI
	select SWIOTLB

config XEN_TMEM
	bool
	default y if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind the PCI devices to this
	  module instead of the default device drivers. The argument is the
	  list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.

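# Illustrative sketches using only the parameters described in the help text
# above:
#   modprobe xen-pciback passthrough=1       # module: host-like PCI topology
#   xen-pciback.hide=(03:00.0)(04:00.0)      # built-in: hide devices via the
#                                            # kernel command line
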
config XEN_PRIVCMD
	tristate
	depends on XEN
	default m

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && X86 && ACPI_PROCESSOR && CPU_FREQ
	default m
	help
	  This ACPI processor driver uploads Power Management information to
	  the Xen hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. Then the Xen hypervisor can
	  select the proper Cx and Pxx states. It also registers itself as
	  the SMM so that other drivers (such as the ACPI cpufreq scaling
	  driver) will not load.

	  To compile this driver as a module, choose M here: the module will
	  be called xen_acpi_processor. If you do not know what to choose,
	  select M here. If the CPUFREQ drivers are built in, select Y here.

endmenu