sysfs-devices-system-cpu

What:        /sys/devices/system/cpu/
Date:        pre-git history
Contact:     Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: A collection of both global and individual CPU attributes.

             Individual CPU attributes are contained in subdirectories
             named by the kernel's logical CPU number, e.g.:

             /sys/devices/system/cpu/cpu#/
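
             For example, a userspace reader can enumerate the per-CPU
             subdirectories by globbing for the cpuN pattern. A minimal
             Python sketch (illustrative only, not part of the ABI):

                 import glob
                 import re

                 # Each logical CPU appears as /sys/devices/system/cpu/cpuN.
                 cpus = []
                 for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
                     m = re.search(r"cpu(\d+)$", path)
                     if m:
                         cpus.append((int(m.group(1)), path))
                 for num, path in sorted(cpus):
                     print("logical CPU", num, "->", path)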

What:        /sys/devices/system/cpu/sched_mc_power_savings
             /sys/devices/system/cpu/sched_smt_power_savings
Date:        June 2006
Contact:     Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Discover and adjust the kernel's multi-core scheduler support.

             Possible values are:

             0 - No power saving load balance (default value)
             1 - Fill one thread/core/package first for long-running
                 threads
             2 - Also bias task wakeups to semi-idle cpu package for
                 power savings

             sched_mc_power_savings is dependent upon SCHED_MC, which is
             itself architecture dependent.

             sched_smt_power_savings is dependent upon SCHED_SMT, which
             is itself architecture dependent.

             The two files are independent of each other. It is possible
             that one file may be present without the other.

             Introduced by git commit 5c45bf27.
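
             As a rough illustration (not part of the ABI), the level
             can be read and, with sufficient privileges, changed from
             userspace. A minimal Python sketch, assuming the file is
             present:

                 path = "/sys/devices/system/cpu/sched_mc_power_savings"
                 try:
                     with open(path) as f:
                         print("current level:", f.read().strip())
                     with open(path, "w") as f:
                         f.write("1")  # 0, 1 or 2, as documented above
                 except FileNotFoundError:
                     print("kernel built without SCHED_MC")
                 except PermissionError:
                     print("writing requires root")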

What:        /sys/devices/system/cpu/kernel_max
             /sys/devices/system/cpu/offline
             /sys/devices/system/cpu/online
             /sys/devices/system/cpu/possible
             /sys/devices/system/cpu/present
Date:        December 2008
Contact:     Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: CPU topology files that describe kernel limits related to
             hotplug. Briefly:

             kernel_max: the maximum cpu index allowed by the kernel
             configuration.

             offline: cpus that are not online because they have been
             HOTPLUGGED off or exceed the limit of cpus allowed by the
             kernel configuration (kernel_max above).

             online: cpus that are online and being scheduled.

             possible: cpus that have been allocated resources and can
             be brought online if they are present.

             present: cpus that have been identified as being present in
             the system.

             See Documentation/cputopology.txt for more information.
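
             The offline, online, possible and present files use the
             kernel's CPU list format (e.g. "0-3,5"). A short Python
             sketch that parses them (illustrative only):

                 def parse_cpulist(text):
                     """Parse the kernel's CPU list format, e.g. "0-3,5,7-8"."""
                     cpus = set()
                     text = text.strip()
                     if not text:          # e.g. an empty "offline" file
                         return cpus
                     for part in text.split(","):
                         if "-" in part:
                             lo, hi = part.split("-")
                             cpus.update(range(int(lo), int(hi) + 1))
                         else:
                             cpus.add(int(part))
                     return cpus

                 for name in ("online", "offline", "possible", "present"):
                     with open("/sys/devices/system/cpu/" + name) as f:
                         print(name, sorted(parse_cpulist(f.read())))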

What:        /sys/devices/system/cpu/probe
             /sys/devices/system/cpu/release
Date:        November 2009
Contact:     Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Dynamic addition and removal of CPUs. This is not hotplug
             removal; it is meant for the complete removal/addition of a
             CPU from/to the system.

             probe: writes to this file will dynamically add a CPU to
             the system. The information written to the file to add CPUs
             is architecture specific.

             release: writes to this file will dynamically remove a CPU
             from the system. The information written to the file to
             remove CPUs is architecture specific.

What:        /sys/devices/system/cpu/cpu#/node
Date:        October 2009
Contact:     Linux memory management mailing list <linux-mm@kvack.org>
Description: Discover the NUMA node a CPU belongs to.

             When CONFIG_NUMA is enabled, this is a symbolic link that
             points to the corresponding NUMA node directory. For
             example, the following symlink is created for cpu42 in NUMA
             node 2:

             /sys/devices/system/cpu/cpu42/node2 -> ../../node/node2
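
             From userspace it is enough to look for the node* entry in
             the CPU's directory. An illustrative Python sketch, reusing
             the cpu42 example above:

                 import glob
                 import os

                 # The node symlink exists only when CONFIG_NUMA is enabled.
                 links = glob.glob("/sys/devices/system/cpu/cpu42/node*")
                 if links:
                     # basename is e.g. "node2", i.e. NUMA node 2.
                     print("cpu42 belongs to", os.path.basename(links[0]))
                 else:
                     print("no node link (CONFIG_NUMA disabled?)")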

What:        /sys/devices/system/cpu/cpu#/topology/core_id
             /sys/devices/system/cpu/cpu#/topology/core_siblings
             /sys/devices/system/cpu/cpu#/topology/core_siblings_list
             /sys/devices/system/cpu/cpu#/topology/physical_package_id
             /sys/devices/system/cpu/cpu#/topology/thread_siblings
             /sys/devices/system/cpu/cpu#/topology/thread_siblings_list
Date:        December 2008
Contact:     Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: CPU topology files that describe a logical CPU's
             relationship to other cores and threads in the same
             physical package.

             One cpu# directory is created per logical CPU in the
             system, e.g. /sys/devices/system/cpu/cpu42/.

             Briefly, the files above are:

             core_id: the CPU core ID of cpu#. Typically it is the
             hardware platform's identifier (rather than the kernel's).
             The actual value is architecture and platform dependent.

             core_siblings: internal kernel map of cpu#'s hardware
             threads within the same physical_package_id.

             core_siblings_list: human-readable list of the logical CPU
             numbers within the same physical_package_id as cpu#.

             physical_package_id: physical package id of cpu#. Typically
             corresponds to a physical socket number, but the actual
             value is architecture and platform dependent.

             thread_siblings: internal kernel map of cpu#'s hardware
             threads within the same core as cpu#.

             thread_siblings_list: human-readable list of cpu#'s
             hardware threads within the same core as cpu#.

             See Documentation/cputopology.txt for more information.
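
             For example, the package/core/thread layout of the whole
             system can be assembled from these files. An illustrative
             Python sketch (not part of the ABI):

                 import glob
                 import os

                 def read(path):
                     with open(path) as f:
                         return f.read().strip()

                 pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology"
                 for topo in sorted(glob.glob(pattern)):
                     cpu = os.path.basename(os.path.dirname(topo))
                     print(cpu,
                           "package", read(os.path.join(topo, "physical_package_id")),
                           "core", read(os.path.join(topo, "core_id")),
                           "threads", read(os.path.join(topo, "thread_siblings_list")))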

What:        /sys/devices/system/cpu/cpuidle/current_driver
             /sys/devices/system/cpu/cpuidle/current_governor_ro
Date:        September 2007
Contact:     Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Discover cpuidle policy and mechanism.

             Various CPUs today support multiple idle levels that are
             differentiated by varying exit latencies and power
             consumption during idle.

             Idle policy (governor) is differentiated from idle
             mechanism (driver).

             current_driver: displays the current idle mechanism.

             current_governor_ro: displays the current idle policy.

             See files in Documentation/cpuidle/ for more information.
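
             Both files are plain text and can simply be read. A minimal
             Python sketch (illustrative only):

                 base = "/sys/devices/system/cpu/cpuidle/"
                 for name in ("current_driver", "current_governor_ro"):
                     try:
                         with open(base + name) as f:
                             print(name + ":", f.read().strip())
                     except FileNotFoundError:
                         print(name, "not available (cpuidle disabled?)")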

What:        /sys/devices/system/cpu/cpu#/cpufreq/*
Date:        pre-git history
Contact:     cpufreq@vger.kernel.org
Description: Discover and change clock speed of CPUs.

             Clock scaling allows you to change the clock speed of the
             CPUs on the fly. This is a nice method to save battery
             power, because the lower the clock speed, the less power
             the CPU consumes.

             There are many knobs to tweak in this directory. See files
             in Documentation/cpu-freq/ for more information. In
             particular, read Documentation/cpu-freq/user-guide.txt to
             learn how to control the knobs.
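
             As a rough sketch only: scaling_governor and
             scaling_cur_freq (in kHz) are typical cpufreq attributes,
             but the exact set of files varies with the cpufreq driver,
             so their presence is an assumption here:

                 base = "/sys/devices/system/cpu/cpu0/cpufreq/"
                 # Typical attributes; the available files are driver
                 # dependent (see Documentation/cpu-freq/user-guide.txt).
                 for name in ("scaling_governor", "scaling_cur_freq"):
                     try:
                         with open(base + name) as f:
                             print(name + ":", f.read().strip())
                     except FileNotFoundError:
                         print(name, "not exposed by this driver")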

What:        /sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
Date:        August 2008
KernelVersion: 2.6.27
Contact:     discuss@x86-64.org
Description: Disable L3 cache indices.

             These files exist in every CPU's cache/index3 directory.
             Each cache_disable_{0,1} file corresponds to one disable
             slot which can be used to disable a cache index. Reading
             from these files on a processor with this functionality
             will return the currently disabled index for that node.
             There is one L3 structure per node, or per internal node on
             MCM machines. Writing a valid index to one of these files
             will cause the specified cache index to be disabled.

             All AMD processors with L3 caches provide this
             functionality. For details, see the BKDGs at
             http://developer.amd.com/documentation/guides/Pages/default.aspx
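
             Reading the disable slots is harmless. A minimal Python
             sketch for cpu0 (illustrative only; the write path is
             deliberately omitted because it disables part of the
             cache):

                 base = "/sys/devices/system/cpu/cpu0/cache/index3/"
                 for slot in ("cache_disable_0", "cache_disable_1"):
                     try:
                         with open(base + slot) as f:
                             print(slot + ":", f.read().strip())
                     except OSError:
                         print(slot, "not supported on this processor")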