Documentation for /proc/sys/fs/*  kernel version 2.2.10
(c) 1998, 1999, Rik van Riel <riel@nl.linux.org>
(c) 2009, Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in README.

==============================================================
This file contains documentation for the sysctl files in
/proc/sys/fs/ and is valid for Linux kernel version 2.2.

The files in this directory can be used to tune and monitor
miscellaneous and general things in the operation of the Linux
kernel. Since some of the files _can_ be used to screw up your
system, it is advisable to read both documentation and source
before actually making adjustments.

1. /proc/sys/fs
----------------------------------------------------------

Currently, these files are in /proc/sys/fs:
- aio-max-nr
- aio-nr
- dentry-state
- dquot-max
- dquot-nr
- file-max
- file-nr
- inode-max
- inode-nr
- inode-state
- nr_open
- overflowuid
- overflowgid
- pipe-user-pages-hard
- pipe-user-pages-soft
- suid_dumpable
- super-max
- super-nr

==============================================================
aio-nr & aio-max-nr:

aio-nr is the running total of the number of events specified on the
io_setup system call for all currently active aio contexts. If aio-nr
reaches aio-max-nr then io_setup will fail with EAGAIN. Note that
raising aio-max-nr does not result in the pre-allocation or re-sizing
of any kernel data structures.
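
For illustration only, here is a minimal user-space sketch of the EAGAIN
behaviour described above. It keeps calling io_setup(2) until the
system-wide budget tracked in aio-nr is exhausted; the 128-event context
size is an arbitrary example value, not something mandated by the kernel.
--------------------------------------------------------------
/* Sketch: create aio contexts until aio-nr would exceed aio-max-nr,
 * at which point io_setup(2) fails with EAGAIN.  128 events per
 * context is an arbitrary value chosen for this example. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/aio_abi.h>

int main(void)
{
    for (;;) {
        aio_context_t ctx = 0;

        /* Each successful call consumes part of the system-wide
         * aio-nr budget. */
        if (syscall(SYS_io_setup, 128, &ctx) < 0) {
            if (errno == EAGAIN)
                printf("aio-max-nr reached, io_setup failed with EAGAIN\n");
            else
                printf("io_setup failed: %s\n", strerror(errno));
            return 0;
        }
        /* Contexts are deliberately leaked here; a real program would
         * call io_destroy() on each context when done with it. */
    }
}
--------------------------------------------------------------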
==============================================================
dentry-state:

From linux/fs/dentry.c:
--------------------------------------------------------------
struct {
        int nr_dentry;
        int nr_unused;
        int age_limit;          /* age in seconds */
        int want_pages;         /* pages requested by system */
        int dummy[2];
} dentry_stat = {0, 0, 45, 0,};
--------------------------------------------------------------

Dentries are dynamically allocated and deallocated, and
nr_dentry seems to be 0 all the time. Hence it's safe to
assume that only nr_unused, age_limit and want_pages are
used. Nr_unused seems to be exactly what its name says.
Age_limit is the age in seconds after which dcache entries
can be reclaimed when memory is short and want_pages is
nonzero when shrink_dcache_pages() has been called and the
dcache isn't pruned yet.
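
To see how these fields map onto the file itself, the following sketch
(an illustration, not part of the interface description) reads
/proc/sys/fs/dentry-state and labels the six integers in the order of
the struct shown above.
--------------------------------------------------------------
/* Illustration: read the six integers exported by
 * /proc/sys/fs/dentry-state and label them per the struct above. */
#include <stdio.h>

int main(void)
{
    int nr_dentry, nr_unused, age_limit, want_pages, dummy1, dummy2;
    FILE *f = fopen("/proc/sys/fs/dentry-state", "r");

    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%d %d %d %d %d %d", &nr_dentry, &nr_unused,
               &age_limit, &want_pages, &dummy1, &dummy2) == 6)
        printf("nr_dentry=%d nr_unused=%d age_limit=%d want_pages=%d\n",
               nr_dentry, nr_unused, age_limit, want_pages);
    fclose(f);
    return 0;
}
--------------------------------------------------------------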
==============================================================
dquot-max & dquot-nr:

The file dquot-max shows the maximum number of cached disk
quota entries.

The file dquot-nr shows the number of allocated disk quota
entries and the number of free disk quota entries.

If the number of free cached disk quotas is very low and
you have some awesome number of simultaneous system users,
you might want to raise the limit.
==============================================================
file-max & file-nr:

The value in file-max denotes the maximum number of file-
handles that the Linux kernel will allocate. When you get lots
of error messages about running out of file handles, you might
want to increase this limit.

Historically, the kernel was able to allocate file handles
dynamically, but not to free them again. The three values in
file-nr denote the number of allocated file handles, the number
of allocated but unused file handles, and the maximum number of
file handles. Linux 2.6 always reports 0 as the number of free
file handles -- this is not an error, it just means that the
number of allocated file handles exactly matches the number of
used file handles.

Attempts to allocate more file descriptors than file-max are
reported with printk, look for "VFS: file-max limit <number>
reached".
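
As a small illustration (again, an example and not part of the original
description), the three file-nr fields can be read and labelled as
follows:
--------------------------------------------------------------
/* Illustration: print the three values exported by /proc/sys/fs/file-nr. */
#include <stdio.h>

int main(void)
{
    unsigned long allocated, unused, max;
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");

    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) == 3)
        printf("allocated=%lu free(unused)=%lu file-max=%lu\n",
               allocated, unused, max);
    fclose(f);
    return 0;
}
--------------------------------------------------------------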
==============================================================
nr_open:

This denotes the maximum number of file-handles a process can
allocate. Default value is 1024*1024 (1048576) which should be
enough for most machines. Actual limit depends on RLIMIT_NOFILE
resource limit.
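
The interaction with RLIMIT_NOFILE can be seen with getrlimit/setrlimit.
The sketch below (illustrative only) raises the soft limit of the calling
process to its hard limit; attempts to raise the hard limit above the
value in nr_open are rejected by the kernel (EPERM, per setrlimit(2)).
--------------------------------------------------------------
/* Sketch: inspect and raise RLIMIT_NOFILE.  Raising the hard limit
 * above /proc/sys/fs/nr_open is rejected with EPERM. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to the hard limit; this needs no privilege. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
        printf("setrlimit: %s\n", strerror(errno));
    return 0;
}
--------------------------------------------------------------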
==============================================================
inode-max, inode-nr & inode-state:

As with file handles, the kernel allocates the inode structures
dynamically, but can't free them yet.

The value in inode-max denotes the maximum number of inode
handlers. This value should be 3-4 times larger than the value
in file-max, since stdin, stdout and network sockets also
need an inode struct to handle them. When you regularly run
out of inodes, you need to increase this value.

The file inode-nr contains the first two items from
inode-state, so we'll skip to that file...

Inode-state contains three actual numbers and four dummies.
The actual numbers are, in order of appearance, nr_inodes,
nr_free_inodes and preshrink.

Nr_inodes stands for the number of inodes the system has
allocated; this can be slightly more than inode-max because
Linux allocates them one pageful at a time.

Nr_free_inodes represents the number of free inodes (?) and
preshrink is nonzero when nr_inodes > inode-max and the
system needs to prune the inode list instead of allocating
more.
==============================================================
overflowgid & overflowuid:

Some filesystems only support 16-bit UIDs and GIDs, although in Linux
UIDs and GIDs are 32 bits. When one of these filesystems is mounted
with writes enabled, any UID or GID that would exceed 65535 is translated
to a fixed value before being written to disk.

These sysctls allow you to change the value of the fixed UID and GID.
The default is 65534.
==============================================================
pipe-user-pages-hard:

Maximum total number of pages a non-privileged user may allocate for pipes.
Once this limit is reached, no new pipes may be allocated until usage goes
below the limit again. When set to 0, no limit is applied, which is the
default setting.
==============================================================
pipe-user-pages-soft:

Maximum total number of pages a non-privileged user may allocate for pipes
before the pipe size gets limited to a single page. Once this limit is reached,
new pipes will be limited to a single page in size for this user in order to
limit total memory usage, and trying to increase them using fcntl() will be
denied until usage goes below the limit again. The default value allows
allocating up to 1024 pipes at their default size. When set to 0, no limit is
applied.
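
The fcntl() behaviour mentioned above can be exercised with
F_GETPIPE_SZ/F_SETPIPE_SZ. The sketch below is illustrative only; the
1 MiB target size is an arbitrary example value.
--------------------------------------------------------------
/* Sketch: create a pipe and try to grow it with F_SETPIPE_SZ.  For an
 * unprivileged user over the pipe-user-pages-soft limit, the kernel
 * denies the resize (EPERM, per fcntl(2)). */
#define _GNU_SOURCE
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    printf("current pipe size: %d bytes\n", fcntl(fds[0], F_GETPIPE_SZ));

    if (fcntl(fds[0], F_SETPIPE_SZ, 1024 * 1024) < 0)
        printf("resize denied: %s\n", strerror(errno));
    else
        printf("new pipe size: %d bytes\n", fcntl(fds[0], F_GETPIPE_SZ));

    close(fds[0]);
    close(fds[1]);
    return 0;
}
--------------------------------------------------------------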
==============================================================
suid_dumpable:

This value can be used to query and set the core dump mode for setuid
or otherwise protected/tainted binaries. The modes are

0 - (default) - traditional behaviour. Any process which has changed
    privilege levels or is execute only will not be dumped.
1 - (debug) - all processes dump core when possible. The core dump is
    owned by the current user and no security is applied. This is
    intended for system debugging situations only. Ptrace is unchecked.
    This is insecure as it allows regular users to examine the memory
    contents of privileged processes.
2 - (suidsafe) - any binary which normally would not be dumped is dumped
    anyway, but only if the "core_pattern" kernel sysctl is set to
    either a pipe handler or a fully qualified path. (For more details
    on this limitation, see CVE-2006-2451.) This mode is appropriate
    when administrators are attempting to debug problems in a normal
    environment, and either have a core dump pipe handler that knows
    to treat privileged core dumps with care, or a specific directory
    defined for catching core dumps. If a core dump happens without
    a pipe handler or fully qualified path, a message will be emitted
    to syslog warning about the lack of a correct setting.
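
For illustration, the current mode can be read back and mapped to the
names above with a few lines of C:
--------------------------------------------------------------
/* Illustration: read the current suid_dumpable mode and print its name. */
#include <stdio.h>

int main(void)
{
    int mode;
    FILE *f = fopen("/proc/sys/fs/suid_dumpable", "r");

    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%d", &mode) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("suid_dumpable = %d (%s)\n", mode,
           mode == 0 ? "default" : mode == 1 ? "debug" : "suidsafe");
    return 0;
}
--------------------------------------------------------------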
==============================================================
super-max & super-nr:

These numbers control the maximum number of superblocks, and
thus the maximum number of mounted filesystems the kernel
can have. You only need to increase super-max if you need to
mount more filesystems than the current value in super-max
allows you to.
==============================================================
aio-nr & aio-max-nr:

aio-nr shows the current system-wide number of asynchronous io
requests. aio-max-nr allows you to change the maximum value
aio-nr can grow to.

==============================================================
2. /proc/sys/fs/binfmt_misc
----------------------------------------------------------

Documentation for the files in /proc/sys/fs/binfmt_misc is
in Documentation/binfmt_misc.txt.

3. /proc/sys/fs/mqueue - POSIX message queues filesystem
----------------------------------------------------------

The "mqueue" filesystem provides the necessary kernel features to enable the
creation of a user space library that implements the POSIX message queues
API (as noted by the MSG tag in the POSIX 1003.1-2001 version of the System
Interfaces specification.)

The "mqueue" filesystem contains values for determining/setting the amount of
resources used by the file system.

/proc/sys/fs/mqueue/queues_max is a read/write file for setting/getting the
maximum number of message queues allowed on the system.

/proc/sys/fs/mqueue/msg_max is a read/write file for setting/getting the
maximum number of messages in a queue value. In fact it is the limiting value
for another (user) limit which is set in the mq_open invocation. This
attribute of a queue must be less than or equal to msg_max.

/proc/sys/fs/mqueue/msgsize_max is a read/write file for setting/getting the
maximum message size value (it is an attribute of every message queue, set
during its creation).

/proc/sys/fs/mqueue/msg_default is a read/write file for setting/getting the
default number of messages in a queue value if the attr parameter of
mq_open(2) is NULL. If it exceeds msg_max, the default value is initialized
to msg_max.

/proc/sys/fs/mqueue/msgsize_default is a read/write file for setting/getting
the default message size value if the attr parameter of mq_open(2) is NULL.
If it exceeds msgsize_max, the default value is initialized to msgsize_max.
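
A short sketch of how the mq_attr fields passed to mq_open relate to the
limits above. It is illustrative only; the queue name "/example_queue"
and the attribute values are arbitrary, and linking may require -lrt on
older systems.
--------------------------------------------------------------
/* Sketch: create a POSIX message queue with explicit attributes.
 * mq_maxmsg must not exceed msg_max and mq_msgsize must not exceed
 * msgsize_max, otherwise the kernel rejects the request. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = {
        .mq_maxmsg  = 10,   /* must be <= /proc/sys/fs/mqueue/msg_max     */
        .mq_msgsize = 8192, /* must be <= /proc/sys/fs/mqueue/msgsize_max */
    };
    mqd_t q = mq_open("/example_queue", O_CREAT | O_RDWR, 0600, &attr);

    if (q == (mqd_t)-1) {
        printf("mq_open: %s\n", strerror(errno));
        return 1;
    }
    mq_close(q);
    mq_unlink("/example_queue");
    return 0;
}
--------------------------------------------------------------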

4. /proc/sys/fs/epoll - Configuration options for the epoll interface
--------------------------------------------------------

This directory contains configuration options for the epoll(7) interface.

max_user_watches
----------------

Every epoll file descriptor can store a number of files to be monitored
for event readiness. Each one of these monitored files constitutes a "watch".
This configuration option sets the maximum number of "watches" that are
allowed for each user.

Each "watch" costs roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes
on a 64-bit one.

The current default value for max_user_watches is 1/32 of the available
low memory, divided by the "watch" cost in bytes.
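
For illustration, each successful EPOLL_CTL_ADD consumes one "watch" for
the calling user; once max_user_watches is exceeded, epoll_ctl(2) fails
with ENOSPC. The sketch below adds a single watch on stdin purely as an
example.
--------------------------------------------------------------
/* Sketch: register one "watch" and report ENOSPC if the per-user
 * max_user_watches limit has been reached. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };

    if (epfd < 0) {
        perror("epoll_create1");
        return 1;
    }
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
        if (errno == ENOSPC)
            printf("max_user_watches reached\n");
        else
            printf("epoll_ctl: %s\n", strerror(errno));
    } else {
        printf("added one watch for fd %d\n", STDIN_FILENO);
    }
    close(epfd);
    return 0;
}
--------------------------------------------------------------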