
.. _slow:

============================
Why the Build System is Slow
============================

A common complaint about the build system is that it's slow. There are
many reasons contributing to its slowness. We will attempt to document
them here.

First, it is important to distinguish between a :term:`clobber build`
and an :term:`incremental build`. The reasons why each is slow can be
different.
The build does a lot of work
============================

It may not be obvious, but the main reason the build system is slow is
that it does a lot of work! The source tree consists of a few thousand
C++ files. On a modern machine, we spend over 120 minutes of CPU core
time compiling those files! So, if you are looking for the root cause
of slow clobber builds, look at the sheer volume of C++ in the tree.
You don't have enough CPU cores and MHz
=======================================

The build should be CPU bound. If the build system maintainers are
optimizing the build system perfectly, every CPU core in your machine
should be 100% saturated during a build. While this isn't currently the
case (keep reading below), generally speaking, the more CPU cores and
the more total MHz in your machine, the better.

**We highly recommend building with no fewer than 4 physical CPU
cores.** Please note the *physical* in this sentence. Hyperthreaded
cores (an Intel Core i7 will report 8 CPU cores, but only 4 are
physical, for example) yield at most a 1.25x speedup per core.

We also recommend using the most modern CPU model possible. Haswell
chips deliver much more performance per CPU cycle than, say, Sandy
Bridge CPUs.

This cause impacts both clobber and incremental builds.
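
The 1.25x figure above implies that logical core counts overstate real build parallelism. A minimal sketch of that arithmetic (the ``effective_cores`` function and the fixed 1.25 constant are our own illustration, not part of the build system):

```python
import os

# The text's figure: a hyperthreaded core yields at most ~1.25x the
# throughput of the physical core alone (illustrative assumption).
HT_SPEEDUP = 1.25

def effective_cores(physical, logical):
    """Rough effective parallelism: each physical core counts fully;
    each extra hardware thread adds at most a quarter core."""
    hyperthreads = max(logical - physical, 0)
    return physical + hyperthreads * (HT_SPEEDUP - 1)

# A Core i7 reporting 8 CPU cores of which only 4 are physical:
print(effective_cores(4, 8))  # 5.0 -- closer to 5 cores than to 8

# Note: os.cpu_count() reports *logical* cores, so it overstates the
# parallelism actually available for compilation.
print(os.cpu_count())
```

This is why the recommendation is phrased in terms of physical cores: doubling the logical core count via hyperthreading buys far less than doubling the physical core count.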
You are building with a slow I/O layer
======================================

The build system can be I/O bound if your I/O layer is slow. Linking
libxul on some platforms and build architectures can perform gigabytes
of I/O.

To minimize the impact of slow I/O on build performance, **we highly
recommend building with an SSD.** Power users with enough memory may
opt to build from a RAM disk. Mechanical disks should be avoided if at
all possible.

Some may dispute the importance of an SSD for build times. It is true
that the beneficial impact of an SSD can be mitigated if your system
has lots of memory and the build files stay in the page cache. However,
operating system memory management is complicated. You don't really
have control over what is evicted from the page cache, or when.
Therefore, unless your machine is a dedicated build machine or you have
more memory than is needed by everything running on it, chances are
you'll run into page cache eviction and your I/O layer will impact
build performance. That being said, an SSD certainly doesn't hurt build
times. And anyone who has used a machine with an SSD will tell you what
a great investment it is for performance all around the operating
system. On top of that, some automated tests are I/O bound (like those
touching SQLite databases), so an SSD will make tests faster.

This cause impacts both clobber and incremental builds.
You don't have enough memory
============================

The build system allocates a lot of memory, especially when building
many things in parallel. If you don't have enough free system memory,
the build will cause swap activity, slowing down your system and the
build. Even if you never get to the point of swapping, the build system
performs a lot of I/O, and having all accessed files in memory and the
page cache can significantly reduce the influence of the I/O layer on
the build system.

**We recommend building with no less than 8 GB of system memory.** As
always, the more memory you have, the better. For a bare-bones machine
doing nothing more than building the source tree, anything more than 16
GB is likely past the point of diminishing returns.

This cause impacts both clobber and incremental builds.
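
To see whether you are near the thresholds above, you can check total and available memory before building. A hedged sketch (the ``mem_gib`` helper is our own; the data source is Linux-specific):

```python
def mem_gib(meminfo_text):
    """Parse MemTotal/MemAvailable (reported in kB) out of the text of
    Linux's /proc/meminfo and return them in GiB. Linux-only data
    source; elsewhere, a library like psutil (which mach itself uses
    when available) is the portable route."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "MemAvailable"):
            # Values are in kB; divide by 1024^2 to get GiB.
            fields[key] = int(rest.split()[0]) / (1024 * 1024)
    return fields

# On Linux:
#   print(mem_gib(open("/proc/meminfo").read()))
```

If the available figure is well under 8 GiB while your editor, browser, and other tools are running, expect swap activity or page cache eviction during a parallel build.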
You are building on Windows
===========================

New processes on Windows are about an order of magnitude slower to
spawn than on UNIX-y systems such as Linux. This is because Windows has
optimized new threads while the \*NIX platforms typically optimize new
processes. Either way, the build system spawns thousands of new
processes during a build, so parts of the build that rely on rapid
spawning of new processes are slow on Windows as a result. This is most
pronounced when running *configure*. The configure file is a giant
shell script, and shell scripts rely heavily on new processes. This is
why configure can run over a minute slower on Windows.
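
You can measure this per-process overhead directly. A small sketch (the ``spawn_cost`` helper is our own illustration; absolute numbers vary by machine, but the Windows/UNIX gap is what matters):

```python
import subprocess
import sys
import time

def spawn_cost(n=20):
    """Average wall-clock time, in seconds, to spawn a trivial child
    process n times. configure-style shell scripts pay this cost
    thousands of times per run."""
    start = time.perf_counter()
    for _ in range(n):
        # Spawn a no-op child and wait for it; only process creation
        # and teardown are being measured.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    return (time.perf_counter() - start) / n

print(f"~{spawn_cost() * 1000:.1f} ms per process spawn")
```

Multiply that per-spawn cost by the thousands of processes a build launches and the minute-plus configure slowdown on Windows follows directly.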
Another reason Windows builds are slower is that Windows lacks proper
symlink support. On systems that support symlinks, we can generate a
file into a staging area and then symlink it into the final directory
very quickly. On Windows, we have to perform a full file copy. This
incurs much more I/O. And if done poorly, it can muck with file
modification times, messing up build dependencies. As of the summer of
2013, the impact of the lack of symlinks is being mitigated through the
use of an :term:`install manifest`.

These issues impact both clobber and incremental builds.
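
The symlink-or-copy trade-off can be sketched in a few lines. This is a simplified illustration under our own naming (``install`` is not the actual install manifest API):

```python
import os
import shutil

def install(src, dest):
    """Stage a built file into its final location: a cheap symlink
    where the platform supports it, a full file copy where it does not.
    Simplified sketch of the idea, not the real install manifest code."""
    if os.path.lexists(dest):
        os.remove(dest)
    try:
        # Fast path: no file data is copied at all.
        os.symlink(os.path.abspath(src), dest)
    except OSError:
        # Windows fallback: full copy, much more I/O. copy2 preserves
        # modification times, avoiding spurious rebuilds downstream.
        shutil.copy2(src, dest)
```

On a tree that stages thousands of files, the difference between creating a directory entry and copying every byte adds up quickly, which is why the copy path hurts Windows builds.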
Recursive make traversal is slow
================================

The build system has traditionally been built by employing recursive
make. Recursive make involves make iterating through directories and
make files sequentially, executing each in turn. This is inefficient
for directories containing few targets/tasks because make can be
*starved* for work when processing these directories. Any time make is
starved, the build isn't using all available CPU cycles, and the build
is slower as a result.

Work has started in bug 907365 to fix this issue by changing the way
make traverses all the make files.

The impact of slow recursive make traversal is mostly felt on
incremental builds. Traditionally, most of the wall time during a
no-op build is spent in make traversal.
make is inefficient
===================

Compared to modern build backends like Tup or Ninja, make is slow and
inefficient. We can only make make so fast. At some point, we'll hit a
performance plateau and will need to use a different tool to make
builds faster.

Please note that clobber and incremental builds are different here. A
clobber build with make will likely be as fast as a clobber build with
e.g. Tup. However, Tup should vastly outperform make when it comes to
incremental builds. Therefore, this issue is mostly seen when
performing incremental builds.
C++ header dependency hell
==========================

Modifying a *.h* file can have a significant impact on the build
system. If you modify a *.h* that is used by 1000 C++ files, all 1000
of those C++ files will be recompiled.

Our code base has traditionally been sloppy about managing the impact
of changed headers on build performance. Bug 785103 tracks improving
the situation.

This issue mostly impacts the times of an :term:`incremental build`.
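
You can get a rough feel for a header's blast radius by scanning for direct includers. A hedged sketch (the ``files_including`` helper is our own, and it only counts *direct* includes; real compilers track transitive dependencies, which widen the set considerably):

```python
import os
import re

# Matches `#include "foo.h"` and `#include <foo.h>` at line start.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*["<]([^">]+)[">]', re.M)

def files_including(header, root="."):
    """Source files under `root` that directly #include `header`,
    i.e. a lower bound on what a change to that header forces the
    build system to recompile."""
    hits = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            if not name.endswith((".cpp", ".cc", ".h")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="replace") as f:
                    text = f.read()
            except OSError:
                continue
            if any(inc.endswith(header) for inc in INCLUDE_RE.findall(text)):
                hits.append(path)
    return hits
```

Running something like this against a widely used header makes the 1000-files-recompiled scenario above concrete: every path it returns is a compile job triggered by touching that one header.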
A search/indexing service on your machine is running
====================================================

Many operating systems have a background service that automatically
indexes filesystem content to make searching faster. On Windows, you
have the Windows Search Service. On OS X, you have Finder.

These background services sometimes take a keen interest in the files
being produced as part of the build. Since the build system produces
hundreds of megabytes or even a few gigabytes of file data, you can
imagine how much work it is to index! If this work is performed while
the build is running, your build will be slower.

OS X's Finder is notorious for indexing while the build is running.
And it has a tendency to suck up a whole CPU core. This can make builds
several minutes slower. If you build with ``mach``, have the optional
``psutil`` package built (it requires Python development headers - see
:ref:`python` for more), and Finder is running during a build, mach
will print a warning at the end of the build, complete with
instructions on how to fix it.