KVM/ARM VGIC Forwarded Physical Interrupts
==========================================

The KVM/ARM code implements software support for the ARM Generic
Interrupt Controller's (GIC's) hardware support for virtualization by
allowing software to inject virtual interrupts to a VM, which the guest
OS sees as regular interrupts. The code is famously known as the VGIC.

Some of these virtual interrupts, however, correspond to physical
interrupts from real physical devices. One example could be the
architected timer, which itself supports virtualization, and therefore
lets a guest OS program the hardware device directly to raise an
interrupt at some point in time. When such an interrupt is raised, the
host OS initially handles the interrupt and must somehow signal this
event as a virtual interrupt to the guest. Another example could be a
passthrough device, where the physical interrupts are initially handled
by the host, but the device driver for the device lives in the guest OS
and KVM must therefore somehow inject a virtual interrupt on behalf of
the physical one to the guest OS.

These virtual interrupts corresponding to a physical interrupt on the
host are called forwarded physical interrupts, but are also sometimes
referred to as 'virtualized physical interrupts' and 'mapped interrupts'.

Forwarded physical interrupts are handled slightly differently compared
to virtual interrupts generated purely by a software emulated device.

The HW bit
----------

Virtual interrupts are signalled to the guest by programming the List
Registers (LRs) on the GIC before running a VCPU. The LR is programmed
with the virtual IRQ number and the state of the interrupt (Pending,
Active, or Pending+Active). When the guest ACKs and EOIs a virtual
interrupt, the LR state moves from Pending to Active, and finally to
inactive.

The LRs include an extra bit, called the HW bit. When this bit is set,
KVM must also program an additional field in the LR, the physical IRQ
number, to link the virtual with the physical IRQ.

When the HW bit is set, KVM must EITHER set the Pending OR the Active
bit, never both at the same time.

Setting the HW bit causes the hardware to deactivate the physical
interrupt on the physical distributor when the guest deactivates the
corresponding virtual interrupt.
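
To make the encoding concrete, here is a minimal sketch of how a
forwarded interrupt could be packed into a single list register. The bit
positions follow the GICv2 GICH_LR<n> layout; the helper itself is
purely illustrative and is not the actual KVM code.

  #include <assert.h>
  #include <stdint.h>

  /* Illustrative GICv2 GICH_LR<n> field encodings (not KVM's own macros). */
  #define LR_VIRTUALID(v)   ((uint32_t)(v) & 0x3ff)          /* bits [9:0]  */
  #define LR_PHYSID(p)      (((uint32_t)(p) & 0x3ff) << 10)  /* bits [19:10], valid when HW=1 */
  #define LR_STATE_PENDING  (1u << 28)
  #define LR_STATE_ACTIVE   (1u << 29)
  #define LR_HW             (1u << 31)

  /*
   * Build an LR value for a forwarded physical interrupt: the HW bit is
   * set, the physical IRQ number is linked in, and exactly one of
   * Pending/Active is set -- never both, as required when HW=1.
   */
  static uint32_t make_forwarded_lr(uint16_t virq, uint16_t pirq, int pending)
  {
          uint32_t lr = LR_HW | LR_VIRTUALID(virq) | LR_PHYSID(pirq);

          lr |= pending ? LR_STATE_PENDING : LR_STATE_ACTIVE;

          /* With HW set, Pending+Active is not a legal combination. */
          assert((lr & (LR_STATE_PENDING | LR_STATE_ACTIVE)) !=
                 (LR_STATE_PENDING | LR_STATE_ACTIVE));
          return lr;
  }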

Forwarded Physical Interrupts Life Cycle
----------------------------------------

The state of forwarded physical interrupts is managed in the following way:

  - The physical interrupt is acked by the host, and becomes active on
    the physical distributor (*).
  - KVM sets the LR.Pending bit, because this is the only way the GICV
    interface is going to present it to the guest.
  - LR.Pending will stay set as long as the guest has not acked the interrupt.
  - LR.Pending transitions to LR.Active on the guest read of the IAR, as
    expected.
  - On guest EOI, the *physical distributor* active bit gets cleared,
    but the LR.Active is left untouched (set).
  - KVM clears the LR on VM exits when the physical distributor
    active state has been cleared.

(*): The host handling is slightly more complicated. For some forwarded
interrupts (shared), KVM directly sets the active state on the physical
distributor before entering the guest, because the interrupt is never actually
handled on the host (see details on the timer as an example below). For other
forwarded interrupts (non-shared) the host does not deactivate the interrupt
when the host ISR completes, but leaves the interrupt active until the guest
deactivates it. Leaving the interrupt active is allowed, because Linux
configures the physical GIC with EOIMode=1, which causes EOI operations to
perform a priority drop allowing the GIC to receive other interrupts of the
default priority.
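
The last two steps of the list above (guest EOI clears the physical
active state, KVM cleans up on VM exit) can be sketched as the following
exit-time maintenance pass. The structure and the helper that reads the
physical distributor's active state are simplified stand-ins, not the
kernel's actual data structures or functions.

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical, simplified view of one list register. */
  struct lr_entry {
          uint16_t virq;     /* virtual IRQ number                */
          uint16_t pirq;     /* linked physical IRQ (valid if hw) */
          bool     hw;       /* HW bit                            */
          bool     pending;  /* LR.Pending                        */
          bool     active;   /* LR.Active                         */
  };

  /* Stand-in for reading the active state (GICD_ISACTIVERn) of a physical IRQ. */
  extern bool phys_dist_is_active(uint16_t pirq);

  /*
   * On VM exit: a forwarded interrupt whose guest-side EOI has already
   * cleared the active state on the physical distributor is finished,
   * so its LR can be cleared and reused.
   */
  static void fold_forwarded_lrs(struct lr_entry *lrs, int nr_lrs)
  {
          for (int i = 0; i < nr_lrs; i++) {
                  struct lr_entry *lr = &lrs[i];

                  if (!lr->hw || (!lr->pending && !lr->active))
                          continue;
                  if (!phys_dist_is_active(lr->pirq)) {
                          lr->pending = false;
                          lr->active  = false;
                          lr->hw      = false;   /* LR is now free */
                  }
          }
  }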

Forwarded Edge and Level Triggered PPIs and SPIs
------------------------------------------------

Forwarded physical interrupts injected should always be active on the
physical distributor when injected to a guest.

Level-triggered interrupts will keep the interrupt line to the GIC
asserted, typically until the guest programs the device to deassert the
line. This means that the interrupt will remain pending on the physical
distributor until the guest has reprogrammed the device. Since we
always run the VM with interrupts enabled on the CPU, a pending
interrupt will exit the guest as soon as we switch into the guest,
preventing the guest from ever making progress as the process repeats
over and over. Therefore, the active state on the physical distributor
must be set when entering the guest, preventing the GIC from forwarding
the pending interrupt to the CPU. As soon as the guest deactivates the
interrupt, the physical line is sampled by the hardware again and the host
takes a new interrupt if and only if the physical line is still asserted.

Edge-triggered interrupts do not exhibit the same problem with
preventing guest execution that level-triggered interrupts do. One
option is to not use the HW bit at all, and inject edge-triggered interrupts
from a physical device as pure virtual interrupts. But that would
potentially slow down handling of the interrupt in the guest, because a
physical interrupt occurring in the middle of the guest ISR would
preempt the guest for the host to handle the interrupt. Additionally,
if you configure the system to handle interrupts on a separate physical
core from that running your VCPU, you still have to interrupt the VCPU
to queue the pending state onto the LR, even though the guest won't use
this information until the guest ISR completes. Therefore, the HW
bit should always be set for forwarded edge-triggered interrupts. With
the HW bit set, the virtual interrupt is injected and additional
physical interrupts occurring before the guest deactivates the interrupt
simply mark the state on the physical distributor as Pending+Active. As
soon as the guest deactivates the interrupt, the host takes another
interrupt if and only if there was a physical interrupt between injecting
the forwarded interrupt to the guest and the guest deactivating the
interrupt.

Consequently, whenever we schedule a VCPU with one or more LRs with the
HW bit set, the interrupt must also be active on the physical
distributor.
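
On the guest-entry path that invariant could be enforced as sketched
below, assuming a simplified LR structure and a stand-in helper that
sets the active bit (conceptually, a write to GICD_ISACTIVERn); neither
is the kernel's actual interface.

  #include <stdbool.h>
  #include <stdint.h>

  struct lr_entry {
          uint16_t pirq;   /* linked physical IRQ               */
          bool     hw;     /* HW bit set => forwarded interrupt */
  };

  /* Stand-in for setting the active bit on the physical distributor. */
  extern void phys_dist_set_active(uint16_t pirq);

  /*
   * Before entering the guest: every forwarded interrupt present in an LR
   * must be active on the physical distributor, so that a still-asserted
   * level-triggered line cannot immediately re-fire and bounce us back
   * out of the guest.
   */
  static void prepare_forwarded_irqs(struct lr_entry *lrs, int nr_lrs)
  {
          for (int i = 0; i < nr_lrs; i++)
                  if (lrs[i].hw)
                          phys_dist_set_active(lrs[i].pirq);
  }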

Forwarded LPIs
--------------

LPIs, introduced in GICv3, are always edge-triggered and do not have an
active state. They become pending when a device signals them, and as
soon as they are acked by the CPU, they are inactive again.

It therefore doesn't make sense, and is not supported, to set the HW bit
for physical LPIs that are forwarded to a VM as virtual interrupts,
typically virtual SPIs.

For LPIs, there is no other choice than to preempt the VCPU thread if
necessary, and queue the pending state onto the LR.

Putting It Together: The Architected Timer
------------------------------------------

The architected timer is a device that signals interrupts with level
triggered semantics. The timer hardware is directly accessed by VCPUs
which program the timer to fire at some point in time. Each VCPU on a
system programs the timer to fire at different times, and therefore the
hardware is multiplexed between multiple VCPUs. This is implemented by
context-switching the timer state along with each VCPU thread.

However, this means that a scenario like the following is entirely
possible, and in fact, typical:

1.  KVM runs the VCPU
2.  The guest programs the timer to fire in T+100
3.  The guest is idle and calls WFI (wait-for-interrupts)
4.  The hardware traps to the host
5.  KVM stores the timer state to memory and disables the hardware timer
6.  KVM schedules a soft timer to fire in T+(100 - time since step 2)
7.  KVM puts the VCPU thread to sleep (on a waitqueue)
8.  The soft timer fires, waking up the VCPU thread
9.  KVM reprograms the timer hardware with the VCPU's values
10. KVM marks the timer interrupt as active on the physical distributor
11. KVM injects a forwarded physical interrupt to the guest
12. KVM runs the VCPU

Notice that KVM injects a forwarded physical interrupt in step 11 without
the corresponding interrupt having actually fired on the host. That is
exactly why we mark the timer interrupt as active in step 10, because
the active state on the physical distributor is part of the state
belonging to the timer hardware, which is context-switched along with
the VCPU thread.
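
A very simplified sketch of the WFI scenario above follows. All of the
names (the timer context structure, the soft-timer and injection
helpers) are illustrative assumptions rather than KVM's real functions;
the point is only the ordering: save and disable the hardware timer and
arm a soft timer when the VCPU blocks, then restore the timer, mark the
interrupt active on the physical distributor, and inject the forwarded
interrupt when the soft timer fires.

  #include <stdint.h>

  /* Hypothetical per-VCPU virtual timer context. */
  struct vtimer_ctx {
          uint64_t cval;   /* programmed compare value (the guest's deadline) */
          uint32_t ctl;    /* enable/mask/status bits                         */
          uint16_t ppi;    /* physical timer PPI number                       */
          uint16_t virq;   /* virtual IRQ presented to the guest              */
  };

  /* Illustrative stand-ins for host services (not real kernel APIs). */
  extern uint64_t timer_counter_now(void);
  extern void     hw_timer_save_and_disable(struct vtimer_ctx *t);
  extern void     hw_timer_restore(const struct vtimer_ctx *t);
  extern void     soft_timer_start_at(uint64_t counter_value);
  extern void     phys_dist_set_active(uint16_t pirq);
  extern void     inject_forwarded_irq(uint16_t virq, uint16_t pirq);

  /* Steps 4-7: the guest executed WFI and the VCPU thread goes to sleep. */
  static void vcpu_block(struct vtimer_ctx *t)
  {
          hw_timer_save_and_disable(t);
          if (t->cval > timer_counter_now())
                  soft_timer_start_at(t->cval);  /* wake us when the deadline passes */
  }

  /* Steps 8-11: the soft timer fired; prepare the interrupt before re-entry. */
  static void vcpu_unblock_timer_expired(struct vtimer_ctx *t)
  {
          hw_timer_restore(t);
          phys_dist_set_active(t->ppi);          /* part of the context-switched timer state */
          inject_forwarded_irq(t->virq, t->ppi); /* HW bit set, Pending, linked to the PPI   */
  }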

If the guest does not idle because it is busy, the flow looks like this
instead:

1.  KVM runs the VCPU
2.  The guest programs the timer to fire in T+100
3.  At T+100 the timer fires and a physical IRQ causes the VM to exit
    (note that this initially only traps to EL2 and does not run the host ISR
    until KVM has returned to the host).
4.  With interrupts still disabled on the CPU coming back from the guest, KVM
    stores the virtual timer state to memory and disables the virtual hw timer.
5.  KVM looks at the timer state (in memory) and injects a forwarded physical
    interrupt because it concludes the timer has expired.
6.  KVM marks the timer interrupt as active on the physical distributor
7.  KVM enables the timer, enables interrupts, and runs the VCPU

Notice that again the forwarded physical interrupt is injected to the
guest without having actually been handled on the host. In this case it
is because the physical interrupt is never actually seen by the host: the
timer is disabled upon guest return, and the virtual forwarded interrupt is
injected on the KVM guest entry path.
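
The busy-guest flow above can be sketched in the same illustrative
style. The ENABLE/IMASK/ISTATUS bits follow the architectural CNTV_CTL
layout; the structure and helper names are assumptions carried over from
the previous sketch, not KVM's actual code.

  #include <stdbool.h>
  #include <stdint.h>

  /* Architectural CNTV_CTL bits. */
  #define CNTV_CTL_ENABLE   (1u << 0)
  #define CNTV_CTL_IMASK    (1u << 1)
  #define CNTV_CTL_ISTATUS  (1u << 2)

  struct vtimer_ctx {
          uint64_t cval;
          uint32_t ctl;
          uint16_t ppi;
          uint16_t virq;
  };

  /* Illustrative stand-ins, as in the previous sketch. */
  extern void hw_timer_save_and_disable(struct vtimer_ctx *t);
  extern void phys_dist_set_active(uint16_t pirq);
  extern void inject_forwarded_irq(uint16_t virq, uint16_t pirq);

  /* Did the saved state say the timer had fired (enabled, unmasked, expired)? */
  static bool vtimer_should_fire(const struct vtimer_ctx *t)
  {
          return (t->ctl & CNTV_CTL_ENABLE) &&
                 !(t->ctl & CNTV_CTL_IMASK) &&
                 (t->ctl & CNTV_CTL_ISTATUS);
  }

  /* Exit path, still running with interrupts disabled on the host CPU. */
  static void handle_timer_on_exit(struct vtimer_ctx *t)
  {
          hw_timer_save_and_disable(t);  /* the host never takes the physical IRQ */
          if (vtimer_should_fire(t)) {
                  phys_dist_set_active(t->ppi);
                  inject_forwarded_irq(t->virq, t->ppi);
          }
          /* ... then re-enable the timer and interrupts, and run the VCPU ... */
  }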