riscv-non-isa/server-soc

Rules around access to non-existent address space, DMA/hart access to memory?

andreiw opened this issue · 3 comments

The Arm BSA (https://developer.arm.com/documentation/den0094/latest/) has a section on "Memory Map" (3.4) with these rules. Do we need similar ones?

RB_MEM_02: Where a memory access is to an unpopulated part of the addressable memory space, accesses must be
terminated in a manner that is presented to the PE as either a precise Data Abort, or as a system error
interrupt, or an SPI, or LPI interrupt to be delivered to the GIC.

RB_MEM_03: All Non-secure on-chip DMA requesters in a base system that are expected to be under the control of the
operating system or hypervisor must be capable of addressing all of the Non-secure address space.

RB_MEM_04: If the DMA requests go through an SMMU then the requester must be capable of addressing all of the
Non-secure address space when the SMMU is turned off.

RB_MEM_05: All PEs must be able to access all of the Non-secure address space

RB_MEM_06: Non-secure off-chip devices that cannot directly address all of the Non-secure address space must be placed
behind a stage 1 SMMU that is compatible with the Arm SMMUv2 or SMMUv3 specification, that has an
output address size large enough to address all of the Non-secure address space. See Section 3.7.

RB_MEM_07: Where it is possible for the forward progress of a memory transaction to depend on a second memory access,
the system must avoid deadlock if the memory access gets ordered behind the original transaction.

  1. Non-existent - i.e., vacant - memory regions are classified by the RISC-V architecture as I/O regions with attributes specifying that no accesses are supported. Accesses to such regions lead to an access-fault exception corresponding to the original access type (see the sketch after this list). Since the architecture does not leave that behavior optional, a mandate here was not required.
  2. ADR_020 requires the host bridge to enforce physical memory attribute checks (analogous to PMA) and physical memory protection checks (analogous to PMP), and to treat violating requests as unsupported requests.
  3. The RISC-V IOMMU defines an Off mode that performs no address translation or protection checks. The ADR_020 requirements on PMA/PMP checks still apply to all transactions even when the IOMMU is in Off mode; this is mandated by the IOMMU specification.
  4. Hart access to memory is governed by PMA/PMP checks and so further mandates were not required.
  5. Supporting an IOMMU and having all DMA capable peripherals be governed by IOMMUs is mandated in IOM_010 and IOM_020. IOM_050 requires support for all virtual memory system modes supported by the harts. IOM_220 requires the IOMMU to support a physical address width that is at least as wide as that supported by the application processor harts in the SoC.
  6. CCA_010 requires the host bridges to honor the PCIe memory ordering rules. The IOMMU specification requires the IOMMU data structures to be located in main memory, and the host bridge is required to enforce physical memory attribute checks. The RISC-V IOMMU also does not support a "block on fault" mode, since blocking on faults can lead to deadlocks with protocols like PCIe.
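
To make items 1, 2, and 4 concrete, here is a minimal C sketch of how a PMA/PMP-style checker could classify an access to a vacant or access-restricted region, whether the request comes from a hart or from a DMA requester behind the host bridge. All type and function names below are illustrative assumptions, not definitions from the server SoC, privileged, or IOMMU specifications.

```c
/* Illustrative only: a PMA/PMP-style classification of an access.
 * Vacant regions behave like I/O regions whose attributes permit no
 * access type, so any access to them reports an access fault that
 * corresponds to the original access type. */
#include <stdbool.h>
#include <stdint.h>

typedef enum { ACC_READ, ACC_WRITE, ACC_EXEC } acc_type_t;

typedef enum {
    ACC_OK,
    FAULT_LOAD_ACCESS,   /* load access-fault        */
    FAULT_STORE_ACCESS,  /* store/AMO access-fault   */
    FAULT_FETCH_ACCESS   /* instruction access-fault */
} acc_result_t;

typedef struct {
    uint64_t base, size;
    bool read, write, exec;  /* access-type attributes (PMA-like) */
    bool populated;          /* false => vacant/unimplemented     */
} region_t;

/* Map the original access type to the corresponding access fault. */
static acc_result_t fault_for(acc_type_t t)
{
    switch (t) {
    case ACC_READ:  return FAULT_LOAD_ACCESS;
    case ACC_WRITE: return FAULT_STORE_ACCESS;
    default:        return FAULT_FETCH_ACCESS;
    }
}

acc_result_t check_access(const region_t *map, int n,
                          uint64_t addr, acc_type_t t)
{
    for (int i = 0; i < n; i++) {
        const region_t *r = &map[i];
        if (addr < r->base || addr - r->base >= r->size)
            continue;
        if (!r->populated)                       /* vacant region    */
            return fault_for(t);
        if ((t == ACC_READ  && !r->read)  ||
            (t == ACC_WRITE && !r->write) ||
            (t == ACC_EXEC  && !r->exec))        /* attribute denies */
            return fault_for(t);
        return ACC_OK;
    }
    return fault_for(t);                         /* no region matches */
}
```

The same classification applies conceptually to DMA: per ADR_020 the host bridge performs the analogous checks and treats a violating request as an unsupported request rather than raising a hart exception.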

Thanks, that clarifies things. Do we need a rule like RB_MEM_05, or is it also implied somewhere that every hart can access every bit of memory?

For the RISC-V architecture I can't find a restriction that prohibits an RV64 implementation from attempting to access all of the 2^56 address space, or an RV32 implementation from accessing all of the 2^34 address space. Whether an access is allowed is determined by the memory protections defined by the PMP and the attributes of the memory defined by the PMA, so that requirement may not be needed. RB_MEM_05 is also rather broad, because some memory ranges may only be accessible by certain types of accesses - e.g., some memory ranges may have an atomicity PMA that disallows atomic operations, and for some memory ranges the access-type PMA may restrict the width of the access.
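
For reference, a minimal C sketch of the two separate questions raised above: whether an address is even representable in the architectural physical address space (2^56 for RV64, 2^34 for RV32), and whether the PMA attributes of the target range then allow the access at all, at a given width or atomically. The constants, struct fields, and helper names are assumptions for illustration, not spec-defined interfaces.

```c
/* Illustrative only: representability vs. permission.
 * "Can the hart emit this address?" is a width question;
 * "is the access allowed, and in what form?" is a PMA/PMP question. */
#include <stdbool.h>
#include <stdint.h>

#define RV64_PA_BITS 56u   /* RV64 physical addresses fit in 56 bits        */
#define RV32_PA_BITS 34u   /* RV32 (Sv32) physical addresses fit in 34 bits */

bool pa_representable(uint64_t addr, unsigned pa_bits)
{
    return addr < (UINT64_C(1) << pa_bits);
}

/* Per-range attributes in the spirit of the access-type and atomicity PMAs. */
typedef struct {
    unsigned max_access_bytes; /* e.g. 4 => at most 32-bit accesses      */
    bool     amo_supported;    /* false => atomicity PMA disallows AMOs  */
} range_attr_t;

bool access_allowed(const range_attr_t *a, unsigned bytes, bool is_amo)
{
    if (bytes > a->max_access_bytes)   /* width restricted by access-type PMA */
        return false;
    if (is_amo && !a->amo_supported)   /* AMO disallowed by atomicity PMA     */
        return false;
    return true;
}
```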