Memory requirement for new test framework
stnolting opened this issue · 3 comments
I'm trying to port the new test framework to a custom RISC-V core.
While testing each of the new tests, I came across the jump/branch tests, for example the I/jal-01.S test:
inst_0:
// rd==x21, imm_val < 0,
// opcode: jal; dest:x21; immval:0x55556; align:0
TEST_JAL_OP(x13, x21, 0x55556, 1b, x2, 0,0)
As far as I can see, the "immval" (= 0x55556) in this test case defines the "distance" (i.e. the offset) between the jump-and-link instruction and its target by inserting NOPs. Thus, the generated program requires more than 400 kB of program memory, consisting mainly of NOPs.
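To illustrate my understanding (a simplified sketch only, not the actual TEST_JAL_OP expansion; the filler count is approximate), the generated code seems to boil down to something like this:

```asm
# Simplified sketch of the generated test case -- illustrative, not the
# real macro expansion.
1:                      # jump target
    .rept 0x15555       # ~87k NOPs = ~0x55554 bytes of filler, so the
    nop                 # distance between the jal below and label 1
    .endr               # roughly matches the requested immval
    jal x21, 1b         # backward jump; return address written to x21
```

So the immediate value is exercised by physically placing the target that far away, which is where the memory footprint comes from.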
My question - part 1:
What is the purpose of this large offset? I think the actual functionality could also be tested with smaller offsets. Or is the intention to test all/most of JAL's 20-bit offset range?
My question - part 2:
Is there any option to globally reduce these large offsets to generate a smaller program? For example, if the target processor is memory-limited (let's say a small microcontroller with only a few kB of memory)? Or is this not possible because the reference results include data words that are based on these (fixed?) large offsets?
Thank you for the detailed answer!
I managed to use a simulation-only model of the memory to run the compliance tests, overcoming the physical memory limitations. Now the whole test suite (v2.1) works like a charm.
I am wondering: is there something like an "official label", like RISC-V-Compliant, that can be used for a core if it passes certain tests?
By the way, I have noticed that the reference signatures use zero-padding to make the size of the signature always a multiple of 4 words (at least for rv32i). I do not remember reading that anywhere in the documentation, so it might be helpful to add a note somewhere.
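As a hypothetical illustration (made-up values, not taken from an actual reference file, and assuming the padding is appended at the end), a signature of six 32-bit words would get two zero words added so that its length becomes a multiple of four words:

```
cafe0001
cafe0002
cafe0003
cafe0004
cafe0005
cafe0006
00000000
00000000
```

Here, the last two all-zero words are the padding.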
But there is a basic question here: suppose it's an IoT chip with very limited memory, and so it only implements the low 16 bits for address calculations. Can that be considered a compatible implementation?
Good question...
From my humble point of view, a core can only be compliant/compatible/whatever with the standard if it fulfills the specifications. When the spec says N bits are used for certain things, N bits have to be supported/implemented. But of course, any constraints on architecture-specific issues like address computation could be seen as a "custom extension"... But how to check that?! I am stumped...
I'm looking forward to the next version of the framework.