chipsalliance/Cores-VeeR-EH1

tlu_flush_path_e4

kingstone1927 opened this issue · 7 comments

Can someone help me understand this snippet of code from dec_tlu_ctl.sv? I am not familiar with the naming convention of the SweRV core either, so I would also appreciate it if someone could enlighten me on that.

   assign tlu_flush_path_e4[31:1] = take_reset ? rst_vec[31:1] :

                                     ( ({31{~take_nmi & i0_mp_e4}} & exu_i0_flush_path_e4[31:1]) |
                                      ({31{~take_nmi & ~i0_mp_e4 & i1_mp_e4 & ~rfpc_i0_e4 & ~lsu_i0_exc_dc4}} & exu_i1_flush_path_e4[31:1]) |
                                      ({31{~take_nmi & sel_npc_e4}} & npc_e4[31:1]) |
                                      ({31{~take_nmi & rfpc_i0_e4}} & dec_tlu_i0_pc_e4[31:1]) |
                                      ({31{~take_nmi & rfpc_i1_e4}} & dec_tlu_i1_pc_e4[31:1]) |
                                      ({31{interrupt_valid}} & interrupt_path[31:1]) |
                                      ({31{(i0_exception_valid_e4 | lsu_exc_valid_e4 | (trigger_hit_e4 & ~trigger_hit_dmode_e4)) & ~interrupt_valid}} & {mtvec[30:1],1'b0}) |
                                      ({31{~take_nmi & mret_e4 & ~wr_mepc_wb}} & mepc[31:1]) |
                                      ({31{~take_nmi & debug_resume_req_f}} & dpc[31:1]) |
                                      ({31{~take_nmi & sel_npc_wb}} & npc_wb[31:1]) |
                                      ({31{~take_nmi & mret_e4 & wr_mepc_wb}} & dec_csr_wrdata_wb[31:1]) );

Thanks

This is the PC address that the core is going to fetch after flushing the entire pipeline due to one or more of:

  • interrupt taken,
  • mispredict from i0 at E4,
  • mispredict from i1 at E4,
  • exception,
  • microarch refetch of the current PC,
  • microarch refetch of the next PC,
  • mret,
  • resume from halt,
  • resume from debug halt,
  • reset,
  • and NMI.
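
On the coding and naming convention: each ({31{select}} & value[31:1]) term is one leg of an AND-OR (one-hot) mux. The replication operator {31{select}} stretches the 1-bit select across the 31-bit bus, so the leg whose select is high passes its value through, and the masked legs are simply OR'd together; the selects are designed to be mutually exclusive. The _e4 / _dc4 / _wb suffixes tag the pipeline stage a signal belongs to (execute stage 4, load/store pipe stage 4, writeback), and a trailing _f generally marks a flopped copy of a signal. A minimal sketch of the mux idiom (the signal names here are illustrative, not from the core):

    // AND-OR (one-hot) mux: each select is replicated to the bus width and
    // masks its data leg; the masked legs are OR'd together. The selects
    // must be mutually exclusive, otherwise legs would merge.
    logic        sel_a, sel_b, sel_c;   // one-hot selects (illustrative)
    logic [31:1] a, b, c, y;

    assign y[31:1] = ({31{sel_a}} & a[31:1]) |
                     ({31{sel_b}} & b[31:1]) |
                     ({31{sel_c}} & c[31:1]);

This style avoids priority logic: synthesis sees a flat AND-OR structure instead of a chain of ternaries.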

@aprnath Thank you! This is a huge help for me.

Can you point me to the code sections that control/command "flushing the entire pipeline"? Also, where in the design are the pipelines defined and created? I look forward to your answers.

I appreciate your help.

Hi @kingstone1927,
Follow these signals from the dec/tlu block:

design/dec/dec.sv:  output logic dec_tlu_flush_noredir_wb ,    // Tell fetch to idle on this flush
design/dec/dec.sv:   output logic dec_tlu_flush_leak_one_wb,   // single step
design/dec/dec.sv:   output logic dec_tlu_flush_err_wb,        // iside perr/ecc rfpc
design/dec/dec.sv:   output logic        dec_tlu_flush_lower_wb,     // tlu flush due to late mp, exception, rfpc, or int
design/dec/dec.sv:   output logic [31:1] dec_tlu_flush_path_wb,      // tlu flush target
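
To get a feel for how these are consumed, here is a hedged sketch of the receiving side (illustrative only, not the actual ifu code; clk and fetch_addr are placeholder names):

    // Illustrative only -- not the real IFU code. On a lower-pipe flush the
    // front end abandons in-flight fetches and redirects to the TLU's
    // flush target.
    logic [31:1] fetch_addr;
    always_ff @(posedge clk) begin
       if (dec_tlu_flush_lower_wb)
          fetch_addr[31:1] <= dec_tlu_flush_path_wb[31:1];  // redirect fetch
    end

Grep for dec_tlu_flush_lower_wb under design/ifu/ to see the real redirect logic.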

@aprnath Thank you!

Can you point me to where the pipeline registers are instantiated/created?

Again, I appreciate your help

I am afraid there is no single place where all the pipeline registers exist; they are distributed throughout the core. Look for the stage-name suffixes (_e1 through _e4, _dc4, _wb, and so on, as in the signals above). The PRM has a pipeline block diagram that could be your guide.
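
For example, a one-stage advance typically looks like this (a hedged sketch; foo is a placeholder name):

    // Placeholder example of the staging pattern: the _e3 version of a
    // signal is flopped into its _e4 version, advancing it one pipeline
    // stage per clock.
    logic [31:1] foo_e3, foo_e4;
    always_ff @(posedge clk)
       foo_e4[31:1] <= foo_e3[31:1];

In the RTL these flops are mostly instantiated through the rvdff/rvdffe wrappers from design/lib/beh_lib.sv rather than written as raw always_ff blocks.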

@aprnath Thank you! I will take a look at the PRM

@aprnath I just have one more question for this post

// 32-bit enable-gated flop (rvdffe): captures the decode-stage (_d) CSR
// source value into E1 (_e1) when i0_e1_data_en is high
rvdffe #(32) csr_rs1_ff (.*, .en(i0_e1_data_en), .din(csr_rs1_in_d[31:0]), .dout(exu_csr_rs1_e1[31:0]));

Is this an example of a pipeline register?

Thank you