1. Introduction

The configurability and openness of the RISC-V ISA has led to its rapid adoption globally by both industry and academia. As a consequence, there is today a large and growing number of open- and closed-source, hard and soft implementations of the RISC-V ISA. Exploiting the freedom of optional sub-extensions and the absence of micro-architectural mandates in the ISA, these implementations can vary from one another in any of the following ways:

  • The language chosen to build the target can range from classical HDLs like Verilog and VHDL, through advanced languages like BSV and Chisel, all the way to high-level synthesis (HLS) languages. Each of these languages offers a different set of features, which in turn defines the capabilities of the target being developed.

  • Each implementation can vary in its choice of the ISA sub-extensions being supported. This choice is typically driven by the end use or domain being targeted.

  • Implementations can vary significantly in their micro-architectural designs while still being compliant with the RISC-V ISA. A deeply pipelined core may be optimized for higher performance, while another core supporting the same sub-extensions may have a shorter pipeline and be optimized for area and power.

  • Core generators are also becoming quite common. For example, the Chromite core generator from InCore is capable of configuring and controlling major aspects of the ISA and the micro-architecture, thereby generating a customized core for the end user.

Irrespective of the above choices, one of the most common challenges faced by any RISC-V design is verification. As is well known, it is crucial to enable verification from the very beginning of the design stage. To verify a RISC-V processor, one requires:

  • An RTL/target that needs to be tested. This target can be characterized by any of the above-mentioned choices.

  • A set of tests to run on the processor.

  • A reference model to compare the results against and arrive at a pass/fail verdict.

Compared to the immense growth of open-source RISC-V designs, the community's effort in building open-source verification test suites has not been as impressive. One effort by RISC-V International (RVI) in this direction has been to build an Architectural Test Suite (ATS) (formerly known as the compliance suite), which serves as a mechanism to confirm that a target operates in accordance with the RISC-V ISA specification. However, the ATS is not a verification suite: it only checks for basic behaviors and does not test the functional aspects of the processor. Passing the ATS only means that the implementers' interpretation of the spec is correct; it does not mean that the processor is bug-free.

To fill this gap to a certain extent, there have been a few open-source random program generators like AAPG, RISC-V Torture, and MicroTESK. AAPG is a Python-based pseudo-random assembly program generator with a large number of configurable knobs controlled through a YAML file. Torture, on the other hand, uses a Java-based backend to generate random assembly programs, while MicroTESK provides a Ruby-based language to express test scenarios, so-called templates. The challenge in using these generators is that each requires its own environment to configure and produces its own artifacts: linker scripts, header files, libraries, etc. Each generator therefore needs its own minimal environment just to run its tests on a RISC-V target. A framework that encapsulates each of these generators behind a standardized plugin/API, exposing a common output interface of files, compile commands, environments, etc. that a target can easily consume, would be a great asset for any designer looking to enable early verification.
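As an illustration, such a plugin contract can be reduced to a small, uniform Python interface. All class, method and field names below are hypothetical sketches, not the actual RiVer Core API:

```python
# A minimal sketch of a generator plugin contract; all names here are
# illustrative assumptions, not the actual RiVer Core API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class GeneratedSuite:
    """Common output format every generator plugin must produce."""
    asm_files: List[str]                               # generated assembly tests
    linker_script: str                                 # linker script for the tests
    include_dirs: List[str] = field(default_factory=list)
    compile_cmd: str = ""                              # command template to build a test
    env: Dict[str, str] = field(default_factory=dict)  # extra environment variables


class GeneratorPlugin(ABC):
    """Wraps one test generator (e.g. AAPG, Torture, MicroTESK)."""

    @abstractmethod
    def configure(self, config_file: str) -> None:
        """Load the generator-specific configuration (knobs, seeds, ...)."""

    @abstractmethod
    def generate(self, output_dir: str) -> GeneratedSuite:
        """Run the generator and return its artifacts in the common format."""
```

With every generator hidden behind the same two calls, the surrounding flow no longer needs to know whether AAPG, Torture or MicroTESK produced the tests.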

RVI has recently adopted the RISC-V SAIL formal model as the golden model of the specification. However, SAIL requires a few more feature additions before it can act as a reference model for verification. The primary blocker has been the DSL in which SAIL is written, which makes it less approachable to hack on or modify. In view of this, the community has also adopted Spike as a pseudo-golden model, owing to its simple implementation and an active contributor community. A common practice when using these models for verification is to generate and compare the execution logs produced by the target and the reference model; a difference in the logs indicates the presence of a bug. The log formats produced by Spike, SAIL and the target can all differ, so some post-processing is required to enable such a verification environment. Another point of difference across reference models is how, and to what extent, they can be configured to mimic the DUT. One should also note that a feature of the spec may be available in one simulator and missing in another, so it may be necessary to swap simulators in and out depending on what is being tested. A framework which hides this post-processing and configuration behind a standard API/plugin will allow newer designs to adopt any such model as a reference with ease.
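For instance, the comparison step could first normalize every simulator's trace into a common commit record and then diff the two streams. The names below are illustrative assumptions, not an existing RiVer Core or Spike/SAIL API; producing the normalized records from raw logs is assumed to be done by model-specific parsers:

```python
# Illustrative log-comparison sketch; turning raw Spike/SAIL/DUT logs into
# CommitRecord streams is left to model-specific parser plugins.
from typing import Iterable, NamedTuple, Optional


class CommitRecord(NamedTuple):
    """Normalized form of one committed instruction."""
    pc: int        # program counter
    instr: int     # raw instruction encoding
    wb_reg: str    # destination register name, '' if none
    wb_val: int    # value written back, 0 if none


def first_mismatch(dut_log: Iterable[CommitRecord],
                   ref_log: Iterable[CommitRecord]) -> Optional[CommitRecord]:
    """Walk both normalized logs in lock-step and return the first DUT
    record that disagrees with the reference, or None if they all match."""
    for dut_rec, ref_rec in zip(dut_log, ref_log):
        if dut_rec != ref_rec:
            return dut_rec
    return None
```

Because both logs arrive in the same normalized form, swapping Spike for SAIL (or vice versa) only changes the parser plugin, not the comparison itself.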

RiVer Core is an open-source, Python-based verification framework primarily aimed at addressing the above-mentioned limitations. RiVer Core enables running tests generated from any source (random or directed) on any target (irrespective of the design language and simulation environment) and comparing the results against any valid golden reference model of choice. RiVer Core achieves this by splitting the entire verification flow into multiple standardized Python plugin calls. Each plugin encapsulates a test generator, a target test environment, or a reference simulation environment. The framework itself acts as the central control point that invokes these plugins to generate, compile and simulate tests on different targets. A simplified sketch of such a flow is shown below; the next section provides an overview of the framework.
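The sketch below stitches the hypothetical interfaces from the earlier snippets into a single control loop; it illustrates the idea of a plugin-driven flow and is not the actual RiVer Core implementation:

```python
# Hypothetical top-level flow reusing the earlier plugin sketches;
# GeneratedSuite and first_mismatch are the illustrative definitions above,
# and the target/reference plugin methods are equally hypothetical.

def run_regression(generator, target, reference, work_dir: str) -> bool:
    """Generate tests, run them on the DUT and on the reference model,
    and compare the normalized commit logs. Returns True on a clean run."""
    suite = generator.generate(work_dir)          # 1. generate tests
    for asm in suite.asm_files:
        elf = target.compile(asm, suite)          # 2. build using common artifacts
        dut_log = target.run(elf)                 # 3. simulate on the target/DUT
        ref_log = reference.run(elf)              # 4. replay on the golden model
        if first_mismatch(dut_log, ref_log):      # 5. diff the logs
            print(f"FAIL: {asm}")
            return False
    return True
```

Swapping out the generator, the target, or the reference model then amounts to loading a different plugin, with no change to the loop itself.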