IP vs SoC Verification

Why do we prefer random SystemVerilog [SV] testcases for IP verification and directed C testcases for SoC verification?

Let me address this question by explaining the difference between IP and SoC verification methodologies and how the traditional SoC verification methodology is evolving.

Though a random testcase is more powerful than a directed one in terms of finding bugs faster, we usually prefer random testcases for module- and sub-system-level verification and directed testcases for SoC-level verification. At the module/sub-system level, the functionality of an IP is less complex than that of an SoC, so we can randomize the scenarios as much as possible [regression testing], find more bugs, achieve coverage closure, and make the IPs stable. Since a complex SoC is built from such pre-verified, stable IPs, SoC verification engineers generally prefer directed testcases that verify how the entire system works with the software [firmware] running on the processors, rather than exhaustive regression simulation with random SV/UVM testcases.

The most common SoC architecture consists of one or more embedded processors, some on-chip memory, additional functional units, and interfaces to standard buses and perhaps off-chip memory as well. An on-chip bus such as AHB/AXI connects all the units together. Since the embedded processors run the software, the complete SoC is really the chip plus the software code that runs on these processors.

For example, if the SoC uses an ARM processor, we usually replace the ARM RTL [encrypted netlist] with its functional model, called a DSM [Design Simulation Model], which can use the firmware [written in C] as a stimulus to initiate any operation and drive all the other peripherals [RTL IPs]. SoC verification folks therefore write C testcases to generate various directed scenarios through the firmware and verify the SoC functionality. During simulation, the complete C source code is compiled into object code, which is loaded into on-chip RAM. The ARM processor model [DSM] reads the object code from the memory and initiates the operation, configuring and driving all the RTL peripheral blocks [Verilog/VHDL].

In order to do exhaustive verification and improve the bug-finding rate at the SoC level, the ARM processor model can also be created as an AHB/AXI master BFM/agent in SV/UVM [based on the on-chip bus protocol] that can generate various random sequences of ARM core instructions. We can define various random scenarios in UVM to model the firmware operation sequences, which can in turn drive the existing lower-level IP UVM sequences. This way we can scale up the IP-level random testcases to the SoC level and do exhaustive regression testing there too, though achieving coverage closure may still be challenging because of redundant testcases and slow simulation speed. There are also ways to create synthesizable SV/UVM BFMs [behavioural synthesis for emulation] and run this regression at higher speed using hardware-based solutions such as emulation/acceleration.

Also, a new verification methodology, PSS [Portable Test and Stimulus Standard], is evolving to address an ongoing SoC verification challenge: porting the IP/sub-system-level verification environment [HDL/SV/UVM/C testcases; simulation/emulation/FPGA-prototyping platforms, etc.] to the SoC level and reusing everything to verify the SoC.
