LLM4Cov: Execution-Aware Agentic Learning for High-coverage Testbench Generation

¹University of California, San Diego, ²NVIDIA, ³Agentry
† Work done while at NVIDIA
Overview figure: main components of LLM4Cov.

Abstract

Execution-aware LLM agents offer a promising paradigm for learning from tool feedback, but such feedback is often expensive and slow to obtain, making online reinforcement learning (RL) impractical. High-coverage hardware verification exemplifies this challenge due to its reliance on industrial simulators and non-differentiable execution signals. We propose LLM4Cov, an offline agent-learning framework that models verification as memoryless state transitions guided by deterministic evaluators. Building on this formulation, we introduce execution-validated data curation, policy-aware agentic data synthesis, and worst-state-prioritized sampling to enable scalable learning under execution constraints. We further curate a reality-aligned benchmark adapted from an existing verification suite through a revised evaluation protocol. Using the proposed pipeline, a compact 4B-parameter model achieves 69.2% coverage pass rate under agentic evaluation, outperforming its teacher by 5.3% and demonstrating competitive performance against models an order of magnitude larger.
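
The abstract does not spell out how worst-state-prioritized sampling is implemented; the sketch below is only an illustrative guess, assuming each candidate verification state carries a scalar coverage score in [0, 1]. The function name, the temperature parameter, and the shortfall-based weighting are hypothetical and not taken from the paper.

import random

def worst_state_prioritized_sample(states, coverages, k, temperature=0.5):
    # Weight each candidate state by its coverage shortfall (1 - coverage),
    # sharpened by the temperature so that low-coverage (worst) states are
    # drawn far more often; the small floor keeps fully-covered states samplable.
    weights = [max(1.0 - c, 1e-6) ** (1.0 / temperature) for c in coverages]
    return random.choices(states, weights=weights, k=k)

# Example (hypothetical usage): draw a training batch biased toward the
# worst-covered designs.
# batch = worst_state_prioritized_sample(design_states, coverage_scores, k=8)

Lowering the temperature concentrates sampling on the hardest (lowest-coverage) states, while a higher value approaches uniform sampling; the actual weighting scheme used by LLM4Cov may differ.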

BibTeX

@article{zhang2026llm4cov,
  title={LLM4Cov: Execution-Aware Agentic Learning for High-coverage Testbench Generation},
  author={Zhang, Hejia and Yu, Zhongming and Ho, Chia-Tung and Ren, Haoxing and Khailany, Brucek and Zhao, Jishen},
  journal={arXiv preprint arXiv:2602.16953},
  year={2026}
}