Fine-Grained Uncertainty Quantification for Long-Form Language Model Outputs: A Comparative Study
arXiv:2602.17431v1

Abstract: Uncertainty quantification has emerged as an effective approach to closed-book hallucination detection for LLMs, but existing methods are largely designed …