Provably Safe Generative Sampling with Constricting Barrier Functions

Darshan Gadginmath, Ahmed Allibhoy, Fabio Pasqualetti

arXiv:2602.21429v1. Abstract: Flow-based generative models, such as diffusion models and flow matching models, have achieved remarkable success in learning complex data distributions. However, a critical gap remains for their deployment in safety-critical domains: the lack of formal guarantees that generated samples will satisfy hard constraints. We address this by proposing a safety filtering framework that acts as an online shield for any pre-trained generative model. Our key insight is to cooperate with the generative process rather than override it. We define a constricting safety tube that is relaxed at the initial noise distribution and progressively tightens to the target safe set at the final data distribution, mirroring the coarse-to-fine structure of the generative process itself. By characterizing this tube via Control Barrier Functions (CBFs), we synthesize a feedback control input through a convex Quadratic Program (QP) at each sampling step. As the tube is loosest when noise is high and intervention is cheapest in terms of control energy, most constraint enforcement occurs when it least disrupts the model's learned structure. We prove that this mechanism guarantees safe sampling while minimizing the distributional shift from the original model at each sampling step, as quantified by the KL divergence. Our framework applies to any pre-trained flow-based generative scheme, requiring no retraining or architectural modifications. We validate the approach across constrained image generation, physically consistent trajectory sampling, and safe robotic manipulation policies, achieving 100% constraint satisfaction while preserving semantic fidelity.
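The paper's QP-based filter is not reproduced here, but the mechanism can be sketched for the simplest case: a single half-space safe set, where the minimum-energy CBF-QP admits a closed-form solution. The function name, the linear constraint, and the gain `alpha = 5.0` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cbf_qp_filter(x, v, a, b, alpha=5.0):
    """Minimal sketch of a per-step safety filter for a half-space
    safe set {x : a @ x >= b}, with barrier h(x) = a @ x - b.

    Solves  min_u ||u||^2  s.t.  a @ (v + u) >= -alpha * h(x),
    which for a single linear constraint reduces to the projection
    below (hypothetical helper; the paper's QP handles general CBFs).
    """
    h = a @ x - b                 # barrier value: h >= 0 means safe
    slack = a @ v + alpha * h     # CBF constraint residual at u = 0
    if slack >= 0.0:              # nominal velocity already satisfies it
        return np.zeros_like(v)
    # minimum-energy correction: project onto the constraint boundary
    return (-slack / (a @ a)) * a

# usage: filter one Euler step of a generative ODE dx/dt = v(x, t)
x = np.array([0.2, -0.5])
v = np.array([-1.0, 0.0])              # nominal model velocity
a, b = np.array([1.0, 0.0]), 0.0       # safe set: x[0] >= 0
u = cbf_qp_filter(x, v, a, b)
x_next = x + 0.01 * (v + u)            # corrected sampling step
```

When the nominal velocity already satisfies the barrier condition the filter returns zero, which is how the distributional shift stays minimal: intervention happens only when the CBF constraint would otherwise be violated.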

Executive Summary

This article proposes a safety filtering framework for flow-based generative models, addressing the lack of formal guarantees that generated samples satisfy hard constraints in safety-critical domains. The framework defines a constricting safety tube, characterized via Control Barrier Functions, that is relaxed at the initial noise distribution and progressively tightens to the target safe set at the final data distribution, cooperating with the generative process rather than overriding it. By synthesizing a feedback control input through a convex Quadratic Program (QP) at each sampling step, the mechanism guarantees safe sampling while minimizing the distributional shift from the original model, measured by KL divergence. The approach is validated on constrained image generation, trajectory sampling, and robotic manipulation, achieving 100% constraint satisfaction while preserving semantic fidelity.

Key Points

  • Proposes a safety filtering framework for flow-based generative models
  • Uses constricting barrier functions to guarantee safe sampling
  • Minimizes distributional shift from the original model through a convex Quadratic Program (QP)
  • Validated on constrained image generation, physically consistent trajectory sampling, and robotic manipulation, with 100% constraint satisfaction
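To make the constricting-tube idea concrete, here is a hedged sketch: a time-varying barrier that adds a slack margin decaying linearly from `delta` at the noise end (t = 0) to zero at the data end (t = 1). The linear schedule, the single half-space constraint, and all parameter values are assumptions for illustration; the paper's construction is more general.

```python
import numpy as np

def tube_margin(t, delta=2.0):
    """Illustrative tube schedule: slack delta at t = 0 (pure noise),
    zero at t = 1 (data). The linear decay is an assumption."""
    return (1.0 - t) * delta

def constricted_filter(x, v, t, a, b, alpha=5.0, delta=2.0):
    """Time-varying barrier h_t(x) = (a @ x - b) + tube_margin(t).
    The CBF condition picks up the explicit time derivative of the
    margin (-delta), giving:  a @ (v + u) - delta >= -alpha * h_t(x).
    Closed form for a single half-space constraint; a sketch only."""
    h_t = (a @ x - b) + tube_margin(t, delta)
    slack = a @ v - delta + alpha * h_t
    if slack >= 0.0:
        return np.zeros_like(v)
    return (-slack / (a @ a)) * a

# the same state needs no correction early, when the tube is loose,
# and a larger one late, as the tube constricts toward the safe set
x = np.array([-0.5, 0.0]); v = np.array([-1.0, 0.0])
a, b = np.array([1.0, 0.0]), 0.0
u_early = constricted_filter(x, v, 0.1, a, b)
u_late = constricted_filter(x, v, 0.9, a, b)
```

This mirrors the abstract's claim that the tube is loosest when noise is high: early in sampling the filter barely intervenes, and enforcement concentrates where the learned structure can still accommodate it.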

Merits

Strength

The framework provides formal guarantees for generated samples to satisfy hard constraints, addressing a critical gap in the deployment of generative models in safety-critical domains.

Strength

The approach minimizes the distributional shift from the original model at each sampling step, as measured by KL divergence, so generated samples preserve semantic fidelity to the underlying data distribution.

Strength

The framework is applicable to any pre-trained flow-based generative scheme, requiring no retraining or architectural modifications.

Demerits

Limitation

The framework assumes a pre-trained generative model, which may not be readily available in practice. Developing such models can be computationally expensive and may require significant expertise.

Limitation

The approach relies on a Control Barrier Function (CBF) that accurately characterizes the safe set, which may be difficult to construct for complex, high-dimensional systems.

Expert Commentary

The article presents a novel approach to ensuring that samples generated by flow-based generative models satisfy hard constraints. The use of constricting barrier functions, which tighten in step with the coarse-to-fine generative process, is a particularly elegant aspect of the framework. However, the assumption of an available pre-trained generative model and the reliance on an accurate Control Barrier Function are limitations that merit attention. Nevertheless, the proposed framework could significantly expand the use of generative models in safety-critical domains.

Recommendations

  • Further research is needed to develop more efficient and scalable methods for computing the control barrier function.
  • The framework should be tested on more complex and high-dimensional systems to evaluate its robustness and scalability.
