🤖 AI Summary
Problem: Possibilistic inference models are computationally expensive: computing possibility distributions (or their α-cuts) exactly scales poorly, which limits practical use.
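To make the bottleneck concrete, here is a minimal sketch of standard possibility theory (not the paper's code): a joint possibility distribution built by min-combination of per-variable factors, its α-cut, and the possibility measure as a max over an event. The factor values and the min-factorization are illustrative assumptions; the point is that exact α-cut computation enumerates a state space exponential in the number of variables.

```python
import itertools

# Hypothetical per-variable possibility degrees for 3 binary variables:
# per_var[i][v] is the possibility that variable i takes value v.
PER_VAR = {0: (1.0, 0.4), 1: (0.7, 1.0), 2: (1.0, 0.9)}

def joint_possibility(state):
    """Joint possibility via min-combination (a common possibilistic
    factorization assumption), pi(s) in [0, 1]."""
    return min(PER_VAR[i][v] for i, v in enumerate(state))

def alpha_cut(alpha, n_vars=3):
    """Exact alpha-cut: all joint states s with pi(s) >= alpha.
    Cost is exponential in n_vars -- the bottleneck the paper targets."""
    return [s for s in itertools.product((0, 1), repeat=n_vars)
            if joint_possibility(s) >= alpha]

def possibility_of_event(event_states):
    """Possibility measure: Pi(A) = sup (here max) of pi over A."""
    return max(joint_possibility(s) for s in event_states)

cut = alpha_cut(0.7)  # 4 of the 8 joint states have possibility >= 0.7
```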
Method: The paper introduces the first variational approximate inference framework tailored to possibility measures. Combining possibilistic theory with convex optimization, it designs a surrogate objective that integrates interval propagation and constraint relaxation, preserving possibilistic semantics while enabling efficient approximation.
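The paper's surrogate objective is not specified in this summary, but the interval-propagation ingredient can be illustrated generically. The sketch below (an assumption-laden illustration, not the authors' algorithm) replaces exact possibility degrees with intervals `[lo, hi]` and propagates them through min-combination (conjunction) and max-combination (disjunction); because min and max are monotone, the propagated interval soundly bounds the exact possibility without enumerating joint states.

```python
from functools import reduce

def interval_min(a, b):
    """Propagate bounds through min-combination (conjunctive fusion)."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def interval_max(a, b):
    """Propagate bounds through max-combination (disjunction):
    Pi(A or B) = max(Pi(A), Pi(B)) for possibility measures."""
    return (max(a[0], b[0]), max(a[1], b[1]))

def bound_conjunction(intervals):
    """Bound Pi for a conjunction of factors, each given as (lo, hi)."""
    return reduce(interval_min, intervals)

# Three hypothetical factor bounds; the true possibility of their
# conjunction is guaranteed to lie in the returned interval.
lo, hi = bound_conjunction([(0.6, 0.8), (0.9, 1.0), (0.5, 0.7)])
```

This is the cheap relaxation flavor: each propagation step is O(1) per factor, trading a tight answer for guaranteed bounds that a variational objective could then tighten.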
Contribution/Results: The key innovation is a systematic adaptation of variational inference principles to the possibilistic reasoning paradigm that preserves both interpretability and scalability. On standard uncertainty benchmarks, the method achieves a 3–5× inference speedup with possibility-profile error ≤ 0.02, making real-time deployment feasible.