🤖 AI Summary
This study addresses the excess of zero claims common in insurance data, which standard count models fail to capture; moreover, existing zero-adjusted count random-effects models can violate stochastic monotonicity, producing inconsistent posterior credibility adjustments. To resolve this, the paper introduces stochastic monotonicity constraints into count random-effects models with excess zeros, building on both zero-inflated and hurdle frameworks. The proposed approach retains random effects to capture longitudinal dependence and unobserved heterogeneity while rigorously enforcing stochastic monotonicity. By modeling the joint distribution of zero and positive claims under this constraint, the models achieve theoretically coherent and empirically robust credibility inference, marking the first integration of stochastic monotonicity into such over-dispersed, zero-heavy count settings.
📝 Abstract
Standard count models such as the Poisson and Negative Binomial models often fail to capture the large proportion of zero claims commonly observed in insurance data. To address this issue of excessive zeros, zero-inflated and hurdle models introduce additional parameters that explicitly account for excess zeros, thereby improving the joint representation of zero and positive claim outcomes. These models have further been extended with random effects to accommodate longitudinal dependence and unobserved heterogeneity. However, their consistency with fundamental probabilistic principles in insurance, particularly stochastic monotonicity, has not been formally examined. This paper provides a rigorous analysis showing that standard count random-effects models for excessive zeros may violate this property, leading to inconsistent posterior credibility adjustments. We then propose new classes of count random-effects models that both accommodate excessive zeros and ensure stochastic monotonicity, thereby providing fair and theoretically coherent credibility adjustments as claim histories evolve.
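The failure mode the abstract describes can be illustrated numerically. The sketch below is a hypothetical toy model, not the paper's construction: a zero-inflated Poisson with a Gamma(a, a) random effect θ scaling the Poisson rate θ·λ₀, while the zero-inflation probability π is fixed. Because an observed zero mixes "structural zero" with "low-risk θ" evidence, the posterior mean of θ after one claim can fall *below* the posterior mean after zero claims, i.e. a policyholder who files a claim gets a lower credibility adjustment than one who files none. All parameter values (a = 2, λ₀ = 3, π = 0.7) are illustrative assumptions.

```python
import math

# Hypothetical toy model (NOT the paper's): zero-inflated Poisson with a
# Gamma(A, A) random effect theta on the rate theta * LAM0; the inflation
# probability PI is a fixed constant, unaffected by theta.
A, LAM0, PI = 2.0, 3.0, 0.7  # assumed shape, base rate, zero-inflation prob

def gamma_pdf(t, a):
    # Gamma(a, a) density: mean 1, variance 1/a
    return a**a * t**(a - 1) * math.exp(-a * t) / math.gamma(a)

def zip_pmf(n, t):
    # Zero-inflated Poisson pmf of the claim count N given random effect t
    lam = t * LAM0
    pois = lam**n * math.exp(-lam) / math.factorial(n)
    return (PI if n == 0 else 0.0) + (1.0 - PI) * pois

def posterior_mean(n, grid=20000, upper=20.0):
    # E[theta | N = n] via a simple Riemann sum over the Gamma prior
    h = upper / grid
    num = den = 0.0
    for i in range(1, grid + 1):
        t = i * h
        w = gamma_pdf(t, A) * zip_pmf(n, t) * h
        num += t * w
        den += w
    return num / den

m0, m1 = posterior_mean(0), posterior_mean(1)
# Non-monotone credibility: one observed claim yields a LOWER posterior
# mean of the risk factor than a claim-free history (m0 > m1).
print(f"E[theta|N=0] = {m0:.4f},  E[theta|N=1] = {m1:.4f}")
```

With these values, E[θ | N=0] ≈ 0.96 while E[θ | N=1] = (a+1)/(a+λ₀) = 0.60, so the posterior mean is not monotone in the observed count; this is precisely the kind of inconsistency the stochastic monotonicity constraint is meant to rule out.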