🤖 AI Summary
This work addresses fair allocation of indivisible goods without monetary transfers, where trade-offs arise between economic efficiency and envy-based fairness. The authors propose FairFormer, the first method to apply the Transformer architecture to discrete fair division. FairFormer employs a two-tower, permutation-equivariant design that encodes agents and items as unordered sets of tokens and generates complete allocations in a single forward pass via self- and cross-attention. Trained end-to-end to maximize expected log Nash welfare, it requires no solver supervision, iterative unrolling, or explicit fairness labels. By combining row-wise argmax discretization with lightweight post-processing, the model enforces envy-freeness up to one item (EF1) while achieving near-optimal welfare—attaining 96–97% of optimal Nash welfare and 95–96% of utilitarian welfare—and outperforms strong baselines in solution quality, runtime, or both.
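For concreteness, the training objective can be written out as follows (a sketch with assumed notation, not taken verbatim from the paper): with $n$ agents, $m$ items, additive valuations $v_i(g)$, and a fractional assignment matrix $X \in [0,1]^{m \times n}$ whose rows sum to one, the log Nash welfare is

$$
\log \mathrm{NW}(X) \;=\; \sum_{i=1}^{n} \log u_i, \qquad u_i \;=\; \sum_{g=1}^{m} X_{gi}\, v_i(g),
$$

and training maximizes its expectation over sampled valuation instances.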
📝 Abstract
We propose a deep neural network-based solution to the problem of allocating indivisible goods under additive subjective valuations without monetary transfers, trading off economic efficiency with envy-based fairness. We introduce FairFormer, an amortized, permutation-equivariant two-tower transformer that encodes items and agents as unordered token sets, applies self-attention within each set, and uses item-to-agent cross-attention to produce per-item assignment distributions in a single forward pass. FairFormer is trained end-to-end to maximize expected log-Nash welfare on sampled instances, requiring no solver supervision, unrolled allocation procedures, or fairness labels. At test time, we discretize by row-wise $\arg\max$ and apply a lightweight post-processing routine that transfers items to eliminate violations of envy-freeness up to one item while prioritizing improvements in Nash welfare. Our approach generalizes beyond its training regime and achieves near-optimal welfare (e.g., for uniformly sampled valuations, $96$--$97\%$ for Nash welfare; $95$--$96\%$ for utilitarian welfare), outperforming strong baselines in solution quality and/or runtime.
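The discretization and fairness-check steps above can be sketched in plain Python (a minimal illustration under assumed conventions, not the authors' code): row-wise $\arg\max$ sends each item to its highest-scoring agent, and an EF1 check verifies that any residual envy vanishes after removing a single item from the envied bundle.

```python
def argmax_allocation(scores):
    """Row-wise argmax discretization: scores[g][i] is the model's
    (hypothetical) assignment score of item g for agent i; each item
    goes to the agent with the highest score."""
    return [max(range(len(row)), key=lambda a: row[a]) for row in scores]

def is_ef1(valuations, assignment):
    """Check envy-freeness up to one item (EF1) for additive valuations.
    valuations[i][g]: agent i's value for item g.
    assignment[g]: index of the agent receiving item g."""
    n = len(valuations)
    bundles = [[g for g, a in enumerate(assignment) if a == i] for i in range(n)]
    for i in range(n):
        own = sum(valuations[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            envy = sum(valuations[i][g] for g in bundles[j])
            if own >= envy:
                continue  # no envy toward j
            # EF1: dropping the item i values most in j's bundle must remove the envy
            best = max((valuations[i][g] for g in bundles[j]), default=0.0)
            if own < envy - best:
                return False
    return True
```

In the paper's pipeline, a post-processing routine would transfer items between bundles until a check like `is_ef1` passes, preferring transfers that improve Nash welfare.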