AI Summary
Existing methods for neural network parameter generation are constrained by fixed architectures, fixed weight dimensions, and permutation symmetries, which hinder generalization across diverse architectures. This work proposes a width-agnostic generation framework that overcomes these limitations by partitioning weight matrices into locally structured patches. For the first time, it jointly models discrete architecture tokens and continuous weight patches within a single sequence model. The approach combines a graph hypernetwork with a CNN decoder to structurally align the weight space, creating the local correlations that patch-based processing requires. On the ManiSkill3 robotic manipulation benchmark, the method exceeds an 85% success rate when generating functional weights for architectural topologies unseen during training, substantially outperforming baselines that fail to generalize beyond their training architectures.
Abstract
Generative modeling of neural network parameters is often tied to architectures because standard parameter representations rely on known weight-matrix dimensions. Generation is further complicated by permutation symmetries that allow networks to model similar input-output functions while having widely different, unaligned parameterizations. In this work, we introduce Neural Network Diffusion Transformers (NNiTs), which generate weights in a width-agnostic manner by tokenizing weight matrices into patches and modeling them as locally structured fields. We establish that Graph HyperNetworks (GHNs) with a convolutional neural network (CNN) decoder structurally align the weight space, creating the local correlation necessary for patch-based processing. Focusing on MLPs, where permutation symmetry is especially apparent, NNiT generates fully functional networks across a range of architectures. Our approach jointly models discrete architecture tokens and continuous weight patches within a single sequence model. On ManiSkill3 robotics tasks, NNiT achieves >85% success on architecture topologies unseen during training, while baseline approaches fail to generalize.
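The core idea of width-agnostic tokenization can be illustrated with a minimal sketch: a weight matrix of any shape is zero-padded and split into fixed-size square patches, so layers of different widths map to variable-length sequences of identically shaped tokens. The function name, patch size, and padding scheme below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def weight_to_patches(W, patch=4):
    """Tokenize a weight matrix into fixed-size square patches.

    Rows and columns are zero-padded up to a multiple of `patch`
    (an assumed padding scheme), so matrices of any width map to a
    variable-length sequence of identically shaped tokens.
    """
    rows, cols = W.shape
    pad_r = (-rows) % patch  # padding needed along each axis
    pad_c = (-cols) % patch
    Wp = np.pad(W, ((0, pad_r), (0, pad_c)))
    # Split into a grid of (patch x patch) blocks, then flatten each block.
    grid = Wp.reshape(Wp.shape[0] // patch, patch, Wp.shape[1] // patch, patch)
    patches = grid.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return patches

# Two MLP layers of different widths become sequences of same-size tokens.
seq_a = weight_to_patches(np.random.randn(16, 32))  # 4x8 grid -> 32 tokens
seq_b = weight_to_patches(np.random.randn(10, 7))   # padded to 12x8 -> 6 tokens
print(seq_a.shape, seq_b.shape)  # (32, 16) (6, 16)
```

Because every token has the same dimensionality regardless of the source matrix's shape, a single sequence model can consume weights from heterogeneous architectures, which is what makes the patch representation width-agnostic.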