🤖 AI Summary
This paper studies learning a random $k$-uniform hypergraph via single-round non-adaptive queries: given $n$ vertices, the hidden Erdős–Rényi-type hypergraph is identified using subset queries that test whether a queried subset contains any complete hyperedge. We establish, for the first time, a rigorous equivalence between this problem and classical group testing, bypassing the generic $\Omega(\min\{m^2 \log n, n^2\})$ query lower bound applicable to arbitrary hypergraphs. Leveraging group testing theory and probabilistic analysis, we design a combinatorial non-adaptive querying strategy achieving $O(m \log n)$ query complexity. This complexity is information-theoretically optimal for random $k$-uniform hypergraphs and comes with provable exact recovery guarantees, significantly improving upon worst-case lower bounds.
📝 Abstract
We study the problem of learning a hidden hypergraph $G=(V,E)$ by making a single batch of queries (non-adaptively). We consider the hyperedge detection model, in which every query must be of the form: "Does this set $S \subseteq V$ contain at least one full hyperedge?" In this model, it is known that no algorithm can non-adaptively learn arbitrary hypergraphs with fewer than $\Omega(\min\{m^2 \log n, n^2\})$ queries, even when the hypergraph is constrained to be $2$-uniform (i.e., the hypergraph is simply a graph). Recently, Li et al. overcame this lower bound in the setting in which $G$ is a graph by assuming that the learned graph is sampled from an Erdős–Rényi model. We generalize the result of Li et al. to the setting of random $k$-uniform hypergraphs. To achieve this result, we leverage a novel equivalence between the problem of learning a single hyperedge and the standard group testing problem. This latter result may also be of independent interest.
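To make the group testing connection concrete, the sketch below simulates classical non-adaptive group testing with a random Bernoulli test design and COMP (Combinatorial Orthogonal Matching Pursuit) decoding. This is a standard textbook scheme, not the paper's construction; all names and parameter choices (pool inclusion probability $\approx 1/d$, $O(d \log n)$ tests) are illustrative assumptions.

```python
# Illustrative sketch of standard non-adaptive group testing (NOT the
# paper's algorithm): random Bernoulli design + COMP decoding.
import random


def comp_group_testing(n, defectives, num_tests, p, rng):
    """Recover the defective set from non-adaptive pooled tests.

    Each test pools every item independently with probability p; a test
    is positive iff its pool contains at least one defective. COMP keeps
    an item as a candidate unless it appears in some negative test.
    """
    candidates = set(range(n))
    for _ in range(num_tests):
        pool = {i for i in range(n) if rng.random() < p}
        positive = bool(pool & defectives)
        if not positive:
            # No item in a negative test can be defective.
            candidates -= pool
    return candidates


rng = random.Random(0)
n, d = 100, 5  # hypothetical sizes for demonstration
defectives = set(rng.sample(range(n), d))
# O(d log n) tests suffice; each item joins a pool with probability ~1/d.
recovered = comp_group_testing(n, defectives, num_tests=300, p=1.0 / d, rng=rng)
```

COMP never produces false negatives (a defective item appears only in positive tests), and with enough tests the false positives vanish with high probability, giving exact recovery.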