🤖 AI Summary
This study addresses the challenge of inferring static relational structure from multivariate time series, with a focus on quantifying "frustration" (i.e., structural imbalance) in functional brain networks. We propose a novel signed brain network framework: (1) constructing a significance-tested signed correlation network; (2) generating an interpretable null model via constrained entropy maximization; and (3) performing robust network inference and module detection by combining the Bayesian Information Criterion with a signed stochastic block model. Key findings include: (1) statistically significant frustration is pervasive in human functional brain networks; (2) subgraphs dominated by negative connections exhibit heightened sensitivity to individual differences; and (3) the identified modules align closely with canonical functional brain regions and exhibit topological properties consistent with the predictions of relaxed balance theory. The framework establishes an information-theoretic paradigm for modeling cross-subject variability in brain network organization.
📝 Abstract
Many complex systems - be they financial, natural or social - are composed of units - such as stocks, neurons or agents - whose joint activity can be represented as a multivariate time series. An issue of both practical and theoretical importance concerns the possibility of inferring the presence of a static relationship between any two units solely from their dynamic state. The present contribution aims at providing an answer within the framework of traditional hypothesis testing. Briefly, we suggest linking any two units that behave in a sufficiently similar way. To achieve such a goal, we project a multivariate time series onto a signed graph, by i) comparing the empirical properties of the former with those expected under a suitable benchmark and ii) linking any two units with a positive (negative) edge in case the corresponding series share a significantly large number of concordant (discordant) values. To define our benchmarks, we adopt an information-theoretic approach that is rooted in the constrained maximisation of Shannon entropy, a procedure inducing an ensemble of multivariate time series that preserves some of the empirical properties on average while randomising everything else. We showcase the possible applications of our method by addressing one of the most timely issues in the domain of neurosciences, i.e. that of determining whether brain networks are frustrated and, if so, to what extent. Our results suggest that this is indeed the case, with the structure of the negative subgraph being more prone to inter-subject variability than the complementary, positive subgraph. At the mesoscopic level, instead, minimising the Bayesian Information Criterion instantiated with the Signed Stochastic Block Model reveals that brain areas gather into modules aligning with the statistical variant of the Relaxed Balance Theory.
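The projection step described above - linking two units with a positive (negative) edge when their series share significantly many concordant (discordant) values - can be sketched as follows. This is only an illustrative toy, not the paper's method: each series is binarised by the sign of its deviation from its own mean, and a plain Binomial(T, 1/2) null stands in for the constrained maximum-entropy benchmark; the function name `signed_network` and the threshold `alpha` are our own choices.

```python
import numpy as np
from scipy.stats import binom

def signed_network(X, alpha=0.01):
    """Project a (T, N) multivariate time series onto a signed adjacency matrix.

    Illustrative sketch only: a Binomial(T, 1/2) null replaces the paper's
    constrained maximum-entropy ensemble.
    """
    T, N = X.shape
    S = np.where(X > X.mean(axis=0), 1, -1)      # binarise each series by its mean
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i + 1, N):
            c = int(np.sum(S[:, i] == S[:, j]))  # number of concordant time points
            # two one-sided tests: significantly many concordant values -> +1 edge,
            # significantly many discordant values -> -1 edge, otherwise no edge
            if binom.sf(c - 1, T, 0.5) < alpha:
                A[i, j] = A[j, i] = 1
            elif binom.cdf(c, T, 0.5) < alpha:
                A[i, j] = A[j, i] = -1
    return A
```

Strongly correlated series thus end up joined by positive edges, anti-correlated ones by negative edges, and pairs compatible with the null remain disconnected, yielding the signed graph on which frustration and mesoscopic structure can then be assessed.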