🤖 AI Summary
Existing focused web crawlers struggle to jointly optimize page-level and domain-level relevance, yielding poor coverage of relevant pages, relevant domains, or both.
Method: This paper proposes a topic-oriented reinforcement learning framework that models crawling as a novel Markov Decision Process (MDP), explicitly optimizing both the page harvest rate and the discovery of relevant domains. To handle the intractable state-action space, it introduces Tree-Frontier, a provably efficient tree-structured adaptive discretization algorithm that evaluates only a few representative candidate URLs, reducing the number of URLs scored per crawling step by orders of magnitude.
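As a rough illustration of the decision process just described, the skeleton below casts focused crawling as a sequential decision loop: the state is the evolving frontier, an action is the choice of which URL to fetch next, and the reward is 1 when a fetched page is relevant. All names, the greedy scoring rule, and the reward shape here are illustrative assumptions, not the paper's exact formulation.

```python
def crawl(seed_urls, score, fetch, is_relevant, budget):
    """Illustrative skeleton (not TRES itself) of focused crawling as a
    sequential decision process.  `score` stands in for a learned value
    function over candidate URLs; `fetch` is the environment transition
    returning a page's outlinks; `is_relevant` supplies the reward."""
    frontier = list(seed_urls)
    visited, harvested = set(), 0
    for _ in range(budget):
        candidates = [u for u in frontier if u not in visited]
        if not candidates:
            break
        url = max(candidates, key=score)    # act greedily w.r.t. the value estimate
        visited.add(url)
        outlinks = fetch(url)               # environment transition
        harvested += int(is_relevant(url))  # reward: 1 per relevant page
        frontier.extend(outlinks)           # new state: expanded frontier
    return harvested, len(visited)
```

In a real crawler the `max` over all candidates is exactly the per-step cost that Tree-Frontier is designed to avoid.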
Results: Evaluated on real-world online web data across multiple topical tasks, the method Pareto-dominates current state-of-the-art approaches, significantly improving both the harvest rate and the number of relevant domains retrieved.
📝 Abstract
A focused crawler aims to discover as many web pages and web sites relevant to a target topic as possible, while avoiding irrelevant ones. Reinforcement Learning (RL) is a promising direction for optimizing focused crawling, because RL naturally optimizes the long-term reward of discovering relevant web locations. In this paper, we propose TRES, a novel RL-empowered framework for focused crawling that aims at maximizing both the number of relevant web pages (the *harvest rate*) and the number of relevant web sites (*domains*). We model the focused crawling problem as a novel Markov Decision Process (MDP), which the RL agent aims to solve by determining an optimal crawling strategy. To overcome the computational infeasibility of exhaustively searching for the best action at each time step, we propose Tree-Frontier, a provably efficient tree-based sampling algorithm that adaptively discretizes the large state and action spaces and evaluates only a few representative actions. Experimentally, utilizing online real-world data, we show that TRES significantly outperforms and Pareto-dominates state-of-the-art methods in terms of harvest rate and the number of retrieved relevant domains, while provably reducing by orders of magnitude the number of URLs that must be evaluated at each crawling step.
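The adaptive-discretization idea behind Tree-Frontier can be sketched as follows. This is a minimal sketch, not the paper's algorithm: the splitting rule (widest-spread feature dimension, split at its lower median), the leaf-capacity parameter, and all identifiers are assumptions made for illustration. The key property it demonstrates is that the agent scores one representative URL per leaf cell instead of every URL in the frontier.

```python
import random

class Node:
    """One cell of the adaptive partition over URL feature vectors."""
    def __init__(self):
        self.items = []    # (url, features) pairs held by a leaf
        self.dim = None    # split dimension (internal nodes only)
        self.thr = None    # split threshold
        self.left = None
        self.right = None

class TreeFrontier:
    """Illustrative tree-structured frontier: leaves over-capacity are
    split, and only one sampled URL per leaf is evaluated per step."""
    def __init__(self, capacity=8):
        self.root = Node()
        self.capacity = capacity  # max URLs per leaf before it splits

    def add(self, url, features):
        # Route the URL down the recorded splits to its leaf cell.
        node = self.root
        while node.dim is not None:
            node = node.left if features[node.dim] <= node.thr else node.right
        node.items.append((url, features))
        if len(node.items) > self.capacity:
            self._split(node)

    def _split(self, node):
        # Split on the dimension with the widest value spread, at its median.
        dims = len(node.items[0][1])
        spreads = [
            max(f[d] for _, f in node.items) - min(f[d] for _, f in node.items)
            for d in range(dims)
        ]
        if max(spreads) == 0:
            return  # degenerate cell: identical features, keep as one leaf
        node.dim = max(range(dims), key=spreads.__getitem__)
        vals = sorted(f[node.dim] for _, f in node.items)
        node.thr = vals[(len(vals) - 1) // 2]  # lower median as threshold
        node.left, node.right = Node(), Node()
        for url, f in node.items:
            child = node.left if f[node.dim] <= node.thr else node.right
            child.items.append((url, f))
        node.items = []

    def representatives(self):
        # One sampled URL per leaf: only these are scored by the agent.
        reps, stack = [], [self.root]
        while stack:
            n = stack.pop()
            if n.dim is None:
                if n.items:
                    reps.append(random.choice(n.items))
            else:
                stack.extend([n.left, n.right])
        return reps
```

Because each leaf can hold many URLs, the number of representatives scored per step grows with the number of cells rather than with the full frontier size, which is the source of the claimed reduction in per-step evaluations.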