🤖 AI Summary
This study addresses the challenge of accurately estimating the absolute coverage of a web crawler over the crawlable URL space in the absence of external ground-truth data. The authors propose a statistical method that relies solely on longitudinal crawl data from a single crawler. By analyzing the intersections of URLs across multiple consecutive crawls, they formulate an urn-model-based estimation framework and employ linear regression to infer the coverage ratio. Notably, the approach requires neither external benchmarks nor comparisons across multiple crawlers, making it applicable to any focused longitudinal crawling scenario. Experiments on 15 semi-annual crawls of the German academic web from 2013 to 2021 demonstrate that, under stable configurations, the crawler achieves approximately 46% coverage, thereby validating the method's effectiveness and practical utility.
📄 Abstract
Web archives preserve portions of the web, but quantifying their completeness remains challenging. Prior approaches have estimated the coverage of a crawl either by comparing the outcomes of multiple crawlers, or by comparing the results of a single crawl to external ground-truth datasets. We propose a method to estimate the absolute coverage of a crawl using only the archive's own longitudinal data, i.e., the data collected by multiple subsequent crawls. Our key insight is that coverage can be estimated from the empirical URL overlaps between subsequent crawls, which are in turn well described by a simple urn process. The parameters of the urn model can then be inferred from longitudinal crawl data using linear regression. Applied to our focused crawl configuration of the German Academic Web, with 15 semi-annual crawls between 2013 and 2021, we find a coverage of approximately 46 percent of the crawlable URL space for the stable crawl configuration regime. Our method is extremely simple, requires no external ground truth, and generalizes to any longitudinal focused crawl.
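The urn intuition behind the abstract can be sketched in a few lines. The following is a minimal simulation, not the paper's exact estimator: it assumes each crawl draws a fixed fraction `c` of a static URL universe uniformly at random, in which case the expected overlap fraction between two consecutive, independent crawls equals `c`, so coverage can be read off directly from empirical overlaps. The universe size `N` and the number of crawls here are hypothetical parameters for illustration.

```python
import random

random.seed(0)
N = 100_000                  # hypothetical size of the crawlable URL space
c = 0.46                     # hypothetical true per-crawl coverage
universe = range(N)

# Simulate several consecutive crawls as independent uniform samples of size c*N.
crawls = [set(random.sample(universe, int(c * N))) for _ in range(5)]

# Under the uniform-urn assumption, |A ∩ B| / |A| ≈ c for consecutive crawls A, B.
estimates = [len(a & b) / len(a) for a, b in zip(crawls, crawls[1:])]
coverage_hat = sum(estimates) / len(estimates)
print(f"estimated coverage: {coverage_hat:.3f}")
```

In the paper's setting the crawls are not independent uniform samples, which is why the authors fit the urn-model parameters by linear regression over the longitudinal overlap data rather than averaging raw overlap fractions as done here.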