AI Summary
This study addresses the unclear role of reverse image search platforms in fact-checking visual misinformation through a large-scale, systematic 15-day audit encompassing 34,486 results. Integrating content classification and ranking analysis, it extends algorithmic gatekeeping theory into the visual domain for the first time. The findings reveal that debunking content makes up less than 30% of results and is consistently ranked lower, while a substantial portion of results consists of irrelevant or repeated misinformation; a "data void" is observed early in the misinformation lifecycle. Overall information quality follows an inverted U-shaped trajectory over time. These results suggest that reverse image search may amplify rather than correct visual misinformation, highlighting its potential risks as an algorithmic gatekeeper within the visual information ecosystem.
Abstract
As visual misinformation becomes increasingly prevalent, platform algorithms act as intermediaries that curate information for users' verification practices. Yet it remains unclear how algorithmic gatekeeping tools, such as reverse image search (RIS), shape users' information exposure during fact-checking. This study systematically audits Google RIS by reverse-searching newly identified misleading images over a 15-day window and analyzing the 34,486 top-ranked search results collected. We find that Google RIS returns a substantial volume of irrelevant information and repeated misinformation, whereas debunking content constitutes less than 30% of search results. Debunking content faces visibility challenges in rankings amid repeated misinformation and irrelevant information. Our findings also indicate an inverted U-shaped curve in the quality of RIS results pages over time, likely due to search engine "data voids" when visual falsehoods first appear. These findings contribute to scholarship on visual misinformation verification and extend algorithmic gatekeeping research to the visual domain.