DISCO: A Browser-Based Privacy-Preserving Framework for Distributed Collaborative Learning

📅 2025-11-24
📈 Citations: 0
✹ Influential: 0
📄 PDF
đŸ€– AI Summary
Data silos—arising from privacy regulations, legal constraints, and intellectual property rights—undermine statistical power and exacerbate accessibility bias in machine learning. To address this, we propose a lightweight, browser-based distributed collaborative learning framework enabling non-technical users to jointly train models without sharing raw data. Our modular architecture unifies federated and decentralized paradigms, integrating client-side browser training, a frontend ML inference engine, end-to-end encrypted communication, and multi-tiered privacy safeguards. It supports customizable weight aggregation strategies to enhance model personalization and robustness against bias. The open-source platform is cross-device compatible—including smartphones—and requires only a web browser for participation. Empirical evaluation demonstrates significant improvements in usability, fairness, and scalability of collaborative modeling while preserving data confidentiality and regulatory compliance.

📝 Abstract
Data is often impractical to share for a range of well-considered reasons, such as concerns over privacy, intellectual property, and legal constraints. This not only fragments the statistical power of predictive models, but also creates an accessibility bias, where accuracy becomes inequitably distributed to those who have the resources to overcome these concerns. We present DISCO: an open-source DIStributed COllaborative learning platform accessible to non-technical users, offering a means to collaboratively build machine learning models without sharing any original data or requiring any programming knowledge. DISCO's web application trains models locally, directly in the browser, making our tool cross-platform out of the box, including on smartphones. The modular design of DISCO offers choices between federated and decentralized paradigms, various levels of privacy guarantees, and several weight aggregation strategies that allow for model personalization and bias resilience in the collaborative training. The code repository is available at https://github.com/epfml/disco and a showcase web interface at https://discolab.ai
Problem

Research questions and friction points this paper is trying to address.

Enables collaborative ML without sharing original data
Provides privacy-preserving distributed learning for non-technical users
Offers federated/decentralized training with model personalization options
Innovation

Methods, ideas, or system contributions that make the work stand out.

Browser-based federated learning without sharing data
Cross-platform local training including smartphones
Modular design supporting privacy and personalization
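The customizable weight aggregation mentioned above can be illustrated with the plain federated-averaging baseline, where each client's model weights are averaged in proportion to its local dataset size. This is a minimal sketch of the general idea, not DISCO's actual implementation; the function name `fedavg` and the toy values are illustrative.

```python
# Baseline federated averaging: combine client weight vectors into one
# global vector, weighting each client by its number of local samples.

def fedavg(client_weights, client_sizes):
    """Weighted average of client weight vectors by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += w * (n / total)
    return aggregated

# Example: two clients, the second holding three times as much data,
# so its weights dominate the average.
print(fedavg([[1.0, 2.0], [5.0, 6.0]], [1, 3]))  # -> [4.0, 5.0]
```

Personalization-oriented strategies replace this uniform weighted mean with, e.g., per-client mixing coefficients, so that each participant can bias the aggregate toward models trained on data similar to its own.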
Julien T. T. Vignoud
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Valérian Rousset
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Hugo El Guedj
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Ignacio Aleman
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Walid Bennaceur
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Batuhan Faik Derinbay
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Eduard Ďurech
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Damien Gengler
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Lucas Giordano
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Felix Grimberg
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Franziska Lippoldt
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Christina Kopidaki
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Jiafan Liu
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Lauris Lopata
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Nathan Maire
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Paul Mansat
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Martin Milenkoski
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Emmanuel Omont
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
GĂŒneş ÖzgĂŒn
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Mina Petrović
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Francesco Posa
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Morgan Ridel
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Giorgio Savini
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland
Marcel Torne
Researcher, Stanford University
Robotics · Machine Learning · Reinforcement Learning · Deep Reinforcement Learning
Lucas Trognon
School of Computer and Communication Sciences, EPFL (Ecole polytechnique fédérale de Lausanne), Lausanne, Switzerland