Challenges and Solutions in Selecting Optimal Lossless Data Compression Algorithms

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Lossless compression algorithms face inherent trade-offs among multidimensional performance metrics—particularly compression ratio and encoding/decoding speed—posing challenges in latency-sensitive, high-fidelity applications such as medical imaging. Method: This paper introduces the first unified, quantifiable multi-objective evaluation framework for lossless compression, employing normalized weighted modeling to dynamically balance compression ratio and speed across diverse data modalities (e.g., images, text). Contribution/Results: Its key innovation lies in a standardized multi-objective scoring model that bridges the gap between theoretical metrics and practical deployment requirements. Extensive experiments demonstrate the framework’s robustness in identifying scenario-optimal compressors: learned codecs achieve superior compression ratios, while traditional algorithms retain advantages in speed-critical tasks. The framework enables principled, application-aware algorithm selection without requiring domain-specific re-engineering.

📝 Abstract
The rapid growth of digital data has heightened the demand for efficient lossless compression methods. However, existing algorithms exhibit trade-offs: some achieve high compression ratios, others excel in encoding or decoding speed, and none consistently perform best across all dimensions. This mismatch complicates algorithm selection for applications where multiple performance metrics are simultaneously critical, such as medical imaging, which requires both compact storage and fast retrieval. To address this challenge, we present a mathematical framework that integrates compression ratio, encoding time, and decoding time into a unified performance score. The model normalizes and balances these metrics through a principled weighting scheme, enabling objective and fair comparisons among diverse algorithms. Extensive experiments on image and text datasets validate the approach, showing that it reliably identifies the most suitable compressor for different priority settings. Results also reveal that while modern learning-based codecs often provide superior compression ratios, classical algorithms remain advantageous when speed is paramount. The proposed framework offers a robust and adaptable decision-support tool for selecting optimal lossless data compression techniques, bridging theoretical measures with practical application needs.
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal lossless compression algorithms with conflicting performance trade-offs
Balancing compression ratio, encoding speed, and decoding speed requirements
Providing objective comparisons for diverse applications like medical imaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mathematical framework integrates compression metrics
Balances compression ratio and speed via weighting
Enables objective algorithm comparison for applications
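The normalized weighted scoring idea above can be sketched as follows. The exact formulation in the paper is not reproduced here; the min-max normalization, the metric names, and the default weights are illustrative assumptions.

```python
# Sketch of a normalized weighted scoring model for lossless compressors.
# Assumptions (not from the paper): min-max normalization across the
# candidate set, and a weighted sum with user-chosen weights.

def score_compressors(results, w_ratio=0.5, w_enc=0.25, w_dec=0.25):
    """Rank compressors by a weighted sum of normalized metrics.

    results: dict mapping algorithm name -> (compression_ratio,
             encode_time_s, decode_time_s). Higher ratio is better;
             lower times are better.
    """
    names = list(results)
    ratios = [results[n][0] for n in names]
    encs = [results[n][1] for n in names]
    decs = [results[n][2] for n in names]

    def norm(x, lo, hi, higher_better):
        # Map a raw metric into [0, 1]; invert when lower is better.
        if hi == lo:
            return 1.0
        v = (x - lo) / (hi - lo)
        return v if higher_better else 1.0 - v

    scores = {}
    for n in names:
        r, e, d = results[n]
        scores[n] = (w_ratio * norm(r, min(ratios), max(ratios), True)
                     + w_enc * norm(e, min(encs), max(encs), False)
                     + w_dec * norm(d, min(decs), max(decs), False))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical numbers: a learned codec with the best ratio but the
# slowest speeds, against two classical algorithms.
ranking = score_compressors({
    "learned": (3.2, 4.00, 2.50),
    "zstd":    (2.6, 0.08, 0.03),
    "gzip":    (2.4, 0.30, 0.05),
})
```

With balanced default weights the fast classical codec wins; setting `w_ratio=1.0` (and the time weights to zero) makes the learned codec come out on top, mirroring the trade-off the paper reports.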
Md. Atiqur Rahman
Lecturer, Islamic University of Technology
Computer Vision · Few-Shot Learning · Federated Learning · Self-Supervised Learning
MM Fazle Rabbi
Department of Computer Science and Engineering, Bangladesh University of Business and Technology (BUBT), Dhaka, Bangladesh.