🤖 AI Summary
The performance benefits and trade-offs of WebAssembly (Wasm) in edge–cloud collaborative serverless environments remain poorly understood. This paper introduces Lumos, a performance modeling and benchmarking framework that systematically evaluates containerized, interpreted Wasm, and ahead-of-time (AOT) compiled Wasm runtimes across the edge–cloud continuum. It identifies key workload-, system-, and environment-level factors influencing runtime behavior. The work proposes a multi-dimensional Wasm performance model tailored for serverless computing and quantitatively demonstrates that AOT-compiled Wasm reduces image size by up to 30× and cold-start latency by up to 16%, whereas interpreted Wasm incurs up to 55× higher warm-start latency and up to 10× higher I/O-serialization overhead. These findings provide empirical evidence and theoretical foundations for informed Wasm runtime selection and optimization in edge–cloud serverless deployments.
📝 Abstract
WebAssembly has emerged as a lightweight and portable runtime for executing serverless functions, particularly in heterogeneous and resource-constrained environments such as the Edge-Cloud Continuum. However, its performance benefits and trade-offs in this setting remain insufficiently understood. This paper presents Lumos, a performance model and benchmarking tool for characterizing serverless runtimes. Lumos identifies workload-, system-, and environment-level performance drivers in the Edge-Cloud Continuum. We benchmark state-of-the-art container runtimes against Wasm runtimes in both interpreted mode and with ahead-of-time (AoT) compilation. Our performance characterization shows that AoT-compiled Wasm images are up to 30x smaller and decrease cold-start latency by up to 16% compared to containers, while interpreted Wasm suffers up to 55x higher warm-start latency and up to 10x higher I/O-serialization overhead.
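To illustrate the cold-start versus warm-start distinction the abstract measures, here is a minimal timing sketch (a hypothetical illustration, not the actual Lumos harness): a cold start pays full process and runtime initialization on every invocation, while a warm start reuses an already-initialized runtime and measures only the call itself.

```python
import subprocess
import sys
import time

def cold_start_latency():
    # Cold start: launch a fresh runtime process per invocation,
    # so startup/initialization cost is included in the measurement.
    t0 = time.perf_counter()
    subprocess.run(
        [sys.executable, "-c", "print('hello')"],
        check=True, capture_output=True,
    )
    return time.perf_counter() - t0

def warm_start_latency(n=1000):
    # Warm start: the "runtime" (here, the current process) is already
    # initialized; measure only the per-call cost, averaged over n calls.
    handler = lambda: "hello"
    t0 = time.perf_counter()
    for _ in range(n):
        handler()
    return (time.perf_counter() - t0) / n

cold = cold_start_latency()
warm = warm_start_latency()
print(f"cold start ~ {cold * 1e3:.1f} ms, warm start ~ {warm * 1e6:.2f} us")
```

The gap between the two numbers is typically several orders of magnitude, which is why the paper reports cold-start and warm-start latency as separate dimensions of its performance model.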