Paper titled "Towards Demystifying Intra-Function Parallelism in Serverless Computing" accepted at ACM/IFIP WoSC@Middleware'21

In this paper, the authors Michael Kiener, Mohak Chadha, and Michael Gerndt analyze the effect of parallelizing serverless workloads with respect to performance and cost on popular commercial FaaS and CaaS platforms.

Serverless computing offers a pay-per-use model with high elasticity and automatic scaling for a wide range of applications. Since cloud providers abstract most of the underlying infrastructure, these services behave like black boxes. As a result, users can influence the resources allocated to their functions, but may not be aware that they have to parallelize their code to profit from the additionally allocated virtual CPUs (vCPUs).
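To make this concrete, the hypothetical Python handler below sketches what intra-function parallelism can look like (it is not the benchmark code used in the paper): a compute-intensive prime-counting task is split across the cores the instance exposes. The names handler, count_primes, and the limit parameter are illustrative; multiprocessing.Process with Pipe is used rather than multiprocessing.Pool, which is commonly reported not to work on AWS Lambda because the execution environment lacks /dev/shm.

```python
# Illustrative sketch of intra-function parallelism (not the paper's code).
import multiprocessing
import os


def count_primes(start, end, conn):
    """Naive prime counting as a stand-in for any compute-intensive kernel."""
    count = 0
    for n in range(max(start, 2), end):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    conn.send(count)
    conn.close()


def handler(event, context=None):
    limit = int(event.get("limit", 200_000))
    workers = os.cpu_count() or 1          # cores actually visible to this instance
    chunk = limit // workers

    processes, pipes = [], []
    for i in range(workers):
        parent, child = multiprocessing.Pipe()
        start, end = i * chunk, limit if i == workers - 1 else (i + 1) * chunk
        p = multiprocessing.Process(target=count_primes, args=(start, end, child))
        p.start()
        processes.append(p)
        pipes.append(parent)

    total = sum(conn.recv() for conn in pipes)
    for p in processes:
        p.join()
    return {"primes": total, "workers": workers}


if __name__ == "__main__":
    # Local smoke test outside of any FaaS platform.
    print(handler({"limit": 100_000}))
```

Without such explicit fan-out, a single-threaded function leaves the extra vCPUs of larger memory configurations idle while still paying for them.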

In this paper, we analyze the impact of parallelization within a single function and container instance for AWS Lambda, Google Cloud Functions (GCF), and Google Cloud Run (GCR). We focus on compute-intensive workloads since they benefit greatly from parallelization. Furthermore, we investigate the correlation between the number of allocated CPU cores and vCPUs in serverless environments.
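A simple way to observe this correlation, sketched below under the assumption of a Python runtime (this probe is not taken from the paper), is to report from inside the function how many cores the operating system exposes and compare that with the vCPUs implied by the chosen memory/CPU configuration. The handler name and return fields are illustrative.

```python
# Minimal probe: how many cores does the function/container instance actually see?
import os


def handler(event=None, context=None):
    visible = os.cpu_count()                    # cores reported by the OS
    try:
        usable = len(os.sched_getaffinity(0))   # cores this process may be scheduled on (Linux only)
    except AttributeError:
        usable = visible
    return {"reported_cores": visible, "schedulable_cores": usable}


if __name__ == "__main__":
    print(handler())
```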

Our results show that the number of cores available to a function/container instance does not always equal the number of allocated vCPUs. By parallelizing serverless workloads, we observed cost savings of up to 81% for AWS Lambda, 49% for GCF, and 69.8% for GCR.
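The cost effect follows directly from the typical FaaS billing model, where an invocation is charged as duration times allocated memory. The arithmetic below uses purely hypothetical durations (not measurements from the paper) and AWS Lambda's public x86 list price per GB-second to show why a parallelized function that finishes faster at the same memory setting is cheaper; invocation_cost and the chosen numbers are assumptions for illustration only.

```python
# Illustrative-only cost arithmetic; durations are hypothetical, not paper results.
GB_SECOND_PRICE = 0.0000166667  # AWS Lambda x86 list price per GB-second (at time of writing)


def invocation_cost(memory_mb, duration_s, price=GB_SECOND_PRICE):
    """Cost of one invocation under a duration x memory billing model."""
    return (memory_mb / 1024) * duration_s * price


sequential = invocation_cost(memory_mb=10240, duration_s=24.0)  # hypothetical single-threaded run
parallel = invocation_cost(memory_mb=10240, duration_s=6.0)     # hypothetical 4-way speedup
print(f"sequential: ${sequential:.6f}, parallel: ${parallel:.6f}, "
      f"saving: {100 * (1 - parallel / sequential):.1f}%")
```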