The hidden challenges of serverless functions

Serverless Functions: Ideal for Small Tasks, But Not for Everything

Cloud-based computing with serverless functions has become increasingly popular because of its simplicity and scalability. These functions are well suited to quick, event-driven tasks, such as analyzing a photo or processing a message from an IoT device. The major cloud providers all offer serverless platforms (AWS Lambda, Azure Functions, and Google Cloud Functions), making them easily accessible to developers.

For simple applications, serverless functions are a great choice. However, for complex workflows that manage large, live datasets, such as an airline tracking thousands of flights daily, serverless functions may not be the most efficient solution.

Serverless functions have several limitations. Computing resources must be allocated on every invocation, which adds startup (cold-start) overhead. Because the functions are stateless, they must retrieve their working data from external data stores and write results back, which slows processing and adds network traffic. Finally, building large systems from serverless functions is challenging because they lack a clear software architecture for organizing complex workflows.
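The statelessness problem can be sketched in a few lines. The example below is purely illustrative (the handler signature mimics AWS Lambda's style, but the store, keys, and event fields are invented for this sketch): because the function keeps no state between invocations, every event forces a fetch from and a write back to an external store, which in production would be a network round trip.

```python
# Simulated external data store; in a real deployment this would be
# Redis, DynamoDB, or a similar service reached over the network.
EXTERNAL_STORE = {"flight:AA100": {"status": "scheduled", "delay_min": 0}}

def fetch_state(key):
    # Stand-in for a network read from the external store.
    return dict(EXTERNAL_STORE[key])

def save_state(key, state):
    # Stand-in for a network write back to the external store.
    EXTERNAL_STORE[key] = state

def handler(event, context=None):
    # The function holds no state between invocations, so it must
    # round-trip to the data store on every single event.
    key = "flight:" + event["flight_id"]
    state = fetch_state(key)
    state["delay_min"] += event["delay_min"]
    if state["delay_min"] > 0:
        state["status"] = "delayed"
    save_state(key, state)
    return state
```

Every invocation pays for both round trips, and with thousands of events per second that overhead dominates the actual computation.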

An alternative approach that overcomes these limitations is to move the code to the data using in-memory computing. With this technique, developers run methods on objects held in primary memory and distributed across a server cluster. Because each object stays in memory where the code runs, there is no need to repeatedly access an external data store, which speeds up processing and reduces network data flow.
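The "move the code to the data" idea can be sketched as follows. This is a minimal, single-process illustration (the class names and the send/dispatch API are invented, not any product's actual interface): objects live in memory, and events are dispatched directly to the object's own methods, so no per-event fetch/store round trip is needed.

```python
class FlightState:
    """An object held in memory; its methods run where the data lives."""
    def __init__(self, flight_id):
        self.flight_id = flight_id
        self.delay_min = 0
        self.status = "scheduled"

    def apply_delay(self, minutes):
        self.delay_min += minutes
        if self.delay_min > 0:
            self.status = "delayed"

class InMemoryGrid:
    """Toy stand-in for a distributed in-memory data grid."""
    def __init__(self):
        self._objects = {}

    def get_or_create(self, flight_id):
        if flight_id not in self._objects:
            self._objects[flight_id] = FlightState(flight_id)
        return self._objects[flight_id]

    def send(self, flight_id, minutes):
        # The event is routed to the in-memory object and processed
        # in place; the data never leaves memory.
        self.get_or_create(flight_id).apply_delay(minutes)

grid = InMemoryGrid()
grid.send("AA100", 15)
```

In a real grid the objects would be partitioned across a cluster and events routed to the server holding each object, but the key property is the same: state stays in memory next to the code that updates it.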

In-memory computing also offers a clearer way to structure code for complex workflows, combining the strengths of data-structure stores (such as Redis) with the actor model. Unlike serverless functions, an in-memory data grid can restrict processing on each object to the methods defined by its data type, which simplifies development and prevents uncontrolled access to shared state.
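The actor-style restriction mentioned above might look like the following sketch. The class names and message format are hypothetical (this is not ScaleOut's or any library's actual API): each object type declares the only handlers allowed to process its messages, so arbitrary code cannot mutate the object.

```python
class DigitalTwin:
    """Base class: messages may only be processed by declared handlers."""
    HANDLERS = {}  # message type -> method name, declared per twin type

    def handle(self, message):
        kind = message["type"]
        if kind not in self.HANDLERS:
            # Unknown message types are rejected instead of mutating state.
            raise ValueError("unsupported message type: " + kind)
        return getattr(self, self.HANDLERS[kind])(message)

class FlightTwin(DigitalTwin):
    HANDLERS = {"delay": "on_delay"}

    def __init__(self):
        self.delay_min = 0

    def on_delay(self, message):
        # The only sanctioned way to update this object's delay.
        self.delay_min += message["minutes"]
        return self.delay_min
```

Because all state changes funnel through typed handlers, the workflow's logic stays with the data type it operates on, rather than being scattered across many independent functions.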

To illustrate the performance difference between the two approaches, a benchmark compared AWS Lambda functions with ScaleOut Digital Twins. ScaleOut Digital Twins processed tasks significantly faster than the serverless functions, and the gap widened as the workload grew.

In conclusion, while serverless functions are well suited to small tasks and ad hoc applications, they may not be the best choice for complex workflows that manage large datasets. Moving the code to the data with in-memory computing can significantly improve performance and scalability while simplifying application design.

To explore more about ScaleOut Digital Twins and its approach to managing data objects in complex workflows, visit: https://www.scaleoutdigitaltwins.com/landing/scaleout-data-twins.