
Six Tips for Working with AWS Lambda


Gonzalo Maldonado
8 minute read

AWS Lambda (Image credit: Amazon Web Services, Inc.)

Contents

  • Introduction
  • Tip #1: CPU power and memory scale proportionally
  • Tip #2: Simplicity is a virtue
  • Tip #3: Mind your concurrency
  • Tip #4: Use native handling of partial batch failures
  • Tip #5: Necessary runs only
  • Tip #6: Avoid using AWS Lambda
  • Conclusion
  • References

Introduction

AWS Lambda is a highly scalable compute service offered by Amazon Web Services (AWS). It allows developers to run code in a variety of languages on a highly available and completely managed compute infrastructure. It can also run almost any type of application or backend service, but it is best suited to event-based, short-running processes. This is because the execution time for each invocation of a Lambda function is constrained to a maximum of 15 minutes.

Cost management is an ever-present concern when using cloud service providers, so keep the following in mind as you read further.

The number of executions and execution time of a Lambda function are its main cost variables. Additionally, the execution time cost for a given Lambda function depends on how much memory is allocated to it. Check the AWS Lambda pricing guide for details.
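As a back-of-the-envelope illustration of how those two variables drive cost, the sketch below multiplies them out. The rates are illustrative assumptions only; check the AWS Lambda pricing guide for the current figures in your region.

```python
# Illustrative, assumed rates -- NOT current pricing; see the AWS
# Lambda pricing guide for real figures in your region.
GB_SECOND_RATE = 0.0000166667    # assumed price per GB-second of compute
REQUEST_RATE = 0.20 / 1_000_000  # assumed price per invocation

def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate monthly Lambda cost from its two main variables:
    number of executions and (memory-weighted) execution time."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * REQUEST_RATE + gb_seconds * GB_SECOND_RATE

# e.g. one million 200 ms invocations per month at 512 MB
print(f"${estimate_monthly_cost(1_000_000, 200, 512):.2f}")
```

Note that memory allocation multiplies the duration cost, which is why the memory tips below matter for your bill as well as for performance.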

This post shares six tips for writing and deploying Lambda functions with more confidence, covering cost optimization, performance configuration, and operational practices.

Tip #1: CPU power and memory scale proportionally

If you have a Lambda function that is running longer than expected, the most obvious remedy may be to increase its timeout setting so that the function can run to completion. However, if the function is CPU-bound, you can improve its performance and reduce its execution time by increasing its memory size. This is because CPU power for Lambda functions scales proportionally with their allocated memory.

Alas, increasing a function’s timeout setting remains the only way to ensure that IO-bound functions, such as those that spend most of their time waiting on responses from databases or APIs, have a chance to complete.
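A memory bump for a CPU-bound function can be applied with a one-line configuration change. The sketch below uses boto3’s `update_function_configuration`; the function name is a placeholder for your own, and the clamp reflects Lambda’s current 128 MB–10,240 MB range.

```python
MIN_MB, MAX_MB = 128, 10_240  # Lambda's allowed memory range

def clamp_memory(memory_mb):
    """Keep a requested memory size within Lambda's allowed range."""
    return max(MIN_MB, min(MAX_MB, memory_mb))

def scale_up_cpu(function_name, memory_mb):
    """Give a CPU-bound function more CPU by raising its memory.

    Sketch only: function_name is a placeholder, and CPU is allocated
    in proportion to MemorySize.
    """
    import boto3  # imported here so the sketch reads without AWS configured
    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name,
        MemorySize=clamp_memory(memory_mb),
    )
```

Benchmark a few sizes before settling on one: past the point where the function stops being CPU-bound, extra memory only adds cost.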

Tip #2: Simplicity is a virtue

There are multiple advantages to keeping your Lambda functions as simple as possible. Minimizing cold-start times and making functions more maintainable are two of the most significant benefits.

You can use Lambda layers to share code across multiple Lambda functions and keep deployment package sizes to a minimum. Lambda layers can make Lambda functions more maintainable because they ensure only unique, essential logic is contained in the function’s body. They also make updating shared logic easier, since only the layer needs to be updated for a change to apply to all functions that use it.


Furthermore, smaller deployment package (and container image) sizes mean faster startup (cold-start) times for Lambda functions. The golden rule here is, “if you don’t need a dependency or artifact at runtime, it should not be part of the Lambda function’s deployment package.” Smaller package sizes may also mean simpler deployment procedures, since zipped packages below 50 MB can be uploaded directly to AWS Lambda instead of being staged in Amazon S3 (zipped packages) or Amazon Elastic Container Registry (Amazon ECR) (container images).
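That 50 MB direct-upload cutoff is easy to check in a build or CI script. A minimal sketch (the limit constant mirrors the figure above):

```python
DIRECT_UPLOAD_LIMIT_MB = 50  # zipped packages above this must be staged in S3

def can_upload_directly(zip_size_bytes):
    """True when a zipped deployment package is small enough to upload
    straight to AWS Lambda, with no Amazon S3 staging required."""
    return zip_size_bytes <= DIRECT_UPLOAD_LIMIT_MB * 1024 * 1024
```

Failing a build when the package crosses the line is a cheap way to catch dependency bloat before it slows down cold starts.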

Tip #3: Mind your concurrency

Lambda functions can go from zero to a thousand concurrently running invocations almost instantaneously. This is great for scalability in general but can also be disastrous for the systems a Lambda function depends on. It is good practice to set a maximum concurrency limit for Lambda functions. The right number depends on the AWS resource quotas a Lambda function may affect and the load allowances of the services the function depends on (e.g., databases and APIs).

There is also an upper limit on the number of Lambda function invocations that can run concurrently in an AWS account region. A group of one or more Lambda functions with no concurrency limit set may quickly reach that upper limit and prevent other functions in the account region from executing.
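One way to pick the limit is to size it to the weakest downstream dependency. The sketch below derives a ceiling from a hypothetical database connection pool and applies it with boto3’s `put_function_concurrency`; the function name is a placeholder.

```python
def safe_concurrency(downstream_max_connections, connections_per_invocation=1):
    """Size a reserved-concurrency limit to what a downstream dependency
    (e.g. a database connection pool) can absorb."""
    return max(1, downstream_max_connections // connections_per_invocation)

def apply_limit(function_name, limit):
    """Reserve the computed ceiling for the function (sketch only;
    function_name is a placeholder for your own)."""
    import boto3  # imported here so the sketch reads without AWS configured
    boto3.client("lambda").put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=limit,
    )

# e.g. a pool of 100 connections, 5 connections per invocation
limit = safe_concurrency(100, connections_per_invocation=5)
```

Remember that reserved concurrency is subtracted from the account-region pool, so reserving for one function reduces what unreserved functions can burst to.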

You will also want to set appropriate alarms to ensure that the right people are notified when quotas are nearing their limits, anticipated costs are exceeded, and failures crop up.

There are two types of concurrency often discussed regarding Lambda functions: reserved and provisioned. The concurrency recommendations made in this section refer to reserved concurrency. Learn more about managing AWS Lambda function concurrency.

Tip #4: Use native handling of partial batch failures

Amazon SQS is one of the most frequently used event sources for Lambda functions. But until recently, processing a batch of multiple messages from an SQS queue meant introducing more complexity into a Lambda function’s code than desirable. An entire batch of messages needed to be successfully processed by a function for the invocation to be considered a success. This meant that if any message in a batch failed to be processed, the whole batch would be retried later (after a configurable visibility timeout). There were workarounds to prevent double processing and other accompanying ills, but none of them were particularly satisfying. This made a one-message-per-batch policy a preferred alternative for many people writing Lambda functions.

In case you missed the AWS announcement in late 2021, they released an update to the AWS Lambda SQS event source that allows a partial batch response. This ensures that when a partial batch failure occurs, reprocessing is attempted only for the individual messages that were unsuccessfully processed. Learn more about reporting batch item failures.
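In handler code, reporting partial failures amounts to returning the failed message IDs in a `batchItemFailures` list (the event source mapping must have `ReportBatchItemFailures` enabled). A minimal sketch, where `process` and its `user_id` check are hypothetical business logic:

```python
import json

def process(message):
    """Hypothetical business logic: reject messages missing a user_id."""
    if "user_id" not in message:
        raise ValueError("missing user_id")

def handler(event, context):
    """SQS batch handler returning a partial batch response so that only
    the failed messages are retried, not the whole batch."""
    failures = []
    for record in event["Records"]:
        try:
            process(json.loads(record["body"]))
        except Exception:
            # Report this message as failed; successes are not retried.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Returning an empty `batchItemFailures` list marks the whole batch as successful, so the happy path needs no special casing.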

Tip #5: Necessary runs only

Recall that the main pricing variables for Lambda functions are number of executions and execution time. Therefore, one way of reducing cost is to ensure that there are no unnecessary executions. A Lambda function should only attempt to process requests and events it is capable of handling.

In late 2021, AWS added the ability to filter events for Amazon SQS, DynamoDB, and Kinesis event sources. These filters are applied before a Lambda function is invoked, so if you were previously skipping undesirable events in your Lambda function code, now your function code can be simplified. You also reduce cost by no longer running the function for those undesired events.
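A filter is expressed as a JSON pattern in Lambda’s event-filtering syntax and attached to the event source mapping as `FilterCriteria` (e.g. via `create_event_source_mapping`). A small sketch, where the `{"type": "order"}` shape of the message body is a hypothetical example:

```python
import json

# Only invoke the function for SQS messages whose JSON body contains
# {"type": "order"}; everything else is dropped before invocation,
# so it incurs no execution cost.
order_filter = {"body": {"type": ["order"]}}

# The pattern is passed as a JSON string inside FilterCriteria.
filter_criteria = {"Filters": [{"Pattern": json.dumps(order_filter)}]}
print(filter_criteria["Filters"][0]["Pattern"])
```

Because filtering happens before invocation, any event-skipping branches in the function body can simply be deleted once an equivalent filter is in place.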

Tip #6: Avoid using AWS Lambda

AWS Lambda is an attractive service for many reasons, but sometimes it is not the appropriate choice. If you need more control over the underlying compute resources your applications or services run on, take a look at Amazon Elastic Compute Cloud (Amazon EC2) or AWS Elastic Beanstalk. As previously mentioned, AWS Lambda is best suited for applications and services that only need to run for a short while (up to 15 minutes) to service requests and process events. Applications and services that need more time may be better run on other compute services such as Amazon Elastic Container Service (Amazon ECS) or AWS Fargate. Also, if the average cold-start time for AWS Lambda functions is not acceptable for your application or service, consider running it in an always-on mode on any of the previously mentioned compute services.

Conclusion

AWS Lambda is a great choice for creating applications and services that are event-based and require minimal time to service requests and process events. Additionally, it is a highly scalable compute service that requires no management of its underlying compute resources on the part of function programmers. But despite its apparent simplicity, there are many things function programmers must keep in mind when thinking about, writing, and deploying Lambda functions. I covered six of these in this post, including tips for cost and configuration optimization as well as operational suggestions. Several other concerns and practices were not discussed here; you can find them in the AWS Lambda documentation and some of the references below. Take a peek and let me know what surprises you!

References

  1. What is AWS Lambda?
  2. AWS Lambda enables functions that can run up to 15 minutes
  3. Lambda deployment packages
  4. Amazon Elastic Container Registry (Amazon ECR)
  5. Managing Lambda reserved concurrency
  6. Managing AWS Lambda Function Concurrency
  7. Best Practices for Developing on AWS Lambda
  8. Working with Lambda layers and extensions in container images
  9. AWS Lambda now supports partial batch response for SQS as an event source
  10. Using Lambda with Amazon SQS — Reporting batch item failures
  11. Amazon Elastic Container Service
  12. What is AWS Fargate?
  13. Filtering event sources for AWS Lambda functions
  14. AWS Lambda now supports event filtering for Amazon SQS, Amazon DynamoDB, and Amazon Kinesis as event sources 


Akava would love to help your organization adapt, evolve and innovate your modernization initiatives. If you’re looking to discuss, strategize or implement any of these processes, reach out to [email protected] and reference this post.
