5 common mistakes deploying Lambda functions


If you’re new to building Functions-as-a-Service applications and to AWS Lambda in particular, there are several snags you will likely hit when you’re deploying your first few functions. The list below will hopefully ensure you don’t make the same mistakes as I did.

1. Not using a deployment toolkit

Manually uploading code packages and then configuring them in the AWS Console is both slow and error-prone. A deployment toolkit such as the Serverless framework automates all the packaging and deployment, lets you keep your Lambda config in version control, and lets you group related functions into the same deployment package.
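
For illustration, a minimal serverless.yml might look something like the sketch below (the service, function and handler names here are just placeholders):

service: orders-service

provider:
  name: aws
  runtime: nodejs18.x   # pick whichever runtime your functions use

functions:
  createOrder:
    handler: handler.createOrder
    events:
      - http:
          path: orders
          method: post

With a file like this in place, running serverless deploy packages the code and creates or updates the function and its configuration in a single step.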

2. Setting and forgetting memory allocation

You’re billed based on how much memory is allocated to each function invocation (and on how long it runs), so you don’t want to allocate significantly more memory than the task needs. However, this is not the whole story: as the memory allocation increases, so does the amount of CPU power that AWS allocates to your function. This means that by allocating more memory your function could complete more quickly (especially if it is CPU-bound), thus cutting your execution time charges.
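
As an illustration (with made-up numbers): Lambda charges in GB-seconds, so a CPU-bound task that runs for 2,500 ms at 256 MB costs roughly 0.625 GB-seconds per invocation, whereas the same task at 1,024 MB only needs to finish in under about 600 ms (0.6 GB-seconds) for the bigger allocation to work out cheaper.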

Finding the sweet spot will involve some trial and error, and you should err on the side of over-allocating to start off with (say around 512 MB). Once your function is deployed, its invocations are automatically logged to CloudWatch, where you will see a line similar to the following which lets you compare the Max Memory Used to the Memory Size (allocated):

REPORT RequestId: be2fc81e-8345-11e8-b334-cb4ce1d5d651  Duration: 30914.88 ms  Billed Duration: 31000 ms  Memory Size: 256 MB  Max Memory Used: 111 MB

This particular example shows a function using less than half of its allocated memory. If you know that this function is IO-bound (e.g. it simply hands off an event to another API), then it is a good candidate for a smaller allocation, as it probably doesn’t need all that has been allocated to it. The Serverless framework defaults to allocating 1024 MB to a function, which may be too much if your function is only doing some simple pass-through network calls.
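
If you’re deploying with the Serverless framework, the allocation can be tuned in serverless.yml, either for the whole service or per function. A rough sketch (the values and the function name are only illustrative):

provider:
  memorySize: 512        # default for every function in this service

functions:
  forwardEvent:
    handler: handler.forwardEvent
    memorySize: 128      # smaller allocation for a simple pass-through function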

However, if your function is doing something more computationally expensive, then you should try increasing the memory allocation to see if it brings down the overall cost. See point 1 in this article for an example of a Lambda performance test showing the benefits of a larger memory allocation.

Thanks to Chris Munns for pointing out my over-simplification of Lambda memory allocation in the first version of this article, which hadn’t considered the benefits of extra CPU power for CPU-bound functions.

3. Not considering how to manage secrets

When you deploy your Lambda function, AWS does allow you to pass secrets (passwords, API keys, database connection strings, etc.) as environment variables, which are stored encrypted at rest using AWS KMS. However, at deployment time you still need to tell AWS where to fetch the values for these environment variables from, and you obviously do not want to hardcode them into your deployment scripts.

A recommended approach is to use the Systems Manager Parameter Store to store your secrets. Deployment toolkits such as the Serverless framework allow you to map a Parameter Store key to an environment variable; the value is read at deploy time and stored encrypted in the Lambda’s configuration.
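
As a rough sketch of that mapping in serverless.yml (the parameter path and variable name are placeholders, and the exact ${ssm:...} syntax varies between Serverless framework versions):

provider:
  environment:
    DB_PASSWORD: ${ssm:/my-app/prod/db-password}   # resolved from Parameter Store at deploy time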

4. Timeouts in Node.js functions

If you’re implementing your Lambdas using Node.js, there is a possibility that your functions may time out, even after they appear to have completed their processing.

The reason for this is that the JavaScript event loop may not have cleared (e.g. if there is an unhandled callback in a downstream function). When this happens, you end up paying for the full period until the function times out (the default timeout is 3 seconds) rather than just for the time spent on the actual processing.

To fix this, you should always set the following as the first line of code in your handler function:

// return as soon as the callback is invoked, rather than waiting for the event loop to empty
context.callbackWaitsForEmptyEventLoop = false;

5. Not setting up budget alerts

This mistake isn’t specific to Lambda, but if you’re new to developing on AWS, the last thing you want is to rack up an unexpected bill. Since Lambda is so easy to get up and running with (especially when using a deployment toolkit), you could accidentally trigger your function to run on a regular schedule and be unaware of it until the end-of-month bill comes in.

To avoid this, create a budget alert in AWS Budgets so that you will be emailed whenever a spending threshold is hit.
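
If you’re already deploying with the Serverless framework, one option is to define the budget alongside your functions in the resources section of serverless.yml. The sketch below is only indicative, assuming CloudFormation’s AWS::Budgets::Budget resource; the amount, threshold and email address are placeholders:

resources:
  Resources:
    MonthlyCostBudget:
      Type: AWS::Budgets::Budget
      Properties:
        Budget:
          BudgetType: COST
          TimeUnit: MONTHLY
          BudgetLimit:
            Amount: 10            # monthly limit, in the unit below
            Unit: USD
        NotificationsWithSubscribers:
          - Notification:
              NotificationType: ACTUAL
              ComparisonOperator: GREATER_THAN
              Threshold: 80       # percent of the budget limit
            Subscribers:
              - SubscriptionType: EMAIL
                Address: you@example.com

Creating the same alert manually in the AWS Budgets console works just as well if you’d rather not manage it as code.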

