In this article, I want to share a few challenges I’ve encountered while working with a serverless stack on AWS Lambda, so you can learn from my experience.
For reference, the stack I use includes:
- API Gateway
- AWS Lambda functions, plus several other supporting AWS services (SQS, S3, etc.).
Disclaimer: While I have gained expertise through my experience, I am not an expert in this field. If you have any ideas on how I can avoid these problems or if you have faced similar situations, please share! I am eager to learn and improve.
Resource handling and deployment
In a serverless stack like this, everything is a function, and all the functions are independent. While this can be great, it means handling each function independently, for REAL. There is not just one deploy; there are many deploys, and a lot of configuration between functions (security groups, environment variables, dependencies, etc.).
It is important to plan ahead when dealing with a complex system, as it can quickly become a nightmare, especially on a medium-sized team.
For small projects, you can manually create your functions using the AWS console. However, for larger projects, you will need a tool (or tools) to help manage them.
Here are a few tools that I have used (or have heard about) that can solve this problem, or at least help: SAM, the Serverless Framework, and plain CloudFormation.
I personally use SAM, which is essentially CloudFormation with extra features that make it easy to work with Lambda. It has a few limitations, such as caps on the number of functions you can create and on the API Gateway configuration you can use, but overall it’s a pretty good tool. It can make you feel like you are working on one integrated project instead of many loosely connected projects.
These are just a few recommendations, and there are several tools available right now to manage serverless functions. The selection will depend on your project’s needs, programming language, and other architectural concerns that you have. I like SAM because it is supported by the AWS team, it’s easy to use, and it’s based on CloudFormation, which is well covered by the AWS documentation.
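As a quick illustration, a minimal SAM template declares a function and its API Gateway trigger in one place (the handler, runtime, code path, and route below are placeholders, not from any real project):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler        # placeholder module.function
      Runtime: python3.12
      CodeUri: src/hello/
      Events:
        HelloApi:
          Type: Api               # implicit API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

Running `sam build` and `sam deploy` turns this into a CloudFormation stack, which is a big part of what makes many functions feel like one integrated project.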
Sharing code between functions
This is similar to the problem above but calls for a different solution. As mentioned, our functions are independent, REAL independent. Think of each function as its own app, so you need a way to share code between apps. There are several ways to solve this; we use a couple of them:
- Monorepo: put all the Lambdas in one repo and share code between them. The pro is that all the code is in one place, so there are no limitations on using it. The con is that every Lambda ends up with all of this code in its artifact, so the same code is deployed with each Lambda. You may think that’s not a really BIG problem, but remember that a Lambda is instantiated when it’s called (not always, but from time to time; let’s keep it simple). So if the runtime has to rebuild an app with a lot of code, you will lose performance.
- Lambda Layer: the AWS-native solution. A layer is basically a dependency, but AWS handles it for us and it’s processed ahead of time. The pro is that it’s the fastest option and one of the simplest. The con is that it’s AWS-specific, so if you ever move your code, you will need an equivalent at the next provider.
- Library: basically, it’s just a library that you install using pip, npm, or whatever package manager you want. The pro is that it’s the most standard and simple way to do it. The con is the same as splitting your repos: you have to manage the package (publish it to npm or PyPI, tag it, deploy it, etc.).
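For the Lambda Layer option, the packaging convention matters: for Python runtimes, shared modules must sit under a `python/` directory inside the layer zip, because AWS extracts layers to `/opt` and puts `/opt/python` on the import path. A minimal sketch of building that artifact (directory names are illustrative):

```python
import os
import zipfile

def build_layer_zip(src_dir, zip_path):
    """Package shared code as a Lambda Layer artifact.

    For Python runtimes, modules must live under python/ inside the zip;
    at runtime AWS extracts the layer to /opt, so /opt/python lands on
    sys.path and the shared code becomes importable.
    """
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, src_dir)
                # Prefix every entry with python/ per the layer convention
                zf.write(full, os.path.join("python", rel))
```

You would then publish the resulting zip as a layer version and attach it to each function that needs the shared code.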
In summary, the selection depends on your needs. These options are just possible solutions for how to handle it.
I have to admit, dealing with the limitations of serverless architecture can be frustrating. However, these limitations are in place to ensure functionality and stability.
There are several limitations that I have had to navigate while working with serverless architecture. Some of the most common ones include the following.
Execution time (15 minutes)
Lambda functions have a maximum execution time of 15 minutes, so you need to ensure that your function can perform its task within that time frame. While you certainly wouldn’t want an API user waiting 15 minutes, there are plenty of use cases that require longer processing times, such as video processing, batch processing, sending emails or notifications, or data extraction.
To overcome this limitation, it’s essential to design your system using the following techniques:
- Async Endpoints: These endpoints don’t return information immediately. Instead, they take the request, process it asynchronously, and then you can request the result using the same endpoint or another one.
- Message Queue: Take the request and the information you need to process and split it into smaller chunks of information. Send these chunks to an SQS queue and then process them with one or more lambda functions. This approach works well with Lambda and offers several advantages.
- Cache: Pre-process information and use Lambda to return the information to the user or other services. This reduces the amount of processing required by the Lambda function.
- Webhooks: Instead of consuming and processing information, use webhooks to keep the system updated and process only what changes. This helps reduce the processing time required by the Lambda function.
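As an example of the message-queue approach, the splitting step can be sketched like this (the queue URL and chunk size are placeholders; the SQS client would come from boto3, and a worker Lambda consumes each message independently):

```python
import json

def chunk_records(records, chunk_size):
    """Split a large batch into smaller chunks that each fit
    comfortably inside a single 15-minute Lambda run."""
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

def enqueue_chunks(sqs_client, queue_url, records, chunk_size=10):
    """Send each chunk to SQS so one or more worker Lambdas can
    process them in parallel. queue_url is a placeholder for your
    own queue's URL."""
    for chunk in chunk_records(records, chunk_size):
        sqs_client.send_message(QueueUrl=queue_url, MessageBody=json.dumps(chunk))
```

The nice part of this design is that SQS triggers scale the worker Lambdas out automatically, so a job that would blow the 15-minute limit as one invocation becomes many short invocations instead.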
Environment variable size (4 kilobytes)
Lambda only supports up to 4 kilobytes of environment variables in total. While I’m a big fan of environment variables because they make a system testable and independent of its environment, it’s important to be aware of this limitation.
To work around this, we tokenize the environment variables to reduce their size and use an external system to fetch the information we need when the Lambda is initialized. This adds a bit more complexity to our system, but it’s necessary to ensure that our Lambdas work properly.
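A minimal sketch of this pattern, assuming the environment holds only a small token and the real configuration lives in an external store (the `fetch` callable would wrap something like SSM Parameter Store; all names here are illustrative, not the actual implementation):

```python
import os

def resolve_config(fetch, env=None):
    """Resolve configuration once, at Lambda init time.

    If CONFIG_TOKEN is present, use it to fetch the full configuration
    from an external store (fetch is a callable wrapping that store);
    otherwise fall back to the plain environment variables. This keeps
    the environment itself well under the 4 KB limit.
    """
    env = dict(os.environ) if env is None else dict(env)
    token = env.get("CONFIG_TOKEN")
    return fetch(token) if token else env
```

Because the lookup happens in the init phase, warm invocations reuse the resolved configuration instead of paying the fetch cost on every request.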
Payload size (10 megabytes)
When your system heavily relies on user-uploaded videos and pictures, it can pose a challenge. Initially, we attempted to simplify the process by encoding the image as base64 and uploading it as part of the payload. However, API Gateway has a 10-megabyte payload limit (and Lambda’s own synchronous invocation payload limit is even lower), and reaching out to AWS for an increase was not something we preferred to do. Due to this limitation, we had to change our approach.
We chose to upload the metadata of the image and video through a REST API and use the S3 client (or a presigned URL) to upload the media directly. Although this approach added more complexity to our system, it worked well for us. However, using the S3 client can be a little tricky because of the configuration and permission requirements, especially for large media files.
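The presigned-URL variant can be sketched as follows: the API returns a short-lived upload URL, and the client PUTs the file straight to S3, never touching the 10-megabyte limit. The bucket, key scheme, and expiry below are placeholders:

```python
import uuid

def media_key(user_id, filename):
    """Build a unique S3 key for an uploaded media file
    (this naming scheme is illustrative, not prescriptive)."""
    return "uploads/{}/{}-{}".format(user_id, uuid.uuid4().hex, filename)

def create_upload_url(s3_client, bucket, user_id, filename, expires=3600):
    """Return a presigned PUT URL so the client uploads directly
    to S3, bypassing the API Gateway payload limit entirely."""
    key = media_key(user_id, filename)
    url = s3_client.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )
    return {"key": key, "upload_url": url}
```

The Lambda that issues the URL still needs `s3:PutObject` permission on the bucket, which is part of the permission trickiness mentioned above.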
Artifact size
When you talk about artifact size, most developers think of Android, iOS, or web pages, but in serverless this is a big issue too.
Imagine this: instead of a server running 24 hours a day with your artifact on it, you have several sleeping applications waiting for a user request. On a request, an application installs and builds itself; when the user is done working with it, the app goes back to sleep and is eventually deleted from memory.
Based on this quick and sad story, you need to be careful with your bundle size, because it will be installed from time to time.
This is a simplification of the Lambda lifecycle:
- Init: initialization of the executable (install, build, and memory allocation).
- Invoke: execution of our code.
- Shutdown: cleaning of the code and release of infrastructure.
To keep this lifecycle fast, AWS requires that every Lambda follows these restrictions:
- 50 MB for deployment: the compressed bundle you upload to AWS.
- 250 MB for execution: the decompressed bundle, including Lambda Layers (all dependencies).
- 3 MB for code: your actual code needs to be under 3 MB to be editable in the console. If you use a monorepo or a direct dependency to share code between Lambdas, this can be a problem, because your whole project would need to stay under 3 MB.
This is pretty tricky for a large project, but here are a few actions we take to accomplish it:
- Remove unused dependencies: Identify dependencies that are development-only, not in use, or only used for scripting. These are not part of the runtime.
- Identify dead code: Remove any code that is not being executed in your application.
- Stick to what is necessary: Only write the code necessary for the solution to function.
- Limit each function to the necessary dependency: Each function or endpoint should be limited to the code necessary for its operation.
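To keep ourselves honest about these limits, a small script can fail the build when a bundle grows too large. The limits below come from the Lambda quotas page; the path checked is whatever directory your build produces:

```python
import os

# Lambda quotas, in bytes: 50 MB zipped upload, 250 MB unzipped including layers
ZIPPED_LIMIT = 50 * 1024 * 1024
UNZIPPED_LIMIT = 250 * 1024 * 1024

def dir_size(path):
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def check_bundle(path, limit=UNZIPPED_LIMIT):
    """Return (size, ok) so a CI step can fail fast when the
    unzipped bundle would exceed the Lambda limit."""
    size = dir_size(path)
    return size, size <= limit
```

Wiring this into CI catches a creeping bundle long before a failed deploy does.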
Serverless is an interesting stack. Like all tools, it has its limitations and advantages; you only need a positive attitude and a willingness to learn, and you can overcome the challenges.
I hope you learned a little something. Let me know if you have any questions or suggestions.
- Lambda quotas: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
- Environment variable size: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-environment-variable-size/
- Lifecycle of Lambdas: https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html#runtimes-lifecycle-ib
- Lambda Layers: https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
- SAM: https://aws.amazon.com/serverless/sam/
- Serverless Framework: https://www.serverless.com/
- Presigned URLs: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
- CloudFormation: https://aws.amazon.com/cloudformation/