One of the most common performance challenges in serverless apps is the Lambda cold start: the small delay that occurs when AWS spins up a fresh execution environment after a period of inactivity.
Some easy tricks to make Lambda functions start faster in production:
1. 𝐔𝐬𝐞 𝐏𝐫𝐨𝐯𝐢𝐬𝐢𝐨𝐧𝐞𝐝 𝐂𝐨𝐧𝐜𝐮𝐫𝐫𝐞𝐧𝐜𝐲
Keeps a set number of Lambda instances always warm.
AWS pre-initializes the execution environments for you, so there is no first-call delay.
Configure it via the Console, CLI, or CDK.
AWS Docs: AWS provisioned concurrency
Tip: Perfect for APIs or real-time workloads (e.g., login or payment endpoints).
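As a sketch, provisioned concurrency in CDK TypeScript might look like this (the stack and function names, runtime, and the count of 2 are illustrative assumptions):

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class WarmApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Hypothetical function; swap in your own runtime, handler, and asset path.
    const fn = new lambda.Function(this, 'PaymentsFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'),
    });

    // Provisioned concurrency attaches to a version or alias, never to $LATEST.
    new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: fn.currentVersion,
      provisionedConcurrentExecutions: 2,
    });
  }
}
```

Point your API integration at the `live` alias so traffic actually hits the pre-warmed instances.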
2. 𝐊𝐞𝐞𝐩 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬 𝐖𝐚𝐫𝐦 𝐰𝐢𝐭𝐡 𝐚 𝐒𝐜𝐡𝐞𝐝𝐮𝐥𝐞𝐫
Trigger your function periodically (every 5–10 mins) to prevent it from going cold.
Use Amazon EventBridge or CloudWatch Schedule for simple pinging.
EventBridge Docs: EventBridge
Tip: Combine this with a lightweight health-check Lambda endpoint (like /ping). This is useful in microservices setups: for example, after a new deployment you can quickly verify that all functions are warm and responding correctly.
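A minimal sketch of the warm-up schedule in CDK TypeScript, assuming `fn` is the function defined elsewhere in the same stack (the 5-minute rate and the `warmup` payload shape are assumptions):

```typescript
import { Duration } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';

// Inside a CDK Stack constructor, where `fn` is the lambda.Function to keep warm.
new events.Rule(this, 'WarmupRule', {
  schedule: events.Schedule.rate(Duration.minutes(5)),
  targets: [
    new targets.LambdaFunction(fn, {
      // A marker payload so the handler can tell pings from real requests.
      event: events.RuleTargetInput.fromObject({ warmup: true }),
    }),
  ],
});
```

In the handler, short-circuit on `event.warmup` and return immediately, so the ping never runs real business logic.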
3. 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐞 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭 𝐏𝐚𝐜𝐤𝐚𝐠𝐞 𝐒𝐢𝐳𝐞
The bigger your Lambda zip, the longer AWS takes to initialize it.
Bundle only what’s needed (exclude dev dependencies).
Use tools like esbuild, webpack, or AWS SAM build.
Tip: For Node.js, avoid bundling the AWS SDK (v2's aws-sdk is preinstalled in Node.js 16 and earlier runtimes; v3's @aws-sdk clients in Node.js 18 and later), and remove unused packages with a tool like depcheck.
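One way to get a lean bundle is CDK's NodejsFunction construct, which runs esbuild under the hood. A sketch, where the entry path and function name are assumptions:

```typescript
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodejs from 'aws-cdk-lib/aws-lambda-nodejs';

// Inside a CDK Stack constructor.
new nodejs.NodejsFunction(this, 'ApiFn', {
  entry: 'src/handler.ts',
  runtime: lambda.Runtime.NODEJS_18_X,
  bundling: {
    minify: true,
    // The Node.js 18+ runtime ships the AWS SDK v3, so keep it out of the zip.
    externalModules: ['@aws-sdk/*'],
  },
});
```

Only the code reachable from `entry` is bundled, so dev dependencies and unused modules never make it into the deployment package.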
4. 𝐂𝐡𝐨𝐨𝐬𝐞 𝐭𝐡𝐞 𝐑𝐢𝐠𝐡𝐭 𝐑𝐮𝐧𝐭𝐢𝐦𝐞
Cold start time depends heavily on the runtime.
Node.js and Python start up faster than Java or .NET.
AWS SnapStart Docs: AWS SnapStart
Tip: If you need Java, enable AWS Lambda SnapStart, which snapshots the initialized execution environment and restores it on invoke.
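A hedged CDK sketch of enabling SnapStart on a Java function (the names and jar path are illustrative):

```typescript
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Inside a CDK Stack constructor.
const ordersFn = new lambda.Function(this, 'OrdersFn', {
  runtime: lambda.Runtime.JAVA_17,
  handler: 'com.example.OrdersHandler::handleRequest',
  code: lambda.Code.fromAsset('build/libs/orders.jar'),
  snapStart: lambda.SnapStartConf.ON_PUBLISHED_VERSIONS,
});

// SnapStart only applies to published versions, so invoke through one,
// not through $LATEST.
const live = ordersFn.currentVersion;
```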
5. 𝐁𝐞 𝐂𝐚𝐫𝐞𝐟𝐮𝐥 𝐰𝐢𝐭𝐡 𝐕𝐏𝐂 𝐂𝐨𝐧𝐟𝐢𝐠𝐮𝐫𝐚𝐭𝐢𝐨𝐧
Putting Lambdas inside a VPC adds ENI (Elastic Network Interface) setup time.
Avoid VPC unless you truly need private resources (like RDS).
If required, use VPC endpoints or NAT optimization.
Best Practices for VPC Lambda Networking: VPC configuration
Tip: For DynamoDB, S3, or SNS, you can often skip the VPC entirely.
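If a function genuinely has to sit in a VPC, gateway endpoints keep S3 and DynamoDB traffic off the NAT path. A minimal sketch, assuming an existing `vpc` in the same CDK stack:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Inside a CDK Stack constructor, where `vpc` is an existing ec2.Vpc.
// Gateway endpoints are free and route traffic privately, avoiding the NAT gateway.
vpc.addGatewayEndpoint('DynamoEndpoint', {
  service: ec2.GatewayVpcEndpointAwsService.DYNAMODB,
});
vpc.addGatewayEndpoint('S3Endpoint', {
  service: ec2.GatewayVpcEndpointAwsService.S3,
});
```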
6. 𝐒𝐩𝐥𝐢𝐭 𝐋𝐚𝐫𝐠𝐞 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬
Monolithic Lambdas = longer cold starts.
Break down your logic into multiple smaller Lambdas (micro-Lambdas).
Tip: This also improves deploy speed and observability.
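A sketch of the split in CDK TypeScript, with hypothetical entry files, one small function per concern instead of one monolith:

```typescript
import * as nodejs from 'aws-cdk-lib/aws-lambda-nodejs';

// Inside a CDK Stack constructor. Rather than one handler routing every path,
// each concern gets its own small function with its own small bundle:
const loginFn = new nodejs.NodejsFunction(this, 'LoginFn', { entry: 'src/login.ts' });
const paymentFn = new nodejs.NodejsFunction(this, 'PaymentFn', { entry: 'src/payment.ts' });
// Smaller bundles initialize faster and can be deployed and monitored independently.
```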
In one project, our first API call took ~2.4 s (a cold Lambda in a VPC).
After making a few changes:
• Removed VPC dependency
• Enabled provisioned concurrency (2 instances)
• Added an EventBridge warm-up every 10 mins
the cold start dropped to ~150 ms consistently.
