# Taking Screenshots in AWS Lambda Without Fighting Chromium
Running Chromium in AWS Lambda is one of those engineering tasks that looks simple on Stack Overflow and turns into a multi-day yak shave. The 250 MB layer limit, the multi-second cold starts, the memory ceilings, the missing fonts — every step has a footgun.
This guide explains the pain points, then shows a serverless-friendly alternative that works on Lambda, Vercel Functions, Netlify Functions, and Cloudflare Workers — without packaging Chromium.
## The Lambda + Chromium pain points

### The 250 MB layer limit
Standard Chromium is ~280 MB. To fit it in a Lambda layer you need chromium-min or
@sparticuz/chromium — stripped builds around 50-60 MB. They work, but you give up
features (fonts, codecs) and add a deployment dependency that updates frequently and breaks subtly.
### Cold starts

Spinning up Chromium in a cold Lambda takes 2-8 seconds depending on memory configuration. For a user-facing endpoint this is a non-starter. Provisioned concurrency fixes it, but you pay for warm instances around the clock, whether or not they serve traffic.
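If you do stay with Chromium in Lambda, provisioned concurrency is configured per function version or alias. A sketch with the AWS CLI — the function name, alias, and instance count are illustrative:

```shell
# Keep 5 instances of the render function permanently warm
# (billed for the reserved capacity whether or not it serves traffic).
aws lambda put-provisioned-concurrency-config \
  --function-name screenshot-renderer \
  --qualifier live \
  --provisioned-concurrent-executions 5
```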
### Memory pressure

Chromium wants 512 MB minimum. With 1 GB Lambdas you can render but not much else, and you'll OOM on heavy pages. 2 GB Lambdas are more reliable, but Lambda bills by GB-second, so every memory increase raises the cost of every render proportionally.
### The missing fonts problem
Lambda's Linux base image has almost no fonts. Render any page with non-Latin text (Chinese, Japanese, Arabic, emoji) and you get tofu boxes. Fix this by bundling fonts into your layer (more space) or using webfonts (more latency).
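One common shape for the bundled-fonts workaround is a dedicated layer plus a fontconfig file. A sketch — the font file names and layer name are illustrative, and it assumes your Chromium build resolves fonts through fontconfig:

```shell
# Build a Lambda layer containing fonts (file names are illustrative).
mkdir -p layer/fonts
cp NotoSansCJK-Regular.ttc NotoColorEmoji.ttf layer/fonts/

# Minimal fontconfig file pointing at the layer mount path (/opt).
cat > layer/fonts/fonts.conf <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <dir>/opt/fonts</dir>
  <cachedir>/tmp/fonts-cache</cachedir>
</fontconfig>
EOF

(cd layer && zip -r ../fonts-layer.zip fonts)
aws lambda publish-layer-version \
  --layer-name render-fonts \
  --zip-file fileb://fonts-layer.zip
```

With the layer attached, set `FONTCONFIG_PATH=/opt/fonts` in the function's environment so fontconfig finds `fonts.conf` — and note the space those fonts consume counts against the same 250 MB limit.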
### Concurrency and queuing

Lambda's account-level concurrency limit is 1,000 per region by default. Each render holds a Lambda for the duration of the page load. A traffic spike can starve the rest of your account.
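A common mitigation is to cap the render function's reserved concurrency so a spike can't consume the whole account pool. The function name and limit below are illustrative:

```shell
# Cap the render function at 50 concurrent executions; requests beyond
# that are throttled instead of starving other functions in the account.
aws lambda put-function-concurrency \
  --function-name screenshot-renderer \
  --reserved-concurrent-executions 50
```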
## The simpler approach: call an API from Lambda
Instead of packaging Chromium, your Lambda just makes an HTTP call to a screenshot API. The result: tiny deployment, fast cold start, no Chromium binary, no font issues.
```javascript
// handler.mjs
import { Client } from "screenshotapis";

const client = new Client(process.env.SCREENSHOT_API_KEY);

export const handler = async (event) => {
  const { url } = JSON.parse(event.body);
  const { data } = await client.screenshot({
    url,
    format: "png",
    full_page: true,
  });
  return {
    statusCode: 200,
    headers: { "Content-Type": "image/png" },
    body: data.toString("base64"),
    isBase64Encoded: true,
  };
};
```
That's the entire function. No layer, no Chromium, no fonts to manage. Cold start is 200 ms because the deployment package is essentially the SDK plus your handler.
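Once deployed behind a Lambda Function URL (or API Gateway with binary media types enabled), the `isBase64Encoded` body is decoded back to binary before it reaches the client, so you can fetch the image directly. The URL below is illustrative:

```shell
# Invoke the function through its Function URL and save the PNG.
curl -s -X POST "https://<url-id>.lambda-url.us-east-1.on.aws/" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}' \
  --output screenshot.png
```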
## Performance comparison
| Metric | Self-hosted Chromium | API call from Lambda |
|---|---|---|
| Cold start | 2-8 seconds | 200 ms |
| Warm start | 0-200 ms | 0-50 ms |
| Render time | 1-3 seconds | 1-3 seconds |
| Lambda memory | 2 GB | 128 MB |
| Deployment size | 50-280 MB layer | < 1 MB |
| Compute cost per 10K invocations | ~$1 (2 GB × 3 s) | ~$0.03 (128 MB × 1.5 s) + API fees |
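The compute figures are straightforward to sanity-check: Lambda bills GB-seconds, at $0.0000166667 per GB-second for on-demand x86 in us-east-1 at the time of writing. A minimal sketch of the arithmetic (request charges, at $0.20 per million, are negligible here and omitted):

```javascript
// Lambda on-demand compute price, us-east-1, x86 (per GB-second).
const PRICE_PER_GB_SECOND = 0.0000166667;

function lambdaComputeCost(memoryGb, durationSeconds, invocations) {
  return memoryGb * durationSeconds * invocations * PRICE_PER_GB_SECOND;
}

// Self-hosted Chromium: 2 GB held for ~3 s per render.
console.log(lambdaComputeCost(2, 3, 10_000).toFixed(2));       // "1.00"
// API call: 128 MB for ~1.5 s, mostly waiting on the network.
console.log(lambdaComputeCost(0.125, 1.5, 10_000).toFixed(2)); // "0.03"
```

The API-call column trades that near-zero compute cost for a per-render fee, so the real comparison depends on your provider's pricing tier.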
## When this approach makes sense
- You're already on Lambda and don't want to operate browser infrastructure
- Your traffic is bursty — Lambda + API scales to zero, dedicated Chromium servers don't
- You need PDFs as well as images — packaging that in Lambda is even harder
- Your team is small and DevOps time is more expensive than per-render fees
## When to keep Chromium in Lambda
- Compliance requires that page rendering happens in your own AWS account
- You have very specific Chromium flag requirements that no API exposes
- You're operating at 10M+ renders/month and have the team to maintain a custom build
## Other serverless platforms
The same pattern works everywhere:
- Vercel Functions — same code, no Chromium runtime constraints
- Netlify Functions — same code, works on Edge functions too
- Cloudflare Workers — Workers can't package a Chromium binary, so an HTTP API call is the natural fit
- Google Cloud Functions — same code, similar cost profile to Lambda
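On Workers the Node SDK above doesn't apply, but the same pattern works with plain `fetch`. A sketch — the endpoint URL, request body shape, and auth header are illustrative, not any specific vendor's API:

```javascript
// Cloudflare Worker: proxy a screenshot request to an HTTP screenshot API.
const worker = {
  async fetch(request, env) {
    const { searchParams } = new URL(request.url);
    const target = searchParams.get("url");
    if (!target) return new Response("missing ?url=", { status: 400 });

    // Endpoint and payload are placeholders for your provider's API.
    const upstream = await fetch("https://api.example.com/v1/screenshot", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${env.SCREENSHOT_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: target, format: "png" }),
    });

    // Stream the image body straight through to the client.
    return new Response(upstream.body, {
      status: upstream.status,
      headers: { "Content-Type": "image/png" },
    });
  },
};

export default worker;
```

Streaming `upstream.body` through avoids buffering the image in Worker memory, which matters under Workers' tight memory limits.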
Skip the Chromium-in-Lambda yak shave — 100 free renders/month
Get your API key — free