Taking Screenshots in AWS Lambda Without Fighting Chromium

April 28, 2026

Running Chromium in AWS Lambda is one of those engineering tasks that looks simple on Stack Overflow and turns into a multi-day yak shave. The 250 MB layer limit, the multi-second cold starts, the memory ceilings, the missing fonts — every step has a footgun.

This guide explains the pain points, then shows a serverless-friendly alternative that works on Lambda, Vercel Functions, Netlify Functions, and Cloudflare Workers — without packaging Chromium.

The Lambda + Chromium pain points

The 250 MB layer limit

Standard Chromium is ~280 MB. To fit it in a Lambda layer you need a stripped build such as @sparticuz/chromium (or its smaller chromium-min variant) — around 50-60 MB. They work, but you give up features (fonts, codecs) and add a deployment dependency that must track your Puppeteer version and breaks subtly when the two drift apart.
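For reference, launching one of these stripped builds typically follows @sparticuz/chromium's documented pattern. A sketch as a lazy helper — the package names are real, but shipping them in your bundle or a layer is up to you:

```javascript
// Sketch: launch a stripped Chromium build inside Lambda, following
// @sparticuz/chromium's documented usage. Dynamic imports keep module
// load cheap; both packages must ship in your bundle or a layer.
async function launchBrowser() {
  const { default: chromium } = await import("@sparticuz/chromium");
  const { default: puppeteer } = await import("puppeteer-core");

  return puppeteer.launch({
    args: chromium.args,                             // Lambda-safe Chromium flags
    defaultViewport: chromium.defaultViewport,
    executablePath: await chromium.executablePath(), // unpacks the binary to /tmp
    headless: chromium.headless,
  });
}
```

Every caller must remember to `browser.close()` afterward, or the execution environment leaks memory across warm invocations.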

Cold starts

Spinning up Chromium in a cold Lambda takes 2-8 seconds depending on memory configuration. For a user-facing endpoint this is a non-starter. Provisioned concurrency fixes it, but you pay for the warm environments around the clock, whether or not they serve traffic.
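For completeness, keeping environments warm looks like this with the AWS CLI (function and alias names are illustrative):

```shell
# Keep 5 execution environments initialized for the "live" alias.
# You pay for all five continuously, whether or not they serve traffic.
aws lambda put-provisioned-concurrency-config \
  --function-name screenshot-renderer \
  --qualifier live \
  --provisioned-concurrent-executions 5
```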

Memory pressure

Chromium wants 512 MB minimum. With 1 GB Lambdas you can render but not much else, and you'll OOM on heavy pages. 2 GB Lambdas work better, but Lambda bills linearly per GB-second, so that's four times the per-second cost of a 512 MB function.
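If you're squeezing Chromium into a smaller Lambda, a few standard Chromium switches help. A sketch of a typical low-memory flag set — whether `--single-process` is safe depends on the pages you render:

```javascript
// Chromium switches commonly used to curb memory in constrained
// environments like Lambda. All are standard Chromium flags.
const lowMemoryArgs = [
  "--disable-dev-shm-usage", // use /tmp for shared memory, not the tiny /dev/shm
  "--single-process",        // one process instead of per-site renderers
  "--no-zygote",             // required alongside --single-process
  "--disable-gpu",           // Lambda has no GPU anyway
];

console.log(lowMemoryArgs.join(" "));
```

These get merged into the `args` array passed to the browser launch call.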

The missing fonts problem

Lambda's Linux base image has almost no fonts. Render any page with non-Latin text (Chinese, Japanese, Arabic, emoji) and you get tofu boxes. Fix this by bundling fonts into your layer (more space) or using webfonts (more latency).
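If you stay on self-hosted Chromium, the usual fix is a dedicated font layer. A sketch of the packaging steps — the font filenames and layer name are illustrative; Lambda mounts layers under /opt:

```shell
# Illustrative font-layer build. Fonts land in /opt/fonts at runtime;
# point fontconfig at them, e.g. by shipping a fonts.conf that lists
# /opt/fonts as a <dir> and setting FONTCONFIG_PATH accordingly.
mkdir -p layer/fonts
cp NotoSansCJK-Regular.ttc NotoColorEmoji.ttf layer/fonts/
(cd layer && zip -r ../fonts-layer.zip fonts)
aws lambda publish-layer-version \
  --layer-name render-fonts \
  --zip-file fileb://fonts-layer.zip
```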

Concurrency and queuing

Lambda's per-account concurrency limit is 1000 by default. Each render holds a Lambda for the duration of the page load. A traffic spike can starve the rest of your account.
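One mitigation is to fence the renderer off with reserved concurrency, so a render spike can never consume the whole account pool (function name illustrative):

```shell
# Cap the render function at 50 concurrent executions; the rest of the
# account's concurrency stays available to everything else.
aws lambda put-function-concurrency \
  --function-name screenshot-renderer \
  --reserved-concurrent-executions 50
```

The trade-off: requests beyond the cap are throttled with 429s, so callers need retry logic.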

The simpler approach: call an API from Lambda

Instead of packaging Chromium, your Lambda just makes an HTTP call to a screenshot API. The result: tiny deployment, fast cold start, no Chromium binary, no font issues.

// handler.mjs
import { Client } from "screenshotapis";

const client = new Client(process.env.SCREENSHOT_API_KEY);

export const handler = async (event) => {
  const { url } = JSON.parse(event.body ?? "{}");
  if (!url) {
    return { statusCode: 400, body: 'Missing "url" in request body' };
  }

  const { data } = await client.screenshot({
    url,
    format: "png",
    full_page: true,
  });

  return {
    statusCode: 200,
    headers: { "Content-Type": "image/png" },
    body: data.toString("base64"),
    isBase64Encoded: true,
  };
};

That's the entire function. No layer, no Chromium, no fonts to manage. Cold start drops to roughly 200 ms because the deployment package is essentially the SDK plus your handler.
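Once deployed behind API Gateway, you can smoke-test it with curl. The endpoint URL is a placeholder — and on REST APIs, remember to enable binary media types so the base64 body is decoded back into a PNG:

```shell
# POST a target URL, save the returned PNG. Endpoint is a placeholder.
curl -X POST "https://abc123.execute-api.us-east-1.amazonaws.com/screenshot" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com"}' \
  --output screenshot.png
```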

Performance comparison

| Metric | Self-hosted Chromium | API call from Lambda |
|---|---|---|
| Cold start | 2-8 seconds | 200 ms |
| Warm start | 0-200 ms | 0-50 ms |
| Render time | 1-3 seconds | 1-3 seconds |
| Lambda memory | 2 GB | 128 MB |
| Deployment size | 50-280 MB layer | < 1 MB |
| Cost per 10K invocations | ~$3 (2 GB × 3s) | ~$0.10 (128 MB × 1.5s) + API |

When this approach makes sense

- User-facing endpoints where a 2-8 second cold start is unacceptable
- Spiky traffic that would otherwise eat your account's Lambda concurrency
- Teams that don't want to maintain a Chromium layer and its version matrix
- Pages with non-Latin text or emoji, where bundled-font gaps mean tofu boxes

When to keep Chromium in Lambda

- The pages you render are internal or behind auth you can't hand to a third-party API
- You need full control of the browser: custom flags, request interception, scripted interaction before the shot
- Your render volume is high and steady enough that self-hosting beats per-render API pricing

Other serverless platforms

The same pattern works everywhere:

- Vercel Functions — same handler shape, and the tiny bundle stays well under the size limits
- Netlify Functions — the identical HTTP call from a standard Netlify handler
- Cloudflare Workers — the V8 isolate runtime can't package a Chromium binary at all, so an HTTP call via fetch is the natural fit
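As one example, here's the Cloudflare Workers shape — a sketch only: the endpoint URL and payload mirror the Lambda handler above, not any specific provider's documented API. In a real Worker you would `export default worker`:

```javascript
// Minimal Cloudflare Worker sketch. The upstream URL and body fields
// are placeholders standing in for whichever screenshot API you use.
const worker = {
  async fetch(request, env) {
    const { url } = await request.json();
    const upstream = await fetch("https://api.example.com/v1/screenshots", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${env.SCREENSHOT_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url, format: "png", full_page: true }),
    });
    // Stream the PNG body straight through to the caller.
    return new Response(upstream.body, {
      headers: { "Content-Type": "image/png" },
    });
  },
};
```

Because Workers stream the upstream body, the image never has to be buffered or base64-encoded the way API Gateway requires.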

Skip the Chromium-in-Lambda yak shave — 100 free renders/month

Get your API key — free