Edge vs Serverless vs Container: Choosing the Right Deployment for Your Frontend
Not sure whether to deploy on Cloudflare Workers, AWS Lambda, or a Docker container? This deep dive breaks down Edge, Serverless, and Container deployments with real latency numbers, cold start comparisons, cost analysis, and a practical decision framework for frontend engineers.
The Deployment Dilemma Every Frontend Engineer Faces
You just finished building a beautiful Next.js app. Now the real question hits: where do you deploy it? Your PM wants it fast. Your CTO wants it cheap. Your users want it to never go down. And somewhere in the Slack channel, someone just dropped a link to a Cloudflare Workers tutorial while another person is arguing that "just use Docker" is the only sane answer.
Welcome to the modern deployment paradox. You have more options than ever — and each one comes with real tradeoffs that matter at scale. This article cuts through the hype and gives you a practical framework for choosing between Edge Functions, Serverless, and Containers — with real numbers, real limitations, and real guidance based on your use case.
Understanding the Three Paradigms
Edge Computing: Where Code Meets the CDN
Edge functions run your code at CDN nodes distributed globally — often hundreds of them. When a user in Ho Chi Minh City hits your app, the code runs at a nearby PoP (Point of Presence), not in a data center in us-east-1.
Key platforms:
- Cloudflare Workers — Runs on V8 isolates at 300+ locations worldwide. Sub-millisecond cold starts. Supports TypeScript, WebAssembly, and a growing ecosystem of bindings (KV, D1, R2, Queues).
- Vercel Edge Functions — Built on the same V8 isolate model, deeply integrated with Next.js. Runs at Vercel's Edge Network, with native support for middleware and streaming responses.
- Deno Deploy — Similar edge-first model with excellent TypeScript support and global distribution.
The defining characteristic of edge functions is the V8 isolate runtime — not a full Node.js environment. This means no native modules, limited APIs, and strict memory constraints. But the tradeoff is that isolates spin up in microseconds, not seconds.
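A minimal sketch of that programming model: an exported fetch handler built only from Web-standard APIs (Request, Response, URL), which is the shape Cloudflare Workers and similar isolate runtimes expect. The JSON echo below is purely illustrative, not taken from any real deployment.

```typescript
// Sketch of the isolate programming model: a fetch handler using only
// Web-standard APIs (Request, Response, URL). No Node.js imports at all.
const worker = {
  fetch(request: Request): Response {
    const { pathname } = new URL(request.url);
    // Echo the path back as JSON; real middleware would rewrite or route here
    return new Response(JSON.stringify({ path: pathname }), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

Because there is no process to boot and no Node runtime to initialize, the platform can start thousands of these isolates in microseconds.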
Cloudflare Workers cold start: ~0ms (isolates are pre-warmed). Vercel Edge cold start: ~5–50ms. Compare that to AWS Lambda: 100ms–2000ms for a cold start.
Serverless Functions: The Event-Driven Middle Ground
Serverless functions — sometimes called FaaS (Function as a Service) — run in response to events. You write a function, deploy it, and pay only when it executes. No servers to manage, no scaling knobs to turn.
Key platforms:
- AWS Lambda — The OG serverless. Supports Node.js, Python, Go, Rust, and more. Integrates with the full AWS ecosystem. Up to 15 minutes execution time, up to 10GB memory.
- Vercel Functions — Serverless functions with zero config for Next.js. Runs in AWS Lambda under the hood, with a polished DX layer on top.
- Netlify Functions / Cloudflare Pages Functions — Simpler alternatives targeting JAMstack use cases.
Unlike edge functions, serverless runs in a full Node.js (or other runtime) environment — which means npm install sharp works, you can use native binaries, and you have real filesystem access during execution. But there's a price: the cold start problem.
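To make the difference concrete, here is a sketch of a serverless-style handler that leans on real Node.js APIs (fs, os, path), none of which an edge isolate provides. The handler shape and file name are assumptions for illustration, not a specific provider's API.

```typescript
// Sketch: a serverless-style handler using real Node.js APIs (fs, os, path),
// which edge isolates don't offer. Handler shape is illustrative.
import { writeFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function handler(event: { body: string }): { statusCode: number; bytes: number } {
  // Lambda-style runtimes expose a writable temp directory during execution
  const file = join(tmpdir(), "request-payload.txt");
  writeFileSync(file, event.body);
  return { statusCode: 200, bytes: readFileSync(file).length };
}
```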
Containers: The Full Control Option
Containers package your app and its entire runtime environment — OS libraries, Node version, native dependencies — into a portable image. You ship the box, not just the code.
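As a sketch, "shipping the box" might look like the following minimal image for a Node.js app. The Node version, file layout, and npm scripts are assumptions; adjust them to your project.

```dockerfile
# Minimal sketch of a container image for a Node.js app.
# Node version and build commands are assumptions, not a canonical setup.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source and build
COPY . .
RUN npm run build

EXPOSE 3000
CMD ["npm", "start"]
```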
Key platforms:
- Docker + self-hosted (DigitalOcean, Hetzner, VPS) — Maximum control, lowest per-hour cost at scale. Requires ops knowledge.
- Railway — Deploy Docker containers from a Dockerfile or auto-detected buildpack. Simple, great DX, surprisingly affordable with usage-based pricing.
- DigitalOcean App Platform — Managed container hosting. Handles TLS, auto-deploy from GitHub, horizontal scaling.
- Fly.io — Runs containers on "micro VMs" at edge-adjacent locations. Unique blend of container flexibility and geographic distribution.
- Google Cloud Run / AWS Fargate — Managed container platforms that scale to zero (serverless containers).
Containers have no inherent cold start issue if you keep instances running. They support any runtime, any dependency, any architecture. The tradeoff: you pay for idle time, and you're responsible for a lot more operational surface area.
The Cold Start Problem: Real Numbers
Cold starts are the Achilles heel of serverless and edge architectures. Here's a realistic comparison:
- Cloudflare Workers: ~0–5ms (V8 isolates, no cold start in practice)
- Vercel Edge Functions: 5–50ms (V8 isolates, slightly slower due to infrastructure)
- Vercel Serverless Functions (Node.js): 200ms–800ms cold, ~10–50ms warm
- AWS Lambda (Node.js, 128MB): 100ms–500ms cold, ~1–5ms warm
- AWS Lambda (Node.js, with large bundle): 1000ms–3000ms cold
- Container (always-on, Railway/DO): ~0ms (no cold start — already running)
- Cloud Run (scale-to-zero): 1s–4s cold start
For frontend APIs serving real users, a 2-second cold start is catastrophic. This is why the Lambda provisioned concurrency feature exists — and why it costs extra. Edge functions sidestep this problem entirely through the isolate model.
Cost Comparison: At What Scale Does Each Win?
Cost is context-dependent, but here are ballpark figures for a typical API backend serving a frontend app:
Edge Functions
Cloudflare Workers Free tier: 100,000 requests/day, 10ms CPU time per request. Paid: $5/month for 10M requests, then $0.50 per million. For most startups, Workers is effectively free.
Serverless (AWS Lambda + API Gateway)
Free tier: 1M requests/month, 400,000 GB-seconds. After that, roughly $0.20 per million requests + $3.50 per million API Gateway calls. At moderate traffic (10M req/month), you're looking at $35–70/month. Scales predictably but API Gateway costs can surprise you.
Containers
Railway: Starts at $5/month, usage-based after that. A small Node.js container with 512MB RAM runs about $5–15/month. DigitalOcean Droplet: $6/month for 1GB RAM. DigitalOcean App Platform: $5/month for basic container hosting. At low traffic, containers cost more than serverless (you're paying for idle time). At high, sustained traffic, containers win — you're not paying per-invocation.
Rule of thumb: If your traffic is spiky and unpredictable, serverless or edge wins on cost. If your traffic is steady and high-volume, containers win.
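That rule of thumb can be made concrete with a toy break-even calculation. All dollar figures here are placeholder assumptions, not quotes from any provider.

```typescript
// Toy break-even model: flat container cost vs per-request serverless cost.
// All dollar figures are placeholder assumptions, not provider quotes.
interface CostInputs {
  requestsPerMonth: number;
  serverlessPerMillion: number; // combined function + gateway cost, USD
  containerMonthly: number;     // flat cost of an always-on container, USD
}

function serverlessCost(c: CostInputs): number {
  return (c.requestsPerMonth / 1_000_000) * c.serverlessPerMillion;
}

function cheaperOption(c: CostInputs): "serverless" | "container" {
  return serverlessCost(c) < c.containerMonthly ? "serverless" : "container";
}
```

With, say, $3.70 per million requests against a $15/month container, serverless wins below roughly 4M requests/month and the container wins above it.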
Limitations You Need to Know
Edge Function Limitations
- No full Node.js APIs: No fs, no native modules, no child_process. You're working in a subset of the Web API standard.
- CPU time limits: Cloudflare Workers enforces a 10ms CPU time limit on the free plan (higher limits on paid plans). For CPU-intensive tasks like image processing or PDF generation, this is a dealbreaker.
- Memory limits: 128MB on Cloudflare Workers. Can't load large ML models or do heavy in-memory computation.
- Statelessness by design: No persistent connections to databases (use connection pooling via Prisma Accelerate, PlanetScale, or Neon's HTTP driver).
- Bundle size limits: Cloudflare Workers: 1MB compressed (10MB uncompressed). Large npm dependencies (like aws-sdk) may not fit.
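One practical consequence of these limits: edge code has to stick to Web-standard APIs. As a sketch, here is a token expiry check using only atob and JSON. Note that this decodes the payload but does not cryptographically verify it; real verification at the edge would go through crypto.subtle.

```typescript
// Sketch: inspect a JWT with only Web-standard APIs (atob, JSON).
// WARNING: decodes the payload but does NOT verify the signature;
// real edge verification would use crypto.subtle.
function jwtPayload(token: string): Record<string, unknown> {
  const [, payload] = token.split(".");
  // base64url -> base64, then decode
  return JSON.parse(atob(payload.replace(/-/g, "+").replace(/_/g, "/")));
}

function isExpired(token: string, nowSeconds = Date.now() / 1000): boolean {
  const { exp } = jwtPayload(token) as { exp?: number };
  return typeof exp === "number" && exp < nowSeconds;
}
```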
Serverless Limitations
- Cold starts: Already covered — can be brutal for low-traffic or newly deployed functions.
- Execution time limits: AWS Lambda: 15 minutes max. Vercel Functions: 10 seconds on Hobby, 60s on Pro, 900s on Enterprise. Long-running jobs need a different approach.
- Database connections: Traditional connection-pooling doesn't work at scale. Lambda functions can spawn thousands of concurrent executions, exhausting PostgreSQL connection limits. RDS Proxy or connection pooling middleware is required.
- Stateless between invocations: Global variables may persist between warm invocations (a common gotcha), but you can't rely on it. Design for statelessness.
- Local development friction: Emulating Lambda locally is doable (SAM CLI, serverless-offline) but never perfectly matches production.
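The warm-invocation gotcha above is also the standard optimization pattern: initialize expensive resources at module scope so warm invocations reuse them, while never depending on that reuse for correctness. A sketch, with a hypothetical stand-in for a real database client:

```typescript
// Module scope: survives between warm invocations of the same sandbox,
// but is NOT guaranteed; a cold start resets it. createExpensiveClient
// is a hypothetical stand-in for a DB pool or SDK client.
let cachedClient: { createdAt: number } | null = null;
let coldInits = 0;

function createExpensiveClient() {
  coldInits++; // imagine a TCP handshake plus auth here
  return { createdAt: Date.now() };
}

function getClient() {
  // Lazily initialize once per sandbox; warm invocations reuse it
  if (!cachedClient) cachedClient = createExpensiveClient();
  return cachedClient;
}

// A Lambda-style handler: it may *benefit* from the cache,
// but must not *depend* on it for correctness.
function handler(_event: unknown) {
  const client = getClient();
  return { statusCode: 200, clientAgeMs: Date.now() - client.createdAt };
}
```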
Container Limitations
- Operational complexity: Deployments, health checks, restart policies, resource limits — all yours to manage (less so on managed platforms like Railway, but still more than serverless).
- Scaling latency: Horizontal scaling means spinning up new containers — typically 10–60 seconds. Not instant.
- Cost floor: You're paying 24/7 even at zero traffic. For low-traffic side projects, this can be wasteful.
- No native CDN/edge distribution: Without a CDN layer, your container serves from a single region. Latency for global users suffers.
Decision Framework: Which Should You Use?
Use Edge Functions When:
- You need globally low latency — authentication middleware, A/B testing, geolocation routing, personalization headers
- Your logic is lightweight and CPU usage is minimal — request transformation, token validation (JWT verification), URL rewrites
- You're building Next.js Middleware — it's designed to run at the edge
- You want zero cold starts for public-facing endpoints
- Budget is tight — Cloudflare Workers free tier is genuinely generous
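As an example of the kind of lightweight logic that belongs at the edge, here is a sketch of geolocation routing keyed off a request header. The header name mirrors Cloudflare's CF-IPCountry; on other platforms treat it, and the regional URL, as assumptions.

```typescript
// Sketch of lightweight edge logic: route by a geolocation header.
// CF-IPCountry mirrors Cloudflare's header; other platforms differ.
function geoRoute(request: Request): Response {
  const country = request.headers.get("CF-IPCountry") ?? "US";
  if (country === "VN") {
    // Send Vietnamese visitors to a regional deployment (hypothetical URL)
    const { pathname } = new URL(request.url);
    return Response.redirect("https://vn.example.com" + pathname, 302);
  }
  return new Response("default region");
}
```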
Use Serverless When:
- You need full Node.js compatibility — native modules, file system access, complex npm packages
- Your traffic is spiky and unpredictable — scale to zero when idle, burst to hundreds of concurrent executions when needed
- You're building API routes for a Next.js or Nuxt app on Vercel — the platform optimizes for this
- You need longer execution times — webhook processors, report generators (up to 15 min on Lambda)
- You're deep in the AWS ecosystem — SQS triggers, S3 events, DynamoDB streams
Use Containers When:
- You have sustained, predictable traffic — always-on services benefit from the fixed cost model
- Your app has special runtime requirements — specific Node version, native binaries, Python + Node polyglot, ML model serving
- You need WebSockets or persistent connections — serverless generally can't hold these open; containers can
- You're running a full-stack app with a long-running process (background workers, cron jobs, queue processors)
- You want maximum portability — Docker images run anywhere
- Your team has existing DevOps muscle and finds Kubernetes/Docker natural
Real-World Architecture Patterns
The Hybrid Stack (Most Common in Production)
The best teams don't pick just one. A common modern stack:
- Edge (Vercel/Cloudflare): Auth middleware, bot detection, geo-routing, A/B tests
- Serverless (Lambda/Vercel Functions): API routes, form handlers, webhook receivers
- Container (Railway/Fly.io): Background workers, WebSocket server, cron jobs, heavy computation
This isn't overengineering — it's using the right tool where it provides the most value. Static assets go to the CDN. Authentication logic runs at the edge. Your /api/checkout endpoint runs serverless because it needs Stripe's Node SDK. Your queue worker runs in a container because it processes jobs continuously.
The "Just Use Vercel" Stack (Great for Startups)
If you're early-stage and want to ship fast, Vercel handles all three tiers automatically with Next.js: static assets on the CDN, middleware at the edge, API routes as serverless functions. You don't think about it. The DX is exceptional. The cost is fine until you reach significant scale, at which point you have more resources to re-evaluate.
The "Own Your Stack" (Great for Cost at Scale)
At meaningful traffic, a containerized Next.js app on Hetzner or a DigitalOcean managed Kubernetes cluster can be significantly cheaper than Vercel Pro/Enterprise. Companies like Coolify and Dokku exist specifically to give you Heroku-style DX on your own hardware. The tradeoff: your team needs to care about infrastructure.
Practical Tips Before You Deploy
- Measure your cold start budget: Check your p99 latency requirements. If SLA requires <100ms globally, edge is likely necessary for at least some endpoints.
- Profile your bundle size early: Large dependencies bite you in serverless (slow cold starts) and hard-fail in edge functions. Use @next/bundle-analyzer or Webpack Bundle Analyzer before deploying.
- Use connection pooling for databases: Whether you deploy serverless or edge, never use direct Postgres connections without a pooler. PgBouncer, Prisma Accelerate, or Supabase's connection pooler are essential.
- Test cold starts in CI: Deploy to a branch, sleep for 15 minutes, then hit the function. If the response time is acceptable, you're good. If not, consider provisioned concurrency or an always-warm container.
- Don't pay for features you don't use: Vercel's edge middleware is powerful, but if you're not doing global routing or auth checks at the CDN layer, you're not getting value from it — a simple serverless function is fine.
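For the cold-start check, a tiny timing helper is enough. This is a generic sketch (the budget numbers are placeholders) rather than something tied to a particular CI system.

```typescript
// Generic sketch of a cold-start probe: time an async call, compare to a budget.
async function timeCall<T>(fn: () => Promise<T>): Promise<number> {
  const start = performance.now();
  await fn();
  return performance.now() - start;
}

function withinBudget(elapsedMs: number, budgetMs: number): boolean {
  return elapsedMs <= budgetMs;
}
```

In CI you would point timeCall at a fetch of the freshly deployed endpoint after the sleep, then fail the build when withinBudget returns false.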
The Verdict
There's no universally correct answer — but there's usually a clearly correct answer for your situation. Edge functions are magical for latency-sensitive, globally distributed logic that's lightweight. Serverless is the sweet spot for most API backends: zero ops overhead, scales automatically, full Node.js compatibility. Containers win for anything long-running, stateful, WebSocket-based, or requiring specialized runtimes.
The real skill isn't memorizing these categories — it's developing the architectural judgment to know which deployment primitive fits which slice of your system. And increasingly, the answer is "all three, in the right places."
Start simple. Measure. Evolve. And don't let the infrastructure distract you from shipping the actual product.